20 Nov 2025
Planet Debian
Bálint Réczey: Think you can’t interpose static binaries with LD_PRELOAD? Think again!

Well, you are right, you can't. At least not directly. This is well documented in many projects relying on interposing binaries, like faketime.
But what if we could take a static binary, route at least its direct syscalls through libc, and load it with the dynamic linker? We are in luck, because the excellent QEMU project has a user-space emulator! It can be compiled as a dynamically linked executable, honors LD_PRELOAD and uses the host libc's syscall wrappers - well, at least sometimes. Sometimes syscalls just bypass libc.
The missing piece was a way to make QEMU always take the interposable path and call the host libc instead of using an arch-specific assembly routine (`safe_syscall_base`) to construct the syscall and go directly to the kernel. Luckily, this turned out to be doable. A small patch later, QEMU gained a switch that forces all syscalls through libc. Suddenly, our static binaries started looking a lot more dynamic!
$ faketime '2008-12-24 08:15:42' qemu-x86_64 ./test_static_clock_gettime
2008-12-24 08:15:42.725404654
$ file test_static_clock_gettime
test_static_clock_gettime: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), statically linked, ...
With this in place, Firebuild can finally wrap even those secretive statically linked tools. QEMU runs them, libc catches their syscalls, LD_PRELOAD injects libfirebuild.so, and from there the usual interposition magic happens. The result: previously uncachable build steps can now be traced, cached, and shortcut just like their dynamic friends.
There is one more problem, though: why would the static binaries deep in the build be run by QEMU at all? Firebuild also intercepts the `exec()` calls, and now it rewrites them on the fly whenever the executed binary would be statically linked!
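The rewrite amounts to prepending the emulator and its new switch to the intercepted argv, as the rewritten_args message in the debug log below shows. A hypothetical C helper (the name and shape are illustrative, not Firebuild's actual implementation) sketches the idea:

```c
#include <stddef.h>

/* Hypothetical sketch, not Firebuild's real code: when the exec()ed
 * binary is statically linked, run it under the patched QEMU instead,
 * forcing its syscalls through libc so LD_PRELOAD still works. */
#define QEMU_INTERPOSABLE "/usr/bin/qemu-user-interposable"

/* Fill OUT with {qemu, -libc-syscalls, original argv..., NULL};
 * return the new argc, or 0 if OUT (capacity CAP) is too small. */
size_t rewrite_argv(char *const orig[], size_t argc,
                    const char *out[], size_t cap)
{
    if (argc + 3 > cap)
        return 0;
    size_t n = 0;
    out[n++] = QEMU_INTERPOSABLE;
    out[n++] = "-libc-syscalls";
    for (size_t i = 0; i < argc; i++)
        out[n++] = orig[i];
    out[n] = NULL; /* execv() expects a NULL-terminated vector */
    return n;
}
```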
$ firebuild -d comm bash -c ./test_static
...
FIREBUILD: fd 9.1: ({ExecedProcess 161077.1, running, "bash -c ./test_static", fds=[0: {FileFD ofd={FileOFD #0 type=FD_PIPE_IN r} cloexec=false}, 1: {FileFD ofd={FileOFD #3 type=FD_PIPE_OUT w} {Pipe #0} close_on_popen=false cloexec=false}, 2: {FileFD ofd={FileOFD #4 type=FD_PIPE_OUT w} {Pipe #1} close_on_popen=false cloexec=false}, 3: {FileFD NULL} /* times 2 */]})
{
"[FBBCOMM_TAG]": "exec",
"file": "test_static",
"// fd": null,
"// dirfd": null,
"arg": [
"./test_static"
],
"env": [
"SHELL=/bin/bash",
...
"FB_SOCKET=/tmp/firebuild.cpMn75/socket",
"_=./test_static"
],
"with_p": false,
"// path": null,
"utime_u": 0,
"stime_u": 1017
}
FIREBUILD: -> proc_ic_msg() (message_processor.cc:782) proc={ExecedProcess 161077.1, running, "bash -c ./test_static", fds=[0: {FileFD ofd={FileOFD #0 type=FD_PIPE_IN r} cloexec=false}, 1: {FileFD ofd={FileOFD #3 type=FD_PIPE_OUT w} {Pipe #0} close_on_popen=false cloexec=false}, 2: {FileFD ofd={FileOFD #4 type=FD_PIPE_OUT w} {Pipe #1} close_on_popen=false cloexec=false}, 3: {FileFD NULL} /* times 2 */]}, fd_conn=9.1, tag=exec, ack_num=0
FIREBUILD: -> send_fbb() (utils.cc:292) conn=9.1, ack_num=0 fd_count=0
Sending message with ancillary fds []:
{
"[FBBCOMM_TAG]": "rewritten_args",
"arg": [
"/usr/bin/qemu-user-interposable",
"-libc-syscalls",
"./test_static"
],
"path": "/usr/bin/qemu-user-interposable"
}
...
FIREBUILD: -> accept_ic_conn() (firebuild.cc:139) listener=6
...
FIREBUILD: fd 9.2: ({Process NULL})
{
"[FBBCOMM_TAG]": "scproc_query",
"pid": 161077,
"ppid": 161073,
"cwd": "/home/rbalint/projects/firebuild/test",
"arg": [
"/usr/bin/qemu-user-interposable",
"-libc-syscalls",
"./test_static"
],
"env_var": [
"CCACHE_DISABLE=1",
...
"SHELL=/bin/bash",
"SHLVL=0",
"_=./test_static"
],
"umask": "0002",
"jobserver_fds": [],
"// jobserver_fifo": null,
"executable": "/usr/bin/qemu-user-interposable",
"// executed_path": null,
"// original_executed_path": null,
"libs": [
"/lib/x86_64-linux-gnu/libatomic.so.1",
"/lib/x86_64-linux-gnu/libc.so.6",
"/lib/x86_64-linux-gnu/libglib-2.0.so.0",
"/lib/x86_64-linux-gnu/libm.so.6",
"/lib/x86_64-linux-gnu/libpcre2-8.so.0",
"/lib64/ld-linux-x86-64.so.2"
],
"version": "0.8.5.1"
}
The QEMU patch has been forwarded to qemu-devel. If it lands, anyone using QEMU user-mode emulation could benefit - not just Firebuild.
For Firebuild users, though, the impact is immediate. Toolchains that mix dynamic and static helpers? Cross-builds that pull in odd little statically linked utilities? Previously "invisible" steps in your builds? All now fair game for caching.
Firebuild 0.8.5 ships this new capability out of the box. Just update, make sure you're using a patched QEMU, and enjoy the feeling of watching even static binaries fall neatly into place in your cached build graph. Ubuntu users can get the prebuilt patched QEMU packages from the Firebuild PPA already.
Static binaries, welcome to the party!
20 Nov 2025 8:56pm GMT
Planet Lisp
Neil Munro: Ningle Tutorial 13: Adding Comments
Contents
- Part 1 (Hello World)
- Part 2 (Basic Templates)
- Part 3 (Introduction to middleware and Static File management)
- Part 4 (Forms)
- Part 5 (Environmental Variables)
- Part 6 (Database Connections)
- Part 7 (Envy Configuration Switching)
- Part 8 (Mounting Middleware)
- Part 9 (Authentication System)
- Part 10 (Email)
- Part 11 (Posting Tweets & Advanced Database Queries)
- Part 12 (Clean Up & Bug Fix)
- Part 13 (Adding Comments)
Introduction
Hello and welcome back, I hope you are well! In this tutorial we will be exploring how to work with comments. I originally didn't think I would add too many Twitter-like features, but I realised that having a self-referential model would actually be a useful lesson. In addition to demonstrating how to achieve this, we can look at how to complete a migration successfully.
This will involve adjusting our models, adding a form (and its validator), improving and expanding our controllers, adding the appropriate controller to our app, and tweaking our templates to accommodate the changes.
Note: There is also an improvement to be made in our models code; mito provides convenience methods to get the id, created-at, and updated-at slots. We will integrate them as we alter our models.
src/models.lisp
When it comes to changes to the post model it is very important that the :col-type is set to (or :post :null) and that :initform nil is also set. This is because when you run the migrations, existing rows will not have data for the parent column and so in the process of migration we have to provide a default. It should be possible to use (or :post :integer) and set :initform 0 if you so wished, but I chose to use :null and nil as my migration pattern.
This also ensures that new posts default to having no parent, which is the right design choice here.
Package and Post model
(defpackage ningle-tutorial-project/models
(:use :cl :mito :sxql)
(:import-from :ningle-auth/models #:user)
(:export #:post
#:id
#:content
+ #:comments
#:likes
#:user
#:liked-post-p
- #:logged-in-posts
- #:not-logged-in-posts
+ #:posts
+ #:parent
#:toggle-like))
(in-package ningle-tutorial-project/models)
(deftable post ()
((user :col-type ningle-auth/models:user :initarg :user :accessor user)
+ (parent :col-type (or :post :null) :initarg :parent :reader parent :initform nil)
(content :col-type (:varchar 140) :initarg :content :accessor content)))
Comments
Comments are really a specialised type of post that happens to have a non-nil parent value, so we will take what we previously learned from working with post objects and extend it. In reality the only real difference is (sxql:where (:= :parent :?)); perhaps I shall see if this could support conditionals inside it, but that's another experiment for another day.
I want to briefly remind you of what the :? does, as security is important!
The :? is a placeholder: it is a way to ensure that values are not placed in the SQL without being escaped, which prevents SQL injection attacks. retrieve-by-sql takes a keyword argument :binds, which takes a list of values that will be interpolated into the right parts of the SQL query with the correct quoting.
We used this previously, but I want to remind you not to just inject values into a SQL query without quoting them.
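As a quick illustration of what the placeholder buys us (a sketch assuming sxql is loaded; the exact SQL text sxql emits may differ slightly), the :? survives into the generated SQL while the values travel separately and get quoted by the driver:

```lisp
;; sxql:yield turns the statement object into an SQL string; the :?
;; stays a placeholder, so user input is never spliced into the string.
(sxql:yield
 (sxql:select (:post.*)
   (sxql:from :post)
   (sxql:where (:= :parent :?))))
;; => roughly "SELECT post.* FROM post WHERE (parent = ?)"
;; The value for ? is then supplied out-of-band via :binds.
```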
(defmethod likes ((post post))
(mito:count-dao 'likes :post post))
+(defgeneric comments (post user)
+ (:documentation "Gets the comments for a logged in user"))
+
+(defmethod comments ((post post) (user user))
+ (mito:retrieve-by-sql
+ (sxql:yield
+ (sxql:select
+ (:post.*
+ (:as :user.username :username)
+ (:as (:count :likes.id) :like_count)
+ (:as (:count :user_likes.id) :liked_by_user))
+ (sxql:from :post)
+ (sxql:where (:= :parent :?))
+ (sxql:left-join :user :on (:= :post.user_id :user.id))
+ (sxql:left-join :likes :on (:= :post.id :likes.post_id))
+ (sxql:left-join (:as :likes :user_likes)
+ :on (:and (:= :post.id :user_likes.post_id)
+ (:= :user_likes.user_id :?)))
+ (sxql:group-by :post.id)
+ (sxql:order-by (:desc :post.created_at))
+ (sxql:limit 50)))
+ :binds (list (mito:object-id post) (mito:object-id user))))
+
+(defmethod comments ((post post) (user null))
+ (mito:retrieve-by-sql
+ (sxql:yield
+ (sxql:select
+ (:post.*
+ (:as :user.username :username)
+ (:as (:count :likes.id) :like_count))
+ (sxql:from :post)
+ (sxql:where (:= :parent :?))
+ (sxql:left-join :user :on (:= :post.user_id :user.id))
+ (sxql:left-join :likes :on (:= :post.id :likes.post_id))
+ (sxql:group-by :post.id)
+ (sxql:order-by (:desc :post.created_at))
+ (sxql:limit 50)))
+ :binds (list (mito:object-id post))))
Posts refactor
I had not originally planned on this, but as I was writing the comments code it became clear that I was creating lots of duplication, and maybe I still am, but I hit upon a way to simplify the model interface, at least. Ideally it makes no difference whether a user is logged in or not at the point the route is hit: the API should be to hand over the user object (whatever that might be, because it may be nil) and let a specialised method figure out what to do. So in addition to adding comments (which is what prompted this change) we will also refactor logged-in-posts and not-logged-in-posts into a single, unified posts method, because it was silly of me to have split them like that.
(defmethod liked-post-p ((user ningle-auth/models:user) (post post))
(mito:find-dao 'likes :user user :post post))
-(defgeneric logged-in-posts (user)
- (:documentation "Gets the posts for a logged in user"))
+(defgeneric posts (user)
+ (:documentation "Gets the posts"))
+
-(defmethod logged-in-posts ((user user))
- (let ((uuid (slot-value user 'mito.dao.mixin::id)))
+(defmethod posts ((user user))
+ (mito:retrieve-by-sql
+ (sxql:yield
+ (sxql:select
+ (:post.*
+ (:as :user.username :username)
+ (:as (:count :likes.id) :like_count)
+ (:as (:count :user_likes.id) :liked_by_user))
+ (sxql:from :post)
+ (sxql:left-join :user :on (:= :post.user_id :user.id))
+ (sxql:left-join :likes :on (:= :post.id :likes.post_id))
+ (sxql:left-join (:as :likes :user_likes)
+ :on (:and (:= :post.id :user_likes.post_id)
+ (:= :user_likes.user_id :?)))
+ (sxql:group-by :post.id)
+ (sxql:order-by (:desc :post.created_at))
+ (sxql:limit 50)))
+ :binds (list (mito:object-id user))))
+
-(defun not-logged-in-posts ()
+(defmethod posts ((user null))
+ (mito:retrieve-by-sql
+ (sxql:yield
+ (sxql:select
+ (:post.*
+ (:as :user.username :username)
+ (:as (:count :likes.id) :like_count))
+ (sxql:from :post)
+ (sxql:left-join :user :on (:= :post.user_id :user.id))
+ (sxql:left-join :likes :on (:= :post.id :likes.post_id))
+ (sxql:group-by :post.id)
+ (sxql:order-by (:desc :post.created_at))
+ (sxql:limit 50)))))
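The trick the unified posts method relies on is that in Common Lisp nil is an instance of the class null, so CLOS can dispatch on "not logged in" directly. A standalone sketch of just the dispatch (my own illustration, independent of mito; the class and return values are made up):

```lisp
;; NIL is of class NULL, so a method specialised on NULL catches the
;; anonymous case while another handles real user objects.
(defclass user () ())

(defgeneric posts (user)
  (:documentation "Pretend posts lookup, dispatching on the user argument."))

(defmethod posts ((user user))
  :personalised-posts)

(defmethod posts ((user null))
  :public-posts)

;; (posts (make-instance 'user)) => :PERSONALISED-POSTS
;; (posts nil)                   => :PUBLIC-POSTS
```

This is why the controllers can simply pass (gethash :user ningle:*session*) through without checking it first.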
There is also another small fix in this code; it turns out there is a set of convenience methods that mito provides:
- (mito:object-id ...)
- (mito:created-at ...)
- (mito:updated-at ...)
Previously we used mito.dao.mixin::id (and could have done the same for created-at and updated-at) in combination with slot-value, which means (slot-value user 'mito.dao.mixin::id) simply becomes (mito:object-id user), which is much nicer!
Full Listing
(defpackage ningle-tutorial-project/models
  (:use :cl :mito :sxql)
  (:import-from :ningle-auth/models #:user)
  (:export #:post
           #:id
           #:content
           #:comments
           #:likes
           #:user
           #:liked-post-p
           #:posts
           #:parent
           #:toggle-like))

(in-package ningle-tutorial-project/models)

(deftable post ()
  ((user :col-type ningle-auth/models:user :initarg :user :accessor user)
   (parent :col-type (or :post :null) :initarg :parent :reader parent :initform nil)
   (content :col-type (:varchar 140) :initarg :content :accessor content)))

(deftable likes ()
  ((user :col-type ningle-auth/models:user :initarg :user :reader user)
   (post :col-type post :initarg :post :reader post))
  (:unique-keys (user post)))

(defgeneric likes (post)
  (:documentation "Returns the number of likes a post has"))

(defmethod likes ((post post))
  (mito:count-dao 'likes :post post))

(defgeneric comments (post user)
  (:documentation "Gets the comments for a logged in user"))

(defmethod comments ((post post) (user user))
  (mito:retrieve-by-sql
   (sxql:yield
    (sxql:select
        (:post.*
         (:as :user.username :username)
         (:as (:count :likes.id) :like_count)
         (:as (:count :user_likes.id) :liked_by_user))
      (sxql:from :post)
      (sxql:where (:= :parent :?))
      (sxql:left-join :user :on (:= :post.user_id :user.id))
      (sxql:left-join :likes :on (:= :post.id :likes.post_id))
      (sxql:left-join (:as :likes :user_likes)
                      :on (:and (:= :post.id :user_likes.post_id)
                                (:= :user_likes.user_id :?)))
      (sxql:group-by :post.id)
      (sxql:order-by (:desc :post.created_at))
      (sxql:limit 50)))
   :binds (list (mito:object-id post) (mito:object-id user))))

(defmethod comments ((post post) (user null))
  (mito:retrieve-by-sql
   (sxql:yield
    (sxql:select
        (:post.*
         (:as :user.username :username)
         (:as (:count :likes.id) :like_count))
      (sxql:from :post)
      (sxql:where (:= :parent :?))
      (sxql:left-join :user :on (:= :post.user_id :user.id))
      (sxql:left-join :likes :on (:= :post.id :likes.post_id))
      (sxql:group-by :post.id)
      (sxql:order-by (:desc :post.created_at))
      (sxql:limit 50)))
   :binds (list (mito:object-id post))))

(defgeneric toggle-like (user post)
  (:documentation "Toggles the like of a user to a given post"))

(defmethod toggle-like ((user ningle-auth/models:user) (post post))
  (let ((liked-post (liked-post-p user post)))
    (if liked-post
        (mito:delete-dao liked-post)
        (mito:create-dao 'likes :post post :user user))
    (not liked-post)))

(defgeneric liked-post-p (user post)
  (:documentation "Returns true if a user likes a given post"))

(defmethod liked-post-p ((user ningle-auth/models:user) (post post))
  (mito:find-dao 'likes :user user :post post))

(defgeneric posts (user)
  (:documentation "Gets the posts"))

(defmethod posts ((user user))
  (mito:retrieve-by-sql
   (sxql:yield
    (sxql:select
        (:post.*
         (:as :user.username :username)
         (:as (:count :likes.id) :like_count)
         (:as (:count :user_likes.id) :liked_by_user))
      (sxql:from :post)
      (sxql:left-join :user :on (:= :post.user_id :user.id))
      (sxql:left-join :likes :on (:= :post.id :likes.post_id))
      (sxql:left-join (:as :likes :user_likes)
                      :on (:and (:= :post.id :user_likes.post_id)
                                (:= :user_likes.user_id :?)))
      (sxql:group-by :post.id)
      (sxql:order-by (:desc :post.created_at))
      (sxql:limit 50)))
   :binds (list (mito:object-id user))))

(defmethod posts ((user null))
  (mito:retrieve-by-sql
   (sxql:yield
    (sxql:select
        (:post.*
         (:as :user.username :username)
         (:as (:count :likes.id) :like_count))
      (sxql:from :post)
      (sxql:left-join :user :on (:= :post.user_id :user.id))
      (sxql:left-join :likes :on (:= :post.id :likes.post_id))
      (sxql:group-by :post.id)
      (sxql:order-by (:desc :post.created_at))
      (sxql:limit 50)))))
src/forms.lisp
All we have to do here is define our form and validators and ensure they are exported, not really a lot of work!
(defpackage ningle-tutorial-project/forms
(:use :cl :cl-forms)
(:export #:post
#:content
- #:submit))
+ #:submit
+ #:comment
+ #:parent))
(in-package ningle-tutorial-project/forms)
(defparameter *post-validator* (list (clavier:not-blank)
(clavier:is-a-string)
(clavier:len :max 140)))
+(defparameter *post-parent-validator* (list (clavier:not-blank)
+ (clavier:fn (lambda (x) (> (parse-integer x) 0)) "Checks positive integer")))
(defform post (:id "post" :csrf-protection t :csrf-field-name "csrftoken" :action "/post")
((content :string :value "" :constraints *post-validator*)
(submit :submit :label "Post")))
+(defform comment (:id "post" :csrf-protection t :csrf-field-name "csrftoken" :action "/post/comment")
+ ((content :string :value "" :constraints *post-validator*)
+ (parent :hidden :value 0 :constraints *post-parent-validator*)
+ (submit :submit :label "Post")))
In our *post-parent-validator* we validate that the content of the parent field is not blank (as a comment needs a reference to a parent), and we use a custom validator, built with clavier:fn and a lambda, to verify the value is a positive integer.
We then create our comment form, which is very similar to our existing post form, with two differences: it points to a different HTTP endpoint (/post/comment rather than /post), and it has a hidden parent field, which we set to 0 by default. That means the form starts out invalid, but that's ok: we can't possibly know the parent id until the form is rendered, and we set the parent id value at the point we render the form, so it really is nothing to worry about.
Full Listing
(defpackage ningle-tutorial-project/forms
  (:use :cl :cl-forms)
  (:export #:post
           #:content
           #:submit
           #:comment
           #:parent))

(in-package ningle-tutorial-project/forms)

(defparameter *post-validator* (list (clavier:not-blank)
                                     (clavier:is-a-string)
                                     (clavier:len :max 140)))

(defparameter *post-parent-validator* (list (clavier:not-blank)
                                            (clavier:fn (lambda (x) (> (parse-integer x) 0)) "Checks positive integer")))

(defform post (:id "post" :csrf-protection t :csrf-field-name "csrftoken" :action "/post")
  ((content :string :value "" :constraints *post-validator*)
   (submit :submit :label "Post")))

(defform comment (:id "post" :csrf-protection t :csrf-field-name "csrftoken" :action "/post/comment")
  ((content :string :value "" :constraints *post-validator*)
   (parent :hidden :value 0 :constraints *post-parent-validator*)
   (submit :submit :label "Post")))
src/controllers.lisp
Having simplified the models, we can also simplify the controllers!
Let's start by setting up our package information:
(defpackage ningle-tutorial-project/controllers
- (:use :cl :sxql :ningle-tutorial-project/forms)
+ (:use :cl :sxql)
+ (:import-from :ningle-tutorial-project/forms
+ #:post
+ #:content
+ #:parent
+ #:comment)
- (:export #:logged-in-index
- #:index
+ (:export #:index
#:post-likes
#:single-post
#:post-content
+ #:post-comment
#:logged-in-profile
#:unauthorized-profile
#:people
#:person))
(in-package ningle-tutorial-project/controllers)
The index and logged-in-index can now be consolidated:
-(defun logged-in-index (params)
+(defun index (params)
(let* ((user (gethash :user ningle:*session*))
- (form (cl-forms:find-form 'post))
- (posts (ningle-tutorial-project/models:logged-in-posts user)))
- (djula:render-template* "main/index.html" nil :title "Home" :user user :posts posts :form form)))
-
-
-(defun index (params))
-(let ((posts (ningle-tutorial-project/models:not-logged-in-posts)))
- (djula:render-template* "main/index.html" nil :title "Home" :user (gethash :user ningle:*session*) :posts posts)))
+         (posts (ningle-tutorial-project/models:posts user)))
+    (djula:render-template* "main/index.html" nil :title "Home" :user user :posts posts :form (if user (cl-forms:find-form 'post) nil))))
Our post-likes controller comes next:
(defun post-likes (params)
(let* ((user (gethash :user ningle:*session*))
(post (mito:find-dao 'ningle-tutorial-project/models:post :id (parse-integer (ingle:get-param :id params))))
(res (make-hash-table :test 'equal)))
- (setf (gethash :post res) (parse-integer (ingle:get-param :id params)) )
- (setf (gethash :likes res) (ningle-tutorial-project/models:likes post))
- (setf (gethash :liked res) (ningle-tutorial-project/models:toggle-like user post))
+ ;; Bail out if post does not exist
+ (unless post
+ (setf (gethash "error" res) "post not found")
+ (setf (getf (lack.response:response-headers ningle:*response*) :content-type) "application/json")
+ (setf (lack.response:response-status ningle:*response*) 404)
+      (return-from post-likes (com.inuoe.jzon:stringify res)))
+
+ (setf (gethash "post" res) (mito:object-id post))
+ (setf (gethash "liked" res) (ningle-tutorial-project/models:toggle-like user post))
+ (setf (gethash "likes" res) (ningle-tutorial-project/models:likes post))
+ (setf (getf (lack.response:response-headers ningle:*response*) :content-type) "application/json")
+ (setf (lack.response:response-status ningle:*response*) 201)
+ (com.inuoe.jzon:stringify res)))
Here we begin by first checking that the post exists. If someone sent a request to our server without a valid post id, an error might be thrown and no response would be sent at all, which is not good, so we use unless as our "if not" check to return the standard HTTP code for not found, the good old 404!
If, however, there is no error (a post matching the id exists) we can continue: we build up the hash-table, including the "post", "liked", and "likes" properties of a post. Remember these are not direct properties of the post model, but are calculated from information in other tables. In particular, it is very important to call toggle-like before likes, as it changes the database state that likes depends on. toggle-like returns the toggled status: if a user clicks once it will like the post, and if they click again it will "unlike" the post.
Now, our single post page displays a lot more information (comments, likes, our new comment form, etc.), so we have to build up a more comprehensive single-post controller.
(defun single-post (params)
(handler-case
- (let ((post (mito:find-dao 'ningle-tutorial-project/models:post :id (parse-integer (ingle:get-param :id params)))))
- (djula:render-template* "main/post.html" nil :title "Post" :post post))
+
+ (let* ((post-id (parse-integer (ingle:get-param :id params)))
+ (post (mito:find-dao 'ningle-tutorial-project/models:post :id post-id))
+ (comments (ningle-tutorial-project/models:comments post (gethash :user ningle:*session*)))
+ (likes (ningle-tutorial-project/models:likes post))
+ (form (cl-forms:find-form 'comment))
+ (user (gethash :user ningle:*session*)))
+ (cl-forms:set-field-value form 'ningle-tutorial-project/forms:parent post-id)
+ (djula:render-template* "main/post.html" nil
+ :title "Post"
+ :post post
+ :comments comments
+ :likes likes
+ :form form
+ :user user))
(parse-error (err)
(setf (lack.response:response-status ningle:*response*) 404)
(djula:render-template* "error.html" nil :title "Error" :error err))))
Where previously we just rendered the template, we now do a lot more! We can get the likes, comments, etc., which is a massive step up in functionality.
The next function to look at is post-content; thankfully there isn't too much to change here: all we need to do is ensure we pass through the parent (which will be nil).
(when valid
(cl-forms:with-form-field-values (content) form
- (mito:create-dao 'ningle-tutorial-project/models:post :content content :user user)
+ (mito:create-dao 'ningle-tutorial-project/models:post :content content :user user :parent nil)
(ingle:redirect "/")))))
Now, finally in our controllers we add the post-comment controller.
+(defun post-comment (params)
+ (let ((user (gethash :user ningle:*session*))
+ (form (cl-forms:find-form 'comment)))
+ (handler-case
+ (progn
+ (cl-forms:handle-request form) ; Can throw an error if CSRF fails
+
+ (multiple-value-bind (valid errors)
+ (cl-forms:validate-form form)
+
+ (when errors
+ (format t "Errors: ~A~%" errors))
+
+ (when valid
+ (cl-forms:with-form-field-values (content parent) form
+ (mito:create-dao 'ningle-tutorial-project/models:post :content content :user user :parent (parse-integer parent))
+ (ingle:redirect "/")))))
+
+ (simple-error (err)
+ (setf (lack.response:response-status ningle:*response*) 403)
+ (djula:render-template* "error.html" nil :title "Error" :error err)))))
We have seen this pattern before; the minor differences are which form to load (comment instead of post) and setting the parent from the value injected into the form at the point it is rendered.
Full Listing
(defpackage ningle-tutorial-project/controllers
  (:use :cl :sxql)
  (:import-from :ningle-tutorial-project/forms
                #:post
                #:content
                #:parent
                #:comment)
  (:export #:index
           #:post-likes
           #:single-post
           #:post-content
           #:post-comment
           #:logged-in-profile
           #:unauthorized-profile
           #:people
           #:person))

(in-package ningle-tutorial-project/controllers)

(defun index (params)
  (let* ((user (gethash :user ningle:*session*))
         (posts (ningle-tutorial-project/models:posts user)))
    (djula:render-template* "main/index.html" nil :title "Home" :user user :posts posts :form (if user (cl-forms:find-form 'post) nil))))

(defun post-likes (params)
  (let* ((user (gethash :user ningle:*session*))
         (post (mito:find-dao 'ningle-tutorial-project/models:post :id (parse-integer (ingle:get-param :id params))))
         (res (make-hash-table :test 'equal)))

    ;; Bail out if post does not exist
    (unless post
      (setf (getf (lack.response:response-headers ningle:*response*) :content-type) "application/json")
      (setf (gethash "error" res) "post not found")
      (setf (lack.response:response-status ningle:*response*) 404)
      (return-from post-likes (com.inuoe.jzon:stringify res)))

    ;; success, continue
    (setf (gethash "post" res) (mito:object-id post))
    (setf (gethash "liked" res) (ningle-tutorial-project/models:toggle-like user post))
    (setf (gethash "likes" res) (ningle-tutorial-project/models:likes post))
    (setf (getf (lack.response:response-headers ningle:*response*) :content-type) "application/json")
    (setf (lack.response:response-status ningle:*response*) 201)
    (com.inuoe.jzon:stringify res)))

(defun single-post (params)
  (handler-case
      (let ((post (mito:find-dao 'ningle-tutorial-project/models:post :id (parse-integer (ingle:get-param :id params))))
            (form (cl-forms:find-form 'comment)))
        (cl-forms:set-field-value form 'ningle-tutorial-project/forms:parent (mito:object-id post))
        (djula:render-template* "main/post.html" nil
                                :title "Post"
                                :post post
                                :comments (ningle-tutorial-project/models:comments post (gethash :user ningle:*session*))
                                :likes (ningle-tutorial-project/models:likes post)
                                :form form
                                :user (gethash :user ningle:*session*)))
    (parse-error (err)
      (setf (lack.response:response-status ningle:*response*) 404)
      (djula:render-template* "error.html" nil :title "Error" :error err))))

(defun post-content (params)
  (let ((user (gethash :user ningle:*session*))
        (form (cl-forms:find-form 'post)))
    (handler-case
        (progn
          (cl-forms:handle-request form) ; Can throw an error if CSRF fails

          (multiple-value-bind (valid errors)
              (cl-forms:validate-form form)

            (when errors
              (format t "Errors: ~A~%" errors))

            (when valid
              (cl-forms:with-form-field-values (content) form
                (mito:create-dao 'ningle-tutorial-project/models:post :content content :user user :parent nil)
                (ingle:redirect "/")))))

      (simple-error (err)
        (setf (lack.response:response-status ningle:*response*) 403)
        (djula:render-template* "error.html" nil :title "Error" :error err)))))

(defun post-comment (params)
  (let ((user (gethash :user ningle:*session*))
        (form (cl-forms:find-form 'comment)))
    (handler-case
        (progn
          (cl-forms:handle-request form) ; Can throw an error if CSRF fails

          (multiple-value-bind (valid errors)
              (cl-forms:validate-form form)

            (when errors
              (format t "Errors: ~A~%" errors))

            (when valid
              (cl-forms:with-form-field-values (content parent) form
                (mito:create-dao 'ningle-tutorial-project/models:post :content content :user user :parent (parse-integer parent))
                (ingle:redirect "/")))))

      (simple-error (err)
        (setf (lack.response:response-status ningle:*response*) 403)
        (djula:render-template* "error.html" nil :title "Error" :error err)))))

(defun logged-in-profile (params)
  (let ((user (gethash :user ningle:*session*)))
    (djula:render-template* "main/profile.html" nil :title "Profile" :user user)))

(defun unauthorized-profile (params)
  (setf (lack.response:response-status ningle:*response*) 403)
  (djula:render-template* "error.html" nil :title "Error" :error "Unauthorized"))

(defun people (params)
  (let ((users (mito:retrieve-dao 'ningle-auth/models:user)))
    (djula:render-template* "main/people.html" nil :title "People" :users users :user (cu-sith:logged-in-p))))

(defun person (params)
  (let* ((username-or-email (ingle:get-param :person params))
         (person (first (mito:select-dao 'ningle-auth/models:user
                          (where (:or (:= :username username-or-email)
                                      (:= :email username-or-email)))))))
    (djula:render-template* "main/person.html" nil :title "Person" :person person :user (cu-sith:logged-in-p))))
src/main.lisp
The change to our main.lisp file is a single line that connects our new post-comment controller to the URL it serves.
(setf (ningle:route *app* "/post" :method :POST :logged-in-p t) #'post-content)
+(setf (ningle:route *app* "/post/comment" :method :POST :logged-in-p t) #'post-comment)
(setf (ningle:route *app* "/profile" :logged-in-p t) #'logged-in-profile)
Full Listing
(defpackage ningle-tutorial-project
  (:use :cl :ningle-tutorial-project/controllers)
  (:export #:start
           #:stop))

(in-package ningle-tutorial-project)

(defvar *app* (make-instance 'ningle:app))

;; requirements
(setf (ningle:requirement *app* :logged-in-p)
      (lambda (value)
        (and (cu-sith:logged-in-p) value)))

;; routes
(setf (ningle:route *app* "/") #'index)
(setf (ningle:route *app* "/post/:id/likes" :method :POST :logged-in-p t) #'post-likes)
(setf (ningle:route *app* "/post/:id") #'single-post)
(setf (ningle:route *app* "/post" :method :POST :logged-in-p t) #'post-content)
(setf (ningle:route *app* "/post/comment" :method :POST :logged-in-p t) #'post-comment)
(setf (ningle:route *app* "/profile" :logged-in-p t) #'logged-in-profile)
(setf (ningle:route *app* "/profile") #'unauthorized-profile)
(setf (ningle:route *app* "/people") #'people)
(setf (ningle:route *app* "/people/:person") #'person)

(defmethod ningle:not-found ((app ningle:<app>))
  (declare (ignore app))
  (setf (lack.response:response-status ningle:*response*) 404)
  (djula:render-template* "error.html" nil :title "Error" :error "Not Found"))

(defun start (&key (server :woo) (address "127.0.0.1") (port 8000))
  (djula:add-template-directory (asdf:system-relative-pathname :ningle-tutorial-project "src/templates/"))
  (djula:set-static-url "/public/")
  (clack:clackup
   (lack.builder:builder (envy-ningle:build-middleware :ningle-tutorial-project/config *app*))
   :server server :address address :port port))

(defun stop (instance)
  (clack:stop instance))
src/templates/main/index.html
There are some small changes needed in the index.html file; they're largely just optimisations. The first is changing the like state from a boolean to an integer. This gets into the weeds of JavaScript types: ensuring things were of the Number type in JS just made things easier. Some of the previous code even treated booleans as strings, which was pretty bad. I don't write JS in any real capacity, so I often make mistakes with it, because it so very often appears to work instead of just throwing an error.
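The underlying gotcha is that element.dataset always stores strings, so the old comparison against the string "true" silently broke as soon as anything else was assigned. A small sketch (illustrative, not the tutorial's exact code) of why the 0/1 convention with Number() coercion is safer:

```javascript
// dataset assignment stringifies its value, so the number 1 arrives
// back as the string "1"; coercing with Number() makes the comparison
// numeric and fails closed on unexpected input.
function isLiked(datasetValue) {
  return Number(datasetValue) === 1;
}

console.log(isLiked(String(1))); // true  ("1" -> 1)
console.log(isLiked("0"));       // false
console.log(isLiked("true"));    // false (Number("true") is NaN)
```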
~ Lines 28 - 30
data-logged-in="true"
- data-liked="false"
+ data-liked="0"
aria-label="Like post {{ post.id }}">
~ Lines 68 - 70
const icon = btn.querySelector("i");
- const liked = btn.dataset.liked === "true";
+ const liked = Number(btn.dataset.liked) === 1;
const previous = parseInt(countSpan.textContent, 10) || 0;
~ Lines 96 - 100
if (!resp.ok) {
// Revert optimistic changes on error
countSpan.textContent = previous;
- btn.dataset.liked = liked ? "true" : "false";
+ btn.dataset.liked = liked ? 1 : 0;
if (liked) {
~ Lines 123 - 129
console.error("Like failed:", err);
// Revert optimistic changes on error
countSpan.textContent = previous;
- btn.dataset.liked = liked ? "true" : "false";
+ btn.dataset.liked = liked ? 1 : 0;
if (liked) {
icon.className = "bi bi-hand-thumbs-up-fill text-primary";
} else {
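The switch away from "true"/"false" makes sense once you remember that every data-* attribute surfaces in JavaScript as a string. A minimal sketch of the trap and the fix (a plain object stands in for element.dataset here, an assumption made so the snippet runs outside a browser):

```javascript
// All data-* attributes surface in element.dataset as strings, never
// booleans or numbers. A plain object stands in for the real dataset.
const dataset = { liked: "0" };  // as rendered from data-liked="0"

// Truthiness trap: "0" is a non-empty string, so it is truthy.
const naive = Boolean(dataset.liked);       // true, despite meaning "not liked"

// Explicit numeric coercion keeps the intent unambiguous:
const liked = Number(dataset.liked) === 1;  // false, as intended

console.log(naive, liked);
```

The same reasoning applies when writing the value back: assigning `liked ? 1 : 0` to a dataset property stores the strings "1" and "0", which round-trip cleanly through Number().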
src/templates/main/post.html
The changes to this file are so substantial that the file might as well be brand new, so in the interests of clarity, I will simply show the file in full.
Full Listing
{% extends "base.html" %}
{% block content %}
<div class="container">
<div class="row">
<div class="col-12">
<div class="card post mb-3" data-href="/post/{{ post.id }}">
<div class="card-body">
<h5 class="card-title mb-2">{{ post.content }}</h5>
<p class="card-subtitle text-muted mb-0">@{{ post.user.username }}</p>
</div>
<div class="card-footer d-flex justify-content-between align-items-center">
<button type="button"
class="btn btn-sm btn-outline-primary like-button"
data-post-id="{{ post.id }}"
data-logged-in="{% if user.username != "" %}true{% else %}false{% endif %}"
data-liked="{% if post.liked-by-user == 1 %}1{% else %}0{% endif %}"
aria-label="Like post {{ post.id }}">
{% if post.liked-by-user == 1 %}
<i class="bi bi-hand-thumbs-up-fill text-primary" aria-hidden="true"></i>
{% else %}
<i class="bi bi-hand-thumbs-up text-muted" aria-hidden="true"></i>
{% endif %}
<span class="ms-1 like-count">{{ likes }}</span>
</button>
<small class="text-muted">Posted on: {{ post.created-at }}</small>
</div>
</div>
</div>
</div>
<!-- Post form -->
{% if user %}
<div class="row mb-4">
<div class="col">
{% if form %}
{% form form %}
{% endif %}
</div>
</div>
{% endif %}
{% if comments %}
<div class="row mb-4">
<div class="col-12">
<h2>Comments</h2>
</div>
</div>
{% endif %}
{% for comment in comments %}
<div class="row mb-4">
<div class="col-12">
<div class="card post mb-3" data-href="/post/{{ comment.id }}">
<div class="card-body">
<h5 class="card-title mb-2">{{ comment.content }}</h5>
<p class="card-subtitle text-muted mb-0">@{{ comment.username }}</p>
</div>
<div class="card-footer d-flex justify-content-between align-items-center">
<button type="button"
class="btn btn-sm btn-outline-primary like-button"
data-post-id="{{ comment.id }}"
data-logged-in="{% if user.username != "" %}true{% else %}false{% endif %}"
data-liked="{% if comment.liked-by-user == 1 %}1{% else %}0{% endif %}"
aria-label="Like post {{ comment.id }}">
{% if comment.liked-by-user == 1 %}
<i class="bi bi-hand-thumbs-up-fill text-primary" aria-hidden="true"></i>
{% else %}
<i class="bi bi-hand-thumbs-up text-muted" aria-hidden="true"></i>
{% endif %}
<span class="ms-1 like-count">{{ comment.like-count }}</span>
</button>
<small class="text-muted">Posted on: {{ comment.created-at }}</small>
</div>
</div>
</div>
</div>
{% endfor %}
</div>
{% endblock %}
{% block js %}
document.querySelectorAll(".like-button").forEach(btn => {
btn.addEventListener("click", function (e) {
e.stopPropagation();
e.preventDefault();
// Check login
if (btn.dataset.loggedIn !== "true") {
alert("You must be logged in to like posts.");
return;
}
const postId = btn.dataset.postId;
const countSpan = btn.querySelector(".like-count");
const icon = btn.querySelector("i");
const liked = Number(btn.dataset.liked) === 1;
const previous = parseInt(countSpan.textContent, 10) || 0;
const url = `/post/${postId}/likes`;
// Optimistic UI toggle
countSpan.textContent = liked ? previous - 1 : previous + 1;
btn.dataset.liked = liked ? 0 : 1;
// Toggle icon classes optimistically
if (liked) {
// Currently liked, so unlike it
icon.className = "bi bi-hand-thumbs-up text-muted";
} else {
// Currently not liked, so like it
icon.className = "bi bi-hand-thumbs-up-fill text-primary";
}
const csrfTokenMeta = document.querySelector('meta[name="csrf-token"]');
const headers = { "Content-Type": "application/json" };
if (csrfTokenMeta) headers["X-CSRF-Token"] = csrfTokenMeta.getAttribute("content");
fetch(url, {
method: "POST",
headers: headers,
body: JSON.stringify({ toggle: true })
})
.then(resp => {
if (!resp.ok) {
// Revert optimistic changes on error
countSpan.textContent = previous;
btn.dataset.liked = liked ? 1 : 0;
icon.className = liked ? "bi bi-hand-thumbs-up-fill text-primary" : "bi bi-hand-thumbs-up text-muted";
throw new Error("Network response was not ok");
}
return resp.json();
})
.then(data => {
if (data && typeof data.likes !== "undefined") {
countSpan.textContent = data.likes;
btn.dataset.liked = data.liked ? 1 : 0;
icon.className = data.liked ? "bi bi-hand-thumbs-up-fill text-primary" : "bi bi-hand-thumbs-up text-muted";
}
})
.catch(err => {
console.error("Like failed:", err);
// Revert optimistic changes on error
countSpan.textContent = previous;
btn.dataset.liked = liked ? 1 : 0;
icon.className = liked ? "bi bi-hand-thumbs-up-fill text-primary" : "bi bi-hand-thumbs-up text-muted";
});
});
});
document.querySelectorAll(".card.post").forEach(card => {
card.addEventListener("click", function () {
const href = card.dataset.href;
if (href) {
window.location.href = href;
}
});
});
{% endblock %}
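The like-button handler's error paths all follow the same optimistic-update-with-revert shape: snapshot the state, mutate immediately, and restore the snapshot if the request fails. A distilled, framework-free sketch of that pattern (the function and variable names here are illustrative, not taken from the template):

```javascript
// Distilled version of the optimistic-toggle-with-revert pattern:
// snapshot state before the toggle so a failed request can restore it.
function toggleLike(state) {
  const previous = { ...state };              // snapshot
  state.count += state.liked ? -1 : 1;        // optimistic count update
  state.liked = !state.liked;                 // optimistic flag flip
  return () => Object.assign(state, previous); // revert callback for errors
}

const state = { liked: false, count: 3 };
const revert = toggleLike(state);
console.log(state);   // { liked: true, count: 4 }
revert();             // e.g. fetch() rejected or resp.ok was false
console.log(state);   // { liked: false, count: 3 }
```

In the template above the server response is still the source of truth: on success the handler overwrites the optimistic values with `data.likes` and `data.liked`, so the optimistic update only ever bridges the round-trip latency.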
Conclusion
Learning Outcomes
| Level | Learning Outcome |
|---|---|
| Understand | Understand how to model a self-referential post table in Mito (using a nullable parent column) and why (or :post :null)/:initform nil are important for safe migrations and representing "top-level" posts versus comments. |
| Apply | Apply Mito, SXQL, and cl-forms to implement a comment system end-to-end: defining comments/posts generics, adding validators (including a custom clavier:fn), wiring controllers and routes, and rendering comments and like-buttons in templates. |
| Analyse | Analyse and reduce duplication in the models/controllers layer by consolidating separate code paths (logged-in vs anonymous) into generic functions specialised on user/null, and by examining how SQL joins and binds shape the returned data. |
| Evaluate | Evaluate different design and safety choices in the implementation (nullable vs sentinel parents, optimistic UI vs server truth, HTTP status codes, SQL placeholders, CSRF and login checks) and judge which approaches are more robust and maintainable. |
Github
- The link for this tutorial's code is available here.
Common Lisp HyperSpec
| Symbol | Type | Why it appears in this lesson | CLHS |
|---|---|---|---|
| defpackage | Macro | Define project packages like ningle-tutorial-project/models, /forms, /controllers, and the main system package. | http://www.lispworks.com/documentation/HyperSpec/Body/m_defpac.htm |
| in-package | Macro | Enter each package before defining tables, forms, controllers, and the main app functions. | http://www.lispworks.com/documentation/HyperSpec/Body/m_in_pkg.htm |
| defvar | Macro | Define *app* as a global Ningle application object. | http://www.lispworks.com/documentation/HyperSpec/Body/m_defpar.htm |
| defparameter | Macro | Define validator configuration variables like *post-validator* and *post-parent-validator*. | http://www.lispworks.com/documentation/HyperSpec/Body/m_defpar.htm |
| defgeneric | Macro | Declare generic functions such as likes, comments, toggle-like, liked-post-p, and posts. | http://www.lispworks.com/documentation/HyperSpec/Body/m_defgen.htm |
| defmethod | Macro | Specialise behaviour for likes, comments, toggle-like, liked-post-p, posts, and ningle:not-found. | http://www.lispworks.com/documentation/HyperSpec/Body/m_defmet.htm |
| defun | Macro | Define controller functions like index, post-likes, single-post, post-content, post-comment, people, person, start, etc. | http://www.lispworks.com/documentation/HyperSpec/Body/m_defun.htm |
| make-instance | Generic Function | Create the Ningle app object: (make-instance 'ningle:app). | http://www.lispworks.com/documentation/HyperSpec/Body/f_mk_ins.htm |
| let / let* | Special Operator | Introduce local bindings like user, posts, post, comments, likes, form, and res in controllers. | http://www.lispworks.com/documentation/HyperSpec/Body/s_let_l.htm |
| lambda | Macro | Used for the :logged-in-p requirement: (lambda (value) (and (cu-sith:logged-in-p) value)). | http://www.lispworks.com/documentation/HyperSpec/Body/s_fn_lam.htm |
| setf | Macro | Set routes, response headers/status codes, and update hash-table entries in the JSON response. | http://www.lispworks.com/documentation/HyperSpec/Body/m_setf.htm |
| gethash | Accessor | Access session values (e.g. the :user from ningle:*session*) and JSON keys in result hash-tables. | http://www.lispworks.com/documentation/HyperSpec/Body/f_gethas.htm |
| make-hash-table | Function | Build the hash-table used as the JSON response body in post-likes. | http://www.lispworks.com/documentation/HyperSpec/Body/f_mk_has.htm |
| equal | Function | Used as the :test function for the JSON response hash-table. | http://www.lispworks.com/documentation/HyperSpec/Body/f_equal.htm |
| list | Function | Build the :binds list for mito:retrieve-by-sql and other list values. | http://www.lispworks.com/documentation/HyperSpec/Body/f_list.htm |
| first | Accessor | Take the first result from mito:select-dao in the person controller. | http://www.lispworks.com/documentation/HyperSpec/Body/f_firstc.htm |
| slot-value | Function | Discussed when explaining the old pattern (slot-value user '...:id) that was replaced by mito:object-id. | http://www.lispworks.com/documentation/HyperSpec/Body/f_slot__.htm |
| parse-integer | Function | Convert route params and hidden form parent values into integers (post-id, parent, etc.). | http://www.lispworks.com/documentation/HyperSpec/Body/f_parse_.htm |
| format | Function | Print validation error information in the controllers ((format t "Errors: ~A~%" errors)). | http://www.lispworks.com/documentation/HyperSpec/Body/f_format.htm |
| handler-case | Macro | Handle parse-error for invalid ids and simple-error for CSRF failures, mapping them to 404 / 403 responses. | http://www.lispworks.com/documentation/HyperSpec/Body/m_hand_1.htm |
| parse-error | Condition Type | Signalled when parsing fails (e.g. malformed :id route parameters), caught in single-post. | http://www.lispworks.com/documentation/HyperSpec/Body/e_parse_.htm |
| simple-error | Condition Type | Used to represent CSRF and similar failures caught in post-content and post-comment. | http://www.lispworks.com/documentation/HyperSpec/Body/e_smp_er.htm |
| multiple-value-bind | Macro | Bind the (valid errors) results from cl-forms:validate-form. | http://www.lispworks.com/documentation/HyperSpec/Body/m_mpv_bn.htm |
| progn | Special Operator | Group side-effecting calls (handle request, validate, then create/redirect) under a single handler in handler-case. | http://www.lispworks.com/documentation/HyperSpec/Body/s_progn.htm |
| when | Macro | Conditionally log validation errors and perform DAO creation only when the form is valid. | http://www.lispworks.com/documentation/HyperSpec/Body/m_when_.htm |
| unless | Macro | Early-exit error path in post-likes when the post cannot be found ((unless post ... (return-from ...))). | http://www.lispworks.com/documentation/HyperSpec/Body/m_when_.htm |
| return-from | Special Operator | Non-locally return from post-likes after sending a 404 JSON response. | http://www.lispworks.com/documentation/HyperSpec/Body/s_ret_fr.htm |
| declare | Symbol | Used with (declare (ignore app)) in the ningle:not-found method to silence unused-argument warnings. | http://www.lispworks.com/documentation/HyperSpec/Body/s_declar.htm |
| and / or | Macro | Logical composition in the login requirement and in the where clause for username/email matching. | http://www.lispworks.com/documentation/HyperSpec/Body/a_and.htm |
20 Nov 2025 8:00am GMT
19 Nov 2025
Planet Debian
Dirk Eddelbuettel: digest 0.6.39 on CRAN: Micro Update

Release 0.6.39 of the digest package arrived at CRAN today and has also been uploaded to Debian.
digest creates hash digests of arbitrary R objects. It can use a number of different hashing algorithms (md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64, murmur32, spookyhash, blake3, crc32c, xxh3_64 and xxh3_128), and enables easy comparison of (potentially large and nested) R language objects as it relies on the native serialization in R. It is a mature and widely-used package (with 86.8 million downloads just on the partial cloud mirrors of CRAN which keep logs) as many tasks may involve caching of objects for which it provides convenient general-purpose hash key generation to quickly identify the various objects.
As noted last week in the 0.6.38 release note, hours after it was admitted to CRAN, I heard from the ever-so-tireless Brian Ripley about an SAN issue on arm64 only (and apparently non-reproducible elsewhere). He kindly provided a fix; it needed a cast. Checking this on amd64 against our Rocker-based ASAN and UBSAN containers (where it remains impossible to replicate; the issue is apparently known to be arm64-specific), another micro-issue (a final argument NULL missing in one .Call()) was detected. Both issues were fixed the same day, and they constitute the only change here. I merely waited a week to avoid the mechanical nag triggered when releases happen within a week.
My CRANberries provides a summary of changes to the previous version. For questions or comments use the issue tracker off the GitHub repo. For documentation (including the changelog) see the documentation site.
If you like this or other open-source work I do, you can now sponsor me at GitHub.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
19 Nov 2025 11:29pm GMT
Dirk Eddelbuettel: #055: More Frequent r2u Updates

Welcome to post 55 in the R4 series.
r2u brings CRAN packages for R to Ubuntu. We mentioned it in the R4 series within the last year in posts #54 about faster CI, #48 about the r2u keynote at U Mons, #47 reviewing r2u at its third birthday, #46 about adding arm64 support, and #44 about the r2u for mlops talk.
Today brings news of an important (internal) update. Following both the arm64 builds as well as the last bi-annual BioConductor package update (and the extension of BioConductor coverage to arm64), more and more of our build setup became automated at GitHub. This has now been unified. We dispatch builds for amd64 packages for 'jammy' (22.04) and 'noble' (24.04) (as well as for the arm64 binaries for 'noble') from the central build repository and enjoy the highly parallel build of the up to forty available GitHub Runners. In the process we also switched fully to source builds.
In the past, we had relied on p3m.dev (formerly known as ppm and rspm) and its binaries. These so-called 'naked binaries' are what R produces when called as R CMD INSTALL --build. They are portable within the same build architecture and release, but do not carry packaging information. Now, when a Debian or Ubuntu .deb binary is built, the same step of R CMD INSTALL --build happens. So our earlier insight was to skip the compilation step, use the p3m binary, and then wrap the remainder of a complete package around it. That includes the all-important dependency information, both the R package relations (from hard Depends / Imports / LinkingTo or soft Suggests declarations) and the shared library dependency resolution we can do when building for a Linux distribution.
That served us well, and we remain really grateful for the p3m.dev build service. But it also meant we were depending on the 'clock' and 'cadence' of p3m.dev. Which was not really a problem when it ran reliably daily, and early too, including weekends, and showed a timestamp of last updates. By now it is a bit more erratic, frequently late, skips weekends more regularly, and long ago stopped showing when it was last updated. Late-afternoon releases reflecting CRAN updates that ended one and a half days earlier are still good, just not all that current. Plus there was always the very opaque occurrence where maybe one in 50 packages or so would not even be provided as a binary, so we had to build it anyway; the fallback always existed, and was used for both BioConductor (no binaries) and arm64 (no binaries at first, though this has now changed). So now we just build packages the standard way, albeit as GitHub Actions.
In doing so we can ignore p3m.dev, and rather follow the CRAN clock and cadence (as for example CRANberries does), and can update several times a day. For example early this morning (Central time) we ran an update for the then-new 28 source packages, resulting in 28 jammy and 36 noble binary packages; right now in mid-afternoon we are running another build for 37 source packages resulting in 37 jammy and 47 noble packages. (Packages without a src/ directory, and hence no compilation, can be used across amd64 and arm64; those that do have src/ are rebuilt for arm64, hence the different sets of jammy and noble packages, as only the latter has arm64 now.) This gets packages from this morning into r2u which p3m.dev should have by tomorrow afternoon or so.
And with that r2u remains "Fast. Easy. Reliable. Pick all three!" and also a little more predictable and current in its delivery. What's not to like?
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.
19 Nov 2025 8:15pm GMT
18 Nov 2025
Planet Lisp
Tim Bradshaw: The lost cause of the Lisp machines
I am just really bored by Lisp Machine romantics at this point: they should go away. I expect they never will.
History
Symbolics went bankrupt in early 1993. In the way of these things various remnants of the company lingered on for, in this case, decades. But 1993 was when the Lisp machines died.
The death was not unexpected: by the time I started using mainstream Lisps in 1989¹ everyone knew that special hardware for Lisp was a dead idea. The common idea was that the arrival of RISC machines had killed it, but in fact machines like the Sun 3/260 in its 'AI' configuration² were already hammering nails in its coffin. In 1987 I read a report showing the Lisp performance of an early RISC machine, using Kyoto Common Lisp, not a famously fast implementation of CL, beating a Symbolics on the Gabriel benchmarks [PDF link].
1993 is 32 years ago. The Symbolics 3600, probably the first Lisp machine that sold in more than tiny numbers, was introduced in 1983, ten years earlier. People who used Lisp machines other than as historical artefacts are old today³.
Lisp machines were both widely available and offered the best performance for Lisp for a period of about five years which ended nearly forty years ago. They were probably never competitive in terms of performance for the money.
It is time, and long past time, to let them go.
But still the romantics - some of them even old enough to remember the Lisp machines - repeat their myths.
'It was the development environment'
No, it wasn't.
The development environments offered by both families of Lisp machines were seriously cool, at least for the 1980s. I mean, they really were very cool indeed. Some of the ways they were cool matter today, but some don't. For instance in the 1980s and early 1990s Lisp images were very large compared to available memory, and machines were also extremely slow in general. So good Lisp development environments did a lot of work to hide this slowness, and in general made sure you only very seldom had to restart everything, which took significant fractions of an hour, if not more. None of that matters today, because machines are so quick and Lisps so relatively small.
But that's not the only way they were cool. They really were just lovely things to use in many ways. But, despite what people might believe, this did not depend on the hardware: there is no reason at all why a development environment that cool could not be built on stock hardware. Perhaps, (perhaps) that was not true in 1990: it is certainly true today.
So if a really cool Lisp development environment doesn't exist today, it is nothing to do with Lisp machines not existing. In fact, as someone who used Lisp machines, I find the LispWorks development environment at least as comfortable and productive as they were. But, oh no, the full-fat version is not free, and no version is open source. Neither, I remind you, were they.
'They were much faster than anything else'
No, they weren't. Please, stop with that.
'The hardware was user-microcodable, you see'
Please, stop telling me things about machines I used: believe it or not, I know those things.
Many machines were user-microcodable before about 1990. That meant that, technically, a user of the machine could implement their own instruction set. I am sure there are cases where people even did that, and a much smaller number of cases where doing that was not just a waste of time.
But in almost all cases the only people who wrote microcode were the people who built the machine. And the reason they wrote microcode was because it is the easiest way of implementing a very complex instruction set, especially when you can't use vast numbers of transistors. For instance if you're going to provide an 'add' instruction which will add numbers of any type, trapping back into user code for some cases, then by far the easiest way of doing that is going to be by writing code, not building hardware. And that's what the Lisp machines did.
Of course, the compiler could have generated that code for hardware without that instruction. But with the special instruction the compiler's job is much easier, and code is smaller. A small, quick compiler and small compiled code were very important with slow machines which had tiny amounts of memory. Of course a compiler not made of wet string could have used type information to avoid generating the full dispatch case, but wet string was all that was available.
What microcodable machines almost never meant was that users of the machines would write microcode.
At the time, the tradeoffs made by Lisp machines might even have been reasonable. CISC machines in general were probably good compromises given the expense of memory and how rudimentary compilers were: I can remember being horrified at the size of compiled code for RISC machines. But I was horrified because I wasn't thinking about it properly. Moore's law was very much in effect in about 1990 and, among other things, it meant that the amount of memory you could afford was rising exponentially with time: the RISC people understood that.
'They were Lisp all the way down'
This, finally, maybe, is a good point. They were, and you could dig around and change things on the fly, and this was pretty cool. Sometimes you could even replicate the things you'd done later. I remember playing with sound on a 3645 which was really only possible because you could get low-level access to the disk from Lisp, as the disk could just marginally provide data fast enough to stream sound.
On the other hand they had no isolation and thus no security at all: people didn't care about that in 1985, but if I was using a Lisp-based machine today I would certainly be unhappy if my web browser could modify my device drivers on the fly, or poke and peek at network buffers. A machine that was Lisp all the way down today would need to ensure that things like that couldn't happen.
So may be it would be Lisp all the way down, but you absolutely would not have the kind of ability to poke around in and redefine parts of the guts you had on Lisp machines. Maybe that's still worth it.
Not to mention that I'm just not very interested in spending a huge amount of time grovelling around in the guts of something like an SSL implementation: those things exist already, and I'd rather do something new and cool. I'd rather do something that Lisp is uniquely suited for, not reinvent wheels. Well, may be that's just me.
Machines which were Lisp all the way down might, indeed, be interesting, although they could not look like 1980s Lisp machines if they were to be safe. But that does not mean they would need special hardware for Lisp: they wouldn't. If you want something like this, hardware is not holding you back: there's no need to endlessly mourn the lost age of Lisp machines, you can start making one now. Shut up and code.
And now we come to the really strange arguments, the arguments that we need special Lisp machines either for reasons which turn out to be straightforwardly false, or because we need something that Lisp machines never were.
'Good Lisp compilers are too hard to write for stock hardware'
This mantra is getting old.
The most important thing is that we have good stock-hardware Lisp compilers today. As an example, today's CL compilers are not far from CLANG/LLVM for floating-point code. I tested SBCL and LispWorks: it would be interesting to know how many times more work has gone into LLVM than into them for such a relatively small improvement. I can't imagine a world where these two CL compilers would not be at least comparable to LLVM if similar effort was spent on them⁴.
These things are so much better than the wet-cardboard-and-string compilers that the LispMs had it's not funny.
A large amount of work is also going into compilation for other dynamically-typed, interactive languages which aim at high performance. That means on-the-fly compilation and recompilation of code where both the compilation and the resulting code must be quick. Example: Julia. Any of that development could be reused by Lisp compiler writers if they needed to or wanted to (I don't know if they do, or should).
Ah, but then it turns out that that's not what is meant by a 'good compiler' after all. It turns out that 'good' means 'compilation is fast'.
All these compilers are pretty quick: the computational resources used by even a pretty hairy compiler have not scaled anything like as fast as those needed for the problems we want to solve (that's why Julia can use LLVM on the fly). Compilation is also not an Amdahl bottleneck as it can happen on the node that needs the compiled code.
Compilers are so quick that a widely-used CL implementation exists where EVAL uses the compiler, unless you ask it not to.
Compilation options are also a thing: you can ask compilers to be quick, fussy, sloppy, safe, produce fast code and so on. Some radically modern languages also allow this to be done in a standardised (but extensible) way at the language level, so you can say 'make this inner loop really quick, and I have checked all the bounds so don't bother with that'.
The tradeoff between a fast Lisp compiler and a really good Lisp compiler is imaginary, at this point.
'They had wonderful keyboards'
Well, if you didn't mind the weird layouts: yes, they did⁵. And that has exactly nothing to do with Lisp.
And so it goes on.
Bored now
There's a well-known syndrome amongst photographers and musicians called GAS: gear acquisition syndrome. Sufferers from this⁶ pursue an endless stream of purchases of gear - cameras, guitars, FX pedals, the last long-expired batch of a legendary printing paper - in the strange hope that the next camera, the next pedal, that paper, will bring out the Don McCullin, Jimmy Page or Chris Killip in them. Because, of course, Don McCullin & Chris Killip only took the pictures they did because they had the right cameras: it was nothing to do with talent, practice or courage, no.
GAS is a lie we tell ourselves to avoid the awkward reality that what we actually need to do is practice, a lot, and that even if we did that we might not actually be very talented.
Lisp machine romanticism is the same thing: a wall we build ourselves so that, somehow unable to climb over it or knock it down, we never have to face the fact that the only thing stopping us is us.
There is no purpose to arguing with Lisp machine romantics because they will never accept that the person building the endless barriers in their way is the same person they see in the mirror every morning. They're too busy building the walls.
As a footnote, I went to a talk by an HPC person in the early 90s (so: after the end of the cold war⁷ and when the HPC money had gone) where they said that HPC people needed to be aiming at machines based on what big commercial systems looked like as nobody was going to fund dedicated HPC designs any more. At the time that meant big cache-coherent SMP systems. Those hit their limits and have really died out now: the bank I worked for had dozens of fully-populated big SMP systems in 2007, it perhaps still has one or two they can't get rid of because of some legacy application. So HPC people now run on enormous shared-nothing farms of close-to-commodity processors with very fat interconnect and are wondering about / using GPUs. That's similar to what happened to Lisp systems, of course: perhaps, in the HPC world, there are romantics who mourn the lost glories of the Cray-3. Well, if I was giving a talk to people interested in the possibilities of hardware today I'd be saying that in a few years there are going to be a lot of huge farms of GPUs going very cheap if you can afford the power. People could be looking at whether those can be used for anything more interesting than the huge neural networks they were designed for. I don't know if they can.
1. Before that I had read about Common Lisp but actually written programs in Cambridge Lisp and Standard Lisp. ↩
2. This had a lot of memory and a higher-resolution screen, I think, and probably was bundled with a rebadged Lucid Common Lisp. ↩
3. I am at the younger end of people who used these machines in anger: I was not there for the early part of the history described here, and I was also not in the right part of the world at a time when that mattered more. But I wrote Lisp from about 1985 and used Lisp machines of both families from 1989 until the mid to late 1990s. I know from first-hand experience what these machines were like. ↩
4. If anyone has good knowledge of Arm64 (specifically Apple M1) assembler and performance, and the patience to pore over a couple of assembler listings and work out performance differences, please get in touch. I have written most of a document exploring the difference in performance, but I lost the will to live at the point where it came down to understanding just what details made the LLVM code faster. All the compilers seem to do a good job of the actual float code, but perhaps things like array access or loop overhead are a little slower in Lisp. The difference between SBCL & LLVM is a factor of under 1.2. ↩
5. The Sun type 3 keyboard was both wonderful and did not have a weird layout, so there's that. ↩
6. I am one: I know what I'm talking about here. ↩
7. The cold war did not end in 1991. America did not win. ↩
18 Nov 2025 8:52am GMT
16 Nov 2025
Planet Lisp
Joe Marshall: AI success anecdotes
Anecdotes are not data.
You cannot extrapolate trends from anecdotes. A sample size of one is rarely significant. You cannot derive general conclusions based on a single data point.
Yet, a single anecdote can disprove a categorical. You only need one counterexample to disprove a universal claim. And an anecdote can establish a possibility. If you run a benchmark once and it takes one second, you have at least established that the benchmark can complete in one second, as well as established that the benchmark can take as long as one second. You can also make some educated guesses about the likely range of times the benchmark might take, probably within a couple of orders of magnitude more or less than the one second anecdotal result. It probably won't be as fast as a microsecond nor as slow as a day.
An anecdote won't tell you what is typical or what to expect in general, but that doesn't mean it is completely worthless. And while one anecdote is not data, enough anecdotes can be.
Here are a couple of AI success story anecdotes. They don't necessarily show what is typical, but they do show what is possible.
I was working on a feature request for a tool that I did not author and had never used. The feature request was vague. It involved saving time by feeding back some data from one part of the tool to an earlier stage so that subsequent runs of the same tool would bypass redundant computation. The concept was straightforward, but the details were not. What exactly needed to be fed back? Where exactly in the workflow did this data appear? Where exactly should it be fed back to? How exactly should the tool be modified to do this?
I browsed the code, but it was complex enough that it was not obvious where the code surgery should be done. So I loaded the project into an AI coding assistant and gave it the JIRA request. My intent was to get some ideas on how to proceed. The AI assistant understood the problem - it was able to describe it back to me in detail better than the engineer who requested the feature. It suggested that an additional API endpoint would solve the problem. I was unwilling to let it go to town on the codebase. Instead, I asked it to suggest the steps I should take to implement the feature. In particular, I asked it exactly how I should direct Copilot to carry out the changes one at a time. So I had a daisy chain of interactions: me to the high-level AI assistant, which returned to me the detailed instructions for each change. I vetted the instructions and then fed them along to Copilot to make the actual code changes. When it had finished, I also asked Copilot to generate unit tests for the new functionality.
The two AIs were given different system instructions. The high-level AI was instructed to look at the big picture and design a series of effective steps, while the low-level AI was instructed to ensure that each step was precise and correct. This approach of cascading the AI tools worked well. The high-level AI assistant was able to understand the problem and break it down into manageable steps. The low-level AI was able to understand each step individually and carry out the necessary code changes without the common problem of the goals of one step interfering with the goals of other steps. It is an approach that I will consider using in the future.
The second anecdote concerns a user interface that a colleague was designing. He had mocked up a wire-frame of the UI and sent me a screenshot as a .png file to get my feedback. Out of curiosity, I fed the screenshot to the AI coding tool and asked what it made of the .png file. The tool correctly identified the screenshot as a user interface wire-frame. It then went on to suggest a couple of improvements to the workflow that the UI was trying to implement. The suggestions were good ones, and I passed them along to my colleague. I had expected the AI to recognize that the image was a screenshot, and maybe even identify it as a UI wire-frame, but I had not expected it to analyze the workflow and make useful suggestions for improvement.
These anecdotes provide two situations where the AI tools provided successful results. They do not establish that such success is common or typical, but they do establish that such success is possible. They also establish that it is worthwhile to throw random crap at the AI to see what happens. I will be doing this more frequently in the future.
16 Nov 2025 9:32pm GMT
15 Nov 2025
FOSDEM 2026
FOSDEM 2026 Accepted Stands
With great pleasure we can announce that the following projects will have stands at FOSDEM 2026! ASF Community BSD + FreeBSD Project Checkmk CiviCRM Cloud Native Computing Foundation + Open Source Security Foundation Codeberg and Forgejo Computer networks with BIRD, KNOT and Turris Debian Delta Chat (Sunday) Digital Public Goods Dolibarr ERP CRM + Odoo Community Association (OCA) Dronecode Foundation Eclipse Foundation F-Droid and /e/OS Fedora Project Firefly Zero Foreman FOSS United + fundingjson (and FLOSS/fund) FOSSASIA Framework Computer Free Android World: From Hardware to Apps - An Open, Sustainable Ecosystem (BlissLabs, IzzyOnDroid & SHIFTphone) Free Software Foundation Europe…
15 Nov 2025 11:00pm GMT
13 Nov 2025
FOSDEM 2026
FOSDEM 2026 Main Track Deadline Reminder
Submit your proposal for the FOSDEM main track before it's too late! The deadline for main track submissions is earlier than usual (16th November, that's in a couple of days!), so don't be caught out. For full details on how to submit, see the original call for participation.
13 Nov 2025 11:00pm GMT
08 Nov 2025
FOSDEM 2026
FOSDEM Junior Call for Participation
Proposals for FOSDEM JUNIOR can now be submitted! FOSDEM Junior is a specific track to organise workshops and activities for children from age 7 to 17 during the FOSDEM weekend. These activities are for children to learn and get inspired about technology and open source. Last year's activities included microcontrollers, game development, embroidery, Python programming, mobile application development, music, and data visualization. If you are still unsure if your activity fits FOSDEM Junior, feel free to…
08 Nov 2025 11:00pm GMT