14 Sep 2025
Planet Debian
Ian Jackson: tag2upload in the first month of forky
tl;dr: tag2upload (beta) is going well so far, and is already handling around one in 13 uploads to Debian.
- Introduction and some stats
- Recent UI/UX improvements
- Why we are still in beta
- Other notable ongoing work
- Common problems
- Get involved
Introduction and some stats
We announced tag2upload's open beta in mid-July. That was in the middle of the freeze for trixie, so usage was fairly light until the forky floodgates opened.
Since then the service has successfully performed 637 uploads, of which 420 were in the last 32 days. That's an average of about 13 per day. For comparison, during the first half of September up to today there have been 2475 uploads to unstable. That's about 176/day.
So, tag2upload is already handling around 7.5% of uploads. This is very gratifying for a service which is advertised as still being in beta!
Sean and I are very pleased both with the uptake, and with the way the system has been performing.
Recent UI/UX improvements
During this open beta period we have been hard at work. We have made many improvements to the user experience.
The current git-debpush, in forky or trixie-backports, is much better at detecting various problems ahead of time.
When uploads do fail on the service, the emailed error reports are now more informative. For example, anomalies involving orig tarballs, which by definition can't be detected locally (since one point of tag2upload is not to have tarballs locally), now generally result in failure reports containing a diffstat, and instructions for a local repro.
Why we are still in beta
There are a few outstanding work items that we currently want to complete before we declare the end of the beta.
Retrying on Salsa-side failures
The biggest of these is that the service should be able to retry when Salsa fails. Sadly, Salsa isn't wholly reliable, and right now if it breaks when the service is trying to handle your tag, your upload can fail.
We think most of these failures could be avoided. Implementing retries is a fairly substantial task, but doesn't pose any fundamental difficulties. We're working on this right now.
Other notable ongoing work
We want to support pristine-tar, so that pristine-tar users can do a new upstream release. Andrea Pappacoda is working on that with us. See #1106071. (Note that we would generally recommend against use of pristine-tar within Debian. But we want to support it.)
We have been having conversations with Debusine folks about what integration between tag2upload and Debusine would look like. We're making some progress there, but a lot is still up in the air.
We are considering how best to provide tag2upload pre-checks as part of Salsa CI. There are several problems detected by the tag2upload service that could be detected by Salsa CI too, but which can't be detected by git-debpush.
Common problems
We've been monitoring the service and until very recently we have investigated every service-side failure, to understand the root causes. This has given us insight into the kinds of things our users want, and the kinds of packaging and git practices that are common. We've been able to improve the system's handling of various anomalies and also improved the documentation.
Right now our failure rate is still rather high, at around 7%. Partly this is because people are trying out the system on packages that haven't ever seen git tooling with such a level of rigour.
There are two classes of problem that are responsible for the vast majority of the failures that we're still seeing:
Reuse of version numbers, and attempts to re-tag
tag2upload, like git (and like dgit), hates it when you reuse a version number, or try to pretend that a (perhaps busted) release never happened.
git tags aren't namespaced, and tend to spread about promiscuously. So replacing a signed git tag, with a different tag of the same name, is a bad idea. More generally, reusing the same version number for a different (signed!) package is poor practice. Likewise, it's usually a bad idea to remove changelog entries for versions which were actually released, just because they were later deemed improper.
We understand that many Debian contributors have gotten used to this kind of thing. Indeed, tools like dcut encourage it. It does allow you to make things neat-looking, even if you've made mistakes - but really it does so by covering up those mistakes!
The bottom line is that tag2upload can't support such history-rewriting. If you discover a mistake after you've signed the tag, please just burn the version number and add a new changelog stanza.
One bonus of tag2upload's approach is that it will discover if you are accidentally overwriting an NMU, and report that as an error.
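To make that concrete, here is a minimal sketch of the recommended recovery (the version numbers and messages are illustrative; dch is from devscripts):
# Suppose the signed tag for 1.2-3 turned out to be busted. Leave its
# changelog stanza in place, burn the version, and prepare 1.2-4 instead:
dch -v 1.2-4 "Reupload; the 1.2-3 tag had the wrong contents."
git commit -m "Finalise 1.2-4" debian/changelog
git debpush   # sign and push a fresh tag for the new version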
Discrepancies between git and orig tarballs
tag2upload promises that the source package that it generates corresponds precisely to the git tree you tag and sign.
Orig tarballs make this complicated. They aren't present on your laptop when you git-debpush. When you're not uploading a new upstream version, the tag2upload service reuses existing orig tarballs from the archive. If your git and the archive's orig don't agree, the tag2upload service will report an error, rather than upload a package with contents that differ from your git tag.
With the most common Debian workflows, everything is fine:
If you base everything on upstream git, and make your orig tarballs with git archive (or git deborig), your orig tarballs are the same as the git, by construction. We recommend usually ignoring upstream tarballs: most upstreams work in git, and their tarballs can contain weirdness that we don't want. (At worst, the tarball can contain an attack that isn't visible in git, as with xz!)
Alternatively, if you use gbp import-orig, the differences (including an attack like Jia Tan's) are imported into git for you. Then, once again, your git and the orig tarball will correspond.
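Sketched as commands, the two workflows look roughly like this (package and version names are hypothetical):
# Workflow 1: generate the orig tarball from the upstream git tag, so
# git and tarball agree by construction (git deborig is in devscripts):
git deborig                # writes ../foo_1.2.orig.tar.xz from the upstream tag

# Workflow 2: import the upstream tarball into git, so any differences
# (including an xz-style attack) become visible in git:
gbp import-orig --uscan    # fetch via debian/watch and merge it in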
But there are other workflows where this correspondence may not hold. Those workflows are hazardous, because the thing you're probably working with locally for your routine development is the git view. Then, when you upload, your work is transplanted onto the orig tarball, which might be quite different - so what you upload isn't what you've been working on!
This situation is detected by tag2upload, precisely because tag2upload checks that it's keeping its promise: the source package is identical to the git view. (dgit push makes the same promise.)
Get involved
Of course the easiest way to get involved is to start using tag2upload.
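Getting started looks roughly like this (a sketch; the exact options depend on your branch layout and quilt mode):
apt install git-debpush    # on forky, or from trixie-backports
# then, in your package's git tree, with the changelog finalised:
git debpush                # create a signed tag and push it for upload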
We would love to have more contributors. There are some easy tasks to get started with, in bugs we've tagged "newcomer" - mostly UX improvements such as detecting certain problems earlier, in git-debpush.
More substantially, we are looking for help with sbuild: we'd like it to be able to work directly from git, rather than needing to build source packages: #868527.
14 Sep 2025 3:36pm GMT
Dirk Eddelbuettel: RcppSimdJson 0.1.14 on CRAN: New Upstream Major
A brand new release 0.1.14 of the RcppSimdJson package is now on CRAN.
RcppSimdJson wraps the fantastic and genuinely impressive simdjson library by Daniel Lemire and collaborators. Via very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it parses gigabytes of JSON per second, which is quite mindboggling. The best-case performance is 'faster than CPU speed' as use of parallel SIMD instructions and careful branch avoidance can lead to less than one cpu cycle per byte parsed; see the video of the talk by Daniel Lemire at QCon.
This version includes the new major upstream release 4.0.0, with major new features including a 'builder' for creating JSON from C++-side objects. This is somewhat orthogonal to the standard R usage of the package, which is to parse and load JSON data, but could still be of interest to some.
The short NEWS entry for this release follows.
Changes in version 0.1.14 (2025-09-13)
- simdjson was upgraded to version 4.0.0 (Dirk in #96)
- Continuous integration now relies on a token for codecov.io
Courtesy of my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.
14 Sep 2025 2:52pm GMT
Otto Kekäläinen: Zero-configuration TLS and password management best practices in MariaDB 11.8
Locking down database access is probably the single most important thing a system administrator or software developer can do to prevent their application from leaking its data. As MariaDB 11.8 is the first long-term supported version with a few new key security features, let's recap the most important things every DBA should know about MariaDB in 2025.
Back in the old days, MySQL administrators had a habit of running the clumsy mysql_secure_installation script, but it has long been obsolete. A modern MariaDB database server is already secure by default and locked down out of the box, and no such extra scripts are needed. On the contrary, the database administrator is expected to open up access to MariaDB according to the specific needs of each server. Therefore, it is important that the DBA can understand and correctly configure three things:
- Creating separate application-specific users with granular permissions, allowing only necessary access and no more
- Distributing and storing passwords and credentials securely
- Ensuring all remote connections are properly encrypted
For holistic security, one should also consider proper auditing, logging, backups, regular security updates and more, but in this post we will focus only on the above aspects related to securing database access.
How encrypting database connections with TLS differs from web server HTTP(S)
Even though MariaDB (and other databases) use the same SSL/TLS protocol for encrypting remote connections as web servers and HTTPS, the way it is implemented is significantly different, and the different security assumptions are important for a database administrator to grasp.
Firstly, most HTTP requests to a web server are unauthenticated, meaning the web server serves public web pages and does not require users to log in. Traditionally, when a user logged in over an HTTP connection, the username and password were transmitted in plaintext as an HTTP POST request. Modern TLS, which was previously called SSL, does not change how HTTP works but simply encapsulates it. When using HTTPS, a web browser and a web server will start an encrypted TLS connection as the very first thing, and only once it is established do they send HTTP requests and responses inside it. There are no passwords or other shared secrets needed to form the TLS connection. Instead, the web server relies on a trusted third party, a Certificate Authority (CA), to vet that the TLS certificate offered by the web server can be trusted by the web browser.
For a database server like MariaDB, the situation is quite different. All users need to first authenticate and log in to the server before being allowed to run any SQL and get any data out of it. The database server and client programs have built-in authentication methods, and passwords are not, and have never been, sent in plaintext. Over the years, MySQL and its successor, MariaDB, have had multiple password authentication methods: the original SHA-1-based hashing, later the double-SHA-1-based mysql_native_password, followed by sha256_password and caching_sha2_password in MySQL and ed25519 in MariaDB. The MariaDB.org blog post by Sergei Golubchik recaps the history of these well.
Even though most modern MariaDB installations should be using TLS to encrypt all remote connections in 2025, having the authentication method be as secure as possible still matters, because authentication is done before the TLS connection is fully established.
To further harden authentication against man-in-the-middle attacks, a new password authentication method, PARSEC, was introduced in MariaDB 11.8. It builds upon the previous ed25519 public-key-based verification (similar to what modern SSH does), and combines key derivation using PBKDF2 with hash functions (SHA-512, SHA-256) and a high iteration count.
At first it may seem like a disadvantage not to wrap all connections in a TLS tunnel the way HTTPS does, but having the authentication done in a MitM-resistant way regardless of the connection encryption status actually allows a clever extra capability that is now available in MariaDB: as the database server and client already have a shared secret that the server uses to authenticate the user, the client can also use it to validate the server's TLS certificate, so no third parties like CAs or root certificates are needed. MariaDB 11.8 was the first LTS version to ship with this capability for zero-configuration TLS.
Note that the zero-configuration TLS also works with older password authentication methods and does not require users to have PARSEC enabled. As PARSEC is not yet the default authentication method in MariaDB, it is recommended to enable it in installations that use zero-configuration TLS encryption, to maximize the security of the TLS certificate validation.
Why the 'root' user in MariaDB has no password and how it makes the database more secure
Relying on passwords for security is problematic, as there is always a risk that they could leak, and a malicious user could access the system using the leaked password. It is unfortunately far too common for database passwords to be stored in plaintext in configuration files that are accidentally committed into version control and published on GitHub and similar platforms. Every application or administrative password that exists should be tracked to ensure only people who need it know it, and rotated at regular intervals to ensure former employees and the like can't use old passwords. This password management is complex and error-prone.
Replacing passwords with other authentication methods is always advisable when possible. On a database server, whoever installed the database by running e.g. apt install mariadb-server, and configured it with e.g. nano /etc/mysql/mariadb.cnf, already has full root access to the operating system, and asking them for a password to access the MariaDB database shell is moot, since they could circumvent any checks by directly accessing the files on the system anyway. Therefore, since version 10.4, MariaDB has stopped requiring the root user to enter a password when connecting locally, and instead checks via socket authentication whether the user is the operating-system root user or equivalent (e.g. running sudo). This is an elegant way to get rid of a password that was actually unnecessary to begin with. As there is no root password anymore, the risk of an external user accessing the database as root with a leaked password is fully eliminated.
Note that socket authentication only works for local connections on the same server. If you want to access a MariaDB server remotely as the root user, you would need to configure a password for it first. This is not generally recommended, as explained in the next section.
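You can see socket authentication in action with a quick check (a sketch, assuming a default local install):
sudo mariadb -e "SELECT CURRENT_USER();"
# Prints root@localhost without any password prompt, because the
# unix_socket plugin verified the operating-system user instead.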
Create separate database users for normal use and keep 'root' for administrative use only
Out of the box a MariaDB installation is already secure by default, and only the local root user can connect to it. This account is intended for administrative use only, and for regular daily use you should create separate database users with access limited to the databases they need and the permissions required.
The most typical commands needed to create a new database for an app and a user the app can use to connect to the database would be the following:
CREATE DATABASE app_db;
CREATE USER 'app_user'@'%' IDENTIFIED BY 'your_secure_password';
GRANT ALL PRIVILEGES ON app_db.* TO 'app_user'@'%';
FLUSH PRIVILEGES;
Alternatively, if you want to use the parsec authentication method, run this to create the user:
CREATE OR REPLACE USER 'app_user'@'%'
IDENTIFIED VIA parsec
USING PASSWORD('your_secure_password');
Note that the plugin auth_parsec is not enabled by default. If you see the error message ERROR 1524 (HY000): Plugin 'parsec' is not loaded, fix this by running INSTALL SONAME 'auth_parsec';
In the CREATE USER statements, the @'%' means that the user is allowed to connect from any host. This needs to be defined, as MariaDB always checks permissions based on both the username and the remote IP address or hostname of the user, combined with the authentication method. Note that it is possible to have multiple user@remote combinations, and they can have different authentication methods. A user could, for example, be allowed to log in locally using socket authentication and over the network using a password.
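Such a combination might look like this (the account name and network range are hypothetical):
-- Local logins use socket authentication, remote logins use a password:
CREATE USER 'admin'@'localhost' IDENTIFIED VIA unix_socket;
CREATE USER 'admin'@'10.0.%' IDENTIFIED BY 'another_secure_password';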
If you are running a custom application and you know exactly what permissions are sufficient for the database users, replace the ALL PRIVILEGES with a subset of privileges listed in the MariaDB documentation.
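For a typical web application, a CRUD-only grant might look like this (the privilege subset is illustrative; grant only what your app actually needs):
GRANT SELECT, INSERT, UPDATE, DELETE ON app_db.* TO 'app_user'@'%';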
For new permissions to take effect, restart the database or run FLUSH PRIVILEGES.
Allow MariaDB to accept remote connections and enforce TLS
Using the above 'app_user'@'%' is not enough on its own to allow remote connections to MariaDB. The MariaDB server also needs to be configured to listen on a network interface to accept remote connections. As MariaDB is secure by default, it only accepts connections from localhost until the administrator updates its configuration. On a typical Debian/Ubuntu system, the recommended way is to drop a new custom config in e.g. /etc/mysql/mariadb.conf.d/99-server-customizations.cnf, with the contents:
[mariadbd]
# Listen for connections from anywhere
bind-address = 0.0.0.0
# Only allow TLS encrypted connections
require-secure-transport = on
For the settings to take effect, restart the server with systemctl restart mariadb. After this, the server will accept connections on any network interface. If the system is using a firewall, port 3306 would additionally need to be allow-listed.
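With ufw, for instance, the rule would be something like the following (a sketch; adapt it to whatever firewall the host uses):
ufw allow 3306/tcp   # permit MariaDB's default port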
To confirm that the settings took effect, run e.g. mariadb -e "SHOW VARIABLES LIKE 'bind_address';", which should now show 0.0.0.0.
When allowing remote connections, it is important to also always define require-secure-transport = on to enforce that only TLS-encrypted connections are allowed. If the server is running MariaDB 11.8 and the clients are also MariaDB 11.8 or newer, no additional configuration is needed thanks to MariaDB automatically providing TLS certificates and appropriate certificate validation in recent versions.
On older long-term-supported versions of the MariaDB server, one would have had to manually create the certificates, configure the ssl_key, ssl_cert and ssl_ca values on the server, and distribute the certificate to the clients as well, which was cumbersome, so it is good that this is no longer required. In MariaDB 11.8 the only additional related config that might still be worth setting is tls_version = TLSv1.3, to ensure only the latest TLS protocol version is used.
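In the same customization file as above, that would look like the following (note that a TLSv1.3-only server rejects older clients):
[mariadbd]
# Optional hardening: accept only TLS 1.3 connections
tls_version = TLSv1.3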
Finally, test connections to ensure they work and to confirm that TLS is used by running e.g.:
mariadb --user=app_user --password=your_secure_password \
--host=192.168.1.66 -e '\s'
The result should show something along the lines of:
--------------
mariadb from 11.8.3-MariaDB, client 15.2 for debian-linux-gnu (x86_64)
...
Current user: app_user@192.168.1.66
SSL: Cipher in use is TLS_AES_256_GCM_SHA384, cert is OK
...
If running a Debian/Ubuntu system, see the bundled README with zcat /usr/share/doc/mariadb-server/README.Debian.gz to read more configuration tips.
Should TLS encryption be used also on internal networks?
If a database server and app are running on the same private network, the chances that the connection gets eavesdropped on or man-in-the-middle attacked by a malicious user are low. However, it is not zero, and if it happens, it can be difficult to detect or prove that it didn't happen. The benefit of using end-to-end encryption is that both the database server and the client can validate the certificates and keys used, log it, and later have logs audited to prove that connections were indeed encrypted and show how they were encrypted.
If all the computers on an internal network already have centralized user account management and centralized log collection that includes all database sessions, reusing existing SSH connections, SOCKS proxies, dedicated HTTPS tunnels, point-to-point VPNs, or similar solutions might also be a practical option. Note that the zero-configuration TLS only works with password validation methods. This means that systems configured to use PAM or Kerberos/GSSAPI can't use it, but again those systems are typically part of a centrally configured network anyway and are likely to have certificate authorities and key distribution or network encryption facilities already set up.
In a typical software app stack, however, the simplest solution is often the best, and I recommend DBAs use the end-to-end TLS encryption in MariaDB 11.8 in most cases.
Hopefully with these tips you can enjoy having your MariaDB deployments both simpler and more secure than before!
14 Sep 2025 12:00am GMT
21 Aug 2025
Planet Lisp
TurtleWare: Using Common Lisp from inside the Browser
Table of Contents
- Scripting a website with Common Lisp
- JS-FFI - low level interface
- LIME/SLUG - interacting from Emacs
- Injecting CL runtime in arbitrary websites
- Current Caveats
- Funding
Web Embeddable Common Lisp is a project that brings Common Lisp and the Web Browser environments together. In this post I'll outline the current progress of the project and provide some technical details, including current caveats and future plans.
It is important to note that this is not a release, and none of the described APIs and functionality is considered stable. Things are still changing and I'm not accepting bug reports for the time being.
The source code of the project is available: https://fossil.turtleware.eu/wecl/.
Scripting a website with Common Lisp
The easiest way to use Common Lisp on a website is to include WECL and insert script tags with the type "text/common-lisp". When the src attribute is present, the runtime first loads the script from that URL, and then it executes the node body. For example, create and run this HTML document from localhost:
<!doctype html>
<html>
<head>
<title>Web Embeddable Common Lisp</title>
<link rel="stylesheet" href="https://turtleware.eu/static/misc/wecl-20250821/easy.css" />
<script type="text/javascript" src="https://turtleware.eu/static/misc/wecl-20250821/boot.js"></script>
<script type="text/javascript" src="https://turtleware.eu/static/misc/wecl-20250821/wecl.js"></script>
</head>
<body>
<script type="text/common-lisp" src="https://turtleware.eu/static/misc/wecl-20250821/easy.lisp" id='easy-script'>
(defvar *div* (make-element "div" :id "my-ticker"))
(append-child [body] *div*)
(dotimes (v 4)
(push-counter v))
(loop for tic from 6 above 0
do (replace-children *div* (make-paragraph "~a" tic))
(js-sleep 1000)
finally (replace-children *div* (make-paragraph "BOOM!")))
(show-script-text "easy-script")
</script>
</body>
</html>
From Common Lisp we may call into JavaScript, and register callbacks to be invoked on specified events. The source code of the script can be found here:
- https://turtleware.eu/static/misc/wecl-20250821/easy.html
- https://turtleware.eu/static/misc/wecl-20250821/easy.lisp
Because the runtime is included as a script, the browser will usually cache the ~10MB WebAssembly module.
JS-FFI - low level interface
The initial foreign function interface has numerous macros defining wrappers that may be used from Common Lisp or passed to JavaScript.
Summary of currently available operators:
- define-js-variable: an inlined expression, like document
- define-js-object: an object referenced from the object store
- define-js-function: a function
- define-js-method: a method of the argument, like document.foobar()
- define-js-getter: a slot reader of the argument
- define-js-setter: a slot writer of the first argument
- define-js-accessor: combines define-js-getter and define-js-setter
- define-js-script: template for JavaScript expressions
- define-js-callback: Common Lisp function reference callable from JavaScript
- lambda-js-callback: anonymous Common Lisp function reference (for closures)
Summary of argument types:
| type name | lisp side | js side |
|---|---|---|
| :object | Common Lisp object | Common Lisp object reference |
| :js-ref | JavaScript object reference | JavaScript object |
| :fixnum | fixnum (coercible) | fixnum (coercible) |
| :symbol | symbol | symbol (name inlined) |
| :string | string (coercible) | string (coercible) |
| :null | nil | null |
All operators, except for LAMBDA-JS-CALLBACK, have a similar lambda list:
(DEFINE-JS NAME-AND-OPTIONS [ARGUMENTS [,@BODY]])
The first argument is a list (name &key js-expr type) that is common to all defining operators:
- name: Common Lisp symbol denoting the object
- js-expr: a string denoting the JavaScript expression, e.g. "innerText"
- type: a type of the object returned by executing the expression
For example:
(define-js-variable ([document] :js-expr "document" :type :symbol))
;; document
(define-js-object ([body] :js-expr "document.body" :type :js-ref))
;; wecl_ensure_object(document.body) /* -> id */
;; wecl_search_object(id) /* -> node */
The difference between a variable and an object in JS-FFI is that variable expression is executed each time when the object is used (the expression is inlined), while the object expression is executed only once and the result is stored in the object store.
The second argument is a list of pairs (name type). Names will be used in the lambda list of the operator callable from Common Lisp, while types will be used to coerce arguments to the type expected by JavaScript.
(define-js-function (parse-float :js-expr "parseFloat" :type :js-ref)
((value :string)))
;; parseFloat(value)
(define-js-method (add-event-listener :js-expr "addEventListener" :type :null)
((self :js-ref)
(name :string)
(fun :js-ref)))
;; self.addEventListener(name, fun)
(define-js-getter (get-inner-text :js-expr "innerText" :type :string)
((self :js-ref)))
;; self.innerText
(define-js-setter (set-inner-text :js-expr "innerText" :type :string)
((self :js-ref)
(new :string)))
;; self.innerText = new
(define-js-accessor (inner-text :js-expr "innerText" :type :string)
((self :js-ref)
(new :string)))
;; self.innerText
;; self.innerText = new
(define-js-script (document :js-expr "~a.forEach(~a)" :type :js-ref)
((nodes :js-ref)
(callb :object)))
;; nodes.forEach(callb)
The third argument is specific to callbacks, where we define Common Lisp body of the callback. Argument types are used to coerce values from JavaScript to Common Lisp.
(define-js-callback (print-node :type :object)
((elt :js-ref)
(nth :fixnum)
(seq :js-ref))
(format t "Node ~2d: ~a~%" nth elt))
(let ((start 0))
(add-event-listener *my-elt* "click"
(lambda-js-callback :null ((event :js-ref)) ;closure!
(incf start)
(setf (inner-text *my-elt*)
(format nil "Hello World! ~a" start)))))
Note that callbacks are a bit different, because define-js-callback does not accept the js-expr option and lambda-js-callback has a unique lambda list. It is important for callbacks to have exactly the arity they are called with, because JS-FFI does not implement variable numbers of arguments yet.
Callbacks can be referred to by name with the operator (js-callback name).
LIME/SLUG - interacting from Emacs
While working on the FFI I've decided to write an adapter for SLIME/SWANK that allows interacting with WECL from Emacs. The principle is simple: we connect with a websocket to Emacs, which is listening on the specified port (e.g. on localhost). This adapter uses the emacs-websocket library written by Andrew Hyatt.
It allows for compiling individual forms with C-c C-c, but file compilation does not work (because files reside on a different "host"). REPL interaction works as expected, as does SLDB. The connection may occasionally be unstable, and until a Common Lisp call returns, the whole page is blocked. Notably, waiting for new requests is not a blocking operation from the JavaScript perspective, because it is an asynchronous operation.
You may find my changes to SLIME here: https://github.com/dkochmanski/slime/, and it is proposed upstream here: https://github.com/slime/slime/pull/879. Before these changes are merged, we'll patch SLIME:
;;; Patches for SLIME 2.31 (to be removed after the patch is merged).
;;; It is assumed that SLIME is already loaded into Emacs.
(defun slime-net-send (sexp proc)
"Send a SEXP to Lisp over the socket PROC.
This is the lowest level of communication. The sexp will be READ and
EVAL'd by Lisp."
(let* ((payload (encode-coding-string
(concat (slime-prin1-to-string sexp) "\n")
'utf-8-unix))
(string (concat (slime-net-encode-length (length payload))
payload))
(websocket (process-get proc :websocket)))
(slime-log-event sexp)
(if websocket
(websocket-send-text websocket string)
(process-send-string proc string))))
(defun slime-use-sigint-for-interrupt (&optional connection)
(let ((c (or connection (slime-connection))))
(cl-ecase (slime-communication-style c)
((:fd-handler nil) t)
((:spawn :sigio :async) nil))))
Now we can load the LIME adapter, which opens a websocket server. The source code may be downloaded from https://turtleware.eu/static/misc/wecl-20250821/lime.el:
;;; lime.el --- Lisp Interaction Mode for Emacs -*-lexical-binding:t-*-
;;;
;;; This program extends SLIME with an ability to listen for lisp connections.
;;; The flow is reversed - normally SLIME is a client and SWANK is a server.
(require 'websocket)
(defvar *lime-server* nil
"The LIME server.")
(cl-defun lime-zipit (obj &optional (start 0) (end 72))
(let* ((msg (if (stringp obj)
obj
(slime-prin1-to-string obj)))
(len (length msg)))
(substring msg (min start len) (min end len))))
(cl-defun lime-message (&rest args)
(with-current-buffer (process-buffer *lime-server*)
(goto-char (point-max))
(dolist (arg args)
(insert (lime-zipit arg)))
(insert "\n")
(goto-char (point-max))))
(cl-defun lime-client-process (client)
(websocket-conn client))
(cl-defun lime-process-client (process)
(process-get process :websocket))
;;; c.f slime-net-connect
(cl-defun lime-add-client (client)
(lime-message "LIME connecting a new client")
(let* ((process (websocket-conn client))
(buffer (generate-new-buffer "*lime-connection*")))
(set-process-buffer process buffer)
(push process slime-net-processes)
(slime-setup-connection process)
client))
;;; When SLIME kills the process, then it invokes LIME-DISCONNECT hook.
;;; When SWANK kills the process, then it invokes LIME-DEL-CLIENT hook.
(cl-defun lime-del-client (client)
(when-let ((process (lime-client-process client)))
(lime-message "LIME client disconnected")
(slime-net-sentinel process "closed by peer")))
(cl-defun lime-disconnect (process)
(when-let ((client (lime-process-client process)))
(lime-message "LIME disconnecting client")
(websocket-close client)))
(cl-defun lime-on-error (client fun error)
(ignore client fun)
(lime-message "LIME error: " (slime-prin1-to-string error)))
;;; Client sends the result over a websocket. Handling responses is implemented
;;; by SLIME-NET-FILTER. As we can see, the flow is reversed in our case.
(cl-defun lime-handle-message (client frame)
(let ((process (lime-client-process client))
(data (websocket-frame-text frame)))
(lime-message "LIME-RECV: " data)
(slime-net-filter process data)))
(cl-defun lime-net-listen (host port &rest parameters)
(when *lime-server*
(error "LIME server has already started"))
(setq *lime-server*
(apply 'websocket-server port
:host host
:on-open (function lime-add-client)
:on-close (function lime-del-client)
:on-error (function lime-on-error)
:on-message (function lime-handle-message)
parameters))
(unless (memq 'lime-disconnect slime-net-process-close-hooks)
(push 'lime-disconnect slime-net-process-close-hooks))
(let ((buf (get-buffer-create "*lime-server*")))
(set-process-buffer *lime-server* buf)
(lime-message "Welcome " *lime-server* "!")
t))
(cl-defun lime-stop ()
(when *lime-server*
(websocket-server-close *lime-server*)
(setq *lime-server* nil)))
After loading this file into Emacs, invoke (lime-net-listen "localhost" 8889). Now our Emacs listens for new connections from SLUG (the lisp-side part adapting SWANK, already bundled with WECL). There are two SLUG backends in the repository:
- WANK: for web browser environment
- FRIG: for the Common Lisp runtime (uses websocket-driver-client)
Now you can open the page listed below and connect to SLIME:
<!doctype html>
<html>
<head>
<title>Web Embeddable Common Lisp</title>
<link rel="stylesheet" href="easy.css" />
<script type="text/javascript" src="https://turtleware.eu/static/misc/wecl-20250821/boot.js"></script>
<script type="text/javascript" src="https://turtleware.eu/static/misc/wecl-20250821/wecl.js"></script>
<script type="text/common-lisp" src="https://turtleware.eu/static/misc/wecl-20250821/slug.lisp"></script>
<script type="text/common-lisp" src="https://turtleware.eu/static/misc/wecl-20250821/wank.lisp"></script>
<script type="text/common-lisp" src="https://turtleware.eu/static/misc/wecl-20250821/easy.lisp">
(defvar *connect-button* (make-element "button" :text "Connect"))
(define-js-callback (connect-to-slug :type :null) ((event :js-ref))
(wank-connect "localhost" 8889)
(setf (inner-text *connect-button*) "Crash!"))
(add-event-listener *connect-button* "click" (js-callback connect-to-slug))
(append-child [body] *connect-button*)
</script>
</head>
<body>
</body>
</html>
This example shows an important limitation - Emscripten does not allow for multiple asynchronous contexts in the same thread. That means that if a Lisp call doesn't return (e.g. because it waits for input in a loop), then we can't execute other Common Lisp statements from elsewhere, because the application will crash.
Injecting CL runtime in arbitrary websites
Here's another example. It is more of a cool gimmick than anything else, but let's try it. Open a console on this very website (on Firefox, C-S-i) and execute:
function inject_js(url) {
var head = document.getElementsByTagName('head')[0];
var script = document.createElement('script');
head.appendChild(script);
script.type = 'text/javascript';
return new Promise((resolve) => {
script.onload = resolve;
script.src = url;
});
}
function inject_cl() {
wecl_eval('(wecl/impl::js-load-slug "https://turtleware.eu/static/misc/wecl-20250821")');
}
inject_js('https://turtleware.eu/static/misc/wecl-20250821/boot.js')
.then(() => {
wecl_init_hooks.push(inject_cl);
inject_js('https://turtleware.eu/static/misc/wecl-20250821/wecl.js');
});
With this, assuming that you've kept your LIME server open, you'll have a REPL onto an uncooperative website. Now we can fool around with queries and changes:
(define-js-accessor (title :js-expr "title" :type :string)
((self :js-ref)
(title :string)))
(define-js-accessor (background :js-expr "body.style.backgroundColor" :type :string)
((self :js-ref)
(background :string)))
(setf (title [document]) "Write in Lisp!")
(setf (background [document]) "#aaffaa")
Current Caveats
The first thing to address is the lack of threading primitives. Native threads can be implemented with web workers, but then our GC wouldn't know how to stop the world to clean up. Another option is to use cooperative threads, but that also won't work, because Emscripten doesn't support independent asynchronous contexts, nor is ECL ready for that yet.
I plan to address both issues simultaneously in the second stage of the project, when I port the runtime to WASI. We'll be able to use the browser's GC, so running in multiple web workers should not be a problem anymore. Unwinding and rewinding the stack will require tinkering with ASYNCIFY; I have a somewhat working green-threads implementation in place, so I will finish it and upstream it in ECL.
Currently I'm focusing mostly on having things working, so JS and CL interop is brittle and often relies on evaluating expressions, trampolining and coercing. That impacts performance in a significant way. Moreover, all loaded scripts are compiled with a one-pass compiler, so the resulting bytecode is not optimized.
There is no support for loading cross-compiled files onto the runtime, not to mention that it is not possible to precompile systems with ASDF definitions.
JS-FFI requires more work to allow for defining functions with variable numbers of arguments and with optional arguments. There is no dynamic coercion of JavaScript exceptions to Common Lisp conditions yet, but it is planned.
Funding
This project is funded through NGI0 Commons Fund, a fund established by NLnet with financial support from the European Commission's Next Generation Internet program. Learn more at the NLnet project page.
21 Aug 2025 12:00am GMT
19 Aug 2025
Planet Lisp
Scott L. Burson: FSet 1.5.0 gets custom orderings!
The ordering of the "setlike" collections - sets, maps, and bags - in FSet has always been determined by the generic function fset:compare. This approach is often very convenient, as it allows you to define the ordering of a new type simply by adding a method on compare; there is no need to supply the ordering explicitly every time you create a new collection.
However, as people have complained from time to time, it is also a bit limiting. Say you want to make something like a telephone directory (anyone remember telephone directories?) which maps string keys to values, and you would like it maintained in lexicographic order of the keys. To do this with FSet, you have heretofore had to define a wrapper class, and then a compare method on that class, something like:
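;; A sketch of such a wrapper (class and accessor names illustrative):
(defclass lexi-string ()
  ((value :initarg :value :reader lexi-string-value)))

(defmethod fset:compare ((a lexi-string) (b lexi-string))
  (let ((av (lexi-string-value a))
        (bv (lexi-string-value b)))
    (cond ((string< av bv) :less)
          ((string> av bv) :greater)
          (t :equal))))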
Then you would have to wrap your keys in lexi-strings before adding them to your map. That seems a little wasteful of both time and space.
A second problem with always using fset:compare is that you have to pay the cost of the generic function dispatch several times every time the collection gets searched for an element, as in contains? on a set or lookup on a map. (The number of such calls is roughly the base-2 logarithm of the size of the collection.) One micro-benchmark I ran showed this cost to be around 20% of the access time, which is not insignificant.
So, in response to popular demand, I have added custom orderings to FSet: you can supply your own comparison functions when creating collections, and FSet will call those instead of compare. Use of this feature is completely optional; existing code is not affected. But if you want to do it, now you can!
I refer you to the PR description for the details.
There is one aspect of this change that might surprise you. When given objects of different classes, fset:compare doesn't compare the contents of the objects; it just compares their class names and returns :less or :greater accordingly. So, for instance, a list cannot be equal? to a vector or seq, even if they have the same elements in the same order. This rule now also covers cases where the objects are collections of the same kind (sets, bags, or maps) but with different orderings. So just as a wb-set and a ch-set can never be :equal, so two wb-sets with different orderings can never be :equal; compare will just look at the comparison function names to impose an artificial ordering.
I'm not suggesting this is an ideal situation, but I don't see a way around it. Since comparing two wb-sets of the same ordering relies on that ordering, a combined relation on wb-sets of different orderings would in general fail to be transitive; you would get situations where a < b and b < c, but c < a.
19 Aug 2025 8:59am GMT
16 Aug 2025
Planet Lisp
Joe Marshall: Dinosaurs
What did the dinosaurs think in their twilight years as their numbers dwindled and small scurrying mammals began to challenge their dominance? Did they reminisce of the glory days when Tyrannosaurus Rex ruled the land and Pteranodon soared through the air? Probably not. They were, after all, just dumb animals.
Our company has decided to buy into Cursor as an AI coding tool. Cursor is one of many AI coding tools that have recently been brought to market, and it is a fine tool. It is based on a fork of VSCode and has AI coding capabilities built into it. One of the more useful ones (and one that is available in many other AI tools) is AI code completion. This anticipates what you are going to type and tries to complete it for you. It gets it right maybe 10-20% of the time if you are lucky, and is not far wrong maybe 80% of the time. You can get into a flow where you reflexively keep or discard its suggestions or accept the near misses and then correct them. This turns out to be faster than typing everything yourself, once you get used to it. It isn't for everyone, but it works for me.
Our company has been using GitHub Copilot for several months now. There is an Emacs package that allows you to use the Copilot code completion in Emacs, and I have been using it for these past few months. In addition to code completion, it will complete sentences and paragraphs in text mode and html mode. I generally reject its suggestions because it doesn't phrase things the way I prefer, but I really like seeing the suggestions as I type. It offers an alternative train of thought that I can mull over. If the suggestions wildly diverge from what I am thinking, it is usually because I didn't lay the groundwork for my train of thought, so I can go back and rework my text to make it clearer. It seems to make my prose more focused.
But now comes Cursor, and it has one big problem. It is a closed proprietary tool with no API or SDK. It won't talk to Emacs. So do I abandon Emacs and jump on the Cursor bandwagon, or do I stick with Emacs and miss out on the latest AI coding tools? Is there really a question? I've been using Emacs since before my manager was born, and I am not about to give it up now. My company will continue with a few GitHub Copilot licenses for those that have a compelling reason to not switch to Cursor, and I think Emacs compatibility is pretty compelling.
But no one uses Emacs and Lisp anymore but us dinosaurs. They all have shiny new toys like Cursor and Golang. I live for the schadenfreude of watching the gen Z kids rediscover and attempt to solve the same problems that were solved fifty years ago. The same bugs, but the tools are now clumsier.
16 Aug 2025 3:04pm GMT
31 Jan 2025
FOSDEM 2025
FOSDEM Treasure Hunt Update – Signs Stolen, But the Hunt Continues!
Treasure hunters, we have an update! Unfortunately, some of our signs have been removed or stolen, but don't worry - the hunt is still on! To ensure everyone can continue, we will be posting all signs online so you can still access the riddles and keep progressing. However, there is one exception: the 4th riddle must still be heard in person at Building H, as it includes an important radio message. Keep your eyes on our updates, stay determined, and don't let a few missing signs stop you from cracking the code! Good luck, and see you at Infodesk K with…
31 Jan 2025 11:00pm GMT
29 Jan 2025
FOSDEM 2025
Join the FOSDEM Treasure Hunt!
Are you ready for a challenge? We're hosting a treasure hunt at FOSDEM, where participants must solve six sequential riddles to uncover the final answer. Teamwork is allowed and encouraged, so gather your friends and put your problem-solving skills to the test! The six riddles are set up across different locations on campus. Your task is to find the correct locations, solve the riddles, and progress to the next step. No additional instructions will be given after this announcement - it's up to you to navigate and decipher the clues! To keep things fair, no hints or tips will be given…
29 Jan 2025 11:00pm GMT
26 Jan 2025
FOSDEM 2025
Introducing Lightning Lightning Talks
The regular FOSDEM lightning talk track isn't chaotic enough, so this year we're introducing Lightning Lightning Talks (now with added lightning!). Update: we've had a lot of proposals, so submissions are now closed! Thought of a last minute topic you want to share? Got your interesting talk rejected? Has something exciting happened in the last few weeks you want to talk about? Get that talk submitted to Lightning Lightning Talks! This is an experimental session taking place on Sunday afternoon (13:00 in k1105), containing non-stop lightning fast 5 minute talks. Submitted talks will be automatically presented by our Lightning…
26 Jan 2025 11:00pm GMT