30 Jun 2022

Planet Lisp

Tycho Garen : Common Gotchas

This is a post I wrote a long time ago and never posted, but I've started getting back into doing some work in Common Lisp and thought it'd be good to send this one off.

On my recent "(re)learn Common Lisp" journey, I've happened across a few things that I've found frustrating or confusing. This post is a collection of them, in the hope that other people won't struggle with them:

  • When you implement an existing generic function for a class of your own and want other callers to use your method implementation, you must import the generic function; otherwise other callers will (might?) fall back to another method. This makes sense in retrospect, but definitely wasn't clear on the first go.
  • As a related follow-on, you don't have to define a generic function in order to write or use a method, and I've found that using methods is actually quite nice for doing some type checking. At the same time, it can get you into a pickle if you later add the generic function and it's not exported/imported as you want.
  • Property lists seem cool for a small, lightweight mapping, but they're annoying to handle as part of public APIs, mostly because they're indistinguishable from regular lists. Association lists are preferable, and maybe, with make-hash, even hash-tables.
  • Declaring data structures inline is particularly gawky. I sometimes want to build a list, a hash map or an alist inline, and it's difficult to do that in a terse way that doesn't involve building the structure programmatically. I've been writing (list (cons "a" t) (cons "b" nil)) sort of things, which I don't love.
  • If you have a variadic macro (i.e. one that takes &rest args), or I suppose any kind of macro, and you have its arguments in a list, there's no way, outside of eval, to call the macro. This is super annoying, and it makes macros significantly less appealing as part of public APIs. My current conclusion is that macros are great when you want to add syntax to make the code you're writing clearer or to introduce a new paradigm, but for things that could also be a function, or are thin wrappers around a function, just use a function.
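To illustrate the last two points, here is a small sketch (with made-up names) of why alist literals are verbose and why variadic macros don't compose the way functions do:

```lisp
;; An alist is distinguishable from a plain list of values,
;; unlike a plist -- but building one inline is wordy:
(defparameter *options* (list (cons "a" t) (cons "b" nil)))

;; A variadic FUNCTION composes with APPLY when the arguments
;; are already in a list:
(defun combine (&rest args)
  (length args))
(apply #'combine '(1 2 3))

;; A variadic MACRO does not: there is no APPLY for macros,
;; short of EVAL -- hence the advice to prefer functions in
;; public APIs.
```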

30 Jun 2022 12:00am GMT

24 Jun 2022


vindarel: State of Common Lisp Web Development - an overview

Caution [from 2017: what a road since then]: this is a draft. I take notes and write more in other resources (the Cookbook, my blog).

update, June 2022: see my web project skeleton, it illustrates and fixes common issues: https://github.com/vindarel/cl-cookieweb

update, July 5th 2019: I put this content into the Cookbook: https://lispcookbook.github.io/cl-cookbook/web.html, fixing a long-standing request.

new post: why and how to live-reload one's running web application: https://lisp-journey.gitlab.io/blog/i-realized-that-to-live-reload-my-web-app-is-easy-and-convenient/

new project skeleton: lisp-web-template-productlist: Hunchentoot + easy-routes + Djula templates + Bulma CSS + a Makefile to build the project

See also the Awesome CL list.

Information is at the moment scarce and spread apart; Lisp web frameworks and libraries evolve and take different approaches.

I'd like to know what's possible, what's lacking, see how to quickstart everything, see code snippets and, most of all, see how to do things that I couldn't do before such as hot reloading, building self-contained executables, shipping a multiplatform web app.

Prior notice:

Some people sell ten-page ebooks or publish their tutorials on Gitbook to have a purchase option. I prefer to enhance the collaborative Cookbook (I am by far the main contributor). You can tip me on Ko-fi if you like: https://ko-fi.com/vindarel Thanks!

Table of Contents

Web application environments

Clack, Lack

Clack is to Lisp what WSGI is to Python. However, it is mostly undocumented and not as battle-proven as Hunchentoot.
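A minimal Clack application is just a function of one "env" argument returning a (status headers body) list; a sketch:

```lisp
(ql:quickload "clack")

;; clackup starts a server (Hunchentoot by default) and
;; returns a handler object we can stop later.
(defvar *handler*
  (clack:clackup
   (lambda (env)
     (declare (ignore env))
     '(200 (:content-type "text/plain") ("Hello, Clack!")))
   :port 5000))

;; (clack:stop *handler*) shuts it down.
```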

Web frameworks


Hunchentoot

The de-facto web server, with the best documentation (cough, looking old, cough) and the most websites in production. Lower level than a web framework (defining routes seems weird at first). Worth knowing, I think.

Its terminology is different from what we are used to ("routes" are not called routes; we create handlers), partly for reasons I don't know and partly because Lisp's image-based development allows for more, and thus needs more terminology. For example, we can run two applications on different URLs in the same image.
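For the record, a Hunchentoot "route" via its easy-handler mechanism looks like this (a sketch with made-up names):

```lisp
(ql:quickload "hunchentoot")

;; start an acceptor listening on a port
(defvar *server* (make-instance 'hunchentoot:easy-acceptor :port 4242))
(hunchentoot:start *server*)

;; define-easy-handler registers a handler for a URI;
;; GET/POST parameters become the lambda-list variables.
(hunchentoot:define-easy-handler (say-hello :uri "/hello") (name)
  (setf (hunchentoot:content-type*) "text/plain")
  (format nil "Hello, ~a!" (or name "world")))
```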


edit: here's a modern looking page: https://digikar99.github.io/common-lisp.readthedocs/hunchentoot/


Caveman

A popular web framework, or so it seems by the GitHub stars, written by a super-productive lisper, with nice documentation for basic stuff but lacking for the rest. Based on Clack (a webserver interface, think Python's WSGI), it uses Hunchentoot by default.

I feel like basic functions are too cumbersome (accessing url parameters).



Snooze

By the maintainer of Sly, Emacs' Yasnippet, and more.

Defining routes is like defining functions. It has built-in features that are only available as extensions in Clack-based frameworks (a setting to get a stacktrace in the browser, to fire up the debugger, or to return a 404, ...). Definitely worth exploring.



Radiance, with extensive tutorial and existing apps.

It doesn't look like a web framework to me. It has ready-to-use components:



a library for writing REST Web APIs in Common Lisp.

Features: validation via schemas, Swagger support, authentication, logging, caching, permission checking...

It seems complete, it is maintained, the author seems to be doing web development in CL for a living. Note to self: I want to interview him.



Wookie

An asynchronous web server, by an impressive lisper who built many async libraries. Used for the Turtl API backend. Dealing with async brings its own set of problems (how will debugging go?).

Nice api to build routes, good documentation: http://wookie.lyonbros.com/

Weblocks (solving the Javascript problem)

Weblocks lets us create dynamic pages without a line of JavaScript, all in Lisp. It was started years ago and has seen a large update and refactoring lately.

It isn't the easy path to web development in CL but there's great potential IMO.

It doesn't do two-way data binding as in modern JS frameworks. But new projects are being developed...

See our presentation below.



Accessing url parameters

It is easy and well explained with Hunchentoot or easy-routes in the Cookbook.
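For example, with easy-routes (a sketch based on its README): a :param in the URL template becomes a variable in the route body, and query-string parameters are declared with &get.

```lisp
(ql:quickload "easy-routes")

;; :id is captured from the URL path; debug comes from
;; the query string, e.g. /items/42?debug=t
(easy-routes:defroute show-item ("/items/:id" :method :get) (&get debug)
  (format nil "item: ~a, debug: ~a" id debug))
```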

Lucerne has a nice with-params macro that makes accessing post or url query parameters a breeze:

@route app (:post "/tweet")
(defview tweet ()
  (if (lucerne-auth:logged-in-p)
      (let ((user (current-user)))
        (with-params (tweet)
          (utweet.models:tweet user tweet))
        (redirect "/"))
      (render-template (+index+)
                       :error "You are not logged in.")))

Snooze's way is simple and lispy: we define routes like methods and parameters as keys:

(defroute lispdoc (:get :text/* name &key (package :cl) (doctype 'function))

matches /lispdoc, /lispdoc/foo and /lispdoc/foo?package=arg.

On the contrary, I find Caveman's and Ningle's ways cumbersome.


(setf (ningle:route *app* "/hello/:name")
      #'(lambda (params)
          (format nil "Hello, ~A" (cdr (assoc "name" params :test #'string=)))))

The above controller will be invoked when you access "/hello/Eitaro" or "/hello/Tomohiro", and then (cdr (assoc "name" params :test #'string=)) will be "Eitaro" and "Tomohiro".

and it doesn't say anything about query parameters. I had to ask:

(assoc "the-query-param" (clack.request:query-parameter lucerne:*request*) :test 'string=)


Parameter keys containing square brackets ("[" & "]") will be parsed as structured parameters. You can access the parsed parameters as _parsed in routers.

(defroute "/edit" (&key _parsed)
  (format nil "~S" (cdr (assoc "person" _parsed :test #'string=))))
;=> "((\"name\" . \"Eitaro\") (\"email\" . \"e.arrows@gmail.com\") (\"birth\" . ((\"year\" . 2000) (\"month\" . 1) (\"day\" . 1))))"

Session and cookies

Data storage


Mito works for MySQL, Postgres and SQLite3 on SBCL and CCL.


We can define models with a regular class which has a mito:dao-table-class :metaclass:

(defclass user ()
  ((name :col-type (:varchar 64)
         :initarg :name
         :accessor user-name)
   (email :col-type (:varchar 128)
          :initarg :email
          :accessor user-email))
  (:metaclass mito:dao-table-class)
  (:unique-keys email))

We create the table with ensure-table-exists:

(ensure-table-exists 'user)
;-> ;; CREATE TABLE IF NOT EXISTS "user" (
;       "id" BIGSERIAL NOT NULL PRIMARY KEY,
;       "name" VARCHAR(64) NOT NULL,
;       "email" VARCHAR(128),
;       "created_at" TIMESTAMP,
;       "updated_at" TIMESTAMP
;   ) () [0 rows] | MITO.DAO:ENSURE-TABLE-EXISTS
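Basic CRUD then goes through Mito's DAO functions (this assumes a connection was opened first, e.g. with mito:connect-toplevel):

```lisp
;; e.g. (mito:connect-toplevel :sqlite3 :database-name "myapp.db")

;; insert a row
(mito:create-dao 'user :name "Eitaro" :email "e.arrows@gmail.com")

;; query it back
(mito:find-dao 'user :name "Eitaro")   ; one USER object, or NIL
(mito:retrieve-dao 'user)              ; a list of all users
(mito:count-dao 'user)                 ; how many rows

;; and delete it
(mito:delete-dao (mito:find-dao 'user :name "Eitaro"))
```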

Persistent datastores


Mito has migrations support and DB schema versioning for MySQL, Postgres and SQLite3, on SBCL and CCL. Once we have changed our model definition, we have commands to see the generated SQL and to apply the migration.

We inspect the SQL: (suppose we just added the email field into the user class above)

(mito:migration-expressions 'user)
;=> (#<SXQL-STATEMENT: ALTER TABLE user ALTER COLUMN email TYPE character varying(128), ALTER COLUMN email SET NOT NULL>
;    #<SXQL-STATEMENT: CREATE UNIQUE INDEX unique_user_email ON user (email)>)

and we can apply the migration:

(mito:migrate-table 'user)
;-> ;; ALTER TABLE "user" ALTER COLUMN "email" TYPE character varying(128), ALTER COLUMN "email" SET NOT NULL () [0 rows] | MITO.MIGRATION.TABLE:MIGRATE-TABLE
;   ;; CREATE UNIQUE INDEX "unique_user_email" ON "user" ("email") () [0 rows] | MITO.MIGRATION.TABLE:MIGRATE-TABLE
;-> (#<SXQL-STATEMENT: ALTER TABLE user ALTER COLUMN email TYPE character varying(128), ALTER COLUMN email SET NOT NULL>
;    #<SXQL-STATEMENT: CREATE UNIQUE INDEX unique_user_email ON user (email)>)

Crane advertises automatic migrations, i.e. it would run them after a C-c C-c. Unfortunately, Crane has some issues: it doesn't work with SQLite yet and the author is busy elsewhere. It didn't work for me on first try.

Let's hope the author comes back to work on it in the near future.


There are a few libraries, see the awesome-cl list. At least one is actively maintained.


On an error we are usually dropped into the interactive debugger by default.

Snooze gives options:

clack-errors. Like a Flask or Django stacktrace in the browser. For Caveman, Ningle and family.

By default, when Clack throws an exception when rendering a page, the server waits for the response until it times out while the exception waits in the REPL. This isn't very useful. So now there's this.

It prints the stacktrace along with some request details on the browser. Can return a custom error page in production.


Are you tired of jumping to your web browser every time you need to test your work in Clack? Clack-pretend will capture and replay calls to your clack middleware stack. When developing a web application with clack, you will often find it inconvenient to run your code from the lisp REPL because it expects a clack environment, including perhaps, cookies or a logged-in user. With clack-pretend, you can run prior web requests from your REPL, moving development back where it belongs.


Testing with a local DB: example of a testing macro.

We would use envy to switch configurations.


Oauth, Job queues, etc

Templating engines


Djula: as Django templates. Good documentation. Comes by default in Lucerne and Caveman.

We also use a dot to access attributes of dict-like variables (plists, alists, hash-tables, arrays and CLOS objects), a feature backed by the access library.

We once wanted to use structs and didn't find how to do it directly in Djula, so we resorted to a quick helper function to transform the struct into an alist.

Defining custom template filters is straightforward in Djula, really a breeze compared to Django.
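For instance, a custom filter with Djula's def-filter macro (a small sketch):

```lisp
;; register a filter named "shout" that upcases its input
(djula:def-filter :shout (val)
  (string-upcase val))

;; used in a template as: {{ user.name | shout }}
```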

Eco - a mix of html with lisp expressions.

Truncated example:

      <% if posts %>
        <h1>Recent Posts</h1>
        <ul id="post-list">
          <% loop for (title . snippet) in posts %>
            <li><%= title %> - <%= snippet %></li>
          <% end %>


I prefer the semantics of Spinneret over cl-who. It also has more features (like embeddable markdown, warns on malformed html, and more).
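A small Spinneret example for flavour, using its with-html-string macro:

```lisp
(ql:quickload "spinneret")

;; with-html-string builds HTML from s-expressions and
;; returns it as a string.
(spinneret:with-html-string
  (:ul
   (dolist (item '("larry" "curly" "moe"))
     (:li item))))
;; returns an HTML string for the <ul>; note that Spinneret
;; omits optional closing tags by default.
```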



Parenscript is a translator from an extended subset of Common Lisp to JavaScript. Parenscript code can run almost identically on both the browser (as JavaScript) and server (as Common Lisp). Parenscript code is treated the same way as Common Lisp code, making the full power of Lisp macros available for JavaScript. This provides a web development environment that is unmatched in its ability to reduce code duplication and provide advanced meta-programming facilities to web developers.
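The entry point is the ps macro, which compiles a Lisp form to a string of JavaScript:

```lisp
(ql:quickload "parenscript")

;; PS:PS returns JavaScript source as a string,
;; something like "alert('Hello ' + 'Parenscript');"
(ps:ps (alert (+ "Hello " "Parenscript")))
```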



A Lisp-to-Javascript compiler bootstrapped from Common Lisp and executed from the browser.




Is it possible to write Ajax-based pages only in CL?

The case Weblocks - Reblocks, 2017

Weblocks is an "isomorphic" web framework that solves the "JavaScript problem": it allows writing the backend and an interactive client interface in Lisp, without a line of JavaScript, in our usual Lisp development environment.

The framework revolves around widgets, which are updated server-side and automatically redisplayed with transparent ajax calls on the client.

It is being massively refactored, simplified, rewritten and documented since 2017. See the new quickstart:


Writing a dynamic todo-app boils down to:

(defwidget task ()
        ((title
          :initarg :title
          :accessor title)
         (done
          :initarg :done
          :initform nil
          :accessor done)))

(defwidget task-list ()
        ((tasks
          :initarg :tasks
          :accessor tasks)))

(defmethod render ((task task))
        "Render a task."
        (with-html
              (:span (if (done task)
                         (:s (title task))
                         (title task)))))

(defmethod render ((widget task-list))
        "Render a list of tasks."
        (with-html
              (:h1 "Tasks")
              (:ul
                (loop for task in (tasks widget) do
                      (:li (render task))))))

(defmethod weblocks/session:init ((app tasks))
         (declare (ignorable app))
         (let ((tasks (make-task-list "Make my first Weblocks app"
                                      "Deploy it somewhere"
                                      "Have a profit")))
           (make-instance 'task-list :tasks tasks)))

(defmethod add-task ((task-list task-list) title)
        (push (make-task title)
              (tasks task-list))
        (update task-list))

Adding an html form and calling the new add-task function:

(defmethod render ((task-list task-list))
        (with-html
          (:h1 "Tasks")
          (loop for task in (tasks task-list) do
            (render task))
          (with-html-form (:POST (lambda (&key title &allow-other-keys)
                                   (add-task task-list title)))
            (:input :type "text"
                    :name "title"
                    :placeholder "Task's title")
            (:input :type "submit"
                    :value "Add"))))


We can build our web app from sources, no worries, that works.


We can also build an executable for web apps. That simplifies the deployment process drastically.

We can even get a Lisp REPL and interact with the running web app, in which we can even install new Quicklisp dependencies. That's quite incredible, and it's very useful, if not to hot-reload a web app (which I do anyways but that might be risky), at least to reload a user's configuration file.

This is the general way:

(sb-ext:save-lisp-and-die #p"name-of-executable" :toplevel #'main :executable t)

But this is an SBCL-specific command, so we can be generic and use asdf:make, with a couple lines inside our system .asd declaration. See the Cookbook: https://lispcookbook.github.io/cl-cookbook/scripting.html#with-asdf
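The corresponding lines in the system definition look like this (names are placeholders):

```lisp
;; my-webapp.asd
(asdf:defsystem "my-webapp"
  ;; ... :depends-on, :components, etc. ...
  :build-operation "program-op"     ; build an executable
  :build-pathname "my-webapp"       ; name of the binary
  :entry-point "my-webapp:main")    ; function run at startup

;; then build it with: (asdf:make :my-webapp)
```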

Now if you run your binary, your app will start up all fine, but it will quit instantly. We need to put the web server thread on the foreground:

(defun main ()
  (start-app :port 9003)
  ;; with bordeaux-threads. Also sb-ext: join-thread, thread-name, list-all-threads.
  (bt:join-thread (find-if (lambda (th)
                             (search "hunchentoot" (bt:thread-name th)))
                           (bt:all-threads))))
When I run it, Hunchentoot stays listening at the foreground:

$ ./my-webapp
Hunchentoot server is started.
Listening on localhost:9003.

I can use a tmux session (tmux, then C-b d to detach it) or better yet, start the app with Systemd, see below.

But we need something more for real apps, we need to ship foreign libraries. Deploy to the rescue. Edit your .asd file slightly to have:

:defsystem-depends-on (:deploy)  ;; (ql:quickload "deploy") before
:build-operation "deploy-op"     ;; instead of "program-op" as above
:build-pathname "my-webapp"  ;; doesn't change
:entry-point "my-webapp:main"  ;; doesn't change

Build the app again with asdf:make, and see how Deploy discovers and ships in the required foreign libraries: libssl.so, libosicat.so, libmagic.so...

It puts the final binary and the .so libraries in a bin/ directory. This is what you'll have to ship.

I can now build my web app, send it to my VPS and see it live \o/

One more thing. We don't want to ship libssl and libcrypto, so we ask Deploy to not ship them. We prefer to rely on the target OS.

;; .asd
;; Deploy may not find libcrypto on your system.
;; But anyways, we won't ship it to rely instead
;; on its presence on the target OS.
(require :cl+ssl)  ; sometimes necessary.
#+linux (deploy:define-library cl+ssl::libssl :dont-deploy T)
#+linux (deploy:define-library cl+ssl::libcrypto :dont-deploy T)

There's also a hack if you have an issue with ASDF trying to update itself... see the skeleton: https://github.com/vindarel/cl-cookieweb

To be exhaustive, here's how to catch a user's C-c and stop our app correctly. By default, you would get a full backtrace. Yuk.

(defun main ()
  (start-app :port 9003)
  ;; with bordeaux-threads
  (handler-case
      (bt:join-thread (find-if (lambda (th)
                                 (search "hunchentoot" (bt:thread-name th)))
                               (bt:all-threads)))
    (#+sbcl sb-sys:interactive-interrupt
     #+ccl  ccl:interrupt-signal-condition
     #+clisp system::simple-interrupt-condition
     #+ecl ext:interactive-interrupt
     #+allegro excl:interrupt-signal
     () (progn
          (format *error-output* "Aborting.~&")
          (clack:stop *server*)
          (uiop:quit 1)))  ;; portable exit, included in ASDF, already loaded.
    ;; for other, unhandled errors (we might want to do the same).
    (error (c) (format t "Woops, an unknown error occurred:~&~a~&" c))))

To see:

Multiplatform delivery with Electron (Ceramic)

Ceramic does all the work for us.

It is as simple as this:

;; Load Ceramic and our app
(ql:quickload '(:ceramic :our-app))

;; Ensure Ceramic is set up

;; Start our app (here based on the Lucerne framework)
(lucerne:start our-app.views:app :port 8000)

;; Open a browser window to it
(defvar window (ceramic:make-window :url "http://localhost:8000/"))

;; start Ceramic
(ceramic:show-window window)

and we can ship this on Linux, Mac and Windows.


Ceramic applications are compiled down to native code, ensuring both performance and enabling you to deliver closed-source, commercial applications.

(so no need to minify our JS)

with one more line:

(ceramic.bundler:bundle :ceramic-hello-world
                                 :bundle-pathname #p"/home/me/app.tar")
Copying resources...
Compiling app...

This last line was buggy for us.


When you build a self-contained binary (see above, "Shipping"), deployment gets easy.


sbcl --load <my-app> --eval '(start-my-app)'

For example, a `run` Makefile target:

run:
        sbcl --load my-app.asd \
             --eval '(ql:quickload :my-app)' \
             --eval '(my-app:start-app)'  # given this function starts clack or hunchentoot

this keeps sbcl in the foreground. We can use tmux to put it in background, or use Systemd.

Then we need a task supervisor that will restart our app on failures, start it after a reboot, and handle logging. See the section below and the example projects.

with Clack

$ clackup app.lisp
Hunchentoot server is started.
Listening on localhost:5000.

with Docker

So we have various implementations ready to use: sbcl, ecl, ccl... with Quicklisp well configured.


On Heroku

See heroku-buildpack-common-lisp and the Awesome CL#deploy section.

Systemd: daemonizing, restarting in case of crashes, handling logs

Generally, this depends on your system. But most GNU/Linux distros now come with Systemd. Write a service file like this:

# /etc/systemd/system/my-app.service
[Unit]
Description=stupid simple example

[Service]
# your command, full path:
ExecStart=/usr/bin/sbcl --load run.lisp

One gotcha is that your app must run in the foreground. See the threads snippet above in "Shipping", and add it when running the app from sources too. Otherwise you'll see a "compilation unit aborted, caught 1 fatal error condition" error: that's simply your Lisp quitting too early.

Run this command to start the service:

sudo systemctl start my-app.service

to check its status:

systemctl status my-app.service

Systemd handles logging for you. We write to stdout or stderr, and it writes the logs:

journalctl -u my-app.service

use -f -n 30 to see live updates of logs.

This tells Systemd to handle crashes and to restart the app:

Restart=on-failure

and it can start the app after a reboot:

[Install]
WantedBy=multi-user.target

to enable it:

sudo systemctl enable my-app.service

Debugging SBCL error: ensure_space: failed to allocate n bytes

If you get this error with SBCL on your server:

mmap: wanted 1040384 bytes at 0x20000000, actually mapped at 0x715fa2145000
ensure_space: failed to allocate 1040384 bytes at 0x20000000
(hint: Try "ulimit -a"; maybe you should increase memory limits.)

then disable ASLR:

sudo bash -c "echo 0 > /proc/sys/kernel/randomize_va_space"

Connecting to a remote Swank server

Little example here: http://cvberry.com/tech_writings/howtos/remotely_modifying_a_running_program_using_swank.html.

It defines a simple function that prints forever:

;; a little common lisp swank demo
;; while this program is running, you can connect to it from another terminal or machine
;; and change the definition of doprint to print something else out!
;; (ql:quickload '(:swank :bordeaux-threads))

(require :swank)
(require :bordeaux-threads)

(defparameter *counter* 0)

(defun dostuff ()
  (format t "hello world ~a!~%" *counter*))

(defun runner ()
  (bt:make-thread (lambda ()
                    (swank:create-server :port 4006)))
  (format t "we are past go!~%")
  (loop while t do
       (sleep 5)
       (incf *counter*)))


On our server, we run it with

sbcl --load demo.lisp

we do port forwarding on our development machine:

ssh -L4006:localhost:4006 username@example.com

this will securely forward port 4006 on the server at example.com to our local computer's port 4006 (swank accepts connections from localhost).

We connect to the running swank with M-x slime-connect, typing in port 4006.

We can write new code:

(defun dostuff ()
  (format t "goodbye world ~a!~%" *counter*))
(setf *counter* 0)

and eval it as usual with M-x slime-eval-region for instance. The output should change.

There are more pointers on CV Berry's page.

Hot reload

When we run the app as a script we get a Lisp REPL, so we can hot-reload the running web app. Here we demonstrate a recipe to update it remotely.

Example taken from Quickutil.

It has a Makefile target:

        $(call $(LISP), \
                (ql:quickload :quickutil-server) (ql:quickload :swank-client), \
                (swank-client:with-slime-connection (conn "localhost" $(SWANK_PORT)) \
                        (swank-client:slime-eval (quote (handler-bind ((error (function continue))) \
                                (ql:quickload :quickutil-utilities) (ql:quickload :quickutil-server) \
                                (funcall (symbol-function (intern "STOP" :quickutil-server))) \
                                (funcall (symbol-function (intern "START" :quickutil-server)) $(start_args)))) conn)) \

It has to be run on the server (a simple fabfile command can call it through ssh). Beforehand, a `fab update` has run git pull on the server, so the new code is present but not running. The target connects to the local swank server, loads the new code, then stops and starts the app in a row.

Buy Me a Coffee at ko-fi.com

(yes that currently helps, thanks!)

24 Jun 2022 6:51am GMT

22 Jun 2022


Tycho Garen : Methods of Adoption

Before I started actually working as a software engineer full time, writing code was this fun thing I was always trying to figure out on my own, and it was fun, and I could hardly sit down at my computer without learning something. These days, I do very little of this kind of work. I learn more about computers by doing my job and frankly, the kind of software I write for work is way more satisfying than any of the software I would end up writing for myself.

I think this is because the projects that a team of engineers can work on are necessarily larger and more impactful. When you build software with a team, most of the time the product either finds users (or you end up without a job.) When you build software with other people and for other people, the things that make software good (more rigorous design, good test discipline, scale) are more likely to be prevalent. Those are the things that make writing software fun.

Wait, you ask "this is a lisp post?" and "where is the lisp content?" Wait for it...

In Pave the On/Off Ramps [1] I started exploring the idea that technical adoption is less a function of basic capabilities or number of features in general, and more about the specific features that support and further adoption and create confidence in maintenance and interoperability. A huge part of the decision process is finding good answers to "can I use these tools as part of the larger system of tools that I'm using?" and "can I use this tool a bit without needing to commit to using it for everything?"

A lot of technologies and tools demand ideological compliance, and such technologies are very difficult to move into with confidence: their adoption depends on once-in-a-generation sea changes or on taking significant risks. [2] The alternate method, integrating into people's existing workflows and systems and providing great tools that work for some use cases to prove their capability, is much more reliable, if somewhat less exciting.

The great thing about Common Lisp is that it always leans towards the pragmatic rather than the ideological. Common Lisp has a bunch of tools--both in the language and in the ecosystem--which are great to use but also not required. You don't have to use CLOS (but it's really cool), you don't have to use ASDF, and there isn't one paradigm of developing or designing software that you're constrained to. Do what works.

I think there are a lot of questions that sort of follow on from this, particularly about lisp and the adoption of new technologies. So let's go through the ones I can think of, FAQ style:

  • What kind of applications would a "pave the exits" support?

    It almost doesn't matter, but the answer is probably a fairly boring set of industrial applications: services that transform and analyze data, data migration tools, command-line (build, deployment) tools for developers and operators, platform orchestration tools, and the like. This is all boring (on the one hand,) but most software is boring, and it's rarely the case that the programming language actually matters much.

    In addition, CL has a pretty mature set of tools for integrating with C libraries and might be a decent alternative to other languages with more complex distribution stories. You could see CL being a good language for writing extensions on top of existing tools (for both Java with ABCL and C/C++ with ECL and CLASP), depending.

  • How does industrial adoption of Common Lisp benefit the Common Lisp community?

    First, more people writing Common Lisp for their jobs, which (assuming they have a good experience,) could proliferate into more projects. A larger community maybe means a larger volume of participation in existing projects (and more projects in general.) Additionally, more industrial applications means more jobs for people who are interested in writing CL, and that seems pretty cool.

  • How can CL compete with more established languages like Java, Go, and Rust?

    I'm not sure competition is really the right model for thinking about this: there's so much software to write that "my language vs your language" is just a poor frame; there's enough work to be done that everyone can be successful.

    At the same time, I haven't heard about people who are deeply excited about writing Java, and Go folks (whom I count myself among) tend to be pretty pragmatic as well. I see lots of people who are excited about Rust, and it's definitely a cool language, though it shines best at lower-level problems than CL and has a reasonable FFI, so it might be the case that there's some exciting room for using CL for higher-level tasks on top of Rust fundamentals.

[1] In line with the idea that product management and design is about identifying what people are doing and then institutionalizing it, which is similar to the urban planning idea of "paving cowpaths," I sort of think of this as "paving the exits," though I recognize that this is a bit forced.
[2] I'm thinking of things like the moment of enterprise "object oriented programming" giving rise to Java and friends, or the big-data watershed moment in 2009 (or so) giving rise to so-called NoSQL databases. Without these kinds of events, the adoption of these big paradigm-shifting technologies is spotty and relies on the force of will of a particular technical leader, for better and (often) worse.

22 Jun 2022 12:00am GMT

10 Nov 2011

Planet Python

Terry Jones: Emacs buffer mode histogram

Tonight I noticed that I had over 200 buffers open in emacs. I've been programming a lot in Python recently, so many of them are in Python mode. I wondered how many Python files I had open, and I counted them by hand. About 90. I then wondered how many were in Javascript mode, in RST mode, etc. I wondered what a histogram would look like, for me and for others, at times when I'm programming versus working on documentation, etc.

Because it's emacs, it wasn't hard to write a function to display a buffer mode histogram. Here's mine:

235 buffers open, in 23 distinct modes

91               python +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
47          fundamental +++++++++++++++++++++++++++++++++++++++++++++++
24                  js2 ++++++++++++++++++++++++
21                dired +++++++++++++++++++++
16                 html ++++++++++++++++
 7                 text +++++++
 4                 help ++++
 4           emacs-lisp ++++
 3                   sh +++
 3       makefile-gmake +++
 2          compilation ++
 2                  css ++
 1          Buffer-menu +
 1                 mail +
 1                 grep +
 1      completion-list +
 1                   vm +
 1                  org +
 1               comint +
 1              apropos +
 1                 Info +
 1           vm-summary +
 1      vm-presentation +

Tempting as it is, I'm not going to go on about the heady delights of having a fully programmable editor. You either already know, or you can just drool in slack-jawed wonder.

Unfortunately I'm a terrible emacs lisp programmer. I can barely remember a thing each time I use it. But the interpreter is of course just emacs itself and the elisp documentation is in emacs, so it's a really fun environment to develop in. And because emacs lisp has a ton of support for doing things to itself, code that acts on emacs and your own editing session or buffers is often very succinct. See for example the save-excursion and with-output-to-temp-buffer functions below.

(defun buffer-mode-histogram ()
  "Display a histogram of emacs buffer modes."
  (interactive)
  (let* ((totals '())
         (buffers (buffer-list))
         (total-buffers (length buffers))
         (ht (make-hash-table :test 'equal)))
    (save-excursion
      (dolist (buffer buffers)
        (set-buffer buffer)
        (let ((mode-name (symbol-name major-mode)))
          (puthash mode-name (1+ (gethash mode-name ht 0)) ht))))
    (maphash (lambda (key value)
               (setq totals (cons (list key value) totals)))
             ht)
    (setq totals (sort totals (lambda (x y) (> (cadr x) (cadr y)))))
    (with-output-to-temp-buffer "Buffer mode histogram"
      (princ (format "%d buffers open, in %d distinct modes\n\n"
                     total-buffers (length totals)))
      (dolist (item totals)
        (let ((key (car item))
              (count (cadr item)))
          (if (equal (substring key -5) "-mode")
              (setq key (substring key 0 -5)))
          (princ (format "%2d %20s %s\n" count key
                         (make-string count ?+))))))))

Various things about the formatting could be improved. E.g., don't use fixed-width fields for the count and the mode names, and make each + sign stand for more than one buffer when the counts get large.

10 Nov 2011 2:42pm GMT

Mike Driscoll: wxPython: ANN: Namespace Diff Tool

Last night, Andrea Gavana released his new Namespace Diff Tool (NDT) to the world. I got his permission to reprint his announcement here for all those people who don't follow the wxPython mailing list. I think it sounds like a really cool tool. You should check it out and see what you think. Here is the announcement:


The `Namespace Diff Tool` (NDT) is a graphical user interface that can
be used to discover differences between different versions of a library,
or even between different iterations/sub-versions of the same library.

The tool can be used to identify what is missing and still needs to be
implemented, or what is new in a new release, which items do not have
docstrings and so on.

Full description of the original idea by Robin Dunn:


:warning: As most of the widgets in the GUI are owner drawn or custom,
it is highly probable that the interface itself will look messy on other
platforms (Mac, I am talking to you). Please do try and create a patch to
fix any possible issue in this sense.

:note: Please refer to the TODOs section for a list of things that still
need to be implemented.


In order to run NDT, these packages need to be installed:

- Python 2.X (where 5 <= X <= 7);
- wxPython >= 2.8.10;
- SQLAlchemy >= 0.6.4.

More detailed instructions on how to use it, TODO items, list of
libraries/packages I tested NDT against, screenshots and download links can
be found here:


If you stumble upon a bug (which is highly probable), please do let me
know. But most importantly, please do try and make an effort to create a
patch for the bug.

According to the thread, some bugs were already found and fixed.

10 Nov 2011 1:15pm GMT


OSDir.com - Java: Oracle Introduces New Java Specification Requests to Evolve Java Community Process

From the Yet Another dept.:

To further its commitment to the Java Community Process (JCP), Oracle has submitted the first of two Java Specification Requests (JSRs) to update and revitalize the JCP.

10 Nov 2011 6:01am GMT

OSDir.com - Java: No copied Java code or weapons of mass destruction found in Android

From the Fact Checking dept.:

ZDNET: Sometimes the sheer wrongness of what is posted on the web leaves us speechless. Especially when it's picked up and repeated as gospel by otherwise reputable sites like Engadget. "Google copied Oracle's Java code, pasted in a new license, and shipped it," they reported this morning.

Sorry, but that just isn't true.

10 Nov 2011 6:01am GMT

OSDir.com - Java: Java SE 7 Released

From the Grande dept.:

Oracle today announced the availability of Java Platform, Standard Edition 7 (Java SE 7), the first release of the Java platform under Oracle stewardship.

10 Nov 2011 6:01am GMT

feedPlanet Python

Andy Todd: Extracting a discrete set of values

Today's I love Python moment is brought to you by set types.

I have a file, XML naturally, that contains a series of transactions. Each transaction has a reference number, but the reference number may be repeated. I want to pull the distinct set of reference numbers from this file. The way I learnt to build up a discrete set of items (many years ago) was to use a dict and setdefault.

>>> ref_nos = {}
>>> for record in records:
...     ref_nos.setdefault(record.key, 1)
...
>>> ref_nos.keys()

But Python has had a sets module since 2.3 and the built-in set type since 2.4, so my knowledge is woefully out of date. The latest way to get the unique values from a sequence looks something like this;

>>> ref_nos = set([record.key for record in records])

I think I should get bonus points for using a list comprehension as well.
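For what it's worth, Python 2.7 and 3.x also support set comprehensions, which drop the intermediate list entirely. A small self-contained sketch (the `Record` class and the sample data are made up for illustration; the original post's records come from an XML file):

```python
# Stand-in for the transactions parsed from the XML file: each record
# carries a reference number, and reference numbers may repeat.
class Record:
    def __init__(self, key):
        self.key = key

records = [Record("A1"), Record("B2"), Record("A1"), Record("C3"), Record("B2")]

# List-comprehension form, as in the post:
ref_nos = set([record.key for record in records])

# Set-comprehension form (2.7+): same result, no intermediate list.
ref_nos = {record.key for record in records}

print(sorted(ref_nos))  # ['A1', 'B2', 'C3']
```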

10 Nov 2011 5:42am GMT

09 Nov 2011

feedPlanet Grep

Thomas Vander Stichele: Mach 1.0.0 “Madera” released

Another November, another Fedora. 16 came out, so it was time to update mach again.

And today I thought, is there any reason mach isn't 1.0 yet? Am I going to do anything more to this piece of code before I want to call it that?

And the answer is, no. It's the first Python application I've written, and I'm not particularly proud of the code, but I'm happy I've made good use of it for so long, and that it helped push packaging approaches forward and sparked ideas for the Fedora build system.

Since I didn't like the original code for mach2 (there was a version 1 which was Makefile-based), I started a rewrite with unit tests, better code layout, decent classes for abstracting distro-specific stuff, and so on.

The experience of how mock was created based off mach2 was a slightly sour one however, so I wasn't really motivated to finish the mach3 rewrite. Sometimes that's the drawback of open source - sure, forking is specifically allowed, so don't whine about it when it happens. But when it's done gratuitously, with no serious attempt at collaborating, it doesn't feel like it's in the spirit of open source.

Anyway, that was a long time ago. mach2 as it is today, is done. It really only needs updating for newer versions. As long as it works for me, it's unlikely I will continue mach3, but who knows?

Enjoy the release!

09 Nov 2011 10:56pm GMT

feedPlanet Debian

Matthew Garrett: Properly booting a Mac

This is mostly for my own reference, but since it might be useful to others:

By "Properly booting" I mean "Integrating into the boot system as well as Mac OS X does". The device should be visible from the boot picker menu and should be selectable as a startup disk. For this to happen the boot volume should be in HFS+ format and have the following files:

That's enough to get it to appear in the startup disk boot pane. Getting it in the boot picker requires it to be blessed. You probably also want a .VolumeIcon.icns in / in order to get an appropriate icon.

Now all I need is an aesthetically appealing boot loader.


09 Nov 2011 10:06pm GMT

Martin Zobel-Helas: How to read Debian's mailing list archives locally

From time to time I want to reply to mails on Debian mailing lists that I am not subscribed to. To have proper reply headers set, I usually copied the archive mbox from master.debian.org to my local machine.

Now I found a much nicer way.

apt-get install fuse afuse sshfs
adduser zobel fuse
mkdir ~/fuse/
afuse -o mount_template="sshfs %r:/ %m" -o unmount_template="fusermount -u -z %m" -o timeout=60 ~/fuse
mutt -f /home/zobel/fuse/master.debian.org/home/debian/lists/debian-user/debian-user.201111

09 Nov 2011 10:04pm GMT

Gunnar Wolf: On the social-based Web and my reluctance to give it my time

I recently started getting mails from no-reply@joindiaspora.com. Usually, a mail from no-reply@whatever is enough to make me believe that the admins of said whatever are clueless about what e-mail means and how it should work. And in this case, it really amazes me - if I get an invite to Diaspora*, right, I should not have to pester a hypothetical sysadmin@joindiaspora.com to get me off their list, but I should be able to reply to the person mailing me - maybe requesting extra details on what he is inviting me to, or allowing me to tell him why I'm not interested. But yes, Diaspora* has fallen to the ease of requiring me to join their network to be able to communicate back with the "friend" who invited me.

Some of the (three?) readers of this site might not be familiar with the Diaspora* project. It is a free reimplementation (as far as I know) of something similar to Facebook - Free not only in the sense that it runs free software, but also because it is federated - Your data will not belong to a specific company (that is, you are not the value object they sell and make money with), but you can choose and switch (or become) the provider for your information. A very interesting proposal, socially and technically.

I find that a gross violation of netiquette. I should be able to reply to the mail - even if in this case it were to (and sorry - as you are spreading my name/mail, you will excuse me if I spread your name ;-) ) fernando.estrada.invite1501335@joindiaspora.com. Such a (fictional, FWIW) address would allow mail to reach the submitter back through the same medium it was sent, without allowing open spamming into the network.

Now, what prompted me to write this post (just before adding no-reply@joindiaspora.com to my blacklist) is the message Fernando sent as the invitation (in an ugly HTML-only mail which erroneously promised to be text/plain, sigh...): «So, at least are you going to give Diaspora a chance?»

The answer is: No.

But not because of being a fundamentalist. Right, I am among what many people qualify as Free Software zealots, but many of my choices (as this one is) are in no way related to the software's freeness. I use non-free Web services, as much as many of you do. Yes, I tend to use them less, rather than more (as the tendency goes).

But the main reason I don't use Twitter is the same reason I don't use Identi.ca, its free counterpart - And the reason I'm not interested in Facebook is the same reason I will not join Diaspora* - Because I lack time for yet another stream of activity, of information, of things to do and think about.

Yes, even if I care about you and I want to follow what's going on in your life: The best way to do it is to sit over a cup of coffee, or have some dinner, or to meet once a year in the most amazing conference ever. Or we can be part of distributed projects together, and we will really interact lots. Or you can write a blog! I do follow the blogs of many of my friends (plus several planets), even if they have fallen out of fashion - A blog post pulls me to read it as it is a unit of information, not too much depending on context (a problem when I read somebody's Twitter/Identica lines: You have to hunt a lot of conversations to understand what's going on), gives a true dump of (at least one aspect of) your state of (mind|life|work), and is a referenceable unit I can forward to other people, or quote if needed.

So, yes, I might look old-fashioned, clinging to the tools of the last-decade for my Social Web presence. I will never be a Social Media Expert. I accept it - But please, don't think it is a Stallmanesque posture from me. It is just that of a person who can lose too much time, and needs to get some work done in the meantime.

(oh, of course: Blog posts also don't have to make much sense or be logically complete. But at least they allow me to post a full argument!)

09 Nov 2011 5:55pm GMT

feedPlanet Grep

Xavier Mertens: Biology Rules Apply to Infosec?

(Source: www.esa.org)

In biology, it is proven that consanguinity between members of the same group (for example, people living in the same closed area, or animals from the same breed) may reduce their resistance to certain diseases or weaken certain physical characteristics. It's important to keep some level of diversity. The latest Juniper story reminded me of the talk about "monoculture" presented at BlackHat Europe 2011.

A few days ago, some parts of the Internet were affected by a bug in the BGP update code of Juniper routers. If you look at the market for core routers, it is dominated by two manufacturers: Cisco & Juniper. The routers operated by major ISPs are crucial to keeping the Internet reliable. If most of those devices come from a single manufacturer (or a very limited number of them), you increase the risk of facing big issues when they are affected by a bug or a security flaw.

Now, speaking about devices or applications in general (the core-routers were just an example) and from a business point of view, monoculture is positive:

But, putting the layer-8 (the "political layer") aside, monoculture has side effects:

Like in biology, monoculture can generate catastrophic situations in case of a successful attack or major bug. I don't say that the big players do a bad job (otherwise they could never reach such a share of the market). Just don't behave like a lemming: choose the solution which matches your requirements, not the one you pick just because "it's a big name".

Do you remember the French movie "Les Rivières Pourpre" ("The Crimson Rivers") with the closed society of Guernon?

09 Nov 2011 1:44pm GMT

Thomas Bouve: Flynn has to spend 10 days at the hospital due to a kidney &...

Flynn has to spend 10 days at the hospital due to a kidney & blood infection :_( (Taken with Instagram at AZ Sint-Lucas)

09 Nov 2011 12:18pm GMT

08 Nov 2011

feedfosdem - Google Blog Search

papupapu39 (papupapu39)'s status on Tuesday, 08-Nov-11 00:28 ...

papupapu39 · http://identi.ca/url/56409795 #fosdem #freeknowledge #usamabinladen · about a day ago from web.

08 Nov 2011 12:28am GMT

05 Nov 2011

feedfosdem - Google Blog Search

Write and Submit your first Linux kernel Patch | HowLinux.Tk ...

FOSDEM (Free and Open Source Development European Meeting) is a European event centered around Free and Open Source software development. It is aimed at developers and all interested in the Free and Open Source news in the world. ...

05 Nov 2011 1:19am GMT

03 Nov 2011

feedfosdem - Google Blog Search

Silicon Valley Linux Users Group – Kernel Walkthrough | Digital Tux

FOSDEM (Free and Open Source Development European Meeting) is a European event centered around Free and Open Source software development. It is aimed at developers and all interested in the Free and Open Source news in the ...

03 Nov 2011 3:45pm GMT

28 Oct 2011

feedPlanet Ruby

O'Reilly Ruby: MacRuby: The Definitive Guide

Ruby and Cocoa on OS X, the iPhone, and the Device That Shall Not Be Named

28 Oct 2011 8:00pm GMT

14 Oct 2011

feedPlanet Ruby

Charles Oliver Nutter: Why Clojure Doesn't Need Invokedynamic (Unless You Want It to be More Awesome)

This was originally posted as a comment on @fogus's blog post "Why Clojure doesn't need invokedynamic, but it might be nice". I figured it's worth a top-level post here.

Ok, there's some good points here and a few misguided/misinformed positions. I'll try to cover everything.

First, I need to point out a key detail of invokedynamic that may have escaped notice: any case where you must bounce through a generic piece of code to do dispatch -- regardless of how fast that bounce may be -- prevents a whole slew of optimizations from happening. This might affect Java dispatch, if there's any argument-twiddling logic shared between call sites. It would definitely affect multimethods, which are using a hand-implemented PIC. Any case where there's intervening code between the call site and the target would benefit from invokedynamic, since invokedynamic could be used to plumb that logic and let it inline straight through. This is, indeed, the primary benefit of using invokedynamic: arbitrarily complex dispatch logic folds away allowing the dispatch to optimize as if it were direct.

Your point about inference in Java dispatch is a fair one...if Clojure is able to infer all cases, then there's no need to use invokedynamic at all. But unless Clojure is able to infer all cases, then you've got this little performance time bomb just waiting to happen. Tweak some code path and obscure the inference, and kablam, you're back on a slow reflective impl. Invokedynamic would provide a measure of consistency; the only unforeseen perf impact would be when the dispatch turns out to *actually* be polymorphic, in which case even a direct call wouldn't do much better.

For multimethods, the benefit should be clear: the MM selection logic would be mostly implemented using method handles and "leaf" logic, allowing hotspot to inline it everywhere it is used. That means for small-morphic MM call sites, all targets could potentially inline too. That's impossible without invokedynamic unless you generate every MM path immediately around the eventual call.
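To make the "leaf" selection logic concrete, here is a minimal sketch of a monomorphic cache built from method handles - a guard testing the cached case, the cached target on a hit, and the generic selection logic as the fallback. All names here (`CacheDemo`, `onString`, and so on) are invented for illustration; this is not Clojure's actual multimethod machinery:

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class CacheDemo {
    // Two hypothetical multimethod implementations, selected by argument type.
    static String onString(Object o) { return "string:" + o; }
    static String onOther(Object o)  { return "other:" + o; }

    // Generic selection logic: the "bounce" every call normally goes through.
    static String dispatch(Object o) {
        return (o instanceof String) ? onString(o) : onOther(o);
    }

    // Guard for the cached case: is the argument a String?
    static boolean isString(Object o) { return o instanceof String; }

    static MethodHandle makeCachedDispatch() throws Exception {
        MethodHandles.Lookup l = MethodHandles.lookup();
        MethodType sig = MethodType.methodType(String.class, Object.class);
        MethodHandle guard = l.findStatic(CacheDemo.class, "isString",
                MethodType.methodType(boolean.class, Object.class));
        MethodHandle fast = l.findStatic(CacheDemo.class, "onString", sig);
        MethodHandle slow = l.findStatic(CacheDemo.class, "dispatch", sig);
        // Monomorphic cache: test the guard, call the cached target on a hit,
        // fall back to generic dispatch on a miss. Bound into an invokedynamic
        // call site, the JIT can inline straight through this structure.
        return MethodHandles.guardWithTest(guard, fast, slow);
    }

    // Convenience wrapper used below.
    static String apply(Object o) throws Throwable {
        return (String) makeCachedDispatch().invokeExact(o);
    }

    public static void main(String[] args) throws Throwable {
        System.out.println(apply("hi")); // string:hi, via the cached target
        System.out.println(apply(42));   // other:42, via the fallback
    }
}
```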

Now, on to defs and Var lookup. Depending on the cost of Var lookup, using a SwitchPoint-based invalidation plus invokedynamic could be a big win. In Java 7u2, SwitchPoint-based invalidation is essentially free until invalidated, and as you point out that's a rare case. There would essentially be *no* cost in indirecting through a var until that var changes...and then it would settle back into no cost until it changes again. Frequently-changing vars could gracefully degrade to a PIC.
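As a rough illustration of that SwitchPoint pattern, here is a self-contained sketch of a constant-until-invalidated var read: the value is bound as a true constant behind a SwitchPoint, and only after invalidation do calls fall back to re-reading the var. The names (`VarDemo`, `currentValue`) are invented for the example; Clojure's real Var machinery is of course more involved:

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.lang.invoke.SwitchPoint;

public class VarDemo {
    // Hypothetical mutable "var" holding a value, in the spirit of a Clojure Var.
    static int currentValue = 42;

    // Slow path: re-read the var on every call.
    static int readCurrent() { return currentValue; }

    // Returns three reads: before mutation, after mutation but before
    // invalidation (still sees the bound constant), and after invalidation.
    static int[] demo() throws Throwable {
        MethodHandles.Lookup lookup = MethodHandles.lookup();
        MethodType mt = MethodType.methodType(int.class);

        // Fast path: the var's current value baked in as a true constant,
        // fully inlinable by the JIT.
        MethodHandle fast = MethodHandles.constant(int.class, currentValue);
        MethodHandle slow = lookup.findStatic(VarDemo.class, "readCurrent", mt);

        SwitchPoint sp = new SwitchPoint();
        // Until sp is invalidated, calls go through `fast`; afterwards, `slow`.
        MethodHandle guarded = sp.guardWithTest(fast, slow);

        int before = (int) guarded.invokeExact();  // constant path
        currentValue = 99;
        int stale = (int) guarded.invokeExact();   // still the old constant
        SwitchPoint.invalidateAll(new SwitchPoint[] { sp });
        int after = (int) guarded.invokeExact();   // slow path sees the new value
        return new int[] { before, stale, after };
    }

    public static void main(String[] args) throws Throwable {
        for (int v : demo()) System.out.println(v);
    }
}
```

In a real implementation the invalidation would of course rebind a fresh constant and SwitchPoint rather than staying on the slow path; this sketch only shows the free-until-invalidated behaviour.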

It's also dangerous to understate the impact code size has on JVM optimization. The usual recommendation on the JVM is to move code into many small methods, possibly using call-through logic as in multimethods to reuse the same logic in many places. As I've mentioned, that defeats many optimizations, so the next approach is often to hand-inline logic everywhere it's used, to let the JVM have a more optimizable view of the system. But now we're stepping on our own feet...by adding more bytecode, we're almost certainly impacting the JVM's optimization and inlining budgets.

OpenJDK (and probably the other VMs too) has various limits on how far it will go to optimize code. A large number of these limits are based on the bytecoded size of the target methods. Methods that get too big won't inline, and sometimes won't compile. Methods that inline a lot of code might not get inlined into other methods. Methods that inline one path and eat up too much budget might push out more important calls later on. The only way around this is to reduce bytecode size, which is where invokedynamic comes in.

As of OpenJDK 7u2, MethodHandle logic is not included when calculating inlining budgets. In other words, if you push all the Java dispatch logic or multimethod dispatch logic or var lookup into mostly MethodHandles, you're getting that logic *for free*. That has had a tremendous impact on JRuby performance; I had previous versions of our compiler that did indeed infer static target methods from the interpreter, but they were often *slower* than call site caching solely because the code was considerably larger. With invokedynamic, a call is a call is a call, and the intervening plumbing is not counted against you.

Now, what about negative impacts to Clojure itself...

#0 is a red herring. JRuby supports Java 5, 6, and 7 with only a few hundred lines of changes in the compiler. Basically, the compiler has abstract interfaces for doing things like constant lookup, literal loading, and dispatch that we simply reimplement to use invokedynamic (extending the old non-indy logic for non-indified paths). In order to compile our uses of invokedynamic, we use Rémi Forax's JSR-292 backport, which includes a "mock" jar with all the invokedynamic APIs stubbed out. In our release, we just leave that library out, reflectively load the invokedynamic-based compiler impls, and we're off to the races.

#1 would be fair if the Oracle Java 7u2 early-access drops did not already include the optimizations that gave JRuby those awesome numbers. The biggest of those optimizations was making SwitchPoint free, but also important are the inlining discounting and MutableCallSite improvements. The perf you see for JRuby there can apply to any indirected behavior in Clojure, with the same perf benefits as of 7u2.

For #2, to address the apparent vagueness in my blog post...the big perf gain was largely from using SwitchPoint to invalidate constants rather than pinging a global serial number. Again, indirection folds away if you can shove it into MethodHandles. And it's pretty easy to do it.

#3 is just plain FUD. Oracle has committed to making invokedynamic work well for Java too. The current thinking is that "lambda", the support for closures in Java 7, will use invokedynamic under the covers to implement "function-like" constructs. Oracle has also committed to Nashorn, a fully invokedynamic-based JavaScript implementation, which has many of the same challenges as languages like Ruby or Python. I talked with Adam Messinger at Oracle, who explained to me that Oracle chose JavaScript in part because it's so far away from Java...as I put it (and he agreed) it's going to "keep Oracle honest" about optimizing for non-Java languages. Invokedynamic is driving the future of the JVM, and Oracle knows it all too well.

As for #4...well, all good things take a little effort :) I think the effort required is far lower than you suspect, though.

14 Oct 2011 2:40pm GMT

07 Oct 2011

feedPlanet Ruby

Ruby on Rails: Rails 3.1.1 has been released!

Hi everyone,

Rails 3.1.1 has been released. This release requires at least sass-rails 3.1.4.

You can find an exhaustive list of changes on GitHub, along with the closed issues marked for v3.1.1.

Thanks to everyone!

07 Oct 2011 5:26pm GMT

26 Jul 2008

feedFOSDEM - Free and Open Source Software Developers' European Meeting

Update your RSS link

If you see this message in your RSS reader, please correct your RSS link to the following URL: http://fosdem.org/rss.xml.

26 Jul 2008 5:55am GMT

25 Jul 2008

feedFOSDEM - Free and Open Source Software Developers' European Meeting

Archive of FOSDEM 2008

These pages have been archived.
For information about the latest FOSDEM edition please check this url: http://fosdem.org

25 Jul 2008 4:43pm GMT

09 Mar 2008

feedFOSDEM - Free and Open Source Software Developers' European Meeting

Slides and videos online

Two weeks after FOSDEM and we are proud to publish most of the slides and videos from this year's edition.

All of the material from the Lightning Talks has been put online. We are still missing some slides and videos from the Main Tracks but we are working hard on getting those completed too.

We would like to thank our mirrors: HEAnet (IE) and Unixheads (US) for hosting our videos, and NamurLUG for quick recording and encoding.

The videos from the Janson room were live-streamed during the event and are also online on the Linux Magazin site.

We are having some synchronisation issues with Belnet (BE) at the moment. We're working to sort these out.

09 Mar 2008 3:12pm GMT