24 Apr 2015

Planet Lisp

Lispjobs: Common Lisp Web Developer, Somewrite, Tokyo or remote

We're looking for Common Lisp web developers. At most two positions are open in the current phase.

## About Us

Somewrite is located in Tokyo, Japan. We have an office in Aoyama, but no programmers work there: everyone works from home, pushing commits and reviewing them on GitHub. We usually talk via Slack, and via Google Hangouts once a week.

## What We Do

* Advertising delivery
* Recommendation engine for ads
* Internal admin pages

There are a lot of technical challenges: the applications must be fast and scalable.

## Technologies We Use

* Caveman2
* Integral
* Woo
* Roswell
* Qlot
* React.js
* Ansible
* AWS EC2, S3, RDS and ELB

## Must

* Experience developing in Common Lisp
* Experience building web applications with some web application framework (not necessarily in Common Lisp)
* Basic knowledge of \*nix OSes and RDBMSs
* English or Japanese

## Preferable

* Experience building web applications in Common Lisp
* High-performance Common Lisp development
* High-traffic website experience
* Rich UI development with JavaScript
* Basic knowledge of algorithms and various data structures

Send CV and a Cover Letter to fukamachi@somewrite.co.jp if you're interested.

24 Apr 2015 11:44am GMT

23 Apr 2015


Nicolas Hafner: 8th European Lisp Symposium - Confession 53

I'm currently sitting in a lecture hall at the ETH in Zürich. It's 8 AM[9], and despite having had a long night's sleep, I'm still quite tired. It's amazing what two days of just listening and talking can do to you. It also feels so much longer than that, just because of all the things that happened. I suppose I should start from the beginning though.

The European Lisp Symposium is an annual conference organised by fellow Lispers. This year it took place at Goldsmiths, University of London, from the 20th to the 21st of April and featured two keynotes[1], twelve talks, and thirteen lightning talks. Aside from the symposium's talks, there was also a lot of discussion and chatter during the coffee breaks and even at the end of the days, long into the nights. With a new record of 89 registrants, there were a lot of people to meet and talk to.

When I decided to take part, it was mostly in order to meet the people I knew from Freenode's #lisp channel, but I was very pleasantly surprised to find that the various talks too turned out to be very interesting to me, even opening new ways to solve existing problems in my projects[2]. Of the people I've met, everyone was nice, interesting, and fun to talk to. I spent the most time with Robert Strandh (beach), Christian Schafmeister (drmeister), and Masatoshi Sano (@snmsts), but I would have loved to speak more with others as well, if only there had been more time to do so. Hopefully I'll be able to join again next year and have a chance to do so then.

My journey began on Sunday afternoon, flying to London. The airport was quite crowded and the flight fully booked, crying children and babies included. Though the number of people on the flight was not altogether surprising, given that it departed at a reasonable hour and just at the beginning of the school holidays in Zürich. Regardless, the flight went well thanks to being able to drown out the noise with headphones. Arriving in Luton, the next challenge was to find my way through London to Greenwich, where my hotel was located. It turns out that Google Maps does not plan the best routes through the city, as it took me through districts I had no idea about, and I had to wander about aimlessly in the night more than once before reaching my destination. As I would learn on my trip back home, it's much better to ask the clerks at the railway stations for directions.

I arrived at the hotel around 9 o'clock local time, which was unfortunately too late to join the pre-ELS dinner that had been arranged. Though with over forty people, it would probably not have been very enjoyable for me to begin with. I had other things on my plate anyway. Apparently my booking (through booking.com) did not go into Mercure's systems, so they had to arrange something for me on the fly. Fortunately that was solved without a hitch, so after a quarter of an hour of standing around at the reception I was finally allowed into my freezing cold air-conditioned hotel room. Eager to meet people, I left my room again soon after and tried to lure whoever I could into the hotel's bar by offering chocolate on the mailing list.

After waiting for a very brief while, I met a German guy[3] and we talked about all sorts of interesting things. Turns out he's sneaking Common Lisp into high-performance computing. Later we were joined by Christian Schafmeister, who had just arrived after a long plane ride, and the discussion turned to Clasp, LLVM, and I forget what else we talked about until after midnight.

The next morning I woke up way too early, even escaping my phone's alarm. I made my way through the streets over to Goldsmiths, grabbing a croissant on the way. The weather was very nice; blue skies all over, and a nicely cool temperature to boot. I couldn't have asked for anything better. Finding my way through the campus I was surprised to find that a bunch of people had already arrived before me. I picked up my badge[4] and sat down to wait for everyone to show up.

The first keynote was led by Zach Beane and showed a really nice overview of what happened in the five years of Quicklisp's existence, and gave a taste of all the things to come in the future. I was very surprised and humbled by the brief shout-out he gave me for my library contributions. I wanted to speak with Zach about a few plans[5] regarding dists and versioning in Quicklisp later on, but I never got the chance to do so, unfortunately.

After a brief coffee break followed the first session of talks, of which I most remember the Racket-powered computer vision and the lisp-backed visual data-flow system. There were a surprising number of Racket-related talks overall, suggesting to me that I really should invest some time soon to take a good look at it. I'm not quite sure what to think of the visual data-flow system: it seems really slow and inconvenient to work with, even if it offers a nice way to write automatically parallelised programs. I don't remember much about the last talk, about Processing in Racket, which I'm really sorry about.

For the lunch break I remained in the building and spent the time talking to Robert Strandh, his wife[6], and Christian Schafmeister. I'm not sure why, but for the entire time that I was in London I never felt hungry, so I barely ate anything and still always had the same amount of energy as I always do. It'd be neat if that could continue for a while longer, to be honest.

After lunch we had a change of plans, due to the second keynote being rescheduled for the next day. What followed in its place was the second session of talks, which included a very interesting one about executable pseudo-code. Using this system (composed of a couple of CL macros), translating pseudo-code from textbooks to similar-looking, but actually executable, code became trivial and even rivalled the speed of optimised C++ implementations thereof for a specific example. The group is now testing the same for more algorithms, to see if this holds true. Exciting! We also heard about a Racket-based system of language-generating specifications meant to allow a better way to constrain an application's permissions in situations like that of Android, and we heard about a new algorithm for fast processing of lists in reverse order, using the stack for an implicit reversing operation.

A second coffee break with cookies ensued, followed by another keynote talk, this time from a Googler in the ITA Software group, speaking about unwanted memory retention despite garbage collection. He spoke about a long journey through the madness of debugging the GC, with the end result being that - if I understood correctly - the problem didn't lie in the GC at all, but instead in the way mmapped resources weren't being properly cleaned up by hand. Oh dear!

Closing off the first day we had a round of lightning talks, where I too took part and raced through my presentation about Qtools. We also heard about asynchronous hash-table operations, a mobile game engine using MOCL, OpenCV interaction from CL, an Erlang-like system for CL, and using Common Lisp in high-performance computing.

I had invited Christopher Hurley (Mithent) over for the conference banquet, and he luckily managed to make it fine, so I spent a large part of the remaining evening talking with him. It was really nice to finally meet in person again; if only we didn't live so far apart, we could have that opportunity more often. Later, Masatoshi Sano joined us at the table and for a long while we spoke about the various cultural and linguistic challenges involved in learning a new language. I was too embarrassed to try to say anything in Japanese to him; my mind completely locked up when I tried to form a sentence. I clearly still have a long, long way to go! By comparison, his English was immaculate, especially once he started to feel more comfortable talking to us.

After the banquet I once again hung around the hotel bar with a bunch of lispers, talking about this and that, but mostly compilers. I had to excuse myself around midnight, since I didn't want to be exhausted for the next day.

We started into the morning with an introduction to Gendl, a system that engineers use to build models for complex machinery. Apparently it has a long history and is mostly used in the aircraft industry. Unfortunately we didn't have time to get into any more involved stuff with the system, but what we did get was nonetheless a very interesting way to build and compose models using Common Lisp.

Following the subsequent coffee break we had a talk about Lambdatalk, a lisp-like extensible markup system that reminded me a lot of my FuncEM project from back when I was still clueless about lisp. It was unfortunately also rather apparent to me that the author didn't have much experience as a web designer. The previews and examples he showed were full of the kind of design I did when I first started out: stuffed with shadows, gradients, and all sorts of other effects that don't do anything but distract from the important things. Anyway, before I go off on another rant about bad design, I'll move on to the second talk, which was about a symbolic pattern-matching system in Clojure. I was immediately reminded of the chapters about that in PAIP, which I should really pick up again soon. The main motivation behind this kind of system was the hope of better engaging students in the author's CS course. I myself would definitely be interested! We then had a talk about various data structures optimised for fast persistent access, which are currently used for Franz's AllegroGraph database.

My lunch break was once again spent talking to Robert Strandh about various things, this time less Lisp related, and more about general politics, ethics, and all the various hard problems that are fun to complain about, but almost insurmountably difficult to do anything about.

The afternoon was the highlight of the entire symposium. We had absolutely amazing talks, most notably Christian Schafmeister's about Clasp and his struggles of writing a C++-interoperating Common Lisp implementation in order to build gigantic molecules, Robert Strandh's first-class global environments that allow for a clean bootstrapping environment in CL, Eitaro Fukamachi's Woo beating node.js in HTTP server benchmarks and now shooting for the stars by aiming to be the fastest HTTP server ever, Miroslav Urbanek using CL in high-performance computing to crunch numbers for quantum physics simulations, and finally Chris Bagley introducing a modern approach to writing and experimenting with OpenGL comfortably in CL. Now I'm noticing that I put all the talks in the 'most notably' listing, though I think that is well-deserved.[7]

But the excitement wasn't over yet for me, as I had another lightning talk to do. Since I knew I had a lot to say on Radiance, even if I did cut it down to a minimum already, I had to ramp up my talking speed to eleven and raced through it as quickly as I could. I hope people managed to follow along regardless! The rest of the lightning talks were about lisp job hiring at RavenPack, information about the Common Lisp Foundation, an update on the status and plans for the common-lisp.net project, a tool to quickly install and set up CL implementations, and closing off with physics simulations for chocolate.

While the symposium had ended, the journey wasn't over quite yet. I spent the rest of the evening going for a drink with a couple of folks (beach, drmeister, and splittist were there too), where they gave me valuable advice on how to schedule my work. I left them around eight, as I was getting a headache and wanted to head back before it got dark. After relaxing in my hotel room for a brief while and swallowing a pain-killer, I felt well enough again to head back down to the bar. A couple of people once again gathered, and probably talked late into the night. I had to leave them early, even though there wasn't anything that I wanted to do more than stay and talk to them. I was too anxious about oversleeping and messing things up with my journey back home in the morning, so I had to go to bed on time.

Fortunately my alarm clock did its job properly; still tired, I beat myself out of bed, did my packing, and checked out. I then set out for Greenwich Station, which confused me a lot at first, as there was a front entrance that didn't have any clerks, only ticket machines that didn't take my £50 note. After wandering about worried for a good while I found the main entrance, and a very nice clerk printed out a good travel route for me. Thanks to that, my journey to Luton was very comfortable and easy, with the only hiccup being a delay in the Tube line due to some clog-up in the system somewhere. So I arrived at Luton about two hours too early. I spent my time there drinking a hot chocolate and paying five pounds for their WiFi. Getting onto the plane went by very quickly, and I'm quite sure the plane wasn't even half full by the end. There was quite a bit of turbulence during the flight though, so my sketch didn't go as well as the ones from the flight over, though even those weren't great to begin with. Oh well!

I arrived at Zürich airport in the afternoon, euphoric about having successfully undergone such a wild journey, but also still dead tired from travelling, listening, and talking so much. I originally wanted to do post-recordings of my lightning talks to put up on YouTube[8], but I just didn't have the energy to. I couldn't even get myself to stay up until midnight as I usually do, so I just called it an early night. And here we are already!

In conclusion, it was an absolute blast. I really hope that I can do it again next year, but depending on university and similar circumstances, it might not be as easily possible as it was in this case. A tremendous "Thank you!" goes to everyone who attended, and especially those who took the time to write a talk and organise the event. I'm also extremely thankful for all the people I had the chance to speak to personally; I enjoyed every minute of it.

Now that this is all over, I have to focus all the more on university, so that I may pass the coming exams in summer. Due to this I'll also most likely have to heavily cut down the amount of work I can spend on my lisp projects, maybe drop them altogether for a while. Still, if you have anything you'd be interested in knowing or speaking to me about, I'll be available. Send me a mail, hit me up on Twitter, or preferably find me on Freenode's #lisp IRC channel.

[1] Originally three, though one was cancelled due to unfortunate illness. Get well soon, Bodil Stokke!

[2] Mostly Chris Bagley's CEPL, which I hope to use to implement Parasol's shaders with, but Eitaro Fukamachi's talk about Woo also reminded me to finish off the Radiance driver for it, and probably switch to it in production soon.

[3] Whose name escapes me, I'm so sorry! I'm awful at remembering names!

[4] The badges were very stylish. The only mishap was that they didn't include a field for the IRC nickname or Twitter username, and printed the name a tad too small, so you had to squint to read it from afar. Hopefully that'll be corrected for the next one.

[5] Mostly I'm interested in adding GIT capabilities to Quicklisp and allowing some kind of 'version fluidity', analogous to allowing certain systems to be upgraded, while keeping others in place.

[6] I don't think I ever caught her name, but if I did, I must have forgotten again. As I said, I am terrible with names and am very sorry about that. Edit: I've been helpfully informed that her first name is Kathleen.

[7] The videos are going to be put up in a couple of weeks, so that you may experience the excitement as well if you couldn't make it to the conference.

[8] I'll do that later today and link it here.

[9] Now that I'm done writing - which is almost all I did during that time - it's 12 o'clock. If only writing didn't take so much time, I'd do it oh so much more often!


23 Apr 2015 11:48am GMT

Christophe Rhodes: els2015 it happened

Oh boy.

It turns out that organizing a conference is a lot of work. Who'd have thought? And it's a lot of work even after accounting for the benefits of an institutional Conference Services division, who managed things that only crossed my mind very late: signage, extra supplies for college catering outlets - the kinds of things that are almost unnoticeable if they're present, but whose absence would cause real problems. Thanks to Julian Padget, who ran the programme, and Didier Verna, who handled backend-financials and the website; but even after all that there were still a good number of things I didn't manage to delegate - visa invitation letters, requests for sponsorship, printing proceedings, attempting to find a last-minute solution for recording talks after being reminded of it on the Internet somewhere... I'm sure there is more (e.g. overly-restrictive campus WiFi, blocking outbound ssh and TLS-enabled IMAP) but it's beginning to fade into a bit of a blur. (An enormous "thank you" to Richard Lewis for stepping in to handle recording the talks as best he could at very short notice).

And the badges! People said nice things about the badges on twitter, but... I used largely the same code for the ILC held in Cambridge in 2007, and the comment passed back to me then was that while the badges were clearly going to become collectors' items, they failed in the primary purpose of a badge at a technical conference, which is to give to the introvert with poor facial recognition some kind of clue who they are talking to: the font size for the name was too small. Inevitably, I got round to doing the badges at the last minute, and between finding the code to generate PDFs of badges (I'd lost my local copy, but the Internet had one), finding a supplier for double-sided sheets of 10 85x54mm business cards, and fighting with the office printer (which insisted it had run out of toner) the thought of modifying the code beyond the strictly necessary didn't cross my mind. Since I asked for feedback in the closing session, it was completely fair for a couple of delegates to say that the badges could have been better in this respect, so in partial mitigation I offer a slightly cleaned-up and adjusted version of the badge code with the same basic design but larger names: here you go (sample output). (Another obvious improvement suggested to me at dinner on Tuesday: print a list of delegate names and affiliations and pin it up on a wall somewhere).

My experience of the conference is likely to be atypical - being the responsible adult, I did have to stay awake at all times, and do some of the necessary behind-the-scenes stuff while the event was going on. But I did get to participate; I listened to most of most of the talks, with particular highlights for me being Breanndán Ó Nualláin's talk about a DSL for graph algorithms, Martin Cracauer's dense and technical discussion of conservative garbage collection, and the demo session on Tuesday afternoon: three distinct demos in three different areas, each both well-delivered and with exciting content. Those highlights were merely the stand-out moments for me; the rest of the programme was pretty good, too, and it looked like there were some good conversations happening in the breaks, over lunch, and at the banquet on Monday evening. We ended up with 90 registrations all told, with people travelling in from 18 other countries; the delegate with the shortest distance to travel lived 500m from Goldsmiths; the furthest came from 9500km away.

The proceedings are now available for free download from the conference website; some speakers have begun putting up their talk materials, and in the next few weeks we'll try to collect as much of that as we can, along with getting release permissions from the speakers to edit and publish the video recordings. At some point there will be a financial reckoning, too; Goldsmiths has delivered a number of services on trust, while ELSAA has collected registration fees in order to pay for those services - one of my next actions is to figure out the bureaucracy to enable these two organizations to talk to each other. Of course, Goldsmiths charges in pounds, while ELSAA collected fees in euros, and there's also the small matter of cross-border sales tax to wrap my head around... it's exciting being a currency speculator!

In summary, things went well - at least judging by the things people said to my face. I'm not quite going to say "A+ would organize again", because it is a lot of work - but organizing it once is fine, and a price worth paying to help sustain and to contribute to the communication between multiple different Lisp communities. It would be nice to do some Lisp programming myself some day: some of the stuff that you can do with it is apparently quite neat!

23 Apr 2015 10:47am GMT

14 Apr 2015


Vsevolod Dyomkin: Announcing SHOULD-TEST

Once upon a time, it occurred to me that all sound software should be slightly self-ironic. That is how this library's name came into being: yes, you should test even Common Lisp code sometimes. :) But that's not the whole irony...

So, y u makes YATF?

Testing software always fascinated me because it is both almost always necessary and at the same time almost always excessive - it's extremely hard to find the right amount of resources you should allocate to it. You will most likely end up fearing to change your system either because you have too few tests and some of the important scenarios aren't covered, or too many tests and you need to be constantly re-writing them. Surely, in Lisp the problem is not so drastic because in many cases you can rely on the REPL to help, but it's not a one-size-fits-all solution. There's also too much dogma in the space of general error handling in programming (which I addressed a little bit in this post). So, to find out how to test properly, around 7 years ago I wrote my first test framework, which was called NUTS (non-unit test suite). It worked OK, and I used it in a couple of projects, including the huge test suite of CL-REDIS that I'm really proud of. However, it was the first version, and you always have to re-write the first version. :) This is how MUTEST (microtest) appeared. With it, I was aiming to make a tool with the smallest footprint possible. It was also partially inspired by RT, which I consider to be the simplest (with a positive connotation) Lisp test framework (before ST). But both of them, MUTEST and RT, are not lispy because they are not extensible, and it's a shame not to have extensibility in Lisp, which provides excellent tools for building it in.

Well, "version 2 always sucks, but version 3..." So, SHOULD-TEST is version 3, and I'm really happy with it. It's truly minimal and intuitive to the extreme: as in the popular BDD approach, you just write (in Yodaspeak, obviously) should be = 1 this-stuff and then st:test. And it's extensible - you can add specialized assertion strategies to the three basic ones provided: normal testing, exception catching, and capturing output streams.

I wasn't content with the existing Lisp test frameworks because they aren't concerned first and foremost with the things I care about:

These are the 3 things that SHOULD-TEST should do the best.

Over more than a year, I have written or re-written with it the whole test suites for the main open-source libraries I support - RUTILS, CL-REDIS, and CL-NLP (which doesn't yet have an extensive test coverage). And I also use it for all my in-house projects.

Working with ST

Here's a quick overview of the SHOULD-TEST workflow.

Tests are defined with deftest:

(deftest some-fn ()
  (should be = 1 (some-fn 2))
  (should be = 2 (some-fn 1)))

When run, a defined test returns either T or NIL as its primary value. In the NIL case, the secondary and third values are lists of:

should is a macro that takes care of checking assertions. If an assertion doesn't hold, should signals a condition of type should-failed or should-erred, which deftest aggregates. Also, should returns either T or NIL and a list of the failed expression with expected and actual outputs as values.

Under the hood, should calls the generic function should-check, passing it a keyword produced from the first symbol (in this case, :be), a test predicate (here, '=), the tested expression as a thunk (here, e.g. (lambda () (some-fn 1))), and the expected results, if any. If multiple expected results are given, as in (should be eql nil #{:failed 1} (some-other-fn :dummy)), it means that multiple values are expected. As you see, the keyword and test predicate are passed unevaluated, so you can't use expressions there.
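As a sketch of that extensibility, a new assertion keyword could, in principle, be added by specializing should-check. The lambda list below (keyword, predicate, thunk, expected values) is inferred from the description above, not taken from the library's source, so treat this as a hypothetical illustration rather than the actual protocol:

```lisp
;; Hypothetical sketch: a SATISFY assertion that checks the tested
;; expression against a single predicate with no expected values.
;; The SHOULD-CHECK signature here is an assumption based on the
;; prose above and may differ from SHOULD-TEST's real one.
(defmethod should-check ((key (eql :satisfy)) test thunk &rest expected)
  (declare (ignore expected))
  (let ((actual (funcall thunk)))
    ;; Return T/NIL plus the actual value, mirroring SHOULD's
    ;; documented return convention.
    (values (and (funcall test actual) t)
            (list actual))))

;; which would then allow writing, e.g.:
;; (should satisfy evenp (some-fn 2))
```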

The pre-defined types of assertions are be, signal, and print-to. They check correspondingly.

deftest and should write the summary of test results to *test-output* (by default bound to *standard-output*). The var *verbose* (default T) controls whether the summary contains full failure reports or just test names.

Tests are defined as lambda-functions attached to a symbol's test property, so (deftest some-fn ... will do the following:

(setf (get 'some-fn 'test)
      (lambda () ...))

One feature that is pending implementation is establishing dependencies between tests while defining them, i.e. specifying the partial order in which they should be run. However, I haven't seen heavy demand for it in my test code so far.

To run the tests, use test. Without arguments, it runs all the tests in the current package. Given a :package argument, it does the same for that package, and given a :test argument it runs that individual test. If an individual test fails, test returns NIL plus a list of failed assertions and a list of assertions that triggered uncaught errors. If a package's tests fail, it returns NIL and 2 hash-tables holding the same lists as above, keyed by the failed tests' names.
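Concretely, a REPL session using the entry points just described might look like this (:my-app and some-fn are placeholder names; the keyword arguments are the ones stated above):

```lisp
(st:test)                   ; run all tests in the current package
(st:test :package :my-app)  ; run all tests in package MY-APP
(st:test :test 'some-fn)    ; run the single test SOME-FN
```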

As you see, the system uses a somewhat recursive protocol for test results:

So, the structure of the summary, returned from test, will be the following:

failed-test-1 ((failed-assertion-1 expected actual)
               (failed-assertion-2 ...
failed-test-2 ...

There's also a :failed key to test that will re-run only the tests which failed at their last run.

Usage patterns

As SHOULD-TEST is agnostic, it doesn't impose any restrictions on how each project organizes its tests. Yet having established patterns and best practices never hurts. Below is the approach I use...

There's no restriction on naming tests, though it seems like a good approach to name them after the functions they test. As for generic functions, I have different tests for different methods. In this case, I add a suffix to the test's name to indicate which method is tested (like transform-string for the method of the generic function transform that is specialized on the string class of arguments).

As for code organization, I use the following directory structure of the typical project:

| `----module
| `-----file.lisp

I also usually place the tests in the same package as the code they test, but protect them with a #+dev guard, so that in a production environment they are not compiled and loaded at all.
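Concretely, such a guard might look as follows; the reader skips the whole form unless :dev is present on *features*, which one would push in a development-time init file (frobnicate is a placeholder function name):

```lisp
;; In your dev setup, e.g. your Lisp init file:
;; (pushnew :dev *features*)
;; Without :dev on *FEATURES*, the reader discards this form
;; entirely, so nothing is compiled or loaded in production.
#+dev
(deftest frobnicate ()
  (should be = 42 (frobnicate 6 7)))
```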

ASDF provides a way to define a standard entry point for testing a system, which can be invoked with asdf:test-system. The easiest way to hook into this facility is to define the following method for asdf:test-op somewhere, either in package.lisp or in some common file in the test module (in the example above: some-general-tests.lisp):

(defmethod asdf:perform ((o asdf:test-op)
                         (s (eql (asdf:find-system <your-system>))))
  (asdf:load-system <your-system>)
  (st:test :package <your-package>))

There's also a minimal test suite defined in src/self-test.lisp. The test suite is also hooked to asdf:test-op for the should-test system - just as described above :)

Finally, there's an idea that ST will provide useful connector facilities that are mostly lacking in the existing Lisp test frameworks, to be able to integrate into the general testing landscape (primarily, CI systems). As a start, xUnit support was implemented by us (most of the thanks go to Maxim Zholoback). As it often happens, it was actually almost impossible to find the proper xUnit spec, but this SO answer saved the day for us. test-for-xunit generates an appropriate XML string to *test-output*. I also plan on implementing TAP support some day (it should be pretty easy, actually), but I'm not in a hurry.

Well, if SHOULD-TEST proves useful to some of you, I'd be glad. Enjoy the hacking!

14 Apr 2015 1:41pm GMT

13 Apr 2015


François-René Rideau: Common Lisp as a Scripting Language, 2015 edition

The first computer I used had about 2KB of RAM. The other day, I compiled a 2KB Common Lisp script into a 16MB executable to get its startup (and total execution) time down from 2s to subjectively instantaneous - and that didn't bother me in the least, for my current computer has 8GB of working memory and over 100GB of persistent memory. But it did bother me that it didn't bother me, for 16MB was also the memory of the first computer on which I felt I wasn't RAM-starved: I could run an X server, an Emacs editor and a shell terminal simultaneously without swapping! Now an entire comfortable software development universe could be casually wasted over a stupid optimization - that I have to care about because software systems still suck. And to imagine that before sentientkind reaches its Malthusian future, code bumming will have become a popular activity again...

Background (skip to the next paragraph if you don't care for hardware war stories): I just returned my work laptop of many years (a Lenovo Thinkpad X230), because of various hardware issues I was starting to experience: mostly a bad connection with the batteries, at times causing the machine to shut down at the least auspicious moment, in addition to the traditional overheating and the wifi card that often failed to connect, requiring the wpa_supplicant daemon to be killed. I liked the Thinkpad form factor a lot, but my employer wasn't offering Thinkpads anymore, so I opted instead for a slim HP EliteBook Folio 1040; the form factor is obviously inspired by the MacBook Air, except it was running a Linux system whereby I was master of my ship. Now, the EliteBook has a touchpad that is particularly bad, even worse than the Thinkpad's in being triggered all the time by my thumb as I type; I decided to disable it immediately, just like I eventually did with the Thinkpad's; however, unlike the Thinkpad, the EliteBook doesn't have a "clit" interface to supplement the touchpad. Therefore I had to toggle the touchpad on and off instead of permanently disabling it. A Google search quickly found a shell script to toggle the touchpad, and instructions on how to map Penguin-Space to it (the Penguin is Super, much more so than the Windows it replaces). But the shell script frankly made me puke, and I decided to rewrite it in Lisp, which yielded a very nice program less than 2KB long...

Indeed, for several years now, I've been peddling the use of Common Lisp as a scripting language: the combination of syntactic abstraction, higher-order functions and an advanced object system, the relatively simple semantic model allowing for efficient compilation, the robust compilers with decent performance and portability to all platforms that matter, and the support for interactive debugging and structured editing - all put it years ahead of all the other dynamically typed scripting languages in common use (shell, perl, python, ruby, javascript), even though it was initially developed years or decades before them. However, until recently, it was missing a few bits to be usable as a scripting language, and I am proud of having hammered the last few nails into the coffin: zero-configuration when looking for source libraries, zero-management in storing compiled outputs, portable invocation from other programs, portable invocation of other programs - if you implement these, you too can make your favorite programming language suitable for "scripting".

Well, I recently added an extra nail to the coffin, that addresses the remaining tradeoff between startup times and memory occupancy: it is now possible to easily share a dumped image between all the scripts you need, to achieve instant startup without massive bloat of either working memory or persistent storage. Admittedly, you could already do it semi-portably on SBCL and CCL using Xach's buildapp; but now you can do it fully portably on all implementations using the cl-launch utility that you would use to invoke the program as a script.

The portable way to write a Common Lisp script is to use cl-launch, typically via a #!/usr/bin/cl script specification line when you're using Unix. However, when launching a script this way, even a relatively simple program can take one to several seconds to start: the Lisp compiler locates and loads all the object files into memory, linking the code into symbol, class and method tables; and this somehow takes a non-negligible amount of time even when the files were precompiled, because compilers were never optimized to make this fast; indeed the typical Lisp hacker only recompiles and reloads one file at a time at the interactive REPL, and doesn't often reload all the files from scratch. By installing ASDF 3.1.4 over the one provided by SBCL using the provided install-asdf.lisp script, and by using the provided cl-source-registry-cache.lisp script to avoid searching through my quite large collection of CL source code, I could get the startup time down to around .7 or .8s, but that was still too much. This is fine for computation-intensive and/or long-running programs for which startup latency doesn't matter. But it makes this solution totally impractical for interactive scripts where execution latency is all-important, as compared to other scripting languages that, while inferior as languages, at least start up instantaneously in subjective time.
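For concreteness, here is what such a script can look like - a minimal sketch based on cl-launch's documented #!/usr/bin/cl shebang support, where the entry function name is my own invention (the script body is printed through a here-document only so that this example is self-contained even without cl-launch installed):

```shell
#!/bin/sh
# Print a minimal sketch of a cl-launch script. The script itself is Lisp;
# "-E main" asks cl-launch to call MAIN with the command-line arguments.
script=$(cat <<'EOF'
#!/usr/bin/cl -E main
;; cl-launch invokes MAIN with the list of command-line arguments.
(defun main (argv)
  (format t "Hello, ~{~A~^ ~}!~%" argv))
EOF
)
printf '%s\n' "$script"
```

Saved as an executable file, such a script runs directly from the shell - at the cost of the startup pause discussed above.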

/bin/sh or perl executes an empty command in about 5 ms of wall-clock time, python in about 18 ms (all timings and sizes are rough averages and estimates on my current Linux x86-64 laptop). Without my portability infrastructure, you can also do the same with sbcl in 10 ms or clisp in 15 ms, but then you lose the portability and are either restricted to not using any software library, or are back in non-portable configuration and compilation hell, in addition to having the same slow-loading issue. With such a startup pause, Common Lisp might remain somewhat suitable for scripting, unlike the vast majority of compiled programming languages, which require an explicit compilation step with non-trivial configuration of source and object files; still, it finds itself unsuitable for producing scripts destined for use as instantaneous interactive commands outside its own autistic interactive development environment.
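These numbers are easy to reproduce on your own machine; here is a rough sketch of one way to measure them (GNU date's %N is assumed; results vary with hardware and load, so treat the figures in the text as indicative, not reproducible constants):

```shell
#!/bin/sh
# Rough wall-clock startup measurement for an "empty" program in each
# interpreter, skipping interpreters that aren't installed.
measure() {
  start=$(date +%s%N)
  "$@" >/dev/null 2>&1
  end=$(date +%s%N)
  printf '%s: %d ms\n' "$1" $(( (end - start) / 1000000 ))
}
command -v sh      >/dev/null && measure sh -c true
command -v perl    >/dev/null && measure perl -e 1
command -v python3 >/dev/null && measure python3 -c pass
true
```

A fairer benchmark would run each command several times and take the minimum, but even this crude version shows the order-of-magnitude differences that matter here.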

Now, all serious Common Lisp implementations also allow you to dump a memory image, with all the code already loaded and linked, and such images start quite fast: about 20 ms for a fully loaded image on sbcl, about 35 ms on clisp; and you can portably dump an image using my cl-launch utility by just adding --output /path/to/executable --dump ! to the very same command you'd use to start a script. Thus, at the expense of an extra but trivial build step that takes many seconds, but only once, you can portably transform your slow-starting scripts into a precompiled executable that will have startup time competitive with other scripting languages, and efficiency competitive with other compiled languages.
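Concretely, the build step looks something like this - a hedged sketch, where the system name and entry function are hypothetical placeholders and the option spellings follow the description above:

```shell
#!/bin/sh
# Sketch: compile a cl-launch script into a standalone dumped-image
# executable. "my-script-system" and "my-script:main" are hypothetical names.
build_executable() {
  cl-launch --lisp sbcl \
            --system my-script-system \
            --entry "my-script:main" \
            --output "$1" --dump !
}
if command -v cl-launch >/dev/null 2>&1; then
  build_executable /tmp/my-script && msg="built /tmp/my-script" \
    || msg="build failed (expected: my-script-system is a placeholder)"
else
  msg="cl-launch not installed; the invocation above is illustrative"
fi
echo "$msg"
```

The resulting executable can then replace the interpreted script wherever startup latency matters.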

The problem is that such an image has a significant overhead in terms of space: an empty cl-launch program has an image of 13MB on CLISP, 28MB on CCL, or 52MB on SBCL (which isn't that bad when you consider it contains the entire compiler and basic libraries - GCC is bigger than that!); an image with all the code I want loaded takes 27MB on CLISP, 50MB on CCL, 82MB on SBCL. A poly-deca-megabyte image file is no big deal; the biggest of these images is 1% of the memory of my laptop, so by today's standards it's a small additive overhead. But if you need one image per script, then 80MB of memory to execute a 2KB script is a multiplicative factor of 40,000 in memory waste - and that is not acceptable if, like me, you want to replace lots of small shell scripts with Common Lisp code. Compare that to the incremental space expenditure for each additional 1KB of scripting code, which is typically between 1KB and 10KB of additional image size, a reasonable factor of 1 to 10. This suggests an obvious solution: share the image-dumping expenditure between all your CL scripts, so that the space overhead is back to being a negligible additive overhead and a reasonable multiplicative factor, instead of an outrageous multiplicative factor.

busybox popularized the old concept of a multi-call binary: a single executable that behaves differently depending on the name under which it was invoked, such that by pointing multiple symbolic links (or hardlinks) at the same program, you can replace multiple different binaries with a single one, benefitting from both the sharing effects of dynamic linking and the optimizations of static linking. The same can be done for Common Lisp code. Xach's buildapp already lets you do that, first on SBCL and later on CCL, using its option --dispatched-entry. I just enriched cl-launch 4.1.2 to support the very same interface on all of its 12+ supported implementations (well, the same interface, modulo a different treatment of corner cases). Now, I already share the same executable between 7 personal scripts, and will use only CL for new scripts while slowly migrating all my old scripts.
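The underlying argv[0] trick is easy to see in miniature; here is a plain-shell sketch of the same dispatch idea that --dispatch-entry provides for Lisp entry functions:

```shell
#!/bin/sh
# Multi-call dispatch in miniature: one script, several names.
# A dispatcher inspects basename "$0" and two symlinks give it two behaviors.
dir=$(mktemp -d)
cat > "$dir/dispatch" <<'EOF'
#!/bin/sh
case "$(basename "$0")" in
  hello)   echo "Hello!" ;;
  goodbye) echo "Goodbye!" ;;
  *)       echo "unknown name: $(basename "$0")" ;;
esac
EOF
chmod +x "$dir/dispatch"
ln -s dispatch "$dir/hello"
ln -s dispatch "$dir/goodbye"
hello_out=$("$dir/hello")
goodbye_out=$("$dir/goodbye")
echo "$hello_out"    # Hello!
echo "$goodbye_out"  # Goodbye!
rm -rf "$dir"
```

With a dumped Lisp image, the dispatcher is the executable itself and each name maps to a registered entry function, so all the scripts share one copy of the compiler, libraries and linked code.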

The feature was a hundred lines of code total, including comments, documentation and a new cl-launch-dispatch.asd file; the Lisp support for this feature is only loaded on demand if you use --dispatch-entry, at which point it is marginally free to load a tiny additional ASDF system. I love how Common Lisp lets me implement this feature in such a modular way. Here is the documentation:

  • If option -DE --dispatch-entry is used, then the next argument must follow the format NAME/ENTRY, where NAME is a name that the program may be invoked as (the basename of the uiop:argv0 argument), and ENTRY is a function to be invoked as if by --entry when that is the case. Support for option -DE --dispatch-entry is delegated to a dispatch library, distributed with cl-launch but not part of cl-launch itself, by
    1. registering a dependency on the dispatch library as if --system cl-launch-dispatch had been specified (if not already)
    2. if neither --restart nor --entry was specified yet, registering a default entry function as if by --entry cl-launch-dispatch:dispatch-entry.
    3. registering an init-form that registers the dispatch entry as if (cl-launch-dispatch:register-name/entry "NAME/ENTRY" :PACKAGE) had been specified where PACKAGE is the current package. See the documentation of said library for further details.
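Putting the pieces together, a multicall build might look like the following (shown but not executed here: the "my-scripts" system and its entry functions are hypothetical placeholders; the option spellings follow the documentation above):

```shell
#!/bin/sh
# Illustrative command line for building one multicall executable serving
# two scripts, then installing it under each name via symlinks.
cmd='cl-launch --output ~/bin/multi --dump ! --system my-scripts \
  --dispatch-entry "hello/my-scripts:hello" \
  --dispatch-entry "goodbye/my-scripts:goodbye" \
&& ln -s multi ~/bin/hello && ln -s multi ~/bin/goodbye'
printf '%s\n' "$cmd"
```

After that one build, invoking ~/bin/hello or ~/bin/goodbye dispatches to the corresponding entry function out of the same shared image.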

Now, this is a great workaround, but it doesn't fully solve the original issue. To completely solve it, an obvious strategy would be for some implementation to radically optimize the loading of compiled objects (so-called FASL files, for FASt Loading - which, some jest, should instead stand for SLOw Loading), so that it becomes actually fast. For instance, the compiler could produce a prelinked object that optimistically assumes it knows the load address, that there will be no conflict in symbol tables, class and method definitions, etc., and at runtime patches only a minimal set of pointers in the usual case. Doing this for 12+ implementations is not doable, but doing it in just one would suffice, say SBCL or CCL. Alternatively, an "incremental image" feature might do, whereby one could dump all the symbols in some set of packages and not others, with associated functions, classes, etc.; it would require a minor change in programmers' habits, though, so is less likely to happen. But any such complete solution will require hacking into the guts of a CL implementation, and that's no small undertaking.

Assuming we are not going to improve the underlying implementations, a more long-winded "solution" might be to extend the workaround until it becomes a solution: enabling the automatic sharing of executables between all the programs that matter. The old Common-Lisp-Controller from Debian could be resurrected to create shared images and/or shared executables for software installed by the system's package manager; a similar mechanism could declaratively manage all the programs of a given user (possibly layered on top of the above, when available). This might require some tweaks to ASDF so that it doesn't try to rebuild pre-built software from system-managed directories using system-managed implementations, but compiles the usual way when there is a user-specified upgrade, when the software wasn't built, or when the implementation isn't system-managed. Importantly, there must not be an insecure writeable system-wide FASL cache (i.e., revert to a per-user cache when any write access is required, or somehow talk to a trusted daemon that compiles trusted sources with trusted compilers). This workaround through system management is somewhat ugly, though.

Note that these issues do not affect Common Lisp developers who run the functionality provided by these scripts from the Common Lisp REPL; they can already do that. They only affect users who invoke these scripts from the shell command line or from other, non-Lisp programs. To a Common Lisp developer who needs such a use case, the solution to these issues is now trivial thanks to this new cl-launch feature. But these issues do make it hard for people to publish scripts that will "just work" for end-users - an end-user being someone who shan't be required to manage an installation or configuration step. These end-users will have to either suffer a multi-second pause at startup, or be burdened with a poly-deca-megabyte executable for every script or set of related scripts they use. And so, the temporary conclusion is that while Common Lisp is in many ways far ahead of the competition with respect to being a low-overhead "scripting language", it does at the moment have an issue that puts it at a disadvantage against this competition in one crucial way: deployment to end-users.

13 Apr 2015 5:51pm GMT

08 Apr 2015


Quicklisp news: April 2015 Quicklisp dist update now available

This Quicklisp update is supported by my employer, Clozure Associates. If you need commercial support for Quicklisp, or any other Common Lisp programming needs, it's available via Clozure Associates.

New projects:

Updated projects: 3bmd, access, antik, arrow-macros, buffalo, buildapp, chanl, cl-ana, cl-annot, cl-ansi-term, cl-async, cl-charms, cl-dbi, cl-dot, cl-factoring, cl-gobject-introspection, cl-grace, cl-indeterminism, cl-launch, cl-libyaml, cl-mlep, cl-mtgnet, cl-mw, cl-netstring-plus, cl-openid, cl-ply, cl-python, cl-quickcheck, cl-random, cl-readline, cl-reddit, cl-redis, cl-rlimit, cl-sdl2, cl-slug, cl-svg, cl-syntax, cl-tcod, cl-vectors, cl-voxelize, cl-yaml, clack, classimp, clavier, clhs, clim-widgets, clinch, clipper, clos-fixtures, closer-mop, clsql-helper, clx, codata-recommended-values, colleen, com.informatimago, common-doc, common-doc-plump, common-html, commonqt, croatoan, defclass-std, djula, drakma, eazy-project, esrap-liquid, fare-memoization, fast-http, fast-io, femlisp, gendl, graph, gsll, hdf5-cffi, hl7-client, hl7-parser, http-body, hu.dwim.util, hunchentoot, hyperluminal-mem, integral, interface, introspect-environment, js-parser, jsown, jwacs, lass, let-over-lambda, lfarm, linedit, lisp-interface-library, lisp-invocation, lisp-namespace, local-time, lucerne, magicffi, mcclim, media-types, mgl-pax, mk-string-metrics, nibbles, ningle, nst, plump, protobuf, qtools, quri, quux-time, racer, rutils, scalpl, scriptl, sdl2kit, serapeum, shellpool, simple-rgb, skippy, slime, st-json, staple, stumpwm, symbol-munger, trivial-backtrace, trivial-benchmark, trivial-debug-console, trivial-download, trivial-update, umlisp, verbose, vertex, vgplot, weblocks-stores, weft, workout-timer, xmls, zaws, zcdb, zpb-exif, zpng.

Removed projects: autoproject, brlapi, cambl, cl-couch, cl-ledger, hctsmsl, nekthuth, red-black.

To get these updates, use (ql:update-dist "quicklisp").


08 Apr 2015 4:38am GMT

06 Apr 2015


Quicklisp news: March 2015 download stats

Here are the top 100 downloads for last month:

5853 alexandria
4149 cl-ppcre
3465 closer-mop
3382 trivial-features
3296 babel
3086 named-readtables
2977 cffi
2814 flexi-streams
2795 bordeaux-threads
2785 trivial-gray-streams
2784 cl+ssl
2722 cl-fad
2641 trivial-garbage
2315 nibbles
2313 usocket
2277 chunga
2251 anaphora
2235 cl-base64
2218 optima
2142 split-sequence
1942 ironclad
1839 puri
1803 fiveam
1768 iterate
1753 drakma
1729 chipz
1641 cl-colors
1629 trivial-backtrace
1586 local-time
1559 md5
1493 slime
1486 let-plus
1401 fare-utils
1392 fare-quasiquote
1340 cl-ansi-text
1190 prove
1180 hunchentoot
1098 cl-unicode
1072 trivial-types
1053 rfc2388
1021 metabang-bind
1010 cl-utilities
986 cl-interpol
961 cl-syntax
950 trivial-utf-8
889 introspect-environment
846 quicklisp-slime-helper
827 cl-annot
802 parse-number
757 st-json
716 quri
710 osicat
709 postmodern
686 cl-marshal
658 trivial-mimes
654 xsubseq
651 lparallel
648 plump
645 jsown
641 asdf-system-connections
614 ieee-floats
613 trivial-indent
605 metatilities-base
604 uuid
603 array-utils
601 cl-containers
596 cl-json
590 lquery
584 fast-http
572 cl-sqlite
567 salza2
556 clss
538 clack
500 static-vectors
495 clx
490 command-line-arguments
464 cl-markdown
463 py-configparser
462 dynamic-classes
455 circular-streams
446 asdf-finalizers
446 zpb-ttf
446 cl-log
445 fast-io
443 cl-abnf
442 garbage-pools
440 buildapp
435 cl-mssql
425 cl-who
399 zpng
397 esrap
395 http-body
394 vecto
391 cl-csv
389 cl-vectors
388 iolib
358 closure-common
353 lisp-namespace
351 cl-opengl
351 cl-dbi

06 Apr 2015 4:18pm GMT

Zach Beane: Go-to libraries

I got a lot of interesting responses last week to my request for libraries. Many people sent me their CL library stacks, with commentary. I don't want to list every single library, but a few stood out with multiple mentions from different people. Here they are.

cl-store - I reach for it when a problem is in the gap between simple serializing with s-expressions and complex serialization with a custom scheme.

cl-smtp - for sending mail via SMTP. cl-pop and smtp4cl are also used by multiple people.

plexippus-xpath and xuriella - for XPath and XSLT. DJ wrote: "I am writing a CL library system of sorts that grabs data from many online services (Library of Congress and many others). These services mostly supply XML, so getting at the content I want requires use of xslt and xpath. Sometimes json is available, but since I am fairly good at xslt/xpath from work a decade ago, why bother?"

clsql - for database access

serapeum - for utilities

uiop - for filesystem access; cl-fad was also plugged a few times

html-template, cl-who, and parenscript - for client-side HTML and JavaScript generation

closure-html - for parsing HTML

usocket - for portable networking

log5 and cl-log - for logging

lparallel - for managing parallel tasks via threads

split-sequence - string splitting

ironclad - for cryptographic operations

local-time - for working with dates and times

Multiple people plugged Quicklisp as the way to get their favorite libraries, which always makes me happy.

There were a few test libraries mentioned, but not as many as I expected. If you have a favorite way to test your code, and you think everyone should know about it, let me know.

06 Apr 2015 2:56pm GMT

29 Mar 2015


Lispjobs: Clojure/Clojurescript Web Developer, Kira Systems, Toronto or Remote

We're looking for a developer to work on our Clojure/ClojureScript/Om web stack. Our team is small, pragmatic, and inquisitive; we love learning new technologies and balance adoption with good analysis. We prefer to hire near us, but also welcome remote work in a time zone within North America.

Web technology can be built better. If single-page web design driven by a reactive data model sounds interesting to you, get in touch!

Technologies we use:

You should have knowledge of some of these. Most of all we look for those interested in learning.

This position starts immediately.

Application link: Clojure/Clojurescript Web Developer

29 Mar 2015 10:36am GMT

27 Mar 2015


Zach Beane: Your go-to libraries

Here are some of the libraries I reach for when I want to do stuff:

What are some of the libraries you use for day-to-day programming tasks? Email me and I'll summarize next week.

27 Mar 2015 12:52am GMT