18 Nov 2014


Patrick Stein: Alternatives (v0.2.20141117) Released

I have uploaded a new version of my Alternatives library. In addition to the ALTERNATIVES macro, there is an ALTERNATIVES* macro which allows one to specify a name for the set of choices. Then, one can check the DOCUMENTATION to see which alternative was last macroexpanded.

(defun random-letter ()
  (alt:alternatives* random-letter-algorithm
    ***
    (:constant-a
     "Always pick the letter A."
     #\A)

    (:uniform
     "Choose any letter with equal probability."
     (random:random-elt "ABCDEFGHIJKLMNOPQRSTUVWXYZ"))))

(documentation 'random-letter-algorithm 'alt:alternatives)
=> "CONSTANT-A
Always pick the letter A."

18 Nov 2014 2:44am GMT

16 Nov 2014


Nicolas Hafner: Retrospective 731520 Minutes - Confession 39

It's apparently been just 508 days since I first joined GitHub. In that time I've written a lot of Common Lisp code and made around 4000-5000 commits. I now want to take a retrospective look and go over all the projects I've started. I'll omit some of the smaller, uninteresting ones though.

The projects are listed very roughly in the order I remember creating them. I can't recall the exact sequence, so things might be all over the place, but it matters not. An approximate order like this is sufficient.

lquery

This was my first big CL project that I started as I was investigating tools for radiance. Radiance already began conceptually before this, but I didn't write significant enough code for it to count. lQuery tries to bring the very convenient jQuery syntax for manipulating the DOM to CL. I did this because I knew jQuery and I did not find the alternatives very appealing. Initially it was supposed to help with templating for the most part, but it turned out to be more useful for other tasks in the end.
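
For a taste, here is roughly what that chaining looks like. This is a sketch from memory of lQuery's README, so the exact forms may differ between versions:

(ql:quickload :lquery)

;; Parse a document, select nodes with a CSS selector, then chain
;; jQuery-style operations over the whole node set.
(lquery:$ (initialize "page.html")   ; load and parse into a DOM
          "article p.intro"          ; select nodes, CSS-style
          (add-class "highlight")    ; manipulate them in place
          (text))                    ; and extract their text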

The first version of lQuery was written in a hotel room in Japan during my one-week holiday there. Time well spent! Don't worry though, I got out often enough as well.

lquery-doc

lQuery was also my first library to be published and distributed via Quicklisp, so I needed it to have easy-to-read documentation. Docstrings are great, but I wanted that information to be on the documentation page as well, so I looked for libraries that allowed me to bundle that somehow. Given that I couldn't find anything I liked, I quickly wrote up my own thing that used lQuery to generate the page. It was a matter of some hours and made me very pleased at the time.

radiance

Radiance is sort of the reason I really got into CL to begin with. The previous versions of the TyNET framework that my websites run on were written in PHP, and I got really sick of the source code, so I never really went to fix bugs or make necessary improvements. Things worked, but they didn't work great.

As I picked up CL I had to look for a project to really get used to the language and rewriting my framework seemed like the first, obvious step. I wanted Radiance to become a mature, stable and usable system that other people could profit from as well. So, unlike in previous attempts I tried to take good care to do things right, even if my understanding of the language at that point was questionable at best.

One and a half years and almost a complete re-write (again) later, I still don't regret choosing this as my major project, as I'm now fairly confident that it will become something that people can use in the future. It's not quite there yet, but well on its way.

piping

I dislike breakpoints and love good logging, so the next step Radiance demanded was a good logging solution. I first tried my hand at log4cl, but didn't quite like it, mostly because I couldn't figure out how to make it work the way I wanted. So, rolling my own it was. I wanted something very flexible, so I thought up a pipeline system for log message processing and distribution.

That became this library: a very small thing that allowed you to create pipelines (albeit in a cumbersome fashion) that could be used to process and distribute arbitrary messages.
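
To illustrate the idea (this is not piping's actual API, just a toy sketch of the concept): a message travels through a chain of segments, each of which may transform it, swallow it, or pass it along.

;; Illustrative only; piping's real API works with pipeline objects,
;; not bare closures.
(defun make-toy-pipeline (&rest segments)
  (lambda (message)
    (reduce (lambda (msg segment)
              (when msg (funcall segment msg)))
            segments :initial-value message)))

(defparameter *pipe*
  (make-toy-pipeline
   (lambda (msg) (when (search "ERROR" msg) msg)) ; filter segment
   (lambda (msg) (format t "~&~a~%" msg) msg)))   ; printer segment

(funcall *pipe* "ERROR: disk on fire") ; printed
(funcall *pipe* "INFO: all is well")   ; swallowed by the filter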

verbose

From there I went on to write the actual logger mechanisms, including threading support. Verbose was the result, and I still use and like it today.

cl-wiki

For a while then I was occupied with writing a bot for the Encyclopedia Dramatica wiki that would handle new registrations and bannings by adding templates to the user pages. To make this possible I checked out a few IRC libraries and wrote a crude thing that would sit in a channel and accept simple commands.

In order for it to actually do its thing though, I had to interact with the MediaWiki API, so I wrote a tiny wrapper library around some of the calls I needed. I never put this on Quicklisp because it was never fleshed out enough to be there, and it still isn't. Maybe some day I'll revise it into a properly usable thing.

xencl

After I finished the bot I wanted to extend it to be able to interact with the forums of ED, which ran on XenForo. Unfortunately that forum offered absolutely zero APIs to access. There was a plugin, but I couldn't get the admins to install it as the forum was apparently so twisted that doing anything could make it crash and burn. Oh well.

So, I set out to do it the classic way: parsing webpage content. Thanks to lQuery this was not that huge of a pain in the butt, but it still took a lot of fiddling to get things to run. This library too is not on QL, as it is a big hack and far from complete as well.

colleen

At this point I'm really unsure about the order of the projects. Either way, the little bot project I made for ED was a mess and I wanted a proper bot framework to replace my previous bot, Kizai. As I wasn't impressed by the available IRC libraries either, I wrote Colleen from scratch.

Colleen is still being worked on every now and again today, but (with some bigger and smaller rewrites along the way) it has proven to be a very good framework that I am very glad I took the time to write.

plaster

In order to test out Radiance and because I was sick of pastebin as a paste service, I set out to write my own. This, too, has proven to be a good investment of my time as I still use plaster as my primary pasting service today. There's a few things I'd like to improve about it whenever I do get the time to, but for the most part it just works.

chirp

At some point I noticed that I'd like to have twitter interaction for some of my web-services, so I looked around for API interfaces for that. However there wasn't anything that really worked well. So, once more I went to write something that fit my needs.

This was my first really frustrating project to get going, mostly because figuring out how OAuth is supposed to work is a huge pain. Web APIs are some of the worst things to interact with, as often enough there is absolutely no way to figure out what exactly went wrong, so you're left stumbling in the dark until you find something that works.

Even though I haven't really used Chirp much myself, it seems to have been of use to a couple of people at least, if GitHub stars are anything to go by.

south

Since OAuth is a recurring pattern on the web and it was sufficiently painful to figure out for Chirp, I split that part out into its own library. I'm not sure if anyone aside from myself has used South for anything though.

Universal-Config

During one of my rewriting iterations of Colleen I noticed that a very common pattern was to save and load some kind of storage. Moving that pattern out into the framework and thus automating configuration and storage seemed like a good idea. However, since Colleen was also an end-user application, I needed to make sure that the configuration could be saved in a format that the user wanted, rather than simple sexprs.

And that's what Universal-Config is supposed to do: generalise the access of configuration as well as its storage. It works really well on the code side; accessing and changing the config is very simple and convenient. It only works so-so on the storage side of things though, as I had to make some gross compromises in the serialisation of objects to ensure compatibility between formats.

Maybe some day I'll figure out a smarter solution to the problems UC has.

deferred

Deferred was an attempt at providing mechanisms for optional features in your code, meaning that your code would adapt depending on which libraries are loaded at the time. That way I could, for example, provide local-server-based authentication in South without explicitly requiring Hunchentoot or some other webserver. Deferred is more a proof-of-concept than anything though, as I haven't actually utilised it in any of my projects.

However, the problem is an interesting one and whenever I do return to it, I want to try to tackle it from a different angle (extending ASDF to allow something like optional dependencies and conditional components).

plump

The first version of lQuery used Closure-HTML, CXML, and css-selectors to do most of the work. However, CHTML and CXML suffered from big problems: CXML would not parse regular HTML (of course) and CHTML would not parse HTML5, as it required a strict DTD to conform to. css-selectors' performance wasn't the greatest either.

So, in order to clean up all these issues I set out to write my own HT/X/ML parser that would be both fast and lenient towards butchered documents. Well, fast it is, and lenient it is as well. Plump is probably my best project so far, in my opinion, as its code is straightforward and extensible, and it just does its job very well.

CLSS

The next step was to build a CSS-selectors DOM search engine on top of Plump. This turned out to be quite simple, as I could re-use the tools from Plump to parse the selectors and searching the DOM efficiently was not that big of a deal either.

After these two were done, the last job was to re-write lQuery to work with the new systems Plump and CLSS provided. The re-write was a very good idea, as it made lQuery a lot more extensible and easier to read and test. It was quite funny to read such old code, after having worked with CL for about a year by then.

clip

The templating engine I had used in Radiance so far was a combination of lQuery and "uibox", which provided some very crude tools to fill in fields of nodes on the DOM. I didn't like this approach very much, as there was too much lQuery clutter in the code that should've been in the template.

Clip now provides a templating system that hasn't been done in CL before, and I don't think has really been done ever. All the code that manipulates your template is in the template itself, but the template is a valid HTML5 document at all times. The trick is to take advantage of what HTML already allows you to do: custom tags and attributes. Clip picks those up, parses them, and then modifies the DOM according to their instructions. All you have to do in your CL code is pass in the data the page needs.
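
As a hedged sketch of what that looks like in practice; the lquery attribute and the exact entry points are from memory of Clip's documentation and may not match the current API:

(ql:quickload '(:clip :plump))

;; The template is plain, valid HTML; the instruction lives in an
;; attribute that Clip picks up and executes against that node.
(plump:serialize
 (clip:process (plump:parse "<p lquery=\"(text message)\">placeholder</p>")
               :message "Hello from Clip!"))
;; prints something like: <p>Hello from Clip!</p>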

staple

lQuery-Doc left a lot to be desired, so another rewrite was in order. This time I took advantage of Clip's capabilities to provide a very straight-forward, no-bullshit tool to generate documentation.

The only drawback it has currently is that its default template doesn't have the greatest stylesheet in the world, but that hardly bothers me. Maybe I'll get to writing a fancy one some day.

parasol

I always wanted to write my own painting application, mostly because MyPaint and others were never completely to my liking. I even made attempts at this before in Java. At some point, out of curiosity, I looked into how I would go about grabbing tablet input. Investigating the jPen library brought me absolutely nothing but confusion, so I looked for other ways. Luckily it turned out that Qt already provided a built-in way to grab events from tablets, and from previous experience with a minor project I knew that CommonQt allowed me to use Qt rather easily from CL.

So, what started out as a quick test to see whether it would even be possible to make a painting application quickly turned into a big thing that had a lot of potential. You can read more about it here.

modularize

A lot of time had passed since I last worked on Radiance. I took time off as I noticed that the framework had turned into something uncanny and I needed to fix that. And the way to fix it was to write a lot of design drafts and work out all the issues that came to mind on paper.

My conclusion after all this was: Radiance needed a complete, from-scratch rewrite. Oh boy. The first part that needed to be done was a proper library to provide the encapsulation into modules. Modules are Radiance's primary abstraction; they allow you to neatly separate parts, but also unify the access and interaction between them.

Modularize was the solution for this and it works pretty well. In fact, it works so well that I don't even think about it anymore nowadays; it just does its job as I expect it to. Aside from Modularize itself I wrote two extensions that tack on support for triggers and the much-needed interfaces-and-implementations mechanism that is vital to Radiance. I won't explain what these do exactly right now; that'll be for when I write the comprehensive guide to Radiance.

reader

After a long time of rewriting Radiance's core and contribs, it was time to rewrite another component from the old version of TyNET: my blog. This time I tried to focus on simplicity and getting it done well. Simple it is indeed, it's barely 200 lines of code. And as you can probably see as you read this, it works quite nicely.

LASS

Writing CSS is a pain in the butt, as it involves a lot of duplication and other annoyances. At some point I had the idea of writing a Lisp to CSS compiler. Taking inspiration from Sass this idea grew into LASS in a matter of.. a day or two, I think?
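
A small example of the idea, using LASS's compile-and-write entry point (output formatting paraphrased):

(lass:compile-and-write
 '(nav
   :background "#333"
   (a :color white
      :text-decoration none)))
;; => CSS along the lines of:
;;    nav { background: #333; }
;;    nav a { color: white; text-decoration: none; }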

I now use LASS for all of my style sheet writing concerns as it just works very well and with some minor emacs fiddling I don't even have to worry about compiling it to CSS myself.

humbler

Sometimes Xach would talk on IRC about wanting to interact with Tumblr through CL. As Tumblr is a service I use too, and the biggest hurdle (OAuth) was already handled by South, I took up the challenge of writing yet another web-API client.

Humbler turned out a lot nicer than Chirp did in terms of end-user experience, I would say. However, I cannot at all say the same about my own experience while writing it. Tumblr's API "documentation" is quite bad, to be blunt. A lot of the returned fields are not noted on the page, some things are plain wrong (probably out of date), and in general there's just not enough actually being documented. The worst part was the audacity of the staff proclaiming in a blog post that they wanted to "encourage experimentation", as if having to figure out the API by yourself were actually a great thing.

Anyway, I haven't actually used Humbler for anything myself, but some people seem to be using it and that's good enough for me.

ratify

Returning to Radiance problems, one of the recurring issues was validating user input. There didn't seem to be a library that did this in any way, and so the same old game of 'write it yourself' began. Ratify's development mostly consisted of reading RFCs, trying to translate them into tests, and finally bundling it all together in some easy-to-use macros.

dissect

On twitter I encountered a really nice screenshot of an error page from some Clojure project. I couldn't find the tweet again later, so I don't know what features it had exactly, but suffice it to say it was better than anything I'd seen up to that point.

That led me to wonder how I could actually get the stack trace myself when an error occurred. There was already a library that provided rudimentary support for that, trivial-backtrace. Taking a look at its source code filled me with anything but esteem though, so I set out to write something that would allow people to inspect the stack, restarts, and accompanying source code easily.

softdrink

A quick question by eudoxia on twitter inspired me to write a very quick toolkit to extract and infuse CSS from/into HTML. The main use case for the former would be to turn HTML into HTML+CSS and the latter to reverse the process (for, say, emails). Using lQuery and LASS this turned out to be a super easy thing to do and I had it done in no time.

Hooray for great code re-use!

purplish

Aside from the blog, the only really actively used component of TyNET was the imageboard, Stevenchan. Stevenchan ran on my own software called Purplish. In order to dump the old code base entirely, I was driven to rewrite Purplish for Radiance.

However, Purplish now takes a much different approach. A lot of traditional imageboard features are missing and a couple of unconventional features were added. Plus, having it written in CL has the advantage of being much easier to maintain, so if anything ever does crop up I'll tend much more towards wanting to fix it than I did before with PHP.

keyword-reviews

I like language a lot. I also like to try and reduce things to their minimum. So, the idea came to me of a site that allowed people to review things with only a single keyword. The idea behind that was to, with sufficient data, see what kind of patterns emerge and find out what people think the essence of an experience is.

Certainly it wouldn't be useful for an actual 'review' of anything, but it's nevertheless an interesting experiment. I don't know if I'll ever get enough data to find patterns in this or anything that could lead to scientifically significant correlations, but it's a fun enough thing on its own.

qtools

Having completed pretty much everything I wanted to work on, and stalling on some major issues with Radiance, I was on the lookout for things to do. Parasol was still on hold and nothing else really piqued my interest. In an attempt to start out right and not dive in head over heels again, I first considered ways to make the C++-ness of Qt more lispy.

Born out of this was Qtools, a collection of tools to aid development with CommonQt and make it all feel a bit more homely. Of course, some major issues still remain; you still need to pay attention to garbage and there are still C++ calls lingering about, but all in all the heavy hackery of Qtools does make it more pleasing to the eye.

Qtools forced me to go deeper into the guts of CLOS and the MOP than I'd ever gone before, and I had to spend a lot of time in implementation code to figure out how to make the things I needed work. I wouldn't hold Qtools up as an example of good MOP use, but it could be considered an impressive exercise in the flexibility Lisp offers.

-

So, that's it for now. I'd like to add that during most of these projects I should've been studying for university. I'm not sure if working on them was the right choice, but I have learned a huge amount and I hope that the product of my efforts has been of use to other people. If not, then it certainly was not the right choice to indulge myself this much in programming.

Before I go into another long rant about my questionable situation in university I'll cap this here. Until another time!

Post scriptum: If you have ideas for features or new projects for me to work on, please let me know! More ideas are always better.


16 Nov 2014 9:18pm GMT

Patrick Stein: Alternatives (v0.1.20141115) Released

I have now released the code that I mentioned in my previous post Code That Tells You Why which lets one keep multiple implementations around in code and switch between them manually without much trouble.

A link to the source code is here: nklein.com/software/alternatives/.

16 Nov 2014 5:04am GMT

13 Nov 2014


Patrick Stein: Code That Tells You Why

A 2006 article by Jeff Atwood titled Code Tells You How, Comments Tell You Why showed up on reddit/r/programming today.

It makes a good point. However, it got me thinking that for cases like the binary-search example in the article, it might be nice to see all of the alternatives in the code and easily be able to switch between them.

One way to accomplish this in Lisp is to abuse the #+ and #- reader macros:

(defun sum-i^2 (n)
  #+i-wanted-to-do-this
  (loop :for i :to n :summing (* i i))

  #+my-first-attempt-was-something-like-this
  (do ((i 0 (1+ i))
       (sum 0 (+ sum (* i i))))
      ((> i n) sum))

  #+but-i-could-not-do-that-because
  "Some people find a do-loop to hard to read
   (and 'too' too hard to spell, apparently)."

  #-now-i-know-better-and-can-do-this
  (/ (* n (1+ n) (1+ (+ n n))) 6))

This is less than ideal for a number of reasons: one needs to pick "feature" names that won't ever actually get turned on, the sense of + and - seems backwards here, and switching to a different alternative requires editing two places.

Another Lisp alternative is to abuse the case form:

(defun sum-i^2 (n)
  (case :now-i-know-better-and-can-do-this
    (:i-wanted-to-do-this
     (loop :for i :to n :summing (* i i)))

    (:my-first-attempt-was-something-like-this
     (do ((i 0 (1+ i))
          (sum 0 (+ sum (* i i))))
         ((> i n) sum)))

    (:but-i-could-not-do-that-because
     "Some people find a do-loop to hard to read
      (and 'too' too hard to spell, apparently).")

    (:now-i-know-better-and-can-do-this
     (/ (* n (1+ n) (1+ (+ n n))) 6))))

This is better. No one can doubt which alternative is in use. It is only one edit to switch which alternative is used. It still feels pretty hackish to me though.

One can clean it up a bit with some macrology.

(defmacro alternatives (&body clauses)
  (flet ((symbol-is-***-p (sym)
           (and (symbolp sym)
                (string= (symbol-name sym) "***")))
         (final-clause-p (clause)
           (when (listp clause)
             (destructuring-bind (tag &body body) clause
               (when (and (symbolp tag)
                          (member (symbol-name tag)
                                  '("***" "FINAL" "BLESSED")
                                  :test #'string=))
                 body)))))
    (anaphora:acond
      ((member-if #'symbol-is-***-p clauses)
       (let ((clause (first (rest anaphora:it))))
         `(progn
            ,@(rest clause))))

      ((find-if #'final-clause-p clauses)
       `(progn
          ,@(rest anaphora:it)))

      ((last clauses)
       `(progn
          ,@(rest (first anaphora:it)))))))

With this macro, one can now rewrite the sum-i^2 function quite readably:

(defun sum-i^2 (n)
  (alternatives
    (i-wanted-to-do-this
     (loop :for i :to n :summing (* i i)))

    (my-first-attempt-was-something-like-this
     (do ((i 0 (1+ i))
          (sum 0 (+ sum (* i i))))
         ((> i n) sum)))

    (but-i-could-not-do-that-because
     "Some people find a do-loop to hard to read
      (and 'too' too hard to spell, apparently).")

    (now-i-know-better-and-can-do-this
     (/ (* n (1+ n) (1+ (+ n n))) 6))))

If I wanted to try the my-first-attempt-was-something-like-this clause, I could stick a *** before that clause or change its name to *** or final or blessed, or I could move that clause into the last spot.

There is still an onus on the developer to choose useful alternative names. In most production code, one wants to clean out all of the dead code. On the other hand, during development or for more interactive code bodies, one might prefer to be able to see the exact "How" that goes with the "Why" and easily be able to swap between them.

(Above macro coming in well-documented library form, hopefully this weekend.)

13 Nov 2014 10:22pm GMT

08 Nov 2014


Christophe Rhodes: reproducible builds - a month ahead of schedule

I think this might be my last blog entry on the subject of building SBCL for a while.

One of the premises behind SBCL as a separate entity from CMUCL, its parent, was to make the result of its build be independent of the compiler used to build it. To a world where separate compilation is the norm, the very idea that building some software should persistently modify the state of the compiler probably seems bizarre, but the Lisp world evolved in that way and Lisp environments (at least those written in themselves) developed build recipes where the steps to construct a new Lisp system from an old one and the source code would depend critically on internal details of both the old and the new one: substantial amounts of introspection on the build host were used to bootstrap the target, so if the details revealed by introspection were no longer valid for the new system, there would need to be some patching in the middle of the build process. (How would you know whether that was necessary? Typically, because the build would fail with a more-or-less - usually more - cryptic error.)

Enter SBCL, whose strategy is essentially to use the source files first to build an SBCL!Compiler running in a host Common Lisp implementation, and then to use that SBCL!Compiler to compile the source files again to produce the target system. This requires some contortions in the source files: we must write enough of the system in portable Common Lisp so that an arbitrary host can execute SBCL!Compiler to compile SBCL-flavoured sources (including the standard headache-inducing (defun car (list) (car list)) and similar, which works because SBCL!Compiler knows how to compile calls to car).

How much is "enough" of the system? Well, one answer might be when the build output actually works, at least to the point of running and executing some Lisp code. We got there about twelve years ago, when OpenMCL (as it was then called) compiled SBCL. And yet... how do we know there aren't odd differences that depend on the host compiler lurking, which will not obviously affect normal operation but will cause hard-to-debug trouble later? (In fact there were plenty of those, popping up at inopportune moments).

I've been working intermittently on dealing with this, by attempting to make the Common Lisp code that SBCL!Compiler is written in sufficiently portable that executing it on different implementations generates bitwise-identical output. Because then, and only then, can we be confident that we are not depending in some unforseen way on a particular implementation-specific detail; if output files are different, it might be a harmless divergence, for example a difference in ordering of steps where neither depends on the other, or it might in fact indicate a leak from the host environment into the target. Before this latest attack at the problem, I last worked on it seriously in 2009, getting most of the way there but with some problems remaining, as measured by the number of output files (out of some 330 or so) whose contents differed depending on which host Common Lisp implementation SBCL!Compiler was running on.

Over the last month, then, I have been slowly solving these problems, one by one. This has involved refining what is probably my second most useless skill, working out what SBCL fasl files are doing by looking at their contents in a text editor, and from that intuiting the differences in the implementations that give rise to the differences in the output files. The final pieces of the puzzle fell into place earlier this week, and the triumphant commit announces that as of Wednesday all 335 target source files get compiled identically by SBCL!Compiler, whether that is running under Clozure Common Lisp (32- or 64-bit versions), CLISP, or a different version of SBCL itself.

Oh but wait. There is another component to the build: as well as SBCL!Compiler, we have SBCL!Loader, which is responsible for taking those 335 output files and constructing from them a Lisp image file which the platform executable can use to start a Lisp session. (SBCL!Loader is maybe better known as "genesis"; but it is to load what SBCL!Compiler is to compile-file). And it was slightly disheartening to find that despite having 335 identical output files, the resulting cold-sbcl.core file differed between builds on different host compilers, even after I had remembered to discount the build fingerprint constructed to be different for every build.

Fortunately, the actual problem that needed fixing was relatively small: a call to maphash, which (understandably) makes no guarantees about ordering, was used to affect the Lisp image data directly. I then spent a certain amount of time being thoroughly confused, having managed to construct for myself a Lisp image where the following forms executed with ... odd results:

(loop for x being the external-symbols of "CL" count 1)
; => 1032
(length (delete-duplicates (loop for x being the external-symbols of "CL" collect x)))
; => 978

(shades of times gone by). Eventually I realised that

(unless (member (package-name package) '("COMMON-LISP" "KEYWORD" :test #'string=))
  ...)

was not the same as

(unless (member (package-name package) '("COMMON-LISP" "KEYWORD") :test #'string=)
  ...)

and all was well again, and as of this commit the cold-sbcl.core output file is identical no matter the build host.

It might be interesting to survey the various implementation-specific behaviours that we have run into during the process of making this build completely repeatable. The following is a probably non-exhaustive list - it has been twelve years, after all - but maybe is some food for thought, or (if you have a particularly demonic turn of mind) an ingredients list for a maximally-irritating CL implementation...

There were probably other, more minor differences between implementations, but the above list gives a flavour of the things that needed doing in order to get to this point, where we have some assurance that our code is behaving as intended. And all of this is a month ahead of my self-imposed deadline of SBCL's 15th birthday: SBCL was announced to the world on December 14th, 1999. (I'm hoping to be able to put on an sbcl15 workshop in conjunction with the European Lisp Symposium around April 20th/21st/22nd - if that sounds interesting, please pencil the dates in the diary and let me know...)

08 Nov 2014 10:38pm GMT

06 Nov 2014


Quicklisp news: November 2014 Quicklisp dist update now available

New projects:

Updated projects: asteroids, buildapp, chirp, cl-algebraic-data-type, cl-ana, cl-arrows, cl-async, cl-autowrap, cl-bert, cl-case-control, cl-conspack, cl-dbi, cl-erlang-term, cl-mustache, cl-read-macro-tokens, cl-store, cl-virtualbox, clack, cleric, clfswm, clip, closer-mop, clsql-helper, clss, clx, coleslaw, colleen, crane, crypto-shortcuts, deferred, defmacro-enhance, dissect, drakma-async, esrap-liquid, event-glue, gbbopen, glyphs, graph, hdf5-cffi, helambdap, hh-web, humbler, ironclad, lass, lfarm, lisp-executable, lisp-gflags, lparallel, lquery, lredis, mcclim, method-combination-utilities, mgl-pax, modularize, modularize-hooks, modularize-interfaces, okra, optima, petit.string-utils, pgloader, plump, plump-sexp, plump-tex, postmodern, prove, ratify, rcl, readable, rutils, serapeum, shelly, slime, smug, softdrink, south, staple, stumpwm, trivial-benchmark, trivial-download, trivial-indent, trivial-mimes, trivial-signal, trivial-thumbnail, uiop, universal-config, vecto, verbose, weblocks, weblocks-stores, weblocks-tree-widget, yason.

To get this update, use (ql:update-dist "quicklisp").

06 Nov 2014 10:59pm GMT

Lispjobs: Clojure Software Engineer, Staples Sparx, San Mateo, CA

SparX is a small engineering team focused on applying online machine learning and predictive modeling to eCommerce (impacting a 24 billion dollar business).

Our stack is 100% Clojure, service oriented, targeting 50 million users with 1ms SLAs. We apply engineering and data science to tough problems such as dynamic pricing, shipping estimations, personalized emails, and multi-variate testing.

We are always looking for talent in data-science, engineering and devops. Bonus points if you can bridge 2 of these together. We love people with strong fundamentals who can dive deep.

See:

http://www.staples-sparx.com/jobs/software-engineers.html


06 Nov 2014 5:14am GMT

05 Nov 2014


Timofei Shatrov: It's alive! The path from library to web-app.

In case you'd rather just play with the website instead of reading this boring post, the url is http://ichi.moe/

In my previous posts (part 1, part 2, part 3) I described the development process of a romanization algorithm for Japanese text. However, the ultimate goal was always to make a simple single-purpose web application that makes use of this algorithm. It took quite a while, but it's finally here. In this post I will describe the technical details behind the development of this website.

I decided to build it with bare Hunchentoot; while there are some nice Lisp web frameworks developed lately like Restas or Caveman, my app would be too simple to need them. There would be a single handler that takes a query and various options as GET parameters, and returns a nicely formatted result.

Now I needed something to produce HTML. I used CL-WHO before, but this time I wanted a templating library I could just copy-paste plain HTML into. I settled on closure-templates, which is based on Google Closure Templates, though the syntax is slightly different. Now, I don't know if I should recommend this library, because its documentation in English is rather sparse and it has a dead link in its GitHub description. It has a detailed manual written in Russian, so I was able to read that. As to why I chose it: this library has a major advantage over its competitors. The same template can be compiled into a Common Lisp function and into JavaScript! Why is this useful? Well, for example, I have these cute cards that explain the definition of a word:

[screenshot of a word-definition card]

These are mostly generated statically by the Common Lisp backend. But if I want to make such a card on the fly client-side, I can call the corresponding JavaScript function and it will produce the exact same HTML content. That makes dynamic HTML generation really easy.
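
As a hedged sketch of how the dual compilation works; backend keywords and generated names are from memory of closure-template's README, so details may differ:

;; Compiling with the Common Lisp backend defines ordinary functions
;; in a package named after the template namespace; feeding the same
;; string to the JavaScript backend yields equivalent JS functions.
(closure-template:compile-template :common-lisp-backend
 "{namespace demo}
  {template card}<b>{$word}</b>: {$gloss}{/template}")

(demo:card (list :word "neko" :gloss "cat"))
;; => "<b>neko</b>: cat"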

For the front-end framework I chose Foundation. Bootstrap was my first choice, but it really doesn't look all that great and it's difficult to customize, so I decided to try something else. Foundation was pretty nice; it was easy to make my website responsive and look decent on a mobile screen. The only problem, as I later discovered, was its sheer size: for some reason my browser refused to cache the 183kb (minified!) JavaScript file, so each page load took quite a while. Fortunately that was solved by loading the file from the Cloudflare CDN.

One thing I didn't concern myself with when writing the backend Ichiran algorithm was thread safety. However, as Hunchentoot uses threads to process requests, this matter becomes very important. Fortunately, writing thread-safe code in Lisp is not that hard. Mostly you should just avoid modifying global special variables (binding them with let is okay) and be careful with writing persistent data. Since my app is pretty much read-only, there was only one such issue. I store a cache of word suffixes in a special variable. Generating this cache takes several seconds, but it is only done once per session. As you can guess, this creates problems with thread safety, so I put a lock around the procedure and called it when the server is launched. Each server launch would therefore take several seconds, which is suboptimal. Later I would make the lock non-blocking and display a warning while the init-suffixes procedure is in progress.
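
The pattern is roughly the following: a minimal sketch using bordeaux-threads, where *suffix-cache* and build-suffix-cache are illustrative stand-ins rather than ichiran's actual internals.

(defvar *suffix-cache* nil)
(defvar *suffix-lock* (bt:make-lock "suffix-cache"))

(defun init-suffixes ()
  ;; Passing NIL as WAIT-P makes ACQUIRE-LOCK non-blocking: a thread
  ;; that loses the race returns immediately (and could warn that the
  ;; cache is still being built) instead of blocking the request.
  (when (bt:acquire-lock *suffix-lock* nil)
    (unwind-protect
         (unless *suffix-cache*
           (setf *suffix-cache* (build-suffix-cache))) ; hypothetical builder
      (bt:release-lock *suffix-lock*))))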

Like I said before, I wanted my data to be compatible between Common Lisp and JavaScript, so I added some functions to Ichiran to produce JSON objects containing various data. There are many JSON libraries for Common Lisp; I was using jsown before, so I decided to stick with it. jsown objects are lists with :obj as the first element and an alist of properties as the cdr. The problem was that closure-templates only supports plists and alists as its context parameter, and a jsown object is neither. The solution was to extend the methods fetch-property and fetch-keys. Since they are already defined for lists, I added :around methods to check for jsown objects specifically and call-next-method on the cdr in that case.

(defmethod closure-template:fetch-property :around ((map list) key)
  "Support for jsown dict objects"
  (if (and (not (integerp key))
           (eql (car map) :obj)
           (every #'listp (cdr map)))
      (call-next-method (cdr map) key)
      (call-next-method)))

(defmethod closure-template:fetch-keys :around ((map list))
  (if (and (eql (car map) :obj)
           (every #'listp (cdr map)))
      (call-next-method (cdr map))
      (call-next-method)))

Theoretically this would fail if passed a valid plist like '(:obj (1 2)), but this cannot possibly happen in my application.

Now, at some point I had to actually put my app online. I needed a server and a domain name, and I needed them cheap (because I'm currently unemployed (pls hire me)). For the server I chose a Linode VPS, and I bought the ichi.moe domain from Name.com. I still think these new TLDs are a pretty stupid idea, but at least it gives us all an opportunity to buy a short and memorable domain name. I spent the rest of the day configuring my Linode server, which I had never done before. Thankfully the documentation they provide is really good.

Because I wanted to get the most juice out of my cheap-ass server, the plan was to put the Hunchentoot server behind Nginx and cache everything. There are existing guides on how to do this setup, which were very helpful. In my setup everything is served by Nginx except for URLs that start with /cl/, which are passed to Hunchentoot. The static pages (including error pages) are also generated by closure-template (so that the design is consistent), but they are just dumped into .html files served by Nginx. Nginx also caches dynamic content, which might help if some high-traffic site links to a certain query. This, and the fact that Linodes are hosted on SSDs, makes the site run pretty smoothly.

Now let's talk about my infrastructure. As described in the guides above, I have a special hunchentoot user in addition to the main user. The main user's quicklisp directory is symlinked to hunchentoot's so the server can load the code but cannot write there. The code is stored in 2 repositories. One is the open-source core of the project (ichiran) and the other one is a private bitbucket repository ichiran-web which holds web-related code. However a simple git pull doesn't update the code running on the server. If I'm lazy, I do "sudo service hunchentoot restart", which restarts everything and reloads the code. This might of course create service interruptions for the users. Another option is hot swapping all the changes. For this purpose my hunchentoot server also starts a swank server like this:

(defun start-app (&optional (port 8080))
  (handler-case (swank:create-server :dont-close t)
    (error ()))
  (ichiran/dict:init-suffixes)
  (refresh-handlers)
  (let ((acceptor (make-instance 'easy-acceptor :port port
                                 :access-log-destination *access-log*
                                 :message-log-destination *message-log*
                                 )))
    (setf *ichiran-web-server* (start acceptor))))

Swank is, of course, the server-side component of SLIME. It runs on a port that is not accessible remotely and can only be connected to locally or via SSH tunnel. I use the latter to connect SLIME on my PC to Swank running on my server, which allows me to apply various fixes without restarting, either from the REPL or by using C-c C-c to recompile some function.

Anyway, I'm pretty happy with the way things turned out, and I got some positive feedback already. The biggest thing left is tightening up the web design, which is my least favorite part of web development. The other thing is attracting enough traffic so that I can analyze the performance (I'm only getting a few people a day right now, which barely makes a blip on my server's CPU graph).

In retrospect, getting this website up and running was pretty easy. I spent much more time trying to tweak the ichiran library to split sentences correctly (and I'm still working on it). It's not much harder than, say, building a Django-based site. The tools are all there, the documentation is out there (kind of). VPSes are cheap. And it spreads awareness of Common Lisp. No reason not to try!

05 Nov 2014 1:36pm GMT

04 Nov 2014


Nicolas Hafner: Paranormal Parasols - Confession 36

It's been too long since my last entry. I just haven't had much that I felt safe talking about. But, now that I'm mostly done with everything that occupied me for a while (Radiance and Purplish), I have more time available for other things. One of these things happens to be Parasol.

As you may or may not know, Parasol is a Common Lisp painting application that was born out of curiosity and wasn't really meant to be anything big, but then quickly exploded in size. But as it is with these things, at some point I hit a big roadblock: It couldn't process events fast enough and was thus not getting enough feedback from the tablet pen. This happened because the drawing operations took too long and clogged up the event loop.

To solve this problem, we need to put the operations that take a long time into a separate thread and queue up events while it runs. I tried to hack this in, but the results were less than fantastic. It went one of two ways: either the entire CL environment segfaulted at a random time, or the painting was littered with strange drawing artefacts.

The only thing I could guess from this was that Qt wasn't happy with me using CL threads. But that wasn't the only issue. I really didn't like what I had built over the weeks of working on Parasol, as it reeked of patchwork design, without much of an overarching architecture. So I put the project to rest for the time being, hoping to return to it some day and rewrite it anew.

In preparation for this day I recently wrote Qtools, a collection of utilities that should make working with Qt easier. Writing that library alone caused me quite some grief, so I'm not very enthusiastic about diving deeper into the sea of tears that is C++ interaction.

Regardless, I couldn't keep my mind off of it and used some lecture time yesterday to write up a crude design document that lays out a basic plan of action and architecture for the new Parasol version. I started to set this plan into motion today.

First in line is building a skeleton main window with a pane for "gizmos" and a main area that can hold a range of documents or other forms of tabs. With that I'll have a minimal environment to test things out on. After that there's a large chunk of internal structure that needs to be completed: the document representation.

As I've laid it out so far, a document is composed of a set of layer-type objects, a history stack, and metadata such as the size and file associated with it. Layer objects themselves are composed of a position, size, drawables list, and an image buffer. A drawable itself is not specified to be anything fixed. Unlike before, it does not have to be a stroke, but simply an object that has the general methods a drawable has. This allows me to later add things that behave differently, such as images and 3D objects.
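
In CLOS terms the layout might look something like this; a speculative sketch with names of my own choosing, not Parasol's actual code:

(defclass document ()
  ((layers  :initarg :layers :accessor layers :initform ())
   (history :accessor history :initform ())            ; undo/redo stack
   (size    :initarg :size :accessor size)
   (file    :initarg :file :accessor file :initform nil)))

(defclass layer ()
  ((offset    :initarg :offset :accessor offset)
   (size      :initarg :size :accessor layer-size)
   (drawables :accessor drawables :initform ())
   (buffer    :accessor buffer :initform nil)))        ; cached raster

;; Anything with a DRAW method qualifies as a drawable: strokes,
;; images, and later perhaps 3D objects.
(defgeneric draw (drawable target)
  (:documentation "Render DRAWABLE onto TARGET."))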

This change away from Parasol's original model necessitates two further, drastic alterations. First, we need tools besides a general brush that allow manipulating other kinds of objects, so we need a uniform way to define 'tools' and their effects when used. Secondly, we need a proper, generalised history that allows us to un/re-do any kind of operation on the document. Both of those things pose significant design challenges and I really hope I can pull them off.

Fortunately however, integrating these is a long while off, as after implementing a basic document I first need to add a way to publish these -so far purely virtual- documents to the UI. To tie these two worlds together, we need a view object that defines where to and how we're currently looking at the document. The view's job will also be to relay input events, such as tablet or mouse movement, to the actual document (and preserve the correct coordinate translation while doing so).

At this point I'd like to quickly talk about how I intend to solve the issue that initially brought Parasol to a halt. After thinking things over, I came to the realisation that my previous attempt of adding complex splines to the strokes to make smooth lines possible was an artefact of never getting enough tablet input in the first place. With enough events, simple linear interpolation is a feasible approach. Knowing this, it becomes apparent that we do not need to precalculate the spline's effective points (as linear interpolation is dirt cheap). From this it follows that I do not need to put the actual stroke updates into a separate thread, but can simply add points to the stroke in the event loop. The only thing that does have to be outsourced is the drawing of the buffers in the document.
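
For reference, the interpolation in question is just this (an illustrative one-liner, not Parasol code):

(defun lerp-point (x1 y1 x2 y2 s)
  "Point at fraction S (0 to 1) along the segment (X1,Y1)-(X2,Y2)."
  (values (+ x1 (* s (- x2 x1)))
          (+ y1 (* s (- y2 y1)))))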

In order to solve the drawing-artefacts problem that arose in my previous attempt, I thought I'd take a look at using Qt's own threading. After all, it might just be that Qt isn't happy with being called from threads it doesn't know about. After looking around I found out that the library CommonQt uses to bind to Qt (smokeqt) does not include these classes by default, even though they could be parsed.

Adding these classes to the library was easy enough; a simple XML config change and a recompilation made them available. But whether it actually worked was a different question. At first it seemed that it would not work at all: a callback from a QThread back into Lisp would always cause SBCL to segfault, which was a rather devastating sign. Fortunately, it seems that CCL works just fine with foreign threads. Hallelujah!

I haven't tested whether everything works just fine and dandy with this idea yet, but hopefully I'll get to that soon enough. However, threading always comes at the price of an immense complexity increase: object access needs to be carefully synchronised with mutexes or other techniques. Thanks to the fact that interaction between the two threads is minimal, this doesn't pose too big of an issue. Once the drawing thread is done, it only needs a single atomic call to set the buffer of the object to its finished value; otherwise it only reads from vectors, where it won't make a big difference if it misses an element or two ahead.

Or at least so I hope. I'm sure that I'll get plenty of headaches with threading once I get to it. For now I'll be content with imagining that it'll all work out perfectly fine.

So, when this is all well and done I can move on to writing the architecture for the tools and history operations and with that the general brushes tool, which I hope I can mostly copy over from the previous version.

With image operations, threaded drawing, history, and document presentation all sorted out, the final step will then be to make a front-end for it all: layer management, colour picking, a brush configuration toolbar, a pluggable gizmos panel, and so on and so forth.

If all goes according to my dreams I'll end up with a painting application that is extensible in every aspect imaginable, exhibits the most powerful user-configurable brush-engine I've seen, offers support for integrating and rendering 3D models as well as plenty of other constructs, and allows rudimentary image manipulation.

All of this is left up to your and my imagination for the time being, but I certainly hope to make it a reality at some point.

I might talk more about my progress as I advance, but maybe I'll keep on thinking I have nothing to talk about, regardless of how much truth there is to that in actuality.

Until the sun shines again.

04 Nov 2014 4:01pm GMT

01 Nov 2014


Pixel: I'm Giving A Talk Today!

Apologies for the short notice--I've been rushed just trying to get the darn thing ready--but I will be giving a talk at Iowa Code Camp on Saturday, November 1st.

If you can't attend, or want to review my talk before/after I give it, you can find it here. Feel free to offer any feedback. It's much too late to make major changes at this point, but feedback is welcome regardless.

(Technically the talk is about PHP, but tagged Lisp because I blame Common Lisp for inspiring a bunch of it.)

Ze "official" blog URL has changed, this is merely a cross-post for your reading convenience. You'll have to click through to read comment count unavailable comments or leave your own.

01 Nov 2014 6:34am GMT