18 Apr 2014

Zach Beane: The week in CL

Babel2 - "Babel2 connects the implementations of our core technologies such as Fluid Construction Grammar (FCG) and Incremental Recruitment Language (IRL) with mechanisms for multi-agent interactions, robotic embodiment, cognitive processing and learning. An extensive monitoring system gives access to every detail of Babel2's intermediate representations and dynamics and a high modularity ensures that the system can be used in a very wide variety of scenarios." I haven't tried it, but it sounds interesting.

cl-http2-protocol - "This is HTTP/2.0 draft-06 interopability test code written in Common Lisp. … The code offers a pure Common Lisp transport agnostic implementation of the HTTP 2.0 protocol at draft-06." This is not related to Mallery's CL-HTTP, but is based on a Ruby library, ported and updated by Akamai engineer Martin Flack.

emacs-cl - "Emacs Common Lisp is an implementation of Common Lisp, written in Emacs Lisp." A fun hack by Lars Brinkhoff.

{} descriptions - "… a meta level descriptions library for Common Lisp," inspired by Smalltalk Magritte and Lisp On Lines."

Multi-year SBCL uptime. 1000+ days, pretty cool.

Work-in-progress ASDF 3.1 has a feature that creates objects called package-systems. The name stems from its melding of the CL package system with ASDF system definition objects. As a name, "package-system" seems to me ripe for confusion, since it's not the package system of CL, nor a package system in the fetch-me-useful-software sense. Can you suggest a better name? Chime in; before release is the proper time for a rename.

18 Apr 2014 1:24pm GMT

14 Apr 2014

Nick Levine: I'm looking for work

I've been an enthusiastic Common Lisp implementor, developer and advocate for over 25 years, in particular with ten years at Harlequin and more recently as a consultant for Ravenbrook. I've worked with various lisps on a range of projects, from servers and GUI applications down to the LispWorks garbage collector. Most of the software I've written doesn't belong to me and so I can't publish it, but you'll find links to some of the libraries I've worked on here, along with a few papers that I've authored.

My full CV is here.

Working with Ravenbrook makes me part of a team. We're highly qualified mathematicians and computer scientists, we have a lot of lisp experience, and between us we can cover a lot of ground. So if you have (for instance) a temporary need for more weight on one of your projects, please come to us. If we can't help then we'll put you in touch with people who can.

ndl@ravenbrook.com

14 Apr 2014 12:20pm GMT

13 Apr 2014

Paul Khuong: Number systems for implicit data structures

Implicit data structures are data structures with negligible space overhead compared to storing the data in a flat array: auxiliary information is mostly represented by permuting the elements cleverly. For example, a sorted vector combined with binary search is an implicit in-order representation of a binary search tree. I believe the seminal implicit data structure is the binary heap of Williams and Floyd, usually presented in the context of heapsort.

I find most developers are vastly more comfortable dealing with pointer-based data structures than with implicit ones. I blame our courses, which focus on the former and completely fail to show us how to reason about the latter. For example, the typical presentation of the binary heap introduces an indexing scheme to map from parent to children - the children of heap[i] are heap[2 * i] and heap[2 * i + 1], with one-based indices - that is hard to generalise to ternary or k-ary heaps (Knuth's presentation, with parents at \(\lfloor i/2 \rfloor\), is no better). The reason it's so hard to generalise is that the indexing scheme hides a simple bijection between paths in k-way trees and natural numbers.

I find it ironic that I first encountered the idea of describing data structures or algorithms in terms of number systems through Okasaki's and Hinze's work on purely functional data structures: that vantage point seems perfectly suited to the archetypal mutable data structure, the array! I'll show how number systems help us understand implicit data structures with two examples: a simple indexing scheme for k-ary heaps and compositions of specially structured permutations to implement in-place bit-reversal and (some) matrix transpositions.

Simple k-ary max heaps

The classical way to present implicit binary heaps is to work with one-based array indices and to root an implicit binary tree at heap[1]. For any node heap[i], the children live at heap[2 * i] and heap[2 * i + 1]. My issue with this presentation is that it's unclear how to extend the scheme to ternary or k-ary trees: if the children of heap[1] are heap[3 * 1 ... 3 * 1 + 2], i.e., heap[3, 4, 5], what do we do with heap[2]? We end up with k - 1 parallel k-ary trees stashed in the same array. Knuth's choice of mapping children to parents with a floored integer division suffers the same fate.

One-based arrays hide the beauty of the binary indexing scheme. With zero-based arrays, the children of heap[i] are heap[2 * i + 1] and heap[2 * i + 2]. This is clearly isomorphic to the one-based scheme for binary heaps. The difference is that the extension to k-way trees is obvious: the children of heap[i] are heap[k * i + 1 ... k * i + k].
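
For concreteness, here's a minimal sketch of the zero-based scheme in Common Lisp (the function names are mine, not from any particular heap library):

(defun child-index (i k j)
  "Index of the Jth child (1 <= J <= K) of the node at index I."
  (+ (* k i) j))

(defun parent-index (i k)
  "Index of the parent of the node at index I, for I > 0."
  (floor (1- i) k))

With k = 2, this recovers the familiar 2 * i + 1 and 2 * i + 2, and parent-index inverts child-index for every j between 1 and k.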

A couple of examples like the one below fail to show any problem. However, even a large number of tests is no proof. Thinking in terms of number systems leads to a nice demonstration that the scheme creates a bijection between infinite k-ary trees and the naturals.

There is a unique finite path from the root to any vertex in an infinite k-ary tree. This path can be described as a finite sequence \((c\sb{1}, c\sb{2}, \ldots, c\sb{n})\) of integers between 1 and k (inclusively). If \(c\sb{1} = 1\), we first went down the first child of the root node; if \(c\sb{1} = 2\), we instead went down the second child; etc. If \(c\sb{2} = 3\), we then went down the third child, and so on. We can recursively encode such finite paths as naturals (in pidgin ML):

path_to_nat [] = 0
path_to_nat [c : cs] = k * path_to_nat cs + c

Clearly, this is an injection from finite paths in k-ary trees to the naturals. There's only a tiny difference with the normal positional encoding of naturals in base k: there is no 0, and digits instead include k. This prevents us from padding a path with zeros, which would map multiple paths to the same natural.

We only have to show that path_to_nat is a bijection between finite paths and naturals. I'll do that by constructing an inverse that is total on the naturals.

nat_to_path 0 = []
nat_to_path n = let c = pop n
                in c : nat_to_path ((n - c) / k)

where pop is a version of mod k that returns k instead of 0:

pop n = let c = n `mod` k
        in 
         if (c != 0)
          then c
          else k

The base case is nat_to_path 0 = [].

In the induction step, we can assume that path_to_nat cs = n and that nat_to_path n = cs. We only have to show that, for any \(1 \leq c \leq k\), nat_to_path (path_to_nat (c:cs)) = c:cs. Let n' = path_to_nat (c:cs) = k * n + c.

\[n\sp{\prime} = kn + c \equiv c \pmod{k},\] so pop n' will correctly return c (and k rather than 0 when c = k). It's then a tiny bit of algebra to show that (n' - c) / k = kn / k = n, and we fall back to the induction hypothesis.

This scheme is so simple that I wound up coding a version of heapsort(3) that lets callers choose the heap's arity at runtime. Higher arity heaps perform more comparisons but fewer swaps than binary heaps; the tradeoff is profitable when sorting large items. It seems to me that, for decades, we've been presenting implicit heaps and heapsort in a way that marginally simplifies the binary case at the expense of obscuring the elegant general case.
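
As a hedged sketch of what such a heapsort's core might look like (my code, not the actual heapsort(3) implementation mentioned above), sift-down only needs the heap[k * i + 1 ... k * i + k] rule:

(defun sift-down (heap i k)
  "Restore the max-heap property below index I in a K-ary heap."
  (let ((n (length heap)))
    (loop
      (let ((largest i)
            (first-child (+ (* k i) 1)))
        ;; Find the largest of node I and its (at most K) children.
        (loop for c from first-child below (min n (+ first-child k))
              when (> (aref heap c) (aref heap largest))
                do (setf largest c))
        (when (= largest i)
          (return heap))
        (rotatef (aref heap i) (aref heap largest))
        (setf i largest)))))

(defun heapify (heap k)
  "Make HEAP a K-ary max heap; the last internal node is at floor((n - 2) / k)."
  (loop for i from (floor (- (length heap) 2) k) downto 0
        do (sift-down heap i k))
  heap)

For example, (heapify (vector 3 1 4 1 5 9 2 6) 3) builds a ternary max heap in place; sorting then repeatedly swaps the root with the last element and sifts down over a shrinking prefix.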

Array permutations as algebra on positional number systems

Bit reversing an array of length \(2\sp{l}\) sends the element x[i] to x[j], where the binary representation of i (including padding up to l bits) is the reverse of j. For example, in an array of length 16, 3 = 0011 becomes 1100 = 12.
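
As a minimal sketch (the names are mine), the index map and the corresponding in-place permutation look like this:

(defun bit-reverse-index (i l)
  "Reverse the L low-order bits of I."
  (let ((j 0))
    (dotimes (bit l j)
      (setf j (logior (ash j 1) (ldb (byte 1 bit) i))))))

(defun bit-reverse-range! (vec start len)
  "Bit reverse LEN consecutive elements of VEC from START, in place.
LEN must be a power of two; each pair is swapped exactly once."
  (let ((l (integer-length (1- len))))
    (dotimes (i len vec)
      (let ((j (bit-reverse-index i l)))
        (when (< i j)
          (rotatef (aref vec (+ start i)) (aref vec (+ start j))))))))

(bit-reverse-index 3 4) evaluates to 12, matching the example above.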

Reversing a fixed-width integer's binary representation is its self-inverse, so bit reversing an array is a sequence of swaps. This means that the permutation can be performed in-place, as a series of independent swaps. Bit reversal used to be slow on cached machines: contiguous elements (with indices that only vary in their low order bits) swap with far-off elements (indices that only vary in their high order bits). Worse, the stride between the latter elements is a large power of two, which causes all sorts of aliasing issues. Workarounds (see Zhang 99 (PDF)) mostly end up implementing a software cache with explicit buffers. Nowadays, even L1 caches have such a high associativity that aliasing is a much less important issue.

Napa-FFT3 implements bit reversals by calling a few specialised functions that only swap the lower and higher bits; the main routine iterates over an array of precomputed middle bit reversals (similar to various publications of Elster's, but recursing on the middle bits first). In this implementation, the number of L1 cache misses incurred by bit reversing an array is only slightly greater than the compulsory misses. Bit reversal isn't free, but it's also not clear that autosorting FFTs are quicker than out-of-order FFTs followed by a bit reversal pass.

Bit reversal is the only array permutation I've seen described in terms of its effect on indices. I think it's a fruitful avenue for other in-place permutations.

For example, the viewpoint makes it clear how to transpose a matrix of dimensions \(2\sp{m} \times 2\sp{n}\) with a sequence of in-place bit reversals (each \(i\sb{k}\) and \(j\sb{l}\) is a bit in the index's binary representation).

For a row-major layout, the sketch above corresponds to these three passes (a code sketch follows the list):

  1. bit reverse each row of length \(2\sp{n}\) in place;
  2. bit reverse the whole vector of length \(2\sp{m + n}\) in place;
  3. bit reverse each new row of length \(2\sp{m}\) in place.
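
Assuming the bit-reverse-range! sketch from earlier (again my code, not Napa-FFT3's), the three passes translate directly:

(defun transpose-power-of-two! (vec m n)
  "Transpose a 2^M x 2^N row-major matrix stored flat in VEC."
  (let ((rows (ash 1 m))
        (cols (ash 1 n)))
    (dotimes (r rows)                         ; 1. each row of length 2^n
      (bit-reverse-range! vec (* r cols) cols))
    (bit-reverse-range! vec 0 (* rows cols))  ; 2. the whole vector
    (dotimes (r cols)                         ; 3. each new row of length 2^m
      (bit-reverse-range! vec (* r rows) rows))
    vec))

On the 2 x 4 matrix #(0 1 2 3 4 5 6 7), (transpose-power-of-two! (vector 0 1 2 3 4 5 6 7) 1 2) returns #(0 4 1 5 2 6 3 7), i.e., the 4 x 2 transpose.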

Bit reversal, like all other permutations, is a reversible linear operation. We can thus change the order of operations if we want to. For example, it's not necessarily preferable to bit-reverse contiguous rows first. We could also flip the high-order bits of the indices: rather than swapping scalars, we would swap rows. Separately bit reversing contiguous rows works best when each row fits in cache. Bit reversing columns instead amortises the bad access pattern inherent to bit reversal by spending more time on each swap: swapping rows is slower than swapping scalars, but also very efficient with regard to (streaming!) I/O.
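
A sketch of that row-swapping variant, reusing the hypothetical bit-reverse-index from above:

(defun bit-reverse-rows! (vec m n)
  "Bit reverse the row indices of a 2^M x 2^N row-major matrix stored
flat in VEC, swapping whole rows rather than scalars."
  (let ((rows (ash 1 m))
        (cols (ash 1 n)))
    (dotimes (i rows vec)
      (let ((j (bit-reverse-index i m)))
        (when (< i j)
          (dotimes (c cols)   ; swap rows I and J element by element
            (rotatef (aref vec (+ (* i cols) c))
                     (aref vec (+ (* j cols) c)))))))))

The inner loop streams through two rows sequentially, which is exactly the access pattern caches and prefetchers like.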

This is interesting because in-place transposition of rectangular matrices is hard, and transposition is already a bad fit for caches. Transposing matrices with a sequence of bit reversals might just be practical. In fact, that's what I intend to do in Napa-FFT3 for multi-dimensional DFTs: we can fuse all but the middle whole-vector bit reversal with mixed-radix FFTs (and the latter might similarly benefit from operating on [sub-]rows rather than scalars).

One obvious question now appears: can we generalise the trick to arbitrary dimensions? It's pretty clear that we can do it for any other base \(b\) and matrices of dimension \(b\sp{m} \times b\sp{n}\) (it's interesting how dimensions that are highly composite numbers are easy to transpose, and, IIRC, so are coprime ones). What if there's no such factorisation? The best I can do is "more or less."

For arbitrary matrix dimensions \(m \times n\), I think it's best to decompose indices in a mixed radix (but still positional) number system. For example, a \(21 \times 63\) matrix might have indices in radix \(3,7\ |\ 3,7,3\) (row digits before the bar, column digits after). Given this number system, matrix transposition swaps the two digit groups: an index with digits \(i\sb{1},i\sb{2}\ |\ j\sb{1},j\sb{2},j\sb{3}\) maps to \(j\sb{1},j\sb{2},j\sb{3}\ |\ i\sb{1},i\sb{2}\).
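
To make the number system concrete, here is a small hypothetical helper (mine, not from any codebase mentioned here) that decomposes a flat row-major index into mixed-radix digits, most significant first; for the example above, the combined radices are (3 7 3 7 3):

(defun index-digits (index radices)
  "Digits of INDEX in the mixed-radix system RADICES, most significant
first; the last radix is the least significant position."
  (let ((digits '()))
    (dolist (r (reverse radices) digits)
      (multiple-value-bind (q d) (floor index r)
        (push d digits)
        (setf index q)))))

For instance, (index-digits (+ (* 5 63) 17) '(3 7 3 7 3)) splits the flat index of row 5, column 17 into its two row digits and three column digits.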

It's a small generalisation to let the radices be \(a,b\ |\ a,b,a\), for a matrix of dimension \(ab \times a\sp{2}b\). We can then perform most of a matrix transposition by swapping positions of identical weight: first a full mixed-radix digit reversal (the weights are palindromic), followed by another mixed-radix reversal on the first three positions.

This leaves the last chunk \(b\sb{2},a\sb{3}\), which should instead be \(a\sb{3},b\sb{2}\). That's another rectangular matrix transpose, but smaller than the original one. It might be practical to execute that last step with a straightforward out-of-place transpose: a smaller transpose needs less scratch space and may fit in cache. We can also apply the same trick as for bit reversals and apply the transpose before everything else, by permuting rows rather than scalars. The simplest way to do that is to transpose a matrix of pointers before replicating the permutation on the actual data (glibc's mergesort references Knuth vol. 3, exercise 5.2-10).

Finally, this also works for \(a\sp{2}b \times ab\): matrix transposition is its own inverse, so we only have to execute the inverse of each step, in reverse order.

Definitely mixed results, but at least we have some intuition on why general rectangular transpositions are hard to perform in place: they're hard to express as sequences of swaps.

Next: C code and cycle counts!

This post is the more theoretical prologue to a low-level look at qsort(3): I really wanted to make sure the nice implicit tree layout in the first section had the space it deserves.

I tried to make in-order implicit trees fit in this number system approach. I can't see how. The problem is that in-order trees associate ranges (rather than indices) with nodes; for example, at what depth is index 1000? It depends on the size of the search tree. It might be the root (in a tree of 2000 vertices) or a leaf (in a tree of 1001 vertices).

13 Apr 2014 4:20pm GMT

12 Apr 2014

Colin Lupton: Self-seeding Context Added to CL-ISAAC

As recommended by Bob Jenkins, the original author of the ISAAC cryptographic random number generator algorithms, self-seeding ISAAC is a useful technique for increasing the cryptographic strength of the random numbers generated from a given ISAAC context; i.e., using random values generated by ISAAC to seed a new ISAAC context. This may not seem particularly valuable for one-off random values such as the session tokens generated in CL-ISAAC's documented Quick Recipes, but when you need to generate millions of cryptographically strong random numbers from a single context (such as for a one-time pad cipher), you notice the extra strength that self-seeding provides.

CL-ISAAC v1.0.4, which includes the self-seeding context, is now available on GitHub. It will be available in the April distribution of Quicklisp.

Usage

Using the self-seed context is similar to using the other seeds already available; the function supports both the ISAAC-32 and ISAAC-64 algorithms, and provides one additional keyword parameter, count, which specifies the number of rounds of self-seeding your ISAAC context gets. The default value is 1, but a count greater than 3 is recommended.

Usage is as straightforward as the other contexts. To create a 512-bit hexadecimal string token using the ISAAC-64 algorithm from a self-seeded context with 3 rounds:

* (ql:quickload "cl-isaac")
...
* (defvar *self-ctx* (isaac:init-self-seed :count 3 :is64 t))
* (format nil "~64,'0x" (isaac:rand-bits-64 *self-ctx* 512))

The Self-seeding context is necessarily heavier than the kernel and cl:random seeds, by a factor of approx. 5n+1, where n is the number of self-seeding rounds. Specifically, for every round there is an additional context created, as well as an additional scramble.

Enjoy!


12 Apr 2014 1:39pm GMT

11 Apr 2014

Ben Hyde: Lisp on the BeagleBone Black

Somebody left an orphan in a basket on my doorstep: a BeagleBone Black. Inspired by Patrick Stein's assurances that it was easy to get CCL working on another tiny ARM board, I gave it a try. It was tedious, but not hard. I've put my notes and some rough, half-tested scripts on github.

11 Apr 2014 2:05am GMT

08 Apr 2014

Ben Hyde: cl-xmpp

Cl-xmpp has, sadly, fallen into disrepair. It can't connect to anything I tried to connect to. It's hard to find its development list archives; common-lisp.net is working through some troubles. FYI - you can subscribe to common-lisp.net mailing lists by adding +subscribe to the list's name, as in cl-xmpp-devel+subscribe. I built my own index to the archives like so:


cl-user> (loop for y from 2008 to 2014
               do (loop for m in '("January" "February" "March" "April"
                                   "May" "June" "July" "August"
                                   "September" "October" "November"
                                   "December")
                        as u = (format nil "http://lists.common-lisp.net/pipermail/cl-xmpp-devel/~A-~A/thread.html"
                                       y m)
                        do (multiple-value-bind (a b)
                               (drakma:http-request u :method :head)
                             (declare (ignore a))
                             (when (= 200 b)
                               (print u)))))

"http://lists.common-lisp.net/pipermail/cl-xmpp-devel/2008-January/thread.html" 
"http://lists.common-lisp.net/pipermail/cl-xmpp-devel/2008-June/thread.html" 
"http://lists.common-lisp.net/pipermail/cl-xmpp-devel/2008-July/thread.html" 
"http://lists.common-lisp.net/pipermail/cl-xmpp-devel/2008-August/thread.html" 
"http://lists.common-lisp.net/pipermail/cl-xmpp-devel/2009-January/thread.html" 
"http://lists.common-lisp.net/pipermail/cl-xmpp-devel/2009-September/thread.html" 
"http://lists.common-lisp.net/pipermail/cl-xmpp-devel/2010-March/thread.html" 
"http://lists.common-lisp.net/pipermail/cl-xmpp-devel/2010-August/thread.html" 
"http://lists.common-lisp.net/pipermail/cl-xmpp-devel/2011-March/thread.html" 
nil

Some random patches mentioned there … I'll have to try those.

And all I wanted to do was send an IM whenever the buoy outside Boston Harbor is forecast to have big waves.

08 Apr 2014 10:05pm GMT

Patrick Stein: Clozure Common Lisp on the Raspberry Pi

I finally ordered a Raspberry Pi last week. It arrived yesterday. Last night, I installed the Raspbian (Debian for Raspberry Pi) distro. Then, I followed Rainer Joswig's excellent instructions on how to get Clozure Common Lisp running on the Raspberry Pi.

The only thing that I would add to Rainer's presentation is that Raspbian didn't come with m4(1) out of the box, but it is needed when you run make all to rebuild Clozure.

Hacks and glory await!

Also, Linux pro-tip: If you are thinking of renaming your only account on the machine, make sure you add the new name to a group with sudo privileges and make sure you get both /etc/passwd and /etc/shadow in the same sudo.

Edit: Rainer correctly points out that his article does say you will need to install m4 if you haven't already. I just missed it.

08 Apr 2014 3:18pm GMT

30 Mar 2014

Paul Khuong: Refactoring with LZ77: compression is compilation (?)

This post was written under the influence of coffee ice cream and espresso. It's a magical drink ;)

I don't really follow the compression scene, and only pay minimal attention to machine learning. Nevertheless, the "Compression is Learning" slogan feels more and more right to me. In this post, I'll explore another relationship that I find surprising: one between compression and compilation.

Five years ago, I took Marc Feeley's compiler class, and he let us choose our term project. I settled on generating traces from recursive algorithms (typical of cache-oblivious algorithms) and reordering them to get iterative code that was better suited to the first level of cache. I came up with a gnarly CL-accented research-quality prototype, but the result was surprisingly effective. Sadly, I never really managed to recover loops or function calls from the fully inlined trace, so even medium-size kernels would exhaust the instruction cache.

I believe I now see a practical and scalable solution, thanks to Artur Jez's work on "A really simple approximation of smallest grammar." His work might lead to a function-oriented analogue of trace compilation. Trace compilers are notoriously weak on recursion (recursive function calls don't map well to loops), so it would be nifty to have an alternative that identifies functions rather than loops.

This makes me quip that "(Trace) Compilation is Compression." We can see trace compilers as lazily unpacking traces from the source program, rewriting them a bit, and recompressing traces in an executable format. Admittedly, the analogy is a bit of a stretch for classical compilers: they are less aggressive in decompressing source code and directly translate from one compressed representation (a small source file may describe billions of execution paths) to another.

Anyway... this is extremely hypothetical and Jez's work is fun independently of my weekend speculations.

How to fail with LZ77

Once we cast a program trace as a sequence of opcodes (perhaps without operands, to expose more redundancy), it's obvious that reducing code size is "just" compression, and LZ-type algorithms quickly come to mind: they compress strings by referring to earlier substrings.

The heart of LZ77 is a loop that streams through the input sequence and finds the longest common subsequence earlier in the input. In practice, the search usually considers a fixed-size window; when I write LZ77 here, I instead refer to the theoretical approach with an unbounded search window. Repeated LCS searches on an unbounded window sounds slow, but, in sophisticated implementations, the bottleneck is sorting the input string to generate a suffix array.

I don't feel like being sophisticated and will implement a quadratic-time LCS search.

(defun longest-subseq (seq start)
  "Find the longest subseq of seq[start...] that begins before start."
  (let ((best 1)
        (index nil))
    (dotimes (i start (values index best))
      (let ((length (mismatch seq seq :start1 i :start2 start
                                      :test #'equal)))
        (assert length)
        (decf length i)
        (when (> length best)
          (setf best length
                index i))))))

Some backreferences clearly look like function calls.

CL-USER> (longest-subseq "abab" 0)
NIL ; There is no match to the left of abab
1   ;                                  ^
CL-USER> (longest-subseq "abab" 2)
0   ; There is a match for the second ab, starting
2   ; at index 0 and spanning 2 characters.

It's a bit surprising at first, but some also correspond to repeat loops.

CL-USER> (longest-subseq "abababa" 2)
0 ; There is a match for "ababa", starting
5 ; at index 0 and spanning 5 characters.

This last patch is strange because it's self-referential. However, if we decompress character by character, its meaning becomes obvious: we replicate the substring in [0, 1] to generate exactly 5 characters. This is basically a loop; we can handle partial copies with, e.g., rotation and entering the middle of the loop (like Duff's device).
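
To see why such self-reference is well defined, here is a hedged decoder sketch (mine, simpler than the factor struct used below): factors are (start . length) conses, anything else is a literal, and output is produced one character at a time, so a factor may copy characters it has itself just emitted.

(defun decode (factors)
  "Expand an LZ77 factorisation one character at a time."
  (let ((out (make-array 0 :element-type 'character
                           :adjustable t :fill-pointer 0)))
    (dolist (f factors (coerce out 'string))
      (if (consp f)
          (loop repeat (cdr f)       ; the copy may overlap the output
                for i upfrom (car f)
                do (vector-push-extend (aref out i) out))
          (vector-push-extend f out)))))

(decode '(#\a #\b (0 . 5))) returns "abababa": the copy catches up with characters it just wrote, exactly like a loop re-reading its own output.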

Lempel and Ziv's result tells us that we can look for references greedily in an unbounded window and obtain asymptotically optimal (proportional to the string's entropy) compression. Again, there are algorithms to do that in linear time - via radix sort - but I'll just bruteforce my way in cubic time.

(defstruct factor start length subseq)

(defun factorise (seq)
  (let ((index 0)
        (length (length seq)))
    (loop while (< index length)
          collect
          (multiple-value-bind (start length)
              (longest-subseq seq index)
            (prog1
                (if (null start)
                    (elt seq index)
                    (make-factor :start start :length length
                                 :subseq (subseq seq start (+ start length))))
              (incf index length))))))

Here's the problem with converting an LZ77 factorisation into calls and loops: there is no guarantee that patches nest sanely.

CL-USER> (factorise "ababac||bac")
(#\a #\b #S(FACTOR :START 0 :LENGTH 3 :SUBSEQ "aba") #\c #\| #\|
 #S(FACTOR :START 3 :LENGTH 3 :SUBSEQ "bac"))

This factorisation encodes "ababac||bac" by repeating "ab" two and a half times, inserting "c||", and then repeating "bac." The issue is that we wish to convert "ababa" into a loop, but the last patch would then need to enter that loop in its last iteration. We could also have the opposite problem, with a patch that only covers a prefix of an earlier patch (a few iterations of a loop). The sketch below shows how we might even have to deal with both issues in the same factor.

Jez's solution

Jez's paper analyses a method to recover a CFL with a single member, the string we wish to compress. The CFL is described by a fully deterministic (it produces exactly one string) CFG in Chomsky normal form, i.e., a straight-line program.

This "really simple" approach takes a step back from LZ compression and instead starts with the easiest, greedy, way to generate a straight-line program from a string: just pair characters as you encounter them, left-to-right.

For example, on "abab," this would pair "(ab)(ab)" and then "((ab)(ab))," and generate the following grammar:

G -> XX
X -> ab

We got lucky: the repetition synced up with the greedy pairing. That's not always the case; for example, "abcab" becomes "(ab)(ca)b" - which exposes no redundancy - rather than "(ab)c(ab)."

Jez addresses this issue by using LZ77 factorisation as an oracle to synchronise pairings.

First, we simplify the algorithm by assuming that there are no factors that begin exactly one character before their output. Such factors correspond to repetitions of that one character (e.g., "aaaa" = "a" x 4) and we can easily turn them into repetitions of a pair (e.g., "aa" x 2).

It's then simply a matter of not pairing when it would prevent the next character from synchronising with its backreference. For example, in "abcab," we'd skip pairing "ca" so that "ab" could pair like the first occurrence did. We also don't force a pair when the backreference only includes the first letter, but not the second. Finally, we opportunistically match consecutive unpaired letters.

Each such pass creates a new, shorter, string of literals and production rules. We iteratively apply the guided pairing loop until we're left with a single production rule.

That's it. Jez's paper is mostly concerned with showing that we only have to run LZ compression once and with proving suboptimality bounds. The key is that we can convert factors for the input into factors for the output, and that each pass shrinks the string by a multiplicative factor. I'll simplify things further by recomputing a factorisation (in cubic time!) from scratch in each pass.

Now, the code

;; specials for shared state (quick-and-dirty REPL style)

;; For each character position, whether it's the first or
;; second of a pair, or just a singleton (nil)
(defvar *state*)
;; The sequence to compress.
(defvar *sequence*)
;; Cursor in the sequence.
(defvar *index*)

(defmacro with-sequence ((seq) &body body)
  (let ((temp (gensym "SEQ")))
    `(let* ((,temp ,seq)
            (*sequence* ,temp)
            (*state* (make-array (length *sequence*)
                                 :initial-element nil))
            (*index* 0))
       ,@body)))

;; No pair begins here, but we might still fuse with the previous
;; singleton.
(defun single ()
  (let ((i *index*))
    (when (and (plusp i)
               (null (aref *state* (1- i))))
      (setf (aref *state* (1- i)) 'first
            (aref *state* i) 'second)))
  (incf *index*))

;; Force a pair.
(defun pair ()
  (let ((i *index*))
    ;; we mustn't be at the end of the sequence
    (assert (< (1+ i) (length *sequence*)))
    (setf (aref *state* i) 'first)
    (setf (aref *state* (1+ i)) 'second)
    (incf *index* 2)))

(defun sync-and-pair ()
  (multiple-value-bind (begin length)
      (longest-subseq *sequence* *index*)
    (cond ((null begin)
           ;; No factor; advance and merge greedily.
           (single))
          ((= begin (1- *index*))
           ;; Single-character repetition.
           (loop repeat length do (single)))
          (t
           (let ((end (+ begin length)))
             (assert (<= end (length *sequence*)))
             (assert (< begin (1- *index*)))
             (when (eql 'second (aref *state* begin))
               ;; The first character is the second half of a pair;
               ;; leave it by itself.
               (single)
               (incf begin))
             ;; Mimic the pairing decisions of the backref.
             (loop with expected = (+ *index* (- end begin))
                   for i from begin below end
                   do (ecase (aref *state* i)
                        (second)
                        (first (if (= (1+ i) end) ; Last character
                                   (single)       ; can't force
                                   (pair)))       ; a pair.
                        ((nil) (single)))
                   finally (assert (= *index* expected))))))))

My implementation is exceedingly naïve and runs in cubic time in the worst case. With a lot more work, the bottleneck (LZ77 factorisation) runs at the speed of sort... but constructing a suffix array would only get in the way of this post. Let's just see what pairing the code generates.

CL-USER> (with-sequence ("abcab")
           (loop while (< *index* (length *sequence*))
                 do (sync-and-pair)
                 finally (return *state*)))
#(FIRST SECOND NIL FIRST SECOND)
CL-USER> (with-sequence ("abababa")
           (loop while (< *index* (length *sequence*))
                 do (sync-and-pair)
                 finally (return *state*)))
#(FIRST SECOND FIRST SECOND FIRST SECOND NIL)
CL-USER> (with-sequence ("ababac||bac")
           (loop while (< *index* (length *sequence*))
                 do (sync-and-pair)
                 finally (return *state*)))
#(FIRST SECOND FIRST SECOND FIRST SECOND FIRST SECOND NIL FIRST SECOND)

In short, "abcab" becomes "(ab)c(ab)", "abababa" "(ab)(ab)(ab)a", and "ababac||bac" "(ab)(ab)(ac)(||)b(ac)".

Programs from pairs

So far, we've only made pairing decisions. The next step is to translate them into production rules (i.e., functions), and to merge equivalent rules to actually save space.

(defvar *counter* 0)
(defvar *rules* (make-array 0 :adjustable t :fill-pointer 0))

(defstruct rule
  (id (incf *counter*))
  a b)

;; Hash-consing naturally merges equivalent rules.
(fare-memoization:define-memo-function rule (a b)
  (let ((rule (make-rule :a a :b b)))
    (vector-push-extend rule *rules*)
    rule))

(defun reset-rules ()
  (setf *counter* 0)
  ;; Sorry, this is ugly.
  (fare-memoization:unmemoize 'rule)
  (fare-memoization:memoize 'rule)
  (setf *rules* (make-array 0 :adjustable t :fill-pointer 0)))

(defun pair-cfg (sequence state)
  (loop with acc = '()
        for i upfrom 0
        for character across sequence
        for choice across state
        do (ecase choice
             (second)
             (first (push (rule character (aref sequence (1+ i)))
                          acc))
             ((nil) (push character acc)))
        finally (return (coerce (nreverse acc) 'simple-vector))))

(defun compress-1 (sequence)
  (with-sequence (sequence)
    (loop with length = (length sequence)
          while (< *index* length)
          do (sync-and-pair)
          finally (return (pair-cfg *sequence* *state*)))))

On the examples above, we find:

CL-USER> (reset-rules)
#()
CL-USER> (compress-1 "abcab")
#(#S(RULE :ID 1 :A #\a :B #\b) #\c #S(RULE :ID 1 :A #\a :B #\b))
CL-USER> (compress-1 "abababa")
#(#S(RULE :ID 1 :A #\a :B #\b) #S(RULE :ID 1 :A #\a :B #\b)
  #S(RULE :ID 1 :A #\a :B #\b) #\a)
CL-USER> (compress-1 "ababac||bac")
#(#S(RULE :ID 1 :A #\a :B #\b) #S(RULE :ID 1 :A #\a :B #\b)
  #S(RULE :ID 2 :A #\a :B #\c) #S(RULE :ID 3 :A #\| :B #\|) #\b
  #S(RULE :ID 2 :A #\a :B #\c))

We only have to pair production rules further until the sequence consists of a single rule (or literal).

(defun refactor (sequence)
  (if (<= (length sequence) 1)
      sequence
      (refactor (compress-1 sequence))))

CL-USER> (refactor "ababac||bac")
#(#S(RULE
     :ID 8
     :A #S(RULE
           :ID 7
           :A #S(RULE
                 :ID 4
                 :A #S(RULE :ID 1 :A #\a :B #\b)
                 :B #S(RULE :ID 1 :A #\a :B #\b))
           :B #S(RULE
                 :ID 5
                 :A #S(RULE :ID 2 :A #\a :B #\c)
                 :B #S(RULE :ID 3 :A #\| :B #\|)))
     :B #S(RULE :ID 6 :A #\b :B #S(RULE :ID 2 :A #\a :B #\c))))

The "call graph" below shows there's a lot of sharing for such a short string.

Enough with the strings

I started by writing about compiling program traces, but so far I've only been working with strings of characters. Let's pretend the following PostScript file was written by someone who's never heard of functions or loops.

%!PS
270 315 translate
1.5 setlinewidth

gsave
newpath
  0 36 moveto
 72 0  rlineto
  0 72 rlineto
-72 0  rlineto
closepath
stroke
grestore

45 rotate gsave newpath 0 36 moveto 72 0 rlineto 0 72 rlineto -72 0 rlineto closepath stroke grestore
45 rotate gsave newpath 0 36 moveto 72 0 rlineto 0 72 rlineto -72 0 rlineto closepath stroke grestore
45 rotate gsave newpath 0 36 moveto 72 0 rlineto 0 72 rlineto -72 0 rlineto closepath stroke grestore
45 rotate gsave newpath 0 36 moveto 72 0 rlineto 0 72 rlineto -72 0 rlineto closepath stroke grestore
45 rotate gsave newpath 0 36 moveto 72 0 rlineto 0 72 rlineto -72 0 rlineto closepath stroke grestore
45 rotate gsave newpath 0 36 moveto 72 0 rlineto 0 72 rlineto -72 0 rlineto closepath stroke grestore
45 rotate gsave newpath 0 36 moveto 72 0 rlineto 0 72 rlineto -72 0 rlineto closepath stroke grestore

showpage

Phew, programming is hard! That's a lot of copy-paste to produce a simple pattern.

Let's see what our (simplified) implementation of Jez's algorithm can do.

CL-USER> (reset-rules)
#()
;; READ, the poor man's tokenizer.
CL-USER> (refactor #(
270 315 translate
1.5 setlinewidth

gsave
newpath
  0 36 moveto
 72 0  rlineto
  0 72 rlineto
-72 0  rlineto
closepath
stroke
grestore

45 rotate gsave newpath 0 36 moveto 72 0 rlineto 0 72 rlineto -72 0 rlineto closepath stroke grestore
45 rotate gsave newpath 0 36 moveto 72 0 rlineto 0 72 rlineto -72 0 rlineto closepath stroke grestore
45 rotate gsave newpath 0 36 moveto 72 0 rlineto 0 72 rlineto -72 0 rlineto closepath stroke grestore
45 rotate gsave newpath 0 36 moveto 72 0 rlineto 0 72 rlineto -72 0 rlineto closepath stroke grestore
45 rotate gsave newpath 0 36 moveto 72 0 rlineto 0 72 rlineto -72 0 rlineto closepath stroke grestore
45 rotate gsave newpath 0 36 moveto 72 0 rlineto 0 72 rlineto -72 0 rlineto closepath stroke grestore
45 rotate gsave newpath 0 36 moveto 72 0 rlineto 0 72 rlineto -72 0 rlineto closepath stroke grestore

showpage
))
#(#S(RULE
     :ID 35
   ...))

35 functions that each contain two literals or function calls. Not bad (: It's an actual SMOP to convert this grammar into a PostScript program.

(defun emit-one-rule (stream rule)
  (flet ((emit-subrule (x)
           (if (rule-p x)
               (format nil "R~A" (rule-id x))
               (format nil "~(~a~)" x))))
    (format stream "/R~A { ~A ~A } def~%"
            (rule-id rule)
            (emit-subrule (rule-a rule))
            (emit-subrule (rule-b rule)))))

(defun emit-rules (stream rules)
  (map nil (lambda (rule)
             (emit-one-rule stream rule))
       rules)
  (format stream "R~A~%" (rule-id (elt rules (1- (length rules))))))

CL-USER> (emit-rules *standard-output* *rules*)
/R1 { 270 315 } def
/R2 { translate 1.5 } def
/R3 { setlinewidth gsave } def
/R4 { newpath 0 } def
/R5 { 36 moveto } def
/R6 { 72 0 } def
/R7 { rlineto 0 } def
/R8 { 72 rlineto } def
/R9 { -72 0 } def
/R10 { rlineto closepath } def
/R11 { stroke grestore } def
/R12 { 45 rotate } def
/R13 { R1 R2 } def
/R14 { R3 R4 } def
/R15 { R5 R6 } def
/R16 { R7 R8 } def
/R17 { R9 R10 } def
/R18 { R11 R12 } def
/R19 { gsave R4 } def
/R20 { R11 showpage } def
/R21 { R13 R14 } def
/R22 { R15 R16 } def
/R23 { R17 R18 } def
/R24 { R17 R20 } def
/R25 { R21 R22 } def
/R26 { R23 R19 } def
/R27 { R22 R24 } def
/R28 { R25 R26 } def
/R29 { R22 R26 } def
/R30 { R28 R29 } def
/R31 { R29 R29 } def
/R32 { R29 R27 } def
/R33 { R30 R31 } def
/R34 { R31 R32 } def
/R35 { R33 R34 } def
R35

This second PostScript program comprises 211 tokens, rather than the original's 156, but 105 of those are "{ } def" noise to define functions. Arguably, the number of meaningful tokens was reduced from 156 to slightly more than 100. Crucially, the output remains the same.

Stack programs seem particularly well suited to this rewriting approach: explicit operands (e.g., registers) introduce trivial discrepancies. In a VM, I would consider "compressing" the opcode stream separately from the operands. Compiling only the former to native code would already reduce interpretative overhead, and extracting function calls from the instruction stream should avoid catastrophic size explosions.

There are several obvious deficiencies in the direct LZ77-guided pairing. Just breaking Chomsky normalisation to inline single-use functions would already help. It might also make sense to special-case repetitions as repeat n loops, instead of hiding that in \(\log n\) levels of recursion. In a real program, it would also be useful to inline the bottom-most rules so as not to drown in control flow. The thing is, destroying structure in the name of performance is well understood compiler tech; recovering it efficiently was the hard part.

30 Mar 2014 7:27pm GMT

26 Mar 2014

Hans Hübner: No ECLM in 2014

As many of you may know, there was a plan to hold the European Common Lisp Meeting in Berlin in October 2014. I was part of the organizing team, and we've tried our best to find high-quality speakers to make the event at least as good as the previous ECLMs.
Unfortunately, as you will have gathered from the title and the wording above, we've failed. We simply did not find enough interesting commercial projects that have not been presented at one of the previous ECLMs. As we do not want to compromise on the content, we have thus decided that there will be no ECLM this year. We hope to learn about new and exciting developments in the course of this year so that we can have an ECLM in 2015.

26 Mar 2014 10:03am GMT