19 Dec 2014

Luís Oliveira: LOOP quiz

  1. Does (loop for i below 10 finally (return i)) return 9 or 10?
  2. Does (loop for i upto 10 finally (return i)) return 10 or 11?
  3. What does (loop for i below 10 for j upto 10 finally (return (list i j))) return?
  4. What about (loop for i below 10 and j upto 10 finally (return (list i j)))?


I stumbled upon the semantics of this last example in a recent bugfix and thought it was worth sharing. (Reminded me of the joke about what's hard in CS, too.)

It turns out that LOOP's FOR ... AND not only mimics LET (rather than LET*) in terms of binding visibility; it also influences when the loop termination checks take place. That was new to me. I initially expected examples 3 and 4 to return the same values. What about you? Which ones, if any, did you get wrong? :-)
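
To make the LET versus LET* analogy concrete, here is a rough sketch with toy forms of my own (they are not the quiz expressions, and they sidestep the termination-check subtlety the quiz is really about):

;; Serial FOR clauses: J's form sees the current I, much like LET*.
(loop for i from 0 below 3
      for j = (* i 10)
      collect (list i j))
;; => ((0 0) (1 10) (2 20))

;; FOR ... AND: the clauses are stepped in parallel, much like LET,
;; so J is stepped from the previous value of I.
(loop for i from 0 below 3
      and j = -1 then i
      collect (list i j))
;; => ((0 -1) (1 0) (2 1))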

P.S.: LOOP for Black Belts is my favorite LOOP tutorial.

19 Dec 2014 6:27pm GMT

17 Dec 2014

Quicklisp news: December 2014 Quicklisp dist update now available

New projects:

Updated projects: architecture.service-provider, asdf-linguist, asteroids, avatar-api, babel, basic-binary-ipc, caveman, chunga, cl-ana, cl-async, cl-async-future, cl-autowrap, cl-cffi-gtk, cl-closure-template, cl-conspack, cl-enumeration, cl-fad, cl-freetype2, cl-fuse, cl-gd, cl-gendoc, cl-glfw3, cl-inflector, cl-json, cl-libevent2, cl-logic, cl-mediawiki, cl-opengl, cl-pass, cl-plplot, cl-ppcre, cl-quickcheck, cl-read-macro-tokens, cl-rethinkdb, cl-rlimit, cl-sdl2, cl-unicode, cl-who, clack, clazy, clip, clod, closer-mop, clsql-helper, clss, coleslaw, colleen, com.informatimago, commonqt, consix, crane, curry-compose-reader-macros, daemon, dbus, defpackage-plus, documentation-template, drakma, drakma-async, eco, envy, esrap, esrap-liquid, external-program, fast-http, fast-io, flexi-streams, form-fiddle, fset, gbbopen, gendl, glyphs, green-threads, hdf5-cffi, helambdap, hunchensocket, hunchentoot, iolib, jsown, lass, local-time, log4cl, lquery, mcclim, mel-base, mgl-pax, modularize-interfaces, myway, new-op, ningle, plump, plump-tex, policy-cond, pp-toml, prove, pzmq, qlot, qmynd, qtools, quri, readable, restas, rock, sdl2kit, serapeum, sheeple, slime, smug, spinneret, staple, stumpwm, sxql, telnetlib, towers, trivial-ldap, trivial-mimes, trivial-raw-io, utilities.print-items, verbose, vom, weblocks, weblocks-stores, weblocks-tree-widget, weblocks-utils, websocket-driver, wookie, xhtmlambda, yason, zs3.

Removed projects: cl-api, cl-binaural, cl-proc, lisp-magick, okra.

To get this update, use (ql:update-dist "quicklisp").

This Quicklisp update is supported by my employer, Clozure Associates. If you need commercial support for Quicklisp, or have any other Common Lisp programming needs, it's available via Clozure Associates.



17 Dec 2014 8:48pm GMT

Zach Beane: ELIZA from 1966

A few days ago Jeff Shrager posted that James Markevitch had translated some 1966 BBN paper tape source code containing the oldest known Eliza program. (Jeff's site, elizagen.org, tracks the genealogy of Eliza.)

Picture from elizagen.org

(doctor
   (lambda nil
      (prog (sentence keystack phraselist)
               (setsepr "
" " " " ")
               (setbrk "." "," ? | - + "(" 
")" L32 @ BS L14)
               (setq flipflop 0)
               (control t)
               (sentprint (quote (tell me your troubles"." 
please terminate input with an enter)))
               (setnone)
         a     (prin1 xarr)
               (makesentence)
               (cond
                  ((equal sentence (quote (goodbye)))
                     (return (sentprint (quote (it's been 
my pleasure))))))
               (analyze)
               (terpri)
               (go a)
         )))

The 1966 Eliza code is on github.

Jeff's post prompted some historical context from Jeff Barrett:

The original Eliza was moved to the ANFS Q32 at SDC (one of the (D)ARPA block grant sites) in the mid 1960's. The programmer responsible was John Burger, who was involved with many early AI efforts. Somehow, John talked to one of the Playboy writers and the next thing we knew, there was an article in Playboy, much to Weizenbaum's and everybody else's horror. We got all sorts of calls from therapists who read the article and wanted to contribute their "expertise" to make the program better. Eventually we prepared a stock letter and phone script to put off all of this free consulting.

The crisis passed when the unstoppable John Burger invited a husband and wife, both psychology profs at UCLA, to visit SDC and see the Doctor in action. I was assigned damage control and about lost it when both visitors laughed and kept saying the program was perfect! Finally, one of them caught their breath and finished the sentence: "This program is perfect to show our students just exactly how NOT to do Rogerian* therapy." (*I think Rogerian was the term used, but it's been a while.)

A little later we were involved in the (D)ARPA Speech Understanding Research (SUR) Program and some of the group was there all hours of day and night. Spouses and significant others tended to visit, particularly in the crazy night hours, and kept getting in our way. We would amuse them by letting them use Eliza on the Q32 Time Sharing System. One day, the Q32 became unavailable in those off hours for a long period of time. We had a Raytheon 704 computer in the speech lab that I thought we could use to keep visitors happy some of the time. So one weekend I wrote an interpretive Lisp system for the 704 and debugged it the next Monday. The sole purpose of this Lisp was to support Eliza. Someone else adapted the Q32 version to run on the new 704 Lisp. So in less than a week, while doing our normal work, we had a new Lisp system running Eliza and keeping visitors happy while we did our research.

The 704 Eliza system, with quite a different script, was used to generate a conversation with a user about the status of a computer. The dialogue was very similar to one with a human playing the part of a voice recognition and response system where the lines are noisy. The human and Eliza dialogues were included/discussed in A. Newell, et al., "Speech Understanding Systems; Final Report of a Study Group," Published for Artificial Intelligence by North-Holland/ American Elsevier (1973). The content of that report was all generated in the late 1960s but not published immediately.

The web site, http://www.softwarepreservation.org/projects/LISP/, has a little more information about the Raytheon 704 Lisp. The SUR program was partially funded and ongoing by 1970.

17 Dec 2014 12:57pm GMT

12 Dec 2014

Zach Beane: The unknown dependency tree

After posting about the Quicklisp verbosity conundrum, a few people emailed me with variations on this theme: "Since Quicklisp knows what the dependencies of a system are, can't you just load those quietly first and then load your project verbosely?"

The problem is that the premise is not true. Quicklisp has an idea about the dependencies of Quicklisp-provided systems, but not of any other systems available through ASDF.

And it's actually pretty difficult to answer the question, for a given system, "What systems must be loaded first?" It's not as simple as loading the system definition and then looking at it. The act of loading the system definition may trigger the loading of other systems, which then load other systems, which then load other systems. System definition files are not simply data files. They're Lisp programs that can do arbitrary computation and manipulation of the environment.
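
For example, a hypothetical foo.asd might look like the sketch below (the helper system name is made up for illustration). Merely loading the definition pulls in other systems before ASDF ever gets to the :depends-on list:

;;;; foo.asd -- a hypothetical system definition, for illustration only

;; Loading this file already loads another system as a side effect, so the
;; full set of dependencies can't be discovered by reading the file as data.
(eval-when (:compile-toplevel :load-toplevel :execute)
  (asdf:load-system "some-build-helper"))   ; hypothetical helper system

(asdf:defsystem "foo"
  :defsystem-depends-on ("cffi-grovel")     ; loaded while the DEFSYSTEM form is processed
  :depends-on ("alexandria" "babel")
  :components ((:file "foo")))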

Quicklisp knows about its system dependency structures because I load every system in Quicklisp and record what got loaded to support it. That dependency structure is then saved to a file, and that file is fetched by the Quicklisp client as part of a Quicklisp dist. This data is computed and saved once, on my dist-constructing computer, not each time on the Quicklisp client computer. The data is evident whenever you see something like "To load foo, installing 5 Quicklisp releases: …"

But that "installing 5 Quicklisp releases" only works when foo itself is provided by Quicklisp. No dependency info is printed otherwise.

Quicklisp then loads foo by calling asdf:load-system. If some system that foo requires isn't present, ASDF signals an asdf:missing-dependency error, which Quicklisp handles. If Quicklisp knows how to fetch the missing dependency, it does so, then retries loading foo. Otherwise, the missing dependency error is fatal.
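
In simplified form, the retry flow looks something like the sketch below. This is not Quicklisp's actual client code, and INSTALL-MISSING-DEPENDENCY is a hypothetical stand-in for the "is this something a dist provides, and if so, fetch it" logic:

;; Simplified sketch of fetch-on-missing-dependency with a retry.
(defun load-with-fetching (system-name)
  (loop
    (handler-case (return (asdf:load-system system-name))
      (asdf:missing-dependency (condition)
        (unless (install-missing-dependency condition)
          ;; Not something the dist provides, so the error is fatal after all.
          (error condition))))))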

Ultimately, though, only the top-level asdf:load-system can be wrapped with the verbosity-controlling settings. The fetching-on-demand error handling only happens the first time a system is installed, so it's not a predictable point of intervention. After that first time, the system is found via asdf:find-system and no error handling takes place.

Writing this up has given me some twisted ideas, so maybe a fix is possible. I'll keep you posted.

12 Dec 2014 11:56pm GMT

11 Dec 2014

Luís Oliveira: paredit

Taylor Campbell's paredit is one of those Emacs extensions that I can't live without. In a nutshell, it forces you to deal with Lisp code exclusively via operations that don't introduce unbalanced parentheses (or other similar violations of structure). The genius of this approach is that it completely eliminates the step of making sure parentheses are properly balanced after you write or edit a piece of code. After you get used to paredit, performing - or even watching - manual parenthesis balancing becomes painful.

Recently, I've come across these two introductions to paredit:

  1. Emacs Rocks! Episode 14: Paredit
  2. The Animated Guide to Paredit

So, if you're still not using paredit, have a look at those and give it a try. At first you might feel like the karate kid doing frustrating chores - you can always take a break with M-x paredit-mode - but I promise it'll soon pay off!

11 Dec 2014 9:20pm GMT

10 Dec 2014

Zach Beane: A verbosity conundrum

Here's the scoop: Quicklisp hides too much information when building software, and it can't easily be controlled.

That is partly intentional. Remember the post about clbuild a few days ago? The information hiding is a reaction to the (often joyous) sense, when using clbuild, that you were on the cutting, unstable edge of library development, likely at any given time to hack on a supporting library in addition to (or instead of) your primary application.

To muffle that sense, I wanted the libraries Quicklisp provided to be loaded quietly. Loaded as though they were building blocks, infrastructure pieces that can be taken for granted. Loaded without seeing pages of style-warnings, warnings, notices, and other stuff that you shouldn't need to care about. (I realize, now, that this voluminous output isn't common to all CL implementations, but even so, one of the loudest implementations is also one of the most popular.)

I still feel good about the concept. I don't usually want to see supporting library load output, but if I do, there's always (ql:quickload "foo" :verbose t).

But the default quiet-output mode of quickload interacts with something else in a way I didn't expect, a way I don't like, and a way that I don't really know how to fix.

I switched from using (asdf:load-system "foo") to using (ql:quickload "foo"). This works because Quicklisp's quickload can "see" any system that can be found via ASDF, even if it isn't a system provided by Quicklisp. Quickload also automatically fetches, installs, and loads Quicklisp-provided systems on demand, as needed, to make the system load. It's super-convenient.

Unfortunately, that now means that the quiet-output philosophy is being applied to very non-infrastructure-y code, the code I'm working on at the moment, the code where I really do want to know if I'm getting warnings, style-warnings, notes, and other stuff.

It didn't bother me a lot at first. When you're writing something interactively in slime, C-c C-c (for a single form) and C-c C-k (for an entire file) will highlight the things you need to care about. But over time I've really started to miss seeing the compile and load output of my own projects differently, and more verbosely, than the output from "infrastructure." It would be nice to be able to see and fix new warnings I accidentally introduce, in code that I'm directly responsible for.

Unfortunately, I don't know enough about ASDF to know if it's possible, much less how to implement it.

The special variables and condition handlers that implement quiet-output are installed around a single toplevel call to asdf:load-system. Everything after that point is handled by ASDF. Loading a given system may involve loading an unknown mix of Quicklisp-provided systems and other systems. I can think of many ways to identify systems as originating from Quicklisp, but even if they're identified as such, I can't think of a way to intercede and say "When loading a system provided by Quicklisp, be quiet, otherwise, be verbose."
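
A minimal sketch of that kind of wrapping, assuming "quiet" just means discarding printed output and muffling warnings (Quicklisp's real handlers are more selective, but they wrap the same single top-level call):

;; Everything ASDF loads underneath this one call is silenced; there is no
;; per-system granularity at this level.
(defun quiet-load (system-name)
  (let ((*standard-output* (make-broadcast-stream))   ; discard load chatter
        (*error-output*    (make-broadcast-stream)))
    (handler-bind ((warning #'muffle-warning))
      (asdf:load-system system-name))))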

Ideally, of course, it would be nice to be able to be totally verbose, totally quiet, or a mix of the two, depending on some property of a system. But at the moment, I just don't see where I can hook into things temporarily to implement the policy I want.

If you have any ideas about how this might be done, please email me at xach@xach.com. Working proof-of-concept code would be the most welcome form of help; I don't have much time to chase down a lot of "have-you-tried-this?" speculation. But I'll gratefully take whatever I can get.

10 Dec 2014 10:10pm GMT

Zach Beane: elPrep 2.0 available, and ported to SBCL

I wrote a bit about elPrep, a "high-performance tool for preparing SAM/BAM/CRAM files for variant calling in DNA sequencing pipelines," back in August. The initial version was LispWorks-only.

There's a new version 2.0 available, and, along with new features and bugfixes, elPrep now supports SBCL. (Charlotte Herzeel's announcement cautions that "performance on LispWorks 64bit editions is generally better" and "the use of servers with large amounts of RAM is also more convenient with LispWorks.")

The elPrep source is available on github.

10 Dec 2014 3:12pm GMT

09 Dec 2014

Zach Beane: cl-async updates

Andrew Lyon has updated cl-async to use libuv as the backend, switching from libevent. This is an incompatible change, so if you use cl-async, be sure to check the upgrade guide.

There is some discussion about the change on reddit.

09 Dec 2014 1:04pm GMT

08 Dec 2014

Zach Beane: clbuild and Quicklisp

Listen, friends, to the story of clbuild and how it influenced the design and implementation of Quicklisp.

I can't tell a full story of clbuild, since I didn't use it very much, but here's what I remember.

clbuild was trivial to install. Download a single file, a shell script, to get started. From there, clbuild could fetch the source code of dozens of interesting projects and set up an environment where it was easy to use that code to support your own projects. It was also trivial to hack on most of the projects, since in most cases you were getting a source control checkout. It was nice to be able to hack directly on a darcs or git checkout of a useful library and then send patches or pull requests upstream.

Luke Gorrie created it, and, like many of his projects, quickly encouraged a community of contributors and hackers that kept evolving and improving clbuild.

clbuild was fantastic in many ways. So why didn't I use it? Why create Quicklisp, which lacks some of the best features of clbuild?

My biggest initial issue was the firewall at work.

clbuild checked out code from various version control systems, and some of them used ports outside the range allowed by a typical corporate firewall. I was limited almost exclusively to HTTP or HTTPS service.

A subsequent problem was obtaining all the prerequisite version control tools. Although git and github are dominant today, in 2007 cvs, darcs, svn, and several other version control systems were in much wider use. It took a series of errors about missing commands before I could finally get things rolling.

In 2007, for 20 different projects, there might be 20 different computers hosting them. In 2014, it's more likely that 18 are hosted on github. Because of the diversity of hosting back then, it wasn't all that uncommon for a particular source code host to be unavailable. When that happened to a critical project's host, it could mean that bootstrapping your project from clbuild was dead in the water, waiting for the host to come back.

Even if everything was available, there was no particular guarantee that everything actually worked together. If the package structure of a particular project changed, it could break everything that depended on it, until everything was updated to work together again.

Pulling from source control also meant that the software you got depended heavily on the time you got it. If you had separate clbuild setups on separate computers, things could get out of sync unless you made an effort to sync them.

One final, minor issue was that clbuild was Unix-only. If you wanted to use it on Windows, you had to set up a Unix-like environment alongside your Lisp environment so you could run shell scripts and run cvs, darcs, svn, etc. as though they were Unix command-line programs. This didn't affect me personally, since I mostly used Linux and Mac OS X. But it did limit the audience of clbuild to a subset of CL users.

Elements of Quicklisp's design are in reaction to these issues.

Quicklisp's software structure shifts the tasks of fetching from source control, building, and distributing software from the end user to a central server. Rather than all sources being continuously updated, updates happen periodically, typically once per month.

This is based on the observation that although there are intermittent problems with software incompatibility and build-breaking bugs, most of the time things work out ok. So the Quicklisp process is meant to slow down the pace of updates and "freeze" a configuration of the Common Lisp project universe at a working, tested, known-good point in time.

In Quicklisp terms, that universe is called a dist, and a dist version represents its frozen state at a particular point in time. The software is checked out of every source control system, archived into a .tar.gz file, built and tested, and then finally frozen into a set of HTTP-accessible archive files with a few metadata and index files. Fetching libraries is then a matter of connecting to the central server via HTTP to get the metadata and archives. There are no source-control programs to install or firewall ports to open. The build testing means there is a reduced risk of one project's updates being fatally out-of-sync with the rest of the project universe.

By default, a Quicklisp installation uses the latest version of the standard dist, but a short, easy command gets you a specific version instead, either at installation time or later. So even if you install Quicklisp multiple times on multiple computers, you can make sure each has the same software "universe" available for development. The uncertainty introduced by the time of installation or update can be completely managed.
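
For example, switching an installation to a particular dist version looks roughly like this (using the December 2014 dist mentioned above):

;; Pin this Quicklisp installation to the 2014-12-17 dist; the date in the
;; distinfo URL selects which frozen snapshot to use.
(ql-dist:install-dist
 "http://beta.quicklisp.org/dist/quicklisp/2014-12-17/distinfo.txt"
 :replace t :prompt nil)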

This works even for the oldest dist versions; if you started a project in October, 2010, you can still go back to that state of the Common Lisp library world and continue work. That's because no archive file is ever deleted; it's made permanently available for just this purpose.

In the mid-2000s, it would have been hard to make a design like that very reliable for a reasonable cost. Amazon web services have made it cheap and easy. I have had only a few minutes of HTTP availability issues with Amazon in the past four years. I've never lost a file.

Quicklisp mitigates the Unix-only issue by using Common Lisp for the installation script and Common Lisp as the library management program. It fetches via HTTP, decompresses, and untars archives with code that has been adapted to work on each supported Common Lisp on each platform. No Unix shell or command-line tools are required.

There are still some bugs and issues with Quicklisp on Windows, because it doesn't receive as much testing and use as non-Windows platforms, but it's just as easy to get started on Windows as it is anywhere else.

Despite fixing some of my personal issues with clbuild, Quicklisp is missing a big, key feature. When using clbuild, it's easy to get to the forefront of development for the universe of CL software. You can work with the bleeding-edge sources easily and submit bug fixes and features. With Quicklisp, it's harder to find out where a particular library came from, and it's harder to get a source-control copy of it suitable for hacking and tweaking. It's harder to be a contributor, rather than just a consumer, of projects that aren't your own.

I'd like to improve the situation in Quicklisp, but some of the old obstacles remain. It would require a bunch of Unix-only or Unix-centric command-line tools to be installed and properly configured. Maybe that's not such a big deal, but it's loomed large in my mind and blocked progress. Maybe someone will take a look at the Quicklisp project metadata and write a nice program that makes it easy to combine the best of clbuild and Quicklisp. If you do, please send me a link.

PS. clbuild lives on in clbuild2. It looks like it's still active, with commits from just a few months ago. Maybe that's the right thing to use when the hacking urge strikes? I'll have to give it a try.

08 Dec 2014 2:50pm GMT

05 Dec 2014

Gábor Melis: INCLUDE locative for PAX

I'm getting so used to the M-. plus documentation generation hack that is MGL-PAX that I use it for all new code, which highlighted an issue with code examples.

The problem is that [the ideally runnable] examples had to live in docstrings. Small code examples presented as verifiable transcripts within docstrings were great, but developing anything beyond a couple of forms of code in docstrings is insanity, and copy-pasting code from source files into docstrings is an OOAO (Once and Only Once) violation.

In response to this, PAX got the INCLUDE locative (see the linked documentation) and became its own first user at the same time. In a nutshell, the INCLUDE locative can refer to non-Lisp files and to sections of Lisp source files, which makes it easy to add code examples and external stuff to the documentation without duplication. As always, M-. works as well.
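
As a rough idea of the shape this takes (a sketch based on my reading of the linked documentation; the entry name and keyword arguments here are assumptions, not copied from PAX's reference), a DEFSECTION entry can splice in the region of a source file between two definitions:

;; Rough sketch only: include the source between two reference points in the
;; generated documentation, wrapped in a markdown code block.
(defsection @example-section (:title "Example")
  (example-code
   (include (:start (foo function) :end (end-of-foo-example variable))
            :header-nl "```commonlisp"
            :footer-nl "```")))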

05 Dec 2014 11:00pm GMT