11 Dec 2018

Planet Lisp

Quicklisp news: December 2018 Quicklisp dist update now available

New projects:

Updated projects: alexandria, april, architecture.builder-protocol, architecture.hooks, asdf-viz, bst, cambl, cari3s, carrier, caveman, cffi, chronicity, cl-ana, cl-bibtex, cl-cffi-gtk, cl-charms, cl-cognito, cl-collider, cl-conllu, cl-dbi, cl-digraph, cl-environments, cl-epoch, cl-hamcrest, cl-json-helper, cl-ledger, cl-markdown, cl-patterns, cl-python, cl-quickcheck, cl-str, cl-tetris3d, cl-tiled, cl-toml, cl-unification, clazy, clip, closer-mop, clx, codex, cover, croatoan, dbus, de.setf.wilbur, definitions, docparser, dufy, eclector, event-emitter, f2cl, femlisp, fiasco, flare, float-features, function-cache, fxml, gamebox-math, gendl, genhash, glsl-toolkit, golden-utils, harmony, helambdap, http-body, hu.dwim.web-server, ip-interfaces, ironclad, jonathan, jsonrpc, lack, lisp-binary, lisp-chat, local-time, maiden, mcclim, mmap, opticl, overlord, parachute, parenscript, parser.common-rules, petalisp, pgloader, plexippus-xpath, plump, plump-sexp, postmodern, protest, protobuf, qbase64, qlot, quri, racer, regular-type-expression, safety-params, sc-extensions, serapeum, shadow, simple-tasks, sly, snakes, snooze, staple, stealth-mixin, stefil, stumpwm, the-cost-of-nothing, time-interval, trivial-benchmark, trivial-utilities, umbra, utilities.binary-dump, vgplot, websocket-driver, with-c-syntax, woo, zacl.

To get this update, use (ql:update-dist "quicklisp")

Enjoy!

11 Dec 2018 3:14pm GMT

08 Dec 2018


Nicolas Hafner: About Making Games in Lisp - Gamedev

Recently there's been a bit of a storm brewing over a rather opinionated article about game development with Lisp. After reading Chris Bagley's very well done response, I thought I'd share my perspective on what it's like to actually make games with Lisp. I'm not writing this with the intent of convincing you of any particular argument, but rather to give some insight into my process and what the difficulties and advantages are.

I'll start this off by saying that I've been working with games in some capacity as long as I can remember. My programming career started out when I was still a young lad and played freeware games on a dinky Compaq laptop with Windows 95. Making really terrible games is almost all I did in terms of programming all throughout primary school. I branched out into other software after that, but making games is something that has always kept sticking around in my mind.

Naturally, once I had learnt a new programming language, it didn't take long before I wanted to make games again. And of course, because I'm a stubborn idiot, I decided to build an engine from scratch - it wasn't my first one, either. This is what led to Shirakumo's Trial engine.

Since then, the team and I have built a couple of "games" with Trial:

None of these are big, none of these are great. They're all more experiments to see what can be done. What I've learned most of all throughout all my time working on games is that I'm not good at making games. I'm decent at making engines, which is a very, very different thing.

If you're good at making games, you can make an engaging game with nothing more than format, read-line, and some logic thrown in. If you're bad at making games like I am, you build a large engine for all the features you imagine your game might need, and then you don't know how to proceed and the project dies. You may notice that this also has a bit of an implication, namely that for making the game part of a game, the choice of language matters very little. It matters a lot for the engine, because that's a software engineering problem.
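To make that concrete, here's a minimal, entirely hypothetical sketch of a complete, if tiny, game built from nothing but format, read-line, and some logic:

```lisp
;; A number-guessing game: the "engine" is FORMAT and READ-LINE, the
;; game is the logic. GUESS-RESULT is kept pure so the rules stay
;; separate from the I/O loop.
(defun guess-result (secret guess)
  "Return :higher, :lower or :correct for GUESS against SECRET."
  (cond ((null guess) :not-a-number)
        ((< guess secret) :higher)
        ((> guess secret) :lower)
        (t :correct)))

(defun play (&optional (secret (1+ (random 100))))
  (format t "I picked a number between 1 and 100.~%")
  (loop (format t "Your guess: ")
        (finish-output)
        (let ((result (guess-result secret
                                    (parse-integer (read-line)
                                                   :junk-allowed t))))
          (format t "~(~a~)!~%" result)
          (when (eq result :correct)
            (return)))))
```

Whether that loop is engaging is, of course, exactly the part that has nothing to do with the language.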

I'm writing this because this is, to me, an important disclaimer: I don't really know how to make games. I can write code, program mechanics, make monkeys jump up and down on your screen, but that's not the meat of a game, and usually not why people play games either. Thus my primary difficulty in making games has absolutely nothing to do with the technology involved. Even if I were using Unity or Unreal, this problem would not go away. It was the same when I was last writing games in Java, and it was the same when I was using GameMaker.

Now, why am I not using a large, well made engine to make games? Is it because I've been tainted by Lisp and don't want to use other languages in my free time anymore? Is it because the game making problem would persist anyway so what's the point? Is it because I like making engines? Is it because I'm stupid? Well, the answers are yes, yes, yes, and yes.

Alright, so here we are: Lisp is the only choice left, I like making engines and don't know how to make games, so what are the difficulties and advantages of doing that?

As you might know, I'm currently working on a game, so I have a lot of immediate thoughts on the matter. What bothers me the most is that Trial currently doesn't have a built-in, usable scene editor. For every game so far we had to either build an editor from scratch, or place things manually in code. Both of these options suck, and making an editor that isn't a huge pain to use takes a long, long time. Part of the issue is that Trial currently does not have a UI toolkit to offer. You can use it with the Qt backend and use that to offer a UI, but I really don't want to force using Qt just for an editor. Not to mention that we need in-game UI capabilities anyway.

All of the UI toolkits I've seen out there are either a giant blob of foreign code that I really don't want to bind to, or they're McCLIM, which won't work with OpenGL for what I project to be the next decade or more. So, gotta do it myself again. I have some good ideas for a design that's different and actually very amenable to games and their unique resolution constraints, but making a UI toolkit is a daunting effort that I have so far not felt the energy to tackle.

Aside from the lack of an editor and UI toolkit, I actually have very few complaints with the current state of Trial for the purposes of my game. It handles asset management, shaders and effects pipelines, input and event delivery, and so forth. A lot of the base stuff that makes OpenGL a pain in the neck has been taken care of.

That said, there are a lot of things I had to implement myself as well that could be seen as something the engine should do for you: map saving and loading, save states, collision detection and resolution, efficient tile maps. Some of the implementations I intend to backport into Trial, but other things that might seem simple at first glance, like maps and save states, are actually incredibly domain-specific, and I'm currently unconvinced that I can build a good, generic system to handle them.

One thing that I think was a very good decision for Trial that I still stand by is the idea to keep things as modular and separate as possible. This is so that, as much as possible, you won't be forced to use any particular feature of the engine and can replace them if your needs demand such. If you know anything at all about architecture, this is a very difficult thing to do, and something that I believe would be a huge lot more difficult if it weren't implemented in Lisp. Modularity, re-usability, and extensibility are where Lisp truly shines.

Unfortunately for us, games tend to need a lot of non-reusable, very problem-specific solutions and implementations. Sure, there are components that are re-usable, like a rendering engine, physics simulations, and so forth. But even within those, there is a tremendous effort in implementing game-specific mechanics and features that can't be ported elsewhere.

But, that's also great for me because it means I can spend a ton of time implementing engine parts without having to worry about actually making a game. It's less great for the chances of my game ever being finished, but we'll worry about that another time.

Right now I'm working on implementing a quest and dialog system in the game, which is proving to be an interesting topic on its own. Lisp gives me a lot of nifty tools here for the end-user, since I can wrap a lot of baggage up in macros that present a very clean, domain-geared interface. This very often alleviates the need to write scripting languages and parsers. Very often, but not always however. For the dialog, the expected amount of content is so vast that I fear that I can't get away with using macros, and need to implement a parser for a very tailored markup language. I've been trying to get that going, but unfortunately for reasons beyond me my motivation has been severely lacking.
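As an illustration of the macro side (a hypothetical sketch, not the actual interface in my game), a domain-geared dialog definition can compile down to plain data at macroexpansion time:

```lisp
;; DEFINE-DIALOG is a hypothetical example of wrapping baggage in a
;; macro: the writer sees only (speaker "line") pairs, while the
;; expansion builds an ordinary list of conses the engine can consume.
(defmacro define-dialog (name &body lines)
  `(defparameter ,name
     (list ,@(loop for (speaker text) in lines
                   collect `(cons ',speaker ,text)))))

(define-dialog *intro*
  (guard "Halt! Who goes there?")
  (player "Just a humble adventurer."))
```

For small amounts of content this is pleasant to write and trivial to consume; it's at the volume of a full game's dialog that a dedicated markup language starts to win.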

Other than that, now that all the base systems for maps, saves, chunks, tiles, and player mechanics are in place, the only remaining part is UI stuff, and we already discussed the issue with that. This also means that I really need to start thinking about making a game again, because I've nearly run out of engine stuff to do (for now). We'll see whether I can somehow learn to shift gears and make an actual game. I really, really hope that I can. I want this to work.

I've talked a lot about my own background and the kinds of problems I'm facing at the moment, and very little about the process of making these games. Well, the process is rather simple:

  1. Decide on a core idea of the game.
  2. Figure out what the player should be able to do and the kinds of requirements this has on the engine.
  3. Implement these requirements in the engine.
  4. Use the features of the engine to build the game content. This requires the most work.
  5. As you develop content and the vision of the game becomes clearer, new ideas and requirements will crystallise. Go back to 3.
  6. Your game is now done.

Again, the bulk of the work lies in making content, which is rather orthogonal to the choice of your language, as long as the tools are mature enough to make you productive. I believe Lisp allows me to be quicker about developing these tools than other languages, but making an actual game would be even quicker if I didn't have to make most of these tools in the first place.

So if there's anything at all that I want for developing games in Lisp, it wouldn't be some magical engine on par with Unreal or whatever, it wouldn't even be more libraries and things. I'm content enough to build those myself. What I'd really like is to find the right mindset for making game content. Maybe, hopefully, I will at some point and I'll actually be able to publish a game worth a damn. If it happens to have been developed with Lisp tools, that's just a bonus.

If you've made it this far: thank you very much for reading my incoherent ramblings. If you're interested in my game project and would like to follow it, or even help working on it, hang out in the #shirakumo channel on Freenode.

08 Dec 2018 10:58am GMT

27 Nov 2018


Vsevolod Dyomkin: Structs vs Parametric Polymorphism

Recently, Tamas Papp wrote about one problem he had with Lisp in the context of scientific computing: that it's impossible to specialize methods on parametric types.

While you can tell a function that operates on arrays that these arrays have element type double-float, you cannot dispatch on this, as Common Lisp does not have parametric types.

I encountered the same issue while developing the CL-NLP Lisp toolkit for natural language processing. For instance, I needed to specialize methods on sentences, which may come in different flavors: as lists of tokens, vectors of tokens, lists of strings, or some more elaborate data structure with attached metadata. Here's an example. There's a generic function to perform various tagging jobs (POS, NER, SRL, etc.). It takes two arguments: the first - as with all CL-NLP generic functions - is the tagger object that is used for algorithm selection and configuration, as well as for storing intermediate state when necessary. The second one is the sentence being tagged. Here are two of its possible methods:


(defmethod tag ((tagger ap-dict-postagger) (sent string)) ...)
(defmethod tag ((tagger ap-dict-postagger) (sent list)) ...)

The first processes a raw string: it invokes some pre-processing machinery that tokenizes it and then, basically, calls the second method, which performs the actual tagging of the resulting tokens. So, list here means a list of tokens.

But what if we already have the tokenization, but haven't created the token objects, i.e. a list of strings is supplied as the input to the tag method? The CLOS machinery has no way to distinguish the two, so we'd have to resort to using typecase inside the method - which is exactly what defmethod is supposed to replace as a transparent and extensible alternative.

In most other languages we'd stop here and just have to accept that nothing can be done. After all, it's a local nuisance and not a game changer for our code (although Tamas refers to it as a game changer for his). In Lisp, we can do better. Thinking about this problem, I see at least 3 solutions with varying levels of elegance and portability. Surely, they may seem slightly inferior to such a capability being built directly into the language, but demanding to have everything built in is unrealistic, to say the least. Instead, having a way to build something in ourselves is the only future-proof and robust alternative. And this is what Lisp is known for.

The first approach was mentioned by Tamas himself:

You can of course branch on the array element types and maybe even paper over the whole mess with sufficient macrology (which is what LLA ended up doing), but this approach is not very extensible, as, eventually, you end up hardcoding a few special types for which your functions will be "fast", otherwise they have to fall back to a generic, boxed type. With multiple arguments, the number of combinations explodes very quickly.

Essentially: rely on typecase-ing, but use macros to blend it into the code in the most non-intrusive way, minimizing boilerplate. This is a straightforward path in Lisp, but it has its drawbacks for long-running projects that need to evolve over time. Still, it remains a no-brainer for custom one-offs, which is why few venture further to explore other alternatives.

The other solution was mentioned in the Reddit discussion of the post:

Generics dispatching on class rather than type is an interesting topic. I've definitely sometimes wanted the latter so far in doing CL for non-scientific things. It is certainly doable to make another group of generics that do this using the MOP.

I.e. use the MOP to introduce type-based generic dispatch. I won't discuss it here, but will say that similar things were tried in the past quite successfully: ContextL and layered functions are some of the examples. Yet, the MOP path is rather heavy and has portability issues (as the MOP is not in the standard, although the closer-mop project unifies most of the implementations). In my view, its best use is for serious and fundamental extension of the CL object system, not for solving a local problem that may occur in some contexts but is not so pervasive. Also, I'd say that the Lisp approach of (almost) not mixing objects and types is, conceptually, the right one, as these two facilities solve different sets of problems.

There's a third - much simpler, clearer, and portable - solution that requires minimal boilerplate and, in my view, is best suited for this level of problem: use structs. Structs are somewhat underappreciated in the Lisp world; not a lot of books and study materials give them enough attention. That is understandable, as there's not a lot to explain, but structs are handy for many problems: a hassle-free and efficient facility that provides some fundamental capabilities.

In its basic form, the solution is obvious, although a bit heavy: we'll have to define wrapper structs for each parametric type we'd like to dispatch upon. For example, list-of-strings and list-of-tokens. This looks a little stupid, and it is, because what's the semantic value of a list of strings? That's why I'd go for sentence/string and sentence/token, which is a clearer naming scheme. (Or, if we want to mimic Julia, sentence<string>.)


(defstruct sent/str
  toks)

Now, from the method's signature, we will already see that we're dealing with sentences in the tagging process, and we will be able to spot when some other tagging algorithm operates on paragraphs instead of words: let's say, tagging parts of an email with such labels as greeting, signature, and content. Yes, this can also be conveyed via the name of the tagger, but, still, it's helpful. And it's also one of the hypothetical fail cases for a parametric type-based dispatch system: if we have two different kinds of lists of strings that need to be processed differently, we'd have to resort to similar workarounds in it as well.

However, if we'd like to distinguish between lists of strings, vectors of strings, and more generic sequences of strings, we'll have to resort to more elaborate names, like sent-vec/str, as a variant. It's worth noting, though, that for the sake of producing efficient compiled code, only vectors of different types of numbers really make a difference. A list of strings or a list of tokens, in Lisp, uses the same accessors, so optimization here is useless, and type information may be used only for dispatch and, possibly, type checking. Actually, Lisp doesn't support type-checking of homogeneous lists, so you can't say :type (list string), only :type list. (Well, you can, actually, define a predicate and use (and list (satisfies list-of-strings-p)), but what's the gain?)

Yet, using structs adds more semantic dimensions to the code than just naming. They may store additional metadata and support simple inheritance, which will come in handy when we'd like to track sentence positions in the text and so on.


(defstruct sent-vec/tok
  (toks nil :type (vector tok)))

(defstruct (corpus-sent-vec/tok (:include sent-vec/tok))
  file beg end)

And structs are efficient in terms of both space consumption and speed of slot access.
So, now we can do the following:


(defmethod tag ((tagger ap-dict-postagger) (sent sent/str)) ...)
(defmethod tag ((tagger ap-dict-postagger) (sent sent/tok)) ...)
(defmethod tag ((tagger ap-dict-postagger) (sent sent-vec/tok)) ...)

We'll also have to defstruct each parametric type we'd like to use. As a result, with this approach, we can have the following clean and efficient dispatch:


(defgeneric tag (tagger sent)
  (:method (tagger (sent string))
    (tag tagger (tokenize *word-splitter* sent)))
  (:method (tagger (sent sent/str))
    (let ((off 0))
      (tag tagger (make-sent/tok
                   :toks (map* ^(prog1 (make-tok :word %
                                                 :beg off
                                                 :end (+ off (length %)))
                                 (:+ off (1+ (length %))))
                               @sent.toks)))))
  (:method ((tagger pos-tagger) (sent sent/tok))
    (copy sent :toks (map* ^(copy % :pos (classify tagger
                                                   (extract-features tagger %)))
                           @sent.toks))))

CL-USER> (tag *pos-tagger* "This is a test.")
#S(SENT/TOK :TOKS (<This/DT 0..4> <is/VBZ 5..7> <a/DT 8..9>
<test/NN 10..14> <./. 14..15>))
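For readers without RUTILS, the offset-tracking lambda in the sent/str method corresponds to this plain-CL sketch (tok is reduced to a minimal struct here for illustration):

```lisp
;; A plain-CL version of the token-building step from the sent/str
;; method above, without the ^() lambda and @ slot-access reader macros.
(defstruct tok word beg end)

(defun words->toks (words)
  "Turn a list of word strings into TOK objects with begin/end offsets,
assuming the words were separated by single spaces."
  (let ((off 0))
    (mapcar (lambda (word)
              (prog1 (make-tok :word word
                               :beg off
                               :end (+ off (length word)))
                (incf off (1+ (length word)))))
            words)))
```

Feeding it the words of "This is a test." yields the same offsets as in the REPL output: is spans 5..7.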

Some of the functions used here - ?, map*, copy, as well as the @ and ^ reader macros - come from my RUTILS, which fills in the missing pieces of the CL standard library.

Another advantage of structs is that they define a lot of things in the background: type-checking for slots, a readable print function, a constructor, a built-in copy-structure, and more.

In my view, this solution isn't any less easy to use than the statically-typed one (Julia's). There's a little additional boilerplate (the defstructs), which may even be considered to have a positive impact on the code's overall clarity. And yes, you have to write boilerplate in Lisp sometimes, although not much of it. Here's a fun quote on the topic I saw on Twitter some days ago:

Lisp is an embarrassingly concise language. If you're writing a bunch of boilerplate in it, you need to read SICP & "Lisp: A Language for Stratified Design".

P.S. There's one more thing I wanted to address from Tamas's post:

Now I think that one of the main reasons for this is that while you can write scientific code in CL that will be (1) fast, (2) portable, and (3) convenient, you cannot do all of these at the same time.

I'd say that this choice (or rather the need to prioritize one over the others) exists in every ecosystem. At least, looking at his Julia example, there's no word on portability (citing Tamas's own words about the language: "At this stage, code that was written half a year ago is very likely to be broken with the most recent release."), while convenience may well be manifest for his current use case - but what if we need the same system to implement features that deal with areas outside of numeric computing? I'm not so convinced.

Or, speaking about Python, which is a go-to language for scientific computing: in terms of performance, the only viable solution is to implement the critical parts in C (or Cython). Portable? No. Convenient? Likewise. Well, as a user you get convenience, and speed, and portability (although pretty limited). But at what cost? I'd argue that developing the Common Lisp scientific computing ecosystem to a similar quality would have required only 10% of the effort that went into building numpy and scipy...

27 Nov 2018 7:49pm GMT

26 Nov 2018


Wimpie Nortje: How to write test fixtures for FiveAM.

When you write a comprehensive test suite you will most likely need to repeat the same set up and tear down process multiple times because a lot of the tests will test the same basic scenario in a slightly different way.

Testing frameworks address this code repetition problem with "fixtures". FiveAM also has this concept, although in a slightly limited form.1

FiveAM implements fixtures as a wrapper around defmacro. The documentation states:

NB: A FiveAM fixture is nothing more than a macro. Since the term 'fixture' is so common in testing frameworks we've provided a wrapper around defmacro for this purpose.

There are no examples in the documentation of what such a fixture macro should look like. Do you need the usual macro symbology like backticks and splicing, or not? If so, how? This can be difficult to decipher if you are not fluent in reading macros. The single example in the source code makes things worse, because it does include backticks and splicing.

FiveAM defines a macro def-fixture which lets you write your fixtures just like normal functions, with the one exception that the form (&body) marks the spot where your test code is spliced in. No fiddling with complex macros!

This is a simple example:

(def-fixture in-test-environment ()
  "Set up and tear down the test environment."
  (setup-code)
  (&body)
  (teardown-code))

(def-test a-test ()
  "Test in clean environment."
  (with-fixture in-test-environment ()
    (is-true (some-function))))
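To see why no backticks are needed on the user's side, here's a toy re-implementation of the idea (hypothetical, not FiveAM's actual code): a fixture definer is just a macro-defining macro that splices the test body wherever (&body) appears:

```lisp
;; MY-DEF-FIXTURE is a simplified stand-in for FiveAM's def-fixture.
;; It defines NAME as a macro whose expansion is the fixture body with
;; every literal (&body) form replaced by the caller's test code.
(defmacro my-def-fixture (name args &body fixture-body)
  (declare (ignore args)) ; fixture arguments elided in this toy version
  `(defmacro ,name (args &body test-body)
     (declare (ignore args))
     (list* 'progn
            (subst (cons 'progn test-body) '(&body) ',fixture-body
                   :test #'equal))))

;; The same shape as the FiveAM example above, against the toy macro:
(defvar *trace* '())

(my-def-fixture in-toy-environment ()
  (push :setup *trace*)
  (&body)
  (push :teardown *trace*))

(in-toy-environment ()
  (push :test *trace*))
```

After the call, (reverse *trace*) is (:setup :test :teardown): set up, test body, tear down, in order, with the fixture author never having written a backtick.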

The fixture implementation provides an easy-to-use definition syntax without any additional processing. If you need more complex macros than what def-fixture can handle, you can write normal Lisp macros as usual without interfering with FiveAM's operation.

  1. Some frameworks can apply fixtures to the test suite (as opposed to a test) so that it executes only once before any test in the suite is run and once after all tests have completed, regardless of how many tests in the suite are actually executed. FiveAM does not have this capability.

26 Nov 2018 12:00am GMT

02 Nov 2018


Eugene Zaikonnikov: Some documents on AM and EURISKO

Sharing here a small collection of documents by Douglas B. Lenat related to the design of AM and EURISKO that I have assembled over the years. These are among the most famous programs of the symbolic AI era. They represent so-called 'discovery systems'. Unlike expert systems, they run loosely-constrained heuristic search in a complex problem domain.

AM was Lenat's doctoral thesis and the first attempt of its kind. Unfortunately, it's all described in rather informal pseudocode, a decision that led to a number of misunderstandings in follow-up criticism. Lenat responded to that in one of the better known publications, Why AM and EURISKO appear to work.

AM was built around a concept formation process utilizing a set of pre-defined heuristics. EURISKO takes it a step further, adding a mechanism for running discovery search on its own heuristics. Both are specimens of what we could call 'Lisp-complete' programs: designs that require Lisp, or a hypothetical, similarly metacircular equivalent, to function. Their style was idiomatic to the INTERLISP of the 1970s, making heavy use of FEXPRs and self-modification of code.

There's quite a lot of thorough analysis available in the three-part The Nature of Heuristics: part one, part two. The third part contains the most insights into the workings of EURISKO. A remarkable quote from when EURISKO discovered Lisp atoms, notable for having been written before the two-decade pause in the threat of nuclear annihilation:

Next, EURISKO analyzed the differences between EQ and EQUAL. Specifically, it defined the set of structures which can be EQUAL but not EQ, and then defined the complement of that set. This turned out to be the concept we refer to as LISP atoms. In analogy to humankind, once EURISKO discovered atoms it was able to destroy its environment (by clobbering CDR of atoms), and once that capability existed it was hard to prevent it from happening.
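The EQ/EQUAL distinction behind that discovery is easy to reproduce at a Common Lisp REPL:

```lisp
;; Composite structures with the same content are EQUAL but not EQ:
(let ((a (list 1 2))
      (b (list 1 2)))
  (assert (equal a b))       ; same shape and content
  (assert (not (eq a b))))   ; two distinct cons chains

;; For symbols -- interned atoms -- EQUAL implies EQ, which is exactly
;; the complement set EURISKO carved out:
(assert (eq 'foo 'foo))
```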

Lenat's eventual conclusion from all this was that "common sense" is necessary to drive autonomous heuristic search, and that a critical mass of knowledge is necessary. That's where his current CYC project started off in the early 1990s.

Bonus material: The Elements of Artificial Intelligence Using Common Lisp by Steven L. Tanimoto describes a basic AM clone, Pythagoras.

02 Nov 2018 3:00pm GMT

27 Oct 2018


CL Test Grid: quicklisp 2018-10-18

Test diff with the previous release:
https://common-lisp.net/project/cl-test-grid/ql/quicklisp-2018-10-18-diff2.html

27 Oct 2018 1:18pm GMT

19 Oct 2018


Quicklisp news: October 2018 Quicklisp dist update now available

New projects:

Updated projects: array-utils, asdf-viz, assoc-utils, binary-io, bit-smasher, cari3s, cepl, cl+ssl, cl-ana, cl-cffi-gtk, cl-collider, cl-colors2, cl-i18n, cl-kanren, cl-ledger, cl-liballegro, cl-mecab, cl-mixed, cl-neovim, cl-notebook, cl-patterns, cl-plumbing, cl-portmanteau, cl-postgres-plus-uuid, cl-progress-bar, cl-pslib, cl-pslib-barcode, cl-python, cl-rabbit, cl-sdl2, cl-sdl2-image, cl-sdl2-mixer, cl-sdl2-ttf, clack, closer-mop, closure-common, clunit2, clx, codex, colleen, commonqt, croatoan, cxml, cxml-stp, datafly, definitions, dexador, djula, dml, do-urlencode, dufy, dynamic-mixins, easy-audio, eclector, femlisp, function-cache, fxml, gamebox-math, geowkt, golden-utils, harmony, iclendar, inquisitor, integral, ironclad, lack, lass, lichat-tcp-server, log4cl, maiden, mcclim, mito, mito-attachment, myway, nineveh, ningle, overlord, pango-markup, parachute, parser.ini, perlre, petalisp, place-utils, plexippus-xpath, plump-sexp, postmodern, prepl, print-licenses, qlot, qtools, quri, read-csv, rove, s-dot2, scalpl, sel, serapeum, shadow, shuffletron, sly, split-sequence, st-json, staple, stmx, stumpwm, sxql, time-interval, tooter, trace-db, track-best, trivia, trivial-benchmark, trivial-garbage, trivial-gray-streams, trivial-indent, trivial-utilities, ubiquitous, utm, varjo, vernacular, woo, wookie.

Removed projects: clot, clpmr, cobstor, html-sugar, ie3fp, manardb, metafs, mime4cl, net4cl, npg, ods4cl, plain-odbc, quid-pro-quo, sanitized-params, sclf, smtp4cl, tiff4cl.

The removed projects no longer work on SBCL.

To get this update, use (ql:update-dist "quicklisp"). Enjoy!

19 Oct 2018 9:46pm GMT

30 Sep 2018


Vsevolod Dyomkin: ANN: flight-recorder - a robust REPL logging facility

Interactivity is a principal requirement for a usable programming environment. Interactivity means that there should be a shell/console/REPL or other similar text-based command environment. And a principal requirement for such an environment is keeping history. And not just keeping it, but doing it robustly:

  • recording history from concurrently running sessions
  • keeping unlimited history
  • identifying the time of the record and its context

This lets you experiment freely and reproduce the results of successful experiments, go back to an arbitrary point in time and take your work in another direction, and stay DRY while performing common repetitive tasks in the REPL (e.g. initializing an environment or context).

flight-recorder or frlog (when you need a distinct name) is a small tool that intends to support all these requirements. It grew out of frustration with how history is kept in SLIME, so it was primarily built to support that environment, but it can easily be utilized by other shells that don't have a good enough history facility. This is possible due to its reliance on the most common and accessible data-interchange facility: text-based HTTP.

frlog is a backend service that supports any client that is able to send an HTTP request.

The backend is a Common Lisp script that can be run in the following manner (probably, the best way to do it is inside screen):


sbcl --noprint --load hunch.lisp -- -port 7654 -script flight-recorder.lisp

It will print a bunch of messages that should end with the following line (modulo timestamp):


[2018-09-29 16:00:53 [INFO]] Started hunch acceptor at port: 7654.

The service appends each incoming request to the text file in markdown format: ~/.frlog.md.

The API is just a single endpoint - /frlog that accepts GET and POST requests. The parameters are:

  • text is the content (url-encoded, for sure) of the record that can, alternatively, be sent in the POST request's body (more robust)

Optional query parameters are:

  • title - used to specify that this is a new record: for console-based interactions there's usually a command and zero or more results - a command starts the record (and thus should be accompanied by the title; for SLIME interactions it's the current Lisp package and a serial number). A titled record is added in the following manner:

    ### cl-user (10) 2018-09-29_15:49:17

    (uiop:pathname-directory-pathname )
    If there's no title, the text is added like this:

    ;;; 2018-09-29_15:49:29

    #<program-error @ #x100074bfd72>
  • tag - if provided, it signals that the record should be written not to the standard .frlog.md file, but to .frlog-<tag>.md. This makes it easy to log a specific group of interactions separately.

If the response code is 200, everything's fine.

Currently, 2 clients are available:

  • a SLIME client flight-recorder.el that monkey-patches a couple of basic SLIME functions (just load it from Emacs if you have SLIME initialized)
  • and a tiny Lisp client frlog.lisp

P.S. To sum up, more and more I've grown to appreciate simple (sometimes even primitive - the more primitive the better :) tools. flight-recorder seems to me to be just like that: it was very easy to hack together, but it solves an important problem for me and, I guess, for many others. And it's the modern "Unix way": small independent daemons, text-based formats, and HTTP instead of pipes...

P.P.S. frlog uses another tiny tool of mine - hunch that I've already utilized in a number of projects but haven't described yet - it's a topic for another post. In short, it is a script to streamline running hunchentoot that does all the basic setup and reduces the developer's work to just defining the HTTP endpoints.

P.P.P.S. I know, the name is, probably, taken and it's a rather obvious one. But I think it just doesn't matter in this case... :)

30 Sep 2018 5:15pm GMT

31 Aug 2018


Quicklisp news: August 2018 Quicklisp dist update now available

New projects:

Updated projects: 3d-matrices, 3d-vectors, a-cl-logger, acclimation, ahungry-fleece, alexa, algebraic-data-library, april, arc-compat, array-utils, bodge-sndfile, caveman, cepl, cepl.drm-gbm, chirp, cl+ssl, cl-ana, cl-bnf, cl-bootstrap, cl-colors2, cl-conllu, cl-csv, cl-dbi, cl-feedparser, cl-flac, cl-flow, cl-fond, cl-gamepad, cl-gendoc, cl-generator, cl-gpio, cl-grace, cl-i18n, cl-k8055, cl-kanren, cl-libuv, cl-mixed, cl-monitors, cl-mpg123, cl-oclapi, cl-opengl, cl-out123, cl-patterns, cl-ppcre, cl-progress-bar, cl-project, cl-pslib, cl-pslib-barcode, cl-sdl2-ttf, cl-soil, cl-soloud, cl-spidev, cl-str, cl-virtualbox, cl-wayland, cl-yesql, clack, claw, clip, clml, closer-mop, clss, clunit2, colleen, configuration.options, conium, croatoan, crypto-shortcuts, curry-compose-reader-macros, dartsclhashtree, deeds, deferred, definitions, delta-debug, deploy, dexador, dissect, dml, documentation-utils, dufy, eazy-gnuplot, eazy-project, eclector, fare-scripts, fast-http, flare, flow, flute, for, form-fiddle, fxml, glsl-spec, glsl-toolkit, golden-utils, gsll, halftone, harmony, humbler, ironclad, jsonrpc, kenzo, lack, lambda-fiddle, lass, legit, lichat-protocol, lichat-serverlib, lichat-tcp-client, lichat-tcp-server, lichat-ws-server, lionchat, lisp-executable, lispbuilder, listopia, lquery, maiden, mcclim, mito, modularize, modularize-hooks, modularize-interfaces, more-conditions, multiposter, neo4cl, nibbles, nineveh, north, oclcl, opticl, overlord, oxenfurt, parachute, pathname-utils, perlre, piping, plump, plump-bundle, plump-sexp, plump-tex, postmodern, qlot, qt-libs, qtools, qtools-ui, racer, random-state, ratify, redirect-stream, rfc2388, rutils, sanitized-params, sel, serapeum, shadow, simple-inferiors, simple-tasks, slime, snooze, softdrink, south, spinneret, staple, stumpwm, sxql, terminfo, thnappy, tooter, trace-db, trivial-arguments, trivial-battery, trivial-benchmark, trivial-clipboard, trivial-gray-streams, trivial-indent, trivial-main-thread, trivial-mimes, 
trivial-thumbnail, umbra, usocket, uuid, varjo, verbose, vgplot, whofields, with-c-syntax, woo, wookie, xml.location.

Removed projects: cl-clblas, cl-proj.

There are no direct problems with cl-clblas and cl-proj. They are victims of a hard drive crash on my end, and an incomplete recovery. I have not been able to set up the foreign libraries required to build those projects in time for this month's release.

If you want to continue using cl-clblas and cl-proj, there are a few options:

Sorry for any inconvenience this may cause. I hope to have it fully resolved in the September 2018 update.

To get this update, use (ql:update-dist "quicklisp").

Enjoy!

31 Aug 2018 8:02pm GMT

27 Aug 2018


Zach Beane: A Road to Common Lisp - Steve Losh

A Road to Common Lisp - Steve Losh

27 Aug 2018 4:47pm GMT