18 Jan 2021

Planet Lisp

Tycho Garen : Learning Common Lisp Again

In a recent post I wrote about abandoning a previous project that had gone off the rails. Since then I've been doing more work in Common Lisp, and I wanted to report on some recent developments. There's a lot of writing about learning to program for the first time, and a fair amount of writing about Lisp itself, but neither is particularly relevant to me, and I suspect there may be others who find themselves in a similar position in the future.

My Starting Point

I already know how to program, and have a decent understanding of how to build and connect software components. I've been writing a lot of Go (Lang) for the last 4 years, and wrote rather a lot of Python before that. I'm an emacs user, and I use a Common Lisp window manager, so I've always found myself writing little bits of lisp here and there, but it never quite felt like I could do anything of consequence in Lisp, despite thinking that Lisp is really cool and that I wanted to write more.

My goals and rationale are reasonably simple:

  • I'm always building little tools to support the way that I use computers. Nothing is particularly complex, but I'd enjoy being able to do this in CL rather than in other languages, mostly because I think it'd be nice not to do it in the same languages I work in professionally. [1]
  • Common Lisp is really cool, and I think it'd be good if it were more widely used; writing more of it, and writing posts like this, is probably the best way I can help make that happen.
  • Learning new things is always good, and having a personal project to learn something new will be a good way of stretching myself as a developer.
  • Common Lisp has a bunch of features that I really like in a programming language: real threads, easy-to-produce static binaries, and (almost) reasonable encapsulation/isolation features.

On Learning

Knowing how to program makes learning how to program easier: broadly speaking programming languages are similar to each other, and if you have a good model for the kinds of constructs and abstractions that are common in software, then learning a new language is just about learning the new syntax and learning a bit more about new idioms and figuring out how different language features can make it easier to solve problems that have been difficult in other languages.

In a lot of ways, if you already feel confident and fluent in a programming language, learning a second language is really about teaching yourself how to learn a new language, which you can then apply to all future languages as needed.

Except realistically, "third languages" aren't super common: it's hard to get to the same level of fluency that you have with earlier languages, and "third-and-later" languages are often learned in the context of some existing code base or project, so it's hard to generalize our familiarity outside of that context.

It's also the case that it's often pretty easy to learn a language well enough to perform common or familiar tasks, but fluency is hard, particularly in unfamiliar idioms. I'm using CL as an excuse to do kinds of programming that I have more limited experience with: web programming, GUI programming, and using different kinds of databases.

My usual method for learning a new programming language is to write a program of moderate complexity and size but in a problem space that I know pretty well. This makes it possible to gain familiarity, and map concepts that I understand to new concepts, while working on a well understood project. In short, I'm left to focus exclusively on "how do I do this?" type-problems and not "is this possible," or "what should I do?" type-problems.

Conclusion

The more I think about it, the more I realize that "knowing a programming language" is inevitably linked to a specific kind of programming: the kind of Lisp that I've been writing has skewed toward the object-oriented end of the Lisp spectrum, with fewer functional bits than perhaps average. I'm also still a bit green when it comes to macros.

There are kinds of programs that I don't really have much experience writing:

  • GUI things,
  • the front-half of the web stack, [2]
  • processing/working with ASTs, (lint tools, etc.)
  • lower-level kinds of runtime implementation.

There's lots of new things to learn, and new areas to explore!

Notes

[1] There are a few reasons for this. Mostly, I think in a lot of cases it's right to choose programming languages that are well known (Python, Java and JVM friends, and JavaScript), easy to learn (Go), and that fit in with existing ecosystems (which vary a bit by domain), so while those might be the right choices, they're a bit limiting. It's also the case that putting some boundaries/context switching between personal projects and work projects could help improve quality of life.
[2] Because it's 2020, I've done a lot of work on "web apps," but most of my work has focused on areas of applications including the data layer, application architecture, core business logic, and reliability/observability, and less on anything material to rendering web pages. Most projects have a lot of work to be done, and I have no real regrets, but it does mean there's plenty to learn. I wrote an earlier post about the problems of the concept of "full-stack engineering" which feels relevant.

18 Jan 2021 12:00am GMT

17 Jan 2021


Alexander Artemenko: declt

This is the documentation builder behind the Quickref site. It is good for generating API references for third-party libraries.

The most interesting features of Declt are:

As always, I've created a template project, ready to be used:

https://github.com/cl-doc-systems/declt

Here is how it is rendered in HTML:

https://cl-doc-systems.github.io/declt/

And in PDF:

https://cl-doc-systems.github.io/declt/index.pdf

Sadly, Declt does not support markup in docstrings, and cross-referencing does not work there.

Some other pros and cons are listed on example site.

Remember, all example projects from https://github.com/cl-doc-systems include a build script and GitHub Action to update documentation on every commit!

17 Jan 2021 1:58pm GMT

12 Jan 2021


Jonathan Godbout: Proto Cache: A Caching Story

What is Proto-Cache?

I've been working internally at Google to open-source several libraries, including cl-protobufs and a series of utility libraries we call "ace". I wrote several blog posts about making an HTTP server that takes in either protocol buffers or JSON strings and responds in kind. I think I have worked enough on Mortgage Server and wish to work on a different project.

Proto-cache will grow up to be a pub-sub system that takes in google.protobuf:any protos and sends them to users over HTTP requests. I'm developing it to showcase the ace.core library and the Any proto well-known type. In this post we create a cache system which stores google.protobuf.any messages in a hash-table keyed off of a symbol.

The current incarnation of Proto Cache:

The code can be found here: https://github.com/Slids/proto-cache

Proto-cache.asd:

This is remarkable inasmuch as cl-protobufs isn't required by the defsystem! It's not required at all, but we do use the cl-protobufs.google.protobuf:any protocol buffer message object. Right now we are only adding it to and getting it from the cache. This allows us to store a protocol buffer message object that any user system can parse by calling unpack-any; we never have to understand the message inside.
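The .asd file itself isn't reproduced here, but a minimal sketch might look like the following. This is an assumption about its shape, not the repository's actual file; the component names are guesses.

;; Hypothetical sketch of proto-cache.asd; note the absence of any
;; direct cl-protobufs dependency, as described above.
(asdf:defsystem :proto-cache
  :description "A cache for google.protobuf.any messages."
  :depends-on (:ace.core)
  :components ((:file "proto-cache")))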

Proto-cache.lisp:

The actual implementation. We provide three functions, each discussed below:

  • get-from-cache
  • set-in-cache
  • remove-from-cache

We also have a fast-read mutex, cache-mutex, protecting the cache hash-table.

Note: The ace.core library can be found at: https://github.com/cybersurf/ace.core

Fast-read mutex (fr-mutex):

The first interesting thing to note is the fast-read mutex. This can be found in the ace.core.thread package, included in the ace.core utility library. It allows mutex-free reads of a protected region of code. One wraps read-side code in with-frmutex-read and write-side code in with-frmutex-write.

If the body of with-frmutex-read finishes with nobody having called with-frmutex-write, then the value is returned. If someone calls with-frmutex-write while another thread is in with-frmutex-read, then the body of with-frmutex-read has to be re-run. One should be careful not to modify state in the with-frmutex-read body.
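As a sketch of the pattern (the counter, the mutex constructor name, and the package nicknames here are assumptions for illustration; act: is the nickname used in the code below):

;; Hypothetical example of the fast-read-mutex pattern. Readers may
;; re-run their body if a writer intervenes, so the read body must be
;; free of side effects.
(defvar *mutex* (act:make-frmutex))
(defvar *counter* 0)

(defun read-counter ()
  (act:with-frmutex-read (*mutex*)
    *counter*))                ; pure read; safe to re-run

(defun increment-counter ()
  (act:with-frmutex-write (*mutex*)
    (incf *counter*)))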

Discussion About the Individual Functions

get-from-cache:

(acd:defun* get-from-cache (key)
  "Get the any message from cache with KEY."
  (declare (acd:self (symbol) google:any))
  (act:with-frmutex-read (cache-mutex)
    (gethash key cache)))


This function uses the defun* form from ace.core.defun. It looks the same as a standard defun except that it has a new declare statement. The declare statement takes the form:

(declare (acd:self (lambda-list-type-declarations) output-declaration))

In this function we state that the input KEY must be a symbol and the return value is going to be a google:any protobuf message. The output declaration is optional. For all of the options please see the macro definition for ace.core.defun:defun*.

The with-frmutex-read macro is also being used.

Note that in the macro's body we only do a simple accessor call into a hash-table. Safety is not guaranteed, only consistency.

set-in-cache:

(acd:defun* set-in-cache (key any)
  "Set the ANY message in cache with KEY."
  (declare (acd:self (symbol google:any) google:any))
  (act:with-frmutex-write (cache-mutex)
    (setf (gethash key cache) any)))

We see the new defun* form again. In this case we have two inputs: KEY must be a symbol and ANY must be a google:any proto message. We also see that we return a google:any proto message.

The with-frmutex-write macro is being used, and the only thing done in its body is setting a cache value. If one thread sets a message into the cache while another gets a message from it, it is possible the reader will have to read multiple times. In systems where readers are more common than writers, fr-mutexes and spinlocking are much faster than having readers lock a mutex for every read.
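Putting the two functions together, a round trip through the cache might look like the following sketch. The pack-any/unpack-any helpers are assumed to come from cl-protobufs' well-known-types support, and make-my-message is a hypothetical message constructor, not part of the post's code.

;; Hypothetical usage: wrap a message in an Any, cache it under a
;; symbol, and later fetch and unpack it without the cache ever
;; knowing the concrete message type.
(let ((any (cl-protobufs.well-known-types:pack-any
            (make-my-message :value 1))))
  (set-in-cache 'my-key any)
  (cl-protobufs.well-known-types:unpack-any
   (get-from-cache 'my-key)))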

remove-from-cache:

We omit this function in this write-up for brevity.
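The omitted function presumably mirrors set-in-cache, taking the write lock and removing the entry; a plausible sketch (an assumption, not the repository's actual code) is:

;; Hypothetical reconstruction of remove-from-cache; see the linked
;; repository for the real definition.
(acd:defun* remove-from-cache (key)
  "Remove the any message with KEY from the cache."
  (declare (acd:self (symbol) boolean))
  (act:with-frmutex-write (cache-mutex)
    (remhash key cache)))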

Conclusion:

Fast-read mutexes like the one found in ace.core.thread are incredibly useful tools. Acquiring a mutex can be slow even in cases where that mutex is never contended. I believe this is one of the more useful additions in the ace.core library.

The new defun* macro found in ace.core.defun for creating function definitions is a more mixed bag. I find a lack of clarity in mapping the lambda list s-expression in the defun statement to the s-expression in the declaration; others may find that it provides nicer syntax and that the mapping is obvious.

Future posts will show the use of the any protocol buffer message.

As usual Carl Gay gave copious edits and suggestions.

12 Jan 2021 9:22pm GMT