31 Jul 2025
Planet Lisp
Joe Marshall: JRM runs off at the mouth
Although LLMs perform a straightforward operation - predicting the next tokens in a sequence - they can be almost magical in their results if the stars are aligned. And from the look of it, the stars align often enough to be useful. But if you're unlucky, you can end up with a useless pile of garbage. My LLM started spitting out such gems as Cascadescontaminantsunnatural and exquisiteacquire the other day when I asked it to imagine some dialog. Your mileage will vary, a lot.
The question is whether the magic outweighs the glossolalia. Can we keep the idiot savant LLM from evangelically speaking in tongues?
Many people at work are reluctant to use LLMs as an aid to programming, preferring to hand-craft all their code. I understand the sentiment, but I think it is a mistake. LLMs are a tool of extraordinary power, but you need to develop the skill to use them, and that takes a lot of time and practice.
The initial key to using LLMs is to get good at prompting them. Here a trained programmer has a distinct advantage over a layperson. When you program at a high level, you are not only thinking about how to solve your problem, but also all the ways you can screw up. This is "defensive programming". You check your inputs, you write code to handle "impossible" cases, you write test cases that exercise the edge cases. (I'm no fan of test-driven development, but if I have code that is supposed to exhibit some complex behavior, I'll often write a few test cases to prove that the code isn't egregiously broken.)
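For instance, a defensive function in Lisp might look something like this (an illustrative sketch of my own, not code from any particular project):

(defun safe-average (numbers)
  "Return the arithmetic mean of NUMBERS, guarding against bad input."
  (check-type numbers list)
  (assert (every #'numberp numbers) (numbers)
          "SAFE-AVERAGE expects a list of numbers, got ~S" numbers)
  (if (null numbers)
      0  ; the "impossible" empty case, handled anyway
      (/ (reduce #'+ numbers) (length numbers))))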
When you prompt an LLM, it helps a lot to think in the same way you program. You need to be aware of the ways the LLM can misinterpret your prompt, and you need to write your prompt so that it is as clear as possible. You might think that this defeats the purpose. You are essentially performing the act of programming with an extra natural language translation step in the middle. This is true, and you will get good results if you approach the task with this in mind. Learning to effectively prompt an LLM is very similar to learning a new programming language. It is a skill that a trained programmer will have honed over time. Laypeople will find it possible to generate useful code with an LLM, but they will encounter bugs and problems that they will have difficulty overcoming. A trained programmer will know precisely how to craft additional clauses to the prompt to avoid these problems.
Context engineering is the art of crafting a series of prompts to guide the LLM to produce the results you want. If you know how to program, you don't necessarily know how to engineer large systems. If you know how to prompt, you don't necessarily know how to engineer the context. Think of Mickey Mouse in Fantasia. He quickly learns the prompts that get the broom to carry the water, but he doesn't foresee the consequences of exponential replication.
Ever write a program that seems to be taking an awfully long time to run? You do a back-of-the-envelope calculation and realize that the expected runtime will be on the order of 10^50 seconds. This sort of problem won't go away with an LLM, but the relative number of people ill-equipped to diagnose and deal with it will certainly go up. Logical thinking and the ability to foresee consequences will be skills in higher demand than ever.
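To make the back-of-the-envelope step concrete (the numbers here are illustrative): an O(2^n) search over n = 196 items at a billion operations per second gives

(/ (expt 2d0 196) 1d9)  ; => ~1.0d50, i.e. about 10^50 seconds

No amount of generated code will save you from that; you have to recognize the combinatorial explosion before you hit run.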
You won't be able to become a "machine whisperer" without a significant investment of time and effort. As a programmer, you already have a huge head start. Turn on the LLM and use it in your daily workflow. Get a good feel for its strengths and weaknesses (they'll surprise you). Then leverage this crazy tool for your advantage. It will make you a better programmer.
31 Jul 2025 2:14am GMT
30 Jul 2025
Planet Lisp
Joe Marshall: Novice to LLMs — LLM calls Lisp
I'm a novice to the LLM API, and I'm assuming that at least some of my readers are too. I'm not the very last person to the party, am I?
When integrating the LLM with Lisp, we want to allow the LLM to direct queries back to the Lisp that is invoking it. This is done through the function call protocol. The client supplies the LLM with a list of functions that the LLM may invoke. When the LLM wants to invoke one of these functions, instead of returning a block of generated text, it returns a JSON object describing a function call: the name of the function and its arguments. The client invokes the function, but rather than "returning" the answer, it makes a fresh call into the LLM, passing the entire conversation so far with the function's result appended. It is bizarro continuation-passing-style where the client acts as a trampoline and keeps track of the continuation.
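In code, the trampoline looks roughly like this. This is a sketch of the protocol shape only; send-to-llm, function-call-p, function-call-name, function-call-args, find-tool, and make-function-result are hypothetical helpers standing in for whatever your client library actually provides:

(defun run-with-tools (history tools)
  ;; Drive the LLM, bouncing function calls back through Lisp
  ;; until the model returns ordinary text.
  (let ((response (send-to-llm history tools)))
    (if (function-call-p response)
        (let* ((name (function-call-name response))
               (args (function-call-args response))
               (result (apply (find-tool name tools) args)))
          ;; Re-enter the LLM with the whole conversation plus the
          ;; function's result; the client is the trampoline holding
          ;; the continuation.
          (run-with-tools
           (append history (list response (make-function-result name result)))
           tools))
        response)))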
So, for example, by exposing lisp-implementation-type and lisp-implementation-version, we can then query the LLM:
> (invoke-gemini "gemini-2.5-flash"
                 "What is the type and version of the lisp implementation?")
"The Lisp implementation is SBCL version 2.5.4."
30 Jul 2025 2:49pm GMT
28 Jul 2025
Planet Lisp
Joe Marshall: Pseudo
I was wondering what it would look like if a large language model were part of your programming language. I'm not talking about calling the model as an API, but rather embedding it as a language construct. I came up with this idea as a first cut.
The pseudo macro allows you to embed pseudocode expressions in your Common Lisp code. It takes a string description and uses an LLM to expand it into an s-expression. You can use pseudo anywhere an expression would be expected.
(defun my-func (a b)
  (pseudo "multiply b by factorial of a."))
MY-FUNC

(my-func 5 3)
360

(defun quadratic (a b c)
  (let ((d (sqrt (pseudo "compute discriminant of quadratic equation"))))
    (values (/ (+ (- b) d) (* 2 a))
            (/ (- (- b) d) (* 2 a)))))
QUADRATIC

(quadratic 1 2 -3)
1.0
-3.0
The pseudo macro gathers contextual information and packages it up in a big set of system instructions to the LLM. The instructions include
- the lexically visible variables in the macro environment
- fbound symbols
- bound symbols
- overall directives to influence code generation
- directives to influence the style of the generated code (functional vs. imperative)
- directives to influence the use of the loop macro (prefer vs. avoid)
- the source code of the file currently being compiled, if there is one
pseudo sets the LLM to a low temperature for more predictable generation, and it prints the LLM's "thinking".
Lisp is a big win here. Since Lisp's macro system operates at the level of s-expressions, it has more contextual information available to it than a macro system that is just text expansion. The s-expression representation means that we don't need to interface with the language's parser or compiler to operate on the syntax tree of the code. Adding pseudo to a language like Java would be a much more significant undertaking.
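A minimal sketch of the shape such a macro can take, assuming hypothetical helpers gather-context and llm-expand (the real pseudo packages up far more, as described above):

(defmacro pseudo (description &environment env)
  ;; Runs at macroexpansion time: gather context from the macro
  ;; environment, ask the LLM for an s-expression, and splice the
  ;; result directly into the code being compiled.
  (let ((context (gather-context env)))
    (llm-expand description context)))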
pseudo has the usual LLM caveats:
- The LLM is slow.
- The LLM can be expensive.
- The LLM can produce unpredictable and unwanted code.
- The LLM can produce incorrect code; the more precise you are in your pseudocode, the more likely you are to get the results you want.
- You would be absolutely mad to use this in production.
pseudo has one dependency on SBCL: a function that extracts the lexically visible variables from the macro environment. If you port it to another Common Lisp, you'll want to provide an equivalent function.
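On SBCL, such a function can be built on internals along these lines. SB-C::LEXENV-VARS is unexported and version-dependent, and this is my guess at the shape, not pseudo's actual source:

(defun lexically-visible-variables (env)
  ;; ENV is the &environment object a macro receives.  The lexenv's
  ;; vars slot holds an alist of (name . binding-info); we keep only
  ;; the names.  (SBCL internal; an assumption, not pseudo's code.)
  (when env
    (mapcar #'car (sb-c::lexenv-vars env))))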
pseudo was developed using Google's Gemini as the back end, but there's no reason it couldn't be adapted to use other LLMs. To try it out, you'll need the gemini library, available at https://github.com/jrm-code-project/gemini, and a Google API key.
Download pseudo from https://github.com/jrm-code-project/pseudo.
You'll also need these dependencies:
- alexandria - available from Quicklisp
- cl-json - available from Quicklisp
- dexador - available from Quicklisp
- fold - https://github.com/jrm-code-project/fold
- function - https://github.com/jrm-code-project/function
- named-let - https://github.com/jrm-code-project/named-let
- uiop - available from Quicklisp
If you try it, let me know how it goes.
28 Jul 2025 9:41am GMT