I discovered Structure and Interpretation of Computer Programs in my late teens and quickly moved on to Common Lisp, working my way up from a Z80 macro assembler to various web frameworks and fun projects like a CAM system. After 12 years in which I didn't work with Lisp at all, I recently decided to go back, and I am delighted by what I found. This is a series of articles that articulate my thoughts about coming back to an old love and document the very practical things I found along the way.
When I mention Lisp in this article, I mean either Scheme or Common Lisp, the two languages I have actually used. You can probably substitute Emacs Lisp or Clojure or any other s-expression-based language and follow along just as well. I dislike it when people write LISP in uppercase, as if we were still using something from the '60s, so I'm going with Lisp to convey a modern touch.
In this first article, I will talk about what I missed most: working inside a language.
REPL programming is the foundation
While many languages offer a REPL (read-eval-print loop, i.e. a prompt you can use to execute statements), few adopt it as the central way to interact with your software system.
In Lisp projects, you write functions and modules and packages in files, as is usual in programming projects, but the compiler is always running alongside you, compiling what you write and giving you feedback as you type. In traditional IDEs, the IDE's understanding of the program is divorced from the execution environment (the analysis is either implemented in the IDE itself or runs in a separate language server process). With Lisp, the compiler serves as a language server to help you interact with your code (go to definition, inspect, etc.), as a quick prompt to run experiments, as a debugger to trace, instrument, and interact with the actual running system, and as a shell to manage your packages, deployments, builds, and runtime systems.
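To make this concrete, here is a hypothetical SBCL REPL session (the function `greet` and its behavior are made up for illustration), showing the kind of dialog described above: a function is redefined while the system keeps running, and callers immediately pick up the new definition, no restart required.

```lisp
;; Illustrative REPL transcript (hypothetical session, not part of the article's code)
CL-USER> (defun greet (name) (format nil "Hello, ~a" name))
GREET
CL-USER> (greet "world")
"Hello, world"
CL-USER> (defun greet (name) (format nil "Hi there, ~a!" name))  ; live redefinition
GREET
CL-USER> (greet "world")   ; existing callers now use the new definition
"Hi there, world!"
```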
With Lisp, everything feels intimately (and robustly) connected.
Programming as language creation is explicit
Programming is about getting computers to do things for us. But computers only really care about instructions their CPU can execute. Since our brains can't comprehend streams of assembly language, we created programming languages: coherent, readable ways of assembling words and concepts so that we can collaborate with other humans on the one side and have computers execute our ideas on the other. On top of these languages, every project grows its own dialect, shaped by:
- the frameworks and libraries used (i.e., we use React and Redux)
- the design patterns used (we use higher-order components and context providers for a global store)
- code and naming conventions (we call our handlers onX, and our store actions are of the form verbObjectObject; we use Immer for imperative-style store reducers)
Finding a satisfying API, syntax, and naming conventions for these concepts can be tremendously difficult. An impedance mismatch with the underlying programming language can also mean that bugs are easier to introduce than they should be. When transpiling or using advanced metaprogramming techniques, runtime errors are often hard to map back to the original code. Dialects still feel like dialects: modified, lived-in, bastardized versions of the underlying programming language.
Lisp languages don't really have much in the way of syntax: you write the program directly as nested lists representing the abstract syntax tree. This gives you a much simpler tool, not only to create a programming dialect, but to actually modify the underlying grammar and allow for a much more concise expression of useful concepts.
This is a double-edged sword, as it is easy to create incoherent project languages with inscrutable grammatical extensions. A project usually needs at most one or two grammatical extensions to support its project language, and these are usually trivial (for example, an easy way to define state machine enums). In traditional languages, a clever closure pattern or some code generation will get you there just as well.
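As a sketch of the kind of trivial grammatical extension mentioned above, here is a hypothetical `defstates` macro in Common Lisp (the name and generated helpers are my own invention, not from the article): one declarative form defines both the list of states and a membership predicate.

```lisp
;; Hypothetical macro: defines a parameter holding the states
;; and a predicate named <NAME>-P that checks membership.
(defmacro defstates (name &rest states)
  `(progn
     (defparameter ,name ',states)
     (defun ,(intern (format nil "~a-P" name)) (x)
       (and (member x ',states) t))))

;; Usage: one line of "project language" instead of boilerplate.
(defstates +door-states+ :open :closed :locked)

(+door-states+-p :open)    ; => T
(+door-states+-p :ajar)    ; => NIL
```

The point is not this particular macro but that the extension costs a few lines, so the project language stays cheap to build and cheap to change.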
The beauty of Lisp, however, shows during the ideation phase. It is very easy to try out different syntax ideas, move seamlessly between the meta and the practical level, run experiments in the REPL, and massage syntax. This makes it possible to quickly home in on what the fundamental concepts of the project are and to express them succinctly.
I missed experimenting with concepts at the language level
Over time, I forgot how easy it is to use Lisp to experiment with different approaches. Designing a concurrent task language in C++ takes many lines of code and a lot of careful thought. While you can sketch things out pretty quickly using macros and code generation, or by being well acquainted with C++ templates, you still wrestle with a lot of syntax and operational complexity.
In a Lisp language, you can experiment by writing the program as you wish you could write it, implementing that syntax in three macros, and then running it, printing out ASTs in the REPL for debugging. Building a grammar for concurrent data streams is an afternoon project.
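The workflow above can be sketched in a few lines. As an illustrative (not definitive) example, here is a small threading macro `->`, a common Lisp-community idiom: you first write the pipeline the way you wish you could, then make it real with one macro, then inspect its expansion in the REPL.

```lisp
;; A minimal threading macro: (-> 5 (+ 3) (* 2)) pipes 5 through each form.
(defmacro -> (x &rest forms)
  (if (null forms)
      x
      `(-> ,(let ((f (first forms)))
              (if (listp f)
                  `(,(first f) ,x ,@(rest f))  ; splice x as first argument
                  `(,f ,x)))                   ; bare function name
           ,@(rest forms))))

;; The wished-for syntax, now runnable:
(-> 5 (+ 3) (* 2) 1+)            ; => 17

;; Print the AST in the REPL to debug the macro:
(macroexpand-1 '(-> 5 (+ 3)))    ; => (-> (+ 5 3))
```

The loop is: sketch the surface syntax, write the macro, `macroexpand` to check the generated tree, evaluate, repeat.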
Top comments (3)
Do you have any experience with Lisp (Scheme, Clojure, Common Lisp, Racket, etc.)? What did you learn from it? What is your favourite feature?
I am currently learning Racket, which is amazing. I regret not having looked into it earlier, as it lets you quickly add proper syntax to the languages you create. Parens and s-expressions tend to scare people away.
To me, parentheses are one of the best features of Lisp. It is interesting that the s-expression-based syntax was seen as an intermediate syntax by John McCarthy, to be eventually replaced by m-expressions that were meant to be more "user friendly". Over the years, though, people who were actually using Lisp found that s-expressions are great, because they contribute a lot to the malleability of Lisp programs. For one, implementing macros is much easier if the "code is data" notion is retained and not obstructed by another artificial syntactic layer. For another, s-expressions make it easy to write interactive programming support tools in editors, suggesting natural ways to format programs and removing the need to interactively parse complex syntax.
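The "code is data" notion mentioned above can be shown in a couple of lines; this is a small illustrative snippet, not from the comment itself:

```lisp
;; The same s-expression is both a value the program can take apart
;; and a program the system can run, with no parsing layer in between.
(defvar *code* '(+ 1 (* 2 3)))  ; a quoted list of symbols and numbers

(first *code*)   ; => +   (inspect it like any list)
(eval *code*)    ; => 7   (run it like any program)
```

A macro receives its arguments in exactly this list form, which is why writing macros in Lisp involves ordinary list manipulation rather than a separate syntax-tree API.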
I find this particularly interesting because it underlines the idea that Lisp was not really developed, but rather discovered, and that the appreciation of it grew when people used it. It takes the curious mind to look at Lisp, and I don't really believe that the parentheses are the reason why Lisp is not used more widely.
The reason for the lack of adoption of Lisp has more to do with how programming as an activity has been industrialized. Lisp works best for, and caters to, the individual working with the machine in a dialog. The programmer and the system work together to experiment and explore possible solutions. Solutions are often ephemeral, being part of a REPL interaction.
This notion, the dialog between the machine and the programmer, does not fit the model of industrialized software development, in which the source code of programs is the primary artifact that programmers work with. In that model, the system helps create that primary artifact, and the dialog between the programmer and the system is about creating the source code that implements the solution, not about the solution itself. In the industrial context, there are many good reasons that support the validity of this model. At the same time, Lisp, with its REPL and the tight interaction between the programmer and the running system, does not fit the model well.
I'm learning it by using Emacs. It started out as a brain stretch, and now it feels elegant and composable, with macros really coming into their own as I think and implement.