DEV Community

Chryso Lambda
The AI Industry Is Accidentally Reinventing Lisp


There's a running joke in certain programming circles, canonized as Greenspun's Tenth Rule: any sufficiently complicated program contains an ad hoc, informally specified, bug-ridden, slow implementation of half of Common Lisp. It was funny for decades. In 2026, it stopped being funny because it became literally true.

Look at what the AI agent ecosystem actually does. Agents receive structured instructions as data. They parse those instructions, decide what tools to call, compose new instructions for sub-agents, and sometimes rewrite their own behavior mid-execution. Code generates code. Data becomes instructions becomes data again. If you squint even slightly, you're looking at homoiconicity — the property where code and data share the same representation. Lisp had this in 1958.

Agents Are Just S-Expressions With Extra Steps

The typical agent framework in 2026 works like this: you define a set of tools as structured descriptions (usually JSON), hand them to a language model along with a task, and the model emits structured calls that get executed by a runtime. The runtime feeds results back, the model emits more calls, and this loops until the task is done.
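That loop is simple enough to sketch in a few lines. Here's a minimal, runnable caricature in Python; the tool registry and the scripted `fake_model` are invented for illustration (a real framework would call an actual language model), but the shape of the loop is the point: structured calls in, results fed back, repeat until done.

```python
import json

# A toy tool registry: each tool is described as data (a JSON-able dict),
# the same way agent frameworks hand tool schemas to a language model.
TOOLS = {
    "add": {"description": "Add two numbers", "fn": lambda a, b: a + b},
    "mul": {"description": "Multiply two numbers", "fn": lambda a, b: a * b},
}

def fake_model(task, observations):
    """Stand-in for the language model: emits structured tool calls as
    JSON until it decides the task is done. A real model would generate
    these; here they are scripted so the loop is runnable."""
    if not observations:
        return json.dumps({"tool": "add", "args": [2, 3]})
    if len(observations) == 1:
        return json.dumps({"tool": "mul", "args": [observations[-1], 10]})
    return json.dumps({"done": True, "answer": observations[-1]})

def run_agent(task):
    """The loop: model emits a structured call, the runtime executes it,
    the result is fed back, and this repeats until completion."""
    observations = []
    while True:
        call = json.loads(fake_model(task, observations))
        if call.get("done"):
            return call["answer"]
        result = TOOLS[call["tool"]]["fn"](*call["args"])
        observations.append(result)

print(run_agent("compute (2 + 3) * 10"))  # → 50
```

Notice what the "program" is here: a stream of structured data that gets interpreted into behavior. The model is emitting expressions; the runtime is an evaluator.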

Now describe a Lisp program: you define functions, compose them into s-expressions, and an evaluator walks the tree, applying functions to arguments, feeding results into the next expression. The program can inspect its own structure because its code is just lists. It can generate new code at runtime through macros.

The parallel isn't surface-level. It's structural. Both systems treat executable instructions as manipulable data structures. Both support dynamic composition. Both allow the running system to extend itself. The difference is that Lisp was designed this way on purpose, while the AI industry arrived here by accident after trying everything else first.

The Macro System They Don't Know They Want

Here's where it gets specific. Every major agent framework has reinvented some version of Lisp macros, just worse.

LangChain has "chains" — composable sequences of operations that transform and route data. CrewAI has "processes" that orchestrate agent collaboration patterns. AutoGen has conversation patterns that restructure agent communication at a meta level. These are all compile-time (or pre-execution) transformations of program structure based on declarative descriptions.

That's what macros do. A Lisp macro takes code as input, transforms it at compile time, and produces new code as output. It operates on the structure of the program itself, not just its values. Every agent "orchestration framework" is groping toward this same idea, but they're doing it through YAML config files and Python decorator soup instead of just... writing macros.
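The macro idea can be sketched without any Lisp at all. Below, a declarative chain description (the kind of thing frameworks put in YAML) is expanded, before execution, into a nested program, then evaluated. The names and the `expand_chain` helper are invented for illustration; the point is that the transformation operates on program structure, not values.

```python
def expand_chain(steps):
    """'Macro expansion': turn ["a", "b", "c"] into the nested program
    ["c", ["b", ["a", "input"]]] before anything is executed."""
    program = "input"
    for step in steps:
        program = [step, program]
    return program

def evaluate(expr, env):
    """A tiny evaluator for the expanded program."""
    if isinstance(expr, str):
        return env[expr]                 # symbols look up their value
    fn, arg = env[expr[0]], evaluate(expr[1], env)
    return fn(arg)

env = {
    "clean": str.strip,
    "upper": str.upper,
    "exclaim": lambda s: s + "!",
    "input": "  hello  ",
}

program = expand_chain(["clean", "upper", "exclaim"])
print(program)                 # ['exclaim', ['upper', ['clean', 'input']]]
print(evaluate(program, env))  # → HELLO!
```

A Lisp macro does this with full language support: hygiene, quasiquotation, and the same syntax for the transformer as for the code it transforms. The sketch above is the decorator-soup version of `defmacro`.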

The LISA project — Lisp-based Intelligent Software Agents — has been doing production rule systems in Common Lisp for over two decades. It got a new release in November 2025. Nobody in the AI agent hype cycle noticed.

Neuro-Symbolic AI: The Return of What Lisp Never Left

2026 is supposedly "the year of neuro-symbolic AI." The pitch: combine neural networks (pattern recognition, language fluency) with symbolic systems (logical reasoning, explainability). The neural side handles perception and generation; the symbolic side handles structured thought.

Lisp was built for symbolic computation. That's the "LIS" in the name — LISt Processing for symbolic reasoning. When neuro-symbolic papers talk about "reasoning backbones" and "knowledge representation layers," they're describing systems that Lisp environments have provided natively for six decades. Condition systems for structured error handling. CLOS for knowledge representation with multiple dispatch. The entire MOP (Meta-Object Protocol) for systems that reason about their own structure.

The irony is thick. The AI industry spent fifteen years insisting that neural networks made symbolic AI obsolete. Now they're bolting symbolic reasoning back on because pure neural approaches hallucinate, can't explain their decisions, and fail at multi-step logic. They could have just... not thrown away the symbolic part.

The Text-Centric Argument

There's a dev.to post from late 2025 called "The Infinite Buffer: Why AI Agents Are Rebuilding the Lisp Machine" that nails a point most people miss. Current AI agents work in text. They read text, they emit text, they reason (such as it is) over text. Graphical interfaces are a barrier for them, not an aid.

Lisp machines were text-centric environments where everything — the editor, the debugger, the shell, the application — existed in a unified, inspectable, modifiable text space. Emacs is the surviving descendant of this idea. And it's not a coincidence that AI coding assistants work best in text-heavy environments. The entire paradigm of "give the agent a terminal and let it work" is a rediscovery of what Genera provided on Symbolics hardware in the 1980s.

We ripped out that architecture, replaced it with GUIs optimized for mouse-clicking humans, and now we're building text-centric agent environments on top of the GUI layer. Like paving over a perfectly good rail line, building a highway, then wondering why we need trains.

Why It Keeps Happening

There's a pattern here that goes beyond technical coincidence. Lisp was designed by John McCarthy as a mathematical notation for computation that happened to also be executable. It captures something fundamental about how computation works — the relationship between structure and transformation, between data and process.

Every time someone builds a sufficiently flexible system, they rediscover these relationships. Agent frameworks rediscover homoiconicity. Orchestration tools rediscover macros. Neuro-symbolic AI rediscovers symbolic reasoning. Not because anyone reads McCarthy's 1960 paper, but because these ideas map to real computational needs that don't go away just because the industry forgot about them.

The AI ecosystem has over 120 agent tools across 11 categories as of early 2026, according to StackOne's landscape analysis. Most of them are solving problems that Common Lisp solved decades ago, using languages that make those problems harder than they need to be.

So What?

I'm not arguing everyone should drop Python and rewrite their agents in Common Lisp tomorrow. (I mean, I think they should, but I understand that "the libraries" and "the ecosystem" and "my team only knows Python" are real constraints.)

But I am arguing that understanding Lisp — really understanding homoiconicity, macros, condition systems, and the code-as-data paradigm — would make AI agent developers better at their jobs. They'd stop reinventing wheels. They'd recognize the patterns they're implementing and reach for better abstractions.

And maybe, eventually, enough people will realize that instead of building bad Lisp on top of Python on top of C, they could just use the real thing. SBCL compiles to native code. Quicklisp gives you a package manager. SLIME/Sly give you the best interactive development experience in any language, period.

The AI industry is running toward Lisp as fast as it can, with its back turned the entire way. At some point, it's going to arrive and pretend it invented something new.

That's fine. We're used to waiting.
