Almost exactly one year ago, I was building Eloquence — our visual language for data analysis — and Claude Code had just been released. Like most builders that month, I was giddy. An agent that could write code with me, read my files, run my commands. The future, finally, felt within arm's reach.
So I put it to work.
And then I started waiting.
Not for long. Ten seconds. Forty. The agent was searching the codebase for something — a function name, a usage, a reference. Find. Grep. Read. Find. Grep. Read. Sometimes it found what it was looking for. Often it didn't, and tried again. Sometimes it convinced itself it had found something it hadn't, and confidently changed it. The next morning I'd discover the damage: a renamed function with three forgotten callers, a variable updated everywhere except inside a callback two levels deep, a "fix" that introduced a worse problem than the original.
I didn't blame the model. The model was brilliant. The model was doing exactly what it had been asked to do: treat my codebase as a long, unstructured stream of characters, and reason about that stream as best it could.
But the more I watched it work — the more I waited, the more I cleaned up after it — the louder a small question got in my head.
Why is my AI agent reading my codebase like a book?
Code is not text. Code is data.
Open any program and squint. What do you see?
You see structure. Functions live inside files, files live inside modules, modules connect through namespaces. Calls reach across boundaries. Names resolve to definitions. Types flow through arguments. A symbol on line 412 of one file means something specific about a symbol on line 17 of another. None of this is implied. It's all explicitly there, encoded in syntax that compilers, linters, and language servers have understood for decades.
Code is a graph. Code is a tree. Code is, at its core, data with shape.
Every IDE you have ever loved knows this. That's why "Rename Symbol" in your editor doesn't just sweep through your project doing find-and-replace — it walks the tree, updates every reference, and leaves every unrelated identifier of the same name untouched. That's why "Go to Definition" returns in a millisecond. The tooling has known for a very, very long time that source code has structure.
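The difference between find-and-replace and a structural rename is easy to see in miniature. Below is a toy sketch (not how a real IDE or language server does it — they add full scope and symbol resolution) using Python's `ast` module: the rename walks the tree, so the definition and its call site change while a same-spelled string literal is left alone. The `alice`/`maya` names are invented for illustration.

```python
import ast

class RenameFunction(ast.NodeTransformer):
    """Rename a function and its references by walking the AST.

    Because we operate on nodes, not characters, the string literal
    "alice" below is never touched -- only the definition and the call.
    (A real tool would also do scope analysis; this sketch does not.)
    """
    def __init__(self, old, new):
        self.old, self.new = old, new

    def visit_FunctionDef(self, node):
        if node.name == self.old:
            node.name = self.new
        self.generic_visit(node)
        return node

    def visit_Name(self, node):
        if node.id == self.old:
            node.id = self.new
        return node

source = '''
def alice():
    return "alice"

result = alice()
'''

tree = RenameFunction("alice", "maya").visit(ast.parse(source))
print(ast.unparse(tree))
# def maya():
#     return 'alice'
# result = maya()
```

A plain text substitution would have rewritten the string literal too; the tree walk can't, because a string node is not a name node.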
And then we handed source code to LLMs — the most powerful language tools ever built — and asked them to operate on it the way you'd operate on a paragraph in a novel.
It bothered me. It kept bothering me.
A character named Alice
Here's the example that finally made it click for me.
Imagine you're writing a novel. Your protagonist is named Alice. Over a few hundred pages, you refer to her in dozens of ways: Alice, she, her, the girl, the heroine, sometimes by a nickname her sister gives her, sometimes by a title bestowed in chapter twelve. All of those references point back to the same person.
Now suppose your editor asks you to rename her. Make her Maya.
An LLM working text-first will do an admirable job. It will find most of the "Alice"s and swap them. It will probably handle the easy pronouns. But somewhere on page 184, it will miss "the heroine." Somewhere on page 211, it will miss the nickname. Somewhere on page 7, in a flashback, it will rename the wrong Alice — the cousin who only appears once.
You ship the book. Most readers won't notice. Some will.
In code, all of them will notice. Tests fail. Production breaks. A bug report lands at 3am.
But here's the thing: to a tool that understands structure, Alice isn't a string. Alice is a node. Every reference to her — every alias, every pronoun, every callback, every indirection — is a literal edge in the graph, pointing back to her node. You change Alice once, at the node. Every edge updates automatically, exactly, and only the ones that should. The cousin in chapter 7 doesn't move. The heroine on page 184 does.
Same operation. Two completely different worlds.
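The node-and-edge picture can be sketched in a few lines. This is a deliberately tiny model, not any real tool's data structure: every reference holds a pointer to its definition's node, so renaming means mutating one node, and every edge that points at it renders the new name automatically — while the same-spelled cousin, a different node, doesn't move.

```python
class Symbol:
    """A definition: one node in the graph."""
    def __init__(self, name):
        self.name = name

class Reference:
    """An edge: any mention points back at its node, not at a string."""
    def __init__(self, symbol):
        self.symbol = symbol

    def render(self):
        return self.symbol.name

alice = Symbol("Alice")            # the protagonist
cousin = Symbol("Alice")           # same spelling, different node

refs = [Reference(alice), Reference(alice), Reference(cousin)]

alice.name = "Maya"                # one change, at the node

print([r.render() for r in refs])  # ['Maya', 'Maya', 'Alice']
```

The rename is a single assignment because identity lives in the node, not in the text. Text-first tools have to reconstruct that identity from spelling every time; a graph never lost it.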
That's what bothered me, watching my AI agent grep through my codebase a year ago. Not the waiting. Not even the bugs. It was that we were asking a brilliant tool to do its work without ever giving it the right tools for code.
pandō
I started sketching the idea on paper. A layer between the agent and the codebase that did the structural understanding for it. Not a replacement for the agent — an organ the agent didn't have. The agent thinks. The structural layer remembers, finds, refactors, transforms.
I needed a name. I kept thinking about trees — code as a tree, references as branches, every leaf rooted in a definition somewhere underneath. That's when I found Pando.
Pando is a colony of quaking aspens in Utah. From above, it looks like a forest: thousands of individual trees, leaves trembling in the wind. From below, it's a single organism. Every tree connected by one vast root system, sharing one genome, one life. The largest known living thing on Earth, and one of the oldest. One root. Many surfaces.
That's exactly what a codebase is. The text on your screen is the leaves. The structure underneath is the root.
For the last year, I've been building that root system. A structural engine that gives any AI agent — Claude Code, Cursor, Aider, and the rest — the right tools for your code: namespaces, symbols, dependencies, call sites, all as a graph it can operate on. Rename a function and every call site moves with it, exactly. Find every caller of a method, with no hallucinations. Transform code as code, not as text.
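"Find every caller, with no hallucinations" is the kind of query a structural layer answers by construction rather than by search. Here is a minimal sketch of the idea using Python's `ast` module — not pandō's engine, just an illustration with invented sample functions: instead of grepping for a name, we walk the tree and report exactly the call sites, each with the function that encloses it.

```python
import ast

def find_callers(source, target):
    """Return (enclosing_function, line) for every call to `target`.

    Walks the syntax tree, so a mention of the name in a string or a
    comment can never show up as a false positive.
    """
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            for inner in ast.walk(node):
                if (isinstance(inner, ast.Call)
                        and isinstance(inner.func, ast.Name)
                        and inner.func.id == target):
                    hits.append((node.name, inner.lineno))
    return hits

source = '''
def save(user):
    validate(user)

def sync(user):
    validate(user)

def log(msg):
    print("validate skipped")
'''

print(find_callers(source, "validate"))  # [('save', 3), ('sync', 6)]
```

Note what the answer excludes: the string `"validate skipped"` inside `log` never appears, because a string node is not a call node. That exactness is what the agent gets for free once the codebase is exposed as a graph instead of a stream of characters.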
Our craft was never typing. It was always deciding what the system should be.
That's why I built pandō.
