Buttons Are Not Functions
— Designing a World Where AI Can Actually Live
I've been a frontend developer for six years.
I've built thousands of buttons, rendered countless lists.
State management, async handling, performance optimization
—I thought I knew this world pretty well.
But one day, when I tried to hand my app over to an AI agent, I was shocked.
To my eyes, it was clearly a "Register" button.
To the AI, it was just a callback pointing somewhere, or a patch of meaningless pixels.
The AI couldn't understand the context of the UI I'd built,
and even slightly complex requests would completely corrupt the system state.
At first, I thought:
"The AI is just too dumb."
But after countless debugging sessions, I realized I was wrong.
The problem wasn't the AI's intelligence.
The problem was that the physics of the world I'd provided were utterly broken.
The UI We Build Is a World Made Only for Humans
Buttons, lists, forms, modals—all of these assume that humans will "fill in the meaning themselves."
Humans just know.
Why this button is here,
what this list represents,
how far this state is valid.
But AI has no such tacit knowledge.
All I'd given it were loosely connected state fragments and event handlers.
At that moment, I stopped coding and started asking questions.
Is a "Register" button really a function?
Is a user list really an array?
No. They Weren't.
They should have been coordinates within the vast space called the domain.
The register button should have been a vector that moves state from coordinate A to coordinate B.
The user list should have been a dimension where those coordinates exist.
What matters isn't "what to render,"
but which coordinate system the state moves through.
This world needed explicit physics, not implicit conventions.
The Keyword: "Deterministic Coordinate Calculation"
State shouldn't be data that changes arbitrarily.
It should be the result of coordinate calculations that always produce the same output for the same input.
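In code, that claim is nothing exotic: every state change is a pure function over the state space. Here is a minimal sketch; the names are mine and purely illustrative:

```typescript
// A state transition is a pure function: same coordinates in, same coordinates out.
type Transition<S, I> = (state: S, intent: I) => S;

// Example: the "register" move reads nothing hidden and mutates nothing in place.
const register: Transition<{ view: string; users: string[] }, { name: string }> = (
  state,
  intent
) => ({
  view: "userList",
  users: [...state.users, intent.name],
});
```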
The moment I grasped this perspective,
the architecture I'd been struggling with for over a year
assembled itself in just one month.
AI was no longer a hallucination-prone nuisance.
Within the geometric space I'd designed,
it became something closer to a calculator that rapidly computes coordinates.
Before & After: What This Actually Looks Like
// Before: Meaningless to AI — just a callback
<button onClick={() => createUser(form)}>
  Register
</button>
The AI sees `createUser`. But what does that mean? What preconditions must be met? What state changes result? It has no idea. So it guesses. And breaks things.
// After: Intent as a vector in state space
{
  intent: "user.create",
  preconditions: {
    currentView: "registrationForm",
    formState: "valid",
    permissions: ["user.write"]
  },
  effects: {
    users: { operation: "append", entity: "$formData" },
    currentView: "userList",
    notification: { type: "success", message: "User created" }
  }
}
Now the AI isn't guessing. It's computing.
Given the current coordinates, which intents are valid?
What will the coordinates be after applying this vector?
The LLM doesn't need to understand your business logic.
It just needs to interpret what the user wants and select the appropriate vector.
The deterministic engine handles the rest.
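To make that concrete, here is a minimal sketch of what such an engine could look like. Everything below is illustrative: the type names and `applyIntent` are my own, not the Manifesto API, and the effects are written as a pure function rather than the declarative form shown above, just to keep the sketch short.

```typescript
// Illustrative types only; not the actual Manifesto API.
type AppState = {
  currentView: string;
  formState: "valid" | "invalid";
  permissions: string[];
  users: Array<Record<string, unknown>>;
};

type IntentDescriptor = {
  intent: string;
  preconditions: {
    currentView?: string;
    formState?: "valid" | "invalid";
    permissions?: string[];
  };
  // Effects as a pure function here; the declarative form above would be interpreted the same way.
  effects: (state: AppState, payload: Record<string, unknown>) => AppState;
};

// Deterministic: the same state, descriptor, and payload always yield the same next state.
function applyIntent(
  state: AppState,
  descriptor: IntentDescriptor,
  payload: Record<string, unknown>
): AppState {
  const pre = descriptor.preconditions;
  const valid =
    (pre.currentView === undefined || pre.currentView === state.currentView) &&
    (pre.formState === undefined || pre.formState === state.formState) &&
    (pre.permissions ?? []).every((p) => state.permissions.includes(p));

  if (!valid) {
    // Invalid vectors are rejected up front, instead of corrupting state later.
    throw new Error(`Preconditions not met for ${descriptor.intent}`);
  }
  return descriptor.effects(state, payload);
}

// The "user.create" vector from the example above, as a pure transition.
const createUserIntent: IntentDescriptor = {
  intent: "user.create",
  preconditions: {
    currentView: "registrationForm",
    formState: "valid",
    permissions: ["user.write"],
  },
  effects: (state, formData) => ({
    ...state,
    users: [...state.users, formData],
    currentView: "userList",
  }),
};
```

Because `applyIntent` is pure, the engine can also answer the first question directly: filter the known descriptors by whether their preconditions hold in the current state, and you have the set of valid intents.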
The O(1) Guarantee: Complexity Doesn't Matter
Here's what surprised me most.
Traditional AI agents get slower as your task gets harder. Ask them to "mark all tasks as done"? A few calls. Ask them to "mark all tasks as done except high priority ones"? Suddenly they're reasoning, backtracking, correcting—9 or 10 LLM calls.
With the coordinate-based approach: always 2 calls. Period.
| Task Type | Traditional Agent | Manifesto |
|---|---|---|
| Simple ("show kanban") | 2-3 calls | 2 calls |
| Multi-field ("create urgent task for tomorrow") | 3-6 calls | 2 calls |
| Exception ("complete all except high priority") | 5-10 calls | 2 calls |
Why? Because the LLM isn't reasoning. It's compiling.
The first call extracts the intent into coordinates. The second call translates the result back to natural language. Everything in between is deterministic computation—no LLM required.
This isn't just faster. It's predictable. Your costs don't spike randomly. Your latency doesn't depend on how the AI "feels" today.
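Here is roughly what those two calls look like in code. This is a sketch under my own assumptions: `llmComplete` is a stand-in for whatever model client you use, and it reuses the `AppState`, `IntentDescriptor`, and `applyIntent` names from the sketch above; none of this is the actual Manifesto implementation.

```typescript
// Stand-in for any LLM client call; not a real library function.
declare function llmComplete(prompt: string): Promise<string>;

// All known vectors, keyed by intent name, e.g. { "user.create": createUserIntent }.
declare const intentRegistry: Record<string, IntentDescriptor>;

async function handleRequest(userMessage: string, state: AppState): Promise<string> {
  // Call 1: compile natural language into a coordinate move (intent + payload).
  const compiled = JSON.parse(
    await llmComplete(
      `Extract an intent and payload from: "${userMessage}". ` +
        `Answer as JSON: { "intent": string, "payload": object }`
    )
  ) as { intent: string; payload: Record<string, unknown> };

  const descriptor = intentRegistry[compiled.intent];
  if (!descriptor) throw new Error(`Unknown intent: ${compiled.intent}`);

  // Everything in between is deterministic computation; no LLM involved.
  const nextState = applyIntent(state, descriptor, compiled.payload);

  // Call 2: translate the computed result back into natural language.
  return llmComplete(
    `Summarize this state change for the user: ${JSON.stringify(nextState)}`
  );
}
```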
Why Traditional Agents Drift
There's a subtle failure mode I discovered while debugging.
Every time an AI agent takes an action, it adds the result to its context. By turn 3, the original user request is just 25% of what the model is attending to. By turn 5, it's 12%.
Turn 1: Original input = 60% of context
Turn 3: Original input = 25% of context
Turn 5: Original input = 12% of context
The model literally forgets what you asked for, buried under its own intermediate outputs.
This is why exception clauses get ignored. "Complete all tasks except the design review" becomes just "complete all tasks" after a few turns of context accumulation.
The coordinate approach sidesteps this entirely. There's no iteration. No accumulating context. Just: input → coordinates → output.
What's Next: Entering the Most Chaotic Coordinate Space
Now I'm taking these principles into the most brutal, non-deterministic world I can find.
The crypto market.
This is a space filled with noise,
where coincidence, illusion, and over-interpretation are daily occurrences.
That's exactly what makes it the perfect extreme test case.
The market is still a state space. Prices, volumes, indicators—they're all coordinates. The challenge is that this space has extreme noise, and humans constantly over-interpret signals that are just randomness.
But if the coordinate system is explicit,
if the physics are deterministic,
then even in chaos, the AI becomes a disciplined calculator—
not a speculator who hallucinates patterns that don't exist.
This experiment is called Coin Sapience.
I'll likely fail multiple times.
But the coordinate system is clear now.
Try It Yourself
The architecture I've described is open source:
GitHub: github.com/manifesto-ai/core
Live Demo: taskflow.manifesto-ai.dev
— An early v0.3 Intent Compiler demo.
Raw, but it shows the core idea.
The Question I'm Left With
I spent six years building UIs for humans.
Now I'm learning to build worlds for AI.
The shift isn't about making AI smarter.
It's about making our systems honest—explicit about their physics instead of relying on human intuition to paper over the gaps.
If you've ever watched an AI agent break your app in creative ways,
maybe the problem wasn't the AI.
Maybe, like me, you just hadn't defined the coordinates yet.
Have you tried giving AI agents access to your applications? What broke? I'd love to hear your war stories in the comments.