We were all taught the same thing. A system has states. It has transitions. Something happens, the system moves from State A to State B. You can draw it on a whiteboard. You can enumerate the possibilities. You can write tests for each branch.
That was true for a long time. It's not true anymore.
Do the math
Take a small transformer. 117 million parameters, each stored as a float32: that's 117 million × 32 bits, about 3.7 billion bits, so the raw state space of just the weights is 2^(3.7 billion). For scale, the number of atoms in the observable universe is around 10^80, or roughly 2^266.
That's before you add activations, attention matrices, the KV cache growing with every token. And that's one model sitting idle. Not a system. Not an architecture. Just one small model.
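The arithmetic above is a one-liner to check. A quick sketch (the parameter count is the GPT-2-small scale mentioned above; everything else is back-of-the-envelope):

```python
import math

params = 117_000_000        # roughly GPT-2 small
bits_per_param = 32         # float32
weight_bits = params * bits_per_param

# Every distinct bit pattern is a distinct weight configuration.
print(f"weight configurations: 2^{weight_bits:,}")   # 2^3,744,000,000

# Atoms in the observable universe: about 10^80, i.e. 2^(80 * log2(10)).
atoms_exp = 80 * math.log2(10)
print(f"atoms in the universe: about 2^{atoms_exp:.0f}")  # about 2^266
```

Even ignoring that most of those configurations are garbage, the gap between an exponent of 266 and an exponent of 3.7 billion is the whole point.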
Now build something real. An orchestrator spawns four sub-agents. One is browsing a website. One is querying a database. One is calling an external API. One is doing a computation. Each has its own latency, its own failure modes, its own ability to return something you didn't expect.
What state is that system in?
You don't know. I don't know. Nobody knows, because the space of possible configurations is so absurdly vast that calling it "astronomical" is generous. You can't draw this on a whiteboard. You can't enumerate the branches. The flowchart is a lie.
It's not random either
Your first instinct might be to reach for probability. If we can't predict the exact state, maybe we can describe the distribution of likely states. Stochastic modeling. Markov chains. The math is right there.
But that framing is wrong too, because these systems aren't rolling dice. An agent returning a useful summary of a web page isn't a random event. It's the result of a goal-directed process that evaluated and corrected itself on a token-by-token basis across thousands of sequential decisions. The output is useful precisely because it isn't random.
So you're stuck in a third space. Not deterministic. Not stochastic. Something else.
Convergent but underdetermined
Here's the framing I keep coming back to.
An LLM doesn't select an output from a distribution and accept whatever comes up. Every token is an evaluation. The weights encode something like "given everything generated so far, what moves me closer to a coherent completion?" The model is steering. Continuously. At the lowest level of its operation.
That's already not a state machine. But zoom out.
Your orchestrator has four sub-agents running. Each one is internally converging toward its own useful output. The orchestrator is monitoring returns in real time, and each return reshapes how it evaluates the others. Agent 3's result might make agent 2's task irrelevant. Agent 1's failure might mean re-dispatching agent 4 with different parameters.
You have nested convergence loops running at different scales, different speeds, none following a predetermined path, all goal-directed. The system isn't transitioning between states. It's navigating toward coherence through a space that only reveals itself as the system moves through it.
The closest analogy isn't computer science. It's biology. A cell responding to chemical gradients isn't executing a flowchart or rolling dice. It's resolving toward a functional configuration through continuous interaction with an environment it can't fully predict.
Why this matters practically
If this framing is right, then designing agentic systems with state-machine thinking isn't just imprecise. It's architecturally wrong. You're imposing discrete checkpoints on a system whose fundamental operation is continuous convergence. You're fighting the nature of the thing.
The alternative might look something like designing around convergence envelopes. Not "what state should the system be in at step 3" but "what region of outcome space should this process be converging toward, and what boundaries should it not cross while getting there."
Under that model, an orchestrator isn't a state manager. It's a convergence auditor. Its job isn't to track which step the system is on. Its job is to monitor whether the system is still heading toward a useful result, and intervene when it drifts outside acceptable bounds.
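To make "convergence auditor" slightly more concrete, here is a minimal sketch. Everything in it is hypothetical: the `Envelope` and `Auditor` names, the thresholds, and the toy scoring function are illustrations of the idea, not a proposed API. The point is only that the orchestrator scores outputs against an envelope instead of tracking a step counter:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Envelope:
    # A convergence envelope: a scoring function plus the bounds the
    # process must stay within while converging toward a useful result.
    score: Callable[[str], float]  # 0.0 = useless, 1.0 = clearly useful
    floor: float = 0.3             # below this, the agent has drifted
    target: float = 0.9            # at or above this, accept the result

@dataclass
class Auditor:
    envelopes: dict[str, Envelope]
    history: dict[str, list[float]] = field(default_factory=dict)

    def observe(self, agent: str, output: str) -> str:
        # Not "which step are we on?" but "are we still heading
        # toward the useful region, and are we inside bounds?"
        env = self.envelopes[agent]
        s = env.score(output)
        self.history.setdefault(agent, []).append(s)
        if s >= env.target:
            return "accept"      # converged into the useful region
        if s < env.floor:
            return "intervene"   # drifted outside acceptable bounds
        return "continue"        # still converging; keep watching

# Toy score for a browsing agent: fraction of required fields present.
required = {"summary", "price", "source"}
score = lambda text: len(required & set(text.split())) / len(required)

auditor = Auditor({"browser": Envelope(score)})
print(auditor.observe("browser", "partial summary only"))  # continue
```

A real scoring function would be the hard part, and probably an LLM call itself. But note what disappears: there is no enumeration of states, only a trajectory of scores per agent and a decision about whether the trajectory is still acceptable.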
I don't have the answers
I want to be clear that this is half a thought. I don't have a formal model. I don't have a replacement for the state machine abstraction that you can hand to a junior engineer and say "use this." I'm not sure one exists yet.
But I know the old model is broken. If you've tried to draw a flowchart for an agentic system and felt like you were lying, you were. The system you're building doesn't have states. It has trajectories. It doesn't transition. It converges.
Somebody smarter than me will figure out the formalism. I just wanted to point at the gap.