Neural networks started as an esoteric discipline: a small group of researchers trying to teach machines to think, largely ignored by industry. Then the creature came to life, and everyone rushed in to build on top of it.
The problem with rushing: you rebuild what already exists.
Not because you're careless. Because nobody told you to look at the right shelf.
The agentic AI field is in a phase every young technology goes through: rediscovering solved problems without knowing they are solved. Engineers are building bespoke orchestration frameworks that duplicate Dapr, inventing retry logic that misses guarantees Sagas have provided since 1987, and designing agent communication protocols that solve problems gRPC solved in 2015.
The infrastructure layer largely maps onto solved distributed systems patterns. Stop rebuilding it from scratch.
But three problems have no ancestor.
Semantic failure is invisible. A microservice failure returns HTTP 500. An LLM failure returns HTTP 200 — with a confident, plausible, wrong answer. Your circuit breakers do not trip. Your dashboards stay green. Your observability stack is structurally blind to semantic errors. This is new.
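To make the blindness concrete, here is a minimal sketch. The `llm_answer` function is a hypothetical stub standing in for a real model call; the point is that a status-code health check has nothing to trip on:

```python
def http_health_check(status_code: int) -> bool:
    # Traditional observability: anything below 500 counts as healthy.
    return status_code < 500

def llm_answer(question: str) -> tuple[int, str]:
    # Stub standing in for a real LLM call: the transport succeeds
    # (HTTP 200) even though the answer itself is wrong.
    return 200, "The capital of Germany is Paris."

status, answer = llm_answer("What is the capital of Germany?")

print(http_health_check(status))   # True: dashboards stay green
print("Berlin" in answer)          # False: the semantic error is invisible
```

Catching the second failure requires a separate semantic check, such as validation against ground truth or a critic model, which no status-code-based circuit breaker provides.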
The compute unit has opinions. A Redis node executes your request. An LLM interprets it, shaped by training data and alignment constraints you cannot inspect or patch. When a microservice misbehaves, you get a stack trace. When an LLM misinterprets, you get a well-formatted wrong answer three reasoning steps later. Also new.
The instruction/data boundary does not exist. SQL injection is solved — parameterized queries separate instructions from data at the protocol level. Prompt injection is not solved because instructions and data are both text. There is no type system that separates them.
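The contrast fits in a few lines. The SQL half uses the standard `sqlite3` placeholder API; the prompt half is a hypothetical concatenation, which is all the prompt side has:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

# Malicious input that would subvert a naive string-concatenated query.
user_input = "alice' OR '1'='1"

# Parameterized query: the protocol separates instruction (SQL text)
# from data (the bound value). The payload is inert.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # []: the payload was treated as plain data, not SQL

# A prompt has no equivalent boundary: instruction and data are both text.
prompt = f"Summarize this document:\n{user_input}"
# Nothing at the type or protocol level marks user_input as data-only.
```

The `?` placeholder is the protocol-level boundary that prompts lack; wrapping user text in delimiters or XML tags is a convention the model may or may not honor, not a guarantee.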
There is also a fourth problem surfacing: semantic coordination instability. Systems whose orchestration topology is structurally sound, but whose semantic interactions destabilize execution — semantic retry amplification, cascading hallucination propagation, planner oscillation. This sits between distributed systems, control theory, and probabilistic cognition. It does not yet have a field.
If you have a distributed systems background, you have a 30-to-50-year head start on every infrastructure problem in agentic AI. Use it. Pick Orleans for stateful agent orchestration. Use Temporal for durable workflows. Apply Saga discipline to retry logic. Read the Hearsay-II papers before designing your multi-agent scratchpad.
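As one example of what "Saga discipline" means here, this is a minimal sketch, not a production pattern, with hypothetical step and compensation functions. Each step registers an undo action; on failure, completed steps are compensated in reverse order instead of being blindly retried:

```python
def run_saga(steps):
    """Execute (action, compensation) pairs; on failure, run the
    compensations for already-completed steps in reverse order."""
    completed = []
    for action, compensate in steps:
        try:
            action()
            completed.append(compensate)
        except Exception:
            for comp in reversed(completed):
                comp()  # undo side effects before propagating the failure
            raise

log = []

def failing_step():
    raise RuntimeError("downstream agent returned garbage")

steps = [
    (lambda: log.append("booked"), lambda: log.append("unbooked")),
    (failing_step, lambda: None),
]

try:
    run_saga(steps)
except RuntimeError:
    pass

print(log)  # ['booked', 'unbooked']
```

The same shape applies to agent workflows: a tool call with side effects needs a compensation, because retrying an agent step that already wrote somewhere is not idempotent.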
Then spend the time you saved on the four problems above. That is where the actual engineering work is.
This is a condensed version of a two-part series published at OmniTechnicus.AI. Part 1 maps every agentic AI pattern to its distributed systems ancestor in full. Part 2 delivers the engineering playbook for semantic correctness — the part distributed systems cannot help you with.