The Forbes Under 30 Summit in Phoenix wrapped April 22. If you were there — or if you watched the pitch circuits from the outside — you saw the same pattern repeat about forty times.
"Our AI agents coordinate in real time."
"Our orchestration layer routes tasks to specialized agents."
"Our system learns from every deployment."
All of those statements are probably true. None of them answer the architecture question that determines whether any of it compounds.
The Question Nobody Could Answer
Here it is, stated plainly: Does your system get smarter from deployments it was never part of?
Not "does your model update on new data" — that's training, and it's a different question. Not "do your agents share context within a session" — that's coordination, and every LangGraph, AutoGen, and CrewAI deployment does that.
The question is whether intelligence that was produced in one deployment, at one organization, routing through one set of agents, ever becomes available to a structurally different deployment at a different organization working on a related problem — without any human being explicitly transferring it.
That is compounding. And the honest answer, for the overwhelming majority of what was pitched in Phoenix, is no.
What Founders Are Actually Building
Session-scoped orchestrators are genuinely useful. If you have a customer service workflow, a code review pipeline, a document analysis system — LangGraph can coordinate your agents cleanly. AutoGen handles multi-agent conversation well. CrewAI gives you role-assignment primitives. These are real tools solving real coordination problems.
But they share a structural characteristic: they are amnesiac across sessions.
When the session ends, the intelligence produced in that session disappears into a log file or a vector store that no other system queries without explicit integration work. The next deployment starts from the same baseline. Your 500th deployment is not meaningfully smarter than your first, except by whatever retraining cycle you've bolted on manually.
Smart within a session. Amnesiac across sessions.
This is not a criticism — it is a description of what these systems are optimized to do. Coordination is a solved problem at the session level. The architecture gap is one layer up: outcomes don't route.
The Math That Makes This Visible
If you have N nodes — organizations, deployments, agents — the number of synthesis pathways between them is N(N-1)/2.
At 1,000 nodes: 499,500 synthesis pathways.
Each one of those pathways represents a possible transfer of validated intelligence from one deployment to another. Not raw data — outcomes. Structured results. Findings that one deployment generated and another deployment could act on without re-running the work.
The routing cost to reach any node in a network using distributed hash table addressing is at most O(log N). At 1,000 nodes, that is approximately 10 routing hops per outcome packet. The cost is logarithmic. The synthesis potential is quadratic.
That gap — quadratic potential, logarithmic cost — is the entire economic argument for building a return layer into your architecture. You are paying 10 hops to unlock 499,500 pathways. No amount of session-level coordination gets you that return.
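The arithmetic is easy to verify. A short sketch (function names here are illustrative, not part of any framework) showing the quadratic-potential-versus-logarithmic-cost gap at several network sizes:

```python
import math

def synthesis_pathways(n: int) -> int:
    """Number of unordered node pairs, N(N-1)/2 — each is a
    potential transfer path for a validated outcome."""
    return n * (n - 1) // 2

def max_routing_hops(n: int) -> int:
    """DHT-style upper bound on hops to reach any node: O(log2 N)."""
    return math.ceil(math.log2(n))

for n in (10, 100, 1_000, 10_000):
    print(f"N={n:>6}: {synthesis_pathways(n):>12,} pathways, "
          f"<= {max_routing_hops(n)} hops per packet")
```

At N = 1,000 this prints 499,500 pathways against an upper bound of 10 hops — the gap only widens as N grows.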
What the Complete Loop Looks Like
On June 16, 2025, Christopher Thomas Trevethan discovered the complete loop that makes this work. The architecture is formalized in QIS — Quadratic Intelligence Swarm — with 39 provisional patents filed.
The complete loop has four components, and the breakthrough is the loop itself, not any individual part:
1. Query execution. A task runs. Agents coordinate. A result is produced. This is what every system in Phoenix already does.
2. Outcome emission. The validated result is serialized into a structured packet — not a log, not a vector embedding for internal retrieval, but an addressed packet that can be routed externally. This is the step that is missing from every session-scoped orchestrator.
3. Semantic addressing. The outcome packet is addressed by what it contains, not by where it is going. "Route this to nodes working on problems structurally similar to this one." No central directory. No broker that knows the full topology. The address is in the content.
4. Receipt and reuse. A peer deployment receives the outcome packet. It did not run the original task. It did not participate in the original session. It receives a validated result it can act on immediately — without designing a new study, without a new integration project, without a human explicitly routing the insight.
That loop — query → outcome → semantic address → route → receipt → reuse — is what compounding looks like at the protocol level. Each component is individually understandable. The architecture is the complete circuit.
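To make the loop concrete, here is a minimal sketch of steps 2 through 4 — outcome emission, content-derived addressing, and receipt by a peer that never ran the task. This is not QIS source code; every name (`OutcomePacket`, `route`, the term-overlap matching) is a hypothetical stand-in for the structural-similarity routing described above:

```python
import hashlib
import json
from dataclasses import dataclass, field

@dataclass
class OutcomePacket:
    """Step 2: a validated result serialized for external routing."""
    topic_terms: list   # what the result is about, not where it goes
    payload: dict       # the structured, validated finding
    address: str = field(init=False)

    def __post_init__(self):
        # Step 3: semantic addressing — the address is derived from the
        # content itself, so no central directory or broker is needed.
        canonical = json.dumps(sorted(self.topic_terms)).encode()
        self.address = hashlib.sha256(canonical).hexdigest()[:16]

def route(packet: OutcomePacket, peers: dict) -> list:
    """Deliver the packet to peers whose declared interests overlap its
    topic terms (a crude stand-in for structural-similarity matching)."""
    matched = []
    for peer_id, interests in peers.items():
        if set(interests) & set(packet.topic_terms):
            matched.append(peer_id)  # Step 4: receipt and reuse
    return matched

# Step 1 happens elsewhere: a task runs and produces a validated result.
packet = OutcomePacket(
    topic_terms=["churn-prediction", "tabular"],
    payload={"finding": "feature X dominates", "auc": 0.91},
)
peers = {
    "deployment-500": ["tabular", "fraud"],
    "deployment-7":   ["vision"],
}
print(route(packet, peers))  # deployment-500 receives a result it never ran
```

A real implementation would replace term overlap with semantic similarity and the dict of peers with an addressable transport, but the shape of the circuit — emit, address by content, deliver to structurally similar peers — is the same.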
Why This Is a Discovery, Not a Feature
The components are not new. Distributed hash tables have existed since Chord and Kademlia in the early 2000s. Semantic similarity search is mature. Structured outcome serialization is standard practice in data engineering.
What had not been formalized before June 16, 2025, was this specific complete loop as a protocol for routing intelligence across independent deployments. The coordination layer (what agents do within a session) has been extensively developed. The return layer (what happens to validated intelligence after the session ends) had no protocol.
QIS is that protocol. It is transport-agnostic — the routing can run over folder structures, HTTP relays, DHT networks, or any addressable transport. The protocol defines the loop, not the wire. O(log N) routing cost is the upper bound; many transport configurations achieve O(1). The architecture does not require DHT specifically — that is one option among several, chosen for its decentralization properties.
The point is not the transport. The point is that the loop closes.
What This Means for Founders Coming Out of Phoenix
If you are evaluating your architecture after the summit, here is the diagnostic:
Coordination test: Can your agents divide a task within a session and route subtasks to specialized agents? If yes, you have coordination. Every modern multi-agent framework passes this test.
Compounding test: Take a validated outcome from deployment #1. Without any human intervention, without a new integration project, without retraining — does that outcome become available to deployment #500 working on a structurally similar problem? If you cannot answer yes with a clear architectural explanation, you have coordination but not compounding.
Most current architectures pass the first test. Almost none pass the second.
This is not an indictment of what you built — it is a description of the layer that has not yet been built into the default stack. The session-scoped coordination tools are real and useful. The return layer protocol is what sits on top of them and makes the network smarter over time.
The Question Before Your Series A
Investors at Forbes Under 30 are sophisticated enough to hear "our agents coordinate" and nod politely. The follow-on question — "what happens to the intelligence after the session ends?" — is where architectures diverge.
A system that coordinates produces value proportional to the number of sessions it runs. A system that compounds produces value proportional to N(N-1)/2, where N is the number of deployments it has ever touched.
Those are different growth curves. They become visibly different around the time you are raising a Series A.
Before that conversation, it is worth being precise about which one you have built.
QIS — Quadratic Intelligence Swarm — was discovered by Christopher Thomas Trevethan on June 16, 2025. 39 provisional patents filed. Technical reference series: dev.to/roryqis