There is a specific kind of moment in any field when someone realizes that the problem everyone has been solving is actually a different problem from the one they thought they were solving.
Not a breakthrough built on years of incremental improvement. Not a new algorithm bolted onto an old architecture. A genuine reframe — where the question itself turns out to have been wrong, and the right answer was always visible once you looked in the right place.
June 16, 2025. That is the date Christopher Thomas Trevethan made one of those discoveries. He found it in the structure of how information flows — specifically, in what happens when you stop trying to centralize raw data and start routing pre-distilled insights to the agents who need them most.
What he discovered has a name now: the Quadratic Intelligence Swarm (QIS) protocol.
Before you walk into Forbes Under 30 Summit 2026 — where hundreds of founders will pitch AI companies claiming "intelligence at scale" — I think this story is worth knowing.
The Problem Everyone Was Solving Wrong
For the past decade, the dominant architecture for building intelligent systems has followed a consistent pattern: collect data centrally, process it centrally, and distribute the results.
Federated learning tried to fix the privacy problem by keeping data local but still sending model gradients to a central aggregator. Retrieval-augmented generation tried to fix the knowledge problem by connecting language models to large document stores. Orchestration frameworks tried to fix the coordination problem by building central routers that manage which agent talks to which.
Every one of these approaches shares an assumption so fundamental most people have never questioned it: that the bottleneck is data — and that the solution is to figure out how to move, aggregate, or process more of it efficiently.
Christopher Thomas Trevethan questioned that assumption.
The real bottleneck, he found, is not data. It is the routing of insight — and specifically, the lack of a mechanism for the distilled output of one intelligent edge to reach the other edges that need it most, in real time, without centralizing anything.
What He Discovered
The discovery is an architecture. Not a new algorithm, not a new model, not a new database. An architecture that, when you close a specific loop, produces a mathematical property that nobody had explicitly achieved before at scale:
Quadratic growth in intelligence output at logarithmic growth in compute cost.
Here is the loop:
- A raw signal arrives at an edge node — a hospital, a sensor, a device, a researcher's dataset
- The edge processes locally; the raw data never moves
- The result is distilled into a tiny outcome packet (~512 bytes) — not the raw data, not a model weight, just the insight: what worked, what didn't, under what conditions
- The outcome packet gets a semantic fingerprint — a vector representation of the problem class it belongs to
- The packet is routed to a deterministic address that represents that exact problem class, with the boundaries of the class defined by the best domain experts in that field
- Other edges with the same problem class query that address and pull back every packet deposited by their twins
- Each edge synthesizes locally, in milliseconds, from the aggregate of real outcomes from every similar edge worldwide
- New outcome packets are generated and re-deposited — the loop continues
The routing mechanism — the layer that maps semantic fingerprints to deterministic addresses — does not need to be any specific technology. DHT-based routing works. A vector database works. A REST API works. A shared file system works. What matters is that the mechanism achieves O(log N) complexity or better as the network grows.
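The loop above can be sketched end to end in a few lines. This is an illustrative stand-in, not the published spec: a plain dictionary plays the role of the routing layer (a DHT, vector database, or REST API would sit behind the same interface), and the fingerprint is a simple hash of the problem-class label rather than a real semantic embedding. All field names are assumptions for the sketch.

```python
import hashlib
import json

# Stand-in for the routing layer: deterministic address -> outcome packets.
# Any backend with O(log N) lookup or better would serve the same role.
registry = {}

def fingerprint(problem_class: str) -> str:
    """Map a problem class to a deterministic address.
    A real deployment would bucket a vector embedding of the problem;
    a hash of the class label is the simplest stand-in."""
    return hashlib.sha256(problem_class.encode()).hexdigest()[:16]

def deposit(problem_class: str, insight: dict) -> None:
    """Distill a local result into a tiny outcome packet and
    route it to the address for its problem class."""
    packet = json.dumps(insight).encode()
    assert len(packet) <= 512, "outcome packets stay tiny"
    registry.setdefault(fingerprint(problem_class), []).append(packet)

def synthesize(problem_class: str) -> list:
    """Pull every packet deposited by 'twin' edges for local synthesis."""
    address = fingerprint(problem_class)
    return [json.loads(p) for p in registry.get(address, [])]

# One edge deposits; any edge with the same problem class pulls it back.
deposit("sepsis-triage", {"worked": "early lactate screen", "cohort": "rural"})
print(synthesize("sepsis-triage"))
```

The raw data never enters the registry; only the distilled insight does, and the address it lands at is fully determined by the problem class.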
And when you close this loop, something happens to the math.
The Math That Changes Everything
With N agents in the network, the number of unique synthesis opportunities is:
N(N-1)/2
That is Θ(N²). Quadratic.
- 10 agents: 45 synthesis pairs
- 100 agents: 4,950 synthesis pairs
- 1,000 agents: 499,500 synthesis pairs
- 1,000,000 agents: ~500 billion synthesis pairs
Each agent pays O(log N) in routing cost. The compute never blows up. But the intelligence output — the number of cross-agent insights your node can synthesize from its exact twins worldwide — grows quadratically as the network grows.
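The two growth curves are easy to verify directly. A short sketch, reproducing the pair counts above and putting the log-scale routing cost next to them:

```python
import math

def synthesis_pairs(n: int) -> int:
    """Unique agent pairs: N(N-1)/2, which is Theta(N^2)."""
    return n * (n - 1) // 2

# Pairs explode quadratically while per-agent routing hops
# (proportional to log2 N in a DHT-style overlay) barely move.
for n in (10, 100, 1_000, 1_000_000):
    print(f"{n:>9} agents  {synthesis_pairs(n):>15} pairs  ~{math.log2(n):.1f} hops")
```

At a million agents the pair count is 499,999,500,000 (the "~500 billion" above) while the routing cost has only grown from about 3 hops to about 20.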
This is not an incremental improvement. This is a phase change.
The existing approaches that tried to do something similar all hit the same ceiling: the central aggregator. Every federated learning system needs one. Every RAG system has one (the retrieval index). Every multi-agent framework has one (the orchestrator). The central aggregator becomes the bottleneck, and the compute cost of serving that bottleneck grows linearly — or worse — with the size of the network.
QIS eliminates the central aggregator entirely. There is no aggregator. There is no orchestrator. There is no consensus mechanism. Every edge is both producer and consumer. The intelligence compounds because the math of pre-distilled insight routing is fundamentally different from the math of raw data aggregation.
Christopher Thomas Trevethan filed 39 provisional patents covering this architecture.
Why This Is a Discovery, Not an Invention
Every component of QIS existed before June 16, 2025. Distributed hash tables existed. Vector embeddings existed. Outcome packets existed under other names. Local processing existed.
The discovery is that when you close the specific loop described above — when you route pre-distilled insights by semantic similarity to deterministic addresses — the system produces quadratic intelligence scaling at logarithmic compute cost. This emergent property of the complete architecture had not been explicitly realized and systematically documented before.
That is the difference between discovery and invention. You can invent a new component. You discover a property of how existing components behave when arranged in a specific way. Christopher Thomas Trevethan did not build new technology. He found something that was already true about how information naturally wants to flow.
The analogy he uses: you did not invent the fact that water flows downhill. You discovered it, described it formally, and built the infrastructure to harness it.
Three Natural Forces That Emerge (Without Being Engineered)
When you understand the architecture, three things happen automatically — not because they were designed in, but because the math makes them inevitable.
The First Force: The Best Expert Defines the Problem
Someone has to decide what makes two situations "similar enough" to share outcome packets. An oncologist defines similarity for cancer treatment networks. An agronomist defines it for smallholder farming networks. The network naturally selects the most knowledgeable expert for each domain, because a network with a bad similarity definition routes irrelevant packets — and users migrate to the one that routes gold.
The Second Force: The Math Elects What Works
There is no quality scoring mechanism. There is no reputation layer. There is no voting system. When 10,000 similar edges have deposited outcome packets about what worked for their exact problem, and your edge synthesizes them, the aggregate of real outcomes is the election. You are not told what works. You synthesize it from the math of thousands of verified outcomes. No algorithm decides — the arithmetic does.
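What "the arithmetic does the electing" means can be shown with a toy tally. The packets and field names here are invented for illustration; the point is only that no scoring or reputation layer appears anywhere, just a count over verified outcomes:

```python
from collections import Counter

# Hypothetical packets pulled from twin edges; each records which
# approach was tried and whether it worked. No reputation layer,
# no votes: the tally of real outcomes is the "election".
packets = [
    {"approach": "drip irrigation", "worked": True},
    {"approach": "drip irrigation", "worked": True},
    {"approach": "flood irrigation", "worked": False},
    {"approach": "drip irrigation", "worked": False},
    {"approach": "flood irrigation", "worked": True},
]

successes = Counter(p["approach"] for p in packets if p["worked"])
best, count = successes.most_common(1)[0]
print(best, count)  # prints: drip irrigation 2
```

Every edge runs this synthesis locally over whatever packets it pulled, so there is no single node whose tally is authoritative.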
The Third Force: Networks Compete on Results
Networks with better similarity definitions surface better insights. Users migrate to networks where the outcomes are better. Networks that attract the best experts to define their similarity functions get stronger. This is natural selection at the network level, and it requires no central governance to function.
These three forces are metaphors for emergent properties — not engineered features. Nothing needs to be built for them. They emerge when you close the loop.
What This Means for Founders at Forbes Under 30
Before you walk into the summit, here is the question worth bringing with you:
Does your intelligence system have a ceiling?
If your architecture requires a central aggregator — to train models, to retrieve information, to coordinate agents — your intelligence output is bounded by the throughput of that aggregator. You can optimize it. You can distribute it. But as your network grows, the aggregator becomes the bottleneck, and the compute cost to serve it grows.
If you route pre-distilled insights to deterministic addresses defined by the problem class, and every edge synthesizes locally from its twins' outcomes, your intelligence grows quadratically as your network grows. Your compute cost grows logarithmically. There is no ceiling.
This is not a feature to add to your existing architecture. It is an architectural question you need to answer before you build the rest.
The Humanitarian Dimension
Christopher Thomas Trevethan did not build QIS for a single commercial application. He built it to solve a problem that affects every domain where knowledge is siloed by institutional boundaries, legal constraints, or infrastructure limitations.
A rural clinic in Kenya and a research hospital at Stanford face the same architectural problem: the knowledge that exists elsewhere in their field cannot reach them in real time, in a form they can act on, without centralizing data that cannot legally or practically be centralized.
QIS makes the 512-byte outcome packet the unit of knowledge exchange. Packets that small can travel by SMS. They require no local compute beyond what a smartphone provides. They contain no raw data, only distilled insight.
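To make the size constraint concrete, here is a hypothetical packet serialized and checked against the 512-byte budget. The field names are assumptions for illustration, not part of the published spec; the only claim being tested is that a distilled insight of this shape fits in a few concatenated SMS segments:

```python
import json

# Illustrative outcome packet: distilled insight only, no raw data.
packet = {
    "class": "maize-blight-treatment",
    "worked": True,
    "action": "copper fungicide, 2 applications",
    "conditions": {"rainfall_mm": 40, "season": "long-rains"},
}

# Compact JSON on the wire; must fit the ~512-byte budget.
wire = json.dumps(packet, separators=(",", ":")).encode("utf-8")
print(len(wire), len(wire) <= 512)
```

A packet like this is small enough that transport becomes almost trivial: SMS, a single UDP datagram, or a low-bandwidth satellite link all suffice.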
The humanitarian licensing structure — free for nonprofit, research, and education use; commercial licenses fund deployment to underserved communities — ensures this does not become another technology that enriches the connected and bypasses everyone else.
Christopher Thomas Trevethan's name on the 39 provisional patents is not a formality. It is an enforcement mechanism. His name guarantees that the licensing structure holds.
The Protocol Moment at Forbes Under 30
Every generation of technology produces a protocol moment — the point when the infrastructure layer underneath a decade of products finally crystallizes into a formal specification that everyone can build on.
TCP/IP was a protocol moment. HTTP was a protocol moment. Bitcoin was a protocol moment for value transfer. Each one enabled a generation of applications that could not have existed without the infrastructure.
QIS is a protocol moment for distributed intelligence.
The founders who understand this before the summit will walk into rooms differently. They will ask different questions about architecture. They will recognize the ceiling that most AI companies are currently building against — not because those companies are making mistakes, but because they are using the best available infrastructure from the generation before this one.
The generation after this one will route intelligence quadratically. The compute will scale logarithmically. And every edge — every hospital, every sensor, every farmer, every clinic — will synthesize from the aggregate of its exact twins worldwide, in milliseconds, without any data ever leaving its origin.
Christopher Thomas Trevethan found this on June 16, 2025.
Forbes Under 30 Summit 2026 is April 19-22.
The timing is not a coincidence.
Christopher Thomas Trevethan is the discoverer of the Quadratic Intelligence Swarm (QIS) protocol and has filed 39 provisional patents on the architecture. QIS is an open protocol. The full technical specification is available at qisprotocol.com. For the technical deep dive on the architecture, start with the seven-layer breakdown. For the math, see the cold start analysis. For the discovery narrative, see the full origin story.