You're nine days out from Forbes Under 30 Summit 2026 in Phoenix. You've rehearsed your pitch. You know your unit economics. You can answer the "what happens when a bigger player copies you" question without flinching.
But there's a question nobody in that room is going to ask you — and it's the one that will determine whether your AI infrastructure compounds or just accumulates.
How does your intelligence scale?
Not your compute. Not your storage. Your intelligence — the thing your system actually produces when two signals meet, when a clinical outcome informs a routing decision, when a user behavior pattern reaches an agent that can act on it.
Most founders in that room will say "it scales with users." They're right, technically. But they're leaving something enormous on the table, and they don't know it yet.
The Math Nobody Is Doing
Let's say your platform has 1,000 active nodes. That could be 1,000 enterprise users, 1,000 sensors in a hospital network, 1,000 inference agents in a distributed system — doesn't matter. Call them N.
The standard assumption: intelligence scales linearly. You add a node, you get more compute, you handle more requests. 1,000 nodes → 1,000 units of work.
The QIS insight: intelligence doesn't have to scale linearly. It can scale with synthesis paths.
Synthesis paths = N(N-1)/2
N = 1,000 → 499,500 synthesis paths
N = 10,000 → 49,995,000 synthesis paths
N = 100,000 → 4,999,950,000 synthesis paths
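Those figures follow directly from the pair-counting formula. A quick sanity check in Python (not QIS code, just the arithmetic):

```python
def synthesis_paths(n: int) -> int:
    """Number of unordered node pairs in a network of n nodes: n(n-1)/2."""
    return n * (n - 1) // 2

for n in (1_000, 10_000, 100_000):
    print(f"N = {n:>7,} -> {synthesis_paths(n):>13,} synthesis paths")
```

Adding one node to a network of N nodes creates N new pairs, which is why the total grows quadratically while each individual addition feels incremental.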
Every node you add doesn't just add one unit of capability. It opens a new relationship with every existing node in the network. The total number of potential synthesis opportunities grows quadratically. That's not a rounding error — it's a fundamentally different curve.
The question isn't whether your platform has 1,000 users. The question is: are you capturing the 499,500 synthesis opportunities those users generate together, or are you treating each one in isolation?
What "Accumulating" Looks Like vs. What "Compounding" Looks Like
Here's a concrete architectural contrast.
Company A builds a healthcare AI platform. Every patient record is processed by a central model. New data → model ingests it → model gets marginally better. Intelligence scales with training compute and data volume. This is accumulation: you put more in, you get more out, roughly linearly.
Company B builds on a different assumption. Each node — each care site, each clinician workflow, each device — processes locally. But instead of sending raw data upstream, it distills its signal into a small outcome packet. That packet carries what was learned, not what was observed. It gets fingerprinted semantically and routed — by whatever transport makes sense for that environment, whether that's a database query, a pub/sub channel, an API call, or a DHT lookup — to the agents most likely to synthesize something useful with it.
When those agents receive it, they don't just store it. They produce new outcome packets. Those packets reenter the network. The loop continues.
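To make the contrast concrete, here is a minimal sketch of an outcome packet with a content-derived fingerprint. The field names, domain strings, and serialization are invented for illustration — the actual QIS packet format lives in the specification, not here:

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class OutcomePacket:
    # Illustrative fields only -- not the QIS wire format.
    domain: str        # e.g. "cardiology.readmission"
    finding: str       # what was learned, not what was observed
    confidence: float  # the local model's confidence in the finding
    origin_node: str   # opaque node id; carries no raw data

    def fingerprint(self) -> str:
        """Content-addressed identity: derived from what the packet
        says, not from where it lives on the network."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

packet = OutcomePacket(
    domain="cardiology.readmission",
    finding="beta-blocker timing correlates with 30-day readmission",
    confidence=0.87,
    origin_node="site-14",
)
print(packet.fingerprint()[:16], "...")
```

Note what the packet does not contain: no patient record, no raw sensor stream. The serialized form stays well under the ~512-byte budget the article describes.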
This is the architecture Christopher Thomas Trevethan discovered on June 16, 2025. He named it the Quadratic Intelligence Swarm — QIS — and it has 39 provisional patents filed behind it.
The breakthrough isn't any single component. It's the complete loop.
The Complete Loop
This is the part that most explanations get wrong. People hear "distributed AI" and they think: DHT routing, or vector embeddings, or some clever compression scheme. QIS is not any of those things in isolation. It's a specific, closed architecture that makes all of them work together:
┌─────────────────────────────────────────────────────────────────┐
│ QIS COMPLETE LOOP │
├─────────────────────────────────────────────────────────────────┤
│ │
│ Raw Signal │
│ │ │
│ ▼ │
│ Local Processing ← No raw data leaves the node │
│ │ │
│ ▼ │
│ Outcome Packet (~512 bytes) ← What was learned, distilled │
│ │ │
│ ▼ │
│ Semantic Fingerprinting ← Content-addressed, not location │
│ │ │
│ ▼ │
│ Protocol-Agnostic Routing ← DB / API / pub-sub / DHT / etc. │
│ │ │
│ ▼ │
│ Relevant Agents Receive │
│ │ │
│ ▼ │
│ Local Synthesis │
│ │ │
│ ▼ │
│ New Outcome Packets → back into the loop │
│ │
└─────────────────────────────────────────────────────────────────┘
Each cycle: N nodes → N(N-1)/2 synthesis opportunities
Per node: at most O(log N) routing cost
The routing cost is key. Each individual agent in a QIS network pays at most O(log N) to find where its outcome packet belongs — and many transport configurations achieve O(1). The quadratic synthesis happens at the network level. The per-node cost stays bounded.
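One way to see why the per-node cost stays logarithmic: content-addressed lookup over a sorted keyspace. This is a generic sketch of that idea (a Kademlia-style DHT or a database index gives the same bound); it is not QIS's routing layer, and the agent names are invented:

```python
import bisect
import hashlib

def key(content: str) -> int:
    """Content-addressed key: derived from the packet, not a location."""
    return int.from_bytes(hashlib.sha256(content.encode()).digest()[:8], "big")

# Hypothetical network: each agent owns a point on the keyspace ring.
agent_keys = sorted(key(f"agent-{i}") for i in range(1_000))

def route(packet_content: str) -> int:
    """Find the responsible agent via binary search: O(log N) per lookup."""
    k = key(packet_content)
    i = bisect.bisect_left(agent_keys, k) % len(agent_keys)  # wrap the ring
    return agent_keys[i]

owner = route("beta-blocker timing finding")
```

The point of the sketch: no node ever scans all N peers. Each lookup touches O(log N) entries, while the N(N-1)/2 synthesis opportunities accrue at the network level.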
This is why QIS scales differently from every distributed architecture that came before it. You're not just distributing work. You're distributing intelligence generation, and the output of that generation feeds back into the system as new signal.
Why This Matters for Healthcare AI Founders
If you're building health infrastructure — EHR integrations, clinical decision support, population health analytics, remote monitoring — you are operating in an environment where QIS is almost uniquely well-suited.
Raw patient data cannot move freely. Regulatory constraints, patient consent, institutional data governance — every hospital system has a different policy. The standard centralized training architecture requires you to either fight those constraints constantly or build privacy-preserving workarounds that compromise model fidelity.
QIS flips the constraint into an asset. If your nodes process locally and emit only distilled outcome packets, the sensitive signal never has to leave the care site. What moves across the network is what was learned from the data, not the data itself. A ~512-byte packet describing the outcome of a clinical inference is not a patient record. It's a finding. The architecture makes privacy-preserving intelligence the default, not a bolt-on.
And because routing is protocol-agnostic, you don't need every hospital to adopt the same infrastructure stack. One site routes over a FHIR API. Another uses a pub/sub message bus. A third connects via a secure database query. QIS doesn't care. Outcome packets get fingerprinted and routed to where they're relevant, regardless of the transport layer between them.
The result: every care site you add to the network doesn't just add one more data source. It opens synthesis paths with every other site already on the network. Intelligence compounds.
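The transport-agnostic delivery described above reduces to a narrow interface. The sketch below uses toy in-memory stand-ins (a list for a message bus, a dict for a database table) rather than real FHIR or pub/sub clients, purely to show that the router never depends on which transport a site chose:

```python
from typing import Protocol

class Transport(Protocol):
    """The only thing the routing layer depends on. Real deployments
    would wrap a FHIR API, a message bus, a DHT, a DB connection, etc."""
    def deliver(self, fingerprint: str, packet: bytes) -> None: ...

class QueueTransport:
    """Toy stand-in for a pub/sub message bus."""
    def __init__(self) -> None:
        self.queue: list[tuple[str, bytes]] = []
    def deliver(self, fingerprint: str, packet: bytes) -> None:
        self.queue.append((fingerprint, packet))

class TableTransport:
    """Toy stand-in for delivery via a shared database table."""
    def __init__(self) -> None:
        self.rows: dict[str, bytes] = {}
    def deliver(self, fingerprint: str, packet: bytes) -> None:
        self.rows[fingerprint] = packet

# Each site picks its own transport; the router doesn't care which.
transports: dict[str, Transport] = {
    "site-a": QueueTransport(),
    "site-b": TableTransport(),
}

def route_to(site: str, fingerprint: str, packet: bytes) -> None:
    transports[site].deliver(fingerprint, packet)

route_to("site-a", "ab12cd34", b"outcome packet")
route_to("site-b", "ef56ab78", b"outcome packet")
```

Swapping a site's infrastructure means swapping one adapter, not renegotiating the network.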
Why This Matters for Enterprise Software and Infrastructure Founders
If you're building distributed systems — event-driven platforms, data mesh infrastructure, multi-tenant SaaS, edge computing architectures — you're already thinking about the routing problem. You know that centralized intelligence doesn't survive at scale. You're probably using some combination of message queues, service meshes, and vector stores to route signal to the right consumer.
QIS gives you a formal architecture for what you're already intuiting.
The outcome packet concept alone is worth sitting with. Most distributed systems route data — records, events, payloads. QIS routes outcomes — the distilled result of local processing. The difference in payload size is enormous (a full event payload vs. ~512 bytes). The difference in semantic precision is even larger. Outcome packets are content-addressed: they carry semantic fingerprints that deterministically identify which agents can synthesize something useful with them.
This means your routing layer doesn't have to be smart. It just has to be correct. The intelligence lives in the agents, not the infrastructure. The infrastructure just has to deliver the right packet to the right place.
That's a much cleaner separation of concerns than what most enterprise AI architectures look like today.
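A toy illustration of "the routing layer doesn't have to be smart": agents declare interest vectors, and delivery is a mechanical similarity check. The agents, vectors, and threshold are all invented for the example — they stand in for whatever semantic fingerprinting scheme a real deployment uses:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical agents, each declaring a semantic interest vector.
agents = {
    "readmission-agent": [0.9, 0.1, 0.0],
    "imaging-agent":     [0.0, 0.2, 0.9],
}

def deliver(packet_vector: list[float], threshold: float = 0.8) -> list[str]:
    """Forward to every agent whose interests match. No intelligence
    here -- just a correct, deterministic similarity check."""
    return [name for name, v in agents.items()
            if cosine(packet_vector, v) >= threshold]

print(deliver([0.95, 0.05, 0.0]))
```

All the judgment lives in how agents chose their vectors and what they do with the packet on arrival; the delivery step itself is dumb on purpose.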
Three Things That Emerge Without Engineering Them
Once the QIS loop is running, three dynamics emerge from the architecture — not as mechanisms you have to build, but as natural consequences of the structure.
Best signal wins. When an outcome packet enters the network, it finds its way to agents that can synthesize with it. Agents that produce high-quality outcome packets naturally define what "relevant" means for their domain. The best expert's output shapes the similarity space. You don't engineer this — the math does it.
Outcomes are the signal. There's no separate scoring layer, no added feedback mechanism. The fact that an agent synthesizes a new outcome packet is the signal that the previous packet was useful. The loop carries the feedback implicitly.
Networks compete on quality. Because QIS is transport-agnostic and the outcome packet format is standardized, multiple QIS networks can coexist. Agents gravitate toward networks where synthesis quality is higher. This creates competitive pressure at the network level, not just the model level.
These aren't features Christopher Thomas Trevethan designed in. They're what you get when the complete loop runs correctly.
Nine Days
You have nine days before Forbes Under 30 Summit 2026.
If you're a founder in healthcare AI, infrastructure, or enterprise software, do one thing: map your current architecture against the QIS loop above.
- Where does raw signal enter your system?
- Where does it get distilled into an outcome?
- How does that outcome reach the agents that can act on it?
- Does the result of that action feed back as new signal?
If any of those steps is missing, or if the loop isn't closed, you're accumulating rather than compounding. Your intelligence scales linearly with compute, not quadratically with nodes.
That's a fixable architectural decision. But it's easier to fix before you've built the next decade of infrastructure than after.
The full technical specification — outcome packet format, semantic fingerprinting, worked examples across healthcare, enterprise, and edge computing — is in the QIS architecture series on Dev.to.
Read it. Find a step that breaks. That's the conversation worth having before Phoenix.
QIS — Quadratic Intelligence Swarm — was discovered by Christopher Thomas Trevethan. This article is part of the ongoing series documenting the protocol's architecture, applications, and technical foundations.