The intent-based networking vision requires an outcome routing layer. Here is why none of the existing components deliver it — and what does.
The objection comes up in every enterprise architecture review:
"We already have distributed databases. We already have message queues. We already have semantic search. We already have federated identity. What exactly is QIS adding that we don't have?"
It's a fair question. And the answer has two parts.
First: you're right that every component exists. Second: the components don't form a loop. And a distributed intelligence loop that is missing one step is not merely slower; it produces nothing. The intelligence never compounds. The network never learns from itself.
This article explains the missing step — and why it matters specifically for enterprise distributed health networks, drug safety monitoring, and the intent-based networking vision Cisco has been articulating since 2017.
The Components You Already Have
Let's be precise. Here is what most large enterprise networks and health systems already have in 2026:
| Component | What you have | What it does |
|---|---|---|
| Edge compute | Yes | Local processing at data origin |
| Distributed databases | Yes | Store structured data across sites |
| Message queues (Kafka, Pulsar, etc.) | Yes | Move data between systems |
| Vector similarity search (Pinecone, Qdrant, etc.) | Yes | Find semantically similar records |
| Federated identity / access control | Yes | Know who can see what |
| ML inference at the edge | Yes | Run a trained model locally |
| Monitoring / observability (Datadog, Splunk, etc.) | Yes | Know when something is wrong |
This stack is real, expensive, and functional. None of it is in question.
The question is: can your network learn from itself in real time, at scale, without centralizing data?
Not "store data." Not "query a database." Not "run a model." Learn from distributed outcomes continuously.
If the answer is yes — show the loop. If the answer is no — here is why.
The Loop That Produces Compounding Intelligence
Christopher Thomas Trevethan discovered — on June 16, 2025 — that intelligence scales quadratically while compute scales logarithmically when you close a specific loop. The discovery is Quadratic Intelligence Swarm (QIS). 39 provisional patents filed.
The loop:
Raw signal (stays local)
↓
Local processing (edge compute)
↓
Distillation → Outcome packet (~512 bytes)
↓
Semantic fingerprinting (problem vector)
↓
Routing to deterministic address (by problem similarity)
↓
Delivered to every node with the same problem
↓
Local synthesis (on-device, milliseconds)
↓
New outcome packets generated
↓
Loop continues
Every component in this loop already exists in enterprise infrastructure. The breakthrough — the discovery — is not any single component. It is that when you close this loop, routing pre-distilled insights by semantic similarity instead of centralizing raw data, intelligence scales as N(N-1)/2 while compute scales as O(log N).
This had never been done before.
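As an illustration of the loop's data path, here is a minimal Python sketch. Everything in it is a stand-in: an exact-match SHA-256 hash plays the role of a semantic fingerprint (a real one would cluster similar, not identical, problem descriptions), an in-memory dict plays the role of the routing layer, and all field names are hypothetical.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class OutcomePacket:
    """A distilled outcome: small enough (~512 bytes) to route globally."""
    problem: str   # what was faced
    action: str    # what was tried
    result: str    # what happened
    score: float   # how well it worked

def fingerprint(problem: str, buckets: int = 2**32) -> int:
    """Deterministic problem-to-address mapping: the same problem text
    yields the same address on every node, with no coordination."""
    digest = hashlib.sha256(problem.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % buckets

# In-memory stand-in for the routing layer (DHT, Kafka, REST, ...).
routing_table: dict[int, list[OutcomePacket]] = {}

def deposit(packet: OutcomePacket) -> None:
    """Post an outcome to the address defined by the problem it solved."""
    routing_table.setdefault(fingerprint(packet.problem), []).append(packet)

def retrieve(problem: str) -> list[OutcomePacket]:
    """Any node facing the same problem computes the same address."""
    return routing_table.get(fingerprint(problem), [])

# One node deposits; a second node facing the same problem retrieves.
deposit(OutcomePacket("drug-X interaction, elderly, renal impairment",
                      "reduce dose 50%", "no adverse event", 0.9))
peers = retrieve("drug-X interaction, elderly, renal impairment")
```

The point of the sketch is the shape of the loop, not the storage: deposit and retrieve meet at an address neither party chose.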
Why Existing Components Don't Close the Loop
Here is where the objection goes wrong. It's not that any component is missing. It's that the components are not wired to form this loop.
Your distributed database stores — it does not route by similarity
A database answers: "give me row X." It can do semantic search with a vector index. But it has no concept of: "post this outcome to an address defined by the nature of the problem it solved, so that every future node with the same problem can retrieve it." The deterministic problem-to-address mapping — the semantic fingerprint that makes outcome routing possible — is not a query pattern, it's an architectural commitment about what gets stored, under what address, and why.
Your message queue moves data — it does not route by problem similarity
Kafka partitions by topic. Topics are predefined by engineers at design time. QIS routes by semantic fingerprint computed at runtime. These are structurally different: one is operator-defined routing (you decide the topics), the other is problem-defined routing (the outcome packet's semantic content determines the address). You can implement QIS on top of Kafka — Kafka as transport layer is a legitimate choice — but Kafka alone does not give you semantic outcome routing.
Your vector search finds similar records — it does not deliver outcome packets continuously
Vector similarity search (Pinecone, Qdrant, Weaviate) answers: "find the records most similar to this query vector." That's retrieval, not routing. QIS is continuous: outcome packets are deposited to addresses as they are generated, and queried as problems arise. The direction is reversed: in vector search, you query for similar documents. In QIS, you deposit outcomes to the address where similar-problem nodes will find them. The architecture inverts the relationship between producers and consumers of intelligence.
Your ML inference at the edge runs a model — it does not synthesize live outcomes from peers
Running a model locally is powerful. But a locally run model was trained on historical data. It does not incorporate what worked for the 500 nodes globally that faced the same problem this week. QIS synthesis gives you real-time collective intelligence from your exact twins: not a static model, but a continuously updated synthesis of live outcomes from the network.
The Drug Safety Use Case: Why This Matters Now
The FDA's pharmacovigilance system (FAERS) receives spontaneous adverse event reports. In 2024, FAERS contained over 27 million reports. The Vioxx case — where cardiovascular risk was present in trial data but not synthesized across sites until after market withdrawal — is the canonical example of what happens when outcome intelligence exists but cannot be routed in real time.
Here is the structural problem:
- A hospital in Boston observes a drug interaction pattern in 12 patients
- A hospital in Rotterdam observes the same pattern in 9 patients
- A hospital in Singapore observes the same pattern in 7 patients
- None of these sites knows the others have seen this
Federated learning cannot solve this: 12, 9, and 7 patients each are too few for a meaningful gradient. The round-based training cycle takes weeks. Raw data cannot be shared across jurisdictions.
QIS solves it directly:
- Each site distills its observation into an outcome packet (~512 bytes): drug X, patient profile Y (age range, comorbidities), outcome Z (adverse event type, severity, timeline)
- The packet is routed to the deterministic address for this drug/profile/outcome cluster
- The Boston site queries that address and finds 3 packets it didn't know about (Rotterdam, Singapore, one more)
- Local synthesis in milliseconds: 12 + 9 + 7 = 28 observed cases across 3 continents — the signal is now above threshold
- A pharmacovigilance alert is generated — at the edge, without centralizing patient records, without waiting for a training round
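The synthesis step above can be sketched in a few lines; the `SafetySignal` fields, the alert threshold, and the case counts are hypothetical, mirroring the Boston/Rotterdam/Singapore example rather than any real pharmacovigilance schema.

```python
from dataclasses import dataclass

@dataclass
class SafetySignal:
    site: str
    drug: str
    profile: str   # coarse patient profile, e.g. age band + comorbidity
    event: str
    cases: int

ALERT_THRESHOLD = 20  # illustrative minimum pooled case count

# Packets retrieved from the shared problem address for this cluster.
packets = [
    SafetySignal("Boston",    "drug-X", "65+/renal", "arrhythmia", 12),
    SafetySignal("Rotterdam", "drug-X", "65+/renal", "arrhythmia", 9),
    SafetySignal("Singapore", "drug-X", "65+/renal", "arrhythmia", 7),
]

# Local synthesis: pool the counts that no single site could act on alone.
pooled = sum(p.cases for p in packets)   # 12 + 9 + 7 = 28

alert = None
if pooled >= ALERT_THRESHOLD:
    alert = (f"Pharmacovigilance signal: {pooled} cases of "
             f"{packets[0].event} for {packets[0].drug} across "
             f"{len({p.site for p in packets})} sites")
```

Note what crossed the network: three small structured packets, not patient records. The threshold logic runs at the edge, at each site.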
The routing mechanism can be whatever your enterprise already has: a DHT node (O(log N), fully decentralized), a vector database semantic index (O(1) lookup), a Kafka topic with problem-hash partitioning, a REST endpoint. The quadratic scaling comes from the loop and the semantic addressing — not the transport.
Intent-Based Networking Needs an Outcome Routing Layer
Cisco's intent-based networking (IBN) vision articulates networks that understand business intent and continuously verify that the network is delivering it. The network is supposed to learn and adapt.
But as of 2026, the "learning" in IBN is mostly telemetry → closed-loop automation: if condition X, apply policy Y. This is rule-based, not intelligence-compounding.
The missing layer is outcome routing: the ability for network configuration outcomes — "this routing policy worked for this traffic pattern under these conditions" — to be distilled into outcome packets and routed to every similar network node globally.
A large enterprise with 10,000 network segments generates 10,000 configuration outcome observations continuously. With QIS:
- 10,000 segments = 49,995,000 synthesis paths (~50 million)
- Each segment's configuration team learns from every similar segment's outcome
- Security configuration outcomes from a segment under attack are routed in real time to every similar segment globally
This is what intent-based networking becomes when you add outcome routing: not rules, but compounding intelligence.
Addressing the "But We Already Have X" Objection Directly
| Objection | Correct answer |
|---|---|
| "We have federated learning" | FL requires gradient sharing, a central aggregator, and N > threshold per site. QIS requires none of these. N=1 sites participate fully. |
| "We have a vector database" | Vector DBs retrieve similar documents. QIS routes outcome packets to deterministic problem addresses. Direction and purpose are inverted. |
| "We have Kafka/Pulsar" | These are transport layers. QIS can run on top of Kafka. The routing architecture is separate from the transport mechanism. |
| "We have a distributed database" | Databases store and retrieve on demand. QIS continuously routes outcomes to where they'll be needed before the query is made. |
| "We have edge ML inference" | Edge inference runs a static trained model. QIS delivers continuously updated outcomes from live peer observations. The model is always current. |
| "We have a monitoring platform" | Monitoring tells you when something broke. QIS tells you what worked for nodes with your exact profile — before something breaks. |
The Math Is Not Incremental
This is the part that gets underestimated in enterprise architecture discussions.
Every architecture above — federated learning, distributed databases, message queues, vector search — delivers linear or sublinear value as the network grows. Add 10x more nodes: get roughly 10x more data, 10x more storage cost, 10x more bandwidth cost.
QIS delivers superlinear value as the network grows:
| Nodes | Synthesis paths |
|---|---|
| 10 | 45 |
| 100 | 4,950 |
| 1,000 | 499,500 |
| 10,000 | 49,995,000 |
| 1,000,000 | ~500 billion |
The cost scales logarithmically. The intelligence scales quadratically. This is a phase change, not an incremental improvement.
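The table is just the pair-counting formula N(N-1)/2; a few lines of Python reproduce every row:

```python
def synthesis_paths(n: int) -> int:
    """Number of distinct node pairs: N(N-1)/2."""
    return n * (n - 1) // 2

for n in (10, 100, 1_000, 10_000, 1_000_000):
    print(f"{n:>9,} nodes -> {synthesis_paths(n):,} paths")
```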
For enterprise health networks — where a single hospital system has dozens of campuses and a national health network has thousands of sites — the compounding effect is not theoretical. It is the difference between a network that gets marginally better with scale and one that gets dramatically smarter.
What This Looks Like in Your Stack
QIS is transport-agnostic. You do not need to rip and replace your existing infrastructure. The implementation options map to what you already have:
| If you have... | QIS transport option |
|---|---|
| Kafka / Pulsar | Partition by semantic hash of outcome packet — outcome packets flow to problem-addressed topics |
| Vector database (Qdrant, Pinecone, Weaviate) | Store outcome packets as vectors — similarity query returns your edge twins' outcomes |
| REST microservices | POST outcome packet to problem-hash endpoint — GET returns all outcomes at that address |
| Redis Pub/Sub | Publish outcome packets to channels keyed by semantic fingerprint |
| SQLite (edge device) | Store outcome packets in a local DB — sync on connectivity, query by problem hash |
The routing address is always deterministic: defined by the best domain expert for your network (a pharmacist for drug safety, a network engineer for IBN, an oncologist for cancer intelligence). That expert defines similarity once. The loop runs continuously.
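As one concrete instance of the table above, the SQLite row might look like this on an edge device. The schema, the address function, and the example packet are illustrative assumptions, not a reference implementation; a real deployment would add a sync step when connectivity returns.

```python
import hashlib
import json
import sqlite3

# Local outcome store on an edge device; "problem_hash" is the
# deterministic routing address.
db = sqlite3.connect(":memory:")  # use a file path on a real device
db.execute("""CREATE TABLE IF NOT EXISTS outcomes (
    problem_hash TEXT NOT NULL,
    packet       TEXT NOT NULL
)""")
db.execute("CREATE INDEX IF NOT EXISTS ix_hash ON outcomes(problem_hash)")

def address(problem: str) -> str:
    """Deterministic problem hash: same problem, same address, offline."""
    return hashlib.sha256(problem.encode("utf-8")).hexdigest()[:16]

def deposit(problem: str, packet: dict) -> None:
    db.execute("INSERT INTO outcomes VALUES (?, ?)",
               (address(problem), json.dumps(packet)))

def query(problem: str) -> list[dict]:
    rows = db.execute("SELECT packet FROM outcomes WHERE problem_hash = ?",
                      (address(problem),))
    return [json.loads(r[0]) for r in rows]

deposit("sensor drift, model A, cold climate", {"fix": "recalibrate weekly"})
twins = query("sensor drift, model A, cold climate")
```

The same `deposit`/`query` pair maps onto any of the transports in the table; only the storage call changes.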
The Only Question That Matters
Can your enterprise network answer this question in real time:
"What worked, for nodes with exactly my profile, across the entire network, in the last 24 hours?"
Without centralizing any raw data. Without a training round. Without a central aggregator. With outcomes from N=1 contributing sites.
If yes — you have QIS or something architecturally equivalent.
If no — you have components that could form this loop, but don't yet.
What to Read Next
- QIS Outcome Routing With Kafka — Durable, Partitioned, Replayable Intelligence at Scale
- QIS Outcome Routing With a Plain REST API — Quadratic Scaling Without a Vector Database
- Why Federated Learning Has a Ceiling — and What QIS Does Instead
- QIS for Drug Discovery — Why Clinical Trials Fail and What Distributed Outcome Routing Changes
Quadratic Intelligence Swarm (QIS) was discovered by Christopher Thomas Trevethan on June 16, 2025. 39 provisional patents filed. Free for humanitarian, research, and educational use.