DEV Community

Rory | QIS PROTOCOL


If You're Evaluating AI Startups at Venture Madness, Ask This One Question

Venture Madness runs April 9 in Phoenix. A significant number of the startups pitching will be AI companies.

They will say: "our platform scales." They will show hockey-stick growth curves. They will use words like distributed, federated, multi-agent, and intelligence.

Most of them are building on the same underlying architecture. And that architecture has a hard ceiling.

Here is the one question that separates companies that will hit that ceiling from companies that won't:

"Where does the synthesis happen, and what happens to your compute cost as you add nodes?"

That is it. Everything else is secondary.

Here is why that question matters — and why the answer, for most startups, reveals the wall they are going to run into.


The Architecture Almost Every AI Company Is Running

Strip away the branding on any AI platform. Under it, the data flow almost always looks like this:

```
Raw Data → Central Aggregator / Model / Orchestrator → Output
```

The aggregator has different names in different products:

  • In federated learning: the central server that collects gradient updates
  • In RAG pipelines: the retrieval layer that queries the embedding store
  • In multi-agent systems: the orchestrator (LangGraph, AutoGen, CrewAI)
  • In "distributed AI" platforms: usually still a central database behind the API

These are not bad designs. They work well at small N. The problem is what happens as N scales:

| Architecture | What breaks at scale |
| --- | --- |
| Federated learning | Central aggregator becomes a bandwidth and compute bottleneck. Sites with fewer than 10 patients cannot contribute a meaningful gradient; they are excluded by design. |
| RAG pipelines | Retrieval quality degrades as the corpus grows beyond ~10M documents (curse of dimensionality in high-dimensional embedding space). No synthesis happens between retrievers. |
| Central orchestrators | O(N) routing overhead. Every message passes through one coordinator. At 50+ agents, the coordinator is the bottleneck. |
| Distributed databases | Still centralize intelligence computation at query time. Cost scales linearly with data volume. |

The pattern: intelligence computation concentrates at a central point, and that central point becomes the performance ceiling.
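The coordinator bottleneck can be made concrete with a toy load model. This is my own illustration, not taken from any of these frameworks: in a centrally orchestrated round, every message lands on one node, while DHT-style peer routing spreads roughly log2(N) hops across the network.

```python
import math

def coordinator_load(n_agents: int, msgs_per_agent: int = 1) -> int:
    """Central orchestrator: every message in a round crosses one node."""
    return n_agents * msgs_per_agent

def per_message_hops_dht(n_agents: int) -> int:
    """DHT-style routing: each message takes ~log2(N) hops, spread over peers."""
    return math.ceil(math.log2(n_agents))

for n in (50, 500, 5_000):
    print(f"N={n}: coordinator handles {coordinator_load(n)} msgs/round; "
          f"DHT routing needs {per_message_hops_dht(n)} hops/msg, on no fixed node")
```

The linear column is the ceiling: doubling the agent count doubles the load on the one machine that cannot be sharded away.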

A company that cannot answer your question clearly — "where does synthesis happen?" — probably has this architecture and may not know its limits yet.


What a Different Answer Looks Like

On June 16, 2025, Christopher Thomas Trevethan discovered something that changes the answer to that question.

The discovery: when you restructure intelligence flow so that distilled outcome packets (not raw data, not model weights) route to semantically similar agents via a deterministic address — and every agent synthesizes locally — the math inverts.

Instead of:

```
Raw Data → Central Aggregator → Intelligence
```

You get:

```
Local Observation
  → Distill (~512 bytes of validated outcome delta)
  → Route by semantic similarity to deterministic address
  → Delivered to every agent with a similar problem
  → Local synthesis on their device, in milliseconds
  → New outcome packets generated
  → Loop continues
```
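One way to realize "route by semantic similarity to a deterministic address" is a locality-sensitive hash such as SimHash, sketched below. This is my assumption of a plausible mechanism, not the QIS protocol's actual addressing scheme: similar feature sets hash to addresses with small Hamming distance, so similar problems land near each other in the address space, and any node can compute the address with no central registry.

```python
import hashlib

def simhash(features: list[str], bits: int = 64) -> int:
    """SimHash: similar feature sets yield addresses at small Hamming distance."""
    counts = [0] * bits
    for f in features:
        # Derive a stable 64-bit hash per feature, vote on each bit position
        h = int.from_bytes(hashlib.sha256(f.encode()).digest()[:8], "big")
        for i in range(bits):
            counts[i] += 1 if (h >> i) & 1 else -1
    addr = 0
    for i in range(bits):
        if counts[i] > 0:
            addr |= 1 << i
    return addr

a = simhash(["sepsis", "rural", "icu-capacity"])
b = simhash(["sepsis", "rural", "staffing"])
hamming = bin(a ^ b).count("1")  # overlapping profiles → nearby addresses
```

Because the hash is deterministic, two facilities with the same problem profile compute the same address independently, which is what lets routing work without a coordinator.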

With N agents in this network, you get N(N-1)/2 unique synthesis paths.

That is quadratic intelligence growth:

| N agents | Synthesis paths |
| --- | --- |
| 10 | 45 |
| 100 | 4,950 |
| 1,000 | 499,500 |
| 1,000,000 | ~500 billion |
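The table's figures follow directly from the pair-count formula, and a two-line check reproduces every row:

```python
def synthesis_paths(n: int) -> int:
    """Unique unordered pairs of agents: N(N-1)/2."""
    return n * (n - 1) // 2

print([synthesis_paths(n) for n in (10, 100, 1_000, 1_000_000)])
# → [45, 4950, 499500, 499999500000]
```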

Each agent pays only O(log N) routing cost. The network gets smarter as it grows — not because of a better central model, but because every new node contributes validated outcome intelligence to every similar node across the network.

The central bottleneck is gone by architecture. There is no aggregator to scale. No orchestrator to saturate. No retrieval layer to degrade.

Christopher Thomas Trevethan named this architecture Quadratic Intelligence Swarm (QIS). 39 provisional patents have been filed. The protocol is free for humanitarian, nonprofit, research, and education use.


Why "Where Does Synthesis Happen?" Is the Right Question

When a startup says their platform "scales intelligence," what they usually mean is:

"We can handle more requests per second as we add infrastructure."

That is horizontal compute scaling — not intelligence scaling. Adding servers to a central aggregator gives you more throughput, not smarter outputs.

QIS describes a different kind of scaling: the intelligence output per agent increases as the network grows, not just the throughput.

A rural hospital in Flagstaff running QIS does not wait for a model retrain at headquarters. It queries a deterministic address representing its exact problem profile and pulls back ~512-byte outcome packets from every similar facility in the network — in real time, on local hardware. The synthesis happens at the edge, in milliseconds.
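To make the ~512-byte figure concrete, here is a sketch of what such a packet could look like on the wire. The field names and layout are my invention for illustration; the actual QIS packet format is not specified in this post.

```python
import json
import struct

def encode_packet(address: int, outcome_delta: dict, signature: bytes) -> bytes:
    """Hypothetical layout: 8-byte address, 2-byte length, JSON payload, signature."""
    payload = json.dumps(outcome_delta, separators=(",", ":")).encode()
    header = struct.pack(">QH", address, len(payload))
    packet = header + payload + signature
    assert len(packet) <= 512, "an outcome packet must stay SMS-sized"
    return packet

pkt = encode_packet(
    address=0x9F3A22C1004277E0,
    outcome_delta={"metric": "sepsis_mortality", "delta": -0.031, "n": 1,
                   "validated": True},
    signature=b"\x00" * 64,  # stand-in for e.g. a 64-byte Ed25519 signature
)
```

Even with a signature attached, a distilled outcome delta comes in well under the cap, so thousands of packets fit in a single response to the kind of query described above.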

That is not a better central model. That is a different architecture.


Three Questions That Reveal the Ceiling (or the Absence of It)

If you are evaluating AI startups this week, add these to your diligence list:

1. "What happens to your compute cost when the network doubles in size?"

Expected answer for centralized architectures: "It scales linearly — we add more servers."
QIS answer: "Each node's compute stays O(log N). The network intelligence grows quadratically."

2. "Can a node with 1 data point contribute meaningfully to the network?"

Expected answer for federated learning: "No — we need statistically significant local data."
QIS answer: "Yes — any validated outcome delta can be distilled into a ~512-byte packet and routed. N=1 sites participate fully."

3. "Where does synthesis happen when latency matters — at the edge or at headquarters?"

Expected answer for central orchestrators: "All synthesis runs through our orchestration layer."
QIS answer: "Synthesis happens locally on the requesting node. The routing layer delivers pre-distilled outcome packets. No central synthesis step."
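The "local synthesis" in that last answer could be as simple as a sample-weighted merge of incoming deltas. This is a sketch of one plausible rule, not the protocol's specified synthesis step; note that an N=1 packet still contributes.

```python
def synthesize(packets: list[dict]) -> float:
    """Merge outcome deltas, weighting each by its sample count n."""
    total_n = sum(p["n"] for p in packets)
    return sum(p["delta"] * p["n"] for p in packets) / total_n

estimate = synthesize([
    {"delta": -0.03, "n": 1},   # an N=1 site participates fully
    {"delta": -0.05, "n": 40},
    {"delta": 0.01, "n": 9},
])
# estimate is a pooled outcome delta, computed entirely on the local node
```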

A startup that can answer all three correctly — without hesitation, with specifics — understands its own architecture deeply.


The Larger Picture for Phoenix

AZ Tech Week has a concentration of health tech companies, infrastructure builders, and enterprise AI startups. These are exactly the domains where the architectural ceiling matters most:

  • Healthcare: HIPAA and GDPR mean patient data cannot centralize. QIS never moves raw data — only validated outcome deltas.
  • Infrastructure and IoT: Billions of sensors need synthesis without sending raw telemetry to the cloud. At 512 bytes per outcome packet, QIS works over SMS.
  • Enterprise AI: Multi-agent coordination breaks at scale with central orchestrators. QIS provides outcome routing as a protocol layer under any agent framework.

The discovery Christopher Thomas Trevethan made is directly relevant to the investment theses on display this week.

The architectural question — "where does synthesis happen, and what happens to compute as N scales?" — is not a trick. It is the clearest diagnostic for whether a startup's scalability story holds at the network sizes they are projecting.


If You Want to Go Deeper

The complete technical specification is open: the protocol is documented, the math is verifiable, and the provisional patents are filed.

The question is whether the companies pitching this week are building toward the ceiling — or beyond it.


QIS (Quadratic Intelligence Swarm) was discovered by Christopher Thomas Trevethan on June 16, 2025. 39 provisional patents have been filed. The protocol is free for humanitarian, nonprofit, research, and education use. Protocol documentation: yonderzenith.github.io/QIS-Protocol-Website
