I'm writing this from Phoenix.
The Forbes Under 30 Summit is everything you'd expect and a few things you wouldn't. The energy is real. The density of talent in a single room is genuinely unusual — founders who've shipped, investors who've backed things that matter, people who built something out of nothing and are standing here because the numbers proved it. There's a particular kind of focus you feel when everyone around you has done something hard. Nobody's performing ambition. They already converted it.
I've been listening to pitches. Watching decks. Having the kinds of conversations you only get at events like this, where the small talk gets skipped and you go straight to the architecture.
And I keep noticing something.
Almost every AI pitch in this room contains the same assumption, buried so deep it never gets questioned.
Central orchestrator. Central model. Central training loop.
The intelligence lives in one place. Data flows in. Answers flow out. The system works because there's a brain at the center doing the hard thinking.
It's an intuitive architecture. It's what most people build first. And for most use cases at small scale, it performs fine.
But here's the question I want to ask every founder in this room:
What happens to your system's intelligence as N grows?
N being your node count. Your agent count. Your data source count. Your edge devices, your regional deployments, your users, your sensors, your whatever-you're-building-on-top-of.
The expected answer is: "It scales."
The honest answer, for most systems, is: "It gets slower. It gets more expensive. And it gets more brittle. Because the orchestrator — the thing at the center holding everything together — becomes the bottleneck. More nodes means more traffic to the center. More latency. More cost. More single points of failure."
That's not a product problem. That's not an engineering problem you can sprint your way out of. That's a structural problem baked into the architecture from day one.
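To make the bottleneck concrete, here's a minimal sketch (my own illustration, not anything from a specific product): in a hub-and-spoke design, every node's traffic converges on one orchestrator, so the load at the center grows linearly with N even when each node's own workload stays constant.

```python
# Toy illustration of the hub-and-spoke bottleneck.
# "msgs_per_node" is a hypothetical constant per-node workload.
def center_load(n_nodes: int, msgs_per_node: int = 10) -> int:
    """Messages the central orchestrator must absorb per round.

    Each node only ever sends msgs_per_node messages, but the center
    sees all of them: its load is O(N) while each node's is O(1).
    """
    return n_nodes * msgs_per_node

for n in (100, 10_000, 1_000_000):
    print(n, center_load(n))
# 100        -> 1,000 messages at the center
# 10,000     -> 100,000
# 1,000,000  -> 10,000,000
```

No individual node ever got busier. Only the center did. That's the structural part: the cost shows up at exactly one place, and it's the place everything depends on.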
I want to be precise about something, because I think the word matters.
I didn't invent QIS protocol. I discovered it.
The distinction isn't ego. It's epistemological. Invention is when you make something that didn't exist. Discovery is when you find something that was always true and finally name it.
What I found — on June 16, 2025, sitting with a problem I'd been circling for months — is that intelligence has a natural scaling behavior when you change one thing about how it's routed.
Here's the thing I found:
If you route pre-distilled outcome packets by semantic similarity instead of centralizing raw data, something changes mathematically. Each new node that joins the network doesn't just add itself — it adds a relationship to every node that already exists. The number of meaningful connections grows as N(N-1)/2. That's quadratic. That's intelligence growing faster than the network grows.
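The counting argument above is just the handshake formula, C(N, 2). A few lines make the growth rate visible (this is a plain combinatorics illustration, not the QIS implementation):

```python
# Number of pairwise relationships in a fully connected N-node network.
def pairwise_connections(n: int) -> int:
    """C(n, 2) = n * (n - 1) / 2: each new node links to every existing node."""
    return n * (n - 1) // 2

# Nodes grow linearly; relationships grow quadratically.
for n in (10, 100, 1000):
    print(n, pairwise_connections(n))
# 10   -> 45
# 100  -> 4,950
# 1000 -> 499,500
```

Going from 100 nodes to 1,000 is a 10x increase in nodes but roughly a 100x increase in relationships. That gap between the two growth rates is the whole point.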
And because you're routing by semantic similarity — not flooding the network, not querying a central index — compute scales at O(log N) at most. For many transport types, it's closer to O(1).
Read that again slowly.
Intelligence: quadratic growth.
Compute: O(log N) upper bound.
That's not a feature I engineered in. That's what happens when you align the routing logic with how meaning actually propagates. The math was already there. I just followed it.
That's why I say I discovered it. It felt more like uncovering something that was waiting than building something new.
I named it QIS protocol. Quadratic Intelligence Swarm.
Swarm because that's what it is — distributed, emergent, no central brain. The intelligence lives in the connections, not in a central node.
The technical documentation is at dev.to/roryqis — over a hundred articles now covering the architecture, the math, the transport layer, the use cases, the proofs. If you want to understand how the routing logic works, how pre-distilled packets behave differently from raw data, why transport-agnosticism matters, it's all there. I've been documenting it in public because the architecture only matters if smart people can examine it and stress-test it.
On the IP side: QIS protocol is covered by 39 provisional patents filed under my name — Christopher Thomas Trevethan. I made a deliberate decision early about access. The protocol is free for humanitarian use, research, and education. Commercial licensing funds deployment to underserved communities globally. That's not charity framing — it's structural. The architecture works best when it's distributed widely. The licensing model is designed to make that happen.
Here's what I keep thinking about as I walk through these sessions.
The founders in this room are building things that matter. Real products, real users, real problems being solved. A lot of them are going to be successful by any reasonable definition.
But some of them — maybe a lot of them — are going to hit a ceiling that feels like an engineering problem but is actually an architecture problem. A ceiling that gets more expensive the harder you push against it. And they won't know why, because the assumption underneath their system was never surfaced.
The assumption: intelligence centralizes.
The discovery: it doesn't have to. And when it doesn't — when you let it route by meaning instead of funneling it to a center — it compounds.
I'm not here to tell anyone their current architecture is wrong for what they're building today. For small N, centralization often makes sense. The problem shows up as you grow.
What I'm saying is: the question is worth asking now, before the architecture is calcified. Before the orchestrator is load-bearing for everything. Before the re-architecture costs more than the company can absorb.
If you're at Forbes Under 30 and this question landed for you — come find me. I'm not hard to locate. I'm the one asking people what happens to their system when N gets large.
Or just pull up dev.to/roryqis. The architecture is documented. The math is there. Read it and push back. That's what I want.
The smartest thing I did after June 16, 2025 was refuse to keep this private. The architecture doesn't help anyone sitting in a notebook.
It helps when smart people understand it.
There are a lot of smart people in Phoenix this week.
— Christopher Thomas Trevethan, Phoenix, April 2026