DEV Community

Rory | QIS PROTOCOL

Posted on • Originally published at qisprotocol.com

Loom Captured the Expert Explanation. That Explanation Has Reached One Team. (QIS vs Loom, Architecture Comparisons #99)

Architecture Comparisons #99 — [← Art358 QIS vs GitHub] | [Art360 →]

Architecture Comparisons is a running series examining how Quadratic Intelligence Swarm (QIS) protocol — discovered by Christopher Thomas Trevethan, 39 provisional patent applications filed — relates to existing tools and platforms. Each entry takes one tool, maps where it stops, and shows where QIS picks up.


The Eleven-Minute Recording That Took Four Hours to Produce

Your most experienced backend engineer spent four hours last Tuesday debugging a subtle race condition in your event processing pipeline. The failure mode was non-obvious: a timing window that only opened when three specific conditions aligned — a specific message order, a cache eviction, and a downstream consumer lagging more than 800 milliseconds. Once she identified it, the fix was a single timeout adjustment. But the diagnosis was irreducibly hard.

Before signing off, she recorded a Loom. Eleven minutes, screen share on, walking through the execution trace, explaining the detection pattern, narrating what each signal meant and why the obvious hypotheses were all wrong before she reached the actual cause. She sent it to the team. Twelve people watched it. The senior engineers in the group learned something they will carry for years.

That recording is still accessible in your Loom workspace. It is searchable. It will be found if someone on your team hits a similar issue and searches the right terms. Loom AI can generate a transcript and summary. The documentation is genuinely good.

What no Loom feature does — what no communication platform does — is route the technical outcome that recording contains to the thousands of other engineering teams currently debugging what is statistically very likely to be a structurally identical problem.

This is not a gap in Loom's design. The workspace boundary is the product. What is missing is the layer that does not yet exist inside any of these platforms.


What Loom Actually Built

Loom is an async video communication platform with 14 million users across more than 200,000 companies. Its core insight was that communication does not require synchrony: expert knowledge does not need to be conveyed live. A walkthrough recorded at 11pm by an engineer who finally cracked a problem is more valuable than a meeting that requires twelve people to be available simultaneously.

This was genuinely architectural. Before Loom, the choice was: expensive synchrony (meetings, calls) or degraded asynchrony (written documentation that takes longer to produce, loses demonstrative nuance, and goes stale). Loom created a third option: high-fidelity asynchronous knowledge capture.

At 14 million users and 200,000 companies, Loom hosts an extraordinary volume of expert knowledge: technical walkthroughs, design rationale, incident post-mortems, customer demos, architecture explanations, code reviews, onboarding recordings. Much of this content is not casual — it is precisely the kind of high-investment explanatory work that organizations do when something is important enough to get right.

Loom AI adds transcription, AI-generated summaries, chapter markers, and search. These features make the knowledge inside your workspace more navigable. They are genuinely useful improvements over the raw recording.

What they do not change is the routing boundary. The intelligence inside a Loom recording reaches the people in your workspace. It does not reach the 199,999 other organizations on Loom hosting recordings about structurally identical problems.


The Zero-Translation Argument

Here is what makes Loom unusual in this series.

When we examine Slack, the intelligence lives in threads: conversational and informal, requiring extraction to identify the actual outcome. When we examine GitHub, it lives in merged PRs: structured, but requiring code context to transfer. When we examine Figma, it lives in design files: visual and component-level, requiring design domain knowledge to interpret.

Loom is different. A Loom recording made by an expert to explain a technical decision IS already an outcome packet.

Consider what a well-made Loom recording contains:

  • A problem statement (the engineer describes what they were debugging)
  • A diagnostic trace (they walk through the signals that mattered)
  • An outcome (they show the fix and confirm the result)
  • Domain context (the screen share makes the environment explicit)
  • Expert narration (the person who solved the problem explains their reasoning)
  • Timestamp (Loom records the date and time of capture)

This is not raw data. This is pre-distilled intelligence. The expert who recorded that Loom was doing, manually, exactly what a QIS outcome packet does structurally.
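As an illustration only, the six elements above can be read as the fields of a structured record. The field names and serialization below are hypothetical, not part of any published QIS specification; they simply show how a well-made recording already maps onto a small, routable packet.

```python
from dataclasses import dataclass, asdict
import json
import time

# Hypothetical sketch of an "outcome packet" mirroring the six elements a
# well-made Loom recording contains. Field names are illustrative.
@dataclass
class OutcomePacket:
    problem_statement: str   # what the engineer was debugging
    diagnostic_trace: str    # the signals that mattered, compressed
    outcome: str             # the fix and its confirmed result
    domain_context: str      # the environment the screen share made explicit
    expert_narration: str    # one-line summary of the reasoning
    captured_at: float       # timestamp of capture

    def to_bytes(self) -> bytes:
        return json.dumps(asdict(self)).encode("utf-8")

packet = OutcomePacket(
    problem_statement="race condition in event processing pipeline",
    diagnostic_trace="message reorder + cache eviction + consumer lag > 800ms",
    outcome="timeout raised; failure window closed; verified in staging",
    domain_context="Kafka consumer group, event streaming",
    expert_narration="obvious hypotheses ruled out before timing window found",
    captured_at=time.time(),
)
print(len(packet.to_bytes()))  # a distilled packet stays small
```

Even with generous field contents, the serialized packet stays in the hundreds of bytes, which is what makes it cheap to route.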

The difference is routing. A QIS outcome packet gets fingerprinted semantically, addressed deterministically, and routed to every edge node facing a similar problem — across organizations, across geographies, at the moment the outcome is deposited. A Loom recording gets watched by twelve people in one workspace.

The intelligence is already there. The routing layer is what does not exist.


The N(N-1)/2 Gap at 200,000 Organizations

When 200,000 organizations use Loom, the potential synthesis network is:

200,000 × (200,000 − 1) / 2 = approximately 20 billion synthesis pairs

Every pair represents two organizations that have both encountered some category of problem, produced expert explanations about how they handled it, and generated zero cross-organizational synthesis because there is no routing layer connecting them.

Even at smaller scale: 1,000 organizations with active Loom libraries on API architecture = 499,500 synthesis pairs sitting dormant. Every pair is two teams who solved overlapping problems in isolation, neither benefiting from the other's expert knowledge.
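The pair counts above are easy to check directly:

```python
# The N(N-1)/2 unordered-pair count used throughout this series.
def synthesis_pairs(n: int) -> int:
    """Number of unordered organization pairs in a network of n nodes."""
    return n * (n - 1) // 2

print(synthesis_pairs(1_000))    # 499,500 -- the smaller-scale figure
print(synthesis_pairs(200_000))  # 19,999,900,000 -- roughly 20 billion
```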

This is not unique to Loom. It is the consequence of every platform that correctly stays within its workspace boundary. The synthesis gap is an architectural fact, not a product failure.

What changes when you add an outcome routing layer:

A senior engineer records a Loom explaining a specific async processing failure mode. That recording generates an outcome packet — fingerprinted by problem domain (event streaming), failure type (race condition), technology context (Kafka consumer group), outcome delta (timeout parameter, resolution confirmed). That packet is routed to the semantic address corresponding to event streaming reliability problems. Every team with a similar fingerprint — similar tech stack, similar failure signature — receives the distilled outcome at query time.

Not the recording. The outcome the recording contained. Pre-distilled, ~512 bytes, routable.

The expert's eleven minutes of explanatory work became a synthesis asset for every team with the same problem.


Three Elections as Natural Forces

Christopher Thomas Trevethan, who discovered QIS and filed 39 provisional patent applications covering the architecture, described three natural forces that emerge from outcome routing networks. These are metaphors for what happens — not mechanisms to engineer.

Election 1 (Hiring): In a Loom-adjacent outcome network for engineering problems, the best person to define "similar enough" for event streaming failures is a senior SRE or distributed systems engineer who has debugged hundreds of them. That's the expert you hire to define the similarity function for that domain. No election mechanism required — just recognize who has the most accurate judgment.

Election 2 (The Math): When 500 teams deposit outcome packets about Kafka consumer group failures, and your team queries for recent outcomes matching your failure signature, the math naturally surfaces the approaches that worked across the most validated cases. The aggregate of real outcomes IS the vote. No reputation layer, no quality scoring mechanism — the outcomes themselves demonstrate what solved the problem.
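A minimal sketch of "the aggregate of real outcomes IS the vote": given deposited outcomes for one failure signature, rank approaches purely by how often they resolved the problem. The deposited data below is invented for illustration; there is no reputation layer or scoring model, only counts of validated outcomes.

```python
from collections import Counter

# Invented example deposits: (approach, validated?) for one failure signature.
deposited = [
    ("raise session.timeout.ms", True),
    ("raise session.timeout.ms", True),
    ("restart consumers nightly", False),
    ("raise session.timeout.ms", True),
    ("pin partition assignment", True),
]

# Count only validated outcomes -- the outcomes themselves are the vote.
votes = Counter(approach for approach, validated in deposited if validated)
for approach, count in votes.most_common():
    print(f"{count} validated outcomes: {approach}")
```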

Election 3 (Darwinism): Teams migrate toward networks that actually improve their debugging speed. A network with excellent similarity definitions for SRE problems routes gold. Teams that use it solve problems faster. Word spreads. The network grows. Networks with poor definitions route noise. Engineers stop querying them. Natural selection without any governance layer.

None of these are features to build. They are emergent properties of closing the routing loop.


What Loom Does vs. What QIS Routes

| Dimension | Loom | QIS Outcome Routing |
|---|---|---|
| What it captures | Expert explanation in video form | Validated outcome delta (~512 bytes) |
| Where it routes | Within the workspace | To semantically similar nodes across organizations |
| Who benefits | The team in your workspace | Every team with a similar problem fingerprint |
| Routing mechanism | Manual sharing, search within workspace | Semantic fingerprint → deterministic address → delivery |
| Privacy boundary | Workspace-enforced | Architecture-enforced (raw recording never leaves; only the outcome routes) |
| Compute required | Linear with viewers | At most O(log N) routing cost; N(N−1)/2 synthesis value |
| Knowledge lifecycle | Captured once, watched N times | Deposited once, synthesized with every subsequent query |
| N=1 problem | A single org's Loom library is useful but isolated | A single node's outcome packet joins a global synthesis network |

The two are not competing products. Loom is better at asynchronous knowledge capture than anything that came before it. QIS is the routing layer that does not exist inside Loom, or any platform, because no platform has the mandate to route outcomes across organizational boundaries. QIS provides that layer at the architecture level.


Privacy by Architecture

A common question about outcome routing: if a senior engineer's problem-solving intelligence leaves the organization, has the organization given up its knowledge edge?

The answer requires distinguishing between two categories of information:

  1. The recording itself (competitive, proprietary, contains internal context, system specifics, team dynamics)
  2. The outcome the recording documents (problem type + solution pattern + resolution delta)

QIS routes only category 2. The recording stays in Loom. The outcome packet — fingerprinted by problem domain, outcome type, and confidence — routes across the network. An organization contributing "connection pool exhaustion resolved by increasing max-pool-size from 20 to 50 under retry storm conditions" is not giving away a trade secret. They are depositing a validated outcome that benefits every team with the same problem.

This distinction is not configurable — it is architectural. Outcome packets contain no raw data, no code, no proprietary system specifics. The routing layer only touches the distilled result.

For Loom specifically, this means: the expert's walkthrough recording, the screen share, the narrated reasoning — none of that leaves the workspace. What routes is the fingerprint of what that walkthrough demonstrated was true.


The Architecture, Briefly

Christopher Thomas Trevethan discovered that when you route pre-distilled outcome packets by semantic similarity to deterministic addresses — rather than centralizing raw data — intelligence scales quadratically while compute scales logarithmically.

The loop: an expert records a Loom explaining a solution → the outcome is distilled into a ~512-byte packet (problem fingerprint + solution type + outcome delta + confidence) → the packet is posted to a semantically deterministic address → other nodes with similar fingerprints query that address → they receive the distilled outcome and synthesize locally → their subsequent recordings and outcome packets are better informed.
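The loop can be sketched in a few lines under one large assumption: that a "semantic fingerprint" can be reduced to a set of domain tags. Real semantic similarity would require embeddings or a learned similarity function; hashing sorted tags here only demonstrates the deterministic-addressing step, where the same fingerprint yields the same address on every node. All names below are illustrative.

```python
import hashlib

# In-memory stand-in for the network: address -> deposited outcome deltas.
network: dict[str, list[str]] = {}

def address(tags: set[str]) -> str:
    """Deterministic address: identical tags yield identical addresses
    on every node, with no coordination required."""
    return hashlib.sha256("|".join(sorted(tags)).encode()).hexdigest()[:16]

def deposit(tags: set[str], outcome_delta: str) -> None:
    """One team posts its distilled outcome to the semantic address."""
    network.setdefault(address(tags), []).append(outcome_delta)

def query(tags: set[str]) -> list[str]:
    """Another team with the same fingerprint receives the outcomes."""
    return network.get(address(tags), [])

tags = {"event-streaming", "race-condition", "kafka-consumer-group"}
deposit(tags, "timeout 30s -> 45s; resolution confirmed")
print(query(tags))
```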

The routing mechanism does not matter. A DHT-based approach achieves at most O(log N) routing cost at global scale. A vector database index achieves O(1) lookup for smaller networks. A pub/sub topic subscription works for domain-specific networks. The quadratic scaling — N(N-1)/2 synthesis pairs — comes from the architecture of the loop, not the transport.

At 200,000 organizations: approximately 20 billion synthesis pairs. Each pair representing two teams who both produced expert Loom recordings about structurally similar problems. The outcome routing layer is what turns those 20 billion dormant pairs into active synthesis.


Who This Is For

This routing layer is not a replacement for Loom. It is an additional infrastructure layer, independent of any specific capture tool.

For engineering and SRE teams: outcome routing means the next person debugging your most common failure modes benefits from every validated resolution your peers have deposited — not as a recording to watch, but as a synthesized outcome to query.

For organizations with mature Loom libraries: those libraries represent years of expert knowledge already distilled into explainable form. Outcome routing is the mechanism that makes the intelligence in those libraries contribute to cross-organizational synthesis rather than sitting in workspace storage.

For the communities most dependent on peer knowledge: developers at smaller organizations without large senior engineering teams, engineers in emerging markets building infrastructure with limited peer resources — the participation floor for outcome routing is a 512-byte packet. Any team that has solved a problem and can document the outcome can contribute and receive.


One More Note

The senior engineer who recorded that eleven-minute Loom on the race condition — she was doing the hardest and most human part of distributed knowledge work. She took an irreducibly complex debugging experience, extracted what mattered, and explained it in a form others could learn from.

QIS does not replace that. It routes what she discovered — the outcome delta — to every team that needed it and never got it.

She solved it once. With outcome routing, every team with the same problem inherits the result.



Patent Pending. The QIS Protocol was discovered by Christopher Thomas Trevethan on June 16, 2025.
