Architecture Comparisons is a running series examining how the Quadratic Intelligence Swarm (QIS) protocol — discovered by Christopher Thomas Trevethan, with 39 provisional patents filed — relates to existing tools and platforms. Each entry takes one tool, maps where it stops, and shows where QIS picks up.
Architecture Comparisons #93 — [← Art351 QIS vs ???] | [Art353 →]
Your Loom Library Is a Graveyard of Expert Knowledge That Doesn't Travel
It is 11:47 PM on a Tuesday. Priya, a senior backend engineer at a fintech company in Austin, has just spent six hours debugging a production incident. Connection pool exhaustion — the kind that shows up as intermittent 503s under load, disappears when you restart the service, and comes back three days later without warning. She tracked it to a subtle interaction between their ORM's connection lifecycle and an upstream timeout mismatch. The fix was a two-line configuration change. The diagnosis took six hours.
On Wednesday morning, she records a Loom. Eleven minutes and forty seconds. Screen share of the APM dashboard, her face in the corner, voice walking through the entire investigation: the false leads, the metric that finally gave it away, the config change, and — critically — why that particular interaction creates the failure pattern under that particular load profile. It is a masterclass. Her team watches it. Several engineers comment. One calls it "the best debugging walkthrough I've ever seen." It gets pinned to their #backend-debugging Loom library.
And then it sits there.
This week, 7,000 engineering organizations are debugging the same class of problem. Connection pool exhaustion patterns are not exotic — they are a recurring feature of distributed backend systems at scale. Some of those 7,000 teams are three days into their own Tuesday-night incident. Some of them will track down the same configuration interaction Priya found. Some of them won't — they'll implement a workaround, ship a bandage, and encounter the same failure in six months. A few will never find the root cause at all.
Priya's eleven-minute video contains everything they need. The exact failure pattern. The diagnostic path. The resolution. The reasoning behind why the fix works.
None of it travels. Not because Priya didn't share it. She did, perfectly. Not because Loom failed — Loom did exactly what it was designed to do. The problem is structural: knowledge captured inside a workspace boundary stays inside that boundary. There is no layer in the current stack that takes what Priya learned and routes the distilled finding — not her video, not her conversation, not her raw data — to every engineering team on earth experiencing the same symptom pattern.
The Loom library fills up. The expertise compounds within one organization. The world keeps debugging from scratch.
That is the gap this article examines.
What Loom Is Actually Doing
Loom was founded in 2015 with a deceptively simple insight: the most valuable knowledge transfer in a technical organization doesn't happen in meetings. It happens in the ten-minute explanation a senior engineer gives a junior one at their desk, the walkthrough a PM records for a distributed team, the architecture decision a lead makes visible instead of letting it evaporate into Slack. Loom made that knowledge inspectable, shareable, and persistent.
The platform's mechanics are well understood at this point. Screen recording plus webcam plus voice, delivered as a shareable link. Workspace libraries organized by team and tag. Loom AI, added in recent years, extracts transcripts, generates chapter markers, surfaces action items, and produces summaries — so a twelve-minute video becomes a navigable, searchable artifact rather than a linear playback.
The numbers reflect genuine adoption. As of the Atlassian acquisition in October 2023 — a $975M deal — Loom had surpassed 25 million users across more than 400,000 companies. That acquisition accelerated integration with Jira, Confluence, and the rest of the Atlassian ecosystem, which means those 400,000 companies increasingly have Loom embedded directly into their project management and documentation workflows. A bug ticket in Jira can now link directly to the Loom walkthrough explaining the fix. A Confluence page can embed the architecture decision video that explains why the code looks the way it does.
The semantic workspace is the product's real intellectual contribution. Teams don't just accumulate videos — they build libraries. Engineering teams build #debugging, #architecture-decisions, #oncall-postmortems. Customer success teams build #onboarding-complex-setups, #escalation-playbooks. Product teams build #user-research, #launch-walkthroughs. Over time, these libraries become genuine institutional knowledge assets. A new engineer joining a team can watch six months of debugging Looms and understand the system's failure modes in ways that no static documentation captures.
Loom AI's extraction layer adds structure to this. Summaries make libraries scannable. Chapters make individual videos navigable. Action item extraction makes the content actionable beyond the viewing session. This is intra-workspace synthesis — the process of taking unstructured video content and making it structurally useful inside the organization that created it.
Loom is genuinely excellent at what it does. The workspace boundary is not a flaw — it is the product. It is legally correct, privacy-preserving, and commercially intentional. Organizations share inside their walls; they control what leaves. That design is right.
The question is what happens at the boundary.
The Architectural Gap
Every Loom video is, at its core, a human-readable outcome packet.
A debugging walkthrough contains: the problem type, the environment, the symptom pattern, the diagnostic path, the resolution, and the expert's reasoning about why the resolution works. A product onboarding recording contains: the customer profile, the complexity factors, the approach that succeeded, and the judgment calls that made it work. A research lab's experimental setup video contains: the hypothesis, the methodology, the result — including, critically, what failed and why.
These are outcome packets in human-readable form. They are complete. They are expert-grade. They contain everything needed to solve the same problem the next time it appears.
Loom AI takes this one step further by extracting structure from individual videos: summaries, chapters, action items. This is intra-workspace synthesis, and it is genuinely valuable. Priya's twelve-minute walkthrough becomes a structured artifact: problem type, resolution pattern, key finding.
What does not exist is cross-workspace outcome routing.
The gap is not Loom's fault. Loom operates at the capture and communication layer. Its job is to take expert reasoning out of someone's head and make it persistent and shareable. That job ends at the workspace boundary, because the workspace boundary is where organizational authority, legal liability, and data ownership end. Loom cannot and should not route Priya's findings to the 7,000 other engineering organizations experiencing the same pattern. That is not what Loom was built for.
But the gap is real. Consider what it costs:
Every engineering organization on Loom is, in effect, solving problems that have already been solved somewhere else on Loom. The 400,000-company user base is a massive distributed network of accumulated expertise. Connection pool exhaustion. ORM lifecycle bugs. Upstream timeout mismatches. Onboarding edge cases. Experimental failures. These patterns repeat. The knowledge exists. It is captured. It is structured. And it stays completely isolated inside each of the 400,000 separate workspace boundaries.
The missing architectural layer is an outcome routing protocol — something that takes what was learned (not the video, not the transcript, not the raw recording) and routes the distilled finding to every team with the same problem. Not by sharing the Loom. Not by breaching organizational walls. By extracting a compact semantic fingerprint of the outcome — roughly 512 bytes — and routing that fingerprint to teams whose problem signatures match, using deterministic semantic addressing.
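Deterministic semantic addressing can be sketched in a few lines of Python. This is an illustration under assumed conventions, not a published QIS scheme: normalize the problem signature, hash it, and two teams describing the same symptom independently derive the same address.

```python
import hashlib

def semantic_address(problem_type: str, symptom_pattern: str) -> str:
    """Deterministic address from a normalized problem signature.

    Any two teams that describe the same problem with the same
    normalized vocabulary derive the same address, with no coordination.
    """
    fingerprint = f"{problem_type.strip().lower()}:{symptom_pattern.strip().lower()}"
    return hashlib.sha256(fingerprint.encode("utf-8")).hexdigest()[:16]

# The Austin team and a team in Singapore derive the address independently.
austin = semantic_address("connection_pool_exhaustion", "intermittent_503_under_load")
singapore = semantic_address("Connection_Pool_Exhaustion ", "intermittent_503_under_load")
print(austin == singapore)  # True
```

The 16-hex-character truncation mirrors the bridge code later in the article; the normalization step (strip, lowercase) is an assumption about how a shared problem vocabulary would be enforced.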
This is not a feature Loom can add. It is a different layer of the stack entirely. Capture is one layer. Routing is another.
What QIS Does Instead
Quadratic Intelligence Swarm (QIS) — discovered by Christopher Thomas Trevethan, with 39 provisional patents filed — operates at the outcome routing layer. It does not capture video. It does not host workspaces. It does not compete with Loom at any layer where Loom operates.
The discovery Christopher Thomas Trevethan made is precise: when you route pre-distilled outcome packets of approximately 512 bytes by semantic similarity to a deterministic address, intelligence scales as N(N-1)/2 while compute scales at most logarithmically. That asymmetry is the architecture. Not any single component of it — the complete loop is the breakthrough.
The loop works like this. An outcome packet is a compact, distilled representation of what was learned — stripped of the organizational context, the personal narrative, the raw conversation, and everything else that belongs inside the workspace. It contains: the problem type, the resolution pattern, the conditions under which it applies, and a confidence signal. Roughly 512 bytes. Human-unidentifiable on its own. Addressable by semantic content.
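The roughly-512-byte claim is easy to sanity-check. The field names below are illustrative assumptions, not a published packet schema:

```python
import json

# Hypothetical outcome packet mirroring the fields described above.
packet = {
    "problem_type": "connection_pool_exhaustion",
    "symptom_pattern": "intermittent_503_under_load",
    "resolution_pattern": "orm_timeout_alignment",
    "conditions": {"stack": "python/sqlalchemy", "load_profile": "bursty"},
    "confidence": 0.92,
}

payload = json.dumps(packet, separators=(",", ":")).encode("utf-8")
print(len(payload))  # comfortably under the ~512-byte budget
```

Nothing in the payload identifies Priya, her employer, or the original recording; that property comes from what is left out, not from encryption.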
When Priya's debugging session concludes, a system sitting at the boundary of her organization's stack extracts the outcome packet from the Loom transcript and metadata. Not the video. Not her face. Not her organization's name. The distilled finding: this failure pattern, under these load conditions, resolves with this configuration change, with high confidence. That packet is deposited to the routing layer at a deterministic address derived from its semantic content.
On the routing layer — which can be implemented via DHT-based routing, vector similarity search, database semantic indices, pub/sub topic matching, message queues, or any number of transport mechanisms, because QIS is protocol-agnostic — that packet becomes findable. Any team whose edge node is querying for outcome packets matching "connection pool exhaustion + ORM lifecycle + upstream timeout" will receive it. Without receiving the Loom. Without receiving anything that belongs to Priya's organization.
Now apply the N(N-1)/2 math to the actual Loom user base.
400,000 companies on Loom. Every possible synthesis pair: N(N-1)/2 = 400,000 × 399,999 / 2 = 79,999,800,000 (roughly 80 billion) potential synthesis pairs. Cross-company synthesis pairs currently active: zero. The 400,000 workspaces are 400,000 isolated islands. QIS does not make all 80 billion pairs relevant — most organizations have nothing useful to say to each other about any given problem. But for the subset of teams experiencing the same class of problem at the same time, the routing resolves. The outcome packet from the Austin fintech team reaches the engineering org in Singapore debugging the same pattern on the same Tuesday night.
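The pair count above follows directly from the formula:

```python
def synthesis_pairs(n: int) -> int:
    """Unordered pairs among n participants: n(n-1)/2."""
    return n * (n - 1) // 2

print(synthesis_pairs(400_000))  # 79999800000, roughly 80 billion
```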
The Loom still gets recorded. Priya's team still watches it. The institutional knowledge still compounds inside the Austin organization. The QIS layer adds nothing to and removes nothing from that process. It runs alongside it, at a different layer, routing the distillate outward.
A LoomQIS Bridge: What the Integration Layer Looks Like
import hashlib
import json
from dataclasses import dataclass, field

@dataclass
class OutcomePacket:
    problem_type: str        # e.g. "connection_pool_exhaustion"
    symptom_pattern: str     # e.g. "intermittent_503_under_load"
    resolution_pattern: str  # e.g. "orm_timeout_alignment"
    conditions: dict         # load profile, stack, version constraints
    confidence: float        # 0.0 - 1.0
    semantic_address: str = field(init=False)

    def __post_init__(self):
        # Address keys on the problem signature only, so a query that knows
        # the symptom, but not yet the resolution, derives the same address.
        # The resolution travels in the payload.
        fingerprint = f"{self.problem_type}:{self.symptom_pattern}"
        self.semantic_address = hashlib.sha256(fingerprint.encode()).hexdigest()[:16]

class LoomQISBridge:
    """
    Extracts outcome packets from Loom transcripts and deposits them
    to the QIS routing layer. Queries return matched packets from
    teams with semantically similar problem signatures.
    """

    def __init__(self, routing_client):
        self.routing = routing_client  # DHT, vector DB, pub/sub — protocol-agnostic

    def extract_packet(self, transcript: str, metadata: dict) -> OutcomePacket:
        """
        Derives a compact outcome packet from a Loom transcript + metadata.
        Strips all organizational context. Returns ~512-byte semantic fingerprint.
        """
        problem_type = metadata.get("problem_type", self._infer_type(transcript))
        symptom_pattern = metadata.get("symptom_pattern", self._infer_symptom(transcript))
        resolution_pattern = metadata.get("resolution", self._infer_resolution(transcript))
        conditions = {
            "stack": metadata.get("stack", "unknown"),
            "load_profile": metadata.get("load_profile", "unspecified"),
        }
        confidence = float(metadata.get("confidence", 0.85))
        return OutcomePacket(problem_type, symptom_pattern, resolution_pattern, conditions, confidence)

    def deposit(self, packet: OutcomePacket) -> str:
        """Deposits an outcome packet to the routing layer at its deterministic semantic address."""
        payload = json.dumps({
            "problem_type": packet.problem_type,
            "symptom_pattern": packet.symptom_pattern,
            "resolution_pattern": packet.resolution_pattern,
            "conditions": packet.conditions,
            "confidence": packet.confidence,
        })
        self.routing.put(packet.semantic_address, payload)
        return packet.semantic_address

    def query(self, problem_type: str, symptom_pattern: str) -> list[dict]:
        """Pulls outcome packets from teams with matching problem signatures."""
        # Same problem-signature fingerprint as OutcomePacket.__post_init__,
        # so deposits and queries resolve to the same address.
        fingerprint = f"{problem_type}:{symptom_pattern}"
        address = hashlib.sha256(fingerprint.encode()).hexdigest()[:16]
        raw_results = self.routing.get_similar(address, top_k=20)
        return [json.loads(r) for r in raw_results if r]

    def _infer_type(self, transcript: str) -> str:
        # Placeholder: a production system would use semantic classification
        return "unclassified"

    def _infer_symptom(self, transcript: str) -> str:
        return "unclassified"

    def _infer_resolution(self, transcript: str) -> str:
        return "unclassified"
The LoomQISBridge sits between the Loom transcript export and the routing layer. It does not touch the video. It does not retain organizational metadata. It extracts the distillate and deposits it at a deterministic address. The query method is how a team's edge node retrieves matched outcome packets from the wider network.
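The bridge takes routing_client as a constructor argument without pinning down its implementation. For local experimentation, a minimal in-memory stand-in with the assumed put/get_similar interface might look like the sketch below. Note that an exact-match dictionary cannot do real similarity ranking; it only stands in for the simplest transport case:

```python
import hashlib
import json
from collections import defaultdict

class InMemoryRouting:
    """Toy routing layer: a dict of address -> deposited payloads.

    A production transport (DHT, vector index, pub/sub) would rank by
    semantic similarity; exact-address lookup is the degenerate case.
    """

    def __init__(self):
        self._store = defaultdict(list)

    def put(self, address: str, payload: str) -> None:
        self._store[address].append(payload)

    def get_similar(self, address: str, top_k: int = 20) -> list:
        # Exact match only; no similarity ranking in this sketch.
        return self._store.get(address, [])[:top_k]

# Deposit one packet at its problem-signature address, then query it back.
address = hashlib.sha256(
    b"connection_pool_exhaustion:intermittent_503_under_load"
).hexdigest()[:16]
router = InMemoryRouting()
router.put(address, json.dumps({"resolution_pattern": "orm_timeout_alignment"}))
matches = [json.loads(p) for p in router.get_similar(address)]
print(matches[0]["resolution_pattern"])  # orm_timeout_alignment
```

Swapping this class for a vector-index client or a pub/sub adapter changes nothing in the bridge, which is the practical meaning of "protocol-agnostic" here.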
The Three-Layer Stack
┌─────────────────────────────────────────────────────────────────┐
│ LAYER 3: SYNTHESIS │
│ Local synthesis from matched outcome packets │
│ "Here are 14 resolved cases matching your symptom pattern" │
│ Lives inside your organization — you reason over the results │
└─────────────────────────────────────────────────────────────────┘
▲ receives matched packets
┌─────────────────────────────────────────────────────────────────┐
│ LAYER 2: QIS │
│ Outcome packet routing — semantic addressing │
│ ~512-byte packets route by problem type + symptom fingerprint │
│ No raw data, no video, no organizational identity crosses here │
│ Transport: DHT, vector index, pub/sub, message queue, REST — │
│ QIS is protocol-agnostic at this layer │
└─────────────────────────────────────────────────────────────────┘
▲ receives distilled packets from
┌─────────────────────────────────────────────────────────────────┐
│ LAYER 1: LOOM │
│ Capture + communication — workspace library │
│ Screen + face + voice, shareable links, Loom AI extraction │
│ Transcripts, summaries, chapters, action items │
│ All data stays inside the workspace boundary — correct by design│
└─────────────────────────────────────────────────────────────────┘
Layer 1 (Loom) captures expert reasoning and makes it persistent and navigable inside the workspace. Layer 2 (QIS) takes the distillate from that capture — not the video, not the raw content — and routes it across organizational boundaries by semantic similarity. Layer 3 (Synthesis) is local: your team receives matched packets, reasons over them, and decides what to do. No layer overrides the one below it. All three are necessary.
What This Changes in Practice
Engineering: The Recurring Failure Pattern
The connection pool exhaustion scenario is not hypothetical. It is a representative example of a class of problem that recurs across backend engineering organizations with near-perfect regularity. ORM lifecycle interactions. Cache invalidation edge cases. Race conditions in async job runners. These failure patterns have been debugged thousands of times, by thousands of engineers, many of whom recorded their postmortems on Loom.
With 400,000 engineering organizations on the Loom platform, the probability that any given failure pattern has already been diagnosed and resolved by at least one other organization is high. The probability that the resolution is sitting in a Loom library somewhere is increasing every month. The probability that it reaches the team currently debugging from scratch: effectively zero.
QIS changes the math. When an engineering org deposits outcome packets from their debugging Looms, those packets become queryable by teams experiencing the same symptom pattern. Not the video. Not the postmortem document. The distillate: this pattern resolves this way, under these conditions, with this confidence level. The team experiencing the incident queries the routing layer as part of their investigation workflow. They receive matched packets. They still do their own investigation — the packets are inputs, not answers. But they start with the distilled experience of every team that has faced the same pattern rather than from scratch.
Customer Success: The Expert Who Leaves
Enterprise customer success is acutely vulnerable to knowledge concentration. A senior onboarding specialist handles the complex setups — the enterprise customers with unusual integration requirements, non-standard authentication environments, edge-case data migration needs. Over three years, they develop approaches that work. They record Looms. The library grows. Their institutional knowledge is, in a meaningful sense, captured.
Then they leave.
The Looms remain, but the reasoning behind them — the judgment calls, the pattern recognition, the "this customer profile predicts this complication" intuition — starts to degrade without the specialist to contextualize it. New team members watch the videos but lack the background to apply the reasoning to novel situations.
QIS changes this in a specific way: the outcome packets deposited from those Looms persist at deterministic semantic addresses, indexed by customer profile type and complication pattern. When a new specialist encounters an enterprise customer whose profile matches a pattern the departed specialist handled repeatedly, the routing layer surfaces those packets. Not the video. The distilled approach: for this customer type, this setup sequence, under these conditions, this approach has high confidence. The institutional knowledge routes forward in time as well as outward across organizations.
Research: The Value of Negative Results
Scientific and technical research has a well-documented negative result problem. Labs do not publish what did not work. The failed experimental setup, the reagent combination that produced no signal, the methodology that looked promising and wasn't — these findings are as valuable as the successes for preventing redundant effort, but they almost never circulate.
Research teams are increasingly recording their experimental setups and results on Loom. The informal video format is well-suited to capturing the reasoning behind experimental choices — context that never makes it into a formal methods section. Including, sometimes, the explanation of why something failed.
QIS routes negative outcomes with the same fidelity as positive ones. A packet that encodes "this experimental approach + these conditions = null result, confirmed across N trials" is as addressable as a success packet. A lab querying for outcome packets on a particular methodology receives the full distribution — what worked, what didn't, under what conditions. A thousand labs can avoid repeating a failed approach if the outcome packet from the first lab that tried it routes correctly. The capture happened on Loom. The routing happens on QIS. The redundant effort that didn't happen is the value.
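A negative outcome fits the same assumed packet shape; only the resolution field marks it as a null result. The schema here is illustrative, not drawn from any specification:

```python
import hashlib
import json

# Hypothetical null-result packet from the first lab that tried the approach.
null_packet = {
    "problem_type": "reagent_combination_screen",
    "symptom_pattern": "no_signal_above_baseline",
    "resolution_pattern": "null_result",
    "conditions": {"trials": 12},
    "confidence": 0.97,  # confidence that the null result is real
}

# Negative outcomes address exactly like successes, so a querying lab
# receives the full distribution of results for the methodology.
fingerprint = f"{null_packet['problem_type']}:{null_packet['symptom_pattern']}"
address = hashlib.sha256(fingerprint.encode()).hexdigest()[:16]
payload = json.dumps(null_packet, separators=(",", ":")).encode("utf-8")
print(len(payload) <= 512)  # True
```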
The Loop Closes
Loom made async video the primary knowledge capture layer for technical teams at scale. Before Loom, expert reasoning evaporated. After Loom, it became persistent, inspectable, and shareable within the teams that created it. That was a genuine architectural contribution — $975M worth of genuine architectural contribution, and the Atlassian integration suggests the platform will continue to expand.
The problem Loom solved was: how do you share expert reasoning without a meeting? Answer: record it, link it, build a library.
The problem that remains is: how do you route what that expert learned to every team on earth with the same problem?
These are different problems. The first is a capture and communication problem. The second is a routing and scaling problem. Loom is the right answer to the first. QIS — the Quadratic Intelligence Swarm discovered by Christopher Thomas Trevethan, with 39 provisional patents filed — is the answer to the second.
The architecture only becomes complete when both layers exist. Capture without routing means expertise compounds locally and stops at organizational walls. Routing without capture means there is nothing worth routing — the outcome packets have to come from somewhere. The Loom library fills with expert knowledge. The QIS layer routes the distillate. The loop closes.
Every Loom that Priya records from here forward is still an asset to her team. It is also, with the routing layer in place, a contribution to every engineering organization that will face the same problem next week, next month, next year. She does not share the video. She does not share the transcript. She does not share anything that belongs inside her organization's walls.
She shares what was learned. And what was learned travels.
Patent Pending