DEV Community

Rory | QIS PROTOCOL

Posted on • Originally published at qisprotocol.com

QIS vs Confluence: Your Wiki Captures What Your Organization Learned. The Architecture Stops It From Telling Anyone Else.

Architecture Comparisons #89 — [← Art346 QIS vs Miro] | [Art348 →]

Architecture Comparisons is a running series examining how Quadratic Intelligence Swarm (QIS) protocol — discovered by Christopher Thomas Trevethan, 39 provisional patents filed — relates to existing tools and platforms. Each entry takes one tool, maps where it stops, and shows where QIS picks up.


The Post-Mortem That Already Ran

Six months ago, your platform engineering team shipped a database migration that caused a 14-hour partial outage. The RCA was brutal, thorough, and well-documented. Your team wrote a 3,000-word post-mortem in Confluence: what failed, why it failed, what the rollback procedure should have been, how the migration should have been staged, what monitoring gaps let it get to production. It is, genuinely, excellent institutional knowledge.

It lives in a Confluence space called "Engineering Post-Mortems." Access: your company. Visibility: engineers who know to look there. Cross-organizational reach: zero.

Right now, a platform engineering team at a different company — same stack, same migration pattern, same monitoring gaps — is three days into their incident response. The intelligence that could have prevented this exists. It's in your wiki. It has nowhere to go.

This is not a Confluence problem. Confluence gave your team the best tool available for capturing and organizing institutional knowledge. The problem is architectural: knowledge management systems have organizational boundaries by design. The intelligence that accumulates inside them never routes outside them.


What Confluence Does (And Does Extremely Well)

Confluence is Atlassian's collaborative wiki and knowledge management platform. It is used by more than 75,000 organizations worldwide, making it the dominant enterprise knowledge base for software teams. Its page hierarchies, template system, macro integrations with Jira, and inline commenting make it the production environment of record for institutional knowledge at most technology companies.

Recent Confluence AI features extend that environment with in-wiki intelligence: automatic page summaries, smart search, content suggestions, and meeting note extraction. It is genuinely useful — and appropriately scoped to what Confluence is.

What Confluence is: a collaborative workspace for capturing, organizing, and retrieving institutional knowledge inside an organization.

What Confluence AI is: an in-wiki assistant that makes individual contributors and teams more productive within their existing knowledge base.

What neither is: an architecture for routing outcome intelligence across organizational boundaries.

That distinction is not a criticism. Confluence was not designed to be a distributed intelligence network. The gap it leaves is an architectural one, and it is worth understanding precisely.


The Bottleneck: Knowledge Doesn't Cross Boundaries

Here is what Confluence enables, and what it structurally cannot:

What Confluence enables:

  • Searchable institutional memory inside your organization
  • Standardized runbooks, decision records, and post-mortems
  • Confluence AI surfacing relevant pages inside your wiki
  • Version history and collaborative editing of knowledge artifacts

What Confluence cannot enable:

  • Your post-mortem intelligence reaching a team at another organization
  • A migration runbook informed by 10,000 similar incidents across the industry
  • Cross-organizational synthesis of which architectural decisions produced which measured outcomes
  • Any of the above without centralizing proprietary internal documents

The constraint is structural. Confluence is a knowledge management system, not a protocol. Knowledge management systems have organizational boundaries by design. The intelligence that accumulates inside them stays inside them — which is correct for confidentiality and competitive reasons, but creates a compounding problem at scale.

Every engineering team that documents a database migration failure writes their post-mortem independently. Every team that figures out the right staging pattern for schema changes captures it in their own wiki. Every team that discovers which monitoring alert latency thresholds prevent false positives records that in their runbook. The collective intelligence exists — distributed across 75,000 organizations, millions of incidents, billions of engineering hours. It never synthesizes. It never routes. It re-runs.


The Numbers That Illustrate the Gap

Atlassian reports more than 75,000 organizations using Confluence. Let's think about this as an intelligence network.

Under Confluence's current architecture, those organizations capture institutional knowledge intensively within their own boundaries. Cross-organizational synthesis of the outcomes they measure: zero.

Under QIS protocol:

  • N = 75,000 organizations organized into problem-similar clusters
  • Each cluster generates N(N-1)/2 synthesis pairs
  • A cluster of 1,000 organizations running similar database migration patterns → 499,500 unique synthesis opportunities
  • Each synthesis opportunity: pre-distilled outcome packets (~512 bytes) from real measured incidents routing to semantically similar problems

The quadratic relationship is the key: as the number of participants grows, intelligence compounds at N(N-1)/2 while compute scales at O(log N) or better. This is what Christopher Thomas Trevethan discovered on June 16, 2025 — not a better search engine or a more connected wiki, but the architecture that enables this scaling relationship to hold. The 39 provisional patents cover that architecture.
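The scaling claim is easy to make concrete. A short sketch, using the article's own cluster sizes, shows how pairwise synthesis opportunities grow quadratically while a per-node routing lookup (e.g. in a DHT) grows only logarithmically:

```python
import math

def synthesis_pairs(n: int) -> int:
    """Unique pairwise synthesis opportunities among n participants: n(n-1)/2."""
    return n * (n - 1) // 2

for n in (1_000, 5_000, 10_000, 75_000):
    # Intelligence compounds quadratically; a DHT-style lookup touches
    # only about log2(n) nodes, so per-node compute stays near-flat.
    print(f"n={n:>6}: pairs={synthesis_pairs(n):>13,}  lookup_hops≈{math.log2(n):.1f}")
```

For a cluster of 1,000 organizations this prints 499,500 pairs against roughly 10 lookup hops, which is the gap between the two scaling regimes the article describes.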


What a QIS Outcome Packet Looks Like for Organizational Knowledge

The core QIS unit is the outcome packet — roughly 512 bytes of pre-distilled, semantically tagged insight. Raw documents never move. Proprietary institutional knowledge stays local. Only distilled, measured outcomes route.

For engineering knowledge intelligence, a packet might encode:

```json
{
  "semantic_address": "db_migration::postgres::schema_change::large_table",
  "context": {
    "table_size_gb": 847,
    "migration_approach": "online_schema_change",
    "replication_lag_before": "2ms",
    "deployment_pattern": "blue_green"
  },
  "outcome": {
    "result": "partial_outage",
    "duration_hours": 14,
    "root_cause_category": "lock_escalation_on_replica",
    "prevention": "staged_batch_size_reduction",
    "validated_fix_latency_ms": 3
  },
  "timestamp": "2026-Q1",
  "emitter": "edge_node_hashed"
}
```

No organization name. No proprietary architecture diagrams. No internal runbook text. Just the distilled signal: for this class of migration, this approach produced this failure mode, and this intervention resolved it at this measured latency.
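The ~512-byte budget can be sanity-checked directly. The sketch below rebuilds the example packet (the schema is illustrative, not a published spec) and confirms that its compact wire form fits:

```python
import json

# The example packet from above (illustrative schema, not a published spec).
packet = {
    "semantic_address": "db_migration::postgres::schema_change::large_table",
    "context": {
        "table_size_gb": 847,
        "migration_approach": "online_schema_change",
        "replication_lag_before": "2ms",
        "deployment_pattern": "blue_green",
    },
    "outcome": {
        "result": "partial_outage",
        "duration_hours": 14,
        "root_cause_category": "lock_escalation_on_replica",
        "prevention": "staged_batch_size_reduction",
        "validated_fix_latency_ms": 3,
    },
    "timestamp": "2026-Q1",
    "emitter": "edge_node_hashed",
}

# Compact separators strip whitespace; the wire form stays inside the budget.
wire = json.dumps(packet, separators=(",", ":")).encode("utf-8")
assert len(wire) <= 512, f"packet is {len(wire)} bytes"
```

Serialized without whitespace, this particular packet comes in well under the 512-byte ceiling, leaving headroom for additional context fields.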

That packet posts to a deterministic semantic address — an address defined by the problem class, not by an organization. Any engineering team querying the same class of migration problem pulls that packet. The routing mechanism could be DHT-based (decentralized, O(log N) lookup), a vector similarity index (approximate, near-constant-time lookup), a pub/sub topic, or any other method that maps problems to addresses efficiently. The architecture is transport-agnostic. The outcome routing works regardless.
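The deterministic-address property itself is a one-liner. A minimal sketch, assuming a plain hash of the problem-class string (the hash choice and key format here are illustrative, not the patented mechanism):

```python
import hashlib

def semantic_key(problem_class: str) -> str:
    """Map a problem-class string to a deterministic 160-bit routing key.

    Any node hashing the same class string derives the same address,
    so emitters and queriers rendezvous without any coordination.
    """
    return hashlib.sha1(problem_class.encode("utf-8")).hexdigest()

addr_a = semantic_key("db_migration::postgres::schema_change::large_table")
addr_b = semantic_key("db_migration::postgres::schema_change::large_table")
assert addr_a == addr_b  # deterministic: same problem class, same address
assert addr_a != semantic_key("k8s::upgrade::control_plane")  # classes diverge
```

Because the address is a pure function of the problem class, two organizations that have never communicated still deposit and retrieve packets at the same location.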

When a platform team begins a large Postgres migration, their local node queries the address for that problem class. It receives outcome packets from every organization that has deposited insight at that address. Local synthesis happens on their own infrastructure. No centralization. No raw data transfer. The collective intelligence surfaces — what failed, what worked, what monitoring gap matters, from every organization that has done this.


The Three Natural Forces That Govern This (As Metaphors)

When Christopher Thomas Trevethan describes QIS, he includes three observations about how intelligence naturally organizes in this architecture. These are metaphors for emergent behavior, not protocol features anyone builds:

The Hiring Metaphor: Someone needs to define what makes two engineering incidents "similar enough" to share outcome packets. In a platform engineering intelligence network, that is a principal engineer or SRE leader who understands the semantic structure of infrastructure failure modes — not a product manager, not a data scientist alone. You find the best expert in that domain to define similarity. That is the entire hiring decision.

The Math Metaphor: The outcomes themselves are the votes. When 10,000 organizations deposit outcome packets for large-table schema migrations, and 8,700 of them show lock escalation as the primary failure mode under a particular configuration, the math surfaces that. No reputation scoring layer. No quality weighting mechanism added on top. The aggregate of real, measured outcomes from organizations facing your exact problem IS the election. The base protocol does not need an added layer — the math does the work.

The Darwinism Metaphor: Engineering intelligence networks that route accurate, well-scoped outcome packets will attract more organizations. Networks with poorly defined similarity functions will route irrelevant packets. Teams will migrate to where the intelligence is useful. This is natural selection at the network level. No one votes on which network is best. Organizations go where the results prevent incidents.

These are observations about what emerges from the architecture — not features to configure.
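The Math Metaphor in particular reduces to frequency counting. A sketch using the 10,000/8,700 split from the example above (the minority failure-mode names are invented for illustration):

```python
from collections import Counter

# Hypothetical root-cause fields pulled from packets at one semantic address:
# 8,700 of 10,000 report lock escalation, mirroring the example split.
root_causes = (
    ["lock_escalation_on_replica"] * 8_700
    + ["connection_pool_exhaustion"] * 800  # invented minority causes
    + ["disk_saturation"] * 500
)

tally = Counter(root_causes)
dominant, count = tally.most_common(1)[0]
# No reputation layer, no quality weighting: the aggregate of measured
# outcomes is itself the signal.
print(dominant, f"{count / len(root_causes):.0%}")  # lock_escalation_on_replica 87%
```

The "election" is nothing more than this tally; no added scoring layer changes what the counts already say.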


Confluence AI vs QIS: The Right Framing

It is worth being precise here: Confluence AI and QIS are not competing on the same problem.

| Dimension | Confluence AI | QIS Protocol |
| --- | --- | --- |
| Scope | Inside your organization's wiki | Across all organizations with shared problem types |
| Data model | Your pages, your runbooks, your decisions | Distilled outcome packets from any source |
| Intelligence type | In-context assistance, summarization, search | Collective synthesis of measured outcomes |
| Boundary | Organizational | Problem-semantic |
| Raw data moves? | Within your wiki | Never — only distilled outcomes |
| Who builds it | Atlassian (product feature) | Open protocol (any implementation) |

The more accurate frame: Confluence captures what your organization learned. QIS routes what those learnings measured.

An engineering team using both would use Confluence to document their work — write runbooks, record decisions, publish post-mortems — and QIS protocol to emit the measured outcomes of those decisions as packets and receive packets from every organization that has measured similar outcomes. These are not in tension. They address different layers of the knowledge stack.


Where This Already Matters

Consider three domains where the compounding intelligence gap is measurable today:

Incident response. The most expensive production incidents are the ones that have already happened somewhere else. A QIS network running across platform engineering organizations would mean that the day your migration pattern is recognized as high-risk, outcome packets from previous similar incidents are already in your synthesis context — before you start. The institutional knowledge your organization needs exists. The architecture to route it to you does not.

Security runbooks. Every security team independently documents their incident response procedures for similar threat classes — ransomware containment, credential stuffing mitigation, supply chain compromise response. The measured effectiveness of specific procedural choices exists across the industry. It stays in individual wikis.

Architecture decision records (ADRs). Software architecture decisions — which message queue, which consistency model, which API gateway pattern — are being made independently by thousands of teams against the same set of tradeoffs. The measured outcomes of those decisions accumulate silently in Confluence spaces. ADRs are designed to record the reasoning behind a decision; QIS would route the measured outcome of that reasoning to every team facing the same choice.

In each case, the intelligence exists. The architecture to surface it across organizational boundaries does not.


The Architecture That Changes This

The breakthrough Christopher Thomas Trevethan discovered is not a new component — it is a new loop.

The loop: An organizational outcome (incident, architecture decision, operational change) is distilled into a ~512-byte packet. The packet receives a semantic fingerprint based on the problem class. The fingerprint maps to a deterministic address. The packet routes to that address. Any organization querying the same problem class pulls the packet. Local synthesis happens on their own infrastructure. The result: real-time intelligence from every organization that has faced your problem class, without centralizing any of their proprietary documentation.

Close that loop — with any efficient routing mechanism (DHT, vector index, REST API, pub/sub — the architecture is transport-agnostic) — and intelligence scales at N(N-1)/2 while compute scales at O(log N) or better. This is not an incremental improvement to search or summarization. This is a different scaling regime entirely.
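The loop reads naturally as code. In the sketch below an in-memory dict stands in for whatever transport carries packets (DHT, pub/sub, or otherwise); every function and field name is illustrative, not part of a published spec:

```python
import hashlib
from collections import defaultdict

network = defaultdict(list)  # toy transport: address -> deposited packets

def address(problem_class: str) -> str:
    # Deterministic fingerprint of the problem class (hash choice is illustrative).
    return hashlib.sha256(problem_class.encode("utf-8")).hexdigest()[:16]

def emit(problem_class: str, outcome: dict) -> None:
    # Distill locally, route only the distilled outcome; raw docs never move.
    network[address(problem_class)].append(outcome)

def query(problem_class: str) -> list:
    # Any organization querying the same class pulls every deposited packet.
    return network[address(problem_class)]

# Two unrelated organizations deposit outcomes for the same migration class:
emit("db_migration::postgres::large_table",
     {"result": "partial_outage", "prevention": "staged_batch_size_reduction"})
emit("db_migration::postgres::large_table",
     {"result": "success", "prevention": "staged_batch_size_reduction"})

# A third organization, before starting its own migration, synthesizes locally:
packets = query("db_migration::postgres::large_table")
preventions = {p["prevention"] for p in packets}
print(len(packets), preventions)
```

The third organization never sees the other two organizations' wikis, only the distilled outcomes deposited at the shared address, which is the entire point of the loop.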

75,000 Confluence organizations. Organized by problem class. Emitting and receiving outcome packets. The math:

  • 1,000 organizations on large-table Postgres migrations → 499,500 synthesis pairs
  • 5,000 organizations on Kubernetes upgrade patterns → ~12.5 million synthesis pairs
  • 10,000 organizations across shared infrastructure problem classes → ~50 million synthesis pairs

All of it synthesized locally. None of it centralized. Each engineering team pulls only what is relevant to their exact problem class.


The Humanitarian Angle That Extends Beyond Enterprise

Confluence has made professional knowledge management accessible to organizations of many sizes. A 15-person software startup can maintain the same quality of institutional documentation as a Fortune 500 engineering org. QIS extends that logic to the intelligence produced by that documentation.

A five-person engineering team at a public health NGO in Nairobi does not have the organizational scale to have faced every infrastructure failure mode that a large enterprise has weathered. But if a QIS network has been running across platform engineering organizations for two years and thousands of teams have deposited outcome packets for common failure patterns, that NGO queries the same addresses and receives the same distilled intelligence. The math works for N=5 organizations the same as it does for N=5,000.

The humanitarian licensing structure Christopher Thomas Trevethan established — free for nonprofit, research, and education use; commercial licenses fund deployment to underserved contexts — means the intelligence reaches every organization that needs it, not just those with the scale to have already learned from their own failures.


What Comes Next

Confluence will continue to be where organizational knowledge lives. Confluence AI will get better at summarization, search, and in-wiki assistance. These are genuinely valuable improvements that reduce friction inside knowledge management systems.

The open question is whether a distributed outcome routing layer — one that emits measured intelligence from organizational decisions as packets and routes it by problem class across organizational boundaries — gets built as an open protocol.

The architecture for it exists. The 39 provisional patents Christopher Thomas Trevethan filed cover the complete loop that makes it work. The routing layer is transport-agnostic — it works with DHT, with vector indices, with pub/sub, with any mechanism that maps problem classes to addresses efficiently.

The organizations that start structuring their post-mortems, ADRs, and runbooks with outcome-oriented metadata now — even informally, even as an internal discipline — are the ones that will have the richest local synthesis context when the network matures.
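What "outcome-oriented metadata" could look like as an internal discipline today is simple to sketch. The field names below are illustrative conventions, not a QIS schema:

```python
# A post-mortem's front-matter, structured so outcomes are machine-distillable
# later. Field names are illustrative conventions, not a published QIS schema.
post_mortem_meta = {
    "problem_class": "db_migration::postgres::schema_change::large_table",
    "outcome": "partial_outage",
    "measured_impact": {"duration_hours": 14},
    "intervention": "staged_batch_size_reduction",
    "intervention_validated": True,
}

# A minimal lint: every post-mortem should carry the fields a future
# distillation step would need to turn it into an outcome packet.
REQUIRED = {"problem_class", "outcome", "intervention"}
missing = REQUIRED - post_mortem_meta.keys()
assert not missing, f"post-mortem metadata missing: {missing}"
```

Even enforced only by convention, metadata like this means the hard distillation work is already done by the time any routing layer exists.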

The institutional knowledge your organization generates is more valuable than it knows. Right now, it stays in your wiki.


QIS (Quadratic Intelligence Swarm) is a distributed intelligence protocol discovered by Christopher Thomas Trevethan. 39 provisional patents filed. The architecture enables real-time quadratic intelligence scaling — N(N-1)/2 synthesis opportunities — at logarithmic compute cost. Outcome packets are ~512 bytes. Raw data never moves. The routing layer is protocol-agnostic. Free for humanitarian, research, and education use.

Patent Pending

