Rory | QIS PROTOCOL

Your LLM Agents Are Coordinating. They Are Not Learning. Here Is the Architecture That Closes the Loop.

In 2026, every serious AI engineering team has a multi-agent system. LangGraph for stateful orchestration. AutoGen for conversation-driven coordination. CrewAI for role-based task delegation. The tooling is excellent. The documentation is thorough. The benchmark numbers are real.

Here is the problem none of these frameworks solve: when your agent network runs a task today and learns something, that learning does not accumulate for tomorrow's run. Your 50-agent system solves 10,000 tasks this week. Next week, it starts fresh.

This is not a criticism of the frameworks. They were not designed to solve intelligence accumulation. They were designed to solve coordination. Those are different problems, and conflating them is why intelligent-seeming multi-agent systems keep making the same mistakes at scale.

This article explains the coordination ceiling, shows why it is architectural rather than a matter of better prompting, and describes the QIS outcome routing layer that closes the loop.


What Coordination Solves (And What It Doesn't)

LangGraph's core contribution is stateful graph execution — persistent state across agent turns, branching logic, interrupt/resume for human-in-the-loop workflows. AutoGen enables conversable agents that can chat, delegate, and self-correct. CrewAI provides role specialization and sequential/hierarchical task pipelines.

These frameworks solve the execution coordination problem: getting multiple LLM agents to work together on a structured task without stepping on each other.

They do not solve the intelligence accumulation problem: learning, at the network level, what works across many tasks over time.

Consider a concrete example. You have a 20-agent AutoGen network running customer support triage. Over 10,000 resolved tickets, your agents discover (implicitly, through repeated patterns) that a specific combination of issue type + customer segment + product version maps to a resolution pathway with a 94% first-contact resolution rate.

Where does that discovery live? In your vector store, if you wrote custom memory retrieval. In your fine-tuning data, if you ran a fine-tuning cycle. But in most deployments: in no persistent, structured form that any agent can query at runtime.

The next day, a new ticket arrives that matches that exact pattern. Your agent starts from scratch.


The Coordination Ceiling: A Formal Description

Let N = the number of agents in your network. Let T = the number of tasks completed.

Under current framework architectures:

  • Intelligence per agent ≈ I(prompt) + I(context window) + I(retrieval) — all constants that do not scale with T
  • Cross-agent learning ≈ 0 in most deployments (each agent gets the same base prompt; no cross-agent synthesis at runtime)
  • Network-level intelligence after T tasks ≈ I₀ (roughly constant — the network is not smarter after 10,000 tasks than after 10)

This is the ceiling. It is not a failure of prompting. It is a failure of architecture. There is no mechanism in the standard framework stack that converts completed task outcomes into routable intelligence that other agents can query.


What QIS Outcome Routing Adds

Christopher Thomas Trevethan discovered the Quadratic Intelligence Swarm (QIS) protocol on June 16, 2025. QIS is not an agent framework. It is a routing protocol — a layer that sits between your existing framework (LangGraph, AutoGen, CrewAI — your choice) and persistent storage.

QIS does one thing: it routes pre-distilled outcome packets — ~512-byte structured insight objects — to semantically addressed destinations so other agents can query them at runtime.

The architecture looks like this:

Your Agent Framework:   LangGraph / AutoGen / CrewAI  (unchanged)
──────────────────────────────────────────────────────────────────
QIS Protocol Layer:     Outcome packet generation,
                        semantic fingerprinting,
                        address routing
──────────────────────────────────────────────────────────────────
Transport Layer:        Vector DB, Redis, REST, pub/sub,
                        SQLite, any queryable store

Your framework continues to handle execution. QIS handles the memory-as-intelligence layer beneath it.


The Outcome Packet: Unit of Intelligence

A QIS outcome packet for an LLM agent task looks like this:

from dataclasses import dataclass
import hashlib
import json

@dataclass
class OutcomePacket:
    # Semantic address — deterministic from problem description
    address: str

    # What was attempted
    task_type: str
    context_summary: str      # compressed, no PII

    # What happened
    outcome: str              # "resolved", "escalated", "failed"
    resolution_pathway: str   # which approach worked
    confidence: float         # 0.0 - 1.0

    # Metadata
    agent_id: str             # anonymous — no personal data
    timestamp: str
    domain_tags: list[str]    # semantic routing tags

def generate_address(task_type: str, domain_tags: list[str]) -> str:
    """
    Deterministic semantic address from problem characteristics.
    Any agent facing the same problem type + domain generates the same address.
    This is the routing mechanism: similarity = same address = same packets.
    """
    # json.dumps gives a canonical string form, matching the router implementation below
    canonical = f"{task_type}::{json.dumps(sorted(domain_tags))}"
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

The address is the key insight. It is not generated from agent identity — it is generated from problem characteristics. Any agent facing a semantically similar task generates the same address and can therefore retrieve packets deposited by agents that faced the same task before.

This is semantic routing without a central directory.
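To make that concrete, here is a quick check, reimplementing the router's address hashing for self-containment (the task type and tags are made up for illustration), that the address depends only on problem characteristics, not on agent identity or tag order:

```python
import hashlib
import json

def generate_address(task_type: str, domain_tags: list[str]) -> str:
    """Deterministic address from problem semantics (same logic as the router)."""
    canonical = f"{task_type}::{json.dumps(sorted(domain_tags))}"
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Two agents with different identities, same problem characteristics,
# tags supplied in different orders:
addr_agent_a = generate_address("support_triage", ["billing", "enterprise", "v2"])
addr_agent_b = generate_address("support_triage", ["v2", "enterprise", "billing"])

assert addr_agent_a == addr_agent_b  # sorted() makes the address order-independent
```

Because the address is a pure function of the problem description, there is nothing to register and nothing to look up in a directory: two agents converge on the same destination by construction.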


The Math: Why This Scales Quadratically

In a network of N agents completing T tasks over time, QIS enables:

  • N(N-1)/2 unique cross-agent synthesis opportunities
  • Each agent queries relevant outcome packets at O(1) to O(log N) cost depending on transport
  • Network-level intelligence I(N, T) grows as Θ(N²) — quadratic in agent count

For concreteness:

  • 10 agents → 45 pairwise synthesis paths
  • 50 agents → 1,225 pairwise synthesis paths
  • 100 agents → 4,950 pairwise synthesis paths

Each pairwise synthesis path is not a message — it is a potential route for accumulated intelligence. When Agent 47 (customer support triage) runs the same task type that Agents 3, 12, 19, and 31 have handled before, it does not start from scratch. It queries the outcome packets from those prior runs and synthesizes locally.

The compute cost does not scale with N². Only the intelligence available does. Routing cost is O(log N) or better.
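The pairwise counts above follow directly from the combinatorics; a two-line function verifies them:

```python
def synthesis_paths(n: int) -> int:
    """Unique unordered agent pairs: n * (n - 1) / 2."""
    return n * (n - 1) // 2

for n in (10, 50, 100):
    print(f"{n} agents: {synthesis_paths(n)} pairwise synthesis paths")
# prints 45, 1225, and 4950 respectively
```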


Implementation: QIS Layer on Top of LangGraph

Here is a minimal implementation of the QIS outcome routing layer that wraps a standard LangGraph workflow:

import json
import hashlib
from datetime import datetime
from typing import Optional
from langchain_core.messages import HumanMessage

# Transport layer: using simple SQLite for this example
# QIS is transport-agnostic — swap for Redis, ChromaDB, NATS, REST API, etc.
import sqlite3

class QISOutcomeRouter:
    """
    QIS protocol layer for LangGraph agents.
    Sits between agent execution and persistent storage.
    """

    def __init__(self, db_path: str = ":memory:"):
        self.conn = sqlite3.connect(db_path)
        self.conn.execute("""
            CREATE TABLE IF NOT EXISTS outcome_packets (
                address TEXT,
                task_type TEXT,
                outcome TEXT,
                resolution_pathway TEXT,
                confidence REAL,
                domain_tags TEXT,
                timestamp TEXT
            )
        """)
        self.conn.execute("CREATE INDEX IF NOT EXISTS idx_address ON outcome_packets(address)")
        self.conn.commit()

    def generate_address(self, task_type: str, domain_tags: list) -> str:
        """Deterministic address from problem semantics."""
        canonical = f"{task_type}::{json.dumps(sorted(domain_tags))}"
        return hashlib.sha256(canonical.encode()).hexdigest()[:16]

    def query_similar_outcomes(self, task_type: str, domain_tags: list, limit: int = 10) -> list:
        """
        Indexed lookup by semantic address (SQLite B-tree: O(log n)).
        Returns outcome packets from semantically similar prior tasks.
        """
        address = self.generate_address(task_type, domain_tags)
        cursor = self.conn.execute("""
            SELECT outcome, resolution_pathway, confidence, timestamp
            FROM outcome_packets
            WHERE address = ?
            ORDER BY confidence DESC, timestamp DESC
            LIMIT ?
        """, (address, limit))
        return [dict(zip(["outcome", "pathway", "confidence", "ts"], row))
                for row in cursor.fetchall()]

    def deposit_outcome(
        self,
        task_type: str,
        domain_tags: list,
        outcome: str,
        resolution_pathway: str,
        confidence: float
    ):
        """Deposit outcome packet after task completion."""
        address = self.generate_address(task_type, domain_tags)
        self.conn.execute("""
            INSERT INTO outcome_packets
            VALUES (?, ?, ?, ?, ?, ?, ?)
        """, (
            address, task_type, outcome, resolution_pathway,
            confidence, json.dumps(domain_tags),
            datetime.utcnow().isoformat()
        ))
        self.conn.commit()
        return address

    def synthesize_prior_outcomes(self, outcomes: list) -> Optional[str]:
        """
        Local synthesis — generate a recommendation from prior outcome packets.
        This is the QIS synthesis step: no central aggregator, runs on the querying agent.
        """
        if not outcomes:
            return None

        high_conf = [o for o in outcomes if o["confidence"] > 0.7]
        if not high_conf:
            return None

        # Weight by confidence
        pathway_scores = {}
        for o in high_conf:
            p = o["pathway"]
            pathway_scores[p] = pathway_scores.get(p, 0) + o["confidence"]

        best_pathway = max(pathway_scores, key=pathway_scores.get)
        top_confidence = max(o["confidence"] for o in high_conf if o["pathway"] == best_pathway)

        return f"Prior outcomes suggest: {best_pathway} (confidence: {top_confidence:.2f}, n={len(high_conf)})"


# Usage with LangGraph agent
def build_qis_aware_agent(router: QISOutcomeRouter):
    """
    Wrap any LangGraph node with QIS query + deposit logic.
    """

    def agent_node(state: dict) -> dict:
        task_type = state.get("task_type", "general")
        domain_tags = state.get("domain_tags", [])

        # Step 1: Query prior outcomes (indexed lookup by semantic address)
        prior_outcomes = router.query_similar_outcomes(task_type, domain_tags)
        synthesis = router.synthesize_prior_outcomes(prior_outcomes)

        if synthesis:
            # Inject synthesized prior intelligence into agent context
            state["messages"].append(HumanMessage(
                content=f"[QIS Context] {synthesis}"
            ))

        # Step 2: Execute agent (your existing LangGraph logic)
        # ... agent execution here (`execute_agent` is a placeholder
        # for your framework's node logic) ...
        result = execute_agent(state)

        # Step 3: Deposit outcome packet
        if result.get("completed"):
            router.deposit_outcome(
                task_type=task_type,
                domain_tags=domain_tags,
                outcome="resolved",
                resolution_pathway=result.get("approach", "standard"),
                confidence=result.get("confidence", 0.5)
            )

        return result

    return agent_node

The existing LangGraph graph, nodes, edges, and state management are untouched. QIS operates as a before/after wrapper on any node: query before execution, deposit after. Transport is SQLite here — swap for Redis, ChromaDB, a REST API, or a pub/sub system without changing the protocol logic.
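To see the full loop in isolation, here is a minimal sketch of deposit and query with a plain dict standing in for the SQLite transport. Function names and the example ticket data are illustrative, not part of the protocol:

```python
import hashlib
import json

# Plain-dict transport standing in for the SQLite store above;
# the protocol loop (address -> deposit -> query) is unchanged.
store: dict[str, list[dict]] = {}

def address(task_type: str, tags: list[str]) -> str:
    canonical = f"{task_type}::{json.dumps(sorted(tags))}"
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

def deposit(task_type: str, tags: list[str], pathway: str, confidence: float) -> None:
    store.setdefault(address(task_type, tags), []).append(
        {"pathway": pathway, "confidence": confidence})

def query(task_type: str, tags: list[str]) -> list[dict]:
    packets = store.get(address(task_type, tags), [])
    return sorted(packets, key=lambda p: p["confidence"], reverse=True)

# Agent 3 resolves a ticket and deposits the outcome:
deposit("support_triage", ["billing", "enterprise"], "refund_then_upgrade", 0.94)

# Agent 47 later faces the same problem type (tag order irrelevant):
prior = query("support_triage", ["enterprise", "billing"])
assert prior[0]["pathway"] == "refund_then_upgrade"
```

The dict version and the SQLite version implement the same loop; only the storage call changes, which is the point of keeping the protocol layer separate from the transport layer.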


The Difference After 10,000 Tasks

Without QIS outcome routing, your LangGraph network after 10,000 tasks is as intelligent as it was at task 1. Same prompts, same context, same retrieval.

With QIS outcome routing, your network after 10,000 tasks has deposited 10,000 outcome packets — and every future agent facing a semantically similar task has access to all of them. Network intelligence I(N, T) grows with T as well as with N.

The accumulation is not bounded by context window size. Outcome packets are ~512 bytes — you can surface the top-K most relevant prior outcomes for any task in milliseconds, regardless of how many total packets exist.


The Three Properties That Make This Work

Christopher Thomas Trevethan's QIS architecture has three structural properties that standard multi-agent frameworks do not replicate:

1. Semantic addressing, not agent identity addressing.
Packets route to problem-type destinations, not agent-specific inboxes. Any agent with the right problem type can query any relevant packet. This is what enables N(N-1)/2 synthesis paths — the routing is by problem similarity, not by agent relationship.

2. Pre-distillation before routing.
Raw agent outputs are not shared — only ~512-byte distilled outcome packets. This keeps per-agent compute constant regardless of network size, and keeps the routing layer lightweight.

3. Local synthesis.
Each agent synthesizes queried packets locally. There is no central aggregator, no consensus mechanism, no orchestrator that becomes a bottleneck as N grows. This is what enables logarithmic routing cost with quadratic intelligence growth.

These three properties together are the discovery. No single component is the breakthrough. The complete loop — distill, address, route, synthesize locally, deposit back — is what enables quadratic scaling without compute explosion.
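As a rough sanity check on the pre-distillation budget, a JSON-serialized packet of the shape defined earlier (all field values here are made up) fits comfortably within the ~512-byte target:

```python
import json

# Illustrative packet matching the OutcomePacket fields; values are fabricated.
packet = {
    "address": "a3f9c2e481d07b55",
    "task_type": "support_triage",
    "context_summary": "billing dispute, enterprise tier, product v2",
    "outcome": "resolved",
    "resolution_pathway": "refund_then_upgrade",
    "confidence": 0.94,
    "agent_id": "agent-047",
    "timestamp": "2026-01-15T09:30:00Z",
    "domain_tags": ["billing", "enterprise", "v2"],
}

size = len(json.dumps(packet).encode("utf-8"))
assert size < 512  # within the ~512-byte pre-distillation budget
```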


Transport Doesn't Matter. The Loop Does.

The example above uses SQLite. You can implement the same QIS protocol with:

  • ChromaDB or Qdrant — vector similarity search as the routing mechanism (O(log N) HNSW)
  • Redis — pub/sub topic matching for real-time packet delivery
  • NATS JetStream — durable message streams with pull consumer support for intermittent agents
  • REST API — semantic address as URL path, outcome packet as JSON body (O(1) lookup with indexed routing table)
  • Shared filesystem — address as directory path, packets as files (works for co-located agents)

The transport does not change the intelligence accumulation math. N(N-1)/2 pairwise synthesis paths emerge from the semantic addressing protocol regardless of how packets are stored and retrieved.
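One way to make transport-agnosticism explicit in code is a minimal interface. This is a sketch of what such a contract might look like (the `QISTransport` name and its two-method surface are assumptions for illustration, not part of any published spec), with an in-memory reference implementation:

```python
from typing import Protocol

class QISTransport(Protocol):
    """Hypothetical minimal contract: the routing layer only needs
    put-by-address and get-by-address."""
    def put(self, address: str, packet: dict) -> None: ...
    def get(self, address: str, limit: int = 10) -> list[dict]: ...

class DictTransport:
    """In-memory reference transport satisfying the contract."""
    def __init__(self) -> None:
        self._store: dict[str, list[dict]] = {}

    def put(self, address: str, packet: dict) -> None:
        self._store.setdefault(address, []).append(packet)

    def get(self, address: str, limit: int = 10) -> list[dict]:
        return self._store.get(address, [])[:limit]
```

A Redis, ChromaDB, or REST-backed class implementing the same two methods would drop in without touching the addressing or synthesis logic.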

39 provisional patents protect this architecture under Christopher Thomas Trevethan's name, with a humanitarian licensing model: free for nonprofits, research, and education.


The Question for Your Agent Architecture

If your multi-agent system has been running for 30 days, here is a diagnostic question:

Is it smarter today than it was on day 1? Not because you updated the prompts or fine-tuned a model — but because the network itself accumulated intelligence from completed tasks and made that intelligence available to future agents?

If the answer is no, you have a coordination layer without an outcome routing layer. The ceiling exists whether or not you can see it from day 1.

QIS is the routing layer. The loop: distill → address → route → synthesize locally → deposit back. Every transport works. Every framework works. The math — N(N-1)/2 — is invariant.


Christopher Thomas Trevethan discovered the Quadratic Intelligence Swarm protocol on June 16, 2025. 39 provisional patents protect the architecture under a humanitarian licensing model: free for nonprofits, research, and education. IP protection is in place.
