DEV Community

Rory | QIS PROTOCOL

Posted on • Originally published at qisprotocol.com

Google's A2A Protocol Coordinates Agents. QIS Makes Agents Collectively Intelligent. Here Is the Architectural Difference.

In early 2025, Google published the Agent-to-Agent (A2A) protocol — an open standard for how AI agents communicate, delegate tasks, and report results across heterogeneous systems.

If you are building multi-agent applications in 2026, A2A is worth understanding. It solves a real problem: agents built on different frameworks with different capabilities need a common language to talk to each other.

But A2A solves coordination. It does not solve collective intelligence.

That is a different problem. And the architecture required to solve it is fundamentally different from what A2A provides.

Christopher Thomas Trevethan discovered how to solve the collective intelligence problem on June 16, 2025. The protocol is called Quadratic Intelligence Swarm (QIS). Understanding where A2A ends and QIS begins is one of the more useful frames available to AI engineers right now.


What A2A Actually Does

Google's A2A protocol addresses a specific pain point: the fragmentation of the multi-agent ecosystem.

In 2025-2026, an enterprise AI system might use a LangGraph agent for workflow orchestration, a CrewAI agent for research tasks, an AutoGen agent for code generation, and a proprietary internal agent for compliance checking. These agents cannot natively talk to each other. They have different message formats, different capability schemas, different authentication models.

A2A provides:

  • Agent Cards — a standardized JSON schema that describes an agent's capabilities, input/output formats, and authentication requirements
  • Task delegation — a standard way for one agent to assign a task to another and receive a result
  • Streaming and push notifications — async communication patterns for long-running tasks
  • Authentication standards — so agents can securely call each other across organizational boundaries

A2A is essentially a common protocol layer for agent-to-agent RPC (remote procedure call). It standardizes the communication interface between agents.
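To make the Agent Card idea concrete, here is a minimal sketch of one as a Python dict. The field names approximate the published schema but are illustrative; the endpoint URL and skill IDs are hypothetical, and the A2A specification remains the authoritative reference.

```python
import json

# Illustrative sketch of an A2A-style Agent Card.
# Field names approximate the published schema; consult the A2A
# specification for the authoritative definition.
agent_card = {
    "name": "research-agent",
    "description": "Performs literature review tasks",
    "url": "https://agents.example.com/research",  # hypothetical endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": True, "pushNotifications": False},
    "authentication": {"schemes": ["bearer"]},
    "skills": [
        {"id": "lit-review", "description": "Summarize papers on a topic"}
    ],
}

# An agent publishes this card so peers can discover what it does
# and how to call it.
print(json.dumps(agent_card, indent=2))
```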

What A2A does not define:

  • What agents should learn from each other's outputs over time
  • How collective patterns emerge across thousands of deployments
  • How one agent's real-world outcome benefits agents that haven't encountered the same situation yet
  • How intelligence compounds as the agent network grows

A2A makes agents able to talk. It does not make them collectively smarter.


What QIS Actually Does

QIS is not a communication protocol. It is an intelligence-compounding architecture.

The distinction matters. A2A answers: how does Agent A tell Agent B what to do? QIS answers: how does the real-world outcome of Agent A's deployment make Agent B's future deployments more intelligent — without Agent A and Agent B ever talking directly?

The QIS loop, discovered by Christopher Thomas Trevethan:

Real-world event → Local processing at edge node
               → Distill result into outcome packet (~512 bytes)
               → Semantic fingerprint (vectorizes the problem, not the solution)
               → Post to deterministic address (address = the problem class)
               → Other nodes with the same problem class query that address
               → Pull all deposited outcome packets from similar nodes
               → Synthesize locally (milliseconds, on device)
               → Generate new outcome packets from improved result
               → Deposit back to same address
               → Loop continues indefinitely

Every node is simultaneously a producer and a consumer of intelligence. There is no central aggregator. No orchestrator. No single point of failure. The intelligence compounds across the network as a property of the architecture — not because any agent explicitly coordinates with any other.
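The loop above can be sketched in a few dozen lines of Python. Everything here is an illustrative stand-in: the hash-based fingerprint, the JSON packet format, and the in-memory address table substitute for whatever semantic vectorization and transport (DHT, database, pub/sub) a real deployment would use.

```python
import hashlib
import json

# In-memory stand-in for the deterministic address space; a real
# deployment might use a DHT, a database, or a pub/sub topic instead.
ADDRESS_SPACE: dict[str, list[bytes]] = {}

def semantic_fingerprint(problem_description: str) -> str:
    """Derive a deterministic address from the problem class.
    (A hash of a normalized description stands in for a real
    semantic vectorization step.)"""
    normalized = problem_description.strip().lower()
    return hashlib.sha256(normalized.encode()).hexdigest()[:16]

def distill_outcome(problem: str, result: dict) -> bytes:
    """Distill a local result into a compact outcome packet (~512 bytes)."""
    packet = json.dumps({"problem": problem, "outcome": result}).encode()
    return packet[:512]  # enforce the packet size budget

def deposit(problem: str, result: dict) -> None:
    """Post an outcome packet to the problem class's address."""
    addr = semantic_fingerprint(problem)
    ADDRESS_SPACE.setdefault(addr, []).append(distill_outcome(problem, result))

def query(problem: str) -> list[bytes]:
    """Pull every outcome packet deposited for this problem class."""
    return ADDRESS_SPACE.get(semantic_fingerprint(problem), [])

# Node A deposits a real-world outcome; node B later hits the same
# address because it normalizes to the same problem class.
deposit("sepsis early-warning thresholds", {"auc": 0.91, "worked": True})
packets = query("Sepsis Early-Warning Thresholds")
```

The key property the sketch preserves: producer and consumer never talk to each other directly; they only share a deterministic address derived from the problem itself.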

The math:

N agents → N(N-1)/2 unique synthesis pathways

This is Θ(N²). Each agent pays at most O(log N) routing cost to participate. As the network grows, intelligence grows quadratically while compute grows logarithmically. This ratio has no analog in coordination protocols.
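As a quick sanity check on these growth rates, pathway count versus logarithmic routing depth can be tabulated directly:

```python
import math

def synthesis_pathways(n: int) -> int:
    """Unique unordered agent pairs: N(N-1)/2."""
    return n * (n - 1) // 2

# Pathways grow quadratically; ceil(log2(N)) routing hops barely move.
for n in (100, 1_000, 100_000):
    print(n, synthesis_pathways(n), math.ceil(math.log2(n)))
# 100     -> 4,950 pathways,          7 hops
# 1,000   -> 499,500 pathways,       10 hops
# 100,000 -> 4,999,950,000 pathways, 17 hops
```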


The Architectural Stack

The clearest way to see the difference is to position each protocol in the stack:

┌────────────────────────────────────────────────────────┐
│              COLLECTIVE INTELLIGENCE LAYER              │
│  QIS Protocol — N(N-1)/2 synthesis, continuous, Θ(N²)  │  ← QIS
├────────────────────────────────────────────────────────┤
│               AGENT COORDINATION LAYER                  │
│  A2A Protocol — task delegation, capability discovery   │  ← A2A
├────────────────────────────────────────────────────────┤
│                 AGENT EXECUTION LAYER                   │
│  LangGraph, CrewAI, AutoGen, custom agents             │
├────────────────────────────────────────────────────────┤
│                   INFERENCE LAYER                       │
│  GPT-4, Gemini, Claude, Mistral, local models          │
├────────────────────────────────────────────────────────┤
│                     DATA LAYER                          │
│  OMOP CDM, vector stores, databases, APIs              │
└────────────────────────────────────────────────────────┘

A2A operates at the coordination layer. QIS operates at the collective intelligence layer, above coordination, with every other layer sitting beneath it.

A QIS node can use A2A internally to coordinate between its sub-agents. A2A-coordinated agent networks can plug into QIS at the outcome packet layer. These are not competing protocols — they are orthogonal layers of the same stack.


Three Scenarios That Show the Gap

Scenario 1: Clinical decision support, 10,000 deployments

With A2A only:

  • Agent at hospital A can call Agent at hospital B to request a second opinion
  • Communication is clean, authenticated, well-structured
  • But the combined intelligence of 10,000 deployments does not flow to a new deployment on day one
  • Each new deployment starts from its training baseline

With QIS:

  • Every deployment distills real-world clinical outcomes into ~512-byte packets
  • Posts to a deterministic semantic address (the specific clinical problem class)
  • New deployment on day one queries the address — and finds the accumulated intelligence of every similar deployment that ran before it
  • The mailbox is full before it ever sees its first patient

A2A coordinates the agents that happen to interact. QIS compounds the intelligence of every agent that ever processed the same class of problem.

Scenario 2: Cybersecurity threat detection, scaling from 100 to 100,000 nodes

With A2A only:

  • Agents can delegate threat analysis tasks to specialized agents
  • Coordination overhead grows linearly with coordination volume
  • Threat patterns discovered at node 47 do not automatically benefit node 89,102 unless explicitly routed there

With QIS:

  • Every detected threat pattern becomes an outcome packet posted to a deterministic address
  • Every future node querying that address inherits every prior detection outcome
  • At 100,000 nodes there are 4,999,950,000 unique synthesis pathways; each node draws on the accumulated, validated detection outcomes of the entire network
  • The network detects novel threats faster as it grows, because each new variant lands in a richer synthesis context

Scenario 3: Scientific research, distributed labs

With A2A only:

  • Lab agents can delegate literature review tasks to each other
  • Clean structured communication between heterogeneous systems
  • But replication remains unsolved: a failed experiment at Lab A does not automatically prevent Lab B from running the same experiment and failing the same way

With QIS:

  • Every experimental outcome — positive or negative — is distilled and posted
  • Labs querying the same problem class query the same address
  • Failed experiments become part of the intelligence record, not orphaned reports in siloed journals
  • The replication crisis is an architecture problem; QIS provides the architectural fix

Where A2A Has No Answer

A2A's explicit scope is communication and delegation. The A2A specification does not address:

  1. Temporal compounding — how a network gets smarter over time without retraining
  2. Emergence without coordination — how intelligence arises from nodes that never directly communicate
  3. Privacy-preserving synthesis — how raw data stays local while insights compound globally
  4. Cold start from network history — how a new node immediately benefits from every prior node's experience

These are QIS's native properties. They are not gaps in A2A's design — they are outside A2A's design scope.

The QIS architecture ensures that raw data never leaves an edge node. The ~512-byte outcome packet contains no raw data — only a distilled representation of what worked (or didn't) in a specific problem context. This is privacy by architecture, not privacy by policy.
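Here is what "privacy by architecture" can look like at the packet level. The field names and sizes are hypothetical; the point the sketch makes is structural: only a fingerprint of the problem context and a distilled outcome are ever serialized, so the raw records physically cannot appear on the wire.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class OutcomePacket:
    problem_class: str   # fingerprint of the problem, not the data
    action_taken: str    # distilled description of what was tried
    succeeded: bool
    confidence: float

def build_packet(raw_record: dict, action: str,
                 succeeded: bool, conf: float) -> bytes:
    # Only a hash of the problem context leaves the node;
    # raw_record itself is never serialized.
    fingerprint = hashlib.sha256(
        raw_record["problem_description"].encode()
    ).hexdigest()[:16]
    packet = OutcomePacket(fingerprint, action, succeeded, conf)
    return json.dumps(asdict(packet)).encode()

raw = {
    "problem_description": "icu sepsis alert tuning",
    "patient_rows": ["<sensitive data that stays local>"],
}
wire_bytes = build_packet(raw, "raised alert threshold", True, 0.87)
```

Because `patient_rows` never touches the serializer, no policy enforcement is needed to keep it local; the packet format simply has nowhere to put it.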

A2A agents communicate real-time task data. QIS nodes communicate pre-distilled intelligence abstractions. Different data shapes, different purposes, different layers.


What This Means for Engineers Building in 2026

If you are building a multi-agent system today, A2A is a reasonable choice for your coordination layer. Google's specification is clean, well-documented, and the ecosystem is growing.

What you should also be asking: what happens to the intelligence generated by my agent network after each deployment?

If the answer is "it goes into logs that nobody synthesizes" or "we retrain periodically if we have enough data" — that is the gap QIS was designed to close.

The combination of both layers:

  • A2A handles how your agents communicate in real-time, delegate tasks, and return structured results
  • QIS handles what your agents learn from every real-world outcome and how that intelligence compounds for every future agent in the same problem domain

An A2A-coordinated agent network that also implements QIS outcome routing does not just coordinate well — it gets smarter with every deployment, every edge case, every production run. The two protocols are more powerful together than either is alone.
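The layering can be sketched as a thin wrapper: an A2A-style delegation call whose result is also deposited as a QIS outcome packet. Both functions here are hypothetical stand-ins (a real A2A call would POST the task to the remote agent's endpoint; a real QIS deposit would go to a shared routing layer), but the composition is the point.

```python
import hashlib
import json

# Stand-in for the shared QIS address space.
OUTCOME_ADDRESSES: dict[str, list[bytes]] = {}

def a2a_delegate(agent_url: str, task: dict) -> dict:
    """Stand-in for an A2A task delegation call; a real system would
    send the task to the remote agent's endpoint and await the result."""
    return {"status": "completed", "result": "summary of findings"}

def qis_deposit(problem: str, outcome: dict) -> None:
    """Distill the outcome and post it to the problem class's address."""
    addr = hashlib.sha256(problem.lower().encode()).hexdigest()[:16]
    packet = json.dumps({"problem": problem, "outcome": outcome}).encode()[:512]
    OUTCOME_ADDRESSES.setdefault(addr, []).append(packet)

def delegate_and_learn(agent_url: str, task: dict) -> dict:
    """A2A handles the real-time call; QIS records what came of it."""
    outcome = a2a_delegate(agent_url, task)
    qis_deposit(task["problem_class"], outcome)
    return outcome

result = delegate_and_learn(
    "https://agents.example.com/research",      # hypothetical agent
    {"problem_class": "literature review: sepsis"},
)
```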


The Scaling Difference in One Number

An A2A network with 1,000 agents has 1,000 agents communicating.

A QIS network with 1,000 agents has 499,500 active synthesis pathways, each compounding intelligence at logarithmic compute cost.

That number does not exist in A2A by design. It is the discovery Christopher Thomas Trevethan made: when you close the loop between real-world outcomes and semantic addressing, intelligence scales quadratically. No communication protocol achieves this — because communication is not the same problem as compounding.

The 39 provisional patents covering QIS protect the complete architecture — the loop, not any single transport or routing mechanism. Whether your routing layer is DHT-based, database-backed, pub/sub, or REST API does not matter. If you close the loop, you get the math.

A2A closes the communication gap. QIS closes the intelligence gap. Both gaps are real. Only one of them scales quadratically.


QIS (Quadratic Intelligence Swarm) was discovered by Christopher Thomas Trevethan on June 16, 2025. 39 provisional patents have been filed covering the complete loop architecture. The protocol is transport-agnostic, model-agnostic, and domain-agnostic. For technical documentation, see qisprotocol.com.

Related: QIS vs Mixture of Experts | QIS for Multi-Agent Coordination | QIS vs Federated Learning | QIS vs Blockchain
