The NHS is running a federated AI programme. Across 215 acute trusts, AI models train locally on patient data that never leaves its originating hospital. The intent is correct: patient outcomes should inform clinical intelligence without creating a centralised surveillance infrastructure.
The programme has a ceiling. It is not a governance problem, a consent problem, or a computing infrastructure problem. It is an architecture problem — and it has a precise mathematical description.
Federated learning moves model training to the data. Each trust trains a local model, sends gradient updates (not raw records) to a central aggregator, and receives an improved global model in return. Privacy is protected. Accuracy improves. The approach works — up to a point.
The point is N.
## The Aggregation Wall
When NHS trusts run federated learning, the aggregator sits at the centre. It receives gradient updates from each participating trust, averages them (typically using FedAvg or a variant), and distributes the updated model. Every trust communicates with the aggregator. No trust communicates with any other.
This means the intelligence flowing through the system scales linearly with the number of participants. Add a hundred trusts, and you have a hundred gradient vectors arriving at a central bottleneck. The aggregator sees everything. The trusts see only the averaged result.
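The bottleneck can be made concrete. Below is a minimal, illustrative sketch of FedAvg-style aggregation (not the NHS programme's actual implementation): each trust submits an update vector, the aggregator returns one weighted average, and no trust-to-trust path ever exists.

```python
def fedavg(updates, weights):
    """Weighted average of per-trust update vectors.

    updates: list of equal-length vectors, one per trust
    weights: per-trust sample counts used as averaging weights
    """
    total = sum(weights)
    dim = len(updates[0])
    return [
        sum(u[i] * w for u, w in zip(updates, weights)) / total
        for i in range(dim)
    ]

# Three trusts, two-parameter model: the aggregator sees three vectors;
# each trust receives back only the single averaged vector.
global_update = fedavg(
    [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]],
    [100, 100, 200],
)
```

Whatever was distinctive about any one trust's update is dissolved into the average before any other trust sees it.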
What the trusts never see: what worked at Leeds last quarter for a patient profile identical to one presenting at Newcastle today. What treatment approach reversed a deteriorating sepsis trajectory at a Birmingham ICU, and how that outcome resembles a case currently developing in Manchester. The cross-trust synthesis that would make the difference clinically — this is precisely what federated averaging cannot produce.
Federated learning was designed for a different problem: training a shared model without sharing raw data. It was not designed for real-time outcome synthesis across semantically similar cases. These are different architectural requirements.
## The N=1 Problem
The NHS runs some of the world's most specialised clinical units: rare disease centres at GOSH, specialist neurodegenerative programmes at UCL, paediatric oncology units that see fewer than a hundred cases per year. For federated learning, these sites present a structural problem. A meaningful gradient requires sufficient local data. If a rare disease centre sees twelve cases annually, the gradient is statistically unstable. The standard workaround is to exclude sites below a participation threshold.
The excluded sites are precisely the ones with the rarest, most valuable clinical intelligence. A patient with a rare phenotype presenting at a major teaching hospital benefits from the intelligence of every large trust in the programme — but not from the six specialist centres that have actually seen a case like theirs.
## The Architecture That Closes the Loop
In June 2025, Christopher Thomas Trevethan discovered an architecture for distributed intelligence that solves a different problem: how to close the synthesis loop across any number of edge nodes without requiring a central aggregator, and without excluding low-volume sites.
The architecture — which Trevethan calls the Quadratic Intelligence Swarm (QIS, distinct from quantum computing) — works as follows:
Each clinical edge processes its outcomes locally. Raw patient data never moves. A validated outcome — what worked, what did not, for what patient profile — is distilled into an outcome packet of approximately 512 bytes. This packet contains a semantic fingerprint: a vector representation of the clinical similarity space, not a record of the patient.
The outcome packet is posted to a deterministic address defined by clinical similarity, not by the identity of the institution. Any other node in the network with a semantically similar problem can query that address and retrieve recent outcome packets from their clinical twins.
The routing mechanism is not specified by the architecture. A distributed hash table gives O(log N) lookup. A vector similarity index gives O(1). A hospital network's existing database infrastructure works if it can map similarity queries to packet retrieval. What matters is that the loop closes:
`outcome → fingerprint → address → retrieval → local synthesis → new outcome`
The architecture is transport-agnostic. The discovery is the complete loop, not any single component.
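The loop can be sketched in a few lines. Here a plain dict stands in for any routing layer (DHT, vector index, or database); the function names and fingerprint fields are illustrative, not part of the specification.

```python
# In-memory stand-in for the routing layer: address -> outcome packets
routing_layer = {}

def fingerprint(outcome):
    # Deterministic address from clinical similarity,
    # not from the identity of the depositing institution
    return f"{outcome['icd10_chapter']}:{outcome['acuity_decile']}"

def post_outcome(outcome):
    routing_layer.setdefault(fingerprint(outcome), []).append(outcome)

def query_twins(case):
    # Any node with a semantically similar case derives the same
    # address and retrieves packets from its clinical twins
    return routing_layer.get(fingerprint(case), [])

post_outcome({"icd10_chapter": "J", "acuity_decile": 7, "outcome_decile": 9})
twins = query_twins({"icd10_chapter": "J", "acuity_decile": 7})
```

Depositing node and querying node never coordinate directly; the shared similarity address is the only rendezvous point.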
## The Mathematics
With N NHS trusts in a federated learning programme:
- Intelligence paths: N (each trust to central aggregator)
- Aggregator is a bottleneck — all synthesis routes through it
With N NHS trusts on an outcome routing network:
- Synthesis paths: N(N−1)/2
At N = 215 (current NHS acute trust count):
| Architecture | Synthesis paths |
|---|---|
| Federated learning | 215 |
| Outcome routing | 23,005 |
Each path is a channel through which validated clinical intelligence can flow between trusts with similar clinical problems, and the routing cost per query stays at O(log N) or better regardless of how many paths exist.

The N=1 rare disease sites are included. A 512-byte outcome packet from a centre that has seen a single case is valid network intelligence. Federated learning cannot include that centre; the exclusion is architectural, not a matter of policy.
## What a QIS Node Looks Like at an NHS Trust
The implementation is lightweight. At a trust level, a QIS node:
- Receives a validated outcome (treatment pathway + measured result for a defined patient profile)
- Generates a semantic fingerprint using the trust's clinical similarity definition — e.g., ICD-10 chapter + admission acuity decile + comorbidity cluster
- Posts the outcome packet to a routing layer (DHT, vector index, or existing database depending on infrastructure)
- Queries for recent packets from semantically similar cases across the network
- Synthesises locally — the trust's clinical intelligence system integrates incoming packets on its own terms
No raw data moves. The DSPT compliance boundary is respected. The routing layer never sees patient records — only semantic fingerprints and outcome deciles.
```python
import json


class NHSOutcomePacket:
    def __init__(self, trust_id_hash, icd10_chapter,
                 acuity_decile, outcome_type, outcome_decile,
                 treatment_pathway_hash, patient_count):
        self.trust_id_hash = trust_id_hash    # anonymised
        self.icd10_chapter = icd10_chapter    # e.g., "J" (respiratory)
        self.acuity_decile = acuity_decile    # 1-10
        self.outcome_type = outcome_type      # "discharge_los", "readmission_30d"
        self.outcome_decile = outcome_decile  # 1-10 (no raw values)
        self.treatment_pathway_hash = treatment_pathway_hash  # opaque
        self.patient_count = patient_count    # aggregate count, not individual

    def fingerprint(self):
        """Semantic address: similar clinical problems map to similar addresses."""
        return f"{self.icd10_chapter}:{self.acuity_decile}:{self.outcome_type}"

    def to_packet(self) -> bytes:
        data = {
            "tid": self.trust_id_hash,
            "icd": self.icd10_chapter,
            "acu": self.acuity_decile,
            "ot": self.outcome_type,
            "od": self.outcome_decile,
            "tp": self.treatment_pathway_hash,
            "n": self.patient_count,
        }
        return json.dumps(data).encode()  # typically < 512 bytes


class NHSOutcomeRouter:
    def __init__(self, routing_backend):
        self.routing = routing_backend

    def deposit(self, packet: NHSOutcomePacket):
        address = packet.fingerprint()
        self.routing.put(address, packet.to_packet())

    def query_similar(self, fingerprint: str, limit: int = 50):
        """Pull outcome packets from clinically similar trusts."""
        return self.routing.get_similar(fingerprint, limit)

    def synthesise(self, packets: list) -> list:
        """Local synthesis: what is working for our clinical twins?"""
        outcomes = [self._parse(p) for p in packets]
        return sorted(outcomes, key=lambda x: x["od"], reverse=True)

    def _parse(self, raw: bytes) -> dict:
        return json.loads(raw.decode())
```
A trust with a single rare case deposits one packet. That packet is valid. A query for it reaches other rare disease centres. The synthesis happens locally at each querying trust. There is no coordinator deciding whether the evidence is sufficient.
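To make that concrete, here is a self-contained sketch of the same deposit-and-synthesise flow. The in-memory backend and its exact-match similarity function are hypothetical stand-ins (any store exposing `put`/`get_similar` would do); packets are built inline rather than via the classes above so the example runs on its own.

```python
import json

class InMemoryBackend:
    """Hypothetical routing backend: address -> list of raw packets."""

    def __init__(self):
        self.store = {}

    def put(self, address, raw):
        self.store.setdefault(address, []).append(raw)

    def get_similar(self, fingerprint, limit=50):
        # Exact-address match is the simplest possible similarity function
        return self.store.get(fingerprint, [])[:limit]

backend = InMemoryBackend()

# Two specialist centres each deposit a single rare-case packet
for tid, od in [("a1", 9), ("b2", 6)]:
    packet = {"tid": tid, "icd": "Q", "acu": 8, "ot": "readmission_30d",
              "od": od, "tp": "opaque", "n": 1}
    backend.put("Q:8:readmission_30d", json.dumps(packet).encode())

# A querying trust retrieves its clinical twins' packets and
# synthesises locally: best outcome deciles first
packets = [json.loads(p.decode())
           for p in backend.get_similar("Q:8:readmission_30d")]
ranked = sorted(packets, key=lambda x: x["od"], reverse=True)
```

Both single-case deposits participate as first-class evidence; ranking them is a local decision at the querying trust, not a threshold imposed by a coordinator.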
## What This Is Not
Outcome routing does not replace federated learning where model training is the goal. If an NHS trust needs a shared classification model trained on EHR data, federated learning is the correct architecture for that problem.
Outcome routing adds the synthesis path that federated averaging structurally cannot provide: real-time intelligence from clinically similar cases, at every trust simultaneously, including the N=1 specialist centres. These are complementary layers, not competing approaches.
## Why This Matters Now
The NHS federated AI programme is an architectural advance on centralised AI. The next architectural advance is closing the synthesis loop.
The architecture was discovered by Christopher Thomas Trevethan; 39 provisional patents have been filed covering the complete loop. The routing layer is protocol-agnostic — any NHS trust infrastructure that can post and retrieve 512-byte packets at a semantically defined address can participate.
For researchers in distributed health intelligence — including those at institutions working on AI for health systems — the three conditions for network participation (entity benefits from insight + insight is aggregatable at the edge + similarity is definable), the formal mathematical basis for quadratic synthesis path growth, and the full seven-layer architectural specification are available for academic engagement.
OHDSI Europe Symposium opens in Rotterdam on April 18. The question of how outcome synthesis scales across hospital networks — without centralising data — is on the agenda. This is the architecture that answers it.
QIS (Quadratic Intelligence Swarm) was discovered by Christopher Thomas Trevethan on June 16, 2025. 39 provisional patents have been filed. The full architectural specification is available at qisprotocol.com.