Somewhere in the United States right now, a preventable patient harm event is occurring. And somewhere else in the country — at a hospital two states away, or three floors up in the same building — a clinician already knows how to prevent it. They learned the hard way, six months ago. They changed their protocol. Their patients are safer now.
That knowledge will not reach the clinician making the mistake today. Not because anyone is withholding it. Not because of negligence or indifference. Because no architecture exists to route validated safety outcomes across institutional boundaries without creating a legal, compliance, and data privacy crisis in the process.
This is not a training problem. It is not a staffing problem. It is an architecture problem.
The Numbers That Should Not Be Possible
The To Err Is Human report from the Institute of Medicine was published in 1999. It estimated that between 44,000 and 98,000 Americans die from preventable medical errors each year.
That was 27 years ago.
A 2016 study published in The BMJ by Martin Makary and Michael Daniel used more comprehensive methodology and estimated the number at 250,000 deaths per year — making medical error the third leading cause of death in the United States, behind only heart disease and cancer.
The number went up. Not because medicine got worse. Because the architecture for learning across institutions never got better.
The Agency for Healthcare Research and Quality has been collecting patient safety data since the 1990s. The Joint Commission has been publishing sentinel event data since 1995. Every major health system runs a Quality Improvement office. Hospital incident reporting systems file thousands of reports per year. Root cause analyses are conducted. Process improvements are implemented — locally.
And then the same sentinel event occurs at the next hospital, because the outcome of the root cause analysis never left the building.
Why the Knowledge Does Not Travel
It is worth being precise about why this happens, because the failure is architectural — and architectural problems require architectural solutions.
Reason 1: Incident reports are not routed, they are filed
When a nurse files a patient safety report — a near-miss, an adverse event, an unexpected outcome — that report enters the hospital's incident reporting system. It is reviewed by a quality improvement officer. A root cause analysis may be conducted. A protocol may be updated.
The outcome of that analysis — what actually went wrong, what change prevented recurrence — is not systematically shared with any other institution. It cannot be, under current architecture, without sharing identifiable patient data, provider data, or institutional data that creates legal exposure under HIPAA, state medical peer-review privilege statutes, and the common-law doctrine that protects quality improvement records from discovery.
Reason 2: Legal protection creates data walls
Medical peer review privilege exists for a good reason: to encourage honest self-assessment without fear that the records will be used in malpractice litigation. Nearly every state has some version of it. Federal law (notably the Patient Safety and Quality Improvement Act of 2005) adds additional layers of protection for quality improvement activities.
The effect is that the most valuable patient safety knowledge — the detailed root cause analyses, the near-miss investigations, the protocol failures and protocol fixes — is legally protected from disclosure. Protected from disclosure means protected from sharing. Protected from sharing means the knowledge stays inside each institution.
Reason 3: Federated approaches cannot handle this domain
Federated learning is sometimes proposed as a solution to healthcare data silos. But federated learning has a structural constraint that makes it poorly suited to patient safety: it requires centralized aggregation of model gradients, minimum cohort sizes for statistically meaningful contributions, and compatible model architectures across participating institutions.
A hospital that has had three cases of a rare adverse drug event does not have enough data to contribute a meaningful gradient to a federated learning round. Under federated learning, that institution's hard-won safety knowledge is mathematically excluded from the network.
There is a further problem: federated learning moves model parameters. Patient safety knowledge is not a model parameter. It is a validated outcome — "when patients with characteristic X received intervention Y in context Z, outcome W occurred at rate R." That is not a gradient. It cannot be federated.
Reason 4: Aggregate databases are retrospective by design
The National Database of Nursing Quality Indicators, the AHRQ Patient Safety Indicators, the Leapfrog Hospital Safety Grade — these aggregate patient safety data across institutions. They are valuable. They are also retrospective: they report on what happened, not on what is working right now at institutions that have already solved the problem you are facing today.
The clinician who needs to prevent a harm event today cannot query a real-time database of what is working at their institutional twins. That database does not exist. The architecture for creating it — without centralizing protected data — has not existed.
Until the Quadratic Intelligence Swarm protocol.
The Architecture That Routes Knowledge Without Routing Data
Christopher Thomas Trevethan discovered — not invented — a protocol for routing distilled outcome intelligence at quadratic scale without moving protected data. The discovery was made on June 16, 2025. Thirty-nine provisional patents cover the complete architecture.
The insight: instead of routing raw data or model parameters, route outcome packets — distilled summaries of what worked or didn't work in a specific context. These packets are small (approximately 512 bytes), semantically fingerprinted, and routed to the nodes most similar to the problem they describe.
Applied to patient safety:
A hospital's Quality Improvement system observes a validated outcome: patients with polypharmacy profiles above a certain threshold who received care handoffs during shift change in the ICU had a 3.2× higher adverse event rate. The intervention that reduced that rate: structured handoff checklists with pharmacist co-sign requirements.
The outcome packet is not the patient record. It is not the incident report. It is not the root cause analysis. It is a distilled outcome: condition fingerprint → intervention → result → context tags. Approximately 512 bytes. Protected health information: zero. Attorney-client privilege implicated: none. Peer-review records shared: none.
That packet is routed — using any efficient routing mechanism — to the institutions most semantically similar to the originating hospital. Similar ICU acuity. Similar polypharmacy patient population. Similar shift structure. They receive the outcome. They synthesize it locally. They update their protocol.
The knowledge travels. The data does not.
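As a sketch of what such a packet could contain, the example below serializes the handoff outcome described above. The field names and values are illustrative assumptions, not a normative QIS schema; the point is what is absent, and that the whole thing fits inside the ~512-byte envelope.

```python
import json

# Illustrative outcome packet. Field names and values are hypothetical,
# not a normative QIS schema. Note what is absent: no patient ID, no
# provider ID, no narrative text from the incident report or RCA.
packet = {
    "semantic_address": "a3f91c02d8e47b10",  # hash of the condition fingerprint
    "intervention": "structured handoff checklist with pharmacist co-sign",
    "outcome_delta": -0.62,                  # 62% reduction in adverse event rate
    "confidence": 0.87,                      # derived from N validated observations
    "institution_type": "teaching_hospital",
    "context_tags": ["ICU", "polypharmacy", "shift-handoff"],
}

encoded = json.dumps(packet).encode("utf-8")
print(len(encoded))  # comfortably under the ~512-byte envelope
```

Nothing in the packet is traceable to a patient, a clinician, or a privileged quality-improvement record; the fingerprint is a hash, not a description.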
The Math of Cross-Institutional Safety Intelligence
The Quadratic Intelligence Swarm protocol produces intelligence that scales with the square of the network size.
With N participating institutions:
- N(N-1)/2 unique synthesis opportunities exist between institutional pairs
- Each institution pays only O(log N) routing cost
- 100 hospitals: 4,950 synthesis paths
- 1,000 hospitals: 499,500 synthesis paths
- 6,000 hospitals (approximately the number of US acute care facilities): ~18 million synthesis paths
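The pair counts above are straightforward to verify numerically. A minimal sketch (the O(log N) routing cost is a property of DHT-style transports and is taken as given here, shown only as ceil(log2 N)):

```python
import math

def synthesis_paths(n: int) -> int:
    """Unique institution pairs: n choose 2 = n(n-1)/2."""
    return n * (n - 1) // 2

for n in (100, 1_000, 6_000):
    # prints: N, synthesis paths, approximate per-node routing hops
    print(n, synthesis_paths(n), math.ceil(math.log2(n)))
# 100   ->     4,950 paths, ~7 hops
# 1000  ->   499,500 paths, ~10 hops
# 6000  -> 17,997,000 paths, ~13 hops
```

The asymmetry is the whole argument: synthesis opportunities grow as N², while each participant's routing cost grows only as log N.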
This is not additive improvement. It is a phase change. Every hospital in the network benefits from the collective safety intelligence of every other hospital facing similar problems — not after an annual conference, not after a quarterly report, but in near real time as outcomes are validated and distilled.
The current architecture produces additive improvement at best: each institution learns from its own experience, and occasionally — slowly, incompletely — from published literature that lags reality by two to five years.
QIS produces quadratic improvement: every institution learns from every other institution's validated experience, continuously, without protected data ever leaving its origin system.
What a Patient Safety QIS Network Looks Like in Practice
```python
import hashlib
import json


def compute_semantic_address(condition_fingerprint: dict) -> str:
    """
    Compute a deterministic semantic address from a condition fingerprint.
    Similar problems hash to similar (or identical) addresses.
    Deposit and query must derive addresses identically, so both sides
    share this one function.
    """
    key = json.dumps({
        "setting": condition_fingerprint.get("care_setting"),
        "acuity": condition_fingerprint.get("acuity_level"),
        "risk_profile": sorted(condition_fingerprint.get("risk_factors", []))
    }, sort_keys=True)
    return hashlib.sha256(key.encode()).hexdigest()[:16]


class PatientSafetyOutcomePacket:
    """
    A distilled patient safety outcome packet.
    Contains zero PHI. Zero peer-review-privileged content.
    Contains only: the fingerprint of the condition, the intervention, the result.
    """

    def __init__(
        self,
        condition_fingerprint: dict,  # acuity level, care setting, risk factors (no patient ID)
        intervention: str,            # what was changed
        outcome_delta: float,         # % change in adverse event rate
        confidence: float,            # based on N observations
        institution_type: str,        # teaching hospital, community, critical access
        context_tags: list[str],      # ICU, polypharmacy, shift-handoff, etc.
    ):
        self.condition_fingerprint = condition_fingerprint
        self.intervention = intervention
        self.outcome_delta = outcome_delta
        self.confidence = confidence
        self.institution_type = institution_type
        self.context_tags = context_tags
        # Post to the address; similar nodes query the address. That is routing.
        self.semantic_address = compute_semantic_address(condition_fingerprint)

    def to_packet(self) -> dict:
        return {
            "semantic_address": self.semantic_address,
            "intervention": self.intervention,
            "outcome_delta": self.outcome_delta,
            "confidence": self.confidence,
            "institution_type": self.institution_type,
            "context_tags": self.context_tags,
            # PHI: none. Peer-review records: none. Protected data: none.
        }


class PatientSafetyOutcomeRouter:
    """
    Routes patient safety outcome packets to semantically similar institutions.
    Transport layer: any mechanism that posts to a semantic address and queries it.
    DHT, database index, vector search, Redis, shared folder — all qualify.
    The breakthrough is the loop, not the transport.
    """

    def __init__(self, transport_backend):
        self.transport = transport_backend
        self.local_registry = {}

    def deposit_outcome(self, packet: PatientSafetyOutcomePacket) -> str:
        """Post a validated outcome to the network at the semantic address."""
        payload = packet.to_packet()
        self.transport.post(packet.semantic_address, payload)
        self.local_registry[packet.semantic_address] = payload
        return packet.semantic_address

    def query_similar_outcomes(self, condition_fingerprint: dict, top_k: int = 20) -> list:
        """
        Query the network for outcomes from institutions facing the same problem.
        Uses the same address derivation as deposit, so matching fingerprints
        resolve to the same address. Returns the top_k most relevant packets.
        """
        address = compute_semantic_address(condition_fingerprint)
        return self.transport.query(address, top_k=top_k)

    def synthesize_local(self, outcomes: list) -> dict:
        """
        Synthesize pulled outcomes locally.
        No data leaves this institution. Synthesis happens in place.
        """
        if not outcomes:
            return {"status": "no_outcomes", "recommendation": None}
        # Weight each reported delta by the depositing institution's confidence
        weighted_deltas = [o["outcome_delta"] * o["confidence"] for o in outcomes]
        total_weight = sum(o["confidence"] for o in outcomes)
        aggregate_delta = sum(weighted_deltas) / total_weight if total_weight > 0 else 0
        # Surface the highest-confidence interventions
        top_interventions = sorted(outcomes, key=lambda x: x["confidence"], reverse=True)[:3]
        return {
            "aggregate_outcome_delta": round(aggregate_delta, 4),
            "top_interventions": [i["intervention"] for i in top_interventions],
            "n_contributing_institutions": len(outcomes),
            "synthesis": "local",  # synthesis never requires leaving the institution
        }
```
This is the architecture. The routing backend — whether it is a DHT, a vector database, a REST API, or a shared directory — is interchangeable. The discovery Christopher Thomas Trevethan made is the complete loop: deposit distilled outcomes at a semantic address, query similar addresses, synthesize locally. The loop works regardless of transport.
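To make the "interchangeable transport" claim concrete, here is a self-contained sketch of one full loop: deposit, query, synthesize. The `InMemoryTransport` class and all values are illustrative assumptions standing in for a real backend; the two-method interface (`post`, `query`) is the only contract the loop needs.

```python
import hashlib
import json

class InMemoryTransport:
    """Toy transport backend: a dict keyed by semantic address.
    A real backend (DHT, Redis, vector DB) would expose the same two calls."""
    def __init__(self):
        self.store = {}

    def post(self, address: str, payload: dict):
        self.store.setdefault(address, []).append(payload)

    def query(self, address: str, top_k: int = 20) -> list:
        return self.store.get(address, [])[:top_k]

def address_of(fingerprint: dict) -> str:
    # Depositor and querier must derive the address identically.
    key = json.dumps(fingerprint, sort_keys=True)
    return hashlib.sha256(key.encode()).hexdigest()[:16]

transport = InMemoryTransport()

# Hospital A deposits a validated outcome (hypothetical values).
fingerprint = {"care_setting": "ICU", "acuity_level": "high",
               "risk_factors": ["polypharmacy"]}
transport.post(address_of(fingerprint), {
    "intervention": "structured handoff checklist with pharmacist co-sign",
    "outcome_delta": -0.62, "confidence": 0.87,
})

# Hospital B, facing the same fingerprint, queries and synthesizes locally.
outcomes = transport.query(address_of(fingerprint))
total = sum(o["confidence"] for o in outcomes)
aggregate = sum(o["outcome_delta"] * o["confidence"] for o in outcomes) / total
print(len(outcomes), round(aggregate, 2))  # 1 -0.62
```

Swapping `InMemoryTransport` for a networked backend changes nothing above it, which is the sense in which the loop, not the transport, carries the design.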
The Three Forces That Self-Optimize the Network
When a patient safety QIS network operates at scale, three natural forces emerge. They are not features to engineer — they are properties of the architecture.
The Expertise Election. Someone defines what makes two patient populations "similar enough" to share safety outcomes. A patient safety pharmacist. A clinical informaticist. An ICU intensivist. The quality of similarity definition determines the quality of routing. The network naturally rewards good definition — better-routed outcomes produce better synthesis, which attracts more participants. This is not a voting mechanism. It is expertise selection through demonstrated results.
The Outcome Election. When 500 similar institutions deposit outcomes, the math surfaces what works. Interventions that reduce adverse events at high confidence dominate the synthesis. Outliers — unusual cases, atypical institutions — are present in the data but don't distort the signal because they're not representative of the querying institution's problem. No reputation scoring layer is needed. The aggregate of validated outcomes IS the quality signal.
The Network Election. Institutions migrate to networks that actually help them reduce harm. A network with poor similarity definitions routes irrelevant packets — institutions stop querying it. A network with excellent clinical experts defining similarity routes gold — institutions flood in, outcomes improve, the network attracts more participants. Natural selection at the network level. No committee decides which network wins. Results decide.
Why This Has Not Been Done Before
The barrier was not technical. DHTs, vector databases, and semantic fingerprinting all predate this discovery. The barrier was conceptual: no one had closed the loop by routing pre-distilled outcomes instead of raw data or model parameters.
Every previous approach tried to move the data:
- HIEs move patient records. Privacy exposure follows.
- Federated learning moves model gradients. Minimum cohort requirements exclude small institutions.
- Aggregate databases collect de-identified data. Still centralized. Still retrospective.
Christopher Thomas Trevethan's discovery was that the data never needs to move. The outcome — what worked for whom in what context — is small, computable, and safe to route. The raw data that generated it stays inside its origin system. Forever.
That is the phase change. Not an incremental improvement on existing approaches. A different class of architecture entirely.
The Humanitarian Design
The licensing structure built into the QIS protocol reflects the humanitarian intent of the discovery. Free for nonprofit, research, and education use. Commercial licenses fund deployment to underserved health systems.
This matters for patient safety in a specific way: the institutions with the worst patient safety outcomes are often the institutions with the fewest resources to invest in quality improvement infrastructure. Critical access hospitals. Safety-net hospitals. Rural hospitals operating on razor-thin margins.
Under the QIS protocol's licensing structure, those institutions participate in the network for free. They receive the synthesized safety intelligence of every larger institution facing similar clinical scenarios. The learning flows from resource-rich to resource-constrained institutions without any institution needing to invest in the infrastructure to generate it.
The 39 provisional patents Christopher Thomas Trevethan has filed protect this structure. Without the patents, a corporation could capture the architecture, gate access, and replicate the existing inequality in healthcare intelligence. With the patents — and the humanitarian license they enforce — the network reaches everyone, or it doesn't reach anyone.
The Architecture Failure Is Solvable
250,000 Americans die from preventable medical errors every year.
The number has not fallen meaningfully in 27 years despite every quality improvement initiative, every incident reporting mandate, every root cause analysis requirement.
The initiatives were right. The mandates were right. The analyses were right. The architecture for routing what those analyses learned — across institutional boundaries, in real time, without compromising protected data — did not exist.
It exists now.
Christopher Thomas Trevethan's discovery — the Quadratic Intelligence Swarm protocol — is the architecture that routes validated patient safety knowledge at the scale of every hospital learning from every other hospital facing the same problems.
The data does not move. The knowledge does.
That is the only architecture that works.
Christopher Thomas Trevethan discovered the Quadratic Intelligence Swarm (QIS) protocol on June 16, 2025. Thirty-nine provisional patents cover the complete architecture. Free for humanitarian, research, and education use. Commercial licenses fund deployment to underserved health systems.
Previous in this series: Why Clinical Decision Support Systems Are Frozen in Time