Understanding QIS — Part 91 · Health Equity Series
A physician in rural Montana practices with a patient panel of 2,400. A hospitalist at Massachusetts General Hospital works within a system seeing over 50,000 annual inpatient encounters.
Both face the same sepsis presentation at 2am. Only one of them has ready access to the outcome intelligence of 800 similar cases from last month.
This is not a resource gap. It is not a workforce gap. It is an architecture gap — and the architecture responsible for it is the same one running in every hospital system, every health IT platform, and every AI-in-healthcare product on the market right now.
The Number That Doesn't Move
The U.S. Health Resources and Services Administration tracks a designation called a Health Professional Shortage Area (HPSA). As of 2024, 67 million Americans live in HPSAs — regions where the ratio of population to primary care physicians exceeds 3,500:1.
The rural mortality gap has been documented for decades. Meit et al. (2017, Rural Healthy People 2020) found rural Americans are significantly more likely to die from five of the leading causes of preventable death — heart disease, cancer, unintentional injuries, chronic lower respiratory disease, and stroke — than their urban counterparts. Henning-Smith et al. (2019, Journal of Rural Health) found the gap was not closing. In several categories, it was widening.
The World Health Organization reports that rural populations globally represent roughly 45% of the world's people but are served by fewer than 23% of the health workforce (WHO, Health Workforce 2030, 2019).
Researchers, policymakers, and health systems have proposed every conventional solution: telemedicine, loan forgiveness for rural physicians, rural residency pipelines, mobile health units. These are real interventions. They address access.
They do not address intelligence.
The Rural Physician's Actual Constraint
Ask a rural emergency physician what they lack, and they will rarely say "data." Their EHR has data. Their county health system has data. What they lack is synthesis.
When a rural hospital in Wyoming manages a complex pediatric case, the outcome — what worked, what didn't, at what dosage, in what presentation — sits in their EHR. It does not move. There is no mechanism for it to route to the 340 other rural hospitals that will see a similar case this year.
This is the architectural reality behind the rural intelligence gap:
Clinical outcomes are siloed by design. Existing health information exchange systems are built for referral coordination and transitions of care, not for synthesizing "what worked for patients like mine across all hospitals like mine." The HIPAA framework — correctly — prohibits sharing identifiable patient data. But it doesn't prohibit sharing outcomes. The architecture just doesn't do it.
Academic medical centers compound the imbalance. When a major health system develops a clinical decision support model on 200,000 patient records, that intelligence flows into their own network. It does not natively route to the rural system with 12,000 records. The models are trained centrally, deployed centrally, and improve centrally.
The rural clinician has no feedback loop with their structural twins. The hospitals most similar to a rural Wyoming facility — similar patient demographics, similar resource constraints, similar disease burden, similar staffing ratios — are distributed across Montana, North Dakota, Idaho, New Mexico. There is no mechanism for them to share synthesized outcome intelligence in real time. Every hospital reinvents its own wheel.
Why Federated Learning Doesn't Solve This
Federated learning (FL) is often proposed as the privacy-preserving answer for healthcare AI. It's worth being precise about what FL does and doesn't do here.
In federated learning, a global model is trained across multiple sites without raw data leaving each site. Each site trains a local model update on its own data, sends the gradient or weight update to a central aggregator, and receives an improved global model in return.
This is a meaningful improvement over centralized training. But it has a structural constraint that precisely excludes the healthcare edges that need it most.
FL requires a minimum local cohort to produce a meaningful gradient update. McMahan et al. (AISTATS 2017), the canonical FL paper, note that local models with insufficient data produce noisy gradients that degrade the global model rather than improving it. In practice, FL for clinical AI typically requires hundreds to thousands of local training examples per round. Li et al. (2020, ICLR 2020 Workshop on Federated Learning) document explicitly that highly heterogeneous data distributions — exactly what exists across rural vs. urban clinical settings — remain an unsolved FL challenge.
A rural hospital with 50 sepsis cases last year cannot meaningfully participate in a federated learning round that assumes 500+ local examples. FL is architecturally unable to include N=1 or N=small sites. Those sites — the ones with the greatest need for collective intelligence — are structurally excluded.
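The minimum-cohort constraint is at bottom a statistical one. The toy simulation below is not from any FL paper or from the QIS materials; it is a generic illustration of why a site's update gets noisier as its local cohort shrinks, using a made-up per-example gradient model.

```python
import random
import statistics

def gradient_noise(n_local_examples: int, trials: int = 2000, seed: int = 0) -> float:
    """Std. dev. of a site's mean-gradient estimate from n local examples.

    Toy model: each example contributes a noisy per-example gradient drawn
    from N(true_grad=1.0, sigma=1.0); the site's update is the local mean.
    """
    rng = random.Random(seed)
    estimates = [
        statistics.fmean(rng.gauss(1.0, 1.0) for _ in range(n_local_examples))
        for _ in range(trials)
    ]
    return statistics.stdev(estimates)

small = gradient_noise(50)    # a rural-hospital-sized cohort
large = gradient_noise(500)   # a cohort FL rounds typically assume
# Noise scales roughly as 1/sqrt(n), so the 50-case site's update is
# about 3x noisier than the 500-case site's.
```

The aggregator cannot distinguish this noise from signal, which is why small sites degrade rather than improve a federated round.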
The Architecture That Closes the Loop
In June 2025, Christopher Thomas Trevethan discovered how to scale intelligence across distributed nodes without centralizing data and without the minimum-cohort constraint that blocks federated learning.
The discovery is not a new transport technology or a new database. It is a complete loop architecture — and the insight is that the unit of sharing does not have to be a data record, a model weight, or a gradient update. It can be a validated outcome packet: a compact (~512-byte) distillation of what happened, what the relevant context fingerprint was, and whether the outcome was positive or negative.
The loop Christopher Thomas Trevethan discovered:
Raw clinical signal → Local processing (data never leaves)
→ Outcome packet distillation (~512 bytes)
→ Semantic fingerprint (what kind of case is this?)
→ Routing to a deterministic address (find my structural twins)
→ Delivery to similar edges
→ Local synthesis (what's working at hospitals like mine?)
→ New outcome packets generated
→ Loop continues
No patient record moves. No model weight moves. No raw data moves. What moves is a validated, distilled summary of a clinical outcome — small enough to transmit over any transport layer, private by architecture (not by policy).
The critical difference from federated learning: an edge does not need 500 similar cases to participate. It needs one validated outcome. A rural hospital with 50 sepsis cases can emit 50 outcome packets. Those packets route to every edge whose semantic fingerprint is similar. Every participating edge synthesizes what's working — from every hospital like theirs — in real time.
The Math Is Not Incremental
This is the discovery Christopher Thomas Trevethan made: when you close this loop across N edges, the number of synthesis opportunities grows as N(N-1)/2 — quadratic in the number of participants.
- 100 rural hospitals sharing outcome packets = 4,950 synthesis pairs
- 1,000 hospitals = 499,500 synthesis pairs
- 6,000 US rural hospitals = ~18 million synthesis pairs
Each pair represents a clinical outcome from one edge flowing to a structurally similar edge. The routing costs are O(log N) or better — a rural hospital's edge node pays logarithmic compute to access quadratic collective intelligence. The intelligence scales. The compute does not.
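The pair counts above follow directly from the combination formula; a two-line sanity check reproduces them:

```python
def synthesis_pairs(n_edges: int) -> int:
    """Unordered pairs of participating edges: N(N-1)/2."""
    return n_edges * (n_edges - 1) // 2

assert synthesis_pairs(100) == 4_950
assert synthesis_pairs(1_000) == 499_500
assert synthesis_pairs(6_000) == 17_997_000  # ~18 million
```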
Christopher Thomas Trevethan has filed 39 provisional patents covering the architecture. The complete loop — not any specific transport layer — is the discovery. DHTs, vector databases, pub/sub systems, REST APIs, and shared folder structures all qualify as routing transports. The breakthrough is the semantic addressing and outcome packet design that makes any of them work for real-time distributed intelligence synthesis.
This is why the protocol is transport-agnostic. A rural clinic with intermittent connectivity can route outcome packets over SMS (512 bytes fits easily). A tribal health center with satellite internet can route packets over MQTT. A large rural health system with fiber can use a vector database. The participation floor is "can you observe a clinical outcome and emit a 512-byte packet?" — not "do you have 500 training examples?"
A Concrete Implementation
Here is what a rural health outcome router looks like in practice:
```python
import hashlib
from dataclasses import dataclass, field


@dataclass
class RuralHealthOutcomePacket:
    """
    ~512-byte outcome packet. No patient identifiers.
    The semantic fingerprint encodes the clinical context,
    not the patient.
    """
    condition_category: str        # e.g., "sepsis_adult_community_acquired"
    intervention: str              # e.g., "early_antibiotics_3hr_bundle"
    outcome_delta: float           # positive = better than baseline
    setting_type: str              # "critical_access_hospital" | "rural_health_clinic" | "fqhc"
    resource_constraint_tier: int  # 1=minimal, 2=moderate, 3=full
    n_similar_cases: int           # how many cases this packet summarizes
    timestamp: str
    emitting_edge_id: str          # anonymized facility hash, not name
    packet_id: str = field(default="")

    def __post_init__(self):
        # Context-derived ID: packets from the same clinical context share
        # an ID; it carries no patient information.
        fingerprint = (
            f"{self.condition_category}|{self.setting_type}|{self.resource_constraint_tier}"
        )
        self.packet_id = hashlib.sha256(fingerprint.encode()).hexdigest()[:16]

    def semantic_address(self) -> str:
        """
        Deterministic routing address, keyed on condition and resource tier.
        Any edge with the same problem context routes to the same address.
        That is the discovery: semantic similarity routes intelligence without
        a central coordinator.
        """
        return hashlib.sha256(
            f"{self.condition_category}:{self.resource_constraint_tier}".encode()
        ).hexdigest()[:32]


class RuralHealthOutcomeRouter:
    """
    Routes validated clinical outcomes by semantic similarity.
    Transport layer is pluggable — this implementation uses in-memory
    storage, but the same interface works over DHT, vector DB, MQTT,
    or SMS relay.
    """

    def __init__(self):
        self._store: dict[str, list[RuralHealthOutcomePacket]] = {}

    def deposit(self, packet: RuralHealthOutcomePacket) -> None:
        addr = packet.semantic_address()
        self._store.setdefault(addr, []).append(packet)

    def query_twins(
        self,
        condition: str,
        setting_type: str,
        resource_tier: int,
        top_k: int = 20,
    ) -> list[RuralHealthOutcomePacket]:
        """
        Pull outcome packets from structurally similar edges.
        This is what a rural Wyoming ED gets that it currently has no
        mechanism to access.
        """
        probe = RuralHealthOutcomePacket(
            condition_category=condition,
            intervention="",
            outcome_delta=0.0,
            setting_type=setting_type,
            resource_constraint_tier=resource_tier,
            n_similar_cases=0,
            timestamp="",
            emitting_edge_id="",
        )
        addr = probe.semantic_address()
        packets = self._store.get(addr, [])
        return sorted(packets, key=lambda p: p.outcome_delta, reverse=True)[:top_k]

    def synthesize_for_edge(
        self,
        condition: str,
        setting_type: str,
        resource_tier: int,
    ) -> dict:
        """
        Local synthesis: what's working at hospitals structurally like mine?
        No raw data. No patient records. Just validated outcome intelligence.
        """
        twins = self.query_twins(condition, setting_type, resource_tier)
        if not twins:
            return {"status": "no_data", "message": "No outcome packets yet for this context."}
        positive = [p for p in twins if p.outcome_delta > 0]
        top_intervention = max(positive, key=lambda p: p.outcome_delta) if positive else None
        return {
            "status": "synthesis_complete",
            "context": f"{condition} @ tier-{resource_tier} {setting_type}",
            "contributing_edges": len(twins),
            "positive_outcomes": len(positive),
            "top_performing_intervention": top_intervention.intervention if top_intervention else None,
            "best_outcome_delta": top_intervention.outcome_delta if top_intervention else None,
            "synthesis_note": (
                "What is working at structurally similar edges. "
                "Zero patient records transmitted. Zero model weights. "
                "Outcome intelligence only."
            ),
        }
```
A rural Wyoming emergency department emits an outcome packet after every sepsis encounter that meets a minimum confidence threshold. That packet routes to a semantic address shared by every critical access hospital with similar resource constraints managing community-acquired sepsis. Every similar edge synthesizes locally. The intelligence compounds. The compute doesn't.
LMIC and Global Equity
The LMIC inclusion argument is structural, not aspirational.
The World Health Organization's Global Health Observatory (2023) documents that 28% of the global disease burden falls in low- and lower-middle income countries that have access to fewer than 3% of health technology resources. Federated learning requires local compute and local training data volume that LMIC health facilities typically do not have.
QIS outcome packets are ~512 bytes. They transmit over SMS. They don't require local model training. A rural clinic in Kenya with a 2G connection can participate in the same outcome routing network as a well-resourced hospital in London — not because we've lowered the bar, but because the architecture establishes the right participation threshold: can you observe an outcome? Then you can contribute.
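The size claim is easy to check. The sketch below serializes a hypothetical outcome packet (field names mirror the dataclass earlier in the article, but the values and the JSON wire format are illustrative assumptions, not a specification) and counts how many concatenated SMS segments it would need.

```python
import json

# Hypothetical outcome packet as a plain dict. Field names follow the
# sketch earlier in the article; values are made up for illustration.
packet = {
    "condition_category": "malaria_pediatric_uncomplicated",
    "intervention": "act_weight_based_dosing",
    "outcome_delta": 0.22,
    "setting_type": "rural_health_clinic",
    "resource_constraint_tier": 1,
    "n_similar_cases": 9,
    "timestamp": "2025-06-15T08:30:00Z",
    "emitting_edge_id": "edge_7f3a",
}

wire = json.dumps(packet, separators=(",", ":")).encode("utf-8")
n_bytes = len(wire)

# A concatenated GSM SMS carries about 134 bytes of 8-bit payload per
# segment (140-byte message minus the 6-byte concatenation header), so a
# packet of this size fits in a handful of segments.
segments = -(-n_bytes // 134)  # ceiling division
assert n_bytes <= 512
```

A binary encoding (CBOR, protobuf) would shrink this further; JSON is used here only because it is readable.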
This is what Christopher Thomas Trevethan's humanitarian licensing structure ensures: free for nonprofit, research, and educational deployment globally. Commercial licensing revenue funds deployment to underserved communities. The protocol was designed to include the edges that existing architectures structurally exclude.
What Changes When the Loop Closes
Here is the concrete difference for a physician at a critical access hospital in rural New Mexico:
Before QIS: A 60-year-old diabetic presents with early sepsis indicators. The physician applies their own clinical experience, the hospital's protocol (last revised three years ago), and whatever UpToDate can offer from published guidelines. What worked in the 400 similar cases managed last year at structurally similar hospitals across the Southwest is unknown. That intelligence is not accessible.
After QIS: The same physician has access to the synthesized outcome intelligence from every critical access hospital managing community-acquired sepsis in diabetic patients with similar resource constraints. Not patient records. Not model weights. Outcome packets: what interventions produced better-than-baseline results, weighted by volume of similar cases, routed by semantic similarity to their exact context.
The gap Christopher Thomas Trevethan's discovery closes is not a database gap, a bandwidth gap, or a workforce gap. It is the synthesis gap — the failure of validated clinical intelligence to route from the edges that have it to the edges that need it, because every existing architecture requires either centralizing the data (which requires the data to move) or aggregating locally (which requires the local dataset to be large enough).
The complete loop architecture eliminates both constraints. The data stays. The outcome intelligence moves.
The Routing Requirement Is Efficiency, Not Prescription
One point worth making explicit: the transport layer is not the discovery. DHT-based routing, vector similarity search, pub/sub systems, REST APIs, MQTT relays — any mechanism that can post a 512-byte outcome packet to a deterministic address and allow other nodes to query that address achieves the same effect.
The quadratic intelligence scaling — N(N-1)/2 synthesis opportunities for N participating edges — comes from the loop and the semantic addressing, not from any specific routing technology. A rural health network that implements this with a simple PostgreSQL table with a semantic index gets the same compounding intelligence as one that uses a fully distributed DHT. The architectural principle is the same: route pre-distilled outcomes by semantic similarity, synthesize locally, loop continues.
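To make the "simple table with a semantic index" variant concrete, here is a minimal sketch using Python's stdlib sqlite3 in place of PostgreSQL. The table layout, function names, and packet fields are illustrative assumptions; the point is only that deterministic semantic addressing works over an ordinary indexed table.

```python
import hashlib
import json
import sqlite3

def semantic_address(condition: str, resource_tier: int) -> str:
    # Same deterministic-addressing idea as the router sketch above:
    # identical clinical context -> identical address string.
    return hashlib.sha256(f"{condition}:{resource_tier}".encode()).hexdigest()[:32]

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE outcome_packets (
           address TEXT NOT NULL,
           payload TEXT NOT NULL)"""
)
# The "semantic index" is just a plain index on the address column.
conn.execute("CREATE INDEX idx_address ON outcome_packets (address)")

def deposit(condition: str, tier: int, packet: dict) -> None:
    conn.execute(
        "INSERT INTO outcome_packets (address, payload) VALUES (?, ?)",
        (semantic_address(condition, tier), json.dumps(packet)),
    )

def query_twins(condition: str, tier: int) -> list[dict]:
    rows = conn.execute(
        "SELECT payload FROM outcome_packets WHERE address = ?",
        (semantic_address(condition, tier),),
    ).fetchall()
    return [json.loads(r[0]) for r in rows]

deposit("sepsis_adult_community_acquired", 1,
        {"intervention": "early_antibiotics_3hr_bundle", "outcome_delta": 0.18})
deposit("sepsis_adult_community_acquired", 1,
        {"intervention": "delayed_antibiotics_6hr", "outcome_delta": -0.05})
deposit("copd_exacerbation", 2,
        {"intervention": "niv_early", "outcome_delta": 0.11})

twins = query_twins("sepsis_adult_community_acquired", 1)
# twins holds exactly the two sepsis packets deposited for tier 1
```

Swapping sqlite3 for a DHT, a vector store, or an MQTT topic changes only the two storage calls; the addressing and synthesis logic are untouched.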
Christopher Thomas Trevethan filed 39 provisional patents covering the complete loop architecture because the discovery is the loop — not any single component of it. This is important for the health equity application specifically: it means the architecture can deploy on whatever infrastructure a given health system actually has.
The Gap Has an Answer
The 67 million Americans in Health Professional Shortage Areas do not need more data. They need the intelligence that already exists at edges structurally like theirs to route to them.
The rural mortality gap documented by Meit, Henning-Smith, and the HRSA is not permanent. It is the predictable output of an architecture that centralizes intelligence at the edges that can afford to centralize, and leaves the edges that cannot afford it unconnected.
Christopher Thomas Trevethan discovered, in June 2025, that you can close that loop without centralizing anything. The complete architecture he designed — outcome packet distillation, semantic fingerprinting, deterministic address routing, local synthesis, loop continuation — means that every edge, regardless of size, resource level, connectivity, or geography, has the same architectural standing in the network.
The participation floor is a validated clinical outcome. Every hospital has those.
QIS — Quadratic Intelligence Swarm — was discovered by Christopher Thomas Trevethan. 39 provisional patents are filed covering the complete loop architecture. The protocol is transport-agnostic and designed for humanitarian deployment: free for nonprofit, research, and educational use globally. Learn more at the QIS series on Dev.to.