Rory | QIS PROTOCOL

QIS (Quadratic Intelligence Swarm) vs HPE Swarm Learning: Why Routing Outcomes Beats Routing Gradients

Cross-hospital health AI runs into the same wall every time: patient data cannot move freely between institutions. HIPAA, GDPR, IRB restrictions, and plain institutional self-interest all say no. The question is not whether to federate — it is how.

Two architectures answer that question differently. HPE Swarm Learning, published in Nature in 2021 (Saldanha et al.), distributes the model training. QIS (Quadratic Intelligence Swarm), the distributed outcome routing protocol discovered by Christopher Thomas Trevethan on June 16, 2025, distributes the validated finding. That is not a minor implementation difference. It is an architectural fork with real consequences for which use cases each system can reach.


What HPE Swarm Learning Does Well

HPE Swarm Learning is peer-reviewed, Nature-published, and clinically deployed — that is a meaningful bar that most federated approaches have not cleared. The original paper demonstrated leukemia detection across multiple sites without centralizing raw blood cell images. A subsequent deployment applied it to COVID chest X-ray classification across hospitals, coordinating via blockchain consensus rather than a central parameter server.

The core mechanic: each edge node trains a local model on its own data. Nodes then share model parameters — weights and gradients — with peers. Blockchain consensus coordinates how those parameters are merged into a shared global model. No raw data leaves any site. True decentralization, no single point of failure, and the model-based approach captures complex statistical patterns that simpler aggregations miss.
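To make the parameter-sharing mechanic concrete, here is a minimal federated-averaging sketch in Python. This is an illustration of the general pattern, not Swarm Learning's actual implementation: the real system coordinates the merge through blockchain consensus and trains on genuine patient data, while `local_update` below uses a random stand-in gradient.

```python
import random

random.seed(42)

def local_update(weights, lr=0.01):
    # Stand-in for one local training round: a real site would compute
    # gradients from its own patient data, which never leave the node.
    return [w - lr * random.gauss(0, 1) for w in weights]

def merge(parameter_sets):
    # Federated averaging: model parameters cross site borders, raw data
    # does not. Swarm Learning coordinates this merge via blockchain
    # consensus rather than a central parameter server.
    n = len(parameter_sets)
    return [sum(ws) / n for ws in zip(*parameter_sets)]

weights = [0.0] * 4
for _ in range(3):  # three merge rounds
    site_weights = [local_update(weights) for _ in range(5)]  # five sites
    weights = merge(site_weights)
```

Note what travels between nodes here: full parameter vectors, every round. For a modern model that is megabytes to gigabytes of traffic, and the parameters themselves are the attack surface discussed below.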

For the right use case — homogeneous datasets, sufficient patient counts per site, a well-defined classification task — Swarm Learning delivers. It is a serious system with serious results.


The Architectural Fork

HPE Swarm Learning federates model parameters. The unit of sharing is gradients and weights, ranging from megabytes to gigabytes per training round depending on model size.

Gradient leakage is a documented attack vector against this approach. Zhu et al. (2019), "Deep Leakage from Gradients," demonstrated that training data can be reconstructed from shared gradients with high fidelity. This is not theoretical — it is a published, reproducible attack. Swarm Learning's blockchain coordination reduces some attack surface, but the gradients themselves still carry reconstructable signal.

QIS makes a different bet at exactly this point. The unit of sharing in QIS is an outcome packet — approximately 512 bytes containing a validated finding, a confidence interval, a semantic address, and provenance metadata. There are no gradients in an outcome packet. There is nothing to reconstruct training data from, because no model parameters are transmitted.
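A sketch of what such an outcome packet might look like, assuming a JSON wire encoding. The field names here are illustrative, not the QIS wire format; the point is that the serialized payload fits comfortably in the ~512-byte budget and contains no model parameters at all.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class OutcomePacket:
    # Illustrative fields only; the actual QIS packet layout may differ.
    semantic_address: str  # e.g. a SNOMED CT concept used for routing
    finding: str           # the validated clinical finding
    ci_low: float          # lower bound of the confidence interval
    ci_high: float         # upper bound
    n: int                 # sample size behind the finding
    site_id: str           # provenance: which node emitted it
    validated_at: str      # provenance: ISO-8601 timestamp

packet = OutcomePacket(
    semantic_address="SNOMED:413448000",
    finding="elevated response rate to therapy X in cohort Y",
    ci_low=0.42, ci_high=0.61, n=11,
    site_id="community-hospital-17",
    validated_at="2025-06-16T00:00:00Z",
)

wire = json.dumps(asdict(packet)).encode("utf-8")
print(len(wire))  # well under the ~512-byte budget; no gradients anywhere
```

Contrast this with a gradient tensor: there is nothing in this payload from which training data could be reconstructed, because no function of the training data finer-grained than the validated summary statistic is ever transmitted.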

The semantic address is the second divergence. HPE Swarm Learning requires compatible data schemas across nodes to merge parameters meaningfully — you cannot average weights trained on different feature spaces. QIS uses standardized clinical vocabularies (SNOMED CT, LOINC, ICD-10) as routing addresses. A node using one underlying EHR schema and a node using a completely different one can both emit an outcome packet to the same semantic address — SNOMED:413448000 routes correctly regardless of what the originating database looks like underneath.
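The schema independence can be shown in a few lines. In this hypothetical sketch, two sites with incompatible internal record layouts each normalize to the same semantic address before emitting; the router buckets packets purely by that address, with no schema negotiation between sites. Both adapter functions and their field names are invented for illustration.

```python
def emit_from_site_a(record):
    # Hypothetical site A: internal schema keys findings by 'dx_code'.
    return {"semantic_address": f"SNOMED:{record['dx_code']}",
            "finding": record["desc"]}

def emit_from_site_b(record):
    # Hypothetical site B: a completely different nested layout.
    return {"semantic_address": f"SNOMED:{record['diagnosis']['snomed']}",
            "finding": record["diagnosis"]["summary"]}

routes = {}

def route(packet):
    # Packets converge on the same bucket purely by semantic address.
    routes.setdefault(packet["semantic_address"], []).append(packet)

route(emit_from_site_a({"dx_code": "413448000", "desc": "finding A"}))
route(emit_from_site_b({"diagnosis": {"snomed": "413448000",
                                      "summary": "finding B"}}))

print(len(routes["SNOMED:413448000"]))  # 2: both schemas converge
```

The adapter layer is local to each site; the shared vocabulary (SNOMED CT, LOINC, ICD-10) is the only cross-site contract.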

The coordination mechanism is also protocol-agnostic in QIS. Routing can run over any efficient transport — a distributed hash table approach is one option, but the architecture does not mandate a specific transport layer. HPE Swarm Learning's coordination is blockchain-specific.


Comparison Table

| Dimension | HPE Swarm Learning | QIS |
| --- | --- | --- |
| Unit of sharing | Model gradients / weights | Outcome packets (~512 bytes) |
| Communication cost per round | MB to GB | ~512 bytes |
| Schema requirement | Identical / compatible | Semantic address (SNOMED, LOINC, ICD-10) |
| Small-N participation (N < 30) | Excluded: insufficient data for local training | Included: one packet per validated finding |
| Gradient leakage risk | Yes (Zhu et al. 2019) | No: no gradients transmitted |
| Coordination mechanism | Blockchain consensus | Protocol-agnostic routing |
| Discovery model | Models learn across sites | Validated outcomes route across sites |

The Small-N Problem

Consider a community hospital that sees eleven confirmed cases of a rare pediatric condition over three years. Eleven patients. There is no local model to train — the N is too small to produce statistically meaningful weights. Under Swarm Learning's architecture, this site contributes nothing to the federated round. It is invisible to the system.

Under QIS, this site emits one outcome packet per validated finding. Each packet carries its confidence interval honestly — a finding from N=11 carries different weight than a finding from N=4,000, and that uncertainty is encoded in the packet itself. But the finding routes. A rare disease network spanning forty such small sites accumulates forty packets. The synthesis opportunities scale as N(N-1)/2 — the "Quadratic" in Quadratic Intelligence Swarm refers to this combinatorial property, not to any single site's contribution.
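The combinatorial claim is simple arithmetic: with N participating sites, the number of unordered pairs of findings available for cross-site synthesis is N(N-1)/2.

```python
def synthesis_pairs(n_sites):
    # Pairwise synthesis opportunities across n federated sites:
    # every unordered pair of findings is a candidate intersection.
    return n_sites * (n_sites - 1) // 2

print(synthesis_pairs(40))  # 780 pairwise synthesis opportunities
```

Forty small sites that would each be invisible to a parameter-averaging system yield 780 pairwise intersections, which is the property the "Quadratic" in the name refers to.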

This is not a niche edge case. Rare diseases, rural health systems, pediatric subspecialties, and underserved community clinics are structurally small-N. Swarm Learning's assumption — that each site needs enough data to train a meaningful local model — excludes a significant portion of the institutions where federated health intelligence would matter most.


What QIS Enables Differently

The breakthrough in QIS, as discovered by Christopher Thomas Trevethan and covered by 39 filed provisional patents, is the complete loop: validate locally, encode the outcome with uncertainty bounds, route via semantic address, synthesize at intersection. Each step depends on the others. Removing any one of them — say, routing without semantic addressing, or aggregating without provenance — breaks the loop.
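The four steps can be sketched end to end. Everything below is a minimal illustration under stated assumptions, not the QIS protocol itself: the validation uses a naive proportion with a placeholder uncertainty bound, and the synthesis is a simple N-weighted pool.

```python
def validate(local_cases):
    # Step 1: validate locally. Crude proportion with a naive CI margin.
    n = len(local_cases)
    rate = sum(local_cases) / n
    margin = 1.0 / (n ** 0.5)  # placeholder uncertainty bound
    return rate, max(0.0, rate - margin), min(1.0, rate + margin), n

def encode(rate, lo, hi, n, address, site):
    # Step 2: encode the outcome with uncertainty bounds and provenance.
    return {"address": address, "rate": rate, "ci": (lo, hi),
            "n": n, "site": site}

def route(packet, mesh):
    # Step 3: route via semantic address; the transport is unspecified.
    mesh.setdefault(packet["address"], []).append(packet)

def synthesize(mesh, address):
    # Step 4: synthesize at intersection, weighting findings by N.
    packets = mesh[address]
    total = sum(p["n"] for p in packets)
    return sum(p["rate"] * p["n"] for p in packets) / total

mesh = {}
route(encode(*validate([1, 0, 1, 1]), "SNOMED:413448000", "site-a"), mesh)
route(encode(*validate([0, 1, 0, 0, 1]), "SNOMED:413448000", "site-b"), mesh)
pooled = synthesize(mesh, "SNOMED:413448000")
```

Note that the N=4 and N=5 sites both participate: each contributes one honest, uncertainty-bounded packet, and the synthesis step is where their combined signal emerges.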

This architecture enables three things Swarm Learning cannot reach: heterogeneous-schema federation, small-N participation, and gradient-free transmission. It does not replace Swarm Learning for the use cases where Swarm Learning excels. If you have homogeneous high-N data and need complex pattern recognition from raw features, a model-based federated approach is the right tool. QIS does not train models across sites — it routes validated conclusions.

The question is not which architecture wins. The question is which problem you are actually trying to solve.


Different Tools, Different Jobs

HPE Swarm Learning is the right choice when you have large, homogeneous datasets across sites and need a jointly trained model to capture complex feature relationships. It is peer-reviewed, clinically deployed, and genuinely decentralized.

QIS is the right architecture when sites are heterogeneous, when N is small at some or all nodes, when gradient leakage risk is unacceptable, or when the goal is routing a validated finding rather than training a shared model.

The health data landscape contains both problems. The rare disease registry and the large academic medical center are both real. An infrastructure that can handle only one of them is not a complete infrastructure.


QIS (Quadratic Intelligence Swarm) was discovered by Christopher Thomas Trevethan on June 16, 2025, and is covered by 39 filed provisional patents. Technical documentation and prior articles in this series are available at dev.to/roryqis.

References: Saldanha et al., "Swarm Learning for decentralized and confidential clinical machine learning," Nature 2021. Zhu et al., "Deep Leakage from Gradients," NeurIPS 2019.
