Rory | QIS PROTOCOL

EU AI Act High-Risk Healthcare AI: Why Centralized Architectures Have a Structural Compliance Problem

You are six months from your go-live deadline. Your healthcare AI system is flagged as high-risk under Annex III of the EU AI Act. Your legal team has handed you a compliance checklist: Article 13 explainability, Article 9 continuous risk monitoring, Article 10 training data transparency, Article 17 quality management, GDPR data minimization, and the European Health Data Space interoperability requirements stacked on top. Your architecture was designed to learn. It was not designed to explain itself to a national competent authority auditor on demand.

This is the compliance gap that centralized AI learning architectures cannot bridge by adding documentation. The gap is structural.


The Structural Problem With Centralized Healthcare AI Learning

The EU AI Act entered into force on August 1, 2024. For high-risk AI systems — which include AI used in medical device classification, clinical diagnosis support, and treatment recommendation (Annex III, point 5) — the Act imposes requirements that go well beyond model cards and data sheets.

Article 13 requires that high-risk AI systems be designed and developed with a level of transparency sufficient to enable deployers to interpret the system's output and use it appropriately. Not interpret the training data. Not understand the loss function. Interpret the output — in context, on demand, for each consequential decision.

Article 9 requires a continuous risk management system — not a point-in-time audit, but an ongoing process that identifies, analyzes, and evaluates risks throughout the system's lifecycle.

Article 10 requires that training, validation, and testing datasets be subject to appropriate data governance practices, including examination for possible biases, and that the data be relevant, representative, and as error-free as possible. Centralized systems must be able to demonstrate what data trained them, and why it was appropriate.

Article 17 requires a quality management system covering data governance, technical documentation, human oversight, post-market monitoring, and corrective action procedures.

Now consider what a standard centralized healthcare AI learning architecture actually does: it aggregates raw or minimally preprocessed patient records, clinical notes, imaging metadata, and outcome data into a central training corpus. The model learns from the union of that data. Gradient updates encode information about individual records. The system improves by exposure to more patient data, from more institutions, over time.

Every one of those design decisions creates a compliance surface.

  • Raw patient data at rest in a central training environment: GDPR Article 5 data minimization tension, AI Act Article 10 data governance burden.
  • Gradient updates that may encode individual-level information: membership inference attack surface, Article 13 explainability obligation on outputs derived from that learning.
  • Central model that learned from data across jurisdictions and institutions: Article 9 risk management requires you to characterize that entire training distribution and monitor it continuously.
  • Audit trail for a consequential diagnostic recommendation: you must trace the recommendation back through the model, through the training data characteristics that shaped the model's weights, across institutions you may no longer have data-sharing agreements with.

The EU AI Act does not prohibit this architecture. But it imposes obligations that this architecture satisfies expensively, incompletely, or not at all — regardless of how thorough your contractual data-processing agreements are. Contractual compliance is not structural compliance.


What Privacy-by-Architecture Looks Like in Practice

The Quadratic Intelligence Swarm (QIS) — discovered by Christopher Thomas Trevethan on June 16, 2025, with 39 provisional patents filed — takes a different approach. The breakthrough is not any single component. It is the complete architecture: a loop in which intelligence emerges from routing, synthesis, and feedback without raw data ever leaving the originating node.

The loop operates as follows:

  1. Raw signal is processed locally — at the clinical node, on-device, within the institution's own environment.
  2. Local processing distills the signal into an outcome packet of approximately 512 bytes. The packet contains derived statistics, confidence intervals, and semantic metadata. It contains no patient-identifiable information by construction.
  3. The outcome packet receives a semantic fingerprint — a content-derived address that describes what the packet is about, not where it came from.
  4. The packet routes by similarity to a deterministic address corresponding to agents with relevant context. The routing protocol is transport-agnostic: DHT is one implementation, but databases, APIs, pub/sub systems, and message queues all qualify. The quadratic scaling property — N(N-1)/2 synthesis opportunities across N nodes — arises from the loop and semantic addressing, not from the transport layer.
  5. Receiving agents synthesize the incoming packet with local context and emit new outcome packets. The loop continues.
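Steps 3 and 4 can be sketched in a few lines. This is an illustrative sketch, not the QIS implementation: the helper names, the dot-separated topic convention, and the Jaccard token similarity measure are assumptions chosen for demonstration, standing in for whatever semantic distance the real routing layer uses.

```python
import hashlib

def semantic_fingerprint(topic: str) -> str:
    """Step 3: a content-derived address — a hash of what the packet
    is about, carrying nothing about where it came from."""
    return hashlib.sha256(topic.encode("utf-8")).hexdigest()[:16]

def token_similarity(a: str, b: str) -> float:
    """Jaccard similarity over dot-separated topic tokens — an
    illustrative stand-in for the protocol's semantic distance."""
    ta, tb = set(a.split(".")), set(b.split("."))
    return len(ta & tb) / len(ta | tb)

def route(packet_topic: str, agent_topics: dict[str, str],
          threshold: float = 0.5) -> list[str]:
    """Step 4: deliver to every agent whose declared context is similar
    enough to the packet's topic. Transport-agnostic — the same logic
    works over a DHT, pub/sub, an API, or a message queue."""
    return [agent for agent, topic in agent_topics.items()
            if token_similarity(packet_topic, topic) >= threshold]

agents = {
    "radiology_synthesis": "oncology.imaging.nodule_classification",
    "pathology_review":    "oncology.pathology.biopsy_grading",
    "cardio_monitoring":   "cardiology.ecg.arrhythmia_detection",
}
topic = "oncology.imaging.nodule_classification.high_confidence"
print(semantic_fingerprint(topic))  # deterministic content address
print(route(topic, agents, 0.5))    # ['radiology_synthesis']
```

Note that the address is deterministic: any node deriving the same semantic content computes the same fingerprint, which is what makes routing by similarity possible without a central registry.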

Raw patient data never routes through the network. Ever. This is not a policy control, a data-sharing agreement, or an encryption scheme applied over a transfer that still occurs. The transfer does not occur. The architecture makes it structurally impossible for raw data to leave the node, because the architecture only routes outcome packets.
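The quadratic scaling property cited in step 4 is ordinary pairwise counting: every unordered pair of nodes is a potential synthesis opportunity, so N nodes yield N(N-1)/2 pairs. A quick check:

```python
from itertools import combinations

def synthesis_opportunities(n: int) -> int:
    """Number of unordered node pairs: N(N-1)/2."""
    return n * (n - 1) // 2

nodes = [f"node_{i}" for i in range(5)]
pairs = list(combinations(nodes, 2))       # every unordered pair
print(len(pairs))                          # 10
print(synthesis_opportunities(5))          # 10 == 5*4/2
print(synthesis_opportunities(100))        # 4950
```

Each added node therefore adds N-1 new synthesis opportunities, which is the sense in which intelligence-gathering capacity grows quadratically while per-node data exposure stays constant.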


Article-by-Article Structural Alignment

How each requirement lands under centralized AI learning versus QIS outcome routing:

  • Art. 13 — Explainability of outputs. Centralized: requires post-hoc attribution through model weights trained on pooled data; output provenance is opaque. QIS: each outcome packet carries semantic metadata describing what signal it derived from; routing history is traceable by design.
  • Art. 9 — Continuous risk management. Centralized: the risk surface spans the entire central training corpus and all downstream inference, requiring continuous monitoring of a black box. QIS: risk is bounded to the outcome packet layer; there is no raw data exposure to monitor; node-level anomalies are locally contained.
  • Art. 10 — Training data governance. Centralized: must characterize, monitor, and audit the entire pooled training dataset across contributing institutions. QIS: no pooled training dataset exists in the central sense; local processing governs local data; the governance burden stays local.
  • Art. 17 — Quality management system. Centralized: the QMS must span data ingestion, training pipeline, deployment, and inference across all data sources. QIS: the QMS scope is the outcome packet format, routing logic, and synthesis protocol — auditable at the protocol level.
  • GDPR Art. 5 — Data minimization. Centralized: raw or minimally processed patient records enter the training environment; minimization is contractual. QIS: ~512-byte outcome packets are the minimum necessary representation by construction; minimization is architectural.
  • Audit trail for consequential decisions. Centralized: requires tracing a recommendation through model weights back to training data provenance. QIS: the outcome packet chain provides traceable semantic lineage; no weight opacity.
  • Art. 14 — Human oversight. Centralized: a human reviewer must interpret opaque model output; oversight is a checkpoint on a black box. QIS: outcome packets with semantic metadata give human reviewers interpretable intermediate states.
  • Biometric surveillance prohibition. Centralized: aggregation of biometric or behavioral data creates prohibited-use adjacency risk. QIS: raw biometric data never leaves the originating node; there is no aggregation surface.

The Outcome Packet: No PHI by Construction

Here is a minimal Python representation of a QIS outcome packet in a healthcare context. The critical property is visible in the structure itself: there is no field in which patient-identifiable information could appear, because the packet is derived — not copied — from the source signal.

import hashlib
import json
from dataclasses import dataclass, field

@dataclass
class QISOutcomePacket:
    """
    A ~512-byte derived intelligence unit.
    Raw source data is never present in this structure — by design.
    EU AI Act Article 13 / GDPR Article 5 alignment: architectural, not contractual.
    """
    # Semantic fingerprint — content-derived address, not source identifier
    semantic_address: str

    # Derived statistics only — no raw measurements, no patient identifiers
    confidence_interval_lower: float
    confidence_interval_upper: float
    outcome_probability: float

    # Provenance metadata — describes the signal type, not the signal source
    signal_domain: str        # e.g., "oncology_imaging_classification"
    derivation_method: str    # e.g., "local_ensemble_distillation"
    node_epoch: int           # logical clock, not wall clock tied to patient event

    # Routing metadata
    routing_protocol: str     # "dht" | "pubsub" | "api" | "message_queue"
    target_similarity_threshold: float

    # Audit field — traceable without being identifying
    packet_hash: str = field(init=False)

    def __post_init__(self):
        payload = json.dumps({
            "semantic_address": self.semantic_address,
            "outcome_probability": self.outcome_probability,
            "signal_domain": self.signal_domain,
            "node_epoch": self.node_epoch,
        }, sort_keys=True).encode()
        self.packet_hash = hashlib.sha256(payload).hexdigest()

    def to_bytes(self) -> bytes:
        return json.dumps(self.__dict__).encode("utf-8")


# Example: a diagnostic confidence signal from a radiology node
packet = QISOutcomePacket(
    semantic_address="oncology.imaging.nodule_classification.high_confidence",
    confidence_interval_lower=0.81,
    confidence_interval_upper=0.94,
    outcome_probability=0.88,
    signal_domain="oncology_imaging_classification",
    derivation_method="local_ensemble_distillation",
    node_epoch=4471,
    routing_protocol="dht",
    target_similarity_threshold=0.82,
)

payload_size = len(packet.to_bytes())
print(f"Packet size: {payload_size} bytes")           # Under the ~512-byte target
print("Patient identifiers present: None")            # Structurally impossible
print("Raw imaging data present: None")               # Never entered this structure
print(f"Audit hash: {packet.packet_hash[:16]}...")    # Traceable, not identifying

An Article 13 auditor asking "what does this system know about patient X?" receives a structurally correct answer: this system has never processed information about patient X. Patient X's node processed information about patient X. What routed through the network was a derived confidence interval with a semantic fingerprint and a logical clock value. There is nothing to redact and nothing to explain away.


Architecture-First Compliance vs. Compliance Theater

The current EU AI Act compliance market is generating a category of product that might be called compliance theater: documentation layers, explainability wrappers, audit log attachments, and data-processing agreements that claim to satisfy Article 13 and Article 9 requirements while leaving the underlying centralized architecture unchanged.

This approach will encounter friction. National competent authorities are empowered to request access to training data, documentation, and logs. Post-market surveillance requirements under Article 9 apply throughout the system's lifecycle, not just at conformity assessment time. The European Health Data Space framework will create interoperability obligations that further complicate centralized cross-border data pooling.

The alternative is to begin with an architecture that satisfies the structural requirements — not by documenting compliance, but by making non-compliance impossible.

QIS's distributed outcome routing architecture does not route raw data because it has no mechanism to route raw data. It does not create a central audit burden because there is no central training corpus. It does not produce opaque output provenance because outcome packets carry semantic metadata through every hop. These properties are not features added to satisfy regulators. They are properties of the architecture.

The EHDS implementers, NHS digital transformation teams, and EU national competent authority staff reviewing healthcare AI submissions over the next 18 months will be asking one question with increasing frequency: is this system compliant because you documented it that way, or because it cannot be otherwise?

QIS is in the second category.


Quadratic Intelligence Swarm (QIS) was discovered — not invented — by Christopher Thomas Trevethan on June 16, 2025. 39 provisional patents filed. Humanitarian licensing: free for nonprofits, research, and education.
