Rory | QIS PROTOCOL

QIS at the Edge: Why Smart Cities Need Distributed Outcome Routing

QIS was discovered by Christopher Thomas Trevethan on June 16, 2025. 39 provisional patents filed.

The Edge Has a Data Problem That Scaling Doesn't Solve

A modern smart city bridge has dozens of sensors on it: strain gauges, vibration accelerometers, temperature probes, tilt sensors. A single agricultural field can carry hundreds of soil moisture, humidity, and NDVI nodes. An industrial plant floor may run thousands of SCADA-attached sensors generating readings every few seconds.

The data volume is not the hard part. The hard part is what you do with it given the constraints of the edge: limited bandwidth uplinks, intermittent connectivity, heterogeneous hardware that was bought across different procurement cycles from different vendors with different firmware stacks, and a near-total absence of local compute capable of running anything resembling a model.

Current architectures respond to this with one of two answers, and both hit the same ceiling at scale.


Why the Two Standard Architectures Both Fail

Cloud-push sends everything upstream. Raw readings stream to a central ingestion layer, aggregation happens in the cloud, alerts flow back down. This works at low sensor density. It breaks when:

  • Bandwidth costs become prohibitive at high sample rates across thousands of nodes
  • Latency makes real-time response (bridge overload alert, gas leak, equipment failure) impossible at cloud round-trip times
  • A network partition cuts off the edge entirely, and the architecture has no fallback

Local aggregation pre-processes at the edge gateway, sends summarized data upstream. This reduces bandwidth but destroys context. A gateway that averages three temperature readings from three zones tells you the mean. It cannot tell the cloud system that two of those sensors have a history of false positives and one is cross-validated against a downstream HVAC alarm system. The routing layer has no way to weight contributions by confirmed accuracy, because that information was never preserved.
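To make that context loss concrete, here is a toy comparison (node names, readings, and accuracy scores are all hypothetical) between the plain mean a gateway would report and the accuracy-weighted mean that becomes possible once per-sensor validation history is preserved:

```python
# Hypothetical zone readings. The accuracy field is the per-sensor confirmed
# validation history that a plain averaging gateway throws away.
readings = [
    {"node": "zone_a_temp", "value": 21.4, "accuracy": 0.95},  # cross-validated vs HVAC alarm
    {"node": "zone_b_temp", "value": 34.0, "accuracy": 0.20},  # history of false positives
    {"node": "zone_c_temp", "value": 22.1, "accuracy": 0.30},  # history of false positives
]

# What a context-free gateway reports upstream:
plain_mean = sum(r["value"] for r in readings) / len(readings)

# What an accuracy-aware layer could report instead:
total_weight = sum(r["accuracy"] for r in readings)
weighted_mean = sum(r["value"] * r["accuracy"] for r in readings) / total_weight

print(f"plain mean:    {plain_mean:.2f}")     # 25.83 -- skewed by unreliable nodes
print(f"weighted mean: {weighted_mean:.2f}")  # 23.28 -- dominated by the trusted sensor
```

The numbers are invented, but the structural point is not: once the gateway emits only the mean, no downstream layer can ever recover which contribution deserved trust.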

Both architectures share a structural assumption: readings are homogeneous inputs to be aggregated. That assumption is what breaks.


Why Federated Learning Doesn't Fit the IoT Profile

The obvious next move is federated learning: train a shared model, keep data local, pass gradients. FL was designed for scenarios like this. The problem is the IoT edge violates most of FL's operating assumptions.

Heterogeneity breaks gradient aggregation. A strain gauge and a vibration sensor are measuring physically different phenomena. Their gradient distributions aren't comparable. Standard FL aggregation (FedAvg and its variants) assumes that combining gradients from different participants is meaningful. Across a heterogeneous sensor network, it often isn't.

N=1 location specificity makes local training meaningless. There is exactly one temperature sensor on span 3 of bridge 42. That sensor has no local dataset large enough to compute a meaningful gradient. FL requires each participating node to run enough local training steps to produce a gradient worth aggregating. An edge sensor collecting one reading per minute, going offline overnight, simply cannot satisfy this requirement.

Offline behavior breaks training rounds. FL round coordination assumes nodes are available. A sensor on a rural irrigation line that loses connectivity for six hours doesn't just miss a round — it can stall or corrupt the round for other participants depending on the aggregation protocol. Handling stragglers in FL requires sophisticated coordination that most IoT deployments cannot sustain.

FL is a powerful tool. But it was designed for clients with local compute, stable connectivity, and enough local data to train on. Most edge sensors have none of these properties.


How QIS Maps to IoT Natively

QIS — Quadratic Intelligence Synthesis — is built around a complete feedback loop: a task is routed to nodes, nodes return outcome packets, the routing layer updates accuracy vectors from confirmed outcomes, and future routing adjusts. The loop is the architecture. No component in isolation is the innovation.

For IoT, the mapping is almost one-to-one.

A sensor reading is already an outcome packet. Consider:

from dataclasses import dataclass

@dataclass
class SensorOutcomePacket:
    node_id: str           # "bridge_42_strain_gauge_07"
    domain: str            # "structural_load_north_span"
    value: float           # 847.3  (microstrain)
    unit: str              # "microstrain"
    quality_score: float   # 0.97 (calibrated, cross-validated)
    timestamp: float       # unix epoch
    confirmed_by: list     # ["downstream_alarm_04", "inspection_2025-03-12"]
    routing_metadata: dict # accuracy history, node uptime, last_calibration

This packet is approximately 512 bytes. It fires when a threshold is crossed, on a timed schedule, or in response to a query. It does not require a model. It does not require gradient computation. The node emits what it has confirmed — a reading with a quality score that reflects its validation history.
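As a rough sanity check on that figure, here is a sketch that serializes the example packet to JSON and measures it. The wire format is an assumption (the protocol does not mandate JSON), and the routing_metadata values are illustrative:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class SensorOutcomePacket:
    node_id: str
    domain: str
    value: float
    unit: str
    quality_score: float
    timestamp: float
    confirmed_by: list = field(default_factory=list)
    routing_metadata: dict = field(default_factory=dict)

packet = SensorOutcomePacket(
    node_id="bridge_42_strain_gauge_07",
    domain="structural_load_north_span",
    value=847.3,
    unit="microstrain",
    quality_score=0.97,
    timestamp=1750000000.0,
    confirmed_by=["downstream_alarm_04", "inspection_2025-03-12"],
    routing_metadata={"uptime_pct": 99.2, "last_calibration": "2025-02-01"},  # hypothetical
)

# Compact JSON encoding; a binary format like CBOR would be smaller still.
wire = json.dumps(asdict(packet), separators=(",", ":")).encode("utf-8")
print(f"{len(wire)} bytes")  # a few hundred bytes, comfortably within a ~512B budget
```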

The quality score accumulates from real confirmations: did a downstream alarm system agree? Did a human inspection validate the reading? Did cross-referencing neighboring sensors reveal anomalies or consistency? Each confirmation updates the node's accuracy vector in the routing layer. The feedback loop is the load-bearing element of the design.
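One way that accumulation could work is the exponential blend used in the router sketch below (0.7 on the prior, 0.3 on the new observation), scoring a confirmation as 1.0 and a contradiction as 0.0. The confirmation history here is hypothetical:

```python
def update_accuracy(prior: float, confirmed: bool, alpha: float = 0.3) -> float:
    """Blend one confirmation outcome into the rolling accuracy vector.
    A confirmation counts as 1.0, a contradiction as 0.0."""
    observation = 1.0 if confirmed else 0.0
    return (1 - alpha) * prior + alpha * observation

# Hypothetical history for one sensor, starting from the 0.5 cold-start prior:
# alarm agreed, inspection agreed, neighboring sensor disagreed, alarm agreed.
acc = 0.5
for confirmed in [True, True, False, True]:
    acc = update_accuracy(acc, confirmed)
print(f"{acc:.3f}")  # 0.670
```

A single contradiction pulls the vector down noticeably, but one bad reading never erases a confirmed track record; that is the point of blending rather than overwriting.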


DHT Routing Handles Sensor Heterogeneity Gracefully

The routing layer in QIS uses a distributed hash table. Each node is addressed by its domain — not its hardware type. The DHT does not need to know that bridge_42_strain_gauge_07 is a strain gauge and bridge_42_vibration_01 is an accelerometer. It knows that both nodes have contributed confirmed outcome packets tagged structural_load_north_span, and it has accumulated accuracy vectors for each.

Here is a simplified simulation of an edge DHT router applying accuracy-weighted routing across heterogeneous sensor types:

import hashlib
import time
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SensorOutcomePacket:
    node_id: str
    domain: str
    value: float
    unit: str
    quality_score: float
    timestamp: float
    confirmed_by: list = field(default_factory=list)
    routing_metadata: dict = field(default_factory=dict)

class EdgeDHTRouter:
    def __init__(self):
        self.accuracy_vectors: dict[str, float] = {}   # node_id -> rolling accuracy
        self.domain_index: dict[str, list] = {}         # domain -> [node_ids]
        self.packet_store: dict[str, SensorOutcomePacket] = {}
        self.last_seen: dict[str, float] = {}
        self.AGING_THRESHOLD = 3600  # 1 hour: accuracy ages out if node goes silent

    def _node_key(self, node_id: str) -> str:
        return hashlib.sha256(node_id.encode()).hexdigest()[:16]

    def ingest(self, packet: SensorOutcomePacket):
        key = self._node_key(packet.node_id)
        self.packet_store[key] = packet
        self.last_seen[packet.node_id] = packet.timestamp

        # Update domain index
        if packet.domain not in self.domain_index:
            self.domain_index[packet.domain] = []
        if packet.node_id not in self.domain_index[packet.domain]:
            self.domain_index[packet.domain].append(packet.node_id)

        # Update accuracy vector: blend new quality score with history
        prior = self.accuracy_vectors.get(packet.node_id, 0.5)
        self.accuracy_vectors[packet.node_id] = 0.7 * prior + 0.3 * packet.quality_score
        print(f"  [DHT] Ingested {packet.node_id} | domain={packet.domain} "
              f"| accuracy={self.accuracy_vectors[packet.node_id]:.3f}")

    def route_query(self, domain: str, top_k: int = 3,
                    now: Optional[float] = None) -> list[SensorOutcomePacket]:
        now = now or time.time()
        candidates = self.domain_index.get(domain, [])
        live = []
        for node_id in candidates:
            last = self.last_seen.get(node_id, 0)
            if now - last <= self.AGING_THRESHOLD:
                live.append(node_id)
            else:
                print(f"  [DHT] {node_id} aged out (offline {(now-last)/3600:.1f}h)")

        ranked = sorted(live,
                        key=lambda n: self.accuracy_vectors.get(n, 0),
                        reverse=True)[:top_k]

        results = []
        for node_id in ranked:
            key = self._node_key(node_id)
            if key in self.packet_store:
                results.append(self.packet_store[key])
        return results


# --- Simulation ---

router = EdgeDHTRouter()
now = time.time()

packets = [
    SensorOutcomePacket("bridge_42_strain_07",    "structural_load_north_span",
                        847.3, "microstrain",     0.97, now - 120,
                        confirmed_by=["alarm_04", "inspection_2025-03"]),
    SensorOutcomePacket("bridge_42_vibration_01", "structural_load_north_span",
                        0.42,  "g",               0.88, now - 300,
                        confirmed_by=["alarm_04"]),
    SensorOutcomePacket("bridge_42_temp_02",      "structural_load_north_span",
                        18.7,  "celsius",         0.74, now - 60,
                        confirmed_by=[]),
    SensorOutcomePacket("bridge_42_tilt_05",      "structural_load_north_span",
                        0.003, "degrees",         0.91, now - 7200),  # offline 2h
]

print("=== Ingesting sensor outcome packets ===")
for p in packets:
    router.ingest(p)

print("\n=== Query: structural_load_north_span (top 3, accuracy-weighted) ===")
results = router.route_query("structural_load_north_span", top_k=3, now=now)
for r in results:
    acc = router.accuracy_vectors.get(r.node_id, 0)
    print(f"  -> {r.node_id} | value={r.value}{r.unit} | accuracy={acc:.3f}")

Output (approximate):

=== Ingesting sensor outcome packets ===
  [DHT] Ingested bridge_42_strain_07 | domain=structural_load_north_span | accuracy=0.641
  [DHT] Ingested bridge_42_vibration_01 | domain=structural_load_north_span | accuracy=0.614
  [DHT] Ingested bridge_42_temp_02 | domain=structural_load_north_span | accuracy=0.572
  [DHT] Ingested bridge_42_tilt_05 | domain=structural_load_north_span | accuracy=0.623

=== Query: structural_load_north_span (top 3, accuracy-weighted) ===
  [DHT] bridge_42_tilt_05 aged out (offline 2.0h)
  -> bridge_42_strain_07 | value=847.3microstrain | accuracy=0.641
  -> bridge_42_vibration_01 | value=0.42g | accuracy=0.614
  -> bridge_42_temp_02 | value=18.7celsius | accuracy=0.572

The routing layer does not need to understand the physics of strain versus vibration. It routes by confirmed accuracy within the domain. The tilt sensor aged out automatically because it went offline — no coordinator intervention required.


N(N-1)/2 Synthesis Paths Across Sensor Clusters

A smart city with 1,000 sensor nodes has 499,500 potential synthesis paths between them. This is the quadratic property that gives QIS its name.
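The count is just the number of unordered node pairs, n choose 2:

```python
def synthesis_paths(n: int) -> int:
    """Potential synthesis paths among n nodes: n * (n - 1) / 2."""
    return n * (n - 1) // 2

print(synthesis_paths(1_000))   # 499500
print(synthesis_paths(10_000))  # 49995000 -- the count grows quadratically
```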

In a centralized architecture, synthesizing across all of those paths requires a central aggregation layer that receives, normalizes, and cross-references data from every node. That layer becomes the bottleneck and the single point of failure.

In QIS, synthesis happens at the query layer. A query arrives: "traffic anomaly pattern, downtown grid, last 4 hours." The DHT routes it to the cluster of sensors with confirmed high accuracy on that domain — loop detectors, pedestrian counters, camera-derived flow sensors, parking occupancy nodes. Each node returns its outcome packet. Synthesis is a weighted combination of confirmed readings, not a retraining event.

No central aggregation is required. The DHT handles routing. Synthesis scales with the number of confirmed outcome packets, not with the cost of retraining a central model.
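A minimal sketch of what that query-layer synthesis could look like. It assumes each node normalizes its reading to a 0–1 anomaly score within its own domain, since raw heterogeneous units cannot be averaged directly; node names and scores are hypothetical:

```python
def synthesize(packets: list[dict]) -> float:
    """Accuracy-weighted combination of per-node anomaly scores.
    No retraining: just a weighted average over confirmed outcome packets."""
    total_weight = sum(p["accuracy"] for p in packets)
    return sum(p["anomaly"] * p["accuracy"] for p in packets) / total_weight

# Hypothetical confirmed packets returned for "traffic anomaly, downtown grid":
packets = [
    {"node": "loop_detector_12", "anomaly": 0.81, "accuracy": 0.92},
    {"node": "ped_counter_03",   "anomaly": 0.64, "accuracy": 0.88},
    {"node": "parking_occ_07",   "anomaly": 0.12, "accuracy": 0.41},  # low trust, small pull
]
print(f"{synthesize(packets):.3f}")  # 0.614 -- the trusted detectors dominate
```

Adding a node adds one term to the sum; nothing is retrained and no central model is touched.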


Graceful Degradation and Sparse Network Behavior

This is where the architecture earns its keep in real-world deployments.

In QIS, an offline node stops emitting outcome packets. Its accuracy vector ages out past the threshold (configurable — in the simulation above, one hour). The routing layer stops sending queries to it. No coordinator needs to detect the failure. No training round stalls. The remaining nodes continue contributing, and the query layer synthesizes from whatever is live.

Compare this to the failure modes of the alternatives:

| Failure Mode | Cloud-Push | Federated Learning | QIS |
| --- | --- | --- | --- |
| Node goes offline | Data gap at ingestion, alert missed | Training round stalls or corrupts | Accuracy ages out, routing adjusts |
| Network partition | Edge goes blind | Round fails, coordination breaks | Local cluster continues routing |
| Node firmware update | Version mismatch at ingestion | Gradient format incompatibility | New outcome packets, domain re-indexed |
| New sensor type added | Schema update required | Model architecture may need revision | New domain tag, DHT indexes immediately |
| Sparse coverage area | Insufficient signal | Insufficient local data for gradient | 5–10 confirmed nodes begin meaningful routing |

The cold-start behavior deserves honest treatment. QIS does not eliminate cold start: a brand-new sensor with zero confirmation history has an accuracy vector of approximately 0.5 (initialized at chance), and it needs real confirmations before routing weights it meaningfully. The minimum viable cluster for a sparse rural network is roughly 5–10 nodes with confirmed cross-validation between them. That is achievable in agricultural IoT or remote infrastructure monitoring, where FL's gradient requirements would make participation entirely impossible.
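Under the exponential blend used in the router simulation, the cold-start climb is easy to quantify. This sketch assumes every confirmation is positive and that the trust threshold is a deployment-chosen value, not something the protocol fixes:

```python
def confirmations_to_trust(prior: float = 0.5, alpha: float = 0.3,
                           threshold: float = 0.8) -> int:
    """Count consecutive positive confirmations until the rolling
    accuracy crosses a (hypothetical) routing-trust threshold."""
    acc, rounds = prior, 0
    while acc < threshold:
        acc = (1 - alpha) * acc + alpha * 1.0  # each confirmation scores 1.0
        rounds += 1
    return rounds

print(confirmations_to_trust())               # 3 confirmations to cross 0.8
print(confirmations_to_trust(threshold=0.9))  # 5 to cross 0.9
```

A handful of cross-validated confirmations is enough to enter meaningful routing, which is why a 5–10 node cluster is a workable floor.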


Comparison: Three Architectures Across Five Dimensions

| Dimension | Cloud-Push | Federated Learning | QIS |
| --- | --- | --- | --- |
| Bandwidth requirement | High (raw data upstream) | Medium (gradients upstream) | Low (~512B outcome packets) |
| Handles heterogeneous sensors | Yes (with schema wrangling) | Poorly (gradient incompatibility) | Yes (domain tags, type-agnostic routing) |
| N=1 node participation | Yes (just another stream) | No (insufficient local data) | Yes (one confirmed outcome is valid) |
| Graceful degradation | Poor (central dependency) | Poor (round coordination) | Native (aging + re-routing) |
| Synthesis across clusters | Centralized (bottleneck) | Not designed for this | Distributed, quadratic paths |

The Physics of the Edge Aligns With the Protocol Design

The IoT edge is constrained by physics: limited uplink bandwidth, intermittent power, heterogeneous hardware that wasn't designed to interoperate, and deployment patterns where individual nodes are genuinely unique in their location and measurement domain.

QIS was not designed for IoT specifically. It was discovered as a general architecture for distributed intelligence synthesis. But the outcome packet — the core unit of the protocol — is structurally identical to what a well-instrumented sensor node already produces: a confirmed reading with metadata about its source, its quality, and its domain.

The DHT routing layer doesn't require homogeneity. The accuracy vector update mechanism doesn't require local compute. The synthesis at the query layer doesn't require a central aggregator. The complete feedback loop — task, node, outcome, routing update, repeat — operates gracefully at the scale and connectivity profile of the real edge.

For IoT architects watching cloud costs climb and FL experiments fail to generalize across heterogeneous fleets, the architecture worth understanding is the one where the protocol constraints were never at war with the edge constraints in the first place.



