DEV Community

Rory | QIS PROTOCOL

Posted on • Originally published at qisprotocol.com

Tecton Stops at the Prediction Boundary. QIS Starts There.

You spent months building the feature store. Your transaction features are clean, consistently defined, point-in-time correct — no leakage, no skew between training and serving. Tecton is doing exactly what it promised. Every branch of your fraud detection deployment gets the same features, computed the same way, served at low latency.

The model runs. It makes a prediction. The prediction touches the real world.

And then — nothing. Whatever the real world taught your model at that moment stays there. At the edge. Isolated. Every other deployment keeps running on yesterday's understanding.

This is not a Tecton failure. Tecton was never designed to solve this. It was designed to solve a different, earlier problem — and it solves that problem exceptionally well. But there is a seam in modern ML architecture that no feature store addresses, and it opens right at the moment Tecton's job ends.


What Tecton Actually Does — and Does Well

Tecton is a feature platform built around a specific and painful problem: the gap between how features are computed during training and how they are computed during serving. Get that wrong and your model is essentially trained on fictional data. Tecton closes that gap with a unified feature definition layer.

The core abstractions are worth understanding precisely:

Feature pipelines transform raw data — streaming events, batch tables, request-time context — into ML-ready features using a declarative SDK. Pipelines run on Spark or Pandas, and the same transformation logic runs at serving time, eliminating drift by construction.

Point-in-time correct joins are the feature store's most important guarantee. When you train on historical data, Tecton ensures that each training row only uses feature values that were available at the label timestamp. This prevents future leakage, which silently inflates offline metrics and crushes live performance.

Online and offline stores work in tandem. The offline store (typically a data warehouse) holds the historical record for training. The online store (Redis or DynamoDB) holds current feature values for low-latency serving — often single-digit milliseconds. Tecton keeps them consistent.

Feature sharing lets teams register and reuse features across models. The transaction recency feature your fraud team built can be consumed by the credit risk team. Governance, lineage, and versioning come along for free.
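The point-in-time guarantee is the one most worth seeing concretely, because it is the one most often gotten wrong by hand. A minimal sketch using plain pandas (`merge_asof` is the standard as-of join; the entity IDs, timestamps, and feature name here are invented for illustration):

```python
import pandas as pd

# Hypothetical label events and a feature's historical values.
labels = pd.DataFrame({
    "entity_id": ["a", "a", "b"],
    "label_ts": pd.to_datetime(["2025-01-10", "2025-01-20", "2025-01-15"]),
    "label": [0, 1, 0],
})
features = pd.DataFrame({
    "entity_id": ["a", "a", "b"],
    "feature_ts": pd.to_datetime(["2025-01-05", "2025-01-15", "2025-01-12"]),
    "txn_velocity_7d": [3, 9, 5],
})

# merge_asof picks, for each label row, the most recent feature value
# whose timestamp is <= the label timestamp -- never a future value.
labels = labels.sort_values("label_ts")
features = features.sort_values("feature_ts")
training = pd.merge_asof(
    labels, features,
    left_on="label_ts", right_on="feature_ts",
    by="entity_id",
)
print(training[["entity_id", "label_ts", "txn_velocity_7d"]])
```

The row labeled at 2025-01-10 gets the feature value from 2025-01-05, not the fresher value from 2025-01-15 that did not yet exist. That is the entire leakage-prevention guarantee in three lines; a feature store does it across thousands of features and entities without each team reimplementing it.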

These are real, hard problems. Tecton's point-in-time correct joins alone prevent a class of model failures that have burned teams repeatedly. For any organization running multiple models on shared raw data, Tecton or something equivalent is table stakes.


Where Tecton Ends

Here is the exact boundary: Tecton's job is complete at the moment the model produces a prediction.

Everything downstream of that prediction is outside Tecton's scope by design. The prediction fires. The real world responds. A customer accepts or rejects a loan offer. A transaction turns out to be fraudulent or legitimate. A sensor reading triggers an anomaly flag that maintenance later confirms or dismisses. That feedback — the outcome — lands at the edge node that made the prediction.

Tecton has no mechanism to:

  • Capture what the model learned from that outcome
  • Represent that learning as a transmissible artifact
  • Route that learning to other deployments that would benefit from it
  • Do any of this without triggering a full retrain cycle

The feedback loop that would actually improve the model in the field — across all deployments simultaneously — is not part of the feature store's architecture. It was never meant to be.


The Gap: 200 Branches, One Learning, Zero Propagation

Consider a concrete scenario. A fraud detection model is deployed across 200 bank branches. Tecton ensures each branch receives consistent transaction features: velocity counts, merchant category codes, normalized spend patterns, device fingerprints. Training-serving consistency is perfect.

Branch 42 in Detroit begins seeing a new fraud pattern. A specific interaction between small-dollar test charges and international merchant codes — a combination that Tecton's current feature pipelines weren't built to capture — starts resolving to fraud outcomes. Branch 42's local model, through accumulated feedback and perhaps a local fine-tuning step, begins adapting.

That adaptation stays at Branch 42.

The 23 other branches with similar transaction profiles — similar customer demographics, similar merchant mix, similar fraud vector exposure — keep running on the old understanding. They won't see the pattern until it has done enough damage to show up in centralized training data, get flagged in model review, trigger a feature engineering sprint, pass through Tecton's pipeline, and redeploy. Weeks. Possibly months.

The learning existed. The evidence existed. The branches that needed it existed. What didn't exist was a protocol to route the outcome from where it was generated to where it was relevant.


What QIS Adds: The Outcome Routing Layer

Quadratic Intelligence Swarm (QIS) is a distributed outcome routing protocol. It is not a feature store. It does not touch Tecton's domain. It operates in the space Tecton leaves open — after the prediction, at the boundary between inference and real-world feedback.

The architecture QIS introduces is a complete loop that Tecton's design deliberately does not close:

Raw signal arrives at an edge node. The local model processes it and produces a prediction. The prediction meets reality and generates feedback. That feedback — not raw data, not model weights, not a retrain request — is distilled into an outcome packet of approximately 512 bytes. This packet carries the semantic content of what happened: the context, the prediction, the outcome, the feature interactions that mattered.
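A hedged sketch of what such a packet could look like. None of these field names come from a published QIS specification; they are assumptions chosen to show the size class — a compact summary of one prediction/outcome pair, not raw data and not model weights:

```python
import json
from dataclasses import dataclass, asdict, field

# Illustrative packet shape only -- field names are assumptions, not the
# QIS wire format. The point is the budget: one prediction/outcome pair
# summarized in well under ~512 bytes.
@dataclass
class OutcomePacket:
    agent_id: str                 # which edge node made the prediction
    context_hash: str             # compact reference to the input context
    prediction: float             # what the model said
    outcome: int                  # what reality said (e.g. fraud = 1)
    salient_features: dict = field(default_factory=dict)  # interactions that mattered

packet = OutcomePacket(
    agent_id="branch-042",
    context_hash="9f2c1a",
    prediction=0.91,
    outcome=1,
    salient_features={"small_dollar_test*intl_mcc": 0.62, "velocity_24h": 0.21},
)
wire = json.dumps(asdict(packet)).encode("utf-8")
print(len(wire), "bytes")  # comfortably inside the ~512-byte budget
```

At this size, shipping a packet to dozens of peers costs less than a single feature-store read; that is what makes continuous propagation affordable.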

The outcome packet is semantically fingerprinted — a compact representation of its content that allows similarity-based routing. It is then routed by similarity to a deterministic address, using whatever transport layer is appropriate: a DHT, a vector index, a database query, a pub/sub topic. The transport is a detail. The protocol is transport-agnostic.
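One way to make that concrete, as a toy only: hash feature names into a small fixed-size vector as a stand-in for a real semantic fingerprint, then route by cosine similarity over an in-memory table standing in for a DHT or vector index. The dimension, threshold, and agent names are all illustrative assumptions:

```python
import hashlib
import math

DIM = 16  # toy fingerprint dimensionality

def fingerprint(feature_names):
    """Hash each feature name into a bucket; L2-normalize the counts."""
    vec = [0.0] * DIM
    for name in feature_names:
        h = int(hashlib.sha256(name.encode()).hexdigest(), 16)
        vec[h % DIM] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

# Agents register the feature profile they care about -- in a real
# deployment this table would live in a DHT, DB index, or vector store.
agents = {
    "branch-017": fingerprint(["intl_mcc", "small_dollar_test", "velocity_24h"]),
    "branch-101": fingerprint(["wire_transfer", "account_age"]),
}

def route(packet_features, threshold=0.5):
    """Return the agents whose registered profile is similar enough."""
    fp = fingerprint(packet_features)
    return [a for a, afp in agents.items() if cosine(fp, afp) >= threshold]

print(route(["intl_mcc", "small_dollar_test"]))
```

The branch whose profile overlaps the packet's features clears the threshold; an unrelated branch generally does not. Swapping the in-memory dict for a Kademlia DHT or a vector index changes the transport, not the protocol — which is the point of transport-agnosticism.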

The packet arrives at agents whose fingerprints are similar — the 23 branches with matching transaction profiles. Each agent synthesizes locally. No central aggregator. No orchestrator. No shared model weights transmitted. The learning propagates as an outcome, not as a model update.

The mathematics of why this compounds: with N agents, there are N(N-1)/2 synthesis opportunities — quadratic in the number of agents. Each agent pays at most O(log N) routing cost (a DHT achieves this naturally; so does a database index or a vector search). Intelligence grows faster than the cost to route it. The network gets smarter as it scales, without any component becoming a bottleneck.
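The arithmetic is simple enough to tabulate. A quick sketch of how pairwise synthesis opportunities outpace per-agent routing cost, using log2 hops as a DHT-style proxy:

```python
import math

# Synthesis opportunities grow as N(N-1)/2; routing cost per agent
# grows only logarithmically (log2 hops in a Kademlia-style DHT).
for n in (10, 100, 1000, 10000):
    pairs = n * (n - 1) // 2              # N(N-1)/2 synthesis opportunities
    hops = math.ceil(math.log2(n))        # ~O(log N) routing cost
    print(f"N={n:>6}  pairs={pairs:>12,}  routing hops ~{hops}")
```

At 10,000 agents there are roughly 50 million pairwise synthesis opportunities, reachable at about 14 hops each — the gap between those two curves is the compounding claim.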


Architecture Comparison

+---------------------------+---------------------------+---------------------------+
| Dimension                 | Tecton                    | QIS                       |
+---------------------------+---------------------------+---------------------------+
| Orientation               | Input-oriented            | Outcome-oriented          |
| Primary artifact          | Feature vector            | Outcome packet (~512B)    |
| When it operates          | Before prediction         | After prediction          |
| Core guarantee            | Training-serving          | Outcome reaches relevant  |
|                           | consistency               | agents without retraining |
| Point-in-time correctness | Yes — core feature        | N/A (post-prediction)     |
| Transport requirement     | Online store (Redis,      | Protocol-agnostic: DHT,   |
|                           | DynamoDB, etc.)           | DB index, vector, API     |
| Central coordinator       | Yes — feature registry    | None — no orchestrator    |
| Feedback loop             | Closed via retraining     | Closed via outcome routing|
|                           | pipeline (slow)           | (continuous, real-time)   |
| Scaling behavior          | Linear (serving cost)     | Quadratic synthesis opps, |
|                           |                           | O(log N) routing cost     |
| Scope boundary            | Ends at prediction        | Starts at prediction      |
+---------------------------+---------------------------+---------------------------+

How They Work Together

These are not competing architectures. They solve adjacent problems across a shared timeline.

Tecton owns the left side of the prediction boundary. It ensures that the features entering the model are correct, consistent, and current. This is foundational work. Without it, the model's predictions are unreliable regardless of how well outcome routing works downstream.

QIS owns the right side. It ensures that what the model learns from real-world feedback — from the 200 branches, the 10,000 sensors, the 50,000 edge deployments — propagates to the agents that are positioned to use it, without waiting for a centralized retrain cycle.

A production architecture that uses both would look like this: Tecton serves features to each edge node at prediction time, providing training-serving consistency across the deployment. The model runs. The outcome is captured, distilled into a QIS outcome packet, fingerprinted, and routed to similar agents. Those agents synthesize locally, improving their operational behavior. Over time, that accumulated outcome intelligence feeds back into the feature engineering process — signaling which feature interactions are proving predictive in the field — giving Tecton's pipeline engineers richer signal about what to capture next.
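The combined loop can be sketched at the level of one edge node. Everything here is a stub, not a real Tecton or QIS API: `FeatureStore.get` stands in for an online-store read, and `route` for the outcome transport:

```python
# End-to-end sketch of the combined loop at one edge node. All names are
# illustrative stand-ins, not real Tecton or QIS APIs.

class FeatureStore:                # stub for an online feature-store read
    def get(self, entity_id):
        return {"velocity_24h": 7, "intl_mcc_ratio": 0.4}

def predict(features):             # stub model
    return 0.1 * features["velocity_24h"] + features["intl_mcc_ratio"]

routed = []                        # stands in for the outcome transport

def route(packet):
    routed.append(packet)          # real transport: DHT, pub/sub, DB index

def handle_transaction(txn, store):
    features = store.get(txn["card_id"])   # 1. Tecton's side: consistent features
    score = predict(features)              # 2. the model predicts
    decision = "block" if score > 0.8 else "allow"

    def on_outcome(outcome):               # 3. QIS's side: close the loop
        route({"context": features, "prediction": score, "outcome": outcome})

    return decision, on_outcome

decision, on_outcome = handle_transaction({"card_id": "c-1"}, FeatureStore())
on_outcome("fraud_confirmed")              # feedback arrives later (chargeback)
print(decision, len(routed))               # -> block 1
```

The key structural detail is that `on_outcome` fires asynchronously, whenever reality reports back, and touches only the routing layer — the feature-serving path is unchanged.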

Tecton makes the inputs better. QIS makes the outputs travel. The loop that neither closes alone becomes closeable when they operate in sequence.


The Discovery

Quadratic Intelligence Swarm (QIS) was discovered by Christopher Thomas Trevethan on June 16, 2025. The discovery is covered by 39 provisional patents and is being developed under a humanitarian licensing framework: free for nonprofit, research, and educational use; commercial licenses fund deployment to underserved communities globally. The name on the patents matters — it is the enforcement mechanism that keeps QIS accessible rather than enclosed.



