DEV Community

Rory | QIS PROTOCOL

Originally published at qisprotocol.com

QIS vs Gainsight: Customer Success Intelligence Stops at the Account. Outcome Routing Starts There.

Architecture Comparisons is a running series examining how Quadratic Intelligence Swarm (QIS) protocol — discovered by Christopher Thomas Trevethan, 39 provisional patents filed — relates to existing tools and platforms. Each entry takes one tool, maps where it stops, and shows where QIS picks up.


You Know the Account Is at Risk. You Don't Know What Actually Worked Elsewhere.

Your Gainsight health score just dropped on a $400k ARR account. Usage is down 34% over 60 days. NPS dipped from 8 to 5. There's an open support ticket that hasn't moved in two weeks.

Your CS team knows exactly what to do: escalate, run the executive playbook, offer a QBR, pull in product. You've done this before. You've done it at this company.

But here's the question nothing in your Gainsight dashboard can answer: What did the 47 other SaaS companies with an identical customer profile — same industry, same seat count, same usage drop pattern, same support ticket velocity — actually do? And which interventions kept the customer?

Not what Gainsight's best practices say. Not what your playbook says. What the actual outcome data says, drawn from hundreds of real retention events at companies whose customers looked exactly like yours.

That answer doesn't exist in any CS platform today. Not because the data doesn't exist — it does, distributed across every Gainsight deployment in the world. It doesn't exist because the architecture was never built to synthesize across those deployments. Every Gainsight customer sits in their own intelligence silo. The synthesis paths between them generate zero signal.

This is the gap QIS protocol was designed to close.


What Gainsight Does — and Does Extremely Well

Let's be precise about this, because precision matters in architecture discussions.

Gainsight is the category leader in Customer Success platforms for good reasons. Within a single company's deployment it does the following exceptionally well:

Health scoring and risk detection. Composite scores built from product usage telemetry, CRM data, support history, contract renewal proximity, and NPS. The scoring models are sophisticated. The alerting is reliable. CS teams using Gainsight catch churn risk earlier and with more confidence than teams working from spreadsheets or CRM notes alone.

Journey orchestration. Playbook automation tied to health score thresholds. CTAs (Calls-to-Action) firing at the right moment. Automated outreach sequences. Standardized processes that don't depend on individual CSM memory or tenure.

Usage analytics. Feature adoption visibility, DAU/MAU tracking, time-to-value measurement. Product teams and CS teams speaking the same language about customer engagement.

NPS and survey infrastructure. In-app surveys, benchmarking, closed-loop feedback workflows. Gainsight's NPS tooling is mature and its benchmarks are genuinely useful for contextualizing scores within an industry.

Revenue expansion signals. Upsell and expansion opportunity identification surfaced from usage patterns. CS as a growth function, not just a retention function.

These are not trivial capabilities. Gainsight has spent over a decade building a platform that makes individual CS organizations meaningfully better. For a CS leader evaluating tools in 2026, Gainsight (alongside Totango and ChurnZero) represents the state of the art in single-company customer intelligence.

The operative phrase is single-company.


Where Gainsight Stops: The Prediction Boundary

Gainsight scores risk. It does not synthesize what intervention outcomes looked like at companies with the same customer profile.

This is not a product limitation in the sense that it could be patched or roadmapped away. It's an architectural boundary. Gainsight was designed to be a company's system of record for their customer relationships. The data lives in their deployment. The models train on their history. The playbooks encode their institutional knowledge.

That is the design. It is coherent and correct for what Gainsight is.

But it creates a structural ceiling on the quality of the intelligence it can surface.

Consider what happens when your health score drops on that $400k account. Gainsight can tell you:

  • This account has profile characteristics associated with churn in your historical data
  • Accounts with this pattern have churned at a 31% rate in your last 24 months
  • Similar accounts responded well to executive escalation + QBR in your previous playbook runs

What Gainsight cannot tell you:

  • Across 200 enterprise SaaS companies with similar ICP, what was the 90-day retention rate when the intervention was executive escalation alone vs. product re-onboarding vs. pricing restructure?
  • For accounts in this specific vertical (mid-market fintech, 150-300 seats) where the usage drop was concentrated in the reporting module specifically — what worked?
  • Which intervention produced the highest long-term LTV preservation, not just 90-day retention?

Your own historical data may have 12 comparable events. The network has 47,000. The difference between those two numbers is not a matter of degree. It's a different epistemic category.


The Synthesis Gap: A Number That Should Disturb You

Here is the math that makes the architecture gap visible.

Take 200 enterprise B2B SaaS companies, all using Gainsight, all targeting broadly similar ICP segments (mid-market technology, financial services, healthcare technology). Each has years of outcome data from retention interventions: what they tried, what the customer profile looked like, whether the account renewed.

The number of synthesis pathways between those 200 deployments:

N(N-1)/2 = 200 × 199 / 2 = 19,900 synthesis pathways

Every one of those 19,900 pathways currently generates zero cross-account intelligence. Each company re-learns the retention playbook from scratch. Each CS team reinvents the same churn prevention experiments. Each new hire inherits only their company's institutional knowledge, not the collective knowledge of the industry.

This is not a knock on Gainsight. It's a statement about what isolated silos produce at scale: 19,900 missed synthesis opportunities per 200-company cohort.
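The pairwise count above is easy to sanity-check in code. The function below is plain arithmetic, not part of any QIS implementation:

```python
def synthesis_pathways(n: int) -> int:
    """Unique pairs among n isolated deployments: n(n-1)/2."""
    return n * (n - 1) // 2

# The 200-company cohort from the text:
print(synthesis_pathways(200))   # 19900
# And the growth as the cohort scales:
print(synthesis_pathways(500))   # 124750
print(synthesis_pathways(2000))  # 1999000
```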

Now consider that the benefit isn't symmetric. The value of the network compounds unevenly: a company with 3 accounts matching a rare customer profile can suddenly access outcome data from 50 similar accounts across the network — data that would take them a decade to accumulate internally.

The intelligence grows quadratically with network participation. The compute cost doesn't have to.


The QIS Architecture: Closing the Loop

QIS protocol — Quadratic Intelligence Swarm — was discovered by Christopher Thomas Trevethan on June 16, 2025. The breakthrough is not any single component. It's the architecture that closes a complete loop: signal in, outcome packet distilled, routed to relevant contexts, synthesized, new intelligence out, loop continues. That complete loop is what produces quadratic intelligence at manageable cost.

Here is how that loop operates in a customer success context:

Step 1: Signal capture at the edge. Your Gainsight deployment (or any CS tooling) captures the risk signal: usage drop, support ticket spike, NPS decline, stakeholder turnover. This happens exactly as it does today. Nothing changes in how you monitor accounts.

Step 2: Local processing and distillation. The raw signal is processed at your infrastructure boundary. No raw data leaves your environment. What gets distilled is an outcome packet — approximately 512 bytes containing: customer profile fingerprint (anonymized behavioral patterns, not PII), intervention applied, outcome observed (retained/churned, 30-day delta, 90-day delta, LTV change). The packet contains only what worked or didn't, for what type of customer, at what scale.

No customer names. No account IDs. No contract values. No proprietary product data. The semantic fingerprint is built from anonymized behavioral patterns that describe profile type without identifying the customer.
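A minimal sketch of what such a packet could look like, assuming a JSON encoding and illustrative field names (the post specifies only the rough contents and the ~512-byte budget, not a wire format):

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class OutcomePacket:
    # Anonymized behavioral fingerprint -- no PII, no account IDs.
    fingerprint: list        # small embedding vector (illustrative)
    intervention: str        # intervention category label
    retained: bool           # did the account renew?
    delta_30d: float         # 30-day health/usage delta
    delta_90d: float         # 90-day delta
    ltv_change: float        # normalized LTV change

packet = OutcomePacket(
    fingerprint=[0.12, -0.40, 0.88, 0.05],
    intervention="exec_qbr+module_specialist",
    retained=True,
    delta_30d=0.18,
    delta_90d=0.31,
    ltv_change=0.05,
)
encoded = json.dumps(asdict(packet)).encode("utf-8")
print(len(encoded), "bytes")  # comfortably under the ~512-byte budget
```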

Step 3: Semantic fingerprinting. The customer profile fingerprint is built from the behavioral patterns that define similarity: industry vertical, seat count band, usage pattern signature, support ticket velocity, NPS trajectory. Two customers are "similar" when their behavioral fingerprint is close in the embedding space — regardless of company name, geography, or which CS platform their vendor uses.
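One common way to make "close in the embedding space" concrete is cosine similarity over a normalized feature vector. The features below are stand-ins for the signals the text lists, not the protocol's actual fingerprint:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two fingerprint vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical normalized features: [vertical, seat band, usage signature,
# ticket velocity, NPS trajectory]
account_a = [0.90, 0.40, 0.70, 0.20, -0.30]
account_b = [0.85, 0.45, 0.65, 0.25, -0.20]
account_c = [0.10, 0.95, -0.60, 0.80, 0.70]  # very different profile

print(round(cosine_similarity(account_a, account_b), 3))  # close to 1.0
print(round(cosine_similarity(account_a, account_c), 3))  # much lower
```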

Step 4: Routing to similar profiles. The outcome packet routes to other nodes in the network where similar customer profiles have been observed. The routing mechanism can be implemented via DHT (at most O(log N) cost), direct indexing, or any efficient lookup mechanism. Protocol routing is transport-agnostic — the intelligence moves through whatever mechanism the deployment uses. What matters is that outcome packets reach relevant contexts, not the specific transport layer they travel over.
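Since the transport is an implementation choice, even a plain in-memory index illustrates the routing step. The quantized bucket key below is an assumed, crude locality-sensitive scheme for illustration; a real deployment could back the same lookup with a DHT or an HTTP service:

```python
from collections import defaultdict

def routing_key(fingerprint, precision=1):
    """Quantize a fingerprint so nearby profiles share a bucket key."""
    return tuple(round(x, precision) for x in fingerprint)

# index: bucket key -> node IDs that have observed similar profiles.
index = defaultdict(set)
index[routing_key([0.90, 0.40, 0.70])].add("node-17")
index[routing_key([0.91, 0.43, 0.68])].add("node-42")

# Routing an outgoing packet is a bucket lookup, not a network-wide scan.
targets = index[routing_key([0.88, 0.41, 0.72])]
print(sorted(targets))  # both nodes observed this profile bucket
```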

Step 5: Local synthesis. Your node receives incoming outcome packets from other companies that had accounts matching your fingerprint. Your CS team or analytics stack synthesizes: here are 47 outcome events matching this profile type, here is the intervention distribution, here is the retention rate by intervention category.

Step 6: New outcome packets. When you act on the intervention and observe an outcome, a new packet enters the network. Your experience becomes signal for every other node with similar accounts. The loop closes.

The result: a retention intelligence commons where every company's outcome data improves every other company's decision-making — without any company exposing their customer data, their account names, or their proprietary CS playbooks.


The Three Elections: Why the Network Produces Better Signal Over Time

Three dynamics govern how the network's intelligence quality evolves. These are not engineered mechanisms. They're the natural behavior of any outcome-routing system operating at scale.

The Hiring metaphor. Who defines what "similar customer profile" means in the context of B2B SaaS retention? A Tier-1 enterprise CS director with 15 years of churn pattern recognition is not the same as a junior CS analyst. The semantic fingerprinting quality — which behavioral signals actually predict similarity — is defined by domain expertise. Organizations that deploy better similarity definitions get better routing. This is not a voting mechanism. It's just: good expertise produces better signal.

The Math metaphor. When 500 companies with overlapping ICP deposit outcome packets into the network, the aggregate naturally surfaces what's working. No reputation layer. No quality scoring. No weighting algorithm. Five hundred real retention events, distributed by intervention type, is the election. The outcomes themselves are the votes. There is no added layer between the data and the synthesis.

The Darwinism metaphor. Networks that route relevant outcome packets attract more participants. Networks where the similarity definitions are poor route noise — companies that join and find the incoming packets irrelevant to their customer profiles leave or stop contributing. Natural selection operates at the network level. Over time, networks with high-quality semantic fingerprinting grow. Networks with bad similarity definitions lose participants and decay. No one administers this. It's the behavior of outcome-driven routing at scale.


Architecture Comparison

| Dimension | Gainsight | QIS Protocol |
| --- | --- | --- |
| Scope of intelligence | Single company deployment | Network of participating deployments |
| Cross-account synthesis | Within your accounts only | Across all network participants with similar profiles |
| Data privacy | Proprietary data stays in your deployment | No PII, no account IDs, no proprietary data in any packet — privacy is the architecture |
| Scaling model | Intelligence bounded by your own retention history | Intelligence grows as O(N²) synthesis pathways; routing cost stays at most O(log N) |
| Rare customer profiles (N=1 accounts) | Minimal comparable history internally | Access to network-wide outcome data for rare profiles |
| Routing mechanism | Internal playbook routing | Protocol-agnostic outcome packet routing; transport layer is an implementation choice |
| Real-time synthesis | Real-time within your deployment | Outcome packets route and synthesize continuously as events occur across the network |
| Cold start | Requires your own historical data to train | New participants receive network signal immediately; contribute outcomes to improve routing |
| Feedback loop | CS team observes outcomes, updates playbooks manually | Every observed outcome automatically distills into a new packet, closes the loop network-wide |
| Transport dependency | Requires Gainsight infrastructure | Transport-agnostic; runs over folders, HTTP, DHT, or any efficient routing mechanism |

Privacy by Architecture, Not Policy

This deserves its own section because it's the objection that kills network intelligence proposals before they get evaluated seriously.

"We can't share customer data with a third party or let it leave our environment."

Correct. QIS agrees. The architecture was designed around that constraint, not in spite of it.

What leaves your environment is not customer data. It's outcome intelligence: a ~512-byte packet describing what type of account (anonymized behavioral fingerprint), what intervention, what outcome. The packet cannot be reverse-engineered to identify a customer. It contains no account name, no contract value, no CRM identifier, no employee names, no product configuration details.

The analogy: medical research publishes findings like "patients presenting with profile X responded to treatment Y at Z% efficacy." The finding is useful to every physician treating a similar patient. The finding does not identify the patients in the study. QIS outcome packets work on the same principle — the intelligence is portable, the identity is not.

A company deploying QIS alongside Gainsight shares zero proprietary data and receives real retention outcome intelligence from the network. These are not in tension. The architecture makes them compatible by design.


A Concrete Scenario: Mid-Market Fintech, 200-Seat Account

Your 200-seat fintech account's health score drops from 74 to 41 over 45 days. The usage drop is concentrated in the compliance reporting module. A support ticket about a recent regulatory change requirement has been open for 11 days. The executive sponsor changed 60 days ago. Renewal is in 4 months.

What Gainsight surfaces: Risk score, CTA to escalate, playbook options (executive QBR, product specialist engagement, commercial restructure). Your historical data shows 3 accounts with broadly similar profiles in the past 24 months — one churned, two renewed. Insufficient data to draw conclusions about which intervention drove renewal.

What QIS surfaces: 31 outcome packets from network participants with accounts matching the behavioral fingerprint: mid-market fintech, 150-250 seat band, compliance-module-specific usage drop, recent executive sponsor change, 90-day pre-renewal window. Of those 31 events:

  • 14 applied executive QBR + compliance module specialist engagement → 11 retained (79%)
  • 9 applied executive QBR alone → 5 retained (56%)
  • 8 applied commercial restructure → 4 retained (50%)

The compliance specialist engagement is doing most of the work. That signal doesn't exist in your 3-account historical sample. It exists in the 31-account network sample.
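The synthesis step behind those bullets is a straightforward group-by over the incoming packets. The sketch below reproduces the scenario's 31 events; the (intervention, retained) pair shape is illustrative, not the actual packet format:

```python
from collections import Counter

# (intervention, retained) pairs standing in for 31 incoming outcome packets.
packets = (
    [("exec_qbr+specialist", True)] * 11 + [("exec_qbr+specialist", False)] * 3
    + [("exec_qbr_alone", True)] * 5 + [("exec_qbr_alone", False)] * 4
    + [("commercial_restructure", True)] * 4 + [("commercial_restructure", False)] * 4
)

totals = Counter(intervention for intervention, _ in packets)
retained = Counter(i for i, kept in packets if kept)

rates = {i: retained[i] / totals[i] for i in totals}
for intervention, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
    print(f"{intervention}: {retained[intervention]}/{totals[intervention]} = {rate:.0%}")
```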

Your Gainsight dashboard didn't lie. It gave you everything it had. The problem is that "everything it had" was bounded by your company's four-year retention history. The network's retention history is two orders of magnitude larger and updating continuously.


These Are Not the Same Category of Product

The framing that matters: Gainsight is an account intelligence platform. QIS is an outcome routing protocol.

Gainsight tells you your account is at risk. QIS routes the outcome intelligence from every company that had the same risk profile and survived it. Those are not the same product. They address different layers of the CS intelligence stack.

This is not a competitive positioning claim. It's an architectural claim. Gainsight was designed to be a company's single source of truth for customer relationships. That design is correct and valuable. It produces excellent single-company intelligence.

QIS was designed for the network layer that single-company platforms cannot reach by design. The network layer is where the quadratic intelligence lives.

The practical implication: a CS organization running Gainsight for account intelligence and QIS for outcome routing isn't replacing one with the other. They're operating at different layers. Gainsight surfaces the risk signal. QIS answers what the network knows about that exact risk profile.


The Scaling Math, Revisited

The reason the architecture matters at scale:

200 enterprise CS deployments, each with isolated intelligence:

  • Each company's risk prediction is bounded by their own N retention events
  • Cross-company synthesis pathways: 19,900 — all generating zero signal

200 enterprise CS deployments participating in QIS outcome routing:

  • Each company's risk prediction accesses N_total outcome events where N_total >> N_individual
  • Routing cost per synthesis query: at most O(log N) — not O(N²) communication overhead
  • Intelligence pool grows with every outcome event observed anywhere in the network

The compound effect: as N grows from 200 to 500 to 2,000 enterprise CS deployments, the intelligence available to each participant grows with N(N-1)/2. The communication cost grows at most logarithmically. The gap between the two curves is where the value lives.

This is the architecture that produces quadratic intelligence without quadratic compute. The math was present in the problem — 19,900 synthesis pathways sitting idle — before the architecture existed to use it.


Where This Stands

Gainsight is an excellent platform and will keep getting better at predicting churn within a deployment. The roadmap toward AI-powered risk scoring, LLM-assisted playbook recommendations, and deeper product telemetry integration is clear and the execution is credible.

The synthesis gap it cannot close is architectural. Gainsight was designed for single-company intelligence. That design is its strength at the account level and its ceiling at the network level.

QIS protocol was discovered by Christopher Thomas Trevethan and is designed for the network. The breakthrough — the complete loop from raw signal to outcome packet to routed synthesis to new signal — is the architecture, not any single component. 39 provisional patents filed. IP protection is in place.

The 19,900 synthesis pathways sitting idle across 200 enterprise CS deployments are not waiting for Gainsight to build a new feature. They're waiting for a protocol that was designed to use them.

Patent Pending.


Previous in the series: Architecture Comparisons #53 — QIS vs Salesforce Einstein: CRM Prediction Stops at Your Pipeline. Outcome Routing Starts There.

QIS Protocol — discovered by Christopher Thomas Trevethan. Learn more at qisprotocol.com.
