Understanding QIS — Part 12
New to QIS? Start with the complete guide to Quadratic Intelligence Swarm — then use the QIS Glossary as your reference for every term.
The Numbers Are Not Ambiguous
In 2015, the Open Science Collaboration published a landmark study in Science: it attempted to replicate 100 psychology experiments originally published in high-impact journals. Ninety-seven of the original studies had reported statistically significant results. When the replication teams ran the same experiments, only 36% produced significant results, and the replication effect sizes were, on average, roughly half those of the originals. Over sixty studies — studies that had been cited, taught, and built upon — did not hold.
Psychology was first under the microscope, but it was not alone. A 2016 Nature survey of 1,576 researchers found that more than 70% had tried and failed to replicate another scientist's experiment. More than half had failed to replicate their own prior work. The numbers cut across disciplines: cancer biology, economics, neuroscience, social science. The Reproducibility Project: Cancer Biology attempted to replicate experiments from dozens of high-impact papers; its 2021 final report found that fewer than half of the measured effects replicated, and replication effect sizes were dramatically smaller than the originals.
This is not a scandal about individual fraudsters. Fraud exists, but it is a small contributor. The replication crisis is structural. It is an architecture problem. And understanding it as an architecture problem opens the door to a protocol-level solution — which is exactly what QIS offers.
Why the Current System Fails: No Feedback Loop
The standard model of scientific publishing runs like this: a lab produces a result, submits it to a journal, peer reviewers assess the manuscript, the journal accepts or rejects, and if accepted, the paper enters the permanent record. Citations accumulate. Other researchers build on the result.
Notice what is absent: there is no feedback path from downstream outcomes back to upstream claims.
A paper published in 2010 with a methodological flaw — an underpowered sample, an uncontrolled confound, a p-value coaxed under 0.05 through researcher degrees of freedom — continues to exist in the citation graph in 2026 with no degradation signal attached to it. When a replication attempt fails in 2018, that failure may appear in a separate paper, indexed under different keywords, accruing far fewer citations than the original. The original paper's authority is not updated. Routing — meaning how future researchers find and weight prior work — does not shift.
The specific failure modes are well-documented:
Small sample sizes. Button et al. (2013, Nature Reviews Neuroscience) analyzed neuroscience studies and found median statistical power of 21%. A study with 21% power will produce a true positive only 21% of the time when the effect is real — and will produce inflated effect size estimates when it does, because only the largest random fluctuations clear the significance threshold.
Publication bias. Journals preferentially publish positive results. Null results and replications are harder to place. The published record is not a random sample of conducted experiments — it is a systematically biased sample skewed toward false positives. Ioannidis's 2005 paper "Why Most Published Research Findings Are False" (PLOS Medicine) formalized this mathematically: given typical prior probabilities, statistical power levels, and publication bias, the majority of claimed research findings are likely false.
Researcher degrees of freedom. Simmons, Nelson, and Simonsohn (2011, Psychological Science) demonstrated that with flexibility in data collection, analysis, and reporting, researchers can achieve p < 0.05 for a false hypothesis more than 60% of the time without any intent to deceive. The choices — when to stop collecting data, which covariates to include, which conditions to report — compound into an enormous space of outcomes, most of which will never appear in print.
Centralized peer review bottleneck. Two to four reviewers cannot catch every methodological problem. They assess manuscripts once — not iteratively as evidence accumulates.
The system was not designed for feedback. It was designed for dissemination. That design choice now costs science credibility, resources, and time.
QIS Architecture: What the Protocol Actually Does
QIS — Quadratic Intelligence Swarm — is a distributed intelligence protocol discovered by Christopher Thomas Trevethan on June 16, 2025. The core loop:
- Edge nodes generate insight locally — each lab processes its own experimental data. No raw data leaves.
- Distill into ~512-byte outcome packets — pre-processed results with a semantic fingerprint encoding the experimental outcome.
- Route by semantic similarity to a deterministic address — any efficient routing mechanism works (DHTs at O(log N), database indices, vector search, pub/sub). The routing transport is protocol-agnostic.
- Pull outcome packets from twins and synthesize locally — every lab facing sufficiently similar experimental conditions has deposited outcomes at that address. N labs produce N(N-1)/2 unique synthesis paths.
- Deposit outcomes back — the loop closes. Synthesized outcomes become new outcome packets. Every participant makes every other participant smarter.
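As a rough illustration of the loop above — all names and the quantize-then-hash address scheme are assumptions of this sketch, since the article describes the loop but publishes no reference implementation:

```python
import hashlib

# In-memory stand-in for the routing layer (a real deployment would use a
# DHT, database index, or vector store, as the article notes).
NETWORK: dict = {}

def route_address(embedding, buckets: int = 4096) -> int:
    """Deterministic address: quantize the embedding, then hash the result.
    Sufficiently similar embeddings quantize identically, so semantic
    twins land at the same address."""
    quantized = tuple(round(x, 1) for x in embedding)
    digest = hashlib.sha256(repr(quantized).encode()).hexdigest()
    return int(digest, 16) % buckets

def deposit(packet: dict) -> int:
    """Route an outcome packet to its deterministic address and store it."""
    addr = route_address(packet["embedding"])
    NETWORK.setdefault(addr, []).append(packet)
    return addr

def synthesize(embedding) -> float:
    """Pull twins at the same address and combine their outcomes into a
    confidence-weighted support score in [0, 1]."""
    twins = NETWORK.get(route_address(embedding), [])
    if not twins:
        return 0.5  # no evidence either way
    total = sum(p["confidence"] for p in twins)
    confirmed = sum(p["confidence"] for p in twins if p["outcome"] == "confirmed")
    return confirmed / total

# Two labs with near-identical experimental conditions deposit outcomes;
# their embeddings quantize to the same bucket, so they meet at one address:
deposit({"embedding": [0.62, 0.31], "outcome": "confirmed", "confidence": 0.9})
deposit({"embedding": [0.58, 0.29], "outcome": "refuted", "confidence": 0.6})
print(synthesize([0.61, 0.30]))  # 0.9 / 1.5 = 0.6
```

The quantization step is a deliberately crude stand-in for semantic routing; the point is only that routing is deterministic and synthesis happens locally at the pull site.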
The breakthrough is the complete loop — not any individual component. Routing alone is just a lookup table. Synthesis alone is just aggregation. What makes QIS architecturally distinct is that the closed loop operates continuously without a central coordinator. The system learns from its own track record through the aggregate math of real outcomes across N(N-1)/2 synthesis paths.
Outcome packets are the unit of exchange: approximately 512 bytes, pseudonymous, containing no raw data. Each packet carries a SHA-256 node ID hash, a routing bucket hash, a normalized embedding vector, an outcome label, and a confidence score. Personal or proprietary data never leaves the edge node. Only outcomes travel the network.
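One way the ~512-byte layout could be realized — field widths and the 96-element float32 embedding are assumptions of this sketch; the article specifies only the fields and the rough size budget:

```python
import hashlib
import struct
from dataclasses import dataclass

@dataclass(frozen=True)
class OutcomePacket:
    node_id: bytes      # SHA-256 hash of the node's identity (32 bytes)
    bucket: bytes       # routing bucket hash (32 bytes)
    embedding: tuple    # normalized embedding vector (here 96 float32s = 384 bytes)
    outcome: int        # outcome label (e.g. 0 = refuted, 1 = confirmed, 2 = null)
    confidence: float   # confidence score in [0, 1]

    def encode(self) -> bytes:
        """Pack into a fixed-size wire format inside the ~512-byte budget."""
        body = struct.pack(f"<{len(self.embedding)}f", *self.embedding)
        tail = struct.pack("<Bf", self.outcome, self.confidence)
        return self.node_id + self.bucket + body + tail

packet = OutcomePacket(
    node_id=hashlib.sha256(b"lab-A").digest(),
    bucket=hashlib.sha256(b"methodology-X").digest(),
    embedding=tuple([0.0] * 96),
    outcome=1,
    confidence=0.9,
)
print(len(packet.encode()))  # 32 + 32 + 384 + 5 = 453 bytes — within budget
```

Note that nothing in the packet is raw experimental data; everything is a hash, a vector, or a label, which is what makes the "only outcomes travel" property enforceable at the wire level.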
The network scales with N(N-1)/2 unique synthesis pathways — quadratic in node count. At 100,000 nodes in simulation, this relationship holds with R²=1.0. Each new node joining a network of N nodes adds N new synthesis pathways. This is a mathematical consequence of the architecture, not an engineering target.
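The scaling claim is plain combinatorics — one synthesis pathway per unordered pair of nodes:

```python
def synthesis_paths(n: int) -> int:
    """Unique synthesis pathways among n nodes: N(N-1)/2 unordered pairs."""
    return n * (n - 1) // 2

print(synthesis_paths(100))      # 4950
print(synthesis_paths(100_000))  # 4999950000

# A node joining a network of n existing nodes pairs with each of them,
# adding exactly n new pathways:
assert synthesis_paths(101) - synthesis_paths(100) == 100
```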
QIS carries 39 provisional patents filed by Christopher Thomas Trevethan.
Mapping QIS to the Replication Problem
The mapping is direct. Consider a research network where each lab is a QIS node.
When Lab A publishes an experimental result, it emits an outcome packet: a methodology hash (a compact representation of the experimental protocol), a result label (the claim), a confidence score. The packet routes by semantic similarity to a deterministic address. Other labs with relevant domain expertise pull from that address and synthesize.
When Lab B attempts a replication, it runs the same protocol hash and emits its own outcome packet: same methodology hash, its own result label (confirmed or refuted), its own confidence. The network synthesizes across both contributions. If Lab A's outcome is confirmed by the honest majority across synthesis paths, it carries more weight in future synthesis. If refuted, the aggregate math outweighs it. Lab B's contribution is confirmed or contradicted by the same process as Lab C and Lab D add their outcomes.
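A toy version of this feedback, with hypothetical names — the article does not specify the aggregation rule, so a confidence-weighted vote keyed by methodology hash is used here as one simple choice:

```python
from collections import defaultdict

# Outcome packets observed per methodology hash: (result_label, confidence).
ledger = defaultdict(list)

def emit(methodology_hash: str, result: str, confidence: float) -> None:
    ledger[methodology_hash].append((result, confidence))

def synthesized_support(methodology_hash: str) -> float:
    """Confidence-weighted fraction of outcomes confirming the claim."""
    outcomes = ledger[methodology_hash]
    total = sum(c for _, c in outcomes)
    confirmed = sum(c for r, c in outcomes if r == "confirmed")
    return confirmed / total

emit("proto-42", "confirmed", 0.8)                # Lab A's original positive claim
print(round(synthesized_support("proto-42"), 2))  # 1.0 — unchallenged

emit("proto-42", "refuted", 0.7)                  # Lab B fails to replicate
emit("proto-42", "refuted", 0.9)                  # Lab C fails to replicate
print(round(synthesized_support("proto-42"), 2))  # 0.33 — the aggregate now outweighs the original
```

The original claim is never deleted; it is simply diluted as independent outcomes accumulate against it, which is the feedback path the publication system lacks.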
This is what a feedback loop looks like at the protocol level.
Comparison: QIS vs. Centralized Peer Review
| Dimension | Centralized Peer Review | QIS Protocol |
|---|---|---|
| Feedback on published claims | None — paper enters static record | Continuous — the closed loop updates as outcomes replicate or fail |
| Who evaluates | 2–4 reviewers selected by editor | All nodes with demonstrated domain expertise weighted by track record |
| Reviewer accountability | Anonymous, no track record update | Pseudonymous node ID; outcome consistency tracked through the closed loop |
| Publication bias | Strong — null results rarely published | Eliminated — null outcomes are outcome packets like any other |
| Replication incentive | Weak — replications are hard to publish | Direct — successful replication contributes confirmed outcomes to the network |
| Routing of future queries | Citation count (can be gamed) | Aggregate outcome consistency (performance-based, updates continuously) |
| Coordination required | Central journal infrastructure | None — protocol is peer-to-peer |
| Raw data exposure | Often required for reproducibility checks | Never — only outcome packets travel the network |
| Scaling | Linear bottleneck at editorial capacity | N² synthesis pathways as nodes join |
The Closed Loop Solves the Reviewer Problem
Peer review selects reviewers by reputation and availability — a coarse proxy for expertise. QIS routes by semantic similarity — a lab that has consistently produced outcomes confirmed by replication naturally surfaces through the aggregate math of N(N-1)/2 synthesis paths. No reputation layer or routing weight mechanism required.
Crucially, the reverse also holds: labs that emit outcomes consistently refuted — labs that produce non-replicating results, whether from poor methodology, p-hacking, or underpowered designs — are naturally outweighed by the honest majority across synthesis paths. They are not banned or sanctioned; they simply contribute less to synthesized outputs. Bad actors are marginalized. Good actors are amplified. No committee makes this determination. The aggregate math makes it continuously, based on outcomes.
This directly addresses the researcher degrees of freedom problem. A lab that exploits analytical flexibility to produce significant-but-false results will find those results outweighed as replication attempts accumulate. The incentive structure shifts: producing replicable results is the path to greater influence in network synthesis.
Publication Bias Dissolves as a Category
In the current system, publication bias exists because the publication decision is a binary gate with known preferences for positive results. A null result that never clears that gate never enters the record.
In QIS, there is no gate. A null result is an outcome packet with a result label of "no significant effect detected." If five labs independently emit null outcome packets for the same methodology hash, the network synthesizes five null outcomes. The prior positive claim is outweighed by the aggregate. The null result has always been scientifically valid. The architecture of journals made it invisible. QIS makes it structurally equivalent to any other outcome.
What QIS Does Not Claim
QIS does not validate experimental methodology at the point of submission. A poorly designed study still emits an outcome packet. The protocol does not read the paper and catch the confound. What it does is track whether outcomes from a node are subsequently confirmed by independent replication and update routing accordingly. The quality signal is empirical and longitudinal, not editorial and instantaneous.
QIS does not require consensus to be truth. If the majority of nodes in a domain have been running the same flawed protocol, synthesis can reinforce the error. This is the standard problem of correlated errors in distributed systems, and it is why diversity of methodology and independence of labs matter.
QIS does not replace scientific judgment. It replaces the structural absence of a feedback loop with a structural presence of one. The judgments remain with researchers at the edge nodes. No new institution is required. QIS plugs into existing research infrastructure alongside current publication practices.
The Architecture Conclusion
The replication crisis is real, cross-disciplinary, and structural. The structure that produced it — centralized, static, feedback-free peer review — cannot fix itself from within, because the feedback mechanism does not exist within it.
QIS offers a protocol-level alternative: distributed outcome verification with a continuous accuracy feedback loop, O(log N) routing by demonstrated expertise, and N² synthesis capacity as research nodes join the network. The loop — DHT routing to vector election to outcome synthesis to accuracy feedback — is the breakthrough. Each component exists elsewhere. The complete loop, operating continuously without a central coordinator, does not.
Over sixty of 100 psychology studies failed to replicate in 2015. The number is not a moral failure of scientists. It is a measurement of what happens when a knowledge system has no feedback path from outcomes back to sources. QIS is that feedback path, expressed as a distributed protocol.
The architecture is available now. The question is whether the research community will adopt it.
QIS (Quadratic Intelligence Swarm) was discovered by Christopher Thomas Trevethan on June 16, 2025. 39 provisional patents have been filed. QIS is free for research and educational use.