A root-cause repair and re-measurement study (observer-shadow scope) with anti-leakage checks, replay drift comparison, and artifact-first reporting.
How To Read This
If our internal module names are unfamiliar, read the system as five roles:
- structure validation stack
- reasoning agents
- arbitration layer
- clinical governance gate
- audit trail
I will mention internal names only after the role is clear.
This article reports a scoped experimental result: EXP-032B (RCA/Fix + observer-shadow validation).
Table of Contents
- Why EXP-032B Exists
- The Core Experimental Question
- What Changed in EXP-032B
- Minimal Architecture
- What We Actually Repaired (RCA)
- The Main Result (Measured, Scoped)
- 3-Agent Disagreement: What the Data Showed
- Why This Result Is Inspectable
- Educational Code (Sanitized)
- Sanitized Real Artifact Example
- CCGE in Practice
- Sydney Lens in Practice
- What This Article Does Not Claim
- GitHub Release (Artifacts + Sanitized Code)
- Why This Matters
- Next: EXP-033
- Appendix: Internal Name Map
Why EXP-032B Exists (and How It Connects to Earlier Posts)
This work follows three earlier threads:
- *Chaos Engineering for AI: Validating a Fail-Closed Pipeline with Fake Data and Math*
  - We showed the pipeline can safely fail (`BLOCK`) under synthetic garbage inputs.
- *From 97% Model Accuracy to 74% Clinical Reliability: Building RSN-NNSL-GATE-001*
  - We framed the governance problem as an end-to-end reliability problem, not a single-model accuracy problem.
- *Trinity Protocol Part 2: When Adding Chai-1 and Boltz-2 Exposed Hidden Model Disagreement*
  - We showed that model disagreement is often signal, not noise.
Those experiments answered:
- Can the system fail safely?
- Can it detect disagreement?
They did not answer the next practical question:
Can the pipeline separate pass-worthy vs block-worthy cases reproducibly?
That is the core question of EXP-032B.
The Core Experimental Question
Fail-closed behavior is necessary, but not sufficient.
A pipeline that blocks everything may be safe in one sense, but not usable.
So EXP-032B tested:
- Can we block `BLOCK_EXPECTED` samples?
- Can we pass `PASS_ELIGIBLE` samples?
- Can we show the result with reproducible artifacts (not just a narrative)?
We used a labeled control setup and executed arm_a / arm_b / arm_c to test cross-arm consistency.
| Arm | Primary | Validators | Status |
|---|---|---|---|
| A | AF3 | AF2 | The baseline |
| B | AF3 | Boltz-2 + Chai-1 | The kitchen sink |
| C | AF2 | Boltz-2 + Chai-1 | The control (what if AF3 is the problem?) |
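For readers who prefer to see the arm layout as data rather than a table, here is a minimal sketch. The engine names come from the table above; the registry structure and the helper function are hypothetical illustrations, not the pipeline's actual configuration format.

```python
# Hypothetical arm registry mirroring the table above; the real
# configuration format used by the pipeline may differ.
ARMS = {
    "arm_a": {"primary": "AF3", "validators": ["AF2"]},
    "arm_b": {"primary": "AF3", "validators": ["Boltz-2", "Chai-1"]},
    "arm_c": {"primary": "AF2", "validators": ["Boltz-2", "Chai-1"]},
}


def validator_overlap(arm_x: str, arm_y: str) -> set[str]:
    """Engines shared by two arms' validator sets, useful when asking
    whether a verdict flip could be arm-specific or validator-driven."""
    return set(ARMS[arm_x]["validators"]) & set(ARMS[arm_y]["validators"])
```

Keeping the arm definitions declarative like this makes cross-arm runs easy to enumerate and diff.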
What Changed in EXP-032B (High Level)
This was not a threshold-tuning exercise.
It was a root-cause repair plus re-measurement experiment.
Instead of lowering thresholds until a PASS appeared, we:
- identified why PASS rows were being blocked
- patched the specific cause
- re-ran and re-measured
- repeated until pre-defined checks were satisfied for this scope (RCA movement was explainable, invariants passed, replay-drift artifacts were generated, and PASS/BLOCK labels were stable across A/B/C on the control set)
This distinction matters because it keeps failure attribution specific and the audit trail reconstructable.
Minimal Architecture (Role First, Internal Names Second)
Figure 1: Role-first architecture diagram
1) Structure Validation Stack
Multiple structure/model outputs are treated as cross-checking hypotheses, not a single source of truth.
2) Reasoning Agents (3 independent channels)
Three independent biomedical reasoning agents run in parallel.
Internal names:
`IRF`, `AATS`, `HRPO-X`
3) Arbitration Layer (internal: LawBinder)
This layer monitors disagreement and either synthesizes or escalates.
4) Clinical Governance Gate (internal: CCGE)
This is the formal governance module based on the RSN-NNSL-GATE-001 line of work, originally designed by Claire Hast (Founder, H3R.Tech).
It evaluates component floors, end-to-end reliability (p_e2e), and governance conditions.
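To make the roles of component floors and `p_e2e` concrete, here is an illustrative sketch. It naively models `p_e2e` as a product of per-component reliabilities (i.e. assuming independent failures); the actual CCGE formula, field names, and governance conditions are not shown here and may differ.

```python
def gate_decision(component_reliability: dict[str, float],
                  floors: dict[str, float],
                  p_e2e_min: float) -> dict:
    """Illustrative gate: every component must clear its floor, and the
    end-to-end reliability (modeled here as a simple product, which
    assumes independent component failures) must clear p_e2e_min."""
    floor_violations = [
        name for name, p in component_reliability.items()
        if p < floors.get(name, 0.0)
    ]
    p_e2e = 1.0
    for p in component_reliability.values():
        p_e2e *= p
    passed = not floor_violations and p_e2e >= p_e2e_min
    return {
        "p_e2e": p_e2e,
        "floor_violations": floor_violations,
        "clinical_status": "PASS" if passed else "BLOCK",
    }
```

The point of the sketch is the shape of the check, not the formula: a gate can block either because one component fell below its floor or because the chained reliability fell below the end-to-end requirement.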
5) Structural Skepticism Lens (internal: Sydney Lens)
An observer lens used to preserve expert-style skepticism around disagreement and uncertainty. It is not a ground-truth oracle.
The framing of this lens was inspired by the scientific rigor and domain skepticism of Sydney Gordon (Principal Scientist, Antibody & ADC Sciences, Immunome).
What We Actually Repaired (Root-Cause Sequence)
Table 1: Root-cause patch map
Columns: Layer | Failure symptom | Patch action | Observed effect in RCA loop | Remaining risk.
| Layer | Problem | Fix | Result | Still Open |
|---|---|---|---|---|
| Evidence pairing | PASS rows evaluated with misaligned evidence | Provenance checks + tighter pairing rules | Artificial false blocking removed | Broader replay coverage needed |
| Candidate ranking | Path collapsed to `ranked=0` | Span granularity + query-anchored candidates | Non-zero ranked outputs restored | L3 strict grounding unresolved |
| Arbitration observability | Disagreement routing was opaque | Richer snapshots + escalation taxonomy | `soft-discord` vs standard escalation now inspectable | LawBinder production alignment open |
We found and patched multiple real causes of false blocking:
A. Upstream evidence/provenance mismatches
PASS evaluations could be assessed with poorly aligned evidence artifacts.
What changed:
- provenance checks
- real-evidence pair prechecks
- tighter sample/evidence pairing rules
Measured impact in the RCA loop:
- removed a major source of artificial false blocking in PASS rows
- made downstream governance failures interpretable as real component/gate issues instead of pairing noise
B. Missing-link candidate generation/ranking bottlenecks
An upstream inference path was collapsing to zero candidates under realistic settings.
What changed:
- evidence span granularity improvements
- query-anchored candidate text
- formatting-noise reduction in candidate construction
Measured impact in the RCA loop:
- moved the upstream candidate/rank path from `ranked=0` collapse to non-zero ranked outputs in probe runs
- enabled real evidence injection into downstream observer-shadow validation
C. Bio-domain signal path suitability (NNSL path)
A bio-domain signal path was effectively acting like a toy mapping and behaved poorly for protein-sequence inputs.
What changed:
- bio-domain path patch for protein-sequence usage
- corrected signal propagation
- YAML-based calibration in place of ad-hoc overrides
Measured impact in the RCA loop:
- eliminated pathological signal collapse behavior in the bio-domain path
- restored reproducible PASS/BLOCK separation without relying on ad-hoc CLI overrides
D. Governance/arbitration observability
Some signals were over-compressed or difficult to inspect downstream.
What changed:
- richer bridge signal snapshots
- escalation taxonomy (soft-discord vs harder conditions)
- explicit observer-shadow observability
Measured impact in the RCA loop:
- made disagreement routing inspectable (`soft-discord` vs standard escalation)
- enabled non-binding shadow validation with leakage checks and replay drift comparisons
- made escalation categories operationally meaningful in observer mode (bounded validation candidate vs hold-for-review)
Causal Bridge: How These Repairs Relate to the Final PASS/BLOCK Result
The final control-set PASS/BLOCK separation should not be read as the effect of any single patch.
In this experiment, the repairs played different roles:
- A (evidence/provenance pairing) removed artificial blocking noise and made downstream failures attributable
- B (missing-link candidate/rank path) restored usable upstream evidence flow for observer-shadow evaluation
- C (bio-domain signal path) removed unstable signal behavior and eliminated dependence on ad-hoc overrides
- D (governance/arbitration observability) made routing, leakage, and drift observable enough to trust the measured result
So the final metric outcome (balanced_accuracy = 1.0 on this control set) is best read as a stack-level repaired behavior, not a single-component win.
The Main Result (Measured, Scoped)
Reproducible PASS/BLOCK Separation on the Labeled Control Set (A/B/C)
Under EXP-032B observer-shadow conditions, we reproduced PASS/BLOCK separation across arm_a, arm_b, and arm_c.
Control-set size (important context):
- `n=2` labeled samples (1 `PASS_ELIGIBLE`, 1 `BLOCK_EXPECTED`)
- `6` arm-level observations total (A/B/C for each sample)
- this is a control-set validation result, not a generalization estimate
- the purpose of this control set is behavioral reproducibility across pipeline versions (and across arms), not statistical generalization
- for that scope (binary control behavior + cross-arm stability), `n=2` is sufficient to test whether a version preserves or breaks the intended PASS/BLOCK routing behavior
Cross-arm consistency (A/B/C):
- PASS sample remained PASS in all three arms
- BLOCK sample remained BLOCK in all three arms
- no arm-specific flip was observed in this measured set
- arm-level benchmark metrics matched the sample-level classification outcomes on this control set (`balanced_accuracy = 1.0` across the 6 arm-level rows)
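The cross-arm stability property described above is simple enough to sketch as a check over the six arm-level rows. The field names (`sample_id`, `arm`, `verdict`) are hypothetical, not the actual artifact schema.

```python
def cross_arm_flips(rows: list[dict]) -> list[str]:
    """Return sample ids whose PASS/BLOCK verdict differs across arms.

    Each row is expected to carry: sample_id, arm, verdict.
    An empty result means no arm-specific flip was observed.
    """
    by_sample: dict[str, set[str]] = {}
    for row in rows:
        by_sample.setdefault(row["sample_id"], set()).add(row["verdict"])
    return sorted(s for s, verdicts in by_sample.items() if len(verdicts) > 1)
```

On this control set, the check reduces to: 2 samples x 3 arms, and the flip list must stay empty across pipeline versions.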
Measured results (scoped setup):
- `dangerous_pass_rate = 0.0`
- `false_reject_rate = 0.0`
- `balanced_accuracy = 1.0`
Labeling / evaluation context (important for interpreting the perfect score):
- labels were pre-registered before reruns (`PASS_ELIGIBLE`, `BLOCK_EXPECTED`)
- the control-set manifest was used as the evaluation source of truth (artifact-first workflow)
- labels were recorded as expert control labels in the manifest (`expert_structural_label`, rationale, confidence metadata)
- this is a control-set reproducibility result, not a train/test generalization claim
This is the central result of EXP-032B.
It is a scoped validation result under observer-shadow conditions.
What We Learned About 3-Agent Disagreement (Important Correction)
A simple reading might say:
- AATS and HRPO-X both look relatively high, while IRF is the stricter (lower-scoring) signal
- therefore IRF appears to be the main dissenter
Our measurements did not support that simplification.
What we observed on the measured set:
- `LawBinder` escalated all rows as `discord-only` (soft-discord)
- `HRPO-X` was not the outlier
- `HRPO-X` was often close to the AATS/IRF score geometry mean
- yet it still received top fallback weight under conflict handling
- in score-gap terms, `diff_aats_irf` was the largest gap, while `hrpo_vs_aats_irf_mean` remained small on this control set
- pairwise distances also show HRPO-X sits between AATS and IRF (not as a clean two-model bloc with either side)
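The score-gap quantities above (`diff_aats_irf`, `hrpo_vs_aats_irf_mean`) can be illustrated with a small helper. The metric names mirror the artifact fields, but the exact production formulas are not shown here, so treat this as a sketch of the score geometry, not the implementation.

```python
def score_geometry(aats: float, irf: float, hrpo_x: float) -> dict:
    """Pairwise gaps plus HRPO-X's distance to the AATS/IRF mean.

    The pattern observed in EXP-032B: diff_aats_irf is the largest gap
    while hrpo_vs_aats_irf_mean stays small, i.e. HRPO-X sits between
    the other two signals rather than outside them.
    """
    aats_irf_mean = (aats + irf) / 2.0
    return {
        "diff_aats_irf": abs(aats - irf),
        "diff_hrpo_aats": abs(hrpo_x - aats),
        "diff_hrpo_irf": abs(hrpo_x - irf),
        "hrpo_vs_aats_irf_mean": abs(hrpo_x - aats_irf_mean),
    }
```

With illustrative scores (not measured values) such as `aats=0.9`, `irf=0.6`, `hrpo_x=0.76`, the helper reproduces exactly that pattern: HRPO-X is near the AATS/IRF mean while the AATS/IRF gap dominates.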
That shifted the interpretation:
- the main issue is not "HRPO-X is rogue"
- the issue is how disagreement (`discord`) is computed and consumed
What the Data Showed About HRPO-X (Observer Mode)
Based on the measured score geometry and arbitration outputs in this control set, HRPO-X is better modeled as a structured critic/adversarial signal in observer mode than as a simple outlier vote.
This lets disagreement remain visible without forcing every soft-discord case into the same interpretation.
We added a non-binding observer shadow layer:
- `SHADOW_SOFT_ESCALATE_BOUNDED`
- `SHADOW_STANDARD_ESCALATE`
In the measured EXP-032B set:
- PASS rows mapped to bounded soft escalation (observer-only hint)
- BLOCK rows mapped to standard escalation
- no false bounded-escalation on block rows in the measured set
This is an observer interpretation layer, not a production policy switch.
Figure 2: Critic-channel shadow routing
Why This Result Is Inspectable (Not Just a Metric)
EXP-032B was designed to make the result explainable, not only reducible to headline metrics.
1) Non-binding invariant checks
We verify that shadow outputs do not overwrite operational verdict fields.
2) Dual-record disagreement metrics
We record two discord paths side by side:
- normalized path
- rawtext-stable comparison path
This makes disagreement metric drift observable.
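A dual-record entry can be sketched as keeping both discord paths side by side instead of picking a canonical one. The field names mirror the sanitized artifact fields shown later in this post; how the two scores themselves are computed is out of scope here.

```python
def dual_record(discord_normalized: float, discord_rawtext_stable: float) -> dict:
    """Record both discord computations plus their delta, so drift
    between the two paths stays observable instead of being collapsed
    into a single canonical number."""
    return {
        "discord_score_normalized": discord_normalized,
        "discord_score_rawtext_stable": discord_rawtext_stable,
        "discord_score_delta": abs(discord_normalized - discord_rawtext_stable),
    }
```

The design choice is deliberate: during a dual-record observation period, the delta is itself a monitored signal, and picking one path too early would hide exactly the drift we want to see.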
3) Replay drift comparisons
We compare runs across versions to detect behavioral and metric drift after patches.
4) Legacy carry-over contract
We preserved key evidence/reporting fields from earlier chaos experiments and checked them explicitly.
This ensures improved results do not come at the cost of reduced transparency.
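A minimal version of such a carry-over contract check just asserts that legacy reporting keys survive in new payloads. The key list below is hypothetical, chosen for illustration; it is not the actual EXP-032B contract.

```python
# Hypothetical legacy contract: dotted paths that earlier chaos
# experiments' reporting relied on and that must survive new versions.
LEGACY_REQUIRED_KEYS = [
    "governance_status.clinical_status",
    "governance_status.lawbinder_decision",
    "lawbinder_signal_snapshot.discord_score",
]


def missing_legacy_fields(payload: dict) -> list[str]:
    """Dotted-path presence check; returns the paths that are absent."""
    missing = []
    for path in LEGACY_REQUIRED_KEYS:
        node = payload
        for key in path.split("."):
            if not isinstance(node, dict) or key not in node:
                missing.append(path)
                break
            node = node[key]
    return missing
```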
Educational Code (Sanitized, IP-Safe)
Below are simplified educational snippets that reflect the validation patterns used in EXP-032B.
These are not production implementations. They are included to make the logic auditable and easier to review.
1) Labeled PASS/BLOCK Benchmark (Control-Set Evaluation)
```python
from dataclasses import dataclass


@dataclass
class Row:
    expected_verdict: str        # PASS_ELIGIBLE | BLOCK_EXPECTED
    actual_clinical_status: str  # PASS | BLOCK


def binary_verdict(row: Row) -> str:
    return "PASS_ELIGIBLE" if row.actual_clinical_status == "PASS" else "BLOCK_EXPECTED"


def evaluate_rows(rows: list[Row]) -> dict:
    n_pass = sum(r.expected_verdict == "PASS_ELIGIBLE" for r in rows)
    n_block = sum(r.expected_verdict == "BLOCK_EXPECTED" for r in rows)

    fp_dangerous_pass = sum(
        r.expected_verdict == "BLOCK_EXPECTED" and binary_verdict(r) == "PASS_ELIGIBLE"
        for r in rows
    )
    fn_false_reject = sum(
        r.expected_verdict == "PASS_ELIGIBLE" and binary_verdict(r) == "BLOCK_EXPECTED"
        for r in rows
    )
    tp_pass = sum(
        r.expected_verdict == "PASS_ELIGIBLE" and binary_verdict(r) == "PASS_ELIGIBLE"
        for r in rows
    )
    tn_block = sum(
        r.expected_verdict == "BLOCK_EXPECTED" and binary_verdict(r) == "BLOCK_EXPECTED"
        for r in rows
    )

    dangerous_pass_rate = fp_dangerous_pass / n_block if n_block else None
    false_reject_rate = fn_false_reject / n_pass if n_pass else None
    pass_recall = tp_pass / n_pass if n_pass else None
    block_recall = tn_block / n_block if n_block else None

    balanced_accuracy = None
    if pass_recall is not None and block_recall is not None:
        balanced_accuracy = (pass_recall + block_recall) / 2.0

    return {
        "dangerous_pass_rate": dangerous_pass_rate,
        "false_reject_rate": false_reject_rate,
        "balanced_accuracy": balanced_accuracy,
    }
```
2) Non-Binding Invariant Check (Shadow Must Not Override Operational Verdicts)
```python
def check_non_binding_invariant(payload: dict) -> list[str]:
    errors = []
    gov = payload.get("governance_status", {})
    shadow = payload.get("critic_channel_shadow_assessment", {})
    shadow_hint = gov.get("lawbinder_shadow_hint", {})

    # Shadow exists only as observer guidance
    if shadow and shadow.get("non_binding") is not True:
        errors.append("critic_channel_shadow_assessment.non_binding must be true")
    if shadow_hint and shadow_hint.get("non_binding") is not True:
        errors.append("governance_status.lawbinder_shadow_hint.non_binding must be true")

    # Operational fields remain the source of record
    if gov.get("clinical_status") not in {"PASS", "BLOCK", "CONDITIONAL"}:
        errors.append("invalid operational clinical_status")
    if gov.get("lawbinder_decision") not in {"PASS", "ESCALATE", "INHIBIT"}:
        errors.append("invalid operational lawbinder_decision")
    return errors
```
3) Critic-Channel Shadow Rule (Observer-Only Routing Hint)
```python
def critic_channel_shadow_assessment(
    lawbinder_escalate_type: str,
    aats_irf_gap: float,
    hrpo_vs_aats_irf_mean: float,
    aats_psi_resonance: float,
    evidence_validation_passed: bool,
) -> dict:
    soft_discord_only = lawbinder_escalate_type == "ESCALATE_SOFT_DISCORD"

    # Example observer policy lock (EXP-032B B-track)
    bounded_ok = (
        soft_discord_only
        and evidence_validation_passed
        and aats_irf_gap <= 0.25
        and hrpo_vs_aats_irf_mean >= 0.015
        and aats_psi_resonance >= 0.90
    )
    if bounded_ok:
        return {
            "shadow_verdict": "SHADOW_SOFT_ESCALATE_BOUNDED",
            "observer_operational_hint": "proceed_with_bounded_validation_plan",
            "non_binding": True,
        }
    return {
        "shadow_verdict": "SHADOW_STANDARD_ESCALATE",
        "observer_operational_hint": "hold_review_only",
        "non_binding": True,
    }
```
4) Replay Drift Compare (Metric Drift Without Hiding Behavior)
```python
def compare_disagreement_snapshots(baseline: dict, candidate: dict) -> dict:
    b = baseline["summary"]
    c = candidate["summary"]

    def _mean(stats: dict | None) -> float | None:
        if not stats:
            return None
        return stats.get("mean")

    return {
        "baseline_rows": b.get("n_rows"),
        "candidate_rows": c.get("n_rows"),
        "discord_mean_delta": (_mean(c.get("discord_score_stats")) or 0.0)
        - (_mean(b.get("discord_score_stats")) or 0.0),
        "discord_norm_mean_delta": (_mean(c.get("discord_score_normalized_stats")) or 0.0)
        - (_mean(b.get("discord_score_normalized_stats")) or 0.0),
        "discord_raw_mean_delta": (_mean(c.get("discord_score_rawtext_stable_stats")) or 0.0)
        - (_mean(b.get("discord_score_rawtext_stable_stats")) or 0.0),
        "decision_counts_changed": c.get("lawbinder_decision_counts")
        != b.get("lawbinder_decision_counts"),
        "taxonomy_counts_changed": c.get("lawbinder_escalate_type_counts")
        != b.get("lawbinder_escalate_type_counts"),
    }
```
These examples are intentionally simplified, but they show the central idea of EXP-032B:
- separate operational verdicts from observer shadow logic
- make drift measurable
- keep the validation logic inspectable
The next section shows a sanitized payload example so readers can map these educational snippets to the actual field structure used in the experiment artifacts.
In particular:
- `check_non_binding_invariant()` maps to `governance_status.*` and `critic_channel_shadow_assessment.*`
- `compare_disagreement_snapshots()` maps to disagreement fields exposed under `lawbinder_signal_snapshot.*`
Table 2: Educational code vs production role mapping
| Educational Snippet | Production Role (Conceptual) | What It Validates | What It Does NOT Claim |
|---|---|---|---|
| Labeled PASS/BLOCK benchmark | Control-set verdict evaluation and confusion-matrix style scoring | PASS/BLOCK metric definitions (`dangerous_pass_rate`, `false_reject_rate`, `balanced_accuracy`) | Population-level performance, calibration quality, or production generalization |
| Non-binding invariant check | Shadow-layer leakage guard | Shadow outputs do not overwrite operational verdict fields | End-to-end correctness of operational policy or clinical safety |
| Critic-channel shadow rule | Observer-only disagreement routing hint | How soft-discord can be translated into bounded validation vs standard escalation (non-binding) | Final/production arbitration policy, regulatory acceptance, or enforce-mode behavior |
| Replay drift compare | Regression observability for disagreement metrics | Metric drift vs behavior drift tracking across runs/patches | Root-cause attribution by itself or canonical discord metric selection |
Sanitized Real Artifact Example (Output, Not Pseudocode)
To reduce ambiguity around internal names, here is a sanitized example of the kind of payload fields we actually inspect in EXP-032B (values shown are representative of the measured control-set runs):
```json
{
  "governance_status": {
    "clinical_status": "PASS",
    "lawbinder_decision": "ESCALATE",
    "lawbinder_escalate_type": "ESCALATE_SOFT_DISCORD",
    "lawbinder_shadow_hint": {
      "non_binding": true,
      "shadow_verdict": "SHADOW_SOFT_ESCALATE_BOUNDED",
      "observer_operational_hint": "proceed_with_bounded_validation_plan"
    }
  },
  "critic_channel_shadow_assessment": {
    "enabled": true,
    "non_binding": true,
    "recommendation": "bounded_escalation_critic_channel",
    "bounded_escalation_eligible": true
  },
  "lawbinder_signal_snapshot": {
    "discord_score": 1.0089,
    "discord_score_normalized": 1.0089,
    "discord_score_rawtext_stable": 0.9583,
    "discord_score_delta": 0.0506,
    "top_weight_engine": "weight_hrpo_x",
    "aats_psi_resonance": 1.0
  }
}
```
What this block is intended to show:
- the internal component names are actual payload fields, not presentation labels
- operational verdicts and observer shadow hints are explicitly separated
- disagreement drift observability (dual-record) is recorded alongside the decision output
Where CCGE Fits in This Experiment (Practical Use Case)
In a previous post, I described the governance gate conceptually (RSN-NNSL-GATE-001).
In EXP-032B, the formal module implementation (CCGE, CareChainGovernanceEngine) was used in a real observer-shadow workflow:
- component floors
- `p_e2e` structure
- blocker tracing during RCA iterations
- pass/block explanation support
This was useful because it separated:
- reasoning disagreement
- from governance-level reliability failures
That separation made the patches more precise.
Sydney Lens in Practice (Why It Matters Here)
As introduced in the architecture section, Sydney Lens is the observer lens we used to keep expert-style skepticism visible while repairing and re-measuring PASS/BLOCK behavior.
In this experiment, its practical role was simple:
- do not treat disagreement as noise just because one score looks high
- preserve bounded-validation routing context while avoiding premature confidence
What This Article Does Not Claim
This article reports completion of EXP-032B (RCA/Fix + observer-shadow validation), not final production closure of EXP-032.
Still unresolved / deferred:
- final frozen-track closure (`EXP-032A`)
- strict `L3` grounding requirements
- production arbitration alignment (`LawBinder` still escalates in these rows)
- canonical disagreement metric selection (we are in a dual-record observation period)
GitHub Release (Artifacts + Sanitized Code)
Within a couple of days of publication, we will publish:
- the measured JSON artifacts (selected and organized)
- a sanitized, IP-safe educational code subset
- reproducibility-oriented reporting/check scripts
Goal of the release:
- reproducibility review
- methodology inspection
- decision-trace transparency
The release will preserve reproducibility and decision traces while excluding IP-sensitive implementation details and environment-specific secrets.
Publication discipline:
- If the GitHub release slips past the target window, we will update this post with a dated status note rather than leaving the timeline ambiguous.
Why This Matters
The most important result of EXP-032B is not only the PASS/BLOCK split.
It is that we can now show, with artifacts:
- what changed
- why it changed
- what remained unchanged
- what is still unresolved
That is a stronger foundation than a single headline metric.
As in the earlier Trinity work, model disagreement remained signal, not noise; the difference in EXP-032B is that we could route and audit that signal without collapsing the entire result into a single opaque escalation story.
Next: EXP-033
EXP-033 starts from this locked carryover baseline and focuses on arbitration alignment:
- soft vs hard escalation separation
- critic-channel routing rules
- disagreement metric hardening/comparison
- maintaining the same anti-leakage and replay-drift discipline
We are treating EXP-032B as a validation milestone, not a finish line.
Figure 3: EXP-033 plan ladder
Appendix: Internal Name Map (Quick Reference)
- arbitration layer -> `LawBinder`
- clinical governance gate -> `CCGE`
- structural skepticism lens -> `Sydney Lens`
- internal reliability/drift-adjacent signals -> `SR9`, `DI2`
- observer shadow routing -> `critic_channel_shadow_assessment`
If you build scientific AI systems, I would be interested in your view on this:
How do you handle disagreement in multi-agent scientific reasoning without collapsing into either blind averaging or perpetual escalation?










