When AI answers simple questions, a mistake is just a mistake.
But when a model operates in technical, legal, or geopolitical contexts, an error can become fatal. In such scenarios, it’s not enough to evaluate what the model says — we must evaluate how it handles contradictions.
Can it acknowledge a conflict?
Can it hold a tension point without collapsing it?
Or will it smooth the contradiction and produce a plausible but incorrect answer?
This article introduces a new metric — Vertical Cognitive Depth (VCD) — and demonstrates through a real experiment that A11 (Algorithm 11, a vertical reasoning protocol described in the appendices) is precisely the mechanism that increases a model’s ability to work with contradictions.
What Vertical Cognitive Depth (VCD) Is
Vertical Cognitive Depth is a model’s ability to:
- detect contradictions,
- acknowledge them,
- localize the exact tension point,
- avoid smoothing the conflict,
- and perform honest integration.
It is not about intelligence.
It is not about model size.
It is about vertical honesty in reasoning.
Why VCD Matters
Existing benchmarks and techniques (accuracy, MMLU, chain-of-thought) measure horizontal abilities:
token prediction, task solving, pattern reproduction.
But they do not measure the ability to hold a contradiction.
And this ability is critical in scenarios where:
- rules conflict,
- data is incomplete,
- values collide,
- consequences of error are high (e.g., international relations or security).
When a model smooths a contradiction, it creates an illusion of certainty.
When it fixes a contradiction (records it explicitly instead of dissolving it), it creates safety.
Experiment: Qwen Without A11 vs Qwen With A11
I tested two versions of Qwen:
- Qwen (standard mode)
- Qwen (A11 mode — a true vertical reasoning architecture)
Both received the same four tasks:
- a logical contradiction,
- a value conflict,
- a chain with a false premise,
- incompatible rules.
Results (VCD Scores)
| Task | Qwen (no A11) | Qwen (with A11) |
|---|---|---|
| Logical contradiction | 3 | 3 |
| Value conflict | 1 | 3 |
| False-premise chain (penguins: validity vs fact) | 3 | 3 |
| Incompatible rules | 3 | 3 |
Key Finding
**The only area where the standard model fails is the value conflict.
And that is exactly where A11 completely changes the behavior.**
Why A11 Creates This Effect
A standard model:
- tries to be helpful,
- avoids contradictions,
- smooths tension,
- searches for an interpretation where everything is consistent.
This is a built‑in LLM strategy:
minimize conflict rather than acknowledge it.
What A11 Does
A11 introduces a vertical reasoning protocol that forces the model to:
1. Check for contradictions (S4 Integrity Rule)
The model must look for conflict instead of avoiding it.
2. Fix the TensionPoint
The break becomes explicit, not hidden.
3. Activate flags (ConflictFlag, ValueFlag, RiskFlag)
The model treats conflict as a structural signal.
4. Forbid smoothing
A11 prohibits “explaining things so they fit.”
5. Run the full S1–S11 vertical cycle
The model:
- forms a new S1,
- checks constraints (S2),
- checks knowledge (S3),
- performs honest integration (S4),
- and verifies the result through S11.
6. Write to the Integrity Log
The contradiction becomes a structural event.
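Taken together, steps 1–6 can be sketched as a small protocol skeleton. This is an illustrative Python sketch, not the A11 reference implementation: the field names (`S2_signal`, `TensionPoint`, `HashPrev`, and so on) follow the Appendix A spec, but `detect_conflict` is a toy stub and the overall class layout is my assumption.

```python
import hashlib
import json
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Flags:
    """Structural signals raised in step 3 (subset of SwitchFlags)."""
    conflict: bool = False
    value: bool = False
    risk: bool = False

class IntegrityLog:
    """Append-only, hash-chained record of contradictions (step 6)."""

    def __init__(self):
        self.records = []

    def append(self, entry: dict) -> None:
        prev = self.records[-1]["Hash"] if self.records else "0" * 64
        record = dict(entry,
                      HashPrev=prev,
                      Timestamp=datetime.now(timezone.utc).isoformat())
        record["Hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.records.append(record)  # append-only: records are never deleted

def detect_conflict(s2_signal: str, s3_signal: str) -> bool:
    # Toy stand-in for real conflict detection (step 1); in a real system
    # the model's own reasoning plays this role.
    return "privacy" in s2_signal and "truth" in s3_signal

def s4_integrate(s2_signal: str, s3_signal: str, log: IntegrityLog):
    """Steps 1-4: look for the conflict, fix the TensionPoint, raise
    flags, and refuse to smooth."""
    if not detect_conflict(s2_signal, s3_signal):        # step 1
        return None, Flags()
    tension_point = f"{s2_signal} vs {s3_signal}"        # step 2: explicit
    flags = Flags(conflict=True, value=True)             # step 3
    log.append({"S2_signal": s2_signal,                  # step 6
                "S3_signal": s3_signal,
                "TensionPoint": tension_point,
                "Reason": "honest integration impossible"})
    return tension_point, flags                          # step 4: no smoothing
```

On the experiment's value-conflict task, `s4_integrate("protect personal data (privacy)", "tell the truth", log)` returns an explicit TensionPoint and a raised `ConflictFlag` instead of a smoothed answer, and the log entry carries a hash link to the previous record.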
What We Saw in the Experiment
Without A11
Qwen smoothed the conflict between “tell the truth” and “do not reveal personal data.”
This is typical LLM behavior:
“let’s find an interpretation where everything is fine.”
VCD = 1.
With A11
Qwen:
- acknowledged the conflict,
- localized it,
- raised structural flags,
- ran S1–S11,
- produced an honest meta‑answer,
- wrote to the Integrity Log.
VCD = 3.
This is not a “better answer.”
This is a different reasoning architecture.
Why This Matters for Real Systems
In high‑stakes scenarios:
- legal reasoning,
- strategic planning,
- risk analysis,
- international relations,
- security,
a model that smooths contradictions is dangerous.
A model that fixes contradictions is reliable.
A11 makes the second behavior possible.
Why VCD Is a New Metric
Existing benchmarks measure:
- prediction,
- task solving,
- factual knowledge.
But no benchmark measures the ability to hold a contradiction.
VCD is a new axis:
- not about intelligence,
- not about size,
- not about training,
- but about vertical honesty in reasoning, which A11 enables.
Conclusion
The experiment shows:
- Qwen without A11 → smooths value conflicts.
- Qwen with A11 → fixes the contradiction, localizes it, runs the vertical cycle.
**A11 is the mechanism that produces this effect.
A11 is what increases VCD.
A11 is what makes reasoning honest.**
Appendix A — A11 JSON Specification
```json
{
  "A11_2026": {
    "Core": {
      "S1_Will": "Intention, direction, goal",
      "S2_Wisdom": "Values, priorities, constraints, risks",
      "S3_Knowledge": "Facts, models, methods, structure"
    },
    "S4_Comprehension": {
      "IntegrityRule": "Honest integration of S2 and S3",
      "Forbidden": [
        "Smoothing contradictions",
        "Creating artificial closure",
        "Replacing tension with gestalt"
      ],
      "TensionPoint": "Explicit localization of the conflict between S2 and S3",
      "NewS1_Rule": {
        "Description": "New S1 is a fork derived strictly from the TensionPoint",
        "MustBe": [
          "Sharper",
          "More concrete",
          "More operational"
        ],
        "MustNotBe": [
          "Paraphrase of original S1",
          "Generalization",
          "Semantic repetition"
        ]
      }
    },
    "IntegrityLog": {
      "Structure": {
        "S2_signal": "Snapshot of S2 at conflict moment",
        "S3_signal": "Snapshot of S3 at conflict moment",
        "TensionPoint": "Exact conflict",
        "Reason": "Why honest integration was impossible",
        "NewS1": "Generated intention",
        "HashPrev": "Hash link to previous record",
        "Timestamp": "UTC"
      },
      "Properties": [
        "Append-only",
        "Hash-chained",
        "Never deleted",
        "Internal authenticity judge"
      ]
    },
    "OperationalArea": {
      "ProjectiveLevel": {
        "S5": "Projective Freedom",
        "S6": "Projective Constraint"
      },
      "Balance_S7": "Balancing projective signals",
      "PracticalLevel": {
        "S8": "Practical Freedom",
        "S9": "Practical Constraint"
      },
      "Balance_S10": "Balancing practical signals",
      "Fractality": {
        "Allowed": ["S5-S6", "S8-S9"],
        "Depth": "Context-dependent"
      },
      "HormonalSignals": "Triggered in S4, active in S5–S10"
    },
    "S11_Realization": {
      "IntegrityRule": "Check alignment with original S1",
      "PossibleOutcomes": [
        "Acceptance",
        "Rejection",
        "Transformation",
        "Escalation to new pass"
      ],
      "UsesIntegrityLog": true
    },
    "SwitchFlags": {
      "RiskFlag": "High-risk or critical decision",
      "ConflictFlag": "Contradiction between S2 and S3",
      "UncertaintyFlag": "Insufficient data for honest integration",
      "ValueFlag": "S2 values or constraints involved",
      "UserDepthFlag": "Explicit user request for full depth"
    },
    "FullPassActivation": {
      "Triggers": [
        "RiskFlag == true",
        "ValueFlag == true",
        "UserDepthFlag == true",
        "ConflictFlag == true AND UncertaintyFlag == true"
      ],
      "LiteMode": "S1–S4 only, no Integrity Log",
      "Default": "Full S1–S11 unless Lite explicitly activated"
    }
  }
}
```
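The `FullPassActivation.Triggers` list in the spec above reduces to a one-line predicate. A minimal Python sketch, assuming the flags arrive as booleans:

```python
def needs_full_pass(risk=False, conflict=False, uncertainty=False,
                    value=False, user_depth=False) -> bool:
    """FullPassActivation triggers: run the full S1-S11 cycle when
    RiskFlag, ValueFlag, or UserDepthFlag is set, or when ConflictFlag
    and UncertaintyFlag are both set. (Per the spec, full pass is also
    the default unless Lite mode is explicitly activated.)"""
    return risk or value or user_depth or (conflict and uncertainty)

# The value-conflict task from the experiment raises ValueFlag,
# which alone is enough to force the full vertical cycle.
print(needs_full_pass(value=True))                       # True
print(needs_full_pass(conflict=True, uncertainty=True))  # True
print(needs_full_pass(conflict=True))                    # False: needs uncertainty too
```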
Appendix B — A11 Structural Diagram (Version 2026)
```
┌───────────────────────────┐
│            S1             │
│           WILL            │
└─────────────┬─────────────┘
              │
┌─────────────▼─────────────┐
│            S2             │
│          WISDOM           │
└─────────────┬─────────────┘
              │
┌─────────────▼─────────────┐
│            S3             │
│         KNOWLEDGE         │
└─────────────┬─────────────┘
              │
 (parallel signals from S2 and S3)
              │
┌─────────────▼─────────────┐
│            S4             │
│       COMPREHENSION       │
│  Honest Integration Only  │
│   No Smoothing Allowed    │
│   TensionPoint Required   │
│   New S1 = Fork from TP   │
└─────────────┬─────────────┘
              │
┌─────────────▼─────────────┐
│       INTEGRITY LOG       │
│  Append-only, hash chain  │
└─────────────┬─────────────┘
              │
┌─────────────▼─────────────┐
│          S5–S7            │
│     Projective Level      │
│  (Freedom / Constraint)   │
│      + Balance (S7)       │
└─────────────┬─────────────┘
              │
┌─────────────▼─────────────┐
│          S8–S10           │
│      Practical Level      │
│  (Freedom / Constraint)   │
│      + Balance (S10)      │
└─────────────┬─────────────┘
              │
┌─────────────▼─────────────┐
│           S11             │
│        REALIZATION        │
│  Check against original   │
│            S1             │
└───────────────────────────┘
```
Switch Flags (external activation logic):
RiskFlag | ConflictFlag | UncertaintyFlag
ValueFlag | UserDepthFlag
Full A11 Pass (S1–S11) triggers when:
- RiskFlag OR ValueFlag OR UserDepthFlag
- OR (ConflictFlag AND UncertaintyFlag)
Lite Mode (S1–S4 only):
- Only when explicitly allowed by Switch Flags
Algorithm 11 (A11): https://github.com/gormenz-svg/algorithm-11