Abdul Osman

The Discipline of Not Fooling Ourselves: Episode 5 — Compliance Without Causality

Why evidence that cannot explain is worse than no evidence at all.

Where We Stand

By this point in the drift, the system has achieved something that looks like stability. The right artifacts exist. Language is consistent. Ambiguity resolves quickly. Reviews proceed without friction. Audits return findings that are manageable, expected, and closed without difficulty.

Nothing obvious is missing. No alarm has sounded. No process has been formally bypassed.

But something quieter has changed — something that appears on no dashboard, surfaces in no maturity rating, and triggers no corrective action. The system no longer struggles to explain itself. It has learned, instead, to satisfy the questions being asked of it. What it has not preserved is the capacity to answer the questions that are not being asked — the ones that matter most when something unexpected eventually arrives.

This is where compliance without causality begins. Not with a decision to stop understanding. With the gradual, unremarked substitution of one kind of evidence for another.


The Shift in Evidence

Evidence, at the beginning of any serious engineering effort, serves a specific purpose: to show what is true, explain what happened, and support reasoning that allows a system to be corrected before it fails. That function does not disappear suddenly. It erodes — following naturally from everything the previous episodes described.

When artifacts replace understanding, evidence accumulates around artifacts. When interpretation becomes dominant, evidence is selected for defensibility rather than explanatory power. The shift happens at the level of daily decisions: which findings to include, which analyses to commission, which questions to close and which to defer. No single decision is indefensible. The pattern, accumulated over time, is.

The result is a system that remains fully evidenced while becoming progressively less explained. Every decision has documentation. Every gap has a justification. But the chain from evidence to understanding — the chain that allows an organization to say we know why this happened and we know what it means — has quietly broken.

Two identical filing cabinets, one containing annotated analysis, the other containing blank formatted templates. Both cabinets are full. Only one of them explains anything. (Gemini generated image)


When Evidence Stops Explaining

The difficulty is that nothing disappears. Reports are still produced. Test results still exist. Metrics still move. Traceability still links requirements to design to verification. Everything that should be present is present.

What changes is the kind of question the evidence can answer. A system experiencing this drift answers reliably: Was something done? Is there a record? Does this satisfy the requirement? These are questions about completeness, and the system has become expert at addressing them.

What the evidence no longer answers — not reliably, and increasingly not at all — is a different register of question: Why did this happen? What caused this outcome? If conditions changed, would this hold? What would have to be true for this conclusion to be wrong?

A system can be fully evidenced and still be completely unexplained. That gap is invisible from the outside. It does not appear in compliance ratings or milestone reviews. It becomes visible only when the system encounters something unanticipated — and discovers that its evidence, however complete, cannot explain what it is seeing.


Two Tests Worth Applying

There is a way to observe this shift before it produces failure. It requires no new instrument. Only a willingness to ask uncomfortable questions.

The first test is counterfactual: remove the evidence and ask whether the decision would change. If the decision would remain the same, the evidence was never explanatory. It was confirmatory, assembled after the conclusion, organized around the position, present for the record rather than for the reasoning.

A second test is equally clarifying. Identify two opposing decisions on the same question. If both can be supported with equally valid evidence drawn from the same system, then evidence is no longer functioning as a window onto reality. It is functioning as a resource — deployed in support of positions already held, adjusted to whatever conclusion the moment requires.

Neither test is comfortable to apply honestly. Both are entirely practical.


When Causality Is Hard

A qualification is necessary here, and it is not minor. In complex engineering systems, causality is not always available. Outcomes emerge from interactions across timescales that make direct observation impossible. Root causes are reconstructed after the fact, shaped by the frameworks used and the questions thought to matter at the time.

The failure being described is not the absence of causal explanation. It is more specific: the moment when the difficulty of obtaining causal explanation becomes the reason to stop seeking it. Not through a decision anyone would defend if asked to articulate it. Through the accumulation of choices that treat explanatory difficulty as a terminal condition rather than a challenge to be sustained.

The aspiration to causal understanding fades before the capability to pursue it does. That sequencing matters. The organization still has the tools. It has simply stopped directing them at questions that might produce answers it is not prepared to act on.


The Cost

At this stage, the system feels strong. Evidence exists for everything. Questions have answers. Audits are satisfied.

But the nature of the certainty has changed. Two conditions, easily confused at the surface, produce entirely different organizational fates.

The first is uncertainty from not knowing — gaps in evidence, unresolved questions, acknowledged limits. This is uncomfortable. It generates friction. But it is correctable, because the gap is visible and the direction of inquiry is clear.

The second is certainty from not questioning — a system organized around evidence it already has, selected for defensibility, no longer directing genuine inquiry toward what that evidence cannot yet answer. This condition is stable. It satisfies every external check. And it cannot be corrected from within, because the mechanism that would surface the need for correction has been quietly decommissioned.

Ignorance can be addressed. False confidence resists correction precisely because it does not register as a problem. It registers as maturity.

An illuminated instrument panel reading nominal against a dark window where the terrain it was meant to measure has changed beyond what the instruments reflect. The instruments are working. The connection is not. (Gemini generated image)


Closing

The discipline of not fooling ourselves does not require perfect evidence. In complex systems, perfect evidence is not available, and demanding it is its own form of self-deception — a standard set high enough to excuse the absence of genuine inquiry.

What it requires is more specific: that evidence remains functionally connected to explanation. That explanation remains open to challenge. That the question what caused this? is not replaced — even under time pressure, even under audit pressure, even when the answer is genuinely difficult — by the question can this be defended?

A system without evidence is forced to remain uncertain. Uncertainty, whatever its discomforts, keeps inquiry alive.

A system with evidence that cannot explain has replaced the need to understand with the appearance of having understood. It is convinced it can see.

The conviction is the problem.


Next Episode Preview

What happens when a system convinces itself it is performing perfectly — and everyone else agrees? Episode 6 will explore how dashboards, metrics, and “compliance theater” can create the illusion of competence, and what to look for when maturity is mostly a story the system tells itself.


The situations described are composites of recurring patterns and are not accounts of any specific organization.

🔖 I write about corporate culture, engineering discipline, process maturity, Automotive SPICE, quality, and testing. My focus is simple: how organizations know that what they claim is true, and how they avoid mistaking compliance for competence. If you care about building engineering systems that are resilient, evidence-based, and intellectually honest, follow along.

© 2026 Abdul Osman. All rights reserved. You are welcome to share the link to this article on social media or other platforms. However, reproducing the full text or republishing it elsewhere without permission is prohibited.
