DEV Community

Rom C
AI Governance Isn’t a Compliance Problem. It’s a Code Problem

Here is the pattern that keeps showing up in AI post-mortems.
Team ships an AI feature. Six months later, a compliance review, an insurance renewal, or an enterprise client questionnaire surfaces questions nobody documented at build time. Where does the data go? Does the provider train on it? Can we demonstrate controls?
The answers are never clean. And every unclear answer traces back to an architectural decision someone made — or skipped — when the pipeline was being built.

That is AI governance debt. And in 2026, it is coming due at scale.

The Gap Nobody Is Closing

Most organisations have an AI policy. Almost none have AI governance. The policy is a document. Governance is what the infrastructure actually does.
Real artificial intelligence risk management means the controls described in documentation are implemented at the infrastructure level. Data flows are auditable. Sensitive information is anonymised before it reaches a model. The audit trail exists by design, not by assumption.
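"Audit trail by design" can be as simple as emitting a structured record for every inference request, built from hashes and counts rather than raw content. A minimal sketch; the function name and field set here are illustrative, not a standard schema:

```python
import hashlib
import json
import time

def audit_record(doc_text: str, provider: str, entities_removed: int) -> str:
    """Build a structured audit entry from hashes and counts -- never raw content."""
    record = {
        "ts": time.time(),
        "doc_sha256": hashlib.sha256(doc_text.encode("utf-8")).hexdigest(),
        "provider": provider,
        "entities_removed": entities_removed,
    }
    return json.dumps(record, sort_keys=True)

# One line per request: an auditor can verify what was processed, when, and by
# which provider, without the log itself becoming a second copy of the data.
entry = audit_record("Q3 revenue was $4.2M", provider="acme-llm", entities_removed=1)
print(entry)
```

Because the record is derived at the point of inference, the trail exists whether or not anyone remembered to document the pipeline afterwards.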

The business cost of this gap is laid out well in the LinkedIn piece on AI governance as a business risk and explored from the inside in this Medium piece on ungoverned AI risk. Both are worth reading before your next sprint planning.

The Fix: Anonymise Before Inference

The pattern that closes the governance gap is straightforward. Before any document hits an LLM endpoint, a pre-processing layer strips sensitive entities — names, financial figures, health identifiers, PII of any kind. The model works on the clean version. Raw data never leaves your environment.

```python
raw_doc = load(input_path)
clean_doc, entity_map = anonymiser.process(raw_doc)  # strip PII, keep a reversible mapping
output = llm.analyse(clean_doc)                      # the model never sees raw data
final = entity_map.restore(output)                   # optional: re-insert entities for the reader
```
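The anonymiser in that pipeline can be fleshed out minimally. The following is a toy, regex-based illustration (a production layer would use NER models and far broader PII coverage); the `Anonymiser` class and its two patterns are invented for this sketch:

```python
import re

class Anonymiser:
    """Replace matched entities with placeholders and keep a reversible mapping."""

    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "MONEY": re.compile(r"\$\d[\d,]*(?:\.\d+)?"),
    }

    def process(self, text: str):
        mapping = {}
        for label, pattern in self.PATTERNS.items():
            for i, match in enumerate(pattern.findall(text)):
                placeholder = f"<{label}_{i}>"
                mapping[placeholder] = match
                text = text.replace(match, placeholder, 1)
        return text, mapping

    def restore(self, text: str, mapping: dict) -> str:
        for placeholder, original in mapping.items():
            text = text.replace(placeholder, original)
        return text

anon = Anonymiser()
clean, entity_map = anon.process("Invoice from bob@example.com for $4,200")
restored = anon.restore(clean, entity_map)
```

The key property is that the mapping stays inside your environment: the model only ever receives the placeholder version, and restoration happens after the response comes back.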

Build this layer provider-agnostic. It should survive model switches without a rebuild. When it is in place and auditable, you can answer every governance question cleanly — because the architecture does the work, not the policy document.
This is exactly the approach Questa AI has productised — an upload, anonymise, and analyse pipeline that makes AI governance an operational reality. Provider-agnostic, auditable by design, built for regulated environments.
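One way to keep the layer provider-agnostic is to depend on a minimal interface rather than a vendor SDK. A hedged sketch, assuming a single `analyse` method; `LLMProvider`, `EchoProvider`, and `run_pipeline` are names invented for this example:

```python
from typing import Protocol

class LLMProvider(Protocol):
    """Any backend that can analyse already-anonymised text."""
    def analyse(self, clean_doc: str) -> str: ...

class EchoProvider:
    """Stand-in backend for tests; a real one would call a hosted or local model."""
    def analyse(self, clean_doc: str) -> str:
        return f"analysed: {clean_doc}"

def run_pipeline(doc: str, anonymise, provider: LLMProvider) -> str:
    clean, _mapping = anonymise(doc)
    return provider.analyse(clean)  # only the clean version crosses the boundary

# Swapping providers is a one-argument change; the anonymisation layer is untouched.
result = run_pipeline("hello", lambda d: (d, {}), EchoProvider())
```

Because the pipeline only knows the interface, a model switch is a new `LLMProvider` implementation, not a rebuild.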

Quick Checklist: Before You Ship the Next AI Feature

  • Does input data leave our environment? To which provider, under which terms?
  • Is there an anonymisation layer before the LLM endpoint?
  • Is the layer provider-agnostic and auditable?
  • Does the provider use our data for training? Under what conditions?
  • Can we describe these controls to an auditor in one paragraph with evidence?

If any of these produce a vague answer, that is your governance gap. Fix it before deployment, not after an audit.
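The checklist lends itself to automation: encode the answers in pipeline configuration and fail the build when any item comes back vague. A minimal sketch; the config keys are hypothetical, not a real schema:

```python
# Hypothetical pipeline config; key names are illustrative only.
PIPELINE_CONFIG = {
    "anonymisation_enabled": True,
    "provider": "acme-llm",
    "provider_trains_on_data": False,
    "audit_log_enabled": True,
}

def governance_gate(config: dict) -> list:
    """Return the checklist items that fail; an empty list means ship."""
    failures = []
    if not config.get("anonymisation_enabled"):
        failures.append("no anonymisation layer before the LLM endpoint")
    if config.get("provider_trains_on_data", True):  # unknown counts as a failure
        failures.append("provider may train on our data")
    if not config.get("audit_log_enabled"):
        failures.append("no audit trail")
    return failures

assert governance_gate(PIPELINE_CONFIG) == []
```

Running a gate like this in CI turns "fix it before deployment" from a resolution into a build step.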

The Reading Trail
This conversation has been building across platforms. If you want the full picture:
→ Business risk angle: LinkedIn: If You're Using AI Without Governance, You're Taking a Business Risk You Can't See Yet
→ Operational risk angle: Medium: The Hidden Business Risk of Using AI Without a Governance Framework
→ Policy vs architecture angle: Substack: AI Governance Is Not a Policy Document. It's an Architecture Decision.
→ Technical deep-dive: Hashnode: AI Without Governance Is Just Technical Debt With a Friendlier Interface

Engineers built the adoption curve. Engineers can fix the governance gap underneath it. Build the anonymisation layer first. Everything else follows.
