Damian
What auditors actually ask when reviewing AI & OSS (and what founders miss)

  1. “Who is accountable when this breaks?”

Not who wrote the code.
Not which model you’re using.

They want a named role, a decision path, and evidence that authority exists outside Slack messages.

If the answer is “the team” or “we’ll decide when it happens”, you’ve already lost ground.
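In practice, "a named role and a decision path" can live in a versioned record in the repo rather than a chat thread. A minimal Python sketch of what such a record might hold (the class, field names, and readiness rule are illustrative assumptions, not a standard):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccountabilityRecord:
    """A named, versioned answer to 'who is accountable when this breaks?'."""
    system: str                # the component under review
    accountable_role: str      # a role, not a person's Slack handle
    decision_path: list[str]   # ordered escalation chain
    authority_source: str      # where the mandate is documented

    def is_audit_ready(self) -> bool:
        # A record only counts if it names a role and a real escalation path.
        return bool(self.accountable_role) and len(self.decision_path) >= 2

# Illustrative example — system and role names are invented.
record = AccountabilityRecord(
    system="fraud-scoring-model",
    accountable_role="Head of ML Platform",
    decision_path=["on-call ML engineer", "ML Platform lead", "CTO"],
    authority_source="approved RACI matrix in the governance repo",
)
```

The point is not the data structure itself but that the answer is written down, versioned, and reviewable outside Slack.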

  2. “Can you prove what ran — not what you intended?”

Design docs don’t count.
Architectural diagrams don’t count.

Auditors care about runtime reality:

- what executed
- with which dependencies
- under which configuration
- at that point in time

If you can’t reconstruct state deterministically, you’re arguing beliefs, not facts.
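The four bullets above can be captured as a single runtime manifest, written at execution time rather than design time. A minimal sketch, assuming you supply the pinned dependency versions and the in-effect config yourself (the helper name and field layout are illustrative):

```python
import hashlib
import json
import sys
from datetime import datetime, timezone

def runtime_manifest(entrypoint: str, config: dict, dependencies: dict) -> dict:
    """Snapshot what actually ran: code hash, deps, config, timestamp."""
    with open(entrypoint, "rb") as f:
        code_hash = hashlib.sha256(f.read()).hexdigest()
    return {
        "entrypoint": entrypoint,
        "code_sha256": code_hash,            # proves which code executed
        "python": sys.version.split()[0],
        "dependencies": dependencies,        # pinned name -> version pairs
        "config": config,                    # the exact config in effect
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }

# Capture a manifest for the current script itself (example values invented).
manifest = runtime_manifest(
    entrypoint=__file__,
    config={"model": "v2.3", "threshold": 0.8},
    dependencies={"numpy": "1.26.4"},
)
print(json.dumps(manifest, indent=2))
```

A manifest like this, emitted per run and stored immutably, is what lets you argue facts instead of beliefs.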

  3. “How do you show continuity, not perfection?”

Perfect systems don’t exist.
What auditors look for is graceful failure.

They’ll probe:

- what happens when a model degrades
- when an upstream OSS dependency changes
- when an assumption quietly stops being true

The absence of a failure narrative is often worse than the failure itself.

  4. “Where is the evidence a non-expert can sign off on?”

This one surprises teams.

Auditors aren’t always cryptographers, ML engineers, or OSS specialists.
They need auditor-legible artifacts:

- clear PASS / RISK / FAIL outcomes
- traceable inputs and outputs
- explanations that survive handoff

If your review requires “just trust us” or a deep technical explainer, friction goes up fast.
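One way to produce such artifacts is to attach evidence and a score to every check and reduce it to a single verdict a non-expert can read. A hedged sketch — the thresholds, field names, and scanner name are invented for illustration:

```python
import json

def review_outcome(check: str, evidence: dict, risk_score: float) -> dict:
    """Map a check to an auditor-legible outcome with its evidence attached."""
    if risk_score < 0.2:
        outcome = "PASS"
    elif risk_score < 0.7:
        outcome = "RISK"
    else:
        outcome = "FAIL"
    return {
        "check": check,
        "outcome": outcome,        # the only field a non-expert must read
        "risk_score": risk_score,  # traceable input behind the verdict
        "evidence": evidence,      # links back to the raw artifacts
    }

# Example check — file names and tool name are hypothetical.
result = review_outcome(
    check="OSS license compliance",
    evidence={"sbom": "sbom-2024-06.json", "scanner": "example-scanner v1"},
    risk_score=0.1,
)
print(json.dumps(result, indent=2))  # outcome: "PASS"
```

The expert analysis still happens; it just ends in an artifact that survives handoff instead of a verbal explainer.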

  5. “What happens six months from now?”

Reviews aren’t snapshots anymore — they’re continuity questions.

They’ll ask:

- how reviews are repeated
- how drift is detected
- how evidence stays valid over time

A one-time checklist passes once.
A repeatable system passes organizations.
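Drift detection can be as simple as diffing the current runtime record against the baseline captured at the last review. A minimal sketch (the keys and values are illustrative):

```python
def detect_drift(baseline: dict, current: dict) -> list[str]:
    """Return the keys whose recorded values changed since the last review."""
    drifted = []
    for key in baseline:
        if current.get(key) != baseline[key]:
            drifted.append(key)
    return drifted

# Hypothetical snapshots from two points in time.
baseline = {"model": "v2.3", "numpy": "1.26.4", "config_hash": "abc123"}
current  = {"model": "v2.3", "numpy": "2.0.0", "config_hash": "abc123"}

drift = detect_drift(baseline, current)
print(drift)  # ['numpy'] -> re-run the review for the drifted surface
```

Running a diff like this on a schedule is one cheap way a one-time checklist becomes a repeatable system.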

What founders usually miss

Most teams prepare for questions about AI.

Auditors are preparing for questions about control.

That gap is where delays, scope creep, and last-minute remediation live.

Final thought

If your review story depends on intent, policy, or verbal explanation — you’re exposed.

If it’s backed by deterministic artifacts and clear authority, reviews move fast.

Curious how others here handle audit readiness for AI-heavy systems — especially once OSS and runtime drift enter the picture.
