<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Damian</title>
    <description>The latest articles on DEV Community by Damian (@razorglintlabs).</description>
    <link>https://dev.to/razorglintlabs</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3755094%2Fba4da1ee-8424-4f9a-8d27-f6780a7e36bb.jpeg</url>
      <title>DEV Community: Damian</title>
      <link>https://dev.to/razorglintlabs</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/razorglintlabs"/>
    <language>en</language>
    <item>
      <title>What auditors actually ask when reviewing AI &amp; OSS (and what founders miss)</title>
      <dc:creator>Damian</dc:creator>
      <pubDate>Thu, 05 Feb 2026 15:53:02 +0000</pubDate>
      <link>https://dev.to/razorglintlabs/what-auditors-actually-ask-when-reviewing-ai-oss-and-what-founders-miss-3ph2</link>
      <guid>https://dev.to/razorglintlabs/what-auditors-actually-ask-when-reviewing-ai-oss-and-what-founders-miss-3ph2</guid>
      <description>&lt;ol&gt;
&lt;li&gt;“Who is accountable when this breaks?”&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Not who wrote the code.&lt;br&gt;
Not which model you’re using.&lt;/p&gt;

&lt;p&gt;They want a named role, a decision path, and evidence that authority exists outside Slack messages.&lt;/p&gt;

&lt;p&gt;If the answer is “the team” or “we’ll decide when it happens”, you’ve already lost ground.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;“Can you prove what ran — not what you intended?”&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Design docs don’t count.&lt;br&gt;
Architectural diagrams don’t count.&lt;/p&gt;

&lt;p&gt;Auditors care about runtime reality:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;what executed&lt;/li&gt;
&lt;li&gt;with which dependencies&lt;/li&gt;
&lt;li&gt;under which configuration&lt;/li&gt;
&lt;li&gt;at that point in time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you can’t reconstruct state deterministically, you’re arguing beliefs, not facts.&lt;/p&gt;
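
&lt;p&gt;A minimal sketch of what "reconstruct state deterministically" can mean in practice. Everything here is a hypothetical illustration, not a real audit tool: the &lt;code&gt;runtime_snapshot&lt;/code&gt; name, the example config, and the dependency versions are all made up.&lt;/p&gt;

```python
# Hypothetical sketch: record what actually ran, not what was intended.
# The function name, config keys, and version numbers are illustrative.
import hashlib
import json
import platform
import sys
from datetime import datetime, timezone

def runtime_snapshot(config, dependencies):
    """Capture executed state: interpreter, deps, config, at a point in time."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "python": platform.python_version(),
        "argv": list(sys.argv),                              # what executed
        "dependencies": dict(sorted(dependencies.items())),  # which deps
        "config": config,                                    # which configuration
    }
    # A deterministic digest over the sorted record lets anyone recompute
    # and verify the evidence later, instead of taking it on trust.
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    record["digest"] = hashlib.sha256(payload).hexdigest()
    return record

evidence = runtime_snapshot(
    config={"model": "v3", "temperature": 0.2},
    dependencies={"torch": "2.3.1", "numpy": "1.26.4"},
)
```

&lt;p&gt;The digest is the point: given the same record, an auditor can recompute it and confirm the evidence has not changed since it was captured.&lt;/p&gt;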

&lt;ol start="3"&gt;
&lt;li&gt;“How do you show continuity, not perfection?”&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Perfect systems don’t exist.&lt;br&gt;
What auditors look for is graceful failure.&lt;/p&gt;

&lt;p&gt;They’ll probe:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;what happens when a model degrades&lt;/li&gt;
&lt;li&gt;when an upstream OSS dependency changes&lt;/li&gt;
&lt;li&gt;when an assumption quietly stops being true&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The absence of a failure narrative is often worse than the failure itself.&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;“Where is the evidence a non-expert can sign off on?”&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This one surprises teams.&lt;/p&gt;

&lt;p&gt;Auditors aren’t always cryptographers, ML engineers, or OSS specialists.&lt;br&gt;
They need auditor-legible artifacts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;clear PASS / RISK / FAIL outcomes&lt;/li&gt;
&lt;li&gt;traceable inputs and outputs&lt;/li&gt;
&lt;li&gt;explanations that survive handoff&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If your review requires “just trust us” or a deep technical explainer, friction goes up fast.&lt;/p&gt;
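
&lt;p&gt;As a rough illustration of an auditor-legible artifact, here is one way to map raw findings onto a PASS / RISK / FAIL verdict. The function name, thresholds, and field names are hypothetical assumptions, not a standard:&lt;/p&gt;

```python
# Hypothetical sketch of an auditor-legible check record.
# evaluate_check, the thresholds, and the field names are illustrative.
import json

def evaluate_check(name, inputs, finding_count, risk_threshold, fail_threshold):
    """Map raw findings to a verdict a non-expert can sign off on."""
    if finding_count >= fail_threshold:
        outcome = "FAIL"
    elif finding_count >= risk_threshold:
        outcome = "RISK"
    else:
        outcome = "PASS"
    return {
        "check": name,
        "inputs": inputs,            # traceable inputs
        "findings": finding_count,   # traceable output
        "outcome": outcome,          # survives handoff without a deep explainer
    }

record = evaluate_check(
    name="oss-license-scan",
    inputs={"manifest": "requirements.txt"},
    finding_count=1,
    risk_threshold=1,
    fail_threshold=5,
)
print(json.dumps(record))
```

&lt;p&gt;A non-expert reviewer only needs the &lt;code&gt;outcome&lt;/code&gt; field; the inputs and findings stay attached for anyone who wants to trace the result.&lt;/p&gt;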

&lt;ol start="5"&gt;
&lt;li&gt;“What happens six months from now?”&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Reviews aren’t snapshots anymore — they’re continuity questions.&lt;/p&gt;

&lt;p&gt;They’ll ask:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;how reviews are repeated&lt;/li&gt;
&lt;li&gt;how drift is detected&lt;/li&gt;
&lt;li&gt;how evidence stays valid over time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A one-time checklist passes once.&lt;br&gt;
A repeatable system passes organizations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What founders usually miss&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most teams prepare for questions about AI.&lt;/p&gt;

&lt;p&gt;Auditors are preparing for questions about control.&lt;/p&gt;

&lt;p&gt;That gap is where delays, scope creep, and last-minute remediation live.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final thought&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If your review story depends on intent, policy, or verbal explanation — you’re exposed.&lt;/p&gt;

&lt;p&gt;If it’s backed by deterministic artifacts and clear authority, reviews move fast.&lt;/p&gt;

&lt;p&gt;Curious how others here handle audit readiness for AI-heavy systems — especially once OSS and runtime drift enter the picture.&lt;/p&gt;

</description>
      <category>security</category>
      <category>opensource</category>
      <category>ai</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
