Kwansub Yun
I’m Not Building AI Demos. I’m Building AI Audits (ASDP + Slop Gates)

🧠 TL;DR

ASDP makes trust an artifact you can store, diff, and audit — not a story you retell.


AI projects don’t usually fail because the model is slow.
They fail because trust has no shape.

Everything looks “production-ready”… until you follow the execution path and realize the core logic is thin, missing, or quietly a placeholder.

That’s why I’ve been building AI Slop Detector (implementation-depth signals via AST + rules) and ASDP (AI Sovereign Definition Protocol).

ASDP in one line:

A spec that forces AI systems to be explained by evidence — not narrative.


The Evidence Model

An ASDP “audit unit” is not a document or a memo. It is a concrete, packageable artifact that gives trust a shape.

[Figure: ASDP as an operating unit, showing the definition, trace, verification, and drift policy components]

As shown above, an audit unit packages four key elements into a single, diff-able record:

  1. Definition: What you built (spec hash).
  2. Execution trace: What actually ran (trace hash).
  3. Verification: The gates it passed and the results (report hash).
  4. Drift policy: What constitutes failure tomorrow (policy hash).

This is the smallest unit that can be stored, audited, and compared across time and teams.
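
To make that concrete, here is a minimal sketch in Python of how such a unit could be assembled. Everything here is illustrative: `content_hash` and the input payloads are hypothetical stand-ins for the real pipeline outputs, not the actual ASDP tooling. The full spec shape follows in the next section.

```python
import hashlib
import json

def content_hash(obj) -> str:
    """Hash a canonical JSON encoding, so identical content always yields the same digest."""
    canonical = json.dumps(obj, sort_keys=True, separators=(",", ":"))
    return "sha256:" + hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical inputs; in a real pipeline these come from the build, run, and test steps.
definition = {"components": [{"name": "slop-detector", "version": "2.x"}]}
trace = {"run_id": "run_2026-01-14", "steps": ["parse_ast", "logic_density", "ghost_import_scan"]}
report = {"gates": [{"name": "logic_density_min", "result": "PASS"}]}
policy = {"tracked": ["logic_density"], "fail_cond": ["signal_delta_exceeds_baseline"]}

# Each section carries its own hash, so the whole unit can be stored and diffed.
audit_unit = {
    "definition": {**definition, "spec_hash": content_hash(definition)},
    "execution_trace": {**trace, "trace_hash": content_hash(trace)},
    "verification": {**report, "report_hash": content_hash(report)},
    "drift_policy": {**policy, "policy_hash": content_hash(policy)},
}
print(json.dumps(audit_unit, indent=2))
```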


Minimal ASDP Audit Unit (Spec)

Below is a synthetic example of the structure (v2.x). Note the clear separation of identity, evidence, and drift policy.

```yaml
protocol: ASDP
version: 2.x
artifact_type: asdp.sovdef.audit_unit

identity:
  system: "example-system"
  intent: "Detect implementation slop"
  scope: ["static_analysis", "audit_trace", "gating"]

evidence:
  definition:
    spec_hash: "sha256:REDACTED"
    components:
      - name: "slop-detector"
        version: "2.x"
      - name: "audit-gate"
        version: "1.x"

  execution_trace:
    run_id: "run_2026-01-14"
    steps: ["parse_ast", "logic_density", "ghost_import_scan"]
    trace_hash: "sha256:REDACTED"

  verification:
    gates:
      - name: "logic_density_min"
        result: "PASS"
        threshold: "REDACTED"
      - name: "placeholder_ratio_max"
        result: "PASS"
        threshold: "REDACTED"
    report_hash: "sha256:REDACTED"

  drift_policy:
    tracked: ["logic_density", "dependency_discipline"]
    fail_cond: ["signal_delta_exceeds_baseline"]
    policy_hash: "sha256:REDACTED"

attestation:
  signer: "REDACTED"
  signature: "REDACTED"
```

Key Structural Insights

  • Immutable Chain of Custody: Notice that definition, execution_trace, and verification each carry a hash. This links what was defined to what ran and what passed, creating a tamper-evident record (a verification sketch follows this list).

  • Drift Policy is an Artifact: We don't just check "did it pass now?" We bake the future failure conditions (drift_policy) directly into the artifact. This tells the Drift Watchdog exactly what to monitor post-release.

  • Identity vs. Evidence: Narrative (intent) is separated from proof (evidence). No amount of good description can override a failed trace_hash.
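
A verifier can exercise that chain mechanically: recompute each hash from the evidence it claims to cover and flag any mismatch. Here is a minimal sketch, reusing the hypothetical `content_hash` helper and `audit_unit` from the earlier example:

```python
def verify_audit_unit(unit: dict) -> list[str]:
    """Recompute each evidence hash and report mismatches (a tampered or stale record)."""
    sections = [
        ("definition", "spec_hash"),
        ("execution_trace", "trace_hash"),
        ("verification", "report_hash"),
        ("drift_policy", "policy_hash"),
    ]
    failures = []
    for section, hash_key in sections:
        # Strip the stored hash, rehash the remaining evidence, and compare.
        body = {k: v for k, v in unit[section].items() if k != hash_key}
        if content_hash(body) != unit[section][hash_key]:
            failures.append(f"{section}.{hash_key} does not match its evidence")
    return failures

assert verify_audit_unit(audit_unit) == []   # an untampered unit verifies cleanly
```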


Where This Fits in CI/CD

If you are an engineer, the simplest mental model is "trust as a build artifact": the pipeline generates the ASDP artifact and enforces it as a required check.

[Diagram: ASDP CI/CD workflow]

The Workflow:

  1. On PR: The CI pipeline generates the ASDP artifact.
  2. Gates: It runs deterministic checks (e.g., Logic Density > 0.8, no ghost imports); see the sketch after this list.
  3. Blocker: If gates fail, the merge is blocked.
  4. Release: On merge/release, the final artifact (Trace + Policy Hash) is published to the registry.
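
Here is a minimal sketch of what the gate step (2) could look like as a required check. The threshold values and the `signals.json` input are hypothetical; CI simply treats a nonzero exit code as a merge blocker:

```python
import json
import sys

# Hypothetical thresholds; in practice these would be loaded from the drift policy.
GATES = {
    "logic_density": ("min", 0.8),
    "placeholder_ratio": ("max", 0.05),
    "ghost_imports": ("max", 0),
}

def run_gates(signals: dict) -> int:
    """Evaluate every gate; return 1 (block the merge) if any gate fails."""
    exit_code = 0
    for name, (kind, threshold) in GATES.items():
        value = signals.get(name)
        ok = value is not None and (value >= threshold if kind == "min" else value <= threshold)
        print(f"{'PASS' if ok else 'FAIL'}  {name}={value} ({kind} {threshold})")
        if not ok:
            exit_code = 1
    return exit_code

if __name__ == "__main__":
    # e.g. `python gates.py signals.json`, where signals.json is emitted by the analysis step
    with open(sys.argv[1]) as f:
        sys.exit(run_gates(json.load(f)))
```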

Org-Level Governance

When every service emits this same evidence format, you move from disconnected checks to organization-wide governance.

Audit Registry + Drift Watchdog

You need a central place to store these artifacts and an automated way to monitor them after release.

  • Audit Registry: An immutable, central store for all ASDP artifacts across teams. This allows you to compare trust posture across different projects.
  • Drift Watchdog: A scheduled process that compares running systems against their baseline evidence. It detects if code behavior or policies have drifted and can trigger alerts or blocks; see the sketch after this list.
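
Here is a minimal sketch of the watchdog's core comparison, assuming a baseline snapshot fetched from the Audit Registry and a fresh measurement from the running system (both values and the 10% tolerance are hypothetical):

```python
TOLERANCE = 0.10  # hypothetical: flag any tracked signal that moves more than 10% off baseline

def check_drift(baseline: dict, current: dict, tracked: list[str]) -> list[str]:
    """Compare current signal values against the recorded baseline for each tracked signal."""
    alerts = []
    for name in tracked:
        base, now = baseline[name], current[name]
        delta = abs(now - base) / max(abs(base), 1e-9)  # relative change, guarding divide-by-zero
        if delta > TOLERANCE:
            alerts.append(f"{name}: baseline={base}, current={now} (delta={delta:.0%})")
    return alerts

# Illustrative values only.
baseline = {"logic_density": 0.86, "dependency_discipline": 0.92}
current  = {"logic_density": 0.61, "dependency_discipline": 0.93}
for alert in check_drift(baseline, current, tracked=["logic_density", "dependency_discipline"]):
    print("DRIFT:", alert)   # wire this into alerting or a release block
```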

The Governance Dashboard

Ultimately, this provides a "single pane of glass" for auditability—without needing to read code.

Stakeholders get visibility into release readiness, drift status, and a complete evidence trail. The outcome is faster, data-driven audits and fewer "just trust me" approvals.


Why This Matters

If you operate AI systems inside a team, these questions show up fast:

  • "What shipped, exactly, and when?"
  • "Under which rules was it approved?"
  • "What changed since the baseline?"

ASDP doesn’t solve everything — but it gives you a repeatable evidence format so answers to these questions aren't tribal knowledge.
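
Because the evidence has a fixed shape, "what changed since the baseline?" reduces to a field-level diff of two audit units. A minimal sketch, assuming the same section/hash layout as the spec above:

```python
def diff_audit_units(baseline: dict, candidate: dict) -> list[str]:
    """List the evidence sections whose hashes differ between two audit units."""
    hashed_sections = [
        ("definition", "spec_hash"),
        ("execution_trace", "trace_hash"),
        ("verification", "report_hash"),
        ("drift_policy", "policy_hash"),
    ]
    return [
        section
        for section, key in hashed_sections
        if baseline[section][key] != candidate[section][key]
    ]

# e.g. diff_audit_units(last_release, this_release) -> ["drift_policy"]
```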


Collaboration Note (Solo Limit)

I’ll be honest: I’m hitting the limits of what a solo builder can ship.

I have the core architecture and specs. But shipping audit-grade systems at scale needs more than one set of hands.

If you are an engineer or researcher working on:

  • CI/CD Gates: GitHub Actions, custom checks.
  • Audit Registry: Immutable storage and indexing.
  • Drift Signals: Monitoring logic/behavior drift over time.

...I’m looking for collaborators to turn ASDP into something teams can actually run.

Drop a comment below with the track you care about, or connect with me on LinkedIn.

