DEV Community

Dariusz Newecki

Closing the Loop: From Audit Violation to AI Fix in One Command

Every developer who uses static analysis tools knows the feeling. You run your linter or audit tool, get a wall of violations, and then spend 10 minutes manually figuring out what to show your AI assistant to fix them.

Copy the file path. Find the relevant class. Construct a prompt. Hope the AI has enough context.

We just eliminated that entire step in CORE.

The Problem

CORE has a constitutional governance system — 85 rules that audit the codebase for violations ranging from architectural boundaries to AI safety patterns. When the audit fails, you get output like this:

❌ AUDIT FAILED
  ai.prompt.model_required    │ 38 errors
  architecture.max_file_size  │ 13 warnings
  modularity.single_responsibility │ 11 warnings

Useful. But then what? You still have to manually translate "ai.prompt.model_required in src/will/agents/coder_agent.py line 158" into a context package that an AI can actually work with.

That translation is pure mechanical mapping. It shouldn't be human work.

The Insight

Audit tools know what is broken and where.

Context tools know what the AI needs to see to fix it.

The gap between them is just a function call.

What We Built

Part 1: core-admin context build

First, we built a command that simulates exactly what the autonomous CoderAgent sees before generating code. Not a semantic search, not a guess — the exact same pipeline:

core-admin context build \
  --file src/will/agents/coder_agent.py \
  --symbol CoderAgent \
  --task code_modification \
  --output var/context_for_claude.md

Output:

================================================================================
CORE CONTEXT PACKAGE  [Agent Simulation Mode]
================================================================================
Target file  : src/will/agents/coder_agent.py
Target symbol: CoderAgent
Task type    : code_modification
Items        : 8
Tokens (est) : ~3644
Build time   : 944ms

### 1. src/shared/context.py::CoreContext
Source : 🔍 vector search / Qdrant (semantic: 0.74)
...

### 6. src/will/agents/coder_agent.py::CoderAgent.build_context_package
Source : 🗄️  DB lookup (direct / graph)
...

Every item shows why it was included — AST force-add, DB graph traversal, or vector search with score. You can verify the AI's "view" before it touches anything.
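
To make that provenance idea concrete, here is a minimal sketch of what a provenance-tagged context item could look like. The field names and source labels are illustrative assumptions, not CORE's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ContextItem:
    """One entry in a context package, tagged with why it was included."""
    path: str                      # e.g. "src/shared/context.py"
    symbol: str                    # e.g. "CoreContext"
    source: str                    # "ast_force_add" | "db_graph" | "vector_search"
    score: Optional[float] = None  # semantic similarity, only for vector hits

    def label(self) -> str:
        """Human-readable provenance line, like the audit output above."""
        if self.source == "vector_search":
            return f"vector search (semantic: {self.score:.2f})"
        return {"ast_force_add": "AST force-add",
                "db_graph": "DB lookup (direct / graph)"}[self.source]

item = ContextItem("src/shared/context.py", "CoreContext", "vector_search", 0.74)
print(item.label())  # vector search (semantic: 0.74)
```

Carrying the "why" alongside each item is what makes the package auditable by a human before the AI sees it.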

Part 2: Audit hints

Then we added one function to the audit formatter. When the audit fails, it now prints the exact command to run for each actionable finding:

╭─────────────── AI Workflow — Next Steps ───────────────╮
│ 65 actionable location(s). Run the command below for   │
│ each, then paste the output to Claude.                 │
╰────────────────────────────────────────────────────────╯

  ERROR ai.prompt.model_required
  Line 158: direct call to 'make_request_async()' detected.

  core-admin context build \
      --file src/will/agents/coder_agent.py \
      --task code_modification \
      --output var/context_for_claude.md

  ERROR ai.prompt.output_validation_required
  Line 199: missing mandatory call(s): ['_validate_output']

  core-admin context build \
      --file src/shared/ai/prompt_model.py \
      --symbol _validate_output \
      --task code_modification \
      --output var/context_for_claude.md

The audit knows the file. It knows the rule. The mapping to task type is mechanical (ai.* → code_modification, test.* → test_generation). So it just... does it.

The Complete Workflow

core-admin code audit
  → ❌ FAILED
  → 💡 65 actionable locations
  → copy one context build command
  → run it → var/context_for_claude.md
  → paste to Claude
  → fix
  → repeat

Zero manual translation. Zero "what should I show the AI?" The audit tells you exactly what to run next.

The Meta Moment

After deploying, we ran the audit again. CORE found modularity debt in the files we had just written:

WARN  modularity.single_responsibility
src/cli/commands/check/formatters.py  ← file we just edited

WARN  modularity.single_responsibility
src/cli/resources/context/build.py    ← file we just built

And immediately printed:

core-admin context build \
    --file src/cli/resources/context/build.py \
    --task code_modification \
    --output var/context_for_claude.md

The system audited our work and told us how to fix it. That's the governance model working as designed.

Why This Matters

Most AI coding workflows are: human decides what context to provide → AI generates → human reviews.

The bottleneck is the first step. What context? Which file? Which class? How much?

CORE's constitutional audit already has that information. It found the violation, it knows where it lives. The only missing piece was surfacing it in a form that feeds directly into the next action.

This pattern is general. Any tool that produces findings with file paths and rule IDs can do this. Your linter, your security scanner, your test coverage report — they all know what's broken. The question is whether they tell you what to do next.
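
As a hypothetical illustration for a tool outside CORE: given any findings list with rule IDs and file paths (say, parsed from a linter's JSON report), generating the next-step commands is only a few lines. The command shape mirrors core-admin context build; adapt it to whatever context tool your stack actually uses:

```python
def hint_commands(findings, output="var/context_for_claude.md"):
    """Turn (rule_id, path) findings into ready-to-run context commands.

    `findings` is a list of (rule_id, file_path) tuples, as any linter
    or scanner that reports rule IDs and locations could produce.
    """
    commands = []
    for rule_id, path in findings:
        # Mechanical rule-prefix -> task-type mapping, as described above.
        task = "test_generation" if rule_id.startswith("test.") else "code_modification"
        commands.append(
            f"core-admin context build --file {path} --task {task} --output {output}"
        )
    return commands

for cmd in hint_commands([("ai.prompt.model_required", "src/will/agents/coder_agent.py")]):
    print(cmd)
```

The point is not the specific CLI — it is that the finding already contains everything needed to construct the next action.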

Code

CORE is MIT licensed and available at github.com/DariuszNewecki/CORE.

The relevant pieces:

  • src/cli/resources/context/build.py — agent simulation command
  • src/cli/commands/check/formatters.py — print_context_build_hints()

The core idea fits in about 60 lines. The _extract_symbol() function tries approaches in priority order: a structured context dict first, falling back to message parsing. _infer_task_type() maps rule prefixes to task types. The rest is just formatting.
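
A hedged sketch of that fallback chain — the real _extract_symbol() may differ, and the message regex here is an assumption based on the audit output shown earlier:

```python
import re
from typing import Optional

def extract_symbol(finding: dict) -> Optional[str]:
    """Best-effort symbol extraction from an audit finding."""
    # 1. Structured context dict, if the rule attached one.
    ctx = finding.get("context") or {}
    if ctx.get("symbol"):
        return ctx["symbol"]
    # 2. Fall back to parsing the human-readable message,
    #    e.g. "missing mandatory call(s): ['_validate_output']"
    match = re.search(r"\['([A-Za-z_][A-Za-z0-9_]*)'\]", finding.get("message", ""))
    return match.group(1) if match else None

print(extract_symbol({"message": "missing mandatory call(s): ['_validate_output']"}))
# _validate_output
```

When no symbol can be recovered, the hint simply omits --symbol and builds file-level context, so the fallback degrades gracefully.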


CORE is a constitutional AI governance system — an experiment in building autonomous development tools that remain safely bounded by human-defined constraints. Previous post: Building an AI That Follows a Constitution

CORE's PromptModel pattern was inspired by prompt engineering work by Ruben Hassid — worth following if structured AI invocation interests you.
