Anuraj S.L
I Built a CLI That Catches What AI Coding Tools Break

I've been using AI tools (Claude, Antigravity, Kiro, Copilot, Cursor) to build software for the past year. They're incredible at writing code but terrible at remembering what depends on what.

Here's a scenario every AI-assisted developer knows:

You ask the AI to refactor your user model.

It does a great job. Clean code, good types, well-structured.

What it doesn't know is that six other files import from that model. Your auth service, your cart controller, your API docs, your validation middleware. All of them still reference the old interface. The AI has no idea they exist.

You don't notice either. Not until something breaks in production three sessions later.

The pattern I kept seeing:
After about 40 development sessions, I started tracking these failures.

They fell into three buckets:

  1. Anchor drift. A core file changes, but everything that depends on it stays stale. The AI modified the source of truth without updating the consumers.

  2. Context loss. Session crashes, token limits hit, or I simply started a new day. The AI had zero memory of what we decided yesterday, what was half-finished, or which files were in a fragile state.

  3. Silent regressions. Tests quietly removed. Files growing past 500 lines. Documentation falling weeks behind the code. Nobody checking because there was no system to check.

These aren't bugs in the AI. They're structural problems. The AI is stateless. It can't track cross-file dependencies across sessions. That's not its job. But someone needs to.

So I built TRACE

TRACE is a CLI tool that enforces structural coherence in AI-augmented codebases. It's not a linter. It's not a test runner. It's a system that understands which files are your sources of truth (anchors), which files depend on them (consumers), and whether everything is still in sync.

```shell
npm install -g trace-coherence
```

Here's what a typical session looks like:

```shell
# Start of session
$ trace gate start

━━━ Start Gate — MyProject ━━━

✓ TRACE state exists
✓ Baseline tests passing (37/37)
✓ No unresolved debt (0/5)
✓ Integrity checksums verified
✓ Config validation passed
✓ AI context generated

GATE PASSED — Session open.
```

That last line, "AI context generated", creates a file called .trace/AI_CONTEXT.md that contains only what's relevant to this session: which anchors exist, which consumers depend on them, any outstanding debt, current plan items. Your AI tool reads this and has focused context instead of fumbling through the whole project.
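I won't reproduce the file's exact format here, but based on what it contains, you can picture something in this spirit (a hypothetical sketch, not TRACE's actual output):

```markdown
# AI Context — MyProject

## Anchors
- user_model → src/models/user.ts
  - consumers: auth.service.ts, user.controller.ts, validation.ts, user.routes.ts, docs/api/users.md

## Outstanding debt
- (none)

## Plan
- [in progress] Auth refactor
```

The point is scope: the AI reads a few hundred tokens of curated state instead of re-deriving the project's structure every session.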

The anchor-consumer model
The core concept is simple. In trace.yaml, you declare which files are anchors (sources of truth) and which files consume them:

```yaml
anchors:
  - id: user_model
    file: src/models/user.ts
    consumers:
      - src/services/auth.service.ts
      - src/controllers/user.controller.ts
      - src/middleware/validation.ts
      - src/routes/user.routes.ts
      - docs/api/users.md
```

Now TRACE knows the dependency graph. When user.ts changes but auth.service.ts doesn't, that's a coherence violation. The gate end check catches it:
```shell
$ trace gate end

━━━ End Gate — MyProject ━━━

✗ Consumer sync: user_model anchor modified,
  but 3 consumers not updated:

  • src/services/auth.service.ts
  • src/middleware/validation.ts
  • docs/api/users.md

GATE BLOCKED — Fix consumer drift before closing.
```

This is what makes TRACE different from a linter. A linter checks syntax. TRACE checks whether your changes are structurally complete. Did you update everything that needed updating? Did you leave any file in a state that contradicts another file?
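Conceptually, the consumer-sync check is simple. Here's my own minimal illustration of the idea using file modification times — a sketch, not TRACE's actual implementation, which works from trace.yaml and tracked session state:

```python
import os
import tempfile

def stale_consumers(anchor: str, consumers: list[str]) -> list[str]:
    """Return consumers last modified before the anchor — likely drift."""
    anchor_mtime = os.path.getmtime(anchor)
    return [c for c in consumers if os.path.getmtime(c) < anchor_mtime]

# Demo with temporary files standing in for a real project.
with tempfile.TemporaryDirectory() as root:
    anchor = os.path.join(root, "user.ts")
    consumer = os.path.join(root, "auth.service.ts")
    for path in (anchor, consumer):
        open(path, "w").close()
    # Pretend the consumer was last touched before the anchor changed.
    os.utime(consumer, (1000, 1000))
    os.utime(anchor, (2000, 2000))
    print(stale_consumers(anchor, [consumer]))  # one-element list: the stale consumer's path
```

Timestamps alone would produce false positives (touching a file isn't updating it), which is presumably why a real tool needs checksums and per-session state rather than mtimes.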

Two ways to start
New project: trace init
Creates trace.yaml and the .trace/ directory. All gates default to "block" mode. Full enforcement from the start.

Existing project: trace scan
This is where it gets interesting. trace scan does a 4-phase analysis of your codebase:

  1. Scans all files and identifies likely anchors (files with many importers)
  2. Maps consumer relationships from import/require statements
  3. Detects existing test infrastructure and quality tools
  4. Auto-calibrates complexity thresholds based on your actual codebase

It generates a trace.yaml with everything pre-configured. Gates default to "warn" mode so nothing blocks your existing workflow. This is the "Clean as You Code" approach borrowed from SonarQube: pre-existing issues get a baseline pass, new code is fully enforced.
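The baseline mechanic is easy to sketch: record every pre-existing finding once, then only enforce findings that aren't grandfathered. This is my own illustration of the pattern, not TRACE's code:

```python
def new_issues(found: set[tuple[str, str]],
               baseline: set[tuple[str, str]]) -> set[tuple[str, str]]:
    """Keep only (file, rule) findings that aren't grandfathered in the baseline."""
    return found - baseline

# The legacy violation is baselined; only the new file's violation is enforced.
baseline = {("src/legacy.ts", "max-lines")}
found = {("src/legacy.ts", "max-lines"), ("src/new.ts", "max-lines")}
print(new_issues(found, baseline))  # {('src/new.ts', 'max-lines')}
```

The subtlety in practice is keeping the baseline stable as files move and rules change, which is where the calibration step earns its keep.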

What else it does
22 commands total. Here are the ones I use most:

```shell
trace impact user_model     # Show blast radius before coding
trace checkpoint            # Crash recovery + auto-observation
trace plan add "Auth refactor" --priority high
trace plan move ITEM-003 done
trace plan release v1.2.0   # Auto-generate release notes
trace ci --json             # PR-scoped analysis for CI/CD
trace license               # Dependency license compliance check
trace validate              # Config check with "did you mean?" typo detection
```

The planning system is a YAML-based Kanban board. No infrastructure. No SaaS. Just a file in your repo that tracks what's todo, in progress, done, or deferred. trace plan release turns completed items into formatted release notes.
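I haven't shown the plan file's schema, but since it's plain YAML in the repo, picture something in this spirit (field names here are hypothetical):

```yaml
# Hypothetical sketch of a YAML Kanban board, not the actual schema
items:
  - id: ITEM-003
    title: Auth refactor
    priority: high
    status: done        # todo | in_progress | done | deferred
```

Because it's just a file, it diffs, merges, and reviews like any other code — no separate tracker to keep in sync.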

The CI command only checks files changed in the current PR. It can output JSON and generate GitHub PR comments automatically. No need to run full-project analysis on every push.
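Wiring that into GitHub Actions could look roughly like this — a sketch under my own assumptions about the workflow, using only the install and CI commands shown above:

```yaml
# .github/workflows/trace.yml — sketch, not an official workflow
name: trace-coherence
on: pull_request
jobs:
  coherence:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0        # full history so the PR's changed files can be diffed
      - uses: actions/setup-node@v4
      - run: npm install -g trace-coherence
      - run: trace ci --json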

Zero network calls. Runs entirely offline.

It works with any language (TypeScript, Python, Go, Java, Rust). Any AI tool (Claude, Copilot, Cursor, ChatGPT). Any CI system (GitHub Actions, GitLab CI, Jenkins).

What it costs in tokens
About 1,000 to 2,000 tokens per session for reading the AI context file and running gates. But it prevents the 50,000+ token debugging sessions that happen when drift goes undetected. The math works out heavily in your favor.

What I learned building it
The biggest insight: AI tools are excellent at local reasoning (this file, this function) but have no mechanism for global reasoning (across files, across sessions). TRACE fills that gap. It's not competing with the AI. It's providing the structural memory that AI can't maintain on its own.

The second insight: "Clean as You Code" is the only adoption strategy that works for existing projects. Nobody will stop everything to retrofit coherence checks. But if you only enforce rules on new code, people adopt it naturally because it never blocks them on old problems.

Try it
```shell
npm install -g trace-coherence

cd your-project
trace scan
trace gate start
```

GitHub: https://github.com/anurajsl/trace

I built this because I needed it. If you're using AI tools to write code and have ever been burned by cross-file drift, give it 5 minutes.

Happy to answer questions in the comments.
