DEV Community

Chairman Lee

Your AI Agent Has No Audit Trail. Here Is How I Fixed That.

The Problem

AI agents are powerful — but who's watching them? When Claude Code edits your files, runs commands, or makes decisions, there's no tamper-proof record of what happened.

If something goes wrong, you can't trace what the agent did or why. Debugging becomes guesswork. And when regulators ask for proof? "The AI did it" isn't an acceptable answer.

The Solution: InALign

I built InALign, an open-source MCP server that creates cryptographic audit trails for AI agents.

How it works

  1. Install in 30 seconds: pip install inalign-mcp && inalign-install --local
  2. Every action is recorded with SHA-256 hashing + Ed25519 signatures
  3. Each record links to the previous one — if anyone modifies a single record, the entire chain breaks
  4. View everything in an interactive HTML dashboard: inalign-report
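The chaining in steps 2 and 3 can be sketched in a few lines of Python. This is a simplified illustration of the general technique, not InALign's actual implementation, and the Ed25519 signing step is omitted:

```python
import hashlib
import json

def append_record(chain, action):
    """Append an action record whose hash covers the previous record's hash.

    Because each hash is computed over the previous record's hash, editing
    any earlier record invalidates every record after it.
    (InALign additionally signs each record with Ed25519; not shown here.)
    """
    prev_hash = chain[-1]["hash"] if chain else "0" * 64  # genesis sentinel
    record = {"action": action, "prev_hash": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps({"action": action, "prev_hash": prev_hash}, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)

chain = []
append_record(chain, "edit app.py")
append_record(chain, "run pytest")
# Each record's prev_hash equals the previous record's hash.
assert chain[1]["prev_hash"] == chain[0]["hash"]
```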

Key design decisions

  • Fully local — all data stays on your machine. No servers, no accounts, no data collection.
  • Open source — the audit tool itself is auditable. You don't have to trust us.
  • Zero cost to scale — no cloud infrastructure needed. 10 users or 100,000 users, same $0 server cost.

What Gets Recorded

Every tool call, file access, and decision your AI agent makes is automatically captured in a hash chain:

  • User commands — what prompt triggered the action
  • Tool calls — every tool invocation with inputs and outputs
  • File operations — reads and writes with full context
  • Decisions — agent reasoning captured for accountability

Each record includes a SHA-256 hash of the previous record, creating a chain where tampering is mathematically detectable.
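To make the tamper-detection claim concrete, here is a hypothetical verifier that recomputes every hash in such a chain. Again, this is a sketch of the general technique under my own record format, not InALign's code:

```python
import hashlib
import json

def record_hash(action, prev_hash):
    """SHA-256 over a canonical JSON encoding of the record body."""
    return hashlib.sha256(
        json.dumps({"action": action, "prev_hash": prev_hash}, sort_keys=True).encode()
    ).hexdigest()

def verify_chain(chain):
    """Return the index of the first broken record, or -1 if the chain is intact."""
    prev = "0" * 64  # genesis sentinel
    for i, rec in enumerate(chain):
        if rec["prev_hash"] != prev or rec["hash"] != record_hash(rec["action"], rec["prev_hash"]):
            return i
        prev = rec["hash"]
    return -1

# Build a tiny two-record chain, then tamper with the first record.
chain = []
prev = "0" * 64
for action in ("read config.py", "run pytest"):
    h = record_hash(action, prev)
    chain.append({"action": action, "prev_hash": prev, "hash": h})
    prev = h

assert verify_chain(chain) == -1       # intact
chain[0]["action"] = "read secrets.py"
assert verify_chain(chain) == 0        # tampering detected at record 0
```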

The Dashboard

Run inalign-report to open a 4-tab interactive dashboard:

  • Overview — session stats, Merkle root, chain validity
  • Provenance Chain — every recorded action with expandable details
  • Session Log — full conversation transcript with role filtering
  • AI Analysis — LLM-powered security analysis (Pro feature)
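The Merkle root shown in the Overview tab is a standard construction: hash the record hashes pairwise until a single digest remains, so any changed record changes the root. A minimal sketch, assuming a simple duplicate-last-leaf scheme for odd counts (InALign's exact tree layout may differ):

```python
import hashlib

def merkle_root(leaf_hashes):
    """Fold a list of record hashes into one root digest via pairwise SHA-256."""
    if not leaf_hashes:
        return hashlib.sha256(b"").hexdigest()
    level = list(leaf_hashes)
    while len(level) > 1:
        if len(level) % 2:              # odd level: duplicate the last hash
            level.append(level[-1])
        level = [
            hashlib.sha256((level[i] + level[i + 1]).encode()).hexdigest()
            for i in range(0, len(level), 2)
        ]
    return level[0]
```

Comparing a stored root against a freshly recomputed one gives a single-value integrity check over the whole session.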

Who is this for?

  • Developers using Claude Code, Cursor, or any MCP-compatible agent
  • Teams that need compliance/audit trails for AI actions (EU AI Act is coming Aug 2026)
  • Security-conscious developers who want to know exactly what their AI agent did
  • Anyone who believes AI accountability should be a default, not an afterthought

Quick Start

pip install inalign-mcp && inalign-install --local

That's it. No signup, no account, no server. Your agent's actions are now recorded with cryptographic verification.

Full documentation: inalign.dev/guide

The Bigger Picture

AI agents are getting more powerful every week. They edit production code, run shell commands, access sensitive files. But the governance infrastructure hasn't kept up.

I believe cryptographic provenance should be a standard layer for every AI agent — not something you bolt on after an incident.

InALign is my attempt to make that happen. It's open source, it's free, and it runs entirely on your machine.

I'd love your feedback. Is this something you'd actually use?

GitHub: github.com/Intellirim/inalign
