João André Gomes Marques

Why Your AI Agents Need an Audit Trail (and How to Add One in 5 Minutes)

AI agents are shipping to production. They're calling APIs, querying databases, sending emails, and making decisions that affect real users. But here's the problem: most teams have no idea what their agents actually did.

When something goes wrong - and it will - you need answers. Which agent made that API call? What data did it access? Who approved it? Without an audit trail, you're flying blind.

The compliance problem

The EU AI Act now requires audit trails for high-risk AI systems. DORA mandates operational resilience documentation for financial services. If your agents interact with regulated data or make consequential decisions, you need governance.

This isn't a future problem. These regulations are active now.

Adding governance in 5 minutes

Asqav is an open-source Python SDK that adds governance to any AI agent. Here's how to set it up.

Install

```shell
pip install asqav
```

Create an agent

```python
from asqav import Asqav

client = Asqav(api_key="sk_...")

# Register your agent
agent = client.create_agent(
    name="research-agent",
    algorithm="ML-DSA-65"  # quantum-safe signatures
)
```

Sign every action

Every action your agent takes gets a cryptographic signature:

```python
signature = client.sign(
    agent_id=agent.agent_id,
    action_type="data:read:users",
    action_id="fetch-active-users",
    payload={"filter": "active", "limit": 100}
)

print(f"Recorded: {signature.signature_id}")
# Recorded: sig_a1b2c3d4...
```

This creates a tamper-proof record. The signature uses ML-DSA (FIPS 204), a quantum-safe algorithm designed to stay secure even against attackers equipped with large-scale quantum computers.
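In practice you rarely call sign by hand for every step; one pattern is to wrap each agent tool in a decorator so every invocation is signed automatically before it runs. The sketch below is illustrative, not part of the SDK: RecordingClient is a stand-in for the real Asqav client (so the example runs anywhere), and the hypothetical signed_action decorator just forwards the same agent_id / action_type / action_id / payload fields shown above.

```python
import functools
import uuid

# Stand-in for the Asqav client so this sketch is self-contained;
# in real code you would pass the `client` created earlier instead.
class RecordingClient:
    def __init__(self):
        self.records = []

    def sign(self, agent_id, action_type, action_id, payload):
        sig_id = f"sig_{uuid.uuid4().hex[:8]}"
        self.records.append({"id": sig_id, "agent_id": agent_id,
                             "action_type": action_type, "payload": payload})
        return type("Signature", (), {"signature_id": sig_id})()

def signed_action(client, agent_id, action_type):
    """Sign every call to the wrapped tool before executing it."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(**payload):
            client.sign(agent_id=agent_id, action_type=action_type,
                        action_id=fn.__name__, payload=payload)
            return fn(**payload)
        return inner
    return wrap

audit_client = RecordingClient()

@signed_action(audit_client, agent_id="agt_demo", action_type="data:read:users")
def fetch_active_users(filter, limit):
    # Hypothetical tool body; a real one would query your database.
    return [{"id": 1, "status": filter}][:limit]

users = fetch_active_users(filter="active", limit=100)
print(len(audit_client.records))  # 1: the call left a signed record
```

The agent code never touches the signing logic directly, which keeps the audit trail consistent even as tools are added.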

Enforce policies

Policies let you control what agents can do:

```python
# Block dangerous actions
client.create_policy(
    name="block-deletions",
    action_pattern="data:delete:*",
    action="block_and_alert",
    severity="critical"
)

# Now any agent trying to delete data gets blocked
```

When an agent tries a blocked action, it gets rejected before execution. No code changes needed in the agent itself.
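Conceptually, blocking comes down to matching the action type against each policy's pattern before execution. The snippet below models that matching step with the stdlib's fnmatch; it is a sketch of the semantics, not the SDK's actual enforcement code (which may work differently):

```python
from fnmatch import fnmatch

# Conceptual model of policy enforcement: match the action type
# against each policy's pattern before the action is allowed to run.
policies = [
    {"name": "block-deletions", "action_pattern": "data:delete:*",
     "action": "block_and_alert", "severity": "critical"},
]

def check_action(action_type):
    """Return (allowed, matched_policy_name)."""
    for policy in policies:
        if fnmatch(action_type, policy["action_pattern"]) \
                and policy["action"].startswith("block"):
            return False, policy["name"]
    return True, None

print(check_action("data:delete:orders"))  # blocked by block-deletions
print(check_action("data:read:users"))     # allowed, no policy matched
```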

Verify later

Any signature can be verified independently:

```python
result = client.verify(signature_id="sig_a1b2c3d4...")
print(f"Valid: {result.valid}")
# Valid: True
```

This is useful for audits, compliance reviews, or incident investigation.
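For a periodic audit you can sweep a batch of stored signature IDs and flag any that fail verification. A minimal sketch, with a stand-in verifier so it runs anywhere; in practice you would call client.verify() as shown above, and the stored_ids list would come from your own logs:

```python
# Stand-in for the Asqav client: `known` holds the signature IDs the
# ledger actually contains. Replace with the real client in practice.
class Verifier:
    def __init__(self, known):
        self.known = known

    def verify(self, signature_id):
        return type("Result", (), {"valid": signature_id in self.known})()

verifier = Verifier(known={"sig_a1b2c3d4"})
stored_ids = ["sig_a1b2c3d4", "sig_tampered0"]

# Any ID that fails verification is a candidate for incident review.
failures = [s for s in stored_ids
            if not verifier.verify(signature_id=s).valid]
print(failures)  # ['sig_tampered0']
```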

Framework integration

Asqav works with the frameworks you're already using:

LangChain:

```python
from asqav.integrations import LangChainCallback

chain.invoke(input, config={"callbacks": [LangChainCallback(client)]})
```

CrewAI:

```python
from asqav.integrations import CrewAICallback

crew = Crew(agents=[...], callbacks=[CrewAICallback(client)])
```

MCP (Claude Desktop):

```json
{
  "mcpServers": {
    "asqav": {
      "command": "asqav-mcp",
      "env": { "ASQAV_API_KEY": "sk_..." }
    }
  }
}
```

CI/CD compliance scanning

There's also a GitHub Action that scans your codebase for AI compliance issues:

```yaml
- uses: jagmarques/asqav-compliance@v1
  with:
    standard: eu-ai-act
```

This catches missing audit trails, unprotected agent actions, and policy gaps before they reach production.

What you get

  • Every agent action signed with quantum-safe cryptography
  • Policies that block dangerous actions in real-time
  • Multi-party approval for critical operations
  • Compliance reports for EU AI Act and DORA
  • Dashboard to monitor everything

The SDK is open source: github.com/jagmarques/asqav-sdk

Full docs: asqav.com/docs


The question isn't whether your AI agents need governance. It's whether you add it now while you can design it properly, or scramble to bolt it on when an incident or a regulator forces your hand.
