DEV Community

João André Gomes Marques

How to make your AI agent accountable in 60 seconds

You built an AI agent. It calls APIs, reads databases, sends emails.

But can you prove what it did yesterday? If something goes wrong, can you show exactly which actions it took and in what order?

Most teams log to stdout and call it a day. That works until an auditor asks for tamper-evident proof.

Here is the fastest way to add real accountability:

pip install asqav
import asqav

asqav.init(api_key="sk_...")
agent = asqav.Agent.create("my-agent")

# Every action gets a quantum-safe signature
agent.sign("email:send", {"to": "client@example.com"})
agent.sign("db:query", {"table": "users", "rows": 150})
agent.sign("api:openai", {"model": "gpt-4", "tokens": 500})

That is it. Each action now has:

  • A cryptographic signature (ML-DSA-65, quantum-safe)
  • A timestamp
  • A chain linking it to the previous action
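Conceptually, that chain is a hash-linked log: each entry commits to the previous entry, so any retroactive edit breaks every later link. Here is a minimal sketch of the idea in plain Python, purely for illustration (this uses SHA-256 hashes as a stand-in; it is not asqav's actual ML-DSA implementation):

```python
import hashlib
import json
import time

def sign_action(chain, action, payload):
    """Append an entry that commits to the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else "genesis"
    entry = {
        "action": action,
        "payload": payload,
        "timestamp": time.time(),
        "prev_hash": prev,
    }
    body = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(body).hexdigest()
    chain.append(entry)
    return entry

def verify_chain(chain):
    """Recompute every hash; any edited entry breaks the chain."""
    prev = "genesis"
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev_hash"] != prev:
            return False
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if digest != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

chain = []
sign_action(chain, "email:send", {"to": "client@example.com"})
sign_action(chain, "db:query", {"table": "users", "rows": 150})
assert verify_chain(chain)

# Tampering with an earlier entry is detectable:
chain[0]["payload"]["to"] = "attacker@example.com"
assert not verify_chain(chain)
```

The real library replaces the hashes with post-quantum signatures, but the structural guarantee is the same: you cannot rewrite history without invalidating everything that came after.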

You can verify any signature later:

assert asqav.verify("sig_abc123")

The audit trail is immutable. You cannot edit or delete entries after the fact. That is the difference between logging and accountability.
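To see the difference concretely, here is a toy stand-in using HMAC-SHA-256 from the standard library instead of ML-DSA-65 (illustrative only, not asqav's API): once a record is signed, any post-hoc edit makes verification fail.

```python
import hashlib
import hmac
import json

KEY = b"demo-key"  # stand-in for the agent's signing key

def sign(record):
    """Sign the canonical JSON form of a record."""
    body = json.dumps(record, sort_keys=True).encode()
    return hmac.new(KEY, body, hashlib.sha256).hexdigest()

def verify(record, sig):
    """Re-sign and compare in constant time."""
    return hmac.compare_digest(sign(record), sig)

record = {"action": "email:send", "to": "client@example.com"}
sig = sign(record)
assert verify(record, sig)

record["to"] = "attacker@example.com"  # edit after the fact
assert not verify(record, sig)         # signature no longer matches
```

A stdout log line can be edited with a text editor; a signed record cannot be changed without the verification step catching it.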

Works with existing frameworks

If you use LangChain:

from asqav.extras.langchain import AsqavCallbackHandler
handler = AsqavCallbackHandler(api_key="sk_...")
chain.invoke(input, config={"callbacks": [handler]})

CrewAI, OpenAI Agents, LiteLLM, and Haystack are also supported. One line each.

Why bother?

The EU AI Act Article 12 requires tamper-evident automatic event logging for high-risk AI systems by August 2026. Finance, healthcare, and government are already asking for this.

But even without regulations, knowing exactly what your agent did is just good engineering.

GitHub | Docs | Free tier available
