
João André Gomes Marques

Your AI agents need audit trails before August 2026. Here is how I added them in 5 lines of Python.

I have been building AI agents for the past year. LangChain pipelines, CrewAI crews, custom orchestrators. They work great until someone asks: "what exactly did your agent do last Tuesday at 3am?"

That question is about to become a legal requirement.

The EU AI Act deadline nobody is talking about

Article 12 of the EU AI Act requires automatic logging of AI system activity. The regulation enters full enforcement in August 2026. If your AI agents make decisions, call APIs, or process data, you need to be able to produce a complete audit trail of what they did, when, and why.

Most teams I talk to are handling this one of two ways:

  1. Printing logs to stdout and hoping that counts
  2. Ignoring it and planning to figure it out later

Neither works. The regulation requires records that are tamper-evident, timestamped, and retained for the system's lifetime. Logs on stdout, which anyone with shell access can edit, do not meet that bar.

What I built

I spent the last several months building asqav, an open-source Python SDK that adds cryptographically signed audit trails to AI agents. Every action your agent takes is signed with ML-DSA-65 (FIPS 204, the new NIST post-quantum signature standard) and timestamped per RFC 3161.

The core idea: governance should be a few lines of code, not a separate infrastructure project.

pip install asqav
import asqav

asqav.init(api_key="sk_...")
agent = asqav.Agent.create("my-agent")
sig = agent.sign("api:call", {"model": "gpt-4", "prompt_tokens": 150})

That is it. Your agent now has a cryptographic identity and every action it takes is signed, timestamped, and stored in a verifiable audit trail.

Why cryptographic signatures instead of just logs?

Regular logs have a fundamental problem: anyone with access can modify them after the fact. When a regulator asks for your audit trail, you need to prove the records have not been tampered with.

ML-DSA-65 signatures solve this. Each action gets a signature that is mathematically bound to:

  • The agent's identity
  • The exact action and context data
  • The precise timestamp

Change one bit of any of these, and the signature verification fails. This is the same principle behind digital signatures in banking and legal documents, but applied to AI agent actions.
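To see why a single flipped bit breaks verification, here is a minimal sketch of the principle using stdlib HMAC-SHA256 as a stand-in for ML-DSA-65. The record fields and key are made up for illustration; asqav's actual signing scheme and record format may differ.

```python
import hmac
import hashlib
import json

# Illustration only: HMAC-SHA256 stands in for ML-DSA-65 to demonstrate
# tamper evidence. The key and record shape are hypothetical.
SECRET_KEY = b"agent-signing-key"

def sign_record(record: dict) -> str:
    # Canonical serialization so the same record always yields the same bytes
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify_record(record: dict, signature: str) -> bool:
    return hmac.compare_digest(sign_record(record), signature)

record = {
    "agent": "my-agent",
    "action": "api:call",
    "context": {"model": "gpt-4", "prompt_tokens": 150},
    "timestamp": "2026-08-01T03:00:00Z",
}
sig = sign_record(record)
assert verify_record(record, sig)

# Change one field and verification fails
record["context"]["prompt_tokens"] = 151
assert not verify_record(record, sig)
```

The same property holds for real digital signatures, with the added benefit that anyone holding the public key can verify records without being able to forge them.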

The "quantum-safe" part means these signatures will remain secure even when quantum computers become practical. ML-DSA is the algorithm NIST selected specifically for this purpose. If you are building audit trails that need to be valid for years (which the EU AI Act requires), using a quantum-vulnerable algorithm today is setting yourself up for problems.

Real integration: LangChain

If you are using LangChain, adding audit trails takes one extra import and one line of config:

pip install asqav[langchain]
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from asqav.extras.langchain import AsqavCallbackHandler

# Create the audit handler - signs every chain, tool, and LLM event
handler = AsqavCallbackHandler(agent_name="support-agent")

# Your existing LangChain code stays exactly the same
llm = ChatOpenAI(model="gpt-4")
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful support agent."),
    ("human", "{input}")
])
chain = prompt | llm

# Just pass the handler in the config
response = chain.invoke(
    {"input": "What is your refund policy?"},
    config={"callbacks": [handler]}
)

The handler automatically signs chain:start, llm:start, llm:end, and chain:end events. Tool calls and errors are captured too. Your existing code does not change at all.
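Under the hood, a handler like this is just a class whose methods fire on each event. Here is a plain-Python sketch of the idea, with method names mirroring LangChain's BaseCallbackHandler hooks; the recording logic and record shape are assumptions for illustration, not asqav's actual implementation (the real handler signs and timestamps each record).

```python
import time

# Sketch of an audit callback handler. Method names mirror LangChain's
# BaseCallbackHandler hooks; the record structure here is hypothetical.
class AuditCallbackSketch:
    def __init__(self, agent_name: str):
        self.agent_name = agent_name
        self.records = []

    def _record(self, event: str, payload: dict):
        # In asqav this is where the event would be signed and timestamped
        self.records.append({
            "agent": self.agent_name,
            "event": event,
            "payload": payload,
            "ts": time.time(),
        })

    def on_chain_start(self, serialized, inputs, **kwargs):
        self._record("chain:start", {"input_keys": list(inputs)})

    def on_llm_start(self, serialized, prompts, **kwargs):
        self._record("llm:start", {"prompt_count": len(prompts)})

    def on_llm_end(self, response, **kwargs):
        self._record("llm:end", {"response_preview": str(response)[:80]})

    def on_chain_end(self, outputs, **kwargs):
        self._record("chain:end", {"output_keys": list(outputs)})

handler = AuditCallbackSketch("support-agent")
handler.on_chain_start({}, {"input": "What is your refund policy?"})
handler.on_chain_end({"output": "Refunds within 30 days."})
print(handler.records[0]["event"])  # chain:start
```

Because LangChain invokes every registered callback on every event, this pattern captures the full execution without touching your chain code.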

Real integration: CrewAI

CrewAI has a callback system that makes this equally straightforward:

pip install asqav[crewai]
from crewai import Agent, Task, Crew
from asqav.extras.crewai import AsqavCrewHook

# Create the audit hook
hook = AsqavCrewHook(agent_name="research-crew")

# Define your crew as usual
researcher = Agent(
    role="Senior Research Analyst",
    goal="Find and analyze market trends",
    backstory="Expert analyst with deep market knowledge"
)

task = Task(
    description="Research the current state of AI regulation in Europe",
    agent=researcher,
    expected_output="A summary of key EU AI Act requirements"
)

# Pass the hook callbacks to the crew
crew = Crew(
    agents=[researcher],
    tasks=[task],
    step_callback=hook.step_callback,
    task_callback=hook.task_callback
)

result = crew.kickoff()

Every step and task completion in your crew gets signed automatically. The hook captures task descriptions, agent roles, output lengths, and any errors.

Beyond logging: policy enforcement

Audit trails are required, but they are reactive. asqav also lets you enforce policies in real time:

import asqav

asqav.init(api_key="sk_...")

# Block dangerous actions before they happen
asqav.create_risk_rule(
    name="no-deletions",
    action_pattern="data:delete:*",
    action="block_and_alert",
    severity="critical"
)

# Require multi-party approval for high-stakes operations
config = asqav.create_signing_group(
    "agt_xxx",
    min_approvals=2,
    total_shares=3
)

This means you can set up rules like "no agent can delete production data" or "financial transactions over $10k need two human approvals" and have them enforced at the cryptographic level.
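One way to picture rule evaluation: match each action name against the rule's pattern before letting it through. The sketch below assumes glob-style patterns (stdlib fnmatch); asqav's actual matching semantics and internals may differ.

```python
from fnmatch import fnmatch

# Hypothetical rule store. The fields mirror the create_risk_rule call
# above; the matching logic is an assumption for illustration.
rules = [
    {"name": "no-deletions", "action_pattern": "data:delete:*",
     "action": "block_and_alert", "severity": "critical"},
]

def check_action(action: str) -> str:
    # Return the first matching rule's verdict, or allow by default
    for rule in rules:
        if fnmatch(action, rule["action_pattern"]):
            return rule["action"]
    return "allow"

print(check_action("data:delete:users"))  # block_and_alert
print(check_action("data:read:users"))    # allow
```

The important design point is that the check runs before the action is signed and executed, so a blocked action never happens, rather than merely being flagged afterwards.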

Decorators for existing code

If you have existing Python functions that your agents call, you can add signing without restructuring anything:

import asqav
import openai  # needed for the call below

asqav.init(api_key="sk_...")

@asqav.sign
def call_model(prompt: str):
    return openai.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}]
    )

# Or use context managers for grouped operations
with asqav.session() as s:
    s.sign("step:fetch", {"source": "internal-api"})
    data = fetch_data()
    s.sign("step:process", {"records": len(data)})
    result = process(data)
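A decorator like this can be pictured as a wrapper that records the function name and arguments before calling through. The sketch below is illustrative, not asqav's implementation; the real decorator signs and timestamps the record instead of appending to a list.

```python
import functools
import time

# Hypothetical stand-in for asqav's signed audit trail
AUDIT_LOG = []

def sign(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # Record the call before executing it; asqav would sign this record
        AUDIT_LOG.append({
            "action": f"fn:{func.__name__}",
            "args": repr(args)[:200],
            "ts": time.time(),
        })
        return func(*args, **kwargs)
    return wrapper

@sign
def call_model(prompt: str) -> str:
    return f"response to: {prompt}"

call_model("hello")
print(AUDIT_LOG[0]["action"])  # fn:call_model
```

Using functools.wraps keeps the decorated function's name and docstring intact, so the decorator stays invisible to the rest of your code.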

What you get

When you need to produce a compliance report (or when a regulator asks), you can export your full audit trail:

# Export as JSON for programmatic processing
trail = asqav.export_audit_json(agent_id="agt_xxx")

# Export as CSV for spreadsheets and reports
asqav.export_audit_csv(agent_id="agt_xxx", path="audit-trail.csv")

# Verify any specific signature
verification = asqav.verify_signature("sig_abc123")
print(verification.valid)  # True
print(verification.algorithm)  # ML-DSA-65

Each record includes the agent identity, action type, context data, ML-DSA-65 signature, and RFC 3161 timestamp. All verifiable independently.

The setup

  • Free tier covers agent creation, signed actions, audit export, and all framework integrations
  • Zero native dependencies - all cryptography runs server-side, so pip install asqav just works
  • MIT licensed and fully open source
  • Works with LangChain, CrewAI, LiteLLM, Haystack, and OpenAI Agents SDK

August 2026 is closer than you think

The EU AI Act is not a proposal anymore. It is law. The compliance deadlines are staggered, but the logging requirements in Article 12 apply to high-risk AI systems, and the definition of "high-risk" is broader than most developers realize.

Even if your system is not classified as high-risk, having cryptographically verifiable audit trails is just good engineering. When something goes wrong at 3am, you want to know exactly what your agent did, with proof that the records have not been altered.

If you are building AI agents in production, start thinking about governance now. Not because a regulation says so (though it does), but because running autonomous systems without accountability is a risk you do not need to take.


If you have questions or want to discuss AI compliance, drop a comment or find me on GitHub. Happy to help.
