The Bot Club

Posted on • Originally published at agentguard.tech

EU AI Act Article 12: What AI Agent Logging Actually Means (With Code Examples)

TL;DR: EU AI Act Article 12 requires tamper-evident logging of every high-risk AI decision. If you're deploying AI agents in regulated sectors, "we have CloudWatch" is not a compliance programme. Here's what you actually need — with code.

The Deadline Is Real

On 2 August 2026, EU AI Act obligations kick in for operators of high-risk AI systems. If your AI agents operate in finance, healthcare, employment screening, critical infrastructure, or public services — you're in scope.

Article 12 is one of the most technically specific requirements in the Act. It mandates:

  • Automatic logging of events throughout the system lifecycle
  • Sufficient detail to identify causes of problems
  • Tamper-evident records that cannot be retroactively altered
  • Retention appropriate to the risk profile

Most enterprises are nowhere close. Here's what compliance actually looks like.

What Article 12 Actually Requires

The regulation uses the phrase "logging capabilities" but the guidance is clear: this is not your standard application log.

1. Log the Decision, Not Just the API Call

Your SIEM logs that a Stripe API call was made for $4,200. Article 12 requires you to log:

  • What the agent was trying to do (intent / plan)
  • What inputs it received (prompt, tool results, context)
  • What decision it made (the action it chose)
  • The outcome (success, failure, blocked)
  • Risk score at time of decision
  • Timestamp with millisecond precision

A standard API gateway log captures the last item. You need all six.
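A minimal sketch of what such a record might look like as a data structure (field names are illustrative, not a prescribed schema):

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    """One decision-level log entry covering all six items above."""
    intent: str        # what the agent was trying to do
    inputs: dict       # prompt, tool results, context
    decision: str      # the action it chose
    outcome: str       # success / failure / blocked
    risk_score: int    # risk score at time of decision
    timestamp: str     # ISO 8601, millisecond precision

def make_record(intent, inputs, decision, outcome, risk_score):
    # Millisecond-precision UTC timestamp, e.g. "2026-03-01T14:23:01.847Z"
    ts = datetime.now(timezone.utc).isoformat(timespec="milliseconds")
    ts = ts.replace("+00:00", "Z")
    return DecisionRecord(intent, inputs, decision, outcome, risk_score, ts)

record = make_record(
    intent="charge customer for order #1234",
    inputs={"prompt": "...", "tool_results": {}},
    decision="stripe_charge",
    outcome="allowed",
    risk_score=42,
)
print(json.dumps(asdict(record)))
```

The point is that intent, inputs, and risk score are first-class fields, not something you hope to reconstruct later from gateway logs.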

2. Tamper-Evident — Not Just Append-Only

"Tamper-evident" means an auditor can verify that logs were not modified after the fact. This requires:

  • Hash chaining — each log entry includes a hash of the previous entry
  • Cryptographic signing — entries signed with a private key
  • Immutable storage — logs written to storage that cannot be modified

Here's what a hash-chained audit event looks like:

```json
{
  "eventId": "evt_01HZ9XK2B4QRST",
  "timestamp": "2026-03-01T14:23:01.847Z",
  "agentId": "agent_payments_v2",
  "action": "stripe_charge",
  "params": { "amount": 4200, "currency": "aud" },
  "decision": "allow",
  "riskScore": 42,
  "policyId": "payments-policy-v1.2",
  "prevHash": "sha256:a3f9b2c1d4e5f6...",
  "hash": "sha256:7c8d9e0f1a2b3c..."
}
```

If anyone modifies an entry, the hash chain breaks — immediately detectable.
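Hash chaining is a few lines of code. Here is a toy sketch of appending and verifying a chain (signing and immutable storage, the other two requirements, sit on top of this):

```python
import hashlib
import json

def entry_hash(entry: dict, prev_hash: str) -> str:
    # Canonical JSON (sorted keys) so the hash is reproducible
    payload = json.dumps({**entry, "prevHash": prev_hash}, sort_keys=True)
    return "sha256:" + hashlib.sha256(payload.encode()).hexdigest()

def append(chain: list, entry: dict) -> None:
    prev = chain[-1]["hash"] if chain else "sha256:genesis"
    chain.append({**entry, "prevHash": prev, "hash": entry_hash(entry, prev)})

def verify(chain: list) -> bool:
    prev = "sha256:genesis"
    for e in chain:
        body = {k: v for k, v in e.items() if k not in ("prevHash", "hash")}
        if e["prevHash"] != prev or e["hash"] != entry_hash(body, prev):
            return False
        prev = e["hash"]
    return True

chain = []
append(chain, {"eventId": "evt_1", "action": "stripe_charge", "decision": "allow"})
append(chain, {"eventId": "evt_2", "action": "db_write", "decision": "block"})
assert verify(chain)

# Editing any earlier entry breaks verification from that point on
chain[0]["decision"] = "block"
assert not verify(chain)
```

In production you would sign each hash with a private key and write entries to WORM or object-lock storage, so an attacker cannot simply recompute the chain after tampering.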

3. Logging Must Be Outside the Model

This is the part most teams miss. If your logging lives inside the agent's context (e.g., "log your actions in this system prompt"), it is not compliant. The model can:

  • Forget to log
  • Log inaccurately
  • Be manipulated into not logging via prompt injection

Article 12 compliance requires logging at the infrastructure layer — outside the model, enforced regardless of what the model decides.
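One way to see the difference: wrap every tool the agent can call so the log entry is written by your code, not by the model. A sketch (the `audited` decorator and in-memory `AUDIT_LOG` are illustrative stand-ins for real tamper-evident storage):

```python
import functools

AUDIT_LOG = []  # stand-in for tamper-evident storage

def audited(tool_name):
    """Log every invocation at the infrastructure layer,
    regardless of what the model does, says, or is injected with."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            entry = {"tool": tool_name, "args": args, "kwargs": kwargs}
            try:
                result = fn(*args, **kwargs)
                entry["outcome"] = "success"
                return result
            except Exception:
                entry["outcome"] = "failure"
                raise
            finally:
                AUDIT_LOG.append(entry)  # runs even when the tool fails
        return wrapper
    return decorator

@audited("stripe_charge")
def stripe_charge(amount, currency):
    # Stand-in for a real payment call
    return {"status": "ok", "amount": amount}

stripe_charge(4200, "aud")
```

The model never sees this code path. It cannot forget to log, misreport, or be prompted out of it.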

A Practical Compliance Architecture

```
┌─────────────────────────────────────┐
│            AI Agent                 │
│   (LangChain / AutoGen / CrewAI)    │
└──────────────┬──────────────────────┘
               │ every action
               ▼
┌─────────────────────────────────────┐
│      Policy + Audit Layer           │  ← Article 12 lives here
│  • Evaluate action against policy   │
│  • Record decision + context        │
│  • Hash-chain the log entry         │
│  • Enforce: allow / block / escalate│
└──────────────┬──────────────────────┘
               │ approved actions only
               ▼
┌─────────────────────────────────────┐
│          External World             │
│  (APIs, databases, payment systems) │
└─────────────────────────────────────┘
```

The audit layer intercepts every action before execution. This is what regulators mean by "logging capabilities" — not after-the-fact log aggregation.
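The interception step can be sketched in a few lines. This is a toy policy and dispatcher (the threshold, field names, and `evaluate_policy` function are illustrative, not a real policy engine):

```python
AUDIT = []  # stand-in for the hash-chained log from earlier

def evaluate_policy(action, params):
    """Toy policy: block any charge over a hypothetical $10,000 limit."""
    if action == "stripe_charge" and params.get("amount", 0) > 10_000:
        return "block", 90
    return "allow", 42

def execute(action, params, handlers):
    decision, risk = evaluate_policy(action, params)
    # Record the decision BEFORE anything touches the outside world
    AUDIT.append({"action": action, "params": params,
                  "decision": decision, "riskScore": risk})
    if decision != "allow":
        return {"blocked": True}
    return handlers[action](params)  # only approved actions reach the world

handlers = {"stripe_charge": lambda p: {"charged": p["amount"]}}
print(execute("stripe_charge", {"amount": 4200}, handlers))    # allowed
print(execute("stripe_charge", {"amount": 50_000}, handlers))  # blocked
```

Note the ordering: the decision is logged before execution, so even a crashed or blocked action leaves a record.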

What Your Auditor Will Ask For

Based on Article 12 guidance and early enforcement signals, expect auditors to request:

  1. Sample audit trail for a specific agent, specific date range
  2. Proof of tamper-evidence — how do you know logs were not modified?
  3. Retention policy — how long are logs kept, and why?
  4. Coverage — which agents are logged, which are not, and why?
  5. Incident reconstruction — given an incident, can you reproduce what the agent did and why?

"We have CloudWatch" fails questions 2, 4, and 5.
"We have a Notion doc describing our logging approach" fails all five.

Getting to Compliance Before August 2026

Step 1 — Inventory your agents
List every AI agent in production or staging. Classify by risk level.

Step 2 — Audit your current logging
For each agent: what is logged, where, in what format, with what retention?

Step 3 — Identify the gaps
Usually: no intent logging, no tamper-evidence, logging inside the model, insufficient retention.

Step 4 — Implement a policy + audit layer
Tools like AgentGuard provide a runtime layer that sits between your agent and the world, logging every decision with hash-chained tamper-evident records and EU AI Act compliance templates out of the box.

Step 5 — Document everything
Article 12 is not just about having logs. It's about being able to demonstrate your logging approach to a regulator.

The Bottom Line

153 days until August 2026.

If you're deploying AI agents in regulated sectors and you can't currently answer these five questions:

  1. What did agent X do between 9am and 5pm on a given date?
  2. Did any agent make a decision that violated our stated policies?
  3. Can I prove our logs were not tampered with?
  4. What was the risk score on this specific action?
  5. Why did the agent take this action (intent, not just outcome)?

— then you have work to do.
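For a sense of scale, question 1 reduces to a time-range filter over the decision log once the log exists (a sketch assuming the entry shape from the example above; ISO 8601 strings in the same format compare correctly as strings):

```python
def agent_activity(log, agent_id, start_iso, end_iso):
    """Everything a given agent did in a time window."""
    return [e for e in log
            if e["agentId"] == agent_id
            and start_iso <= e["timestamp"] <= end_iso]

log = [
    {"agentId": "agent_payments_v2",
     "timestamp": "2026-03-01T09:15:00.000Z", "action": "stripe_charge"},
    {"agentId": "agent_payments_v2",
     "timestamp": "2026-03-01T18:00:00.000Z", "action": "stripe_refund"},
]
day = agent_activity(log, "agent_payments_v2",
                     "2026-03-01T09:00:00.000Z", "2026-03-01T17:00:00.000Z")
print(len(day))  # the 18:00 event falls outside 9am-5pm
```

The hard part is never the query. It's having the log.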

The good news: the architecture is not complicated. It's an integration question, not a research question.


AgentGuard provides runtime policy enforcement and EU AI Act-compliant audit logging for AI agents. Free tier available — 10,000 evaluations/month, no credit card required.

Follow The Bot Club for more on AI agent security, EU AI Act compliance, and building production-ready agentic systems.
