Jason Shotwell
Making Google ADK Agents Audit-Ready for the EU AI Act

Google open-sourced the Agent Development Kit — the same framework powering Agentspace and Customer Engagement Suite. It's already pulling 3.7 million downloads a month on PyPI. ADK is going to be everywhere.

But there's a problem nobody's talking about yet: none of these agents have audit trails.

ADK agents deployed in high-risk use cases will need to demonstrate EU AI Act compliance by August 2, 2026. Penalties under the Act run up to €35 million or 7% of global annual turnover. And right now, there's no tooling for it.

So I built one.

air-adk-trust: EU AI Act Compliance for Google ADK

```shell
pip install air-adk-trust
```

Three lines to make any ADK agent audit-ready:

```python
from google.adk.agents import Agent
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from air_adk_trust import AIRBlackboxPlugin

agent = Agent(
    model="gemini-2.0-flash",
    name="my_agent",
    instruction="You are a helpful assistant.",
    tools=[my_tool],  # your existing tool functions
)

runner = Runner(
    agent=agent,
    app_name="my_app",
    session_service=InMemorySessionService(),
    plugins=[AIRBlackboxPlugin()],  # ← this line
)
```

That's it. Every LLM call, every tool execution, every agent delegation — logged to a tamper-evident HMAC-SHA256 audit chain. No cloud. No API keys. Runs entirely on your machine.

Why ADK Makes This Easy

Most agent frameworks require monkey-patching or wrapper functions to add observability. ADK is different — it was designed with a first-class Plugin system and callback hooks at every stage of the agent lifecycle.

Here's what the trust layer hooks into:

```
User Message
    │
    ▼
before_agent  → Start audit record, check risk tier
    │
    ▼
before_model  → Scan prompt for PII, log request hash
    │
    ▼
  LLM Call
    │
    ▼
after_model   → Log response hash, track token spend
    │
    ▼
before_tool   → Classify tool risk, enforce policy
    │
    ▼
  Tool Runs
    │
    ▼
after_tool    → Log tool result, append to audit chain
    │
    ▼
after_agent   → Seal HMAC chain, finalize record
```
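To make the `before_tool` step concrete, here's a minimal sketch of what a risk-tier gate at that hook can look like. The tool names, tiers, and `PolicyViolation` exception are illustrative assumptions for this post, not air-adk-trust's actual policy model:

```python
# Hypothetical tool-to-tier mapping -- illustration only, not the
# plugin's real policy configuration.
RISK_TIERS = {
    "web_search": "low",
    "send_email": "medium",
    "execute_payment": "high",
}

class PolicyViolation(Exception):
    """Raised when a high-risk tool call is attempted without approval."""

def check_tool_policy(tool_name: str, allow_high_risk: bool = False) -> str:
    """Classify a tool by risk tier; block high-risk calls by default."""
    tier = RISK_TIERS.get(tool_name, "unknown")
    if tier == "high" and not allow_high_risk:
        raise PolicyViolation(f"{tool_name} is high-risk; human approval required")
    return tier
```

A gate like this runs before the tool executes, so a blocked call never happens and the refusal itself lands in the audit chain.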

Six callback hooks. Six EU AI Act articles. The mapping is clean:

| EU AI Act Article | What the Plugin Does |
| --- | --- |
| Art. 9 — Risk Management | Classifies agent actions by risk tier, blocks high-risk tools |
| Art. 10 — Data Governance | Detects PII in prompts and responses before they reach the LLM |
| Art. 11 — Technical Documentation | Generates structured audit logs for every agent action |
| Art. 12 — Record-Keeping | HMAC-SHA256 tamper-evident chain — cryptographically verifiable |
| Art. 14 — Human Oversight | Tool confirmation gates for high-risk operations |
| Art. 15 — Robustness | Tracks failures, detects loops, monitors error rates |
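For the Art. 10 row, the core idea is simple enough to sketch with regexes. This is a toy version of prompt scanning to show the shape of it; the patterns and labels here are simplified examples I made up, not air-adk-trust's actual detector:

```python
import re

# Simplified example patterns -- real PII detection needs many more
# categories and far more careful regexes than these two.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def scan_for_pii(text: str) -> list[tuple[str, str]]:
    """Return (label, match) pairs for every PII hit found in the text."""
    hits = []
    for label, pattern in PII_PATTERNS.items():
        hits.extend((label, match) for match in pattern.findall(text))
    return hits
```

Run at the `before_model` hook, a scanner like this can flag or redact PII before the prompt ever leaves your machine.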

Multi-Agent Coverage Out of the Box

This is where ADK's architecture really helps. ADK agents delegate to sub-agents — coordinators route to researchers, writers, reviewers. The plugin fires callbacks for every agent in the tree, not just the root.

```python
from google.adk.agents import Agent
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from air_adk_trust import AIRBlackboxPlugin

researcher = Agent(name="researcher", model="gemini-2.0-flash", ...)
writer = Agent(name="writer", model="gemini-2.0-flash", ...)
reviewer = Agent(name="reviewer", model="gemini-2.0-flash", ...)

orchestrator = Agent(
    name="orchestrator",
    model="gemini-2.0-flash",
    sub_agents=[researcher, writer, reviewer],
)

runner = Runner(
    agent=orchestrator,
    app_name="content_pipeline",
    session_service=InMemorySessionService(),
    plugins=[AIRBlackboxPlugin()],
)
# All four agents are now covered. One plugin instance.
```

The audit chain captures the full delegation tree — which agent called which, what tools they used, and what the LLM returned at each step. When a regulator asks "show me the decision chain," you hand them the chain.

The Audit Chain (How HMAC-SHA256 Works Here)

Every event in the agent's lifecycle gets chained together cryptographically. Each record includes a hash of the previous record, making the chain tamper-evident — if someone modifies a record, the chain breaks and you can prove it.

```
Record 1: agent_start
  hash: abc123

Record 2: model_call
  prev_hash: abc123
  hash: def456

Record 3: tool_call (web_search)
  prev_hash: def456
  hash: ghi789

Record 4: agent_complete
  prev_hash: ghi789
  hash: jkl012 ← final seal
```

This isn't logging. This is evidence. The same principle flight recorders use — if the chain is intact, the record is trustworthy.
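The chaining principle above can be sketched with nothing but Python's standard library. The record schema and key handling here are deliberately simplified assumptions for illustration — not air-adk-trust's actual wire format — but the tamper-evidence property is the same:

```python
import hashlib
import hmac
import json

def append_record(chain: list[dict], event: dict, key: bytes) -> None:
    """Append an event, chaining it to the previous record's MAC."""
    prev = chain[-1]["mac"] if chain else "genesis"
    body = {"event": event, "prev_mac": prev}
    payload = json.dumps(body, sort_keys=True).encode()
    mac = hmac.new(key, payload, hashlib.sha256).hexdigest()
    chain.append({**body, "mac": mac})

def verify_chain(chain: list[dict], key: bytes) -> bool:
    """Recompute every MAC; editing any record breaks verification."""
    prev = "genesis"
    for record in chain:
        body = {"event": record["event"], "prev_mac": record["prev_mac"]}
        payload = json.dumps(body, sort_keys=True).encode()
        expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
        if record["prev_mac"] != prev or not hmac.compare_digest(record["mac"], expected):
            return False
        prev = record["mac"]
    return True
```

Flip a single field in any record and `verify_chain` returns `False` — that's the whole trick. Because the MAC is keyed, an attacker without the key can't even recompute a consistent fake chain.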

What It Doesn't Do

I want to be direct about this: air-adk-trust checks technical requirements. It's a linter for AI governance, not a legal compliance tool. It won't make you "EU AI Act compliant" — nobody can promise that with a pip install.

What it does: gives your agents tamper-evident audit trails, PII detection, risk classification, and policy enforcement. The technical foundation that auditors and compliance teams need to see.

Framework #6 in the Ecosystem

air-adk-trust joins five other trust layers in the AIR Blackbox ecosystem:

```shell
pip install air-langchain-trust    # LangChain / LangGraph
pip install air-crewai-trust       # CrewAI
pip install air-autogen-trust      # AutoGen / AG2
pip install air-anthropic-trust    # Anthropic Claude SDK
pip install air-rag-trust          # RAG pipelines
pip install air-adk-trust          # Google ADK  ← new
```

All open source. All Apache 2.0. All on PyPI.

The goal is coverage — whatever framework you're building with, audit trails should be one import away.

Try It

```shell
pip install air-adk-trust
```

GitHub: github.com/airblackbox/air-adk-trust
Full ecosystem: github.com/airblackbox
Live demo: airblackbox.ai/demo

August 2026 is 17 months away. Your agents need audit trails. This is a place to start.


If you have questions about the architecture, how the HMAC chain works, or how to integrate with your existing ADK agents — drop a comment or open an issue on GitHub.
