The EU AI Act's high-risk rules take effect August 2, 2026. If you're deploying LLM-based agents — LangChain chains, CrewAI crews, AutoGen teams, OpenAI function calling — your code needs to prove compliance with 6 specific technical articles.
Most teams either don't know about this deadline or assume their existing logging covers it. It doesn't. I built an open-source scanner that tells you exactly what's missing.
The Problem
Six articles in the EU AI Act apply directly to AI agent infrastructure:
- Article 9 — Risk management: every tool call needs risk classification
- Article 10 — Data governance: PII must be tokenized before reaching the LLM
- Article 11 — Technical documentation: structured logs of every operation
- Article 12 — Record-keeping: logs that regulators can mathematically verify haven't been altered
- Article 14 — Human oversight: ability to interrupt agent execution at runtime
- Article 15 — Robustness: defense against prompt injection and data poisoning
Article 12 is where most teams fail. Your standard logger.info() won't cut it. Regulators need tamper-evident chains — cryptographically linked entries where altering one record breaks the entire chain.
The Scanner
pip install air-compliance
air-compliance scan ./my-project
About 3 seconds on a typical project. No cloud, no API keys — runs entirely on your machine.
Here's what it looks like scanning a LangChain agent that has no trust layer:
Every finding maps to a specific EU AI Act article, with a concrete fix.
Try It Without Installing
If you want to see the scanner before running pip install, there's an interactive demo on Hugging Face:
AIR Blackbox Scanner — Hugging Face Space
Paste any Python AI agent code, click "Scan for Compliance," and see the report instantly.
Fixing the Gaps
Scanning is step one. Step two is adding the compliance controls. We built drop-in trust layers for 5 frameworks:
pip install air-langchain-trust # LangChain / LangGraph
pip install air-crewai-trust # CrewAI
pip install air-autogen-trust # AutoGen / AG2
pip install air-openai-trust # OpenAI Agents SDK
pip install air-rag-trust # RAG pipelines
Each one hooks into your existing code with about 3 lines. Here's LangChain:
from langchain.agents import AgentExecutor
from langchain_openai import ChatOpenAI
from air_langchain_trust import (
AuditLedger, ConsentGate, DataVault, InjectionDetector
)
llm = ChatOpenAI(model="gpt-4")
# Add compliance layer
ledger = AuditLedger() # HMAC-SHA256 tamper-evident logging
gate = ConsentGate() # Risk-classifies tool calls
vault = DataVault() # Tokenizes PII before LLM
detector = InjectionDetector() # 15+ prompt injection patterns
agent = AgentExecutor(
agent=create_openai_tools_agent(llm, tools),
tools=tools,
callbacks=[ledger, gate, vault, detector]
)
After adding the trust layer, re-scan and you go from 0/6 to 5/6 or 6/6 passing.
What Each Component Does
AuditLedger — Every agent decision gets logged to an HMAC-SHA256 chain. Each entry is cryptographically linked to the previous one. Alter one record and the entire chain breaks. This is what Article 12 actually requires.
ConsentGate — Risk-classifies tool calls as LOW / MEDIUM / HIGH / CRITICAL. Critical operations get blocked until a human approves. Article 14 compliance.
DataVault — Tokenizes PII (names, emails, API keys) before they reach the LLM. Your sensitive data stays on your infra. Article 10 compliance.
InjectionDetector — 15+ weighted patterns scanning prompts before they hit the model. Catches injection attempts, jailbreaks, encoded payloads. Article 15 compliance.
What "Tamper-Evident" Actually Means
Regular logging:
2026-02-24 INFO: Tool called: send_email
2026-02-24 INFO: LLM responded with 340 tokens
Anyone with server access can edit these. Regulators have no way to verify integrity.
AIR Blackbox logging:
{
"seq": 42,
"event": "tool_call",
"tool": "send_email",
"timestamp": "2026-02-24T18:30:00Z",
"signature": "hmac-sha256:a3f2b1...",
"prev_hash": "sha256:9c4d2e...",
"chain_valid": true
}
Each entry is signed and chained. Any auditor can verify the entire chain with one call:
is_valid = ledger.verify_chain() # True if nothing was altered
evidence = ledger.export_audit() # Evidence bundle for compliance review
The Fine-Tuned Model
I also fine-tuned a 1B parameter Llama 3.2 model on 2,000 compliance examples. It runs locally via Ollama — no API calls, your code never leaves your machine.
ollama run air-compliance "Analyze this agent for EU AI Act compliance: ..."
The model and training data are open:
Not production-grade yet — think of it as a fast local triage tool.
What This Doesn't Do
I want to be upfront: this is a linter for AI governance, not a legal compliance tool. It checks technical requirements. It doesn't make you "EU AI Act compliant" — that involves legal review, organizational processes, and risk assessments beyond code.
What it does: gets you audit-ready on the technical side so your compliance team has something concrete to work with.
Links
| Package | Framework | Install |
|---|---|---|
| air-langchain-trust | LangChain / LangGraph | pip install air-langchain-trust |
| air-crewai-trust | CrewAI | pip install air-crewai-trust |
| air-openai-agents-trust | OpenAI Agents SDK | pip install air-openai-agents-trust |
| air-autogen-trust | AutoGen / AG2 | pip install air-autogen-trust |
| air-rag-trust | RAG Knowledge Bases | pip install air-rag-trust |
| air-compliance | Compliance Scanner | pip install air-compliance |
- GitHub: github.com/airblackbox — 7 PyPI packages, 25 repos, Apache 2.0
- Website: airblackbox.ai
- Gate (AI action firewall): airblackbox.ai/gate
- Interactive Demo: HF Space
August 2, 2026 is not a soft deadline. Fines go up to €35M or 7% of global annual turnover.
The code changes to get audit-ready are small. The risk of not making them is not.
Questions about what the EU AI Act requires for your agent deployment? Drop a comment — happy to help map the articles to your architecture.
Top comments (0)