The Clock is Ticking
On August 2, 2026, the EU AI Act begins enforcement for high-risk AI systems. If your AI agents make decisions that affect people — hiring, lending, healthcare, legal, customer service — you're likely in scope.
Most AI agent projects fail 5 out of 6 EU AI Act compliance checks. Not because the teams are careless — because no one built the tooling to check.
We built an open-source scanner that checks your LangChain, CrewAI, or OpenAI agent code against 6 articles of the EU AI Act. Run it in 10 seconds:
pip install air-compliance && air-compliance scan .
Watch it run live: airblackbox.ai/demo
The scanner covers the technical layer — what your code does or doesn't implement. Full compliance also requires organizational processes and documentation. But the code gaps are where most teams should start.
Most engineering teams I've talked to fall into one of three camps:
- "We'll deal with it when enforcement starts" (too late)
- "Our existing logging is probably fine" (it's not)
- "Wait, what deadline?" (you're reading this just in time)
The regulation is 144 pages. I've read it. Here's what actually matters for AI agent deployments, and how to fix the gaps in your codebase today.
What the EU AI Act Actually Requires
Six articles apply directly to AI agent infrastructure:
Article 9 — Risk Management
Every tool call your agent makes needs to be classified by risk level. A send_email tool is low risk. A delete_database tool is critical. You need a system that knows the difference and can block the dangerous ones.
Article 10 — Data Governance
PII flowing through your agent pipeline needs to be tokenized before it reaches the LLM. If you're running RAG, the documents in your knowledge base need provenance tracking — who added them, when, and whether they've been tampered with.
Article 11 — Technical Documentation
Not a PDF on a shelf. The regulation wants structured, machine-readable documentation of every operation your system performs. Full call graphs: chain → LLM → tool → result.
Article 12 — Record-Keeping (The Big One)
This is where most teams fail. Article 12 requires logs that regulators can mathematically verify haven't been altered. Your standard logger.info() statements won't cut it. You need tamper-evident chains — think blockchain-style integrity without the blockchain.
Article 14 — Human Oversight
Humans need the ability to review what the agent did and interrupt it mid-execution. Not after the fact. During runtime.
Article 15 — Robustness & Security
Your agent needs defense against prompt injection, data poisoning, and adversarial manipulation. If you're running RAG, that includes knowledge base poisoning — malicious documents that persist across queries.
The Open-Source Fix
We built AIR Blackbox — drop-in compliance layers for every major AI agent framework. Here's how it works.
Step 1: Find Your Gaps (30 seconds)
pip install air-compliance
air-compliance scan ./my-project
Output looks like this:
EU AI Act Compliance Report
===========================
Article 9 — Risk Management: 2/4 PASS
Article 10 — Data Governance: 1/3 PASS
Article 11 — Technical Docs: 0/3 PASS
Article 12 — Record-Keeping: 0/4 PASS ← biggest gap
Article 14 — Human Oversight: 1/4 PASS
Article 15 — Robustness: 1/4 PASS
Step 2: Add Your Trust Layer (3 lines of code)
LangChain / LangGraph:
pip install air-langchain-trust
from air_langchain_trust import AirTrustCallbackHandler
handler = AirTrustCallbackHandler()
agent.invoke({"input": query}, config={"callbacks": [handler]})
That single handler gives you:
- HMAC-SHA256 tamper-evident audit chain (Article 12)
- PII tokenization with 14 detection patterns (Article 10)
- Consent gating for critical tools (Article 9, 14)
- Prompt injection scanning with 15+ patterns (Article 15)
CrewAI:
pip install air-crewai-trust
from air_crewai_trust import AirTrustHook, AirTrustConfig
hook = AirTrustHook(config=AirTrustConfig())
crew = Crew(agents=[...], hooks=[hook])
OpenAI Agents SDK:
pip install air-openai-agents-trust
from air_openai_agents_trust import activate_trust
activate_trust() # patches the SDK globally
Step 3: Protect Your Knowledge Base (if using RAG)
pip install air-rag-trust
from air_rag_trust import AirRagTrust, WritePolicy
rag = AirRagTrust(
write_policy=WritePolicy(
allowed_sources=["internal://*", "verified://*"],
blocked_content_patterns=[r"ignore previous", r"system prompt"],
max_writes_per_minute=30,
)
)
# Every document gets provenance-tracked
rag.ingest(content=doc_text, source="internal://kb", actor="data-team")
# Retrieval triggers drift detection automatically
events = rag.record_retrieval(query="quarterly revenue", doc_ids=retrieved_ids)
The drift detector watches for anomalies: new untrusted sources appearing, single documents dominating retrieval, volume spikes that indicate poisoning attempts.
Step 4: Verify Your Chain
# Any auditor can verify the entire chain hasn't been tampered with
is_valid = handler.verify_chain() # True if no entries modified
# Export evidence bundle for compliance review
evidence = handler.export_audit()
What Makes This Different From "Just Add Logging"
Regular logging:
2026-02-24 INFO: Tool called: send_email
2026-02-24 INFO: LLM responded with 340 tokens
A malicious admin (or compromised system) can edit these logs. Regulators have no way to verify integrity.
AIR Blackbox logging:
{
"seq": 42,
"event": "tool_call",
"tool": "send_email",
"timestamp": "2026-02-24T18:30:00Z",
"signature": "hmac-sha256:a3f2b1...",
"prev_hash": "sha256:9c4d2e...",
"chain_valid": true
}
Each entry is cryptographically signed and chained to the previous one. Alter any single entry and the entire chain breaks. This is what Article 12 actually requires.
The Ecosystem
| Package | Framework | Install |
|---|---|---|
| air-langchain-trust | LangChain / LangGraph | pip install air-langchain-trust |
| air-crewai-trust | CrewAI | pip install air-crewai-trust |
| air-openai-agents-trust | OpenAI Agents SDK | pip install air-openai-agents-trust |
| air-autogen-trust | AutoGen / AG2 | pip install air-autogen-trust |
| air-rag-trust | RAG Knowledge Bases | pip install air-rag-trust |
| openclaw-air-trust | TypeScript / Node.js | npm install openclaw-air-trust |
| air-compliance | Compliance Scanner | pip install air-compliance |
Zero core dependencies on the trust layers. Apache 2.0 licensed. The compliance scanner maps directly to EU AI Act articles and tells you exactly what's missing.
5 Months
August 2, 2026 is not a soft deadline. Fines for non-compliance go up to 35 million EUR or 7% of global annual turnover — whichever is higher.
The code changes to get compliant are small. The risk of not making them is not.
GitHub: github.com/airblackbox
Scanner: pip install air-compliance
If you have questions about what the EU AI Act requires for your specific agent deployment, drop a comment — happy to help map the articles to your architecture.
Top comments (0)