
Dongha Koo


EU AI Act Compliance in 47 Lines of Python

Your AI app serves EU users? You have 131 days before enforcement starts. The fine: 35 million EUR or 7% of global revenue -- whichever is higher. For context, GDPR maxes out at 4%.

Most AI applications I've looked at fail at least 3 of the 8 mandatory requirements. Here's what actually matters and how to fix it before August.


What the EU AI Act requires from your code

The EU AI Act (Regulation 2024/1689) doesn't mention "AI agents" by name. But if your system makes decisions affecting people -- customer service bots, healthcare triage, financial advisors, HR screening -- it's high-risk under Annex III.

Four articles will ruin your day if you ignore them:

| Article | What it demands | In developer terms |
| --- | --- | --- |
| Art. 9 | Risk management system | Every action needs a risk level. Documented. In code. |
| Art. 12 | Tamper-proof logging | Every decision logged with cryptographic integrity |
| Art. 14 | Human oversight | High-risk actions pause for human approval |
| Art. 17 | Quality management | Policies versioned, auditable, not in someone's head |

Enforcement date: August 2, 2026. Not optional. Not delayed.


The compliance checklist nobody wants to do manually

Your agent needs all of these before touching production in the EU:

  • [ ] Risk classification for every action type (low / medium / high / critical)
  • [ ] Policy rules as code -- not comments, not Notion docs
  • [ ] Automatic audit logging of every action and decision
  • [ ] Tamper-evident logs (cryptographic hash chains)
  • [ ] Human approval gates for high-risk actions
  • [ ] Blocking rules for actions that should never execute
  • [ ] Anomaly detection for unusual agent behavior
  • [ ] Exportable compliance evidence for auditors

Now imagine building that from scratch. Risk classification engine. Cryptographic audit chain. Approval workflow. Anomaly detector. Evidence generator. Policy versioning.

That's months of infrastructure work. For every team. From scratch.


Or: 47 lines of Python.

pip install agent-aegis
from pathlib import Path
from aegis import (
    Action, PolicyBuilder, CryptoAuditChain,
    AnomalyDetector, ComplianceMapper, RegulatoryFramework,
)

# 1. Policy-as-code -- Art. 9 risk management
policy = (
    PolicyBuilder()
    .defaults(risk_level="high", approval="approve")
    .rule("read_auto").match(type="read*").risk("low").approve_auto()
    .rule("write_review").match(type="write*").risk("medium").approve_human()
    .rule("delete_block").match(type="delete*").risk("critical").block()
    .build()
)

# 2. Tamper-evident audit chain -- Art. 12
chain = CryptoAuditChain(algorithm="sha256")

# 3. Anomaly detection -- Art. 15
detector = AnomalyDetector(burst_limit=10, burst_window=60.0)

# 4. Every agent action: classify → log → detect
for action in [
    Action("read", "customer_db"),
    Action("write", "crm_record", params={"customer_id": "C-1234"}),
    Action("delete", "user_account"),
]:
    decision = policy.evaluate(action)
    chain.append(
        agent_id="my-agent",
        action_type=action.type,
        action_target=action.target,
        decision=decision.approval.value,
        risk_level=decision.risk_level.value,
        matched_rule=decision.matched_rule,
    )
    detector.record(action, agent_id="my-agent",
                    blocked=not decision.is_allowed)

# 5. Verify chain integrity + generate audit evidence
assert chain.verify().valid
chain.generate_evidence_package(Path("evidence/compliance.json"))

# 6. Check your EU AI Act coverage
analysis = ComplianceMapper().analyze(RegulatoryFramework.EU_AI_ACT)
print(f"EU AI Act coverage: {analysis.coverage_score:.0f}%")

That's it. Risk classification, tamper-proof logging, anomaly detection, and exportable compliance evidence. 47 lines.


What this gets you

Art. 9 ✓ -- Every action classified by risk level. Policy-as-code, not policy-as-prayer.
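If you want a feel for what rule matching like this involves under the hood, here's a toy sketch using stdlib `fnmatch` -- a first-match rule table with a safe default. The `Rule` class and `classify` function are hypothetical illustrations, not the library's internals:

```python
from dataclasses import dataclass
from fnmatch import fnmatch

# Hypothetical sketch of policy-as-code rule matching (not agent-aegis internals).
@dataclass
class Rule:
    pattern: str   # glob matched against the action type
    risk: str      # low / medium / high / critical
    approval: str  # auto / human / block

RULES = [
    Rule("read*", "low", "auto"),
    Rule("write*", "medium", "human"),
    Rule("delete*", "critical", "block"),
]

def classify(action_type: str) -> Rule:
    for rule in RULES:
        if fnmatch(action_type, rule.pattern):
            return rule  # first match wins
    return Rule("*", "high", "human")  # safe default: escalate unknown actions

print(classify("read_customer_db").risk)    # low
print(classify("launch_rocket").approval)   # human (falls through to default)
```

The key property is the default: anything your rules don't recognize escalates to human review instead of silently executing.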

Art. 12 ✓ -- SHA-256 hash-chained audit log. Tamper with one entry and the whole chain breaks. Try explaining that gap to a regulator.
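The mechanism is simple enough to sketch in stdlib Python: each entry's hash covers its payload plus the previous entry's hash, so editing any record invalidates every link after it. This is a toy illustration, not the library's actual log format:

```python
import hashlib
import json

def make_entry(prev_hash: str, payload: dict) -> dict:
    # Hash covers the previous hash + this payload, chaining the entries.
    body = json.dumps(payload, sort_keys=True)
    digest = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    return {"payload": payload, "prev": prev_hash, "hash": digest}

def verify(chain: list) -> bool:
    prev = "genesis"
    for entry in chain:
        body = json.dumps(entry["payload"], sort_keys=True)
        if entry["prev"] != prev:
            return False  # link points at the wrong predecessor
        if entry["hash"] != hashlib.sha256((prev + body).encode()).hexdigest():
            return False  # payload no longer matches its hash
        prev = entry["hash"]
    return True

chain, prev = [], "genesis"
for action in ("read", "write", "delete"):
    entry = make_entry(prev, {"action": action})
    chain.append(entry)
    prev = entry["hash"]

assert verify(chain)
chain[1]["payload"]["action"] = "nothing_happened"  # tamper with one entry
assert not verify(chain)  # the chain now fails verification
```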

Art. 14 ✓ -- approve_human() blocks execution until a human says yes. High-risk actions don't slip through.
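The pattern behind an approval gate is a pending queue: high-risk actions get a ticket and nothing executes until a reviewer releases it. A minimal hypothetical sketch (not the library's API):

```python
class ApprovalGate:
    """Toy human-oversight gate: actions wait in a queue until approved."""

    def __init__(self):
        self.pending = {}  # ticket id -> action awaiting review
        self._next = 0

    def submit(self, action: str) -> int:
        ticket = self._next
        self._next += 1
        self.pending[ticket] = action
        return ticket  # caller blocks or polls on this ticket

    def approve(self, ticket: int) -> str:
        return self.pending.pop(ticket)  # released for execution

gate = ApprovalGate()
ticket = gate.submit("write crm_record C-1234")
# ...nothing executes until a reviewer signs off...
action = gate.approve(ticket)
```

In production you'd back this with durable storage and notify reviewers, but the invariant is the same: the agent never holds the ability to execute a high-risk action unilaterally.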

Art. 15 ✓ -- Behavioral anomaly detection catches agents going rogue at 3 AM.
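Burst detection like the `burst_limit=10, burst_window=60.0` config above boils down to a sliding window: flag any agent that fires more than the limit inside the window. A simplified sketch, not the library's detector:

```python
from collections import deque

class BurstDetector:
    """Toy sliding-window burst detector."""

    def __init__(self, limit: int = 10, window: float = 60.0):
        self.limit, self.window = limit, window
        self.times = deque()

    def record(self, now: float) -> bool:
        self.times.append(now)
        # Drop events that fell out of the sliding window.
        while self.times and now - self.times[0] > self.window:
            self.times.popleft()
        return len(self.times) > self.limit  # True = anomalous burst

det = BurstDetector(limit=3, window=10.0)
print([det.record(t) for t in (0, 1, 2, 3)])  # [False, False, False, True]
```

The fourth action inside the window trips the detector -- the "agent going rogue at 3 AM" case is exactly this signature.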

Art. 17 ✓ -- Policies are versioned code. Diffable. Auditable. Rollbackable.

Plus: compliance mapper that tells you exactly where your gaps are, mapped to specific EU AI Act articles. Hand that report to your auditor.
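Conceptually, that kind of report is a mapping from articles to coverage status plus a score over it. A toy illustration (the statuses and article labels here are my own, not the mapper's output format):

```python
# Illustrative coverage map: article -> status. "organizational" marks
# obligations that need human processes, not code.
COVERAGE = {
    "Art. 9 risk management": "covered",
    "Art. 10 data governance": "organizational",
    "Art. 11 documentation": "organizational",
    "Art. 12 logging": "covered",
    "Art. 14 human oversight": "covered",
    "Art. 15 robustness": "covered",
    "Art. 17 quality management": "covered",
}

score = 100 * sum(v == "covered" for v in COVERAGE.values()) / len(COVERAGE)
gaps = [k for k, v in COVERAGE.items() if v != "covered"]
print(f"coverage: {score:.0f}%")       # coverage: 71%
print("needs human work:", gaps)
```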


What software can't do for you

Being honest: no framework covers 100% of the Act alone. Articles 10 (data governance) and 11 (technical documentation) require organizational processes -- staff training, management reviews, documented procedures. That's on you.

The compliance mapper is transparent about this. It tells you what's covered, what's partial, and what needs human work.


131 days

  • Now: EU AI Act already in force (August 1, 2024)
  • August 2, 2025: General-purpose AI rules apply
  • August 2, 2026: High-risk system requirements enforced

The tooling exists. The question is whether you start now or scramble later.

GitHub: github.com/Acacian/aegis -- pip install agent-aegis
Try in browser: Playground


What's your EU AI Act compliance strategy? Building in-house or using a framework? I'd love to hear what approaches others are taking.
