I spent two weeks reading the full EU AI Act — not because I wanted to, but because I realized the tools I build might not be compliant.
What surprised me most: the obligations aren't vague policy language. They're specific technical requirements that affect how you write code. Enforcement began phasing in during 2025, and if your AI system serves EU users, you're in scope. Here are the 5 obligations that caught me off guard — with practical examples.
1. AI Transparency Disclosure (Article 50)
The rule: Any AI system interacting with humans must clearly disclose that users are dealing with AI.
This applies to chatbots, AI-generated emails, synthetic media, and automated decision systems.
In practice:
```python
# Before sending any AI-generated response
def send_ai_response(user_id: str, content: str, channel: str) -> dict:
    disclosure = (
        "🤖 This response was generated by an AI system. "
        "You have the right to request human review."
    )
    headers = {}
    if channel == "email":
        content = f"{disclosure}\n\n{content}"
    elif channel == "chat":
        content = f"[AI-Generated] {content}"
    elif channel == "api":
        # Machine-readable disclosure for downstream clients
        headers = {"X-AI-Generated": "true", "X-AI-Model": "gpt-4"}
    return {"content": content, "headers": headers, "ai_disclosed": True}
```
Common mistake: Hiding the disclosure in footer text or terms of service. The regulation requires clear and immediate notification at the point of interaction.
2. Risk Classification of Your AI System (Article 6)
The rule: You must classify your AI system into one of four risk tiers. Your obligations scale with the risk level.
| Risk Level | Examples | Key Obligations |
|---|---|---|
| Unacceptable | Social scoring, real-time biometric surveillance | Banned entirely |
| High-risk | HR screening, credit scoring, medical diagnosis | Full compliance suite |
| Limited risk | Chatbots, content recommenders | Transparency only |
| Minimal risk | Spam filters, game AI | No specific obligations |
In practice:
```python
def classify_ai_risk(system_description: dict) -> str:
    """Classify your AI system per EU AI Act Annex III."""
    HIGH_RISK_DOMAINS = [
        "employment",       # CV screening, hiring decisions
        "credit",           # Loan approval, credit scoring
        "education",        # Exam grading, student assessment
        "law_enforcement",  # Predictive policing, evidence eval
        "immigration",      # Visa processing, border control
        "healthcare",       # Diagnosis, treatment recommendations
        "critical_infra",   # Energy, water, transport management
    ]
    domain = system_description.get("domain", "")
    makes_decisions = system_description.get("autonomous_decisions", False)
    affects_rights = system_description.get("affects_fundamental_rights", False)

    if domain in ["social_scoring", "realtime_biometric"]:
        return "UNACCEPTABLE"  # Banned - do not deploy
    if domain in HIGH_RISK_DOMAINS or (makes_decisions and affects_rights):
        return "HIGH_RISK"  # Full compliance required
    if system_description.get("interacts_with_humans", False):
        return "LIMITED_RISK"  # Transparency obligations
    return "MINIMAL_RISK"  # No specific obligations
```
Key insight: Most developer tools using LLMs for code generation, summarization, or content creation fall into "limited risk." But if your AI makes decisions about people (hiring, loans, access), you're likely high-risk.
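The pivot point is deployment context, not the model. Here's a toy sketch (the domains and function name are illustrative, not the Act's full Annex III list) showing how the same LLM can land in different tiers depending on where you plug it in:

```python
def tier_for(domain: str, decides_about_people: bool, user_facing: bool) -> str:
    """Toy triage — illustrative domains only, not legal advice."""
    HIGH_RISK = {"employment", "credit", "education", "healthcare"}
    if domain in HIGH_RISK and decides_about_people:
        return "HIGH_RISK"
    if user_facing:
        return "LIMITED_RISK"
    return "MINIMAL_RISK"

# Same LLM, different deployments:
print(tier_for("employment", True, True))  # HIGH_RISK: CV screening
print(tier_for("devtools", False, True))   # LIMITED_RISK: coding chatbot
```

The takeaway: you classify the *system* (model + use case + affected people), never the model in isolation.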
3. Technical Logging Requirements (Article 12)
The rule: High-risk AI systems must maintain logs that allow traceability of the system's operation throughout its lifecycle.
This isn't just "keep server logs." It means structured, auditable records of every AI decision.
In practice:
```python
import hashlib
import json
from datetime import datetime, timezone

def hash_pii_safe(value) -> str:
    """One-way hash so logs stay traceable without storing raw PII."""
    return hashlib.sha256(str(value).encode()).hexdigest()

def log_ai_decision(decision: dict, system_id: str) -> dict:
    """EU AI Act compliant decision logging."""
    log_entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "system_version": "1.2.3",
        # What was the input?
        "input_hash": hash_pii_safe(decision["input"]),
        "input_type": decision["input_type"],
        # What did the AI decide?
        "output": decision["output"],
        "confidence": decision.get("confidence"),
        "model_used": decision["model"],
        # Why? (explainability)
        "reasoning_summary": decision.get("explanation", ""),
        "features_used": decision.get("top_features", []),
        # Who was affected?
        "affected_party": decision.get("subject_id_hash"),
        "risk_level": decision.get("risk_level", "unknown"),
        # Can it be reviewed?
        "human_reviewable": True,
        "review_endpoint": f"/api/v1/decisions/{decision['id']}/review",
    }
    # Append-only log (immutable audit trail)
    with open(f"logs/ai_audit_{system_id}.jsonl", "a") as f:
        f.write(json.dumps(log_entry) + "\n")
    return log_entry
```
What regulators will ask for: Input-output traceability, model version tracking, and the ability to reconstruct why a specific decision was made — months after it happened.
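Traceability cuts both ways: writing the log is only half the job; you also need to get decisions back out on demand. A minimal retrieval sketch (assuming the JSONL schema above — the field names are illustrative):

```python
import json

def decisions_for_subject(log_path: str, subject_hash: str) -> list:
    """Scan an append-only JSONL audit log and return every decision
    that affected one (hashed) subject — the query a regulator will ask for."""
    matches = []
    with open(log_path) as f:
        for line in f:
            entry = json.loads(line)
            if entry.get("affected_party") == subject_hash:
                matches.append(entry)
    return matches
```

A linear scan is fine to start; once logs grow, you'd index by subject hash and model version so a "why was this person denied in March?" request doesn't take hours.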
4. Human Oversight Mechanism (Article 14)
The rule: High-risk AI systems must be designed to allow effective human oversight. Humans must be able to understand, intervene, and override the AI.
This is the "kill switch" requirement — but it goes deeper than just a button.
In practice:
```python
class HumanOversightController:
    """EU AI Act Article 14 compliant oversight."""

    def __init__(self, system_id: str, auto_approve_threshold: float = 0.95):
        self.system_id = system_id
        self.threshold = auto_approve_threshold
        self.override_active = False

    def evaluate_decision(self, ai_output: dict) -> dict:
        confidence = ai_output.get("confidence", 0)
        risk = ai_output.get("risk_level", "high")
        if self.override_active:
            return {"action": "BLOCKED", "reason": "Human override active"}
        if risk == "high" or confidence < self.threshold:
            return {
                "action": "HUMAN_REVIEW_REQUIRED",
                "ai_recommendation": ai_output,
                "review_url": f"/review/{ai_output['id']}",
                "deadline_hours": 24,
            }
        return {
            "action": "AUTO_APPROVED",
            "ai_output": ai_output,
            "logged": True,
        }

    def emergency_stop(self, reason: str) -> dict:
        """Article 14(4): Ability to immediately halt operations."""
        self.override_active = True
        # notify_operations_team is your alerting hook (PagerDuty, Slack, etc.)
        notify_operations_team(
            f"AI system {self.system_id} halted: {reason}"
        )
        return {"status": "HALTED", "reason": reason}
```
The trap: Many teams implement oversight as "a human looks at a dashboard." That's not enough. The regulation requires that humans can intervene and override in real-time, not just observe after the fact.
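One way to see the difference: oversight has to sit *in* the decision path, not beside it. A bare-bones sketch (queue name and timeout are made up) where high-risk decisions block until a reviewer acts — nothing slides through while nobody is watching:

```python
import queue
import threading

review_queue = queue.Queue()  # decisions waiting on a human verdict

def submit(decision: dict, timeout: float = 5.0) -> str:
    """High-risk decisions wait for a human; they never auto-approve."""
    if decision.get("risk") != "high":
        return "AUTO_APPROVED"
    done = threading.Event()
    decision["_done"], decision["_verdict"] = done, None
    review_queue.put(decision)
    done.wait(timeout)  # block until a reviewer acts, or escalate
    return decision["_verdict"] or "ESCALATED"

def approve_next():
    """Reviewer side: pull one pending decision and approve it."""
    d = review_queue.get()
    d["_verdict"] = "APPROVED"
    d["_done"].set()
```

Note the failure mode: if no human responds in time, the decision escalates rather than defaulting to approval. A dashboard can't do that; a gate can.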
5. Serious Incident Reporting (Article 73)
The rule: Providers of high-risk AI must report serious incidents to the relevant national authority within specific timeframes — as a general rule no later than 15 days after becoming aware, tightening to 2 days for widespread harm or critical-infrastructure disruption and 10 days in the event of a death.
A "serious incident" includes: death or serious harm to a person's health, serious and irreversible disruption of critical infrastructure, infringement of fundamental rights obligations, or serious harm to property or the environment.
In practice:
```python
from enum import Enum

class IncidentSeverity(Enum):
    CRITICAL = "critical"  # Death, serious injury
    HIGH = "high"          # Rights violation, discrimination
    MEDIUM = "medium"      # Service disruption, data breach
    LOW = "low"            # Minor malfunction, degraded output

def report_ai_incident(incident: dict) -> dict:
    """EU AI Act Article 73 incident reporting."""
    severity = incident["severity"]
    # Illustrative mapping — check Art. 73(2)-(4) for the exact clock
    REPORTING_DEADLINES = {
        IncidentSeverity.CRITICAL: "10_days",  # Death: immediately, 10 days max
        IncidentSeverity.HIGH: "2_days",       # Widespread harm, critical infrastructure
        IncidentSeverity.MEDIUM: "15_days",    # General rule for serious incidents
    }
    if severity in REPORTING_DEADLINES:
        report = {
            "provider_name": "Your Company",
            "system_id": incident["system_id"],
            "incident_date": incident["date"],
            "severity": severity.value,
            "description": incident["description"],
            "affected_parties": incident.get("affected_count", "unknown"),
            "corrective_actions": incident.get("actions_taken", []),
            "reporting_deadline": REPORTING_DEADLINES[severity],
        }
        # Submit to national AI authority (stub for your integration)
        submit_to_authority(report, country=incident["jurisdiction"])
        # Internal escalation (stubs for your alerting hooks)
        notify_legal_team(report)
        notify_dpo(report)  # Data Protection Officer
        return {"reported": True, "deadline": REPORTING_DEADLINES[severity]}
    # Low severity: log internally, no mandatory external report
    return {"reported": False, "logged": True}
```
Pro tip: Set up automated monitoring that detects potential incidents (bias spikes, confidence drops, user complaints) BEFORE they become reportable events. Prevention is cheaper than reporting.
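As a starting point, here's a naive rolling-window monitor (the baseline, window size, and drop threshold are placeholders you'd tune per system) that flags sustained confidence drops before they turn into reportable events:

```python
from collections import deque

class DriftMonitor:
    """Early-warning sketch: alert when rolling mean confidence falls
    well below a baseline — a prompt to investigate, not a verdict."""

    def __init__(self, baseline: float = 0.9, window: int = 50, drop: float = 0.15):
        self.baseline = baseline
        self.drop = drop
        self.recent = deque(maxlen=window)  # only the last `window` scores count

    def observe(self, confidence: float) -> bool:
        """Record one prediction's confidence; return True if alert fires."""
        self.recent.append(confidence)
        mean = sum(self.recent) / len(self.recent)
        return mean < self.baseline - self.drop
```

A rolling mean ignores one-off flukes but catches sustained degradation; in production you'd add the same treatment for bias metrics and complaint rates.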
Quick Self-Assessment
Not sure where you stand? Ask yourself these 5 questions:
- Does your AI interact with end users? → You need transparency disclosures
- Does your AI make decisions about people? → Likely high-risk, full compliance needed
- Can you trace why your AI made a specific decision 6 months ago? → If not, your logging is insufficient
- Can a human override your AI in real-time? → Required for high-risk systems
- Do you have an incident response plan for AI failures? → Mandatory for high-risk providers
If you answered "no" to any of these, you have work to do.
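If you want to turn that checklist into a repeatable exercise, here's a tiny sketch (the question keys and article mappings are mine — illustrative only, not legal advice):

```python
def compliance_gaps(answers: dict) -> list:
    """Map yes/no answers to the obligations above. Illustrative only."""
    gaps = []
    if answers["interacts_with_users"] and not answers["discloses_ai"]:
        gaps.append("transparency disclosure (Art. 50)")
    if answers["decides_about_people"]:
        if not answers["can_trace_decisions"]:
            gaps.append("decision logging (Art. 12)")
        if not answers["human_can_override"]:
            gaps.append("human oversight (Art. 14)")
        if not answers["has_incident_plan"]:
            gaps.append("incident reporting plan")
    return gaps
```

Run it once per AI system in your inventory; an empty list doesn't mean you're compliant, but a non-empty one definitely means you're not.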
Want to Check Your Compliance Automatically?
We built a free, open-source MCP server that scans your AI codebase and checks compliance against EU AI Act requirements.
👉 Try it: Free EU AI Act Compliance Scanner
It checks transparency labels, risk classification, logging practices, and more — in minutes, not weeks.
The scanner is open-source on GitHub.
What obligation surprised you the most? Have you started compliance work yet? Let me know in the comments.