
5 EU AI Act Compliance Checks Every Python Developer Should Know

I spent two weeks decoding the EU AI Act's 144 pages so you don't have to. Here are five checks you can run in 10 minutes with Python scripts. No compliance consultant needed.

Check 1: Is your system "high-risk"?

The Act treats high-risk systems (Annex III) differently. Most dev tools and chatbots are NOT high-risk — but verify.

HIGH_RISK_DOMAINS = {
    "biometrics": ["facial recognition", "voice identification"],
    "critical_infra": ["energy grid", "water supply"],
    "employment": ["resume screening", "hiring decisions"],
    "credit": ["credit scoring", "loan approval"],
    "law_enforcement": ["predictive policing", "lie detection"],
}

def check_risk(description: str) -> str:
    """Crude keyword screen. Annex III is the authoritative list; read it if this flags anything."""
    desc = description.lower()
    for domain, keywords in HIGH_RISK_DOMAINS.items():
        if any(kw in desc for kw in keywords):
            return f"HIGH-RISK ({domain}) — full compliance required"
    return "LIMITED/MINIMAL — basic transparency obligations only"

print(check_risk("chatbot using GPT-4 for recipe suggestions"))
# LIMITED/MINIMAL — basic transparency obligations only

Check 2: Do you have the minimum required documentation?

Article 13 requires transparency. Can someone understand what your AI does without reading source code?

from pathlib import Path

REQUIRED = ["README.md", "MODEL_CARD.md", "docs/limitations.md"]

def check_docs(root: str) -> list:
    missing = []
    for f in REQUIRED:
        p = Path(root) / f
        # Files under ~100 characters count as stubs, not real docs
        if not p.exists() or len(p.read_text().strip()) < 100:
            missing.append(f)
    return missing

gaps = check_docs(".")
print(f"Missing docs: {gaps}" if gaps else "All docs present")

The fix: add a "Limitations" section to your README and a one-page doc explaining what your AI does. You already know this stuff — it just needs writing down.
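
If check_docs reports gaps, you can scaffold the stubs in a few lines. The section headings below are my suggestion, not anything the Act prescribes; fill them in with your own words.

from pathlib import Path

STUBS = {
    "MODEL_CARD.md": "# Model Card\n\n## Intended use\n\n## Training data\n\n## Known limitations\n",
    "docs/limitations.md": "# Limitations\n\n## Out-of-scope uses\n\n## Known failure modes\n",
}

def scaffold_docs(root: str) -> None:
    # Create stub files for any docs the check flagged as missing
    for name, content in STUBS.items():
        p = Path(root) / name
        if not p.exists():
            p.parent.mkdir(parents=True, exist_ok=True)
            p.write_text(content)
            print(f"Created stub: {name}")

scaffold_docs(".")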

Check 3: Are AI decisions logged?

Article 12 requires high-risk systems to automatically record events (logs) over their lifetime. Even for lower-risk systems, it's best practice.

from pathlib import Path

AI_PATTERNS = ["client.chat.completions", "ChatOpenAI",
               "pipeline(", "model.predict", "model.generate"]
LOG_PATTERNS = ["logging.", "logger.", "wandb.", "mlflow."]

def check_logging(root: str) -> int:
    """Heuristic: count AI calls with no logging in the surrounding lines."""
    unlogged = 0
    for py in Path(root).rglob("*.py"):
        if ".venv" in str(py):
            continue
        lines = py.read_text(errors="ignore").split("\n")
        for i, line in enumerate(lines):
            if any(p in line for p in AI_PATTERNS):
                # Look a few lines around the call for any logging
                ctx = "\n".join(lines[max(0, i-3):i+5])
                if not any(p in ctx for p in LOG_PATTERNS):
                    unlogged += 1
    return unlogged

print(f"Unlogged AI calls: {check_logging('.')}")

Minimum viable logging: timestamp, input hash, output summary, model name, latency. Five fields per call.
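
Here's a sketch of a wrapper that captures those five fields. ai_fn stands in for whatever function calls your model, and the field names are my own choice:

import hashlib
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_audit")

def logged_call(ai_fn, prompt: str, model: str):
    # Wrap any model call with the five-field audit record
    start = time.time()
    output = ai_fn(prompt)
    logger.info(json.dumps({
        "timestamp": start,
        "input_hash": hashlib.sha256(prompt.encode()).hexdigest()[:16],
        "output_summary": str(output)[:200],
        "model": model,
        "latency_ms": round((time.time() - start) * 1000),
    }))
    return output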

Check 4: Basic bias testing

Articles 9-10 touch on bias. You don't need a PhD in fairness metrics — just a sanity check.

def bias_probe(ai_fn, prompts: dict) -> dict:
    """Run same question with different demographics."""
    return {demo: ai_fn(prompt)
            for demo, prompt in prompts.items()}

# Example:
# results = bias_probe(your_ai, {
#     "french_male": "Evaluate loan for Jean Martin, 30",
#     "north_african_female": "Evaluate loan for Fatima Benali, 30",
# })
# Compare outputs — flag significant differences

This won't catch everything, but showing regulators you ran basic checks puts you ahead of having nothing.
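
What counts as a "significant difference" depends on your output format. For text responses, even a crude keyword diff across the bias_probe results is a start (the keyword list here is a placeholder for your domain's decision terms):

def flag_differences(results: dict, keywords=("approve", "deny", "reject")) -> list:
    # Which decision keywords appear in each demographic's output?
    hits = {demo: {kw for kw in keywords if kw in str(output).lower()}
            for demo, output in results.items()}
    baseline = next(iter(hits.values()))
    # Flag any demographic whose keyword set differs from the first one
    return [demo for demo, kws in hits.items() if kws != baseline]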

Check 5: What AI frameworks are in your deps?

The one most developers miss. Your requirements.txt might pull in AI frameworks transitively — scanning the installed environment catches what the requirements file alone won't show.

import importlib.metadata

AI_PKGS = ["openai", "anthropic", "transformers", "torch",
           "tensorflow", "langchain", "mistralai", "cohere",
           "llama-index", "groq", "replicate"]

found = []
for pkg in AI_PKGS:
    try:
        v = importlib.metadata.version(pkg)
        found.append(f"{pkg}=={v}")
    except importlib.metadata.PackageNotFoundError:
        pass

print(f"AI frameworks found: {len(found)}")
for f in found:
    print(f"  {f}")

Every framework found is a compliance trigger — not a problem, just something to document.
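
Since documenting is the whole ask, write the list down while you have it. A minimal sketch that dumps the found list from above into a JSON inventory (the file name is arbitrary):

import json
from datetime import date

with open("ai_inventory.json", "w") as f:
    json.dump({"scanned": str(date.today()), "ai_dependencies": found}, f, indent=2)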

Priority order

  1. Check 5 first — 30 seconds, tells you if the Act applies
  2. Check 1 — determines how strict your obligations are
  3. Check 2 — usually the biggest gap, easiest fix
  4. Check 3 — add now, thank yourself later
  5. Check 4 — start simple, iterate

Most Python AI projects need a weekend of documentation, not a six-month program.


I built these checks into an open-source MCP server that scans Python projects automatically — dependency detection, risk classification, the works. Free tier available, no signup.

Run these on your project and drop a comment — especially if you found something unexpected.
