DEV Community

Arkforge

I Added EU AI Act Compliance to My CI/CD in 3 Lines

A few weeks ago I wrote about accidentally discovering my Python app fell under the EU AI Act because of a LangChain import I'd forgotten about.

The response surprised me. Turns out a lot of devs had the same "wait, this applies to me?" moment. The most common question: "How do I check this automatically?"

So I packaged the scanner I'd been using locally and put it on PyPI.

The shortest version

pip install eu-ai-act-scanner
eu-ai-act-scanner scan .

That's it. It walks your project, checks source files and dependency files for 26 AI frameworks, and reports what it found along with the corresponding EU AI Act articles.

No API key. No account. No dependencies. Works offline.

What it actually looks like

Here's a real scan on a project that uses LangChain and OpenAI:

$ eu-ai-act-scanner scan ./my-saas

Files scanned: 847
AI frameworks detected: 3

  [!!!] OpenAI (risk: high)
      Article: Art. 51-53 (GPAI)
      Remediation: Register as GPAI provider, document training data
      - src/api/chat.py
      - src/utils/embeddings.py

  [!!!] LangChain (risk: medium)
      Article: Art. 50
      Remediation: Add transparency notices for AI-generated content
      - src/chains/summary.py
      - src/chains/qa.py

  [!] HuggingFace Transformers (risk: low)
      Article: Art. 6 + Annex III
      Remediation: Assess if use case falls under high-risk categories
      - notebooks/experiment.py

Three frameworks, three different risk levels, and the exact EU AI Act articles that apply: the mapping that would otherwise take you an afternoon of reading legal text.

Compliance check in one command

Scanning is step one. Step two is checking what you actually need to do:

$ eu-ai-act-scanner check ./my-saas --risk limited

Risk category: limited
Compliance score: 3/7 (43%)

  [PASS] AI disclosure
  [PASS] Version tracking
  [FAIL] Transparency documentation
  [FAIL] Data governance
  [FAIL] Human oversight process
  [FAIL] Risk management documentation

Recommendations:
  - Create TRANSPARENCY.md documenting AI-generated outputs
    Why: Art. 50 requires users to know when they interact with AI

  - Add RISK_MANAGEMENT.md with risk assessment
    Why: Art. 9 requires documented risk management for AI systems

It checks for the documents and processes the EU AI Act requires for your risk category, and tells you exactly what's missing.

3 lines in GitHub Actions

This is where it gets useful at scale. Add this to your CI/CD and you'll catch new AI dependencies before they reach production without a compliance review:

- name: EU AI Act Compliance
  run: |
    pip install eu-ai-act-scanner
    eu-ai-act-scanner check . --risk limited --json > compliance.json
    python - <<'PY'
    import json, sys
    r = json.load(open('compliance.json'))
    score = r.get('compliance_percentage', 0)
    if score < 100:
        print(f'EU AI Act compliance: {score}% - FAIL')
        for rec in r.get('recommendations', []):
            if rec['status'] == 'FAIL':
                print("  - " + rec['what'])
        sys.exit(1)
    print('EU AI Act compliance: PASS')
    PY

Now every PR that adds an AI framework gets flagged. No manual review needed.
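If the inline snippet grows past what you're comfortable keeping in YAML, the same gate factors out into a plain function you can version and test. A sketch, assuming the JSON fields shown above (compliance_percentage, recommendations, status, what) — verify them against the scanner's actual --json output:

```python
def gate(report: dict, threshold: int = 100) -> int:
    """Return 0 if the compliance score meets the threshold, 1 otherwise.

    The report dict's shape is assumed from the scanner's --json output
    as shown in this post; double-check field names before relying on it.
    """
    score = report.get("compliance_percentage", 0)
    if score < threshold:
        print(f"EU AI Act compliance: {score}% - FAIL")
        # Surface only the failing recommendations, like the CI step above.
        for rec in report.get("recommendations", []):
            if rec.get("status") == "FAIL":
                print(f"  - {rec['what']}")
        return 1
    print("EU AI Act compliance: PASS")
    return 0
```

In CI you'd wire it up with something like `sys.exit(gate(json.load(open('compliance.json'))))`.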

What it catches that grep doesn't

I used to grep -rn "from openai" and call it a day. The scanner finds things that approach misses:

Cloud provider wrappers. boto3 calling Bedrock, azure-ai-openai, google-cloud-aiplatform — the package name says "cloud", but the usage is AI. The scanner knows.

Dependency files. Your code doesn't import torch directly, but it's in your requirements.txt because something depends on it. Still counts under the regulation.

26 frameworks, not just the obvious ones. DeepSeek, Qwen, Moonshot, Groq, CrewAI, DSPy, Haystack — the scanner covers the full landscape, not just OpenAI and HuggingFace.

8 languages. Python, JavaScript, TypeScript, Java, Go, Rust, C, C++. It scans .py, .js, .ts, .java, .go, .rs, .cpp, .c files.
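To make the dependency-file point concrete, here's a toy version of that kind of detection. This is NOT the scanner's actual implementation — just the shape of the idea: match known framework names against source files and requirements-style files alike, so a framework counts even when your code never imports it directly.

```python
from pathlib import Path

# Toy illustration only: a handful of hints, not the scanner's 26-framework
# ruleset. Real detection needs import parsing, lockfile formats, and
# cloud-wrapper awareness (boto3 + Bedrock, etc.).
FRAMEWORK_HINTS = {
    "openai": "OpenAI",
    "langchain": "LangChain",
    "transformers": "HuggingFace Transformers",
    "torch": "PyTorch",
}

def naive_scan(root: str) -> dict[str, set[str]]:
    found: dict[str, set[str]] = {}
    for path in Path(root).rglob("*"):
        # Scan source files AND dependency files: torch pinned in
        # requirements.txt counts even if no module imports it.
        if not path.is_file() or path.suffix not in {".py", ".txt"}:
            continue
        text = path.read_text(errors="ignore").lower()
        for hint, name in FRAMEWORK_HINTS.items():
            if hint in text:
                found.setdefault(name, set()).add(str(path))
    return found
```

Note that naive substring matching like this produces false positives ("torch" matches "torchlight"), which is exactly why a one-off grep isn't a substitute for a proper scanner.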

Python API if you need it

from eu_ai_act_scanner import EUAIActScanner

scanner = EUAIActScanner("./my-project")
results = scanner.scan()
compliance = scanner.check_compliance("limited")

print(f"Frameworks found: {len(results['detected_models'])}")
print(f"Compliance: {compliance['compliance_percentage']}%")

for rec in compliance["recommendations"]:
    if rec["status"] == "FAIL":
        print(f"  FIX: {rec['what']}")


Useful if you want to build this into your own tooling or dashboards.
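For instance, rendering the compliance result as a Markdown checklist for a dashboard or PR comment. The field names here mirror the example above (compliance_percentage, recommendations, status, what); treat them as assumptions and verify against the real return value:

```python
def to_markdown(compliance: dict) -> str:
    """Render a compliance dict (shape assumed from the API example above)
    as a Markdown checklist suitable for a PR comment or dashboard."""
    lines = [f"## EU AI Act compliance: {compliance['compliance_percentage']}%", ""]
    for rec in compliance.get("recommendations", []):
        box = "x" if rec["status"] == "PASS" else " "
        lines.append(f"- [{box}] {rec['what']}")
    return "\n".join(lines)
```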

The deadline is real

August 2026. That's when most EU AI Act provisions take effect. Fines go up to 35M EUR or 7% of global annual turnover, whichever is higher.

For most startups, compliance is a few days of documentation work. But you can't fix what you don't know about. Step one is scanning your codebase. Step two is checking what's missing. Both take about 30 seconds now.

pip install eu-ai-act-scanner
eu-ai-act-scanner check . --risk limited

The code is MIT-licensed and on GitHub. Issues and PRs welcome.


Building compliance tooling because I needed it myself. If you run the scanner on your project and find something unexpected, I'd like to hear about it.
