Building an EU AI Act Compliance Checker as MCP Server
TL;DR: With the EU AI Act enforcement deadline approaching (August 2026), I built an MCP (Model Context Protocol) server to help developers check AI system compliance. It's open-source, and I'd love your feedback.
Why This Matters (and Why Now)
If you're building AI systems that serve EU users, August 2, 2026 should be on your calendar. That's when the EU AI Act becomes fully enforceable, with fines up to €35M or 7% of global annual turnover for non-compliance.
The regulation categorizes AI systems into four risk levels:
- Unacceptable risk: Banned (e.g., social scoring, real-time remote biometric identification in public spaces, subject to narrow law-enforcement exceptions)
- High risk: Strict requirements (e.g., hiring tools, credit scoring, biometric verification)
- Limited risk: Transparency obligations (e.g., chatbots, deepfakes)
- Minimal risk: No specific requirements (e.g., spam filters, video games)
The problem? Most developers I talk to either:
- Don't know their system's risk classification
- Don't know what compliance requirements apply
- Are overwhelmed by the 144-page regulation
Enter Model Context Protocol (MCP)
While researching solutions, I discovered MCP - Anthropic's open standard for connecting AI assistants to external data sources and tools. Think of it as a universal adapter between LLMs and your tooling.
Why MCP is perfect for compliance:
- Context-aware: MCP servers provide structured context to AI assistants
- Tool integration: Exposes compliance checks as callable tools
- Standardized: Works with any MCP-compatible client (Claude Desktop, IDEs, etc.)
- Composable: Can combine with other MCP servers (documentation, code analysis, etc.)
Instead of manually parsing 144 pages of legalese, developers can ask their AI assistant: "Is my facial recognition API compliant with the EU AI Act?" and get instant, actionable answers.
Architecture: How It Works
┌─────────────────────┐
│ Claude Desktop │
│ (or any MCP │
│ compatible client)│
└──────────┬──────────┘
│
│ MCP Protocol
│
┌──────────▼──────────┐
│ EU AI Act MCP │
│ Compliance Server │
├─────────────────────┤
│ • Risk classifier │
│ • Requirement │
│ checker │
│ • Article lookup │
│ • Compliance report │
│ generator │
└──────────┬──────────┘
│
│
┌──────────▼──────────┐
│ EU AI Act Database │
│ (Articles, Annexes,│
│ Risk Categories) │
└─────────────────────┘
Core Components
1. Risk Classification Engine
// Example: Classify an AI system
{
  "system_description": "Chatbot for customer support",
  "use_case": "Automated responses to product questions",
  "data_processed": ["user queries", "chat history"]
}
// Returns: "Limited risk - Transparency obligations apply"
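Under the hood, classification could start as a severity-ordered rule table: check the most severe categories first, so ambiguous descriptions never get under-classified. This is a hypothetical sketch; the rule sets, keywords, and function name are my illustrative assumptions, not the server's actual code:

```python
# Severity-ordered rule table: most severe categories are checked first,
# so a description matching several rules gets the stricter classification.
RISK_RULES = [
    ({"social scoring", "real-time biometric"}, "unacceptable"),
    ({"hiring", "cv screening", "credit scoring"}, "high"),
    ({"chatbot", "deepfake"}, "limited"),
]

def classify(description: str) -> str:
    """Return the first (most severe) matching risk level; default to minimal."""
    text = description.lower()
    for keywords, level in RISK_RULES:
        if any(keyword in text for keyword in keywords):
            return level
    return "minimal"

print(classify("Chatbot for customer support"))  # limited
```

A real classifier would need much richer matching (Annex III use-case definitions, not keywords), but the severity-first ordering is what makes the tool err on the conservative side.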
2. Requirement Mapping
Based on risk level, the server maps specific compliance requirements:
- Documentation obligations
- Human oversight requirements
- Data governance rules
- Transparency mandates
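A minimal version of that mapping could be a static table keyed by risk level. The article numbers below are the ones this post cites (using the final numbering of Regulation (EU) 2024/1689); the dictionary structure itself is an assumption about the server internals:

```python
# Risk level -> (article, obligation) pairs, per the final numbering of
# Regulation (EU) 2024/1689. The table layout is illustrative.
REQUIREMENTS = {
    "high": [
        ("Article 9", "Risk management system"),
        ("Article 10", "Data and data governance"),
        ("Article 11", "Technical documentation"),
        ("Article 12", "Record-keeping (automatic logging)"),
        ("Article 14", "Human oversight"),
        ("Article 15", "Accuracy, robustness and cybersecurity"),
    ],
    "limited": [("Article 50", "Transparency obligations")],
    "minimal": [],
}

def requirements_for(level: str) -> list:
    """Look up the obligations for a risk level; unknown levels map to none."""
    return REQUIREMENTS.get(level, [])
```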
3. Interactive Guidance
Instead of static documentation, MCP enables conversational compliance:
User: "I'm building a CV screening tool. What do I need?"
Assistant (via MCP):
Your system is classified as HIGH RISK (Annex III, 4.a - Employment).
Required compliance measures:
✓ Risk management system (Article 9)
✓ Training data governance (Article 10)
✓ Technical documentation (Article 11)
✓ Automatic logging (Article 12)
✓ Human oversight (Article 14)
✓ Accuracy requirements (Article 15)
Next step: Would you like me to generate a compliance checklist?
Implementation Highlights
1. Structured Knowledge Base
The server indexes the full EU AI Act into queryable structures:
interface Article {
  number: string;
  title: string;
  content: string;
  risk_levels: RiskLevel[];
  requirements: Requirement[];
  penalties: Penalty[];
}
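Since the server itself runs on Python, the same shape could be a dataclass backing a simple lookup index. This is a sketch only; the post doesn't show the real index, and the two sample entries are mine:

```python
from dataclasses import dataclass, field

# Python analogue of the Article interface above, with a toy lookup index.
@dataclass
class Article:
    number: str
    title: str
    content: str = ""
    risk_levels: list = field(default_factory=list)

INDEX = {
    "9": Article("9", "Risk management system", risk_levels=["high"]),
    "14": Article("14", "Human oversight", risk_levels=["high"]),
}

def check_article(number: str):
    """Return the indexed Article, or None if the number is unknown."""
    return INDEX.get(number)

print(check_article("14").title)  # Human oversight
```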
2. MCP Tools Exposed
{
  "tools": [
    {
      "name": "classify_ai_system",
      "description": "Determines EU AI Act risk category",
      "inputSchema": { /* ... */ }
    },
    {
      "name": "get_compliance_requirements",
      "description": "Lists specific obligations for a system",
      "inputSchema": { /* ... */ }
    },
    {
      "name": "check_article",
      "description": "Looks up specific articles and annexes",
      "inputSchema": { /* ... */ }
    },
    {
      "name": "generate_compliance_report",
      "description": "Creates a compliance assessment report",
      "inputSchema": { /* ... */ }
    }
  ]
}
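Server-side, each `tools/call` request from the client has to be routed to a handler by name. Here's a minimal, framework-free dispatch sketch; handler bodies are stubs, and a real server would register these through the MCP SDK rather than hand-rolling the routing:

```python
# Minimal tool-call dispatch: map tool names to handler functions.
# Bodies are stubs; only the routing pattern is the point here.
def classify_ai_system(args: dict) -> dict:
    return {"risk": "limited"}  # stub result

def get_compliance_requirements(args: dict) -> dict:
    return {"requirements": []}  # stub result

TOOL_HANDLERS = {
    "classify_ai_system": classify_ai_system,
    "get_compliance_requirements": get_compliance_requirements,
}

def handle_tool_call(name: str, args: dict) -> dict:
    """Route a tools/call request to its handler; reject unknown tool names."""
    handler = TOOL_HANDLERS.get(name)
    if handler is None:
        raise ValueError(f"Unknown tool: {name}")
    return handler(args)
```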
3. Smart Defaults & Guardrails
- Conservative classification (when in doubt, suggests higher risk)
- Cites specific articles for every recommendation
- Flags ambiguous cases requiring legal review
- Updates automatically as clarifications are published
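The first and third guardrails can be sketched in a few lines: keep the most severe candidate classification, and flag the case for legal review whenever candidates disagree. The names below are illustrative assumptions, not the server's API:

```python
# "When in doubt, go higher": severity ranking for the four risk categories.
SEVERITY = {"minimal": 0, "limited": 1, "high": 2, "unacceptable": 3}

def conservative(candidates: list) -> tuple:
    """Return (most severe candidate level, needs_legal_review).

    Review is flagged whenever the candidate classifications disagree,
    i.e. the case is ambiguous.
    """
    level = max(candidates, key=SEVERITY.__getitem__)
    return level, len(set(candidates)) > 1

print(conservative(["limited", "high"]))  # ('high', True)
```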
Real-World Usage Example
Here's how a developer might use it in their workflow:
# Install MCP server
git clone https://github.com/ark-forge/mcp-eu-ai-act.git
cd mcp-eu-ai-act
python3 server.py
# Configure in Claude Desktop
# (add to claude_desktop_config.json)
{
  "mcpServers": {
    "eu-ai-act": {
      "command": "python3",
      "args": ["path/to/mcp-eu-ai-act/server.py"]
    }
  }
}
Then in Claude Desktop:
Me: I'm building a loan approval AI. What are my obligations?
Claude: [Using EU AI Act MCP server...]
Your system is HIGH RISK (Annex III, 5.b - creditworthiness assessment).
Key obligations:
1. Article 9: Implement risk management throughout lifecycle
2. Article 10: Use representative training data
3. Article 14: Enable human oversight of decisions
4. Article 50: Inform users they're interacting with AI
Timeline:
- August 2, 2026: Full compliance required
- Before placing on the market: Complete the conformity assessment (Article 43)
Would you like me to generate a compliance roadmap?
What's Next: Feedback Needed
This is very much a build-in-public project. I'm looking for feedback on:
- Use cases I'm missing: What compliance questions would you want to ask?
- Integration points: What other tools should this connect to? (CI/CD, documentation generators, audit logs?)
- Risk classification edge cases: The regulation leaves some areas ambiguous - where do you see grey areas?
- Multi-language support: Should compliance reports be available in all 24 EU official languages?
Try It Out
🔗 MCP Server: arkforge.fr/mcp
📖 Documentation: GitHub
💬 Discussion: Drop questions in the comments
Why Open Source This?
Three reasons:
- Compliance benefits everyone: The more accessible compliance guidance is, the better AI systems we'll all build
- Community validation: I'm a developer, not a lawyer - I need the community to stress-test the classifications
- Transparency: When a tool tells you your system is "high risk," you should be able to audit why
The Bigger Picture
The EU AI Act is the first comprehensive AI regulation, but it won't be the last. Similar frameworks are emerging in the US (Executive Order 14110), UK (AI Safety Summit commitments), and other jurisdictions.
Making compliance developer-friendly isn't just about avoiding fines - it's about building AI systems that are:
- Safer by design
- Auditable
- Transparent to users
- Aligned with societal values
MCP is uniquely positioned to make this happen because it meets developers where they already work: in their IDE, their terminal, their AI assistant.
Questions for the Community
- If you're building AI systems for EU users, what's your compliance strategy?
- Have you used MCP servers in production? What's been your experience?
- What other regulatory frameworks would benefit from this approach?
Let's make compliance less painful, together. 🚀
Disclaimer: This tool provides guidance based on publicly available EU AI Act text. It is not legal advice. Consult qualified legal counsel for compliance decisions.
About the author: I'm building ArkForge, developer tools focused on AI compliance. Our open-source MCP server helps teams assess EU AI Act obligations. Always happy to connect with fellow builders.