If you're building AI features right now, you and your team are probably arguing about the tech stack:
- Should we use LangChain or LlamaIndex?
- Should we hit the OpenAI API or run Llama 3 locally?
Here is the harsh truth about the EU AI Act (now in force, with obligations phasing in):
Regulators do not care about your tech stack.
They don’t care if it’s a 100B-parameter model or a simple Python script using scikit-learn.
The law only cares about one thing:
Your use case.
Why This Matters
Your use case determines your risk category.
If your product falls into the High-Risk category, you are legally required to implement:
- human oversight
- risk management systems
- detailed technical documentation (Annex IV)
Getting this wrong doesn’t just mean “non-compliance”.
It means:
- failed procurement audits
- blocked enterprise deals
- serious regulatory exposure
🔍 5 Real-World AI Scenarios
Here are practical examples to help you understand where your system might fall.
1. AI Chatbot for Customer Support
Use case:
- routing tickets
- answering FAQs
Classification:
👉 Limited Risk
Dev requirement:
Add UI elements disclosing that users are interacting with AI.
The trap:
If your bot starts making decisions (e.g. auto-refunds, banning users), you might cross into High-Risk territory.
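Both requirements fit in a few lines of backend code. This is a minimal sketch assuming a hypothetical `handle_turn()` hook in your chat service; the action names (`issue_refund`, `ban_user`) are invented for illustration:

```python
# Disclosure text shown on the first turn of every session.
AI_DISCLOSURE = "You are chatting with an AI assistant."

# Actions that merely inform the user keep you in Limited Risk; actions
# that decide something about the user can push you toward High Risk.
DECISION_ACTIONS = {"issue_refund", "ban_user"}

def handle_turn(reply: str, action: str, is_first_message: bool) -> str:
    """Return the bot reply, prefixed with the AI disclosure on turn one."""
    if action in DECISION_ACTIONS:
        # Don't let the bot act autonomously; escalate to a human instead.
        raise PermissionError(f"'{action}' requires human approval")
    return f"{AI_DISCLOSURE}\n\n{reply}" if is_first_message else reply
```

The point: the disclosure is one string, but the guard on decision-making actions is what keeps the bot on the Limited-Risk side of the line.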
2. AI for CV Screening / Hiring
Use case:
- parsing resumes
- ranking candidates
Classification:
👉 High-Risk (explicitly listed under Annex III)
Dev requirement:
- bias monitoring
- human-in-the-loop (HITL) flows
- full decision logging
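A sketch of what HITL plus decision logging can look like, assuming a hypothetical `record_screening()` helper (field names are made up, not from any compliance framework): the model only ranks, and a record cannot be written without a named human reviewer.

```python
from datetime import datetime, timezone

def record_screening(candidate_id: str, ai_rank: int, reviewer: str,
                     decision: str, log: list) -> None:
    """Append one auditable record; the model suggests, a human decides."""
    if not reviewer:
        # High-Risk requirement: no fully automated hiring decisions.
        raise ValueError("A human reviewer must sign off on this decision")
    log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "candidate_id": candidate_id,
        "ai_rank": ai_rank,    # what the model suggested
        "reviewer": reviewer,  # who approved or overrode it
        "decision": decision,
    })
```

In production this would write to an append-only store, but the shape is the same: every decision carries the model output, the human, and the outcome.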
3. E-commerce Recommendation Engine
Use case:
- tracking user behavior
- suggesting products
Classification:
👉 Minimal Risk
Dev requirement:
Almost none under the AI Act (GDPR still applies).
4. AI Credit Scoring System
Use case:
- determining loan eligibility
Classification:
👉 High-Risk
Dev requirement:
Full traceability — you must be able to explain decisions made by the system.
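Concretely, "traceability" means the decision and its explanation are produced and stored together, not reconstructed later. A toy linear scorer that keeps a per-feature breakdown alongside the verdict (feature names, weights, and the threshold are invented for illustration):

```python
def score_with_trace(features: dict, weights: dict,
                     threshold: float = 0.5) -> dict:
    """Score a loan application and keep the 'why' with the 'what'."""
    contributions = {name: features[name] * w for name, w in weights.items()}
    score = sum(contributions.values())
    return {
        "approved": score >= threshold,
        "score": score,
        "contributions": contributions,  # persist this with the decision
    }
```

For a real model you would swap the linear breakdown for an attribution method, but the principle holds: if you can't store it, you can't explain it.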
5. AI Generating Marketing Content
Use case:
- generating blog posts
- writing ad copy
Classification:
👉 Minimal to Limited Risk
Dev requirement:
Minimal — unless generating deepfakes (then disclosure/watermarking applies).
🛠️ The Real Risk: Feature Creep
The biggest danger isn’t writing documentation.
It’s this:
Your system can move from Limited Risk to High-Risk with a single merged PR.
A small feature change can completely change your regulatory obligations.
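One way to catch this before it ships is a CI gate that compares enabled feature flags against your last compliance sign-off. A hypothetical sketch (the flag names and the `check_risk_creep` helper are made up for illustration):

```python
# Flags whose features fall under Annex III-style high-risk use cases.
HIGH_RISK_FLAGS = {"auto_refund", "candidate_ranking", "credit_decision"}

def check_risk_creep(enabled_flags: set,
                     approved_category: str = "limited") -> None:
    """Fail the build if enabled flags outgrow the approved risk category."""
    creeping = HIGH_RISK_FLAGS & set(enabled_flags)
    if creeping and approved_category != "high":
        raise RuntimeError(
            f"Flags {sorted(creeping)} imply High Risk, but compliance "
            f"sign-off is for '{approved_category}'"
        )
```

The check is trivial; the discipline of maintaining the flag list is where the real work lives, and it forces the "did this PR change our risk category?" conversation to happen at review time.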
Quick Self-Check
If you’re targeting the EU market, ask yourself:
- Does my system influence hiring decisions?
- Does it impact financial outcomes?
- Does it affect people’s rights or opportunities?
If yes:
👉 You may already be in High-Risk territory.
🧪 A Simple Way to Check
If you're not sure, I built a free developer tool to calculate this instantly:
👉 https://www.complianceradar.dev/ai-act-risk-classification
No signup required.
Final Thought
Most AI products won’t fail because of bad code.
They’ll fail because of misunderstood regulation.
Understand your risk level early — and build with confidence.
💬 What kind of AI features are you building right now?
Drop your use case below and we can try to classify it together.