If you're building an AI wrapper, a chatbot, or integrating LLMs into your SaaS right now, you've probably heard about the EU AI Act.
It's the world's first comprehensive AI regulation.
And yes, it applies to you even if your servers are in the US but your users are in the EU.
The problem?
Most developers and indie hackers have no idea where their product falls under this law.
We're used to writing code, not reading 300-page legal documents.
But getting this wrong can become an expensive mistake.
So here's a simple, developer-friendly breakdown of how to classify your AI product.
🔺 The Risk Pyramid: Where Do You Belong?
The EU AI Act doesn't care about how your AI works.
It doesn't matter if you're using:
- GPT-4
- Claude
- Llama
- a custom model
What the law actually cares about is the use case.
In other words:
What is your AI being used for?
The Act divides AI systems into four categories.
🚫 Prohibited AI (Banned)
These systems are outright illegal.
Examples include:
- social scoring systems
- biometric manipulation
- AI exploiting vulnerable populations
If you are building something in this category…
You should probably stop.
⚠️ High-Risk AI (Heavy Compliance)
This is where things get serious.
Typical examples include AI used for:
- CV screening or hiring decisions
- credit scoring
- medical devices
- law enforcement
If your system falls here, compliance becomes a serious engineering task.
📋 Limited Risk (Transparency Rules)
This category includes things like:
- chatbots
- deepfake generators
- emotion recognition systems
The main requirement here is simple:
Users must clearly know they are interacting with AI.
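In code, that transparency rule can be as simple as a disclosure wrapper around your bot's replies. This is a hypothetical sketch, not legal wording — the `AI_DISCLOSURE` text and the `wrap_reply` helper are my own illustrative names:

```python
# Hypothetical sketch: ensure every conversation starts with an AI disclosure.
# The disclosure text below is illustrative, not vetted legal language.

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def wrap_reply(reply: str, first_message: bool) -> str:
    """Prepend the AI disclosure to the first reply in a conversation."""
    if first_message:
        return f"{AI_DISCLOSURE}\n\n{reply}"
    return reply

print(wrap_reply("Hi! How can I help?", first_message=True))
```

Whether you put the disclosure in the chat stream or in a persistent UI banner is a product decision; the point is that it must be unmissable.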
✅ Minimal Risk (Almost No Restrictions)
This is where most everyday AI lives.
Examples include:
- spam filters
- AI features in video games
- recommendation engines
- inventory prediction
If your system falls here, you're mostly free to build without heavy regulatory overhead.
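The four tiers above boil down to a lookup on your use case, not your model. Here's a deliberately naive Python sketch of that logic — the keyword sets are illustrative examples from this post, not a legal classification:

```python
# Naive self-check: map a declared use case to one of the Act's four tiers.
# The keyword sets are illustrative, NOT an exhaustive or legal mapping.

PROHIBITED = {"social scoring", "biometric manipulation"}
HIGH_RISK = {"cv screening", "credit scoring", "medical device", "law enforcement"}
LIMITED_RISK = {"chatbot", "deepfake generation", "emotion recognition"}

def risk_tier(use_case: str) -> str:
    """Return a rough EU AI Act risk tier for a use-case label."""
    use_case = use_case.lower().strip()
    if use_case in PROHIBITED:
        return "prohibited"
    if use_case in HIGH_RISK:
        return "high"
    if use_case in LIMITED_RISK:
        return "limited"
    return "minimal"

print(risk_tier("credit scoring"))  # high
print(risk_tier("spam filter"))     # minimal
```

Notice that the model name appears nowhere in this function — exactly the point the Act makes.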
⚙️ Why Risk Classification Matters (For Your Codebase)
If your app falls into the High-Risk category, things change dramatically.
You can't just deploy to production and move on.
The EU AI Act requires several architectural and operational changes.
Risk Management System
You must continuously test for:
- bias
- errors
- unintended outcomes
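One concrete shape such a test can take is an automated selection-rate comparison between groups. This is a minimal sketch under my own assumptions — the threshold and group data are made up, and real bias auditing is far more involved:

```python
# Illustrative bias check: compare positive-decision rates across two groups
# and flag a large gap. The 0.2 threshold is an assumption, not a legal rule.

def selection_rate(decisions: list[bool]) -> float:
    """Fraction of positive decisions in a group."""
    return sum(decisions) / len(decisions)

def bias_gap(group_a: list[bool], group_b: list[bool]) -> float:
    """Absolute difference in positive-decision rates between two groups."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

group_a = [True, True, False, True]    # 75% positive
group_b = [True, False, False, False]  # 25% positive
gap = bias_gap(group_a, group_b)
print(f"gap = {gap:.2f}")
if gap > 0.2:  # illustrative threshold
    print("flag for human review")
```

A check like this would run in CI or on a schedule, so drift gets caught before it reaches users.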
Human Oversight
Your UI/UX must allow a human to intervene.
That could mean:
- override mechanisms
- manual review
- emergency shutdown features
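In practice, a common pattern is routing low-confidence decisions to a manual review queue instead of applying them automatically. A minimal sketch, assuming a confidence score is available and using made-up names:

```python
# Sketch of a human-override hook: low-confidence AI decisions are queued
# for manual review instead of being applied. Names and threshold assumed.

review_queue: list[dict] = []

def apply_decision(decision: str, confidence: float, threshold: float = 0.9) -> str:
    """Auto-apply confident decisions; queue the rest for a human."""
    if confidence < threshold:
        review_queue.append({"decision": decision, "confidence": confidence})
        return "queued_for_human_review"
    return decision

print(apply_decision("approve", 0.95))  # approve
print(apply_decision("reject", 0.60))   # queued_for_human_review
```

The key design choice is that the human path is the default for anything uncertain — the AI has to earn the right to act alone.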
Logging & Traceability
Your infrastructure must keep clear logs of AI decisions.
Think:
- inputs
- outputs
- model behavior
- timestamps
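A simple way to capture all four is one structured log line per decision. Here's a hedged sketch — the field names are my own assumptions, not a schema from the Act:

```python
# Minimal sketch of an AI decision log entry: inputs, output, model name,
# and a UTC timestamp, serialized as one JSON line. Field names assumed.

import json
from datetime import datetime, timezone

def log_decision(inputs: dict, output: str, model: str) -> str:
    """Build one JSON log line for an AI decision."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "inputs": inputs,
        "output": output,
    }
    return json.dumps(entry)

line = log_decision({"applicant_id": "a-123"}, "shortlist", "gpt-4")
print(line)
```

Append-only JSON lines are easy to ship to whatever log store you already use, and easy to replay when someone asks why the system made a given call.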
Technical Documentation (Annex IV)
This is the scary one.
You must produce formal documentation describing your entire system.
Things like:
- system architecture
- training data description
- evaluation procedures
- risk management methods
🧩 The Hard Part: Understanding Annex III
Ironically, the hardest part of compliance isn't writing documentation.
It's figuring out whether your system is high-risk in the first place.
The rules (especially Annex III of the AI Act) are complicated.
For example:
A simple HR chatbot might be Limited Risk if it only answers FAQs.
But the moment it filters candidates based on resumes, it suddenly becomes High-Risk AI.
The boundary is extremely blurry.
🛠️ A Simple Self-Assessment Tool
I recently ran into this problem while auditing my own AI projects.
After spending hours reading legal documents, I got frustrated.
So I did what most developers would do.
I built a tool to automate the classification.
It's a simple EU AI Act Risk Classification Quiz.
It takes about two minutes and, based on your use case, estimates which risk tier your AI system falls into.
👉 Check your AI app's risk level here:
https://www.complianceradar.dev/ai-act-risk-classification
💡 Final Thought
When it comes to the EU AI Act, ignorance won't help you.
But the real difficulty isn't compliance itself.
It's understanding where your product sits on the risk pyramid.
Once you know your classification, the path forward becomes much clearer.
And then you can go back to doing what you do best:
building and shipping software.
💬 What kind of AI app are you building right now?
Drop your use-case in the comments and let's figure out your risk tier together.