# The EU AI Act 2026 Cheat Sheet for Developers
If you ship an LLM feature, a recommender, a recruitment filter, or any AI scoring system to EU users, the EU AI Act now applies to you. The prohibitions have applied since February 2025 and the general-purpose AI rules since August 2025; the bulk of the obligations enter into force on 2 August 2026, and high-risk systems embedded in regulated products (Annex I) have until 2 August 2027. This is what developers actually need to know, without lawyer fees.
## The four-tier risk pyramid (and where your code lives)
The Act sorts AI systems into four tiers:
| Tier | Examples | Your obligation |
|---|---|---|
| Unacceptable | Social scoring, manipulative subliminal AI, predictive policing | Banned. Period. |
| High-risk | Recruitment AI, credit scoring, education grading, biometric ID, critical infrastructure | Conformity assessment, data governance, human oversight, post-market monitoring |
| Limited risk | Chatbots, deepfakes, generative content | Transparency: tell users they're interacting with AI |
| Minimal risk | Spam filters, AI photo enhancement, video games | No specific obligations |
If you're building a SaaS, the realistic categories are Limited and High-risk, and most developers misclassify their systems. A "career suggestion engine" is high-risk. A "résumé score" is high-risk. A chatbot that recommends financial products may be high-risk depending on use.
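As a first-pass triage, you can encode the Annex III areas as data and sort your features mechanically. A minimal sketch in TypeScript; the domain strings paraphrase Annex III, the helper is illustrative, and none of this is legal advice:

```typescript
// First-pass triage: map a feature's domain onto the Act's tiers.
// Domain strings paraphrase Annex III; this is a sorting aid, not legal advice.
type RiskTier = "high" | "limited" | "minimal";

const HIGH_RISK_DOMAINS = new Set([
  "employment",              // recruitment filters, résumé scoring, promotion
  "education",               // grading, admissions, proctoring
  "credit",                  // creditworthiness and credit scoring
  "biometrics",              // biometric identification or categorisation
  "critical-infrastructure", // safety components for utilities, traffic
  "essential-services",      // access to public benefits, emergency dispatch
  "law-enforcement",
  "migration",
  "justice",
]);

// Prohibited practices (Article 5) need a separate human check first;
// this helper only separates high-risk from transparency-only features.
function classifyFeature(domain: string, interactsWithHumans: boolean): RiskTier {
  if (HIGH_RISK_DOMAINS.has(domain)) return "high";
  if (interactsWithHumans) return "limited"; // Article 50 transparency duties
  return "minimal";
}

console.log(classifyFeature("employment", false));  // "high" (the résumé score)
console.log(classifyFeature("support-chat", true)); // "limited" (the chatbot)
```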
## The fines are not theoretical
- Prohibited AI: up to €35M or 7% of global turnover
- Non-compliance with high-risk obligations: up to €15M or 3%
- Misleading info to authorities: up to €7.5M or 1%
Member states started naming national authorities in late 2025. Spain's AESIA is already accepting complaints. Germany's BSI published its enforcement playbook in February 2026.
## Article 50 (transparency) — the rule everyone underestimates
If your product:
- generates synthetic text that looks like journalism, advice, or research → must be marked as AI-generated in machine-readable form
- includes a chatbot that interacts with humans → must disclose it's an AI, unless obvious from context
- produces deepfakes → must label them, with limited art/satire exceptions
For developers this means: a `<meta name="ai-generated" content="true">` tag on AI-generated pages, plus an in-UI disclosure. C2PA content credentials are the de facto standard for media labeling.
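A minimal sketch of what that looks like in practice, assuming server-rendered HTML; `withAiDisclosure` and the banner markup are illustrative, and the meta tag name is the convention suggested above rather than a formal standard:

```typescript
// Minimal sketch: inject a machine-readable marker and an in-UI disclosure
// into server-rendered HTML before it goes out the door.
// NOTE: the meta tag name is a convention, not a standard; for images and
// video, C2PA content credentials are the better-established route.
function withAiDisclosure(html: string, featureName: string): string {
  const metaTag = `<meta name="ai-generated" content="true">`;
  const banner = `<p class="ai-disclosure">This ${featureName} was generated with AI.</p>`;
  return html
    .replace("</head>", `${metaTag}\n</head>`)
    .replace("<body>", `<body>\n${banner}`);
}

// Usage: wrap the render of any route that serves AI-generated output.
const page = withAiDisclosure(
  "<html><head><title>Report</title></head><body>Summary text</body></html>",
  "market summary"
);
console.log(page.includes(`name="ai-generated"`)); // true
```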
## The 30-minute audit you can run today
Open a notebook and answer these for each AI feature you ship:
- Risk classification — is it high-risk per Annex III? (employment, education, credit, biometrics, critical infra)
- Data governance — do you log training data provenance? Bias testing? Representativity checks?
- Human oversight — can a human override? Is there an escalation path?
- Post-market monitoring — do you log AI decisions, drift, and accuracy degradation? (see the logging sketch after this list)
- Technical documentation — Annex IV requires a system description, design choices, dataset summary, oversight measures
- Transparency — do users know they're interacting with AI? Are AI-generated outputs labeled?
- Provider vs deployer — are you the provider (you built it) or deployer (you use it)? Both have different obligations
- GPAI obligations — if you fine-tune Llama or Mistral, you may become the provider of a modified general-purpose model; merely calling OpenAI's API typically makes you a deployer, with lighter duties
- Accessibility crossover — high-risk AI used by public sector also triggers EAA + Web Accessibility Directive
- Incident reporting — serious incidents must be reported to your national authority within 15 days of becoming aware (shorter deadlines apply, e.g. 10 days for a death)
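On the post-market monitoring point, the cheapest thing that satisfies "do you log AI decisions" is an append-only log you never rewrite. A minimal sketch; the field names and file layout are my own, not anything prescribed by the Act:

```typescript
// Append-only decision log: one JSON line per AI decision, enough to answer
// "what did the model see, what did it say, did a human intervene?"
// Field names and file layout are illustrative, not prescribed by the Act.
import { appendFileSync } from "node:fs";
import { createHash } from "node:crypto";

interface AiDecisionRecord {
  timestamp: string;
  feature: string;        // which AI feature produced the decision
  model: string;          // model/provider and version actually called
  inputHash: string;      // hash instead of raw input: avoids logging personal data
  output: unknown;        // the decision as returned to the user
  humanOverride: boolean; // was the output reviewed or overridden?
}

function logAiDecision(record: Omit<AiDecisionRecord, "timestamp">): void {
  const line = JSON.stringify({ timestamp: new Date().toISOString(), ...record });
  appendFileSync("ai-decisions.jsonl", line + "\n"); // append-only, never rewrite
}

// Example: record a résumé-scoring decision (model name is a placeholder).
logAiDecision({
  feature: "resume-score",
  model: "example-provider/example-model-v1",
  inputHash: createHash("sha256").update("candidate profile text").digest("hex"),
  output: { score: 0.72 },
  humanOverride: false,
});
```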
## A starter compliance template (copy/paste)
Add this to your `/ai-transparency` page:

```markdown
## AI Transparency Statement

This product uses AI in the following ways:

- [Feature] — [Purpose] — [Model/Provider] — [Risk category]

Data we send to AI providers: [list]
Data we DO NOT send: [list]
Human oversight: [how humans review AI outputs]
Your rights: opt-out, deletion, explanation (Art. 86 right to explanation for high-risk)
Contact for AI-related questions: [email]
Last updated: [date]
```
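One way to keep that page honest is to store the statement as structured data and render it, so the same entries can feed the audit above. A sketch; the `AiFeatureEntry` shape and the example entry are my own invention, not anything the Act mandates:

```typescript
// Keep the statement as data so the page and your audit tooling share one
// source of truth. The AiFeatureEntry shape and sample entry are illustrative.
interface AiFeatureEntry {
  feature: string;
  purpose: string;
  modelProvider: string;
  riskCategory: "high" | "limited" | "minimal";
}

const entries: AiFeatureEntry[] = [
  {
    feature: "Support chatbot",
    purpose: "Answer billing questions",
    modelProvider: "OpenAI GPT-4o",
    riskCategory: "limited",
  },
];

function renderStatement(items: AiFeatureEntry[]): string {
  const rows = items
    .map((e) => `- ${e.feature} — ${e.purpose} — ${e.modelProvider} — ${e.riskCategory}`)
    .join("\n");
  const today = new Date().toISOString().slice(0, 10);
  return [
    "## AI Transparency Statement",
    "",
    "This product uses AI in the following ways:",
    rows,
    "",
    `Last updated: ${today}`,
  ].join("\n");
}

console.log(renderStatement(entries));
```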
Most enforcement will start with companies that have no transparency page at all. Having one — even imperfect — is a 10x improvement.
## What I built (and why)
After auditing five SaaS products in our own portfolio, the recurring failures were always the same: nobody had an Article 50 disclosure, nobody had classified themselves as provider or deployer, and nobody logged AI decisions for post-market monitoring.
We built CompliPilot to run those checks automatically. It crawls your public pages, looks for AI-disclosure markers, classifies your features against Annex III, and outputs a report covering 200+ checks. There's a free tier (no card required) if you want to try it on your stack: complipilot.dev.
Whether you use our tool or not, the timeline is real and the fines are real. The 30-minute audit above is the minimum you should do this quarter.
Antonio Altomonte builds compliance and accessibility tooling at DevToolsmith. Find more on the DevToolsmith portfolio.