Rom C
EU AI Act Compliance 2026 — The Developer's Complete Guide to What's Already Enforceable

⚡ TL;DR The EU AI Act has been rolling out since 2024. Prohibited AI practices are already banned. GPAI (foundation model) rules are already live. Full enforcement for high-risk AI hits August 2, 2026 — five months away. Penalties go up to €35M or 7% of global turnover. This guide breaks down everything developers and tech teams need to know, right now.

If you build AI systems, integrate APIs from foundation models, or work on any product that touches users in the EU — the EU AI Act already applies to you.
This isn't a "prepare for next year" post. Two major enforcement waves have already passed. A third arrives in five months. And over half of organisations still don't have a complete AI inventory — which is the most basic compliance step you can take.
This guide covers the complete picture: what's already law, what activates in August 2026, the technical requirements for high-risk AI systems, and what your team needs to do before the deadline.
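Since a complete AI inventory is the baseline compliance step, here is a minimal sketch of what one could look like in code. The record fields, example entries, and the review heuristic are illustrative assumptions of mine, not a format prescribed by the Act:

```python
from dataclasses import dataclass

# Minimal AI inventory entry -- field names are illustrative, not mandated by the Act.
@dataclass
class AISystemRecord:
    name: str                    # internal system name
    vendor: str                  # provider, or "in-house"
    purpose: str                 # what the system does
    risk_tier: str               # "prohibited" | "high" | "limited" | "minimal"
    affects_eu_users: bool       # triggers EU AI Act scope
    docs_reviewed: bool = False  # provider compliance docs on file?

# Hypothetical example entries.
inventory = [
    AISystemRecord("resume-screener", "in-house", "ranks job applicants",
                   "high", affects_eu_users=True),
    AISystemRecord("support-chatbot", "OpenAI API", "customer support chat",
                   "limited", affects_eu_users=True),
]

# Flag what needs attention before August 2026.
needs_review = [r.name for r in inventory
                if r.affects_eu_users
                and r.risk_tier in ("high", "limited")
                and not r.docs_reviewed]
```

Even a spreadsheet works; the point is that every AI system in your stack has an owner, a risk tier, and a documented review status.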

What Is the EU AI Act?

The EU AI Act is the world's first comprehensive legal framework regulating artificial intelligence. In force since August 2024, it classifies every AI system by risk level and assigns compliance obligations accordingly.

It applies to:

  • Developers building AI systems used by EU residents
  • Companies deploying AI tools that affect people in the EU
  • Businesses providing AI-powered products or services to European clients
  • Anyone integrating third-party AI APIs into products used by EU users

The extraterritorial reach matters. You do not need a European office to fall within scope. If your AI system affects EU residents — whether you're in India, the US, or anywhere else — the law applies to you.

The Enforcement Timeline in 2026

Here's where every milestone stands as of March 2026:

Aug 2024 — AI Act enters into force
Feb 2025 — Prohibited practices banned + AI literacy required
Aug 2025 — GPAI model rules + EU AI Office active
Mar 2026 — Digital Omnibus under negotiation
Aug 2, 2026 — Full enforcement: High-risk AI + Article 50
Aug 2027 — High-risk AI in regulated products

⚠️ Note on the Digital Omnibus: The Commission's Omnibus proposal (Nov 2025) could push certain Annex III deadlines to December 2027 — but only if harmonised technical standards remain unavailable, and only if the proposal passes. It is still under negotiation. Treat August 2026 as the binding deadline.

The Four Risk Tiers Explained

Every AI system falls into one of four tiers. Your obligations depend entirely on which tier your system sits in.

🚫 Tier 1 — Unacceptable Risk (Prohibited)
These are banned outright. There is no compliance path and, outside narrow law-enforcement carve-outs, no exceptions.
Banned since February 2025.

⚠️ Tier 2 — High Risk (Strict Requirements)
AI systems used in:

  • Hiring, worker management, and HR decisions
  • Credit scoring and access to financial services
  • Healthcare triage and patient management
  • Education and vocational training
  • Critical infrastructure
  • Law enforcement, migration, border control
  • Administration of justice

Full compliance is required by August 2, 2026. This is the tier most enterprise and SaaS developers need to focus on.

ℹ️ Tier 3 — Limited Risk (Transparency Obligations)
Chatbots, deepfake generators, AI-generated content tools. Must disclose AI interaction to users and label AI-generated content.
Enforceable from August 2, 2026.

✅ Tier 4 — Minimal Risk (Largely Unregulated)
Spam filters, recommendation engines, video game AI. Minimal obligations — but all other applicable EU law still applies.
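The tier logic above can be sketched as a first-pass lookup. The category strings and the mapping are simplified assumptions for illustration only; actual classification depends on the Act's annexes and needs legal review:

```python
# Simplified tier lookup -- categories are illustrative shorthand;
# real classification depends on Annex III and legal interpretation.
PROHIBITED = {"social_scoring", "emotion_recognition_at_work",
              "untargeted_face_scraping"}
HIGH_RISK = {"hiring", "credit_scoring", "healthcare_triage", "education",
             "critical_infrastructure", "law_enforcement", "justice"}
LIMITED_RISK = {"chatbot", "deepfake_generation", "ai_content_generation"}

def risk_tier(use_case: str) -> str:
    """Map a use-case category to its EU AI Act risk tier (first pass only)."""
    if use_case in PROHIBITED:
        return "prohibited"
    if use_case in HIGH_RISK:
        return "high"
    if use_case in LIMITED_RISK:
        return "limited"
    return "minimal"
```

A triage function like this is useful for flagging systems during an inventory sweep, not for making the final legal determination.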

What’s Already Banned — Since February 2025

🚨 Enforceable right now: These are not upcoming rules. Every day of non-compliance is a day of live legal exposure. These prohibitions have been active for over a year.

1. Social Scoring Systems
AI that evaluates or classifies individuals based on their behaviour and causes detrimental treatment — applies to private companies, not just governments.

2. Subliminal or Manipulative AI
Systems that exploit psychological techniques to distort users’ decision-making in ways that cause harm — without their awareness or consent.

3. Exploitation of Vulnerable Groups
AI that specifically targets children, elderly people, or people with disabilities based on their vulnerabilities to influence behaviour or decisions.

4. Untargeted Facial Image Scraping
Scraping facial images from the internet or CCTV footage at scale to build or expand biometric recognition databases.

5. Biometric Categorisation by Protected Characteristics
AI that infers or predicts a person’s race, political opinions, religion, trade union membership, sexual orientation, or nationality from biometric data.

6. Real-Time Biometric Surveillance in Public Spaces
Live identification of individuals in publicly accessible areas. Only narrow, strictly defined law-enforcement exceptions apply.

7. Predictive Policing Based Purely on Profiling
Assessing an individual’s likelihood of criminal behaviour based solely on profiling — without individual behavioural assessment or verified facts.

8. Emotion Recognition at Work and in Education
Systems that detect or infer emotions of employees or students in workplace or educational settings, outside of medically or safety-justified contexts.

Check your vendors too: If any of your third-party AI tools or data providers operate systems touching these categories, your organisation may share liability. Vendor audits are not optional.

The GPAI Rules — What Foundation Model Users Must Know

If your product or service is built on a foundation model — GPT-4o, Gemini, Claude, Mistral, Llama, or similar — you are a downstream deployer under GPAI rules that have been active since August 2025.

What Your Provider Must Give You

  • Transparency about the model’s capabilities and limitations
  • Information about training data sources and methodology
  • Documentation of known risks and mitigation measures

What You Must Ensure

  • Copyright compliance policies for AI-generated outputs
  • Appropriate transparency to your end users about AI interactions
  • Contractual agreements with your AI provider that reflect GPAI obligations
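The second obligation, user-facing transparency, can be as simple as attaching a disclosure to every AI-generated reply. A minimal sketch, assuming a generic chat backend; the payload shape, function name, and wording are mine, not taken from the Act:

```python
# Illustrative disclosure text -- the Act requires disclosure, not this exact wording.
AI_DISCLOSURE = "You are interacting with an AI system. Responses are AI-generated."

def wrap_ai_response(text: str, first_message: bool) -> dict:
    """Attach an Article 50-style disclosure to a chatbot reply.

    The payload shape is a hypothetical example for a chat API.
    """
    payload = {"content": text, "ai_generated": True}
    if first_message:
        # Disclose at the start of the interaction, when users first engage.
        payload["disclosure"] = AI_DISCLOSURE
    return payload
```

Baking the disclosure into the response layer means no individual feature team can forget it.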

For Systemic-Risk Models (10²⁵ FLOPs+)

  • Additional risk assessments are required upstream by the provider
  • Your contracts with the model provider must explicitly reflect these obligations
  • Request evidence of their systemic-risk assessment before continuing deployment

Developer Checklist for GPAI
☐ Request your provider’s EU AI Act compliance documentation
☐ Confirm whether the model is classified as systemic-risk
☐ Update your Terms of Service to disclose AI usage
☐ Ensure user-facing disclosures reflect AI-generated content
☐ Review your data processing agreements for GPAI clauses
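The checklist above can also live in code, e.g. as a pre-release gate in CI so nothing ships while items are open. A minimal sketch; the keys simply restate the list, and the status values are hypothetical:

```python
# GPAI readiness checklist -- items mirror the list above; statuses are examples.
gpai_checklist = {
    "provider_compliance_docs_requested": True,
    "systemic_risk_classification_confirmed": True,
    "terms_of_service_disclose_ai": False,
    "user_facing_ai_disclosures": True,
    "dpa_reviewed_for_gpai_clauses": False,
}

def outstanding_items(checklist: dict) -> list:
    """Return checklist items still open, for use as a release gate."""
    return [item for item, done in checklist.items() if not done]
```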

Further Reading

We’ve published companion pieces on this topic across platforms — each written for a different level of depth and audience.

📖 Full regulatory breakdown — Medium
The EU AI Act in 2026: From Regulation to Reality — What Every Business Must Know Before August

💼 Executive summary — LinkedIn
5 Months to Full Enforcement: Is Your Business Ready for the EU AI Act?

🌐 Original deep-dive — Our Website
The European AI Act: A New Rulebook for the Age of Algorithms
