DEV Community

Ari Volcoff
The EU AI Act Kicks In August 2. Here's What Your AI Product Actually Needs to Do

If you're building AI products that touch European users, August 2, 2026 belongs in your calendar. That's when the EU AI Act's high-risk obligations become applicable and enforcement of the GPAI (General Purpose AI) rules begins, with fines of up to €35M or 7% of global turnover for the most serious violations.

I built Complizo to solve this for SMBs, and in the process learned a lot about what compliance actually requires at a technical level. Here's the practical breakdown.

What the EU AI Act actually requires

The Act introduces a risk-based framework. Most AI systems fall into one of four tiers:

Unacceptable risk — banned outright (e.g. social scoring, real-time remote biometric identification in publicly accessible spaces)

High risk — the hard one. Defined in Annex III, this covers 8 categories including:

  • Biometric identification and categorisation
  • Critical infrastructure management
  • Education and vocational training
  • Employment and worker management
  • Access to essential services (credit scoring, insurance)
  • Law enforcement
  • Migration and asylum management
  • Administration of justice

If your system falls here, you need: a conformity assessment, a risk management system, data governance documentation, technical documentation, human oversight mechanisms, and registration in the EU database.

Limited risk — transparency obligations only. Chatbots must disclose they're AI. Deepfakes must be labelled.

Minimal risk — no obligations. Spam filters, AI in video games, etc.
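Of the four tiers above, the limited-risk transparency duty is the one you can wire directly into a product. Here's a minimal sketch, assuming a turn-based chatbot; the disclosure wording and function name are my own illustration, not anything mandated verbatim by the Act:

```python
# Minimal sketch of a limited-risk transparency disclosure.
# AI_DISCLOSURE and wrap_chatbot_reply are illustrative names;
# the Act requires disclosure, not this exact text or mechanism.

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def wrap_chatbot_reply(reply: str, first_turn: bool) -> str:
    """Prepend the AI disclosure on the first turn of a conversation."""
    if first_turn:
        return f"{AI_DISCLOSURE}\n\n{reply}"
    return reply
```

In practice you'd also surface the disclosure in the UI itself, not only in message text — the point is that it fires before the user can mistake the system for a human.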

The Annex III trap most developers fall into

The trickiest part isn't the clear cases; it's the edge ones. A recruitment screening tool that ranks CVs? Likely high-risk (Article 6(2), via Annex III's employment category). An AI credit-scoring component embedded in a larger fintech app? High-risk. A sentiment analysis tool used in hiring decisions? Probably high-risk too.

The language in the Act is intentionally broad, which means you need to actually reason through your system's purpose, deployment context, and whether a human is "meaningfully" in the loop — not just technically present.
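One way to force yourself through that reasoning per system is a keyword triage over the Annex III areas. Everything below — the category names, the keywords, the matching logic — is my own simplification, a prompt for human analysis rather than a legal determination:

```python
# Hypothetical triage helper: flag Annex III areas whose keywords
# appear in a system's stated purpose. A starting point for the
# documented classification reasoning, NOT a legal verdict.

ANNEX_III_KEYWORDS = {
    "biometric identification and categorisation": ["biometric", "face recognition"],
    "critical infrastructure management": ["power grid", "water supply", "traffic control"],
    "education and vocational training": ["exam scoring", "admissions", "student assessment"],
    "employment and worker management": ["cv", "recruitment", "hiring", "promotion"],
    "access to essential services": ["credit scoring", "insurance pricing"],
    "law enforcement": ["crime", "policing"],
    "migration and asylum management": ["visa", "asylum"],
    "administration of justice": ["court ruling", "judicial"],
}

def annex_iii_flags(purpose: str) -> list[str]:
    """Return the Annex III areas whose keywords appear in the purpose text."""
    text = purpose.lower()
    return [area for area, words in ANNEX_III_KEYWORDS.items()
            if any(w in text for w in words)]
```

Any non-empty result means a human should sit down and reason through deployment context and oversight before signing off on a classification.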

What documentation you actually need to produce

For high-risk systems, the minimum viable compliance package looks like this:

  1. AI system inventory — register every AI system you deploy, with purpose, input/output data types, intended users
  2. Risk classification decision — documented reasoning for why you landed at your classification (or why you're not high-risk)
  3. Technical documentation — architecture, training data description, performance metrics, known limitations
  4. Data governance policy — how training and input data is sourced, validated, and monitored for bias
  5. Human oversight protocol — how a human can intervene, override, or stop the system
  6. Conformity assessment — either self-assessed (for most) or third-party (for certain biometric/law enforcement use cases)
  7. Audit-ready export — all of the above in a format you can hand to a regulator
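Items 1, 2, 5 and 7 lend themselves to a concrete data shape. Here's a minimal sketch of one inventory record with an audit-ready export; the field names are mine, not prescribed by the Act:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AISystemRecord:
    """One entry in an AI system inventory.
    Field names are illustrative, not prescribed by the Act."""
    name: str
    purpose: str
    input_data: list[str]
    output_data: list[str]
    intended_users: str
    risk_tier: str                 # "unacceptable" | "high" | "limited" | "minimal"
    classification_rationale: str  # item 2: why you landed at this tier
    human_oversight: str           # item 5: how a human can intervene or override

    def audit_export(self) -> str:
        """Item 7: serialise the record for a regulator-facing export."""
        return json.dumps(asdict(self), indent=2)
```

The useful property is that the rationale and oversight protocol live next to the system they describe, so the audit export is a query, not an archaeology project.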

The GPAI rules (relevant if you're building on top of foundation models)

If you're the provider of a general-purpose AI model (think: your own pretrained or substantially fine-tuned LLM — a thin wrapper around GPT/Claude/Gemini typically makes you a downstream deployer rather than a GPAI provider), you have additional obligations:

  • Publish a sufficiently detailed summary of the content used for training
  • Implement a copyright compliance policy
  • Maintain model documentation and make it available to downstream deployers

For GPAI models with "systemic risk" (above a compute threshold — currently 10^25 FLOPs), there are even stricter requirements including adversarial testing and incident reporting.
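To get a feel for where 10^25 FLOPs sits, a common back-of-the-envelope estimate for training compute is roughly 6 × parameters × training tokens — the "6ND" heuristic from the scaling-laws literature, which is an approximation, not how the Commission measures compute:

```python
# Rough training-compute estimate via the 6*N*D heuristic
# (a scaling-laws approximation, not an official measurement method).

SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs, per the AI Act's GPAI provisions

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens

# e.g. a 70B-parameter model trained on 15T tokens:
flops = training_flops(70e9, 15e12)        # ~6.3e24 FLOPs
over = flops > SYSTEMIC_RISK_THRESHOLD     # under the threshold
```

By this heuristic, today's frontier-scale training runs are the ones that cross the line; most fine-tunes and mid-size models land well below it.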

The Digital Omnibus delay — and why it doesn't mean you can relax

There's been a lot of noise about the Digital Omnibus potentially delaying the high-risk obligations by up to 16 months. Even if that passes, the GPAI rules and the transparency obligations still kick in on August 2, 2026 (and the bans on prohibited practices have applied since February 2025). Building your compliance foundation now means you're not scrambling in 2027.

What I built

After going through this process for my own AI products, I built Complizo — it walks you through each step with guided forms, auto-generates the documentation you need, scores your compliance readiness, and exports audit-ready PDFs. Free for up to 3 AI systems.

It's not a legal service — it's the engineering scaffolding that gets your docs in shape before you talk to a lawyer (and makes that conversation a lot cheaper).

Happy to answer questions about the Act in the comments — this stuff is genuinely confusing and the official guidance is not developer-friendly.
