The EU AI Act entered into force on August 1, 2024. Most obligations apply by August 2, 2026 — including those affecting you if you ship software with any AI component to EU users.
This is not a GDPR repeat. It is more targeted, but the penalties are just as serious: up to €35 million or 7% of global annual turnover for the most severe violations.
Here is what I have learned building a compliance scanner for it.
Who Does It Actually Affect?
The Act applies to any AI system that is:
- Placed on the EU market, or
- Put into service in the EU, or
- Produces output that is used in the EU
That last one is critical. If your SaaS is US-based but EU users interact with your AI features, you are in scope.
The Risk Tier System
Unacceptable Risk (banned)
Social scoring systems, subliminal manipulation tools, and real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions). If your product does any of these: stop.
High Risk
AI in hiring (CV screening, interview analysis), credit scoring, insurance underwriting, or safety-critical systems. Obligations include conformity assessments, technical documentation, human oversight mechanisms, and EU database registration.
Limited Risk — most developers are here
- Chatbots and conversational AI: must disclose that the user is interacting with AI
- Deepfake generation: must label content as AI-generated
- AI-generated text on topics of public interest: disclosure required
Minimal Risk
Most AI features in B2B SaaS tools (spam filters, recommendation engines, image classifiers) fall here. Limited obligations.
Concrete Actions for Developers
1. Add Disclosure at Point of Interaction
If you have a chat widget powered by an LLM, this is mandatory:
// Compliant: disclosure visible before/at start of interaction
function ChatWidget() {
  return (
    <div>
      <p className="text-xs text-gray-500">
        You are chatting with an AI assistant
      </p>
      <ChatInput />
    </div>
  );
}
This must be visible before or at the start of the interaction — not buried in a ToS link.
2. Document Your AI Components
Maintain a registry of every AI model or API your product calls. For each entry, document: what it does, what data it processes, which users it affects, and who the provider is (OpenAI, Anthropic, internal model, etc.). This is the foundation of your technical documentation if you are ever audited.
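A registry like this can live as plain structured data in your repo. Here is a minimal sketch — the field names and the `missingFields` helper are illustrative, not prescribed by the Act:

```javascript
// Sketch of an AI component registry entry (all names are illustrative)
const aiRegistry = [
  {
    component: "support-chat",
    purpose: "Answers customer support questions",
    dataProcessed: ["chat messages", "account tier"],
    affectedUsers: "all logged-in users",
    provider: "OpenAI", // or "Anthropic", "internal", ...
    riskTier: "limited", // unacceptable | high | limited | minimal
  },
];

// Quick completeness check before an audit: every entry needs these fields
const requiredFields = [
  "component",
  "purpose",
  "dataProcessed",
  "affectedUsers",
  "provider",
  "riskTier",
];

function missingFields(entry) {
  return requiredFields.filter((f) => !(f in entry));
}
```

Running `missingFields` over every entry in CI is a cheap way to keep the registry from rotting as features ship.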
3. Human Oversight Mechanisms
For limited-risk and higher applications, users should be able to:
- Know they are interacting with AI
- Opt out of AI-assisted decisions in high-stakes contexts (HR, finance, healthcare)
- Escalate to a human
In practice, this usually means adding a "Talk to a human" option alongside AI-generated responses.
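One lightweight way to wire that in is to attach an escalation action to every AI response and route on it — the shapes below are a hypothetical sketch, not a required API:

```javascript
// Sketch: attach an escalation path to every AI-generated response
function wrapAiResponse(aiText) {
  return {
    text: aiText,
    source: "ai", // surfaced to the user in the UI (disclosure)
    actions: [{ type: "escalate", label: "Talk to a human" }],
  };
}

// Routing: an "escalate" action moves the conversation out of the AI flow
function handleAction(conversation, action) {
  if (action.type === "escalate") {
    return { ...conversation, handler: "human", queue: "support" };
  }
  return conversation;
}
```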
4. Log AI Decisions for High-Risk Applications
// Sketch: wrap every model call in an audit-log entry. `model`, `auditLog`,
// and `hash` stand in for your model client, audit store, and hashing helper.
async function aiDecision(input, userId) {
  const result = await model.predict(input);
  await auditLog.record({
    timestamp: new Date().toISOString(),
    userId,
    inputHash: hash(input), // never log raw PII
    decision: result.output,
    modelVersion: model.version,
    confidence: result.confidence,
  });
  return result;
}
5. Check Prohibited Practices Explicitly
Go through your feature list and verify none of them: exploit psychological vulnerabilities, use subliminal techniques, or infer sensitive attributes (race, political views, sexual orientation) from behavioral data without clear consent.
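That review is easy to skip under deadline pressure, so it helps to encode the questions as an explicit checklist your team answers per feature. A minimal sketch (the check IDs and `isCompliant` helper are my own, not from the Act):

```javascript
// Encode the prohibited-practice questions as an explicit checklist
const prohibitedChecks = [
  {
    id: "exploits-vulnerabilities",
    question: "Does any feature exploit psychological vulnerabilities?",
  },
  {
    id: "subliminal-techniques",
    question: "Does any feature use subliminal techniques?",
  },
  {
    id: "infers-sensitive-attributes",
    question:
      "Does any feature infer race, political views, or sexual orientation " +
      "from behavioral data without clear consent?",
  },
];

// A feature passes only if every check is explicitly answered "no" —
// an unanswered question counts as a failure, not a pass
function isCompliant(answers) {
  return prohibitedChecks.every((c) => answers[c.id] === "no");
}
```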
What Is Actually Enforced Now vs. Later
| Milestone | What applies |
|---|---|
| February 2, 2025 | Prohibited practices rules ✅ |
| August 2, 2025 | GPAI model obligations (OpenAI/Anthropic integrations) ✅ |
| August 2, 2026 | Full high-risk system compliance |
If you integrate OpenAI, Anthropic, or similar APIs into a product with EU users, the GPAI obligations already apply to your supply chain.
Automating the Checks
Manually auditing this takes hours. I built CompliPilot to automate the first pass: it scans your application against 200+ EU AI Act compliance checks — disclosure requirements, prohibited practice patterns, logging gaps, documentation completeness. It catches the obvious gaps in minutes instead of days.
Summary
| If you... | Minimum obligation |
|---|---|
| Have a chatbot | Disclose AI at point of interaction |
| Integrate a GPAI model (GPT, Claude, etc.) | Track, document, apply provider guidelines |
| Use AI-based scoring | Add human oversight and logging |
| Use AI for hiring/credit/healthcare | Full high-risk conformity assessment |
The teams documenting their AI systems now will have a huge advantage over those scrambling to comply in 2026.
Are you already thinking about EU AI Act compliance? What is the trickiest part for your use case? Drop a comment below.