The EU AI Act's compliance deadline for most high-risk AI systems is August 2, 2026. That's roughly four months away.
If you're running an SME that uses AI — chatbots, recommendation engines, hiring tools, content moderation, credit scoring — this regulation applies to you. Not just to big tech. Not just to EU-based companies. Anyone serving EU customers.
This guide cuts through the legal jargon and tells you exactly what to do.
Who Is Actually Affected?
The AI Act uses a risk-based classification:
Unacceptable Risk (Banned)
- Social scoring (the ban covers public and private actors alike)
- Real-time biometric surveillance in public spaces
- Manipulation of vulnerable groups
If you're doing any of these, stop. No compliance checklist will help.
High Risk (Strict Requirements — August 2026 deadline)
- HR/Recruitment: AI screening resumes, ranking candidates
- Credit scoring: AI assessing creditworthiness
- Education: AI grading students, determining access
- Healthcare: AI assisting diagnosis or treatment
- Law enforcement: AI in predictive policing, evidence evaluation
- Critical infrastructure: AI managing energy, water, transport
Limited Risk (Transparency obligations)
- Chatbots: Must disclose they're AI
- Deepfakes: Must be labeled as AI-generated
- Emotion recognition: Must inform the person exposed (and it's banned outright in workplaces and schools)
Minimal Risk (No obligations)
- Spam filters, AI in video games, inventory management
Most SMEs fall into Limited or High Risk. If you have a customer-facing chatbot, you're at minimum in the "limited risk" category with transparency obligations.
The 5 Key Obligations for SMEs
1. Risk Classification
First, classify every AI system you operate or deploy:
- What does it do? (chatbot, recommendation, scoring, generation)
- Who does it affect? (employees, customers, general public)
- What's the impact? (convenience vs. life-changing decisions)
Document this. Regulators will ask.
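The Act doesn't prescribe a record format, so here's a minimal sketch of what a machine-readable classification entry could look like. The schema and field names are hypothetical — adapt them to whatever your regulator or auditor expects:

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical schema: one record per AI system, answering the
# three questions above (what, who, impact) plus the risk tier.
@dataclass
class AISystemRecord:
    name: str
    function: str               # what it does: chatbot, scoring, generation...
    affected_parties: list[str] # who it affects: employees, customers, public
    impact: str                 # convenience vs. life-changing decisions
    risk_level: str             # unacceptable / high / limited / minimal

record = AISystemRecord(
    name="resume-screener",
    function="ranks job candidates",
    affected_parties=["job applicants"],
    impact="life-changing (hiring decisions)",
    risk_level="high",  # HR/recruitment AI is explicitly high-risk
)

# Serialize for your compliance file -- this is what you show regulators.
print(json.dumps(asdict(record), indent=2))
```

A structured record like this also makes the later gap analysis trivial to script.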
2. Technical Documentation
For high-risk systems, you need:
- Description of the system's purpose and intended use
- Design specifications and development methodology
- Training data: sources, preparation, known biases
- Performance metrics and testing results
- Risk assessment and mitigation measures
For limited-risk systems, lighter documentation suffices — but "we use ChatGPT" is not documentation.
3. Human Oversight
High-risk AI must have:
- A human who can understand the system's outputs
- The ability to override AI decisions
- Clear escalation procedures
- Logging of AI decisions for review
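A sketch of what those four requirements can look like in code — a decision log where low-confidence outputs escalate to a human and any decision can be overridden with an audit trail. The threshold and field names are illustrative assumptions, not anything the Act specifies:

```python
import datetime

class OversightLog:
    """Minimal decision log with human override -- illustrative only."""

    def __init__(self, escalation_threshold=0.7):
        self.entries = []
        self.escalation_threshold = escalation_threshold

    def record(self, system, decision, confidence):
        # Every AI decision is logged for later review.
        entry = {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "system": system,
            "decision": decision,
            "confidence": confidence,
            "overridden": False,
            # Example escalation rule: low confidence goes to a human.
            "escalated": confidence < self.escalation_threshold,
        }
        self.entries.append(entry)
        return entry

    def override(self, index, human_decision, reviewer):
        # A human can replace any AI decision; the original stays for audit.
        entry = self.entries[index]
        entry.update(overridden=True, human_decision=human_decision,
                     reviewer=reviewer)
        return entry

log = OversightLog()
log.record("credit-scorer", "reject", confidence=0.62)   # below 0.7: escalated
log.override(0, "approve", reviewer="analyst@example.com")
```

Note the design: the AI's original output is never deleted, only annotated — that's the audit trail a regulator will want to see.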
This doesn't mean a human reviews every decision. It means a human can intervene when needed.
4. Transparency
All AI systems interacting with people must:
- Clearly disclose they're AI (chatbots, virtual assistants)
- Label AI-generated content (images, text, audio)
- Inform users about automated decision-making
- Provide explanations when AI decisions affect individuals
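The chatbot disclosure is the easiest of these to get right in code. A minimal sketch: wrap whatever backend generates replies so the disclosure always precedes the first one (the wrapper and message text are assumptions, not an official API):

```python
AI_DISCLOSURE = "You are chatting with an AI assistant."

def respond(user_message, history, generate):
    """Wrap any chatbot backend so the AI disclosure precedes the first reply.

    `generate` is whatever function produces the bot's answer;
    `history` is the running list of (user, bot) turns.
    """
    reply = generate(user_message)
    if not history:  # first turn of the conversation: prepend the disclosure
        reply = f"{AI_DISCLOSURE}\n\n{reply}"
    history.append((user_message, reply))
    return reply

history = []
first = respond("Hi", history, generate=lambda m: "Hello! How can I help?")
second = respond("Thanks", history, generate=lambda m: "You're welcome.")
```

Putting the disclosure in the wrapper, not in each prompt, means no code path can skip it.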
5. Data Governance
Training data must be:
- Relevant and representative
- Free from known biases (or biases documented and mitigated)
- Compliant with GDPR (you're already doing this, right?)
- Properly versioned and traceable
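"Versioned and traceable" can be as simple as content-hashing each dataset snapshot together with its provenance metadata. A sketch (the metadata keys are hypothetical):

```python
import hashlib
import json

def dataset_fingerprint(rows, metadata):
    """Derive a stable version id from dataset content plus provenance,
    so every trained model can be traced to the exact data it saw."""
    payload = json.dumps({"rows": rows, "metadata": metadata}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()[:12]

rows = [{"age": 34, "income": 52000, "label": "approve"}]
meta = {
    "source": "internal CRM export, 2026-01",
    "known_biases": ["age distribution skews over 30"],
}
v1 = dataset_fingerprint(rows, meta)

rows.append({"age": 23, "income": 31000, "label": "reject"})
v2 = dataset_fingerprint(rows, meta)  # any change yields a new version id
```

Store the fingerprint next to each trained model and you have traceability without any new tooling.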
How to Self-Assess: Step by Step
Step 1: Inventory (Week 1)
List every AI system in your company. Include third-party tools:
- Customer support chatbot (e.g., Intercom with AI)
- Email marketing personalization
- HR screening tools
- Recommendation engines
- Content generation tools
Step 2: Classify (Week 2)
For each system, determine the risk level using the categories above. When in doubt, classify higher.
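The "when in doubt, classify higher" rule is easy to encode: anything you can't place in a known tier defaults to high risk until a human reviews it. The use-case names below are illustrative, not an official taxonomy:

```python
# Illustrative mapping of common SME use cases to AI Act risk tiers.
HIGH_RISK = {"hr_screening", "credit_scoring", "education_grading",
             "medical_diagnosis"}
LIMITED_RISK = {"chatbot", "content_generation", "emotion_recognition"}
MINIMAL_RISK = {"spam_filter", "game_ai", "inventory_management"}

def classify(use_case):
    """Map a use case to a risk tier; unknown cases default to 'high'."""
    if use_case in HIGH_RISK:
        return "high"
    if use_case in LIMITED_RISK:
        return "limited"
    if use_case in MINIMAL_RISK:
        return "minimal"
    return "high"  # when in doubt, classify higher -- review manually

print(classify("chatbot"))
```

The conservative default is the point: misclassifying downward is the expensive mistake.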
Step 3: Gap Analysis (Week 3)
Compare current state with requirements:
- Do you have documentation? ❌/✅
- Is there human oversight? ❌/✅
- Are users informed? ❌/✅
- Is training data documented? ❌/✅
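The four checks above can be scripted against your inventory. A minimal sketch, assuming each system is a dict with boolean compliance flags (the key names are hypothetical):

```python
REQUIREMENTS = ["documentation", "human_oversight",
                "user_transparency", "data_governance"]

def gap_analysis(system):
    """Return which of the four requirements a system is missing."""
    gaps = [req for req in REQUIREMENTS if not system.get(req, False)]
    status = ("compliant" if not gaps
              else f"{len(gaps)} gap(s): {', '.join(gaps)}")
    return gaps, status

chatbot = {"name": "support-bot",
           "documentation": True,
           "user_transparency": True}
gaps, status = gap_analysis(chatbot)
print(status)  # missing flags count as gaps, matching "classify higher"
```

Run it over the whole inventory and you have your remediation backlog for Step 4.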
Step 4: Remediate (Weeks 4-12)
Prioritize by risk level. High-risk systems first. Start with documentation — it's the most time-consuming.
Step 5: Ongoing Monitoring
Compliance isn't a one-time event. You need:
- Regular re-assessment (quarterly)
- Incident reporting procedures
- Updated documentation when systems change
Tools and Resources
Automated assessment: CompliPilot runs 200+ automated checks against EU AI Act requirements. Free tier: 3 assessments/month. It classifies your AI systems, identifies gaps, and generates documentation templates.
Official guidance: The EU AI Office publishes implementation guidelines — dense but authoritative.
Legal counsel: For high-risk systems, get a lawyer. Automated tools handle the checklist; lawyers handle the gray areas.
The Fines Are Real
- Prohibited AI practices: Up to EUR 35M or 7% of global annual turnover
- High-risk non-compliance: Up to EUR 15M or 3% of turnover
- Incorrect information to authorities: Up to EUR 7.5M or 1% of turnover
For SMEs, there are reduced fines — but "reduced" still means potentially business-ending amounts.
The Other EU Regulation You're Probably Ignoring
While you're sorting AI compliance, check your website accessibility. The European Accessibility Act (EAA) has applied since June 28, 2025, with fines up to EUR 300K per violation.
FixMyWeb scans your website for 201 WCAG accessibility issues in 60 seconds. Because getting fined for two EU regulations simultaneously would be embarrassing.
And if your SaaS handles recurring payments, PaymentRescue recovers 30-50% of failed payments automatically — because compliance costs money, and you'll want to plug revenue leaks elsewhere.
Action Plan: April to August 2026
| Month | Action |
|---|---|
| April | Complete AI inventory and risk classification |
| May | Begin documentation for high-risk systems |
| June | Implement transparency measures (chatbot disclosures, content labeling) |
| July | Complete human oversight procedures, test incident reporting |
| August | Final review, submit conformity assessment if required |
Start Now
Four months sounds like a lot until you realize that documentation alone takes 4-6 weeks for a high-risk system.
Run a free compliance check today. You'll know in 5 minutes whether you have a problem — and exactly what to fix.
The companies that start now will be compliant by August. The ones that wait until July will be scrambling. Don't be the latter.
Is your company preparing for the EU AI Act? What's been the biggest challenge? Share in the comments.