I've been building Annexa, a tool that helps AI companies comply with the EU AI Act, for about a month now. Figured I'd share some honest numbers and lessons since this sub appreciates transparency.
The problem: The EU AI Act requires companies deploying high-risk AI systems to produce detailed technical documentation (specified in Annex IV of the Act) by August 2, 2026. Fines go up to €35M or 7% of global revenue, whichever is higher. Most companies I've talked to have no idea where to start.
What I built: A tool that classifies your AI system's risk level (free, no signup), then generates the compliance documentation by analyzing your actual codebase: Python files, YAML configs, JSON schemas. The Pro tier is €49/month.
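For the curious, the "analyze your codebase" step is less magic than it sounds. A minimal sketch of the file-collection stage (function name and suffix list are my own illustration, not Annexa's actual code):

```python
from pathlib import Path

# The artifact types the post mentions feeding into the doc generator.
ANALYZED_SUFFIXES = {".py", ".yaml", ".yml", ".json"}

def collect_artifacts(repo_root: str) -> list[Path]:
    """Walk a codebase and return the code/config files worth summarizing."""
    root = Path(repo_root)
    return sorted(
        p for p in root.rglob("*")
        if p.is_file() and p.suffix in ANALYZED_SUFFIXES
    )
```

The collected files then get chunked and summarized by the LLM before the documentation template is filled in.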
Stack: Streamlit frontend, Groq (Llama 3.3 70B) for the AI layer, Supabase for auth/storage, Stripe for payments. The core package has zero Streamlit dependencies so the business logic is framework-independent.
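Keeping the core package free of Streamlit means the business logic is plain Python that can be unit-tested and reused behind any frontend. A toy illustration of that split (the category list is heavily simplified; the real Annex III list is longer and more nuanced, and these names are my own):

```python
from dataclasses import dataclass

# Pure-logic triage: no UI framework imports anywhere in this module.
# Simplified stand-ins for Annex III high-risk areas, for illustration only.
HIGH_RISK_AREAS = {
    "biometrics", "critical_infrastructure", "employment",
    "education", "law_enforcement", "credit_scoring",
}

@dataclass(frozen=True)
class TriageResult:
    risk_level: str
    rationale: str

def classify(use_case_area: str, is_prohibited: bool = False) -> TriageResult:
    """Map a use-case area to a coarse EU AI Act risk level."""
    if is_prohibited:
        return TriageResult("prohibited", "Falls under Article 5 prohibited practices")
    if use_case_area in HIGH_RISK_AREAS:
        return TriageResult("high", f"'{use_case_area}' matches an Annex III area")
    return TriageResult("minimal", "No high-risk area matched")
```

The Streamlit layer is then just a thin wrapper that collects inputs and renders the `TriageResult`, so swapping to FastAPI or a CLI later costs nothing.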
What's working:
- The free risk triage gets people in the door — zero friction
- SEO is starting to pick up. 6 blog posts targeting "eu ai act" long-tail keywords
- The August 2026 deadline creates real urgency — people are searching for solutions NOW
What I'd do differently:
- Should have started content marketing earlier. SEO takes months to compound
- Underestimated how much time compliance research takes vs actual coding
- The €49/month price point feels right for SMEs but I'm still validating
Biggest lesson: In compliance SaaS, trust is everything. I added [LEGAL REVIEW REQUIRED] markers throughout generated documents to be transparent about what the tool is and isn't. Counterintuitively, this honesty seems to build trust rather than hurt conversions.
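Mechanically, the markers are trivial to add at render time. A rough sketch (the section names and logic are hypothetical; only the marker string comes from the post):

```python
# Tag generated sections that must get human legal sign-off before use.
LEGAL_MARKER = "[LEGAL REVIEW REQUIRED]"

# Hypothetical set of sections where the LLM output is only a starting point.
SECTIONS_NEEDING_REVIEW = {"risk_assessment", "intended_purpose", "human_oversight"}

def render_section(name: str, body: str) -> str:
    """Render one documentation section, flagging it for review when needed."""
    header = f"## {name.replace('_', ' ').title()}"
    if name in SECTIONS_NEEDING_REVIEW:
        header += f" {LEGAL_MARKER}"
    return f"{header}\n\n{body}"
```

The point isn't the code, it's the product decision: making the tool's limits visible in the artifact itself instead of burying them in the ToS.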
Happy to answer questions about the EU AI Act, compliance tooling, or the technical architecture.
Link: https://annexa.eu