The EU AI Act entered into force on 1 August 2024. High-risk AI systems must comply by August 2026. Most guides to the Act are written by lawyers. This one is for the engineers who have to build the systems that generate compliance evidence.
## The engineering problem
Articles 9 through 15 define the technical obligations for high-risk AI systems: risk management (Article 9), data and data governance (Article 10), technical documentation (Article 11), record-keeping and logging (Article 12), transparency (Article 13), human oversight (Article 14), and accuracy, robustness, and cybersecurity (Article 15).
These aren't abstract policy goals. They're engineering requirements that need to be designed, built, tested, and maintained.
## What the guide covers
- Risk classification: Mapping your system's function to Annex III categories with a documented, repeatable process
- Technical documentation (Annex IV): What the documentation must contain and how to make it a first-class development artefact rather than an afterthought
- Audit logging (Article 12): Structured, immutable logging of inputs, inference decisions, confidence scores, human overrides, and configuration changes
- Conformity assessment: The self-assessment process for Annex VI and how documentation, logging, and risk management converge into auditable evidence
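To make the Article 12 logging requirement concrete, here is a minimal sketch of a tamper-evident, append-only audit log in TypeScript. The record fields and class names are illustrative assumptions, not the API of the aiact-audit-log library or wording from the Act: each record's hash covers its content plus the previous record's hash, so any retroactive edit breaks the chain.

```typescript
import { createHash } from "node:crypto";

// Hypothetical record shape — field names are illustrative.
interface AuditRecord {
  timestamp: string;
  eventType: "inference" | "human_override" | "config_change";
  input: unknown;
  decision?: string;
  confidence?: number;
  prevHash: string; // hash of the previous record, forming the chain
  hash?: string;    // SHA-256 over this record (minus `hash` itself)
}

class AuditLog {
  private records: AuditRecord[] = [];

  private digest(r: AuditRecord): string {
    // `hash: undefined` is dropped by JSON.stringify, so the hash
    // never covers itself.
    return createHash("sha256")
      .update(JSON.stringify({ ...r, hash: undefined }))
      .digest("hex");
  }

  append(entry: Omit<AuditRecord, "prevHash" | "hash">): AuditRecord {
    const prevHash = this.records.length
      ? this.records[this.records.length - 1].hash!
      : "genesis";
    const record: AuditRecord = { ...entry, prevHash };
    record.hash = this.digest(record);
    this.records.push(record);
    return record;
  }

  // Re-derive every hash and check the chain; false if anything was altered.
  verify(): boolean {
    let prev = "genesis";
    return this.records.every((r) => {
      const ok = r.hash === this.digest(r) && r.prevHash === prev;
      prev = r.hash!;
      return ok;
    });
  }
}

const log = new AuditLog();
log.append({
  timestamp: new Date().toISOString(),
  eventType: "inference",
  input: { prompt: "..." },
  decision: "approve",
  confidence: 0.91,
});
log.append({
  timestamp: new Date().toISOString(),
  eventType: "human_override",
  input: null,
  decision: "reject",
});
console.log(log.verify()); // → true
```

In production you would persist records to write-once storage rather than memory, but the hash chain is the property an auditor can actually check.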
We've also open-sourced a reference implementation of Article 12 audit logging for the Vercel AI SDK: aiact-audit-log (https://github.com/systima-ai/aiact-audit-log).
## Read the full guide
The full guide goes deeper into each area with architectural patterns, implementation considerations, and references to the relevant Articles and Annexes:
EU AI Act Engineering Compliance Guide (https://systima.ai/blog/eu-ai-act-engineering-compliance-guide)
The August 2026 deadline is approaching. If your team is deploying AI in regulated domains, the time to build compliance into your architecture is now.