The August 2, 2026 deadline is less than five months away. On that date, the full requirements for high-risk AI systems under the EU AI Act become enforceable, with fines of up to €15 million or 3% of global annual turnover, whichever is higher, for non-compliance.
This checklist covers the core obligations that providers of high-risk AI systems must meet. Use it to audit where you stand and identify what still needs work.
Step 1: Determine if the AI Act applies to you
Before working through the technical requirements, confirm your obligations.
[ ] Classify your AI system. Is it high-risk under Annex III? High-risk domains include biometrics, critical infrastructure, education and vocational training, employment, access to essential private and public services, law enforcement, migration and border control, and the administration of justice and democratic processes.
[ ] Determine your role. Are you a provider (you developed the system or had it developed and placed it on the market under your name) or a deployer (you use the system under your authority)?
[ ] Check territorial scope. The AI Act applies to providers placing systems on the EU market and deployers located in the EU — but also to providers and deployers outside the EU if the output of their system is used in the EU. If you are unsure, this post explains the extraterritorial reach.
[ ] Verify the system is not exempt. Systems used exclusively for military, defence, or national security purposes are excluded. Systems used purely for scientific research and development are also exempt. Most commercial AI systems are not.
Step 2: Risk management system (Article 9)
Article 9 requires a documented risk management system that runs throughout the AI system's entire lifecycle — not a one-time assessment.
[ ] Identify and analyse known and reasonably foreseeable risks that the AI system can pose to health, safety, or fundamental rights.
[ ] Estimate and evaluate risks that may emerge when the system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse.
[ ] Adopt risk management measures to eliminate or reduce identified risks. Where risks cannot be eliminated, implement mitigation measures (technical safeguards, human oversight, user information).
[ ] Test the system to identify the most appropriate risk management measures. Testing must be suitable to the intended purpose and must use independently collected test datasets where appropriate.
[ ] Document the risk management process including decisions made, risks identified, measures adopted, and residual risks accepted with justification (a minimal register sketch follows this list).
[ ] Plan for continuous monitoring and updating of the risk management system throughout the system's lifecycle, not just at deployment.
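The Act does not prescribe a format for this record. One lightweight option is a machine-readable register that you update alongside the system; a minimal sketch in Python, where the field names and three-level scale are our own, not taken from the Act:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class Level(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class RiskEntry:
    """One identified risk, its mitigations, and the accepted residual risk."""
    risk_id: str
    description: str
    affected_interests: list[str]      # health, safety, or fundamental rights at stake
    severity: Level
    likelihood: Level
    mitigations: list[str]             # technical safeguards, oversight, user information
    residual_risk: Level
    justification: str                 # why the residual risk is acceptable
    last_reviewed: date = field(default_factory=date.today)


# Illustrative entry; the contents are invented, not legal advice.
register = [
    RiskEntry(
        risk_id="R-001",
        description="Higher false-negative rate for applicants over 60",
        affected_interests=["non-discrimination"],
        severity=Level.HIGH,
        likelihood=Level.MEDIUM,
        mitigations=["stratified retraining", "human review of borderline scores"],
        residual_risk=Level.LOW,
        justification="Post-mitigation error rates within 1pp across age bands",
    ),
]
```

Because each entry carries a review date, re-running a simple staleness check over the register is an easy way to evidence the continuous monitoring Article 9 expects.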
Step 3: Data governance (Article 10)
Training, validation, and testing datasets must meet specific quality standards.
[ ] Document your data governance practices including design choices, data collection processes, and preparation operations (annotation, labelling, cleaning, enrichment).
[ ] Ensure datasets are relevant, sufficiently representative, and as free of errors as possible in view of the intended purpose of the system.
[ ] Assess data for possible biases that could lead to discriminatory outcomes, particularly regarding the groups of persons on whom the system is intended to be used (see the audit sketch after this list).
[ ] Identify and address data gaps or shortcomings and document how these are mitigated.
[ ] Consider the statistical properties of the data, including where applicable the persons or groups of persons on whom the system is intended to be used.
[ ] Where special categories of personal data are processed (Article 10(5)), ensure this is strictly necessary for bias detection and correction, with appropriate safeguards.
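None of this requires special tooling to get started. A first pass at the representativeness check can be a grouped summary over your training table; a sketch with pandas, where the columns, values, and 20% threshold are all invented for illustration:

```python
import pandas as pd

# Toy stand-in for a training dataset; swap in your real data and columns.
df = pd.DataFrame({
    "age_band": ["18-30", "18-30", "31-60", "31-60", "31-60", "60+"],
    "label":    [1,        0,       1,       1,       0,       1],
})

audit = df.groupby("age_band").agg(
    n_rows=("label", "size"),          # representation per group
    positive_rate=("label", "mean"),   # label balance per group
)
audit["share_of_dataset"] = audit["n_rows"] / len(df)
print(audit)

# Flag under-represented groups for follow-up; the threshold is arbitrary.
UNDER_REPRESENTED = 0.20
for group, row in audit.iterrows():
    if row["share_of_dataset"] < UNDER_REPRESENTED:
        print(f"Review: '{group}' is only {row['share_of_dataset']:.0%} of the data")
```

The printed summary doubles as evidence for your data governance documentation: keep the script and its output alongside the dataset version it was run against.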
Step 4: Technical documentation (Article 11 / Annex IV)
Detailed technical documentation must be drawn up before the system is placed on the market and kept up to date. A simple completeness tracker appears at the end of this step.
[ ] General description of the AI system: intended purpose, developers, version, how the system interacts with hardware and software, applicable regulations.
[ ] Detailed description of system elements and development process: methods and steps taken for development, design specifications, system architecture, computational resources, data processing, training methodologies, key design choices and rationale.
[ ] Monitoring, functioning, and control mechanisms: capabilities and limitations, degrees of accuracy for specific persons or groups, foreseeable unintended outcomes, human oversight measures, technical specifications for input data.
[ ] Risk management documentation: full description of the risk management system implemented per Article 9.
[ ] Description of changes throughout the lifecycle: if the system has been modified, what changed and why.
[ ] Performance metrics and test results: accuracy, robustness, and cybersecurity benchmarks, test logs, validation results.
[ ] Description of the quality management system in place.
This is typically the most time-consuming item on this checklist. Annexa can generate a first draft of your Annex IV technical documentation by analysing your actual codebase — giving you a starting point instead of a blank page.
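Whichever route you take, it helps to track completeness in version control so gaps are visible at a glance. A minimal tracker keyed to the sections above; the keys are this post's paraphrase of Annex IV, not an official schema:

```python
# Annex IV completeness tracker. Keys paraphrase the checklist above.
ANNEX_IV_SECTIONS = {
    "general_description": ["intended_purpose", "versions", "hw_sw_interaction"],
    "development_process": ["methods", "design_specs", "architecture", "training_data"],
    "monitoring_and_control": ["capabilities_limits", "accuracy_by_group", "oversight"],
    "risk_management": ["article_9_summary"],
    "lifecycle_changes": ["modification_log"],
    "performance": ["accuracy", "robustness", "cybersecurity", "test_logs"],
    "quality_management": ["qms_description"],
}


def missing_items(drafted: dict[str, list[str]]) -> list[str]:
    """Return every Annex IV item that has no draft yet."""
    return [
        f"{section}.{item}"
        for section, items in ANNEX_IV_SECTIONS.items()
        for item in items
        if item not in drafted.get(section, [])
    ]


# Example: only the intended-purpose text exists so far.
print(missing_items({"general_description": ["intended_purpose"]}))
```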
Step 5: Record-keeping and logging (Article 12)
High-risk AI systems must be designed to automatically log events during operation; a minimal logging sketch follows the checklist.
[ ] Enable automatic logging of events relevant for identifying situations that may result in the system presenting a risk or undergoing a substantial modification, and for facilitating post-market monitoring.
[ ] For remote biometric identification systems (Annex III, point 1(a)), logs must record at minimum: the period of each use, the reference database against which input data was checked, the input data that led to a match, and the identity of the natural persons involved in verifying the results (Article 12(3)).
[ ] Ensure logs are proportionate to the intended purpose of the system and comply with data protection law.
[ ] Retain logs for a period appropriate to the intended purpose, and at least six months unless otherwise provided by applicable law.
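In practice this means an append-only, structured event log emitted by the serving path. A minimal sketch in Python; the field names mirror the checklist and are assumptions, not a prescribed schema:

```python
import json
import logging
from datetime import datetime, timezone

# Append-only audit log, one JSON object per line.
logger = logging.getLogger("ai_system.audit")
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler("audit.log"))


def log_use(session_id: str, reference_db: str, input_ref: str,
            matched: bool, verified_by: str | None) -> None:
    """Record one use of the system."""
    logger.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "session_id": session_id,
        "reference_database": reference_db,  # what the input was checked against
        "input_reference": input_ref,        # pointer to the input, not raw personal data
        "match": matched,
        "verified_by": verified_by,          # person who verified the result, if any
    }))


log_use("s-42", "watchlist-v3", "blob://inputs/abc123", True, "reviewer-7")
```

Logging a pointer to the input rather than the input itself is one way to keep the six-month retention compatible with data minimisation; whether that works for your system depends on its intended purpose.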
Step 6: Transparency and information to deployers (Article 13)
The system must come with clear documentation for deployers (the organisations that use it). A skeleton for these instructions follows the checklist.
[ ] Provide instructions for use that include the provider's identity and contact details, the system's capabilities and limitations, intended purpose, and level of accuracy.
[ ] Describe known or foreseeable circumstances under which the system may create risks to health, safety, or fundamental rights.
[ ] Specify the performance metrics of the system, including the metrics for specific groups of persons on whom the system is intended to be used.
[ ] Describe human oversight measures and how to effectively implement them.
[ ] Specify the expected lifetime of the system and any necessary maintenance and care measures.
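Much of this is a writing exercise rather than an engineering one, but keeping the instructions as structured data makes it harder to ship a release with a stale document. A skeleton where every key and value is invented for illustration:

```python
# Article 13 instructions-for-use skeleton. Keys paraphrase the checklist
# above; all values are placeholders, not a real system's figures.
INSTRUCTIONS_FOR_USE = {
    "provider": {"name": "Example AI GmbH", "contact": "compliance@example.com"},
    "intended_purpose": "Rank job applications for human review",
    "capabilities_and_limitations": "Trained on EU-format applications only",
    "accuracy": {"overall": 0.91, "per_group": {"age_60_plus": 0.88}},
    "foreseeable_risks": ["Lower accuracy on non-EU CV formats"],
    "human_oversight": "All rejections require recruiter confirmation",
    "expected_lifetime": "24 months, retraining every 6 months",
    "maintenance": "Quarterly drift review against a held-out test set",
}
```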
Step 7: Human oversight (Article 14)
The system must be designed to allow effective human oversight during the period it is in use.
[ ] Enable human oversight through appropriate human-machine interface tools identified by the provider before the system is placed on the market.
[ ] Ensure the human overseer can fully understand the system's capacities and limitations, correctly interpret output, decide not to use the system or disregard its output, and intervene or interrupt the system.
[ ] For remote biometric identification systems (Annex III, point 1(a)): ensure no action or decision is taken on the basis of an identification unless it has been separately verified and confirmed by at least two natural persons with the necessary competence, training, and authority. Article 14(5) allows exceptions where Union or national law considers this disproportionate in law enforcement, migration, border control, or asylum. A sketch of such an approval gate follows this list.
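One way to make the verification requirement concrete in code is an approval gate that blocks any downstream action until enough distinct reviewers have confirmed the output. A minimal sketch; the class and its naming are ours, not from the Act:

```python
from dataclasses import dataclass, field


@dataclass
class OversightGate:
    """Blocks action on a system output until enough distinct humans confirm it."""
    required_reviewers: int = 2          # Article 14(5) minimum for biometric ID
    confirmations: set[str] = field(default_factory=set)

    def confirm(self, reviewer_id: str) -> None:
        self.confirmations.add(reviewer_id)

    def approved(self) -> bool:
        return len(self.confirmations) >= self.required_reviewers


gate = OversightGate()
gate.confirm("reviewer-7")
print(gate.approved())    # False: one reviewer is not enough
gate.confirm("reviewer-12")
print(gate.approved())    # True: two distinct confirmations recorded
```

Using a set means the same reviewer confirming twice does not count as two people, which is the point of the independence requirement.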
Step 8: Accuracy, robustness, and cybersecurity (Article 15)
[ ] Declare and document the accuracy levels of the system, including the metrics used to measure them (a reproducible report sketch follows this list).
[ ] Ensure resilience against errors, faults, or inconsistencies that may occur in the system's environment or among other systems with which it interacts.
[ ] Implement technical redundancy solutions where appropriate (backup plans, fail-safes).
[ ] Protect against adversarial attacks relevant to the system, including data poisoning, model manipulation, and adversarial inputs (prompt injection, evasion attacks).
[ ] Implement cybersecurity measures appropriate to the risks, including protection against unauthorised access, model extraction, and data leakage.
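Declared accuracy figures are easier to defend if they come from a script anyone on the team can re-run. A toy sketch with NumPy, using made-up arrays in place of a real test set and predictions:

```python
import numpy as np

# Stand-ins for your real test labels, model predictions, and group labels.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
groups = np.array(["a", "a", "a", "b", "b", "b", "b", "a"])


def accuracy(mask: np.ndarray) -> float:
    return float((y_true[mask] == y_pred[mask]).mean())


report = {"overall": accuracy(np.ones_like(y_true, dtype=bool))}
for g in np.unique(groups):
    report[f"group_{g}"] = accuracy(groups == g)

print(report)  # {'overall': 0.75, 'group_a': 1.0, 'group_b': 0.5}
```

The per-group numbers feed directly into the Annex IV documentation (Step 4) and the deployer instructions (Step 6), so it pays to compute all of them from the same script.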
Step 9: Quality management system (Article 17)
Providers must implement a documented quality management system.
[ ] Document a compliance strategy including an assessment of applicable regulatory requirements.
[ ] Establish processes for design, design control, development, quality control, and quality assurance.
[ ] Define examination, test, and validation procedures to be carried out before, during, and after development.
[ ] Specify technical standards applied, and where harmonised standards are not applied in full, explain the means used to meet the requirements.
[ ] Establish systems for data management including data collection, analysis, labelling, storage, filtration, mining, aggregation, retention, and any other data operation.
[ ] Implement a post-market monitoring system in accordance with Article 72.
[ ] Define procedures for incident reporting and communication with national competent authorities.
[ ] Document resource management including supply-chain and third-party component management.
[ ] Establish an accountability framework including management and other personnel responsibilities.
Step 10: Conformity assessment and market placement
Before placing the system on the EU market, complete the administrative requirements.
[ ] Conduct the conformity assessment (Article 43). Most high-risk systems under Annex III can use the internal control procedure (Annex VI). Certain biometric systems require third-party assessment by a notified body.
[ ] Draw up the EU declaration of conformity (Article 47) — a formal statement that the system meets all requirements.
[ ] Affix the CE marking (Article 48) — indicating conformity with the AI Act.
[ ] Register the system in the EU database for high-risk AI systems (Article 49) — this must happen before the system is placed on the market.
[ ] Designate an authorised representative in the EU if you are established outside the Union (Article 22).
Timeline: what to prioritise
With less than five months until August 2, 2026:
Do now (March-April 2026):
Classify all your AI systems — know which are high-risk
Start technical documentation (Annex IV) — this takes the longest
Begin your risk management system documentation
Audit your training data governance
Do next (May-June 2026):
Complete the quality management system
Implement logging and record-keeping
Prepare instructions for deployers
Validate human oversight mechanisms
Finalise (July 2026):
Conduct the conformity assessment
Prepare the EU declaration of conformity
Register in the EU database
Affix CE marking
The companies that start now will have time to iterate. The ones that wait until July will be scrambling — and the documentation will show it.
Get started
The longest item on this checklist is technical documentation. If you want to compress weeks of work into hours, Annexa's free risk triage classifies your system in minutes, and the Pro tier generates a full Annex IV dossier from your actual codebase for €49/month.
Whether you use a tool or do it manually, the important thing is to start. August is closer than it looks.