
Gregorio von Hildebrand

Posted on • Originally published at aivigilia.com

EU AI Act Article 9: Risk Management for High-Risk AI Systems

Article 9 mandates continuous risk management for high-risk AI. Learn what documentation, processes, and testing you need before August 2026 enforcement.

What Article 9 Actually Requires

Article 9 of the EU AI Act establishes the risk management framework that every provider of high-risk AI systems must implement. It's not a one-time checkbox—it's a continuous, documented process that must be in place before you place your system on the market and maintained throughout its lifecycle.

If your AI system falls under Annex III (HR tools, credit scoring, law enforcement, critical infrastructure, education, etc.), Article 9 applies to you. Fines for non-compliance with the high-risk requirements reach €15 million or 3% of total worldwide annual turnover, whichever is higher. Enforcement begins August 2, 2026.

Here's what Article 9 demands in plain language:

  • Establish and document a risk management system that is continuous and iterative
  • Identify and analyze known and foreseeable risks associated with each high-risk AI system
  • Estimate and evaluate risks that may emerge when the system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse
  • Adopt suitable risk management measures to address identified risks
  • Test the system to ensure risk management measures are effective
  • Update the risk management process throughout the entire lifecycle of the system

The key word is continuous. You can't run a risk assessment in January 2026, file it, and forget it. Article 9 requires ongoing monitoring, testing, and documentation updates as your system evolves.

The Five-Step Risk Management Process

Article 9 doesn't prescribe a specific methodology, but it does outline a clear sequence. Here's how to structure your compliance:

Step 1: Risk Identification

Document every reasonably foreseeable risk associated with your AI system. This includes:

  • Risks to health and safety
  • Risks to fundamental rights (privacy, non-discrimination, freedom of expression)
  • Risks arising from intended use
  • Risks arising from reasonably foreseeable misuse

Concrete example: If you're deploying an AI-powered recruitment tool, foreseeable risks include discriminatory outcomes based on protected characteristics (gender, age, ethnicity), privacy violations from excessive data collection, and misuse by hiring managers who over-rely on the system without human review.
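The output of Step 1 is a risk register. As a minimal sketch, one entry per identified risk might look like this; the schema and field names are illustrative, not mandated by the Act, and the example rows come from the recruitment-tool scenario above:

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """One row in a risk register (illustrative schema, not prescribed by Article 9)."""
    risk_id: str
    description: str
    category: str             # e.g. "fundamental_rights" or "health_safety"
    source: str               # "intended_use" or "foreseeable_misuse"
    affected_populations: list[str] = field(default_factory=list)

# Example entries for the recruitment-tool scenario above
register = [
    RiskEntry("R-001", "Discriminatory ranking by gender, age, or ethnicity",
              "fundamental_rights", "intended_use", ["job applicants"]),
    RiskEntry("R-002", "Hiring managers accept scores without human review",
              "fundamental_rights", "foreseeable_misuse", ["job applicants"]),
]
```

Note that misuse scenarios (R-002) get their own entries rather than being folded into intended-use risks; this makes the misuse analysis visible to an auditor.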

Step 2: Risk Analysis and Estimation

For each identified risk, estimate:

  • Severity: What is the magnitude of harm if the risk materializes?
  • Probability: How likely is this risk to occur?
  • Affected populations: Who is exposed to this risk?

Document your methodology. If you use a risk matrix (e.g., 5×5 likelihood-impact grid), define your scoring criteria and thresholds.
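A 5×5 likelihood-impact grid like the one mentioned above can be sketched in a few lines. The scales and band thresholds below are assumptions for illustration; Article 9 does not prescribe any particular scoring scheme, so document whichever criteria you actually adopt:

```python
# Illustrative 5x5 likelihood-impact scoring. Scales and band cutoffs are
# example values, not requirements of the Act.
SEVERITY = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "critical": 5}
PROBABILITY = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "frequent": 5}

def risk_score(severity: str, probability: str) -> int:
    """Score = severity x probability, yielding a value from 1 to 25."""
    return SEVERITY[severity] * PROBABILITY[probability]

def risk_band(score: int) -> str:
    """Map a numeric score to a band; the cutoffs here are example thresholds."""
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"
```

Writing the criteria down as executable rules has a side benefit: the same scoring is applied to every risk, which is exactly the consistency an auditor will look for.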

Step 3: Risk Evaluation

Determine whether each risk is acceptable or requires mitigation. Article 9 requires you to evaluate risks in light of:

  • The intended purpose of the system
  • Reasonably foreseeable misuse
  • The state of the art in risk mitigation

If a risk exceeds your acceptable threshold, you must implement controls.
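The evaluation step reduces to a documented decision rule. Assuming numeric scores from a likelihood-impact grid like the 5×5 one described in Step 2, a minimal sketch (the threshold value is an example, not a prescribed number) is:

```python
# Example acceptability threshold on a 1-25 scoring scale; set and document
# your own value as part of your methodology.
ACCEPTABLE_MAX = 7

def requires_mitigation(score: int) -> bool:
    """Any risk scored above the documented acceptability threshold
    must receive mitigation controls before the system goes to market."""
    return score > ACCEPTABLE_MAX
```

The point is not the arithmetic but the audit trail: a fixed, written threshold turns "we judged it acceptable" into a decision a regulator can verify.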

Step 4: Risk Mitigation

Adopt measures to eliminate or reduce risks to an acceptable level. Article 9 explicitly requires:

  • Design and development controls: Build safety and fairness into the system architecture
  • Testing and validation: Demonstrate that controls work as intended
  • Information to users: Provide clear instructions and warnings (see Article 13)
  • Human oversight mechanisms: Enable meaningful human intervention (see Article 14)

Document every mitigation measure and map it back to the specific risk(s) it addresses.
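That risk-to-control mapping can be made mechanically checkable. A sketch, using hypothetical risk and control IDs, with a completeness check that surfaces any unmitigated risk:

```python
# Hypothetical control and risk IDs; the point is explicit, checkable traceability.
controls_to_risks = {
    "C-01-bias-testing":      ["R-001"],
    "C-02-human-review-gate": ["R-001", "R-002"],
}
unacceptable_risks = {"R-001", "R-002"}

def uncovered(risks: set[str], mapping: dict[str, list[str]]) -> set[str]:
    """Return risks with no mapped control -- each one is a compliance gap."""
    covered = {r for risk_list in mapping.values() for r in risk_list}
    return risks - covered
```

Run as a CI check, this catches the common failure mode of risks and controls living in separate documents that silently drift apart.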

Step 5: Continuous Monitoring and Update

Risk management doesn't stop at deployment. Article 9 requires you to:

  • Monitor the system's performance in production
  • Update risk assessments when you modify the system or learn of new risks
  • Maintain records of all risk management activities

This means version-controlled documentation, change logs, and periodic reviews—not a static PDF.

Article 9 Documentation Requirements

The EU AI Act doesn't specify a document template, but Article 11 (technical documentation) and Article 9 together imply you must maintain:

| Document | Purpose | Update Frequency |
| --- | --- | --- |
| Risk Management Plan | Describes your overall process, methodology, roles, and review cadence | Annually, or when the process changes |
| Risk Register | Lists all identified risks with severity, probability, and status | Continuously (living document) |
| Risk Assessment Report | Detailed analysis of each risk, including evidence and evaluation | Per system version or major change |
| Mitigation Control Specification | Describes each control, its implementation, and effectiveness testing | Per control; updated when modified |
| Test and Validation Records | Evidence that mitigations work (test plans, results, pass/fail criteria) | Per test cycle |
| Monitoring and Incident Log | Production performance data, anomalies, user complaints, near-misses | Continuously (append-only log) |

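The monitoring and incident log is naturally append-only: records are added with a timestamp and never edited or deleted. A minimal sketch using JSON lines; the record schema is an assumption for illustration, not a mandated format:

```python
import json
from datetime import datetime, timezone

def make_record(system_version: str, event_type: str, detail: str) -> str:
    """Serialize one monitoring event as a JSON line (illustrative schema)."""
    return json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "system_version": system_version,
        "event_type": event_type,   # e.g. "anomaly", "complaint", "near_miss"
        "detail": detail,
    })

def append_record(log: list[str], line: str) -> None:
    """Append-only discipline: existing records are never rewritten."""
    log.append(line)

incident_log: list[str] = []
append_record(incident_log, make_record("1.2.0", "complaint", "applicant appeal"))
```

In production the list would be a file or log store with write-once semantics; tagging each record with the system version ties incidents back to the risk assessment for that version.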
All documentation must be clear, consistent, and auditable. If a national authority makes a reasoned request for your Article 9 records, you must be able to produce them promptly.

Common Gaps and Anti-Patterns

Most organizations fail Article 9 compliance in predictable ways. Here are the eight most common anti-patterns we detect in Vigilia audits:

  1. One-time risk assessment: Treating risk management as a pre-launch checklist instead of a continuous process
  2. No misuse analysis: Identifying intended-use risks but ignoring foreseeable misuse scenarios
  3. Undocumented methodology: Using subjective risk judgments without defined scoring criteria
  4. No traceability: Listing risks and controls in separate documents with no clear mapping
  5. Missing test evidence: Claiming mitigations are effective without documented validation
  6. No production monitoring: Deploying the system and never checking if risk assumptions hold in the real world
  7. Stale documentation: Risk registers that haven't been updated in 12+ months
  8. No version control: Overwriting old risk assessments instead of maintaining a change history

Each of these gaps can trigger enforcement action. Article 9 compliance is not about having some documentation—it's about having the right documentation, kept current, and demonstrably used to make decisions.

How Article 9 Connects to Other Requirements

Article 9 is the foundation, but it doesn't stand alone. Your risk management system must feed into:

  • Article 10 (Data Governance): Risk assessment informs what training data you need and how you validate it
  • Article 13 (Transparency): Identified risks determine what information you must provide to users
  • Article 14 (Human Oversight): Risk severity dictates the level of human control required
  • Article 15 (Accuracy, Robustness, Cybersecurity): Risk mitigation drives your technical performance requirements
  • Article 72 (Post-Market Monitoring): Continuous risk management requires ongoing performance tracking

If your Article 9 process is weak, every downstream obligation becomes harder to satisfy.

Practical Implementation Checklist

Here's a 30-day roadmap to establish Article 9 compliance:

Week 1: Scoping and Methodology

  • Confirm your system is high-risk (check Annex III)
  • Define your risk management process (who owns it, review cadence, escalation paths)
  • Choose a risk assessment methodology (ISO 31000, NIST AI RMF, or custom)
  • Document your risk scoring criteria (severity scale, probability scale, acceptability thresholds)

Week 2: Risk Identification and Analysis

  • Conduct a structured risk workshop with engineering, product, legal, and compliance
  • Identify risks to health, safety, and fundamental rights
  • Analyze reasonably foreseeable misuse scenarios
  • Populate your risk register with initial severity and probability estimates

Week 3: Risk Evaluation and Mitigation Planning

  • Evaluate each risk against your acceptability criteria
  • Design mitigation controls for unacceptable risks
  • Map each control to the specific risk(s) it addresses
  • Define test plans to validate control effectiveness

Week 4: Testing, Documentation, and Monitoring Setup

  • Execute validation tests for each mitigation control
  • Document test results and update risk register with residual risk levels
  • Set up production monitoring (performance metrics, anomaly detection, user feedback channels)
  • Schedule your first quarterly risk management review
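The Week 4 monitoring setup can start very simply: compare a monitored production metric against the baseline established during validation testing, and alert when it drifts. The metric name and tolerance below are illustrative assumptions:

```python
def drift_alert(baseline: float, production: float, tolerance: float = 0.05) -> bool:
    """Flag when a monitored metric (e.g. a selection-rate parity ratio for a
    recruitment tool) drifts beyond the tolerance validated during testing.
    The default tolerance is an example value -- set and document your own."""
    return abs(production - baseline) > tolerance

# e.g. parity of 0.92 measured in validation vs. 0.81 observed in production
needs_review = drift_alert(0.92, 0.81)  # gap of 0.11 exceeds the 0.05 tolerance
```

Each alert should feed the monitoring and incident log and, where warranted, trigger an update to the risk register, closing the loop that Step 5 requires.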

This isn't a one-person job. Article 9 compliance requires cross-functional collaboration and executive sponsorship.

What Happens If You Don't Comply

Non-compliance with Article 9 is an infringement of the high-risk provider obligations, penalized under Article 99 of the EU AI Act. National market surveillance authorities can:

  • Require you to take corrective action within a specified timeframe
  • Restrict or prohibit the placing on the market of your AI system
  • Withdraw your system from the market
  • Impose administrative fines of up to €15 million or 3% of total worldwide annual turnover, whichever is higher

Beyond regulatory penalties, inadequate risk management exposes you to:

  • Civil liability: If your AI system causes harm and you can't demonstrate reasonable risk management, you may face lawsuits under national product liability laws
  • Reputational damage: Public disclosure of enforcement actions can destroy customer trust
  • Procurement exclusion: Many EU public sector buyers will require proof of Article 9 compliance in RFPs

The cost of non-compliance far exceeds the cost of getting it right.

How Vigilia Helps You Meet Article 9 Requirements

Vigilia automates the Article 9 gap analysis that traditionally takes consultants weeks to complete. In 20 minutes, you get:

  • Risk classification: Determines if your system is high-risk under Annex III
  • Article 9 compliance score: Evaluates your current risk management process against all Article 9 requirements
  • Gap analysis: Identifies missing documentation, process weaknesses, and anti-patterns
  • Remediation roadmap: Prioritized action items with effort estimates and fine exposure calculations
  • Audit-ready PDF: Exportable report you can share with legal, compliance, or external auditors

Traditional compliance audits cost €5,000–€40,000 and take 1–3 months. Vigilia costs €499 and delivers results in 20 minutes. You get the same article-by-article analysis, documented methodology, and remediation guidance—without the consultant overhead.

August 2, 2026 doesn't move. If you're deploying high-risk AI in the EU, you need Article 9 compliance in place before enforcement begins. The sooner you start, the more time you have to close gaps and validate your controls.

Ready to see where you stand? Generate your EU AI Act compliance report now — €499, 20 minutes, article-by-article gap analysis including Article 9 risk management requirements.


This article provides general information about the EU AI Act and does not constitute legal advice. For specific compliance questions, consult a qualified attorney with expertise in EU AI regulation.


