When teams discuss EU AI Act readiness, they usually focus on classification, documentation, and conformity assessment. But one of the hardest obligations in practice is operational: what happens when something goes wrong in production.
Article 73 requires providers of high-risk AI systems to report serious incidents to market surveillance authorities, with strict time windows (up to 15 days, and faster in severe cases). For many organizations, that is not mainly a legal drafting issue. It is a detection, escalation, and evidence-preservation issue.
This guide is a practical playbook for building a reporting process that works under pressure.
What Article 73 Actually Requires (in plain terms)
From the legal text and the Commission's AI Act Service Desk publication, the main requirements are:
- Report serious incidents to the market surveillance authority in the Member State where the incident occurred.
- Report immediately after establishing a causal link (or reasonable likelihood), and no later than 15 days after awareness.
- Report within 2 days for widespread infringement / very severe incident categories.
- Report within 10 days in case of death.
- Initial incomplete reports are allowed, followed by complete reporting.
- Investigate without delay, cooperate with authorities, and avoid altering evidence in ways that could compromise later evaluation.
Sources:
- European Commission AI Act Service Desk, Article 73: https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-73
- Consolidated AI Act text (Article 73): https://artificialintelligenceact.eu/article/73/
Why companies miss deadlines (even when they know the law)
In preparation audits, the same operational gaps appear repeatedly:
- No incident taxonomy for AI-specific harm: security teams have incident classes; product teams have bug severity; legal has regulatory risk. They are not aligned.
- No trigger owner: teams debate whether the issue is "serious enough" while the reporting clock is running.
- Evidence gets overwritten: logs are rotated, models are hot-patched, or prompts are not preserved in reproducible form.
- Cross-border confusion: teams do not know which Member State authority should receive first notice when users are in multiple countries.
- Provider vs deployer ambiguity: contracts do not clearly define who notifies and who supplies supporting facts.
The result is predictable: delayed notification, incomplete documentation, and inconsistent narratives across legal, engineering, and customer-facing teams.
A realistic 15-day playbook
Below is a pragmatic sequence you can implement now.
Day 0 (first awareness): stabilize and preserve
Objective: avoid evidence loss and establish accountable ownership.
- Appoint an incident lead (name, backup, and on-call path).
- Open a dedicated incident record with immutable timestamps.
- Preserve relevant artifacts immediately:
- model version and configuration
- prompts/inputs and outputs (where lawful and available)
- human-oversight actions and overrides
- logs, alerts, user complaints, and rollback actions
- Freeze non-essential changes to affected components until legal/technical review confirms safe handling.
Critical principle: do not wait for perfect causality proof before opening the case. You can submit an initial incomplete report later if needed.
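The evidence-lock step above can be sketched in code. This is a minimal illustration, assuming artifacts are local files; `snapshot_evidence` and the manifest fields are hypothetical names, and in practice the manifest itself should be written to append-only (WORM) storage:

```python
import hashlib
from datetime import datetime, timezone
from pathlib import Path

def sha256_file(path: Path) -> str:
    """Content hash so later tampering or silent log rotation is detectable."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot_evidence(incident_id: str, artifact_paths: list[Path]) -> dict:
    """Record what was preserved, when (immutable UTC timestamp), and with which hashes."""
    return {
        "incident_id": incident_id,
        "captured_at_utc": datetime.now(timezone.utc).isoformat(),
        "artifacts": [
            {"path": str(p), "sha256": sha256_file(p), "bytes": p.stat().st_size}
            for p in artifact_paths
        ],
    }
```

Even this small step turns "we think we kept the logs" into a verifiable claim on incident day.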
Day 1–2: classify severity and start the reporting clock
Objective: determine if Article 73 thresholds are likely met.
- Apply a pre-defined seriousness matrix (harm impact × scale × reversibility × affected rights/safety).
- Decide whether the incident could fit:
- standard serious incident (up to 15 days)
- widespread/severe category (2 days)
- death-related incident (10 days)
- Record rationale for classification decisions, including dissenting views.
If the threshold is plausibly met, prepare a draft notification immediately. Over-reporting with clear uncertainty notes is generally safer than late reporting after internal debate.
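A trigger matrix works best when the deadline is computed, not debated. A sketch, assuming the three simplified category labels above; the statutory definitions in Article 3(49) and Article 73 govern the real classification:

```python
from datetime import datetime, timedelta

# Days from first awareness, per Article 73 reporting windows.
REPORTING_WINDOWS = {
    "widespread_or_very_severe": 2,   # Art. 73(3)
    "death": 10,                      # Art. 73(4)
    "serious_incident": 15,           # Art. 73(2)
}

def reporting_deadline(category: str, first_awareness_utc: datetime) -> datetime:
    """Return the latest permissible notification time for a classified incident."""
    if category not in REPORTING_WINDOWS:
        raise ValueError(f"Unclassified category: {category!r} - escalate for review")
    return first_awareness_utc + timedelta(days=REPORTING_WINDOWS[category])
```

Raising on unknown categories is deliberate: an unclassifiable incident should escalate to legal review, not silently default to the longest window.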
Day 2–5: submit initial notification (if needed)
Objective: meet timing while facts are still emerging.
A strong initial report should include:
- system identity and intended purpose
- incident date/time window and discovery path
- known/potential harm profile
- affected geography / Member State context
- current containment actions
- known data gaps and expected follow-up timeline
Article 73 explicitly allows incomplete initial reporting, provided follow-up is timely and substantive.
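One way to keep initial reports consistent under time pressure is a fixed template. A sketch with illustrative field names matching the list above; the receiving authority's actual submission format takes precedence:

```python
import json
from dataclasses import asdict, dataclass, field

@dataclass
class InitialIncidentReport:
    # Field names are illustrative, not an official schema.
    system_name: str
    intended_purpose: str
    incident_window_utc: str      # date/time window and discovery path
    discovery_path: str
    harm_profile: str             # known/potential harm
    member_states_affected: list[str]
    containment_actions: list[str]
    known_gaps: list[str] = field(default_factory=list)
    followup_expected_by: str = ""

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)
```

The `known_gaps` and `followup_expected_by` fields matter most: they are what distinguishes a legitimate incomplete initial report from a vague one.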
Day 5–10: investigate and update
Objective: move from hypothesis to defensible cause analysis.
- Run technical root-cause analysis (data drift, edge case, oversight failure, integration issue, etc.).
- Quantify affected population and confidence intervals.
- Validate whether corrective actions reduce recurrence risk.
- Coordinate messaging so regulatory, legal, and customer communications are factually consistent.
If death-related criteria apply, ensure the 10-day deadline is handled as a hard stop.
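For quantifying the affected population from a reviewed sample, a standard Wilson score interval is a reasonable choice over a raw percentage, since it behaves sensibly at small counts. A sketch:

```python
import math

def wilson_interval(harmed: int, sampled: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for the harmed-case rate in a reviewed sample."""
    if sampled == 0:
        raise ValueError("Need at least one reviewed case")
    p = harmed / sampled
    denom = 1 + z**2 / sampled
    centre = (p + z**2 / (2 * sampled)) / denom
    margin = (z * math.sqrt(p * (1 - p) / sampled + z**2 / (4 * sampled**2))) / denom
    return max(0.0, centre - margin), min(1.0, centre + margin)
```

Reporting an interval rather than a point estimate also gives you honest "uncertainty boundaries" to cite in the final report.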
Day 10–15: complete report and corrective plan
Objective: provide authorities with decision-grade information.
Final reporting package should include:
- incident chronology
- causal analysis and uncertainty boundaries
- corrective and preventive actions (CAPA)
- deployment restrictions or temporary suspension decisions
- monitoring commitments and review cadence
Think of this not as a one-off filing, but as the first artifact in your post-market compliance trail.
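A mechanical completeness check before submission catches missing sections that are easy to drop under deadline pressure. Section keys here are illustrative shorthand for the package items listed above:

```python
# Illustrative keys for the final reporting package.
REQUIRED_SECTIONS = {
    "chronology",
    "causal_analysis",
    "capa",
    "deployment_decisions",
    "monitoring_commitments",
}

def missing_sections(report: dict) -> set[str]:
    """Flag required sections that are absent or empty before submission."""
    return {s for s in REQUIRED_SECTIONS if not report.get(s)}
```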
Provider–deployer split: define it before incident day
Many AI products involve one company as provider and another as deployer. Under pressure, this becomes a bottleneck unless contracts are explicit.
Minimum contractual clauses to include now:
- who files Article 73 notification
- max handoff time for incident facts (for example, 24 hours)
- evidence retention responsibilities
- contact points for regulatory follow-up
- authority for emergency suspension / kill switch decisions
Without these clauses, your legal position may be clear on paper but unworkable in operations.
The evidence package regulators usually expect
No two investigations are identical, but teams should prepare to produce:
- versioned model and release history
- validation/testing evidence relevant to failure mode
- human oversight procedures and operator logs
- incident triage records and decision timestamps
- corrective-action verification data
If you cannot reconstruct what happened from your own records, authorities will assume your controls are weaker than your policy documents claim.
Practical controls that reduce reporting risk
You do not need a huge compliance program to improve immediately. Start with these controls:
- AI-specific incident rubric integrated into existing security/ops workflows.
- 72-hour internal escalation SLA from first signal to legal/compliance review.
- Evidence lock protocol for logs, prompts, and model artifacts.
- Pre-mapped authority list by country and product footprint.
- Quarterly tabletop exercise for one realistic serious-incident scenario.
These five controls usually deliver a larger risk reduction than producing another policy PDF.
What to do this week (if you are behind)
- Name your incident lead and backup.
- Create a one-page Article 73 trigger matrix.
- Test evidence preservation on one live model.
- Review provider/deployer language in top customer contracts.
- Run one 60-minute simulation with legal + engineering + support.
If these actions feel basic, that is the point. Basic execution under time pressure is what prevents deadline failure.
A sober note on enforcement reality
The AI Act is often discussed as future-state compliance. Incident reporting is different: it is measured against actual operational behavior during real harm events.
Teams that perform well are not the ones with the longest policies. They are the ones with clear ownership, preserved evidence, and repeatable cross-functional response.
If you are already preparing for conformity assessment and post-market monitoring, this playbook should be part of that same system, not a separate legal checklist.
If you want a structured way to track incident-readiness controls and documentation gaps, you can use AktAI’s dashboard:
- https://aktai.eu/dashboard?utm_source=bluesky&utm_medium=social&utm_campaign=article_incident_reporting_playbook