DEV Community

Guillermo Llopis

EU AI Act vs GDPR: What's Different and What Overlaps

If your company is already GDPR-compliant, you might assume the EU AI Act is more of the same. It is not. While both regulations share some DNA — risk-based thinking, documentation requirements, transparency obligations — they are fundamentally different laws with different goals, different scopes, and different compliance mechanisms.

Understanding where they overlap and where they diverge is critical for avoiding both compliance gaps and duplicated effort.

Different laws, different purposes
The GDPR is a fundamental rights law focused on protecting individuals' personal data. It governs how data is collected, processed, stored, and shared, regardless of what technology is involved.

The EU AI Act is a product safety law focused on ensuring AI systems are safe, transparent, and respect fundamental rights. It regulates how AI systems are designed, developed, validated, and deployed — based on risk category, not data type.

This distinction matters in practice: the AI Act applies even when no personal data is processed. An AI system that analyzes satellite imagery for infrastructure planning, or that optimizes industrial processes using only machine-generated sensor data, still falls under the AI Act if it meets the definition of an AI system. The GDPR would not apply in those scenarios.

Conversely, processing personal data with simple rule-based software (no AI involved) triggers GDPR obligations but not the AI Act.
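The scoping logic above boils down to two independent questions, which can be sketched as a tiny decision function. This is a deliberate simplification: real scoping turns on the AI Act's Article 3 definition of an AI system and on whether data is "personal" under the GDPR, both of which need legal analysis.

```python
def applicable_regulations(is_ai_system: bool, processes_personal_data: bool) -> set:
    """Simplified sketch: which of the two frameworks a system falls under."""
    regs = set()
    if is_ai_system:
        regs.add("AI Act")   # applies regardless of what data is involved
    if processes_personal_data:
        regs.add("GDPR")     # applies regardless of what technology is involved
    return regs

# Satellite imagery analysis for infrastructure planning: AI, no personal data
assert applicable_regulations(True, False) == {"AI Act"}
# Rule-based software processing personal data: no AI involved
assert applicable_regulations(False, True) == {"GDPR"}
# The common case — an AI system processing personal data — triggers both
assert applicable_regulations(True, True) == {"AI Act", "GDPR"}
```

The two axes are orthogonal, which is exactly why GDPR compliance alone tells you nothing about AI Act applicability.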

Where they genuinely overlap
Despite their different focuses, there are real areas of intersection — especially when AI systems process personal data, which most do.

Transparency
Both regulations require transparency, but about different things:

GDPR (Articles 13-14): You must tell individuals that their data is being processed, what data, why, how long, and their rights regarding it.
AI Act (Article 13): You must ensure the AI system itself is designed to be sufficiently transparent for deployers to understand and use it appropriately — including its capabilities, limitations, accuracy levels, and foreseeable misuse scenarios.
If your AI system processes personal data, you need to satisfy both. A privacy notice alone does not meet AI Act transparency requirements, and an AI system transparency sheet does not replace your GDPR obligations.

Risk assessments
This is where duplication becomes a real operational concern:

GDPR requires a Data Protection Impact Assessment (DPIA) under Article 35 when processing is likely to result in high risk to individuals' rights and freedoms.
AI Act requires a Fundamental Rights Impact Assessment (FRIA) under Article 27 for deployers of high-risk AI systems used in certain domains (credit scoring, insurance pricing, law enforcement, migration management, and others).
These two assessments have different scopes, different supervisory authorities, and different procedural requirements. But in practice, most high-risk AI systems that trigger a FRIA will also trigger a DPIA — because they typically process personal data in ways that affect people's rights.

The practical approach: conduct the DPIA first, using the information the AI provider must give you under Article 13 of the AI Act. Then build on it for the FRIA. Do not treat them as entirely separate exercises.

Automated decision-making
GDPR Article 22 gives individuals the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects. This includes profiling.

The AI Act addresses a similar concern but from the product side. High-risk AI systems must include human oversight measures (Article 14) — technical features that enable humans to understand, supervise, and intervene in the system's operation. Several categories of AI systems in Annex III explicitly target automated decision-making scenarios: credit scoring, hiring, insurance, and benefits administration.

If your AI system makes or significantly influences decisions about people, you likely need to address both Article 22 GDPR and the AI Act's human oversight requirements. They are not the same obligation, but the engineering solutions often overlap: providing explanations, enabling human review, allowing contestation.
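One way to picture that engineering overlap is a single decision record that carries the fields both obligations care about. The names and structure here are invented for illustration, not taken from either regulation:

```python
from dataclasses import dataclass

@dataclass
class AutomatedDecision:
    """Illustrative record for a decision an AI system makes about a person."""
    subject_id: str
    outcome: str                 # e.g. "credit_denied"
    explanation: str             # supports explanation duties under both regimes
    significant_effect: bool     # the trigger for GDPR Art. 22 concerns
    reviewed_by_human: bool = False
    contested: bool = False

    def requires_human_review(self) -> bool:
        # A solely automated decision with significant effects needs a
        # human in the loop before it stands.
        return self.significant_effect and not self.reviewed_by_human

decision = AutomatedDecision(
    subject_id="applicant-42",
    outcome="credit_denied",
    explanation="debt-to-income ratio above configured threshold",
    significant_effect=True,
)
assert decision.requires_human_review()
decision.reviewed_by_human = True
assert not decision.requires_human_review()
```

Building the explanation, review, and contestation hooks into one data model means the same plumbing serves both the GDPR right and the AI Act's oversight design requirement.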

Data governance
GDPR has comprehensive requirements for lawful processing, purpose limitation, data minimization, accuracy, and storage limitation.

The AI Act adds AI-specific data governance requirements in Article 10 for high-risk systems, covering training, validation, and testing datasets. These include requirements for data relevance, representativeness, absence of errors, completeness, and statistical properties. Article 10 also allows processing of special categories of personal data (health, ethnicity, biometrics) for bias detection and correction purposes — something that GDPR alone would generally prohibit.

This is one area where the AI Act explicitly overrides the GDPR's defaults, creating a legal basis for processing sensitive data that is necessary to ensure AI systems do not discriminate.

What GDPR compliance does NOT cover
If you have been through GDPR compliance, you have a head start. But several AI Act requirements have no GDPR equivalent:

| AI Act requirement | GDPR equivalent |
| --- | --- |
| Risk classification (prohibited, high, limited, minimal) | None — GDPR does not classify systems by risk tier |
| Annex IV technical documentation | None — GDPR records of processing are far less detailed |
| Conformity assessment (self-assessment or notified body) | None |
| CE marking for high-risk AI systems | None |
| EU database registration (Article 71) | DPA registration is mostly gone post-GDPR |
| Post-market monitoring system (Article 72) | None — GDPR does not require ongoing product monitoring |
| Accuracy, robustness, and cybersecurity requirements (Article 15) | Security measures exist but are far less prescriptive |
| Quality management system (Article 17) | None at this level of specificity |
The most significant gap is Annex IV technical documentation. GDPR's records of processing activities (Article 30) cover data flows and purposes. Annex IV demands detailed documentation of your system's architecture, training data provenance, development methodology, testing procedures, risk management system, and post-market monitoring plan. It is an order of magnitude more comprehensive.

What you can reuse from GDPR
Not everything is new ground. GDPR compliance gives you:

Data mapping and inventories. You already know what personal data you process and where it flows. This feeds directly into Annex IV Section 2 (data requirements).

Lawful basis analysis. You have already determined your legal basis for processing. This groundwork helps with AI Act Article 10 data governance requirements.

DPIA methodology. Your DPIA process, templates, and governance structures can be extended for AI Act FRIAs rather than building from scratch.

DPO and governance structures. The organizational muscles you built for GDPR — designated officers, compliance processes, training programs — are directly applicable.

Individual rights mechanisms. Your processes for handling data subject requests can inform the human oversight and contestation mechanisms the AI Act requires.

Vendor management. GDPR Article 28 controller-processor agreements have parallels in AI Act provider-deployer obligations. Your vendor assessment processes can be adapted.

The Digital Omnibus complication
The European Commission's Digital Omnibus proposal (November 2025) aims to reduce regulatory overlap between the AI Act, GDPR, and other digital regulations. Among its proposed changes:

Clarifying the interaction between AI Act FRIAs and GDPR DPIAs to reduce duplication
Streamlining conformity assessment procedures
Extending simplified documentation requirements to all SMEs (not just microenterprises)
The Omnibus is still working through the legislative process. Until it is adopted, organizations must comply with both regulations as written — which means some duplication is unavoidable.

What to do now
Map your AI systems against both frameworks. For each AI system, identify which GDPR obligations apply (does it process personal data?) and which AI Act obligations apply (what is its risk classification?). The overlap set is where you need integrated compliance.
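A minimal sketch of that mapping exercise, with invented system names and a deliberately simplified overlap rule: GDPR applies where personal data is processed, and the AI Act's substantive obligations bite mainly for high-risk and limited-risk systems. A real inventory would record far more per system.

```python
# Hypothetical inventory: each entry answers the two framework questions.
systems = [
    {"name": "cv-screening",     "personal_data": True,  "ai_act_risk": "high"},
    {"name": "satellite-mapper", "personal_data": False, "ai_act_risk": "minimal"},
    {"name": "chat-assistant",   "personal_data": True,  "ai_act_risk": "limited"},
]

# The overlap set: systems needing integrated GDPR + AI Act compliance.
overlap = [
    s["name"]
    for s in systems
    if s["personal_data"] and s["ai_act_risk"] in ("high", "limited")
]
print(overlap)  # ['cv-screening', 'chat-assistant']
```

The satellite mapper drops out of the overlap even though it is an AI system: with no personal data, its obligations come from the AI Act alone.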

Integrate your impact assessments. Do not run DPIAs and FRIAs as separate projects. Start with the DPIA, extend it to cover fundamental rights beyond data protection, and you have a solid foundation for both.

Extend your documentation. GDPR Article 30 records are a starting point, not a finish line. For high-risk AI systems, you need full Annex IV technical documentation — and it requires significantly more technical detail than anything GDPR demands.

Classify your AI systems. The first step is knowing whether your AI systems are high-risk under the AI Act. Annexa's free risk triage can classify your system in minutes, with no signup required — giving you clarity on which obligations apply before you invest in compliance effort.

Do not assume GDPR compliance is enough. It is a foundation, not a substitute. The AI Act introduces product-safety obligations that have no precedent in data protection law.

The August 2026 deadline is five months away. Companies that treat AI Act compliance as a GDPR extension will find gaps. Companies that start from scratch will waste effort. The right approach is somewhere in between — build on what you have, but recognize what is genuinely new.
