If you use AI in your product or operations — and in 2026, most companies do — you probably already think about GDPR. But the EU AI Act, which entered into force in August 2024 and applies progressively through 2027, adds a second compliance layer that sits alongside GDPR rather than replacing it.
The relationship between GDPR and the EU AI Act is not simple. They have different scopes, different enforcement mechanisms, and different risk frameworks. But for many AI systems, both apply simultaneously. Understanding how the GDPR EU AI Act interaction works — where the frameworks overlap, where they diverge, and what each requires — is now a baseline competency for any company using AI to process personal data.
This guide explains the interaction clearly, without burying you in legal text.
What the EU AI Act Is
The EU AI Act is a risk-based regulatory framework for artificial intelligence systems. Unlike GDPR, which centres on personal data, the AI Act focuses on the risk that AI systems pose to health, safety, fundamental rights, and democracy — regardless of whether personal data is involved.
The Act establishes four risk tiers:
Unacceptable risk — AI systems that are prohibited outright. These include social scoring systems (in the final text of the Act, the prohibition covers private operators as well as public authorities), AI that subliminally manipulates behaviour to cause harm, and most uses of real-time remote biometric identification in publicly accessible spaces.
High risk — AI systems in sensitive domains that are permitted but subject to strict requirements. Examples include CV screening tools, creditworthiness assessment models, biometric identification systems, AI used in educational access decisions, AI used in critical infrastructure, and AI used in law enforcement.
Limited risk — AI systems with lighter transparency obligations. Chatbots, for example, must disclose that the user is interacting with AI.
Minimal risk — The vast majority of AI applications, which face no additional obligations beyond existing law (including GDPR).
The AI Act is enforced by national market surveillance authorities and, for general-purpose AI models, by the European AI Office. Penalties for prohibited practices reach €35 million or 7% of global annual turnover. Penalties for high-risk non-compliance reach €15 million or 3% of global turnover.
The AI Act Timeline: Key Application Dates Through 2027
The GDPR EU AI Act interaction does not arrive all at once — the AI Act is phased in over several years:
- August 2024 — AI Act entered into force
- February 2025 — Prohibitions on unacceptable-risk AI apply; AI literacy obligations (Article 4) take effect
- August 2025 — Rules for general-purpose AI models (GPAIs) apply, including those underlying large language models
- August 2026 — High-risk AI obligations apply for most high-risk categories
- August 2027 — AI Act obligations apply to high-risk AI systems that are safety components of products regulated under existing EU harmonisation legislation
If you use a third-party AI tool classified as high-risk, your obligation to understand and document its compliance status begins in August 2026. If you deploy general-purpose AI models in your product, the GPAI rules applied from August 2025.
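For teams tracking these deadlines internally, the phase-in above can be sketched as a simple lookup. This is an illustrative helper, not official tooling: the milestone names are invented for this sketch, and the dates reflect the commonly cited application dates (the 2nd of the relevant month).

```python
from datetime import date

# AI Act application dates from the timeline above (milestone names are
# invented labels for this sketch, not terms from the regulation).
AI_ACT_MILESTONES = {
    "prohibited_practices": date(2025, 2, 2),
    "gpai_rules": date(2025, 8, 2),
    "high_risk_most": date(2026, 8, 2),
    "high_risk_embedded_products": date(2027, 8, 2),
}

def obligations_in_force(today: date) -> list[str]:
    """Return the milestone names whose obligations apply on a given date."""
    return [name for name, applies_from in AI_ACT_MILESTONES.items()
            if today >= applies_from]

print(obligations_in_force(date(2026, 3, 1)))
# → ['prohibited_practices', 'gpai_rules']
```

A lookup like this is useful for roadmap planning, but the legal trigger is always the regulation itself, not your calendar code.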
Where GDPR and the AI Act Overlap
The GDPR EU AI Act overlap is significant wherever AI processes personal data — which is most commercial AI use.
GDPR applies whenever an AI system processes personal data about identified or identifiable natural persons. That includes:
- Training AI on datasets containing personal data
- Using AI to make decisions about individuals (recruitment, credit, pricing)
- Deploying AI assistants that access user account data
- Running AI analytics on user behaviour
The AI Act applies to AI systems based on their risk classification — and most high-risk AI categories directly involve processing personal data. CV screening tools process candidate data. Creditworthiness models process financial and behavioural data. Biometric systems process biometric data, which is special category data under GDPR.
The result: for high-risk AI systems that process personal data, companies must comply with both frameworks simultaneously.
GDPR Obligations for AI Systems
Even before the AI Act, GDPR imposed specific obligations on AI systems that process personal data. These continue in full under the GDPR EU AI Act dual-compliance regime.
Data minimisation (Article 5(1)(c)) — AI systems must only process personal data that is adequate, relevant, and limited to what is necessary. Training a model on a rich dataset of historical customer data requires justification that each data field is necessary.
Purpose limitation (Article 5(1)(b)) — Data collected for one purpose cannot be repurposed to train an AI model without a compatible legal basis. Many companies have made this mistake: scraping their CRM data to train a custom model without analysing whether the original collection purpose covers AI training.
Transparency (Articles 13 and 14) — Individuals must be informed if their data is used in AI systems. Privacy notices must describe the logic involved, the significance, and the envisaged consequences of AI processing that significantly affects them.
Automated decision-making (Article 22) — Individuals have the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects. Where such decisions are necessary, individuals must be able to obtain human review, contest the decision, and express their point of view.
Legal basis — Processing personal data for AI purposes requires a valid legal basis. Consent, legitimate interest, and contract are the most common. Consent must be specific, informed, and freely given — not buried in terms of service.
AI Act Obligations for High-Risk Systems
For AI systems classified as high-risk, the AI Act imposes obligations that go beyond GDPR in important ways:
Conformity assessment — Before deploying a high-risk AI system, providers must demonstrate conformity with AI Act requirements. For some categories, this requires third-party assessment; for others, self-assessment with documented evidence.
Technical documentation — Providers must maintain detailed technical documentation covering the system's design, training data, performance metrics, intended purpose, and known limitations.
Human oversight — High-risk AI systems must be designed to allow effective human oversight. They must include the ability for humans to monitor, intervene, and override system outputs.
Transparency to users — Deployers of high-risk AI systems must provide meaningful information to affected individuals about the AI system being used and its implications.
Accuracy, robustness, and cybersecurity — High-risk AI systems must meet standards for accuracy, resilience against errors, and protection against adversarial attacks.
Post-market monitoring — Providers must implement systems to monitor AI performance after deployment and report serious incidents to authorities.
Using Personal Data to Train AI: The GDPR EU AI Act Intersection
Training AI on personal data is one of the most complex areas of the GDPR EU AI Act interaction.
Under GDPR, you need a lawful basis to process personal data for AI training. If you trained a recommendation model on user behaviour data, the lawful basis used for the original data collection (e.g., consent to use the service) may not extend to AI training without explicit disclosure.
Under the AI Act, if the resulting trained model is high-risk, you additionally need to maintain documentation about the training datasets — their source, curation methodology, coverage, and potential biases.
The combination means:
- Identify your lawful basis for using the personal data in training (GDPR)
- Document that you informed individuals their data could be used for training (GDPR Articles 13/14)
- If the model is high-risk, produce technical documentation about the training dataset (AI Act)
- Maintain records of processing activities covering the training operation (GDPR Article 30)
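A minimal way to make those four steps auditable is a per-training-run record that holds the GDPR and AI Act evidence in one place. This is a sketch assuming nothing beyond this article; all field names are invented, not regulatory terms.

```python
from dataclasses import dataclass, field

@dataclass
class TrainingRunRecord:
    """Dual-compliance record for one AI training run on personal data.

    Field names are illustrative, not drawn from either regulation.
    """
    # GDPR side: lawful basis and transparency evidence
    lawful_basis: str        # e.g. "consent", "legitimate interest"
    notice_reference: str    # where the Articles 13/14 disclosure was made
    ropa_entry_id: str       # Article 30 record of processing reference
    # AI Act side: training data documentation (high-risk systems only)
    high_risk: bool = False
    dataset_sources: list[str] = field(default_factory=list)
    curation_methodology: str = ""
    known_biases: list[str] = field(default_factory=list)

    def gaps(self) -> list[str]:
        """Flag missing documentation before the run is signed off."""
        issues = []
        if not self.lawful_basis:
            issues.append("no lawful basis identified (GDPR)")
        if self.high_risk and not self.dataset_sources:
            issues.append("no training data sources documented (AI Act)")
        return issues
```

Running a `gaps()` check before sign-off turns the checklist into an enforceable gate rather than a document nobody reads.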
This is not theoretical. The Italian Data Protection Authority (Garante) investigated ChatGPT in 2023 specifically on GDPR grounds related to training data lawful basis. Enforcement at the GDPR EU AI Act intersection is already live.
Prohibited AI Practices: Mostly GDPR Violations Too
The AI Act's prohibited practices — unacceptable-risk AI — largely overlap with processing that would already violate GDPR.
Social scoring — Assigning individuals or groups scores based on their social behaviour or personal characteristics for preferential or adverse treatment. In the final text of the Act, this prohibition covers private operators as well as public authorities. It would also violate GDPR Article 22 (automated decision-making) and likely constitute processing without a valid legal basis.
Real-time biometric surveillance in public spaces — Processing biometric data of individuals in public spaces without their knowledge constitutes processing special category data (GDPR Article 9) without a valid exception.
Subliminal manipulation — AI that exploits psychological vulnerabilities to influence behaviour would constitute unfair processing (GDPR Article 5(1)(a)); any consent obtained through such manipulation would not be freely given, and would therefore be invalid.
The practical implication: if you are avoiding prohibited AI practices, you are already some way toward GDPR compliance for those use cases — but the AI Act prohibition is absolute, with no legal basis override.
High-Risk AI Categories and GDPR Article 22
The GDPR EU AI Act overlap is most pronounced in the high-risk AI categories that involve decisions about individuals.
High-risk AI categories under the AI Act include:
- CV screening and recruitment AI — Also subject to GDPR Article 22 (automated decision-making) and data subject rights to explanation and human review
- Creditworthiness assessment AI — Also caught by GDPR Article 22; credit decisions based solely on automated profiling require human review on request
- Biometric identification systems — Also processing special category data under GDPR Article 9, requiring explicit consent or another Article 9(2) exception
- Educational access decisions — AI that determines whether someone gains access to educational institutions; also engages GDPR Article 22 for significant effects
For each of these categories, running GDPR Article 22 compliance (documenting the logic, enabling human review, providing meaningful information) is a prerequisite, not an alternative, to AI Act high-risk compliance.
The DPIA-AI Act Documentation Connection
Data Protection Impact Assessments (DPIAs) under GDPR Article 35 are required when processing is likely to result in high risk to individuals. The GDPR criterion maps closely to the AI Act's high-risk classification.
If you are deploying a high-risk AI system under the AI Act, you almost certainly need a DPIA under GDPR. Both documents require analysis of the same underlying facts: what data is processed, how decisions are made, what safeguards are in place, and what risks remain.
This creates an opportunity for efficiency. The GDPR EU AI Act documentation overlap means that a well-prepared DPIA and the AI Act's technical documentation requirement can be built from the same evidence base:
- Processing purpose → also AI Act intended purpose documentation
- Data sources and categories → also AI Act training data documentation
- Risk assessment → also AI Act risk management system
- Safeguards → also AI Act human oversight and accuracy measures
- Consultation outcomes → also AI Act post-market monitoring plans
Coordinate your privacy and AI governance teams to produce integrated documentation rather than parallel, duplicated efforts.
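One way to implement that coordination is to keep a single evidence base and derive both document outlines from it. A sketch using the mapping above; the section names are illustrative, not the regulations' own headings.

```python
# DPIA sections mapped to their AI Act documentation counterparts,
# mirroring the list above. Section names are illustrative.
SHARED_EVIDENCE_MAP = {
    "processing_purpose": "intended_purpose",
    "data_sources_and_categories": "training_data_documentation",
    "risk_assessment": "risk_management_system",
    "safeguards": "human_oversight_and_accuracy_measures",
    "consultation_outcomes": "post_market_monitoring_plan",
}

def build_outlines(evidence: dict[str, str]) -> tuple[dict, dict]:
    """Derive DPIA and AI Act documentation outlines from one evidence base."""
    dpia = {section: evidence.get(section, "TODO")
            for section in SHARED_EVIDENCE_MAP}
    ai_act = {SHARED_EVIDENCE_MAP[section]: content
              for section, content in dpia.items()}
    return dpia, ai_act
```

Any section still marked "TODO" in one outline is, by construction, also missing from the other, which is exactly the duplication the integrated approach avoids.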
Governance: AI Responsible Person and the DPO Role
The AI Act does not create a direct counterpart to the DPO, but it does require deployers of high-risk AI systems to assign human oversight to natural persons with the necessary competence, training, and authority (Article 26). Many organisations are formalising this as an "AI responsible person" or AI compliance officer role.
GDPR, for its part, requires a Data Protection Officer (DPO) where core activities involve large-scale systematic monitoring of individuals or large-scale processing of special category data (Article 37). Where high-risk AI systems process personal data, which is most of them, both functions are likely to be engaged.
In practice, for many organisations, particularly smaller ones, these roles will overlap substantially. The DPO's expertise in data governance, individual rights, and risk assessment is directly relevant to AI Act compliance. Building an integrated privacy and AI governance function, rather than two separate siloes, is both more efficient and more coherent.
Third-Party AI Tools: Vendor Due Diligence Under Both Frameworks
Many businesses do not build their own AI systems — they use third-party tools: AI writing assistants, customer service chatbots, marketing personalisation platforms, AI-powered analytics.
The GDPR EU AI Act framework applies regardless of whether you built the AI or bought it.
Under GDPR, if a third-party AI tool processes personal data on your behalf, you are a data controller and the vendor is a data processor. You need a Data Processing Agreement (DPA). You need to understand what data the tool processes, where it goes, and how long it is retained.
Under the AI Act, if the third-party tool is a high-risk AI system, you as the deployer have obligations to verify conformity, implement human oversight measures, and, in certain cases (public bodies, private entities providing public services, and deployers of certain credit and insurance AI), conduct a fundamental rights impact assessment (Article 27). You cannot outsource these obligations to the vendor — you remain responsible.
Vendor due diligence checklist for AI tools:
- Is the tool classified as high-risk under the AI Act? Request the provider's conformity documentation.
- Does the tool process personal data? Obtain a DPA with adequate data processing and sub-processor terms.
- Does the tool make or substantially influence decisions about individuals? Assess Article 22 obligations.
- Is the tool's training data disclosed? Evaluate any GDPR lawful basis implications.
- Where is data processed? Assess international transfer implications under GDPR Chapter V.
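The checklist above lends itself to a simple per-vendor tracking structure. A sketch; the keys and wording are this article's, not regulatory terminology.

```python
# Vendor due-diligence items mirroring the checklist above.
VENDOR_CHECKS = [
    ("high_risk_classification", "Conformity documentation obtained from provider?"),
    ("personal_data", "DPA with adequate sub-processor terms in place?"),
    ("automated_decisions", "GDPR Article 22 obligations assessed?"),
    ("training_data_disclosed", "Lawful basis implications evaluated?"),
    ("processing_location", "Chapter V international transfer assessment done?"),
]

def outstanding_checks(completed: set[str]) -> list[str]:
    """Return the checklist questions still open for a given vendor."""
    return [question for key, question in VENDOR_CHECKS
            if key not in completed]
```

Reviewing `outstanding_checks` per vendor at procurement and at contract renewal keeps the assessment current as tools add features.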
Practical Checklist: 7 Steps for GDPR + AI Act Dual Compliance
1. Inventory your AI systems. List every AI tool in use across your business — purchased, built, or accessed via API. Classify each by AI Act risk tier.
2. Identify high-risk AI systems. For each high-risk system, confirm you have (or will have by August 2026) the required conformity documentation, human oversight mechanisms, and logging.
3. Audit AI-related personal data processing. For every AI system that processes personal data, document the lawful basis, the data categories, the retention periods, and the data flows. Update your Article 30 records.
4. Update privacy notices. Ensure your privacy policy and Article 13/14 notices describe AI processing, the logic involved, and the rights individuals have — including Article 22 rights for automated decision-making.
5. Conduct DPIAs for high-risk processing. Align your DPIA process with AI Act documentation requirements. Build integrated evidence covering both GDPR and AI Act requirements.
6. Review AI training data lawful basis. For any AI model trained on personal data, confirm the lawful basis covers AI training, or that individuals were adequately informed.
7. Establish vendor due diligence. For third-party AI tools, obtain DPAs, conformity declarations where applicable, and document your assessment of the vendor's AI Act compliance status.
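Step 1 can be bootstrapped with a first-pass classifier over your AI inventory. This keyword-style tiering is only a triage aid built from the categories discussed in this article; actual classification requires legal review against Annex III of the Act.

```python
# First-pass risk tiering for an AI inventory. Category labels echo the
# examples in this article; real classification needs legal review.
HIGH_RISK_USES = {"cv_screening", "creditworthiness", "biometric_id",
                  "educational_access", "critical_infrastructure"}
LIMITED_RISK_USES = {"chatbot"}

def first_pass_tier(use_case: str) -> str:
    """Rough AI Act tier for a use case; default to manual verification."""
    if use_case in HIGH_RISK_USES:
        return "high"
    if use_case in LIMITED_RISK_USES:
        return "limited"
    return "minimal (verify manually)"

# Hypothetical inventory: tool name -> use case label
inventory = {"support-bot": "chatbot", "hiring-tool": "cv_screening"}
tiers = {name: first_pass_tier(use) for name, use in inventory.items()}
# e.g. tiers["hiring-tool"] == "high"
```

Even a rough tiering like this makes the August 2026 deadline concrete: every tool that lands in the "high" bucket needs conformity documentation, oversight mechanisms, and logging by then.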
Conclusion: GDPR EU AI Act Compliance Is One Problem, Not Two
The GDPR EU AI Act interaction is complex, but the underlying question is simpler than it appears: are you using AI responsibly, transparently, and with respect for individuals' rights and safety?
Both frameworks are ultimately asking the same questions in different ways. GDPR asks: what are you doing with people's data? The AI Act asks: what are you doing with AI that affects people? For AI systems that process personal data — the majority of commercial AI use cases — the answer to both questions must be robust.
The practical path forward is integrated governance: privacy and AI compliance built together, using shared documentation, shared risk assessments, and shared accountability structures.
If your website uses AI-powered tools that process visitor data, the first step is understanding what those tools actually do. Custodia scans your site to identify trackers, analytics, and AI-powered tools that collect visitor data — and generates the documentation you need to demonstrate compliance.
Scan your website to identify AI-powered tools that process visitor data at Custodia
Last updated: March 27, 2026. This post provides general information about GDPR and EU AI Act compliance. It does not constitute legal advice. Both frameworks are subject to ongoing regulatory guidance — consult a qualified privacy and AI law professional for advice specific to your organisation.