GDPR Automated Decision Making: What Article 22 Requires
Your ML model scores loan applications. Your algorithm screens job candidates. Your pricing engine adjusts insurance premiums based on behavioural data. If any of these sound familiar, GDPR has specific rules that apply — and they are more demanding than most companies realise.
Article 22 of the GDPR gives individuals a fundamental right not to be subject to decisions based solely on automated processing when those decisions produce significant legal or similarly significant effects. As AI-powered scoring, credit decisions, and personalisation become ubiquitous across almost every industry, understanding when GDPR automated decision making rules apply — and what you must do — is no longer optional.
This guide walks through what Article 22 covers, what counts as "significant effects", the three lawful bases, disclosure obligations, the rights to explanation and human review, and how the EU AI Act interacts with existing GDPR obligations.
What Article 22 Actually Covers
Article 22(1) states that a data subject "shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her."
Three elements must all be present for Article 22 to apply:
- A decision is made — not just data collection or analysis, but an actual output that determines something
- The decision is based solely on automated processing — no meaningful human involvement
- The decision produces significant effects — legal effects or similarly significant consequences
Profiling may be involved, and usually is, but it is not a required element. Miss any one of the three elements and Article 22 does not apply, though other GDPR provisions around transparency and data minimisation still do.
What Counts as "Significant Effects"
The GDPR does not exhaustively define what constitutes a "significant effect", but the Article 29 Working Party (now the European Data Protection Board) has provided guidance. Legal effects are the clearest cases: decisions that affect someone's legal rights, legal status, or legal position. But "similarly significant" effects have a much wider reach.
Examples that clearly fall within GDPR automated decision making rules:
- Credit scoring and loan decisions — approving or rejecting a mortgage, setting credit limits, or determining interest rates based on automated risk scoring
- Insurance pricing — using telematics, health data, or behavioural patterns to automatically set premiums
- Job application screening — using automated CV parsing or scoring tools to decide which candidates advance without human review
- Benefits determination — government systems that automatically calculate eligibility for welfare, housing, or healthcare support
- Fraud detection decisions — automatically blocking a bank account or flagging a transaction without human review
- Clinical decision support — systems that automatically determine treatment protocols (whether these fall within Article 22 is debated, but the effects on patients are plainly significant)
Examples in a grey area:
- Targeted advertising — the EDPB's view is that behavioural profiling remains subject to GDPR transparency obligations, but the mere delivery of a targeted ad may not constitute a "decision" with significant effects unless it results in exclusion from opportunities
- Content personalisation — showing different Netflix recommendations likely does not meet the threshold; denying someone access to a service based on their profile likely does
- Dynamic pricing — generally low risk unless the pricing creates discriminatory barriers to access
The key test is whether the individual is meaningfully affected in a way that impacts their choices, opportunities, or wellbeing. If someone is denied a loan, screened out of a job pool, or charged a materially different price — that is significant.
What "Solely Automated" Means
This is where many companies make a costly assumption. "Solely automated" does not mean that no human ever sees the output. It means there is no meaningful human review that actually influences the decision.
A human rubber-stamping an algorithm's recommendation — without independently assessing the underlying data, questioning the system's output, or having genuine authority to override — does not break the "solely automated" chain under GDPR.
The EDPB has been clear: for human involvement to take Article 22 out of scope, the reviewer must:
- Have the authority and ability to change the decision
- Actually review and consider the individual's circumstances
- Not be processing so many cases that meaningful review is impossible
This matters enormously for high-volume automated processes. If a credit provider processes 10,000 loan applications per day with a team of five reviewers, the "human review" is likely cosmetic rather than meaningful. GDPR automated decision making obligations would still apply.
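The arithmetic makes the point. A back-of-the-envelope sketch (the daily volume, team size, and working hours are the assumed figures from the example above):

```python
# Back-of-the-envelope check: can five reviewers meaningfully review
# 10,000 automated loan decisions per day? (Assumed figures.)
applications_per_day = 10_000
reviewers = 5
working_seconds = 8 * 60 * 60  # one 8-hour shift, no breaks

per_reviewer = applications_per_day / reviewers    # 2,000 cases each
seconds_per_case = working_seconds / per_reviewer  # 14.4 seconds

print(f"{per_reviewer:.0f} cases per reviewer, "
      f"{seconds_per_case:.1f} seconds per case")
# ~14 seconds per application is nowhere near enough time to assess
# the underlying data, so the review is cosmetic, not meaningful.
```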
The Three Lawful Bases for Automated Decision Making
If Article 22 does apply to your processing, you cannot proceed unless one of three conditions in Article 22(2) is met:
1. Explicit Consent
The individual must have given explicit consent — which under GDPR means a clear affirmative act specifically consenting to this type of processing. Standard checkbox consent at sign-up is unlikely to be specific enough. The consent must:
- Be specific to the automated decision making (not bundled with general terms)
- Be freely given, meaning there must be no penalty for refusing
- Be withdrawable at any time
Explicit consent is often impractical as the sole lawful basis for automated decisions in commercial contexts: consent is not freely given if refusing it means being denied a service the person genuinely needs.
2. Contractual Necessity
Automated decision making is permitted where it is necessary for the performance of a contract between you and the individual, or to take steps at their request before entering into a contract.
The classic example is an automated credit check as part of a loan application — it is necessary to fulfil the contract. However, "necessary" is interpreted narrowly. If the same outcome could reasonably be achieved without automated decision making, this lawful basis is on shaky ground.
3. EU or Member State Law
Automated decision making authorised by EU law or the law of a Member State, which also lays down suitable measures to safeguard the individual's rights and freedoms, provides a third route. Examples include certain tax administration systems or social security processing — but this basis is rarely available to private sector companies.
What You Must Tell Individuals: Articles 13 and 14 Disclosure
Even where automated decision making is lawful, GDPR Article 13(2)(f) and Article 14(2)(g) require controllers to proactively disclose specific information:
- The existence of automated decision making (including profiling)
- Meaningful information about the logic involved — not a full technical specification, but enough for a person to understand how the system works
- The significance and the envisaged consequences — what decisions will be made and what impact those decisions will have on the individual
This information must appear in your privacy policy or notice at the point of collection. Vague statements like "we use automated processes to improve your experience" do not meet this standard. You need to explain, for example: "We use an automated scoring system that assesses your credit risk based on income, debt-to-income ratio, and payment history. This may result in your application being automatically approved or declined."
The obligation to provide meaningful information about the logic is one of the most difficult to operationalise — particularly for complex ML models where the logic is genuinely difficult to explain in plain language.
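One way to keep disclosures specific is to maintain them as structured records per system and render the notice text from those records. A minimal sketch, assuming a simple in-house record format (the field names and example values are illustrative, not a prescribed structure):

```python
from dataclasses import dataclass

@dataclass
class AutomatedDecisionDisclosure:
    """The three elements Articles 13(2)(f) / 14(2)(g) require."""
    system: str
    logic_summary: str   # meaningful information about the logic
    significance: str    # envisaged consequences for the individual

    def notice_text(self) -> str:
        return f"We use {self.system}. {self.logic_summary} {self.significance}"

credit_scoring = AutomatedDecisionDisclosure(
    system="an automated credit scoring system",
    logic_summary=("It assesses your credit risk based on income, "
                   "debt-to-income ratio, and payment history."),
    significance=("Your application may be automatically approved "
                  "or declined as a result."),
)
print(credit_scoring.notice_text())
```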
The Right to Human Review
Where Article 22 applies, individuals have the right — under Article 22(3) — to:
- Obtain human intervention — to have a human actually review the decision
- Express their point of view — to provide additional context or challenge the inputs used
- Contest the decision — to formally object to the outcome
These are distinct from the general right to object under Article 21. The Article 22(3) right specifically attaches to automated decisions and requires a genuine human review pathway — not a form that goes into a queue no one reads.
In practice this means building an operational process:
- A mechanism for individuals to request human review (a form, email address, or in-app process)
- Internal routing to a person with authority to actually change the decision
- A response without undue delay and within one month of receipt — Article 12(3)'s timescale covers requests under Articles 15 to 22, and can be extended by two further months for complex or numerous requests
- Documentation of the review and outcome
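A minimal sketch of what that workflow could look like in code. The one-month deadline from Article 12(3) and the reviewer-authority check are the substantive parts; everything else (names, the routing policy) is an illustrative assumption:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class ReviewRequest:
    """An Article 22(3) human-review request, tracked end to end."""
    subject_id: str
    decision_id: str
    subject_statement: str  # the individual's point of view
    received: date = field(default_factory=date.today)
    outcome: str | None = None

    @property
    def due(self) -> date:
        # Article 12(3): respond within one month of receipt.
        return self.received + timedelta(days=30)

def assign_reviewer(request: ReviewRequest, reviewers: list[dict]) -> dict:
    """Route only to reviewers with genuine authority to change the
    decision — rubber-stamping does not satisfy Article 22(3)."""
    authorised = [r for r in reviewers if r["can_override_decision"]]
    if not authorised:
        raise RuntimeError("No reviewer with override authority available")
    # Pick the least-loaded authorised reviewer (illustrative policy).
    return min(authorised, key=lambda r: r["open_cases"])
```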
The Right to Explanation
The "right to explanation" for automated decisions is one of the most debated points in GDPR scholarship. The GDPR does not contain a standalone "right to explanation" as such. What exists is:
- The obligation to provide information about the logic at collection (Articles 13/14, covered above)
- The right to obtain a meaningful explanation of the decision as part of a human review request
The EDPB's guidance confirms that individuals can request an explanation of an automated decision as part of the Article 22(3) process. But this is different from a general right to a full algorithmic audit. The explanation must be meaningful and specific to the individual's case — why was their particular application declined, not a general description of how the model works.
This is the area where many organisations fall short: they can explain the model in general terms but cannot easily generate a per-individual explanation of why a specific decision was reached.
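For a linear scoring model, a per-decision explanation can be as simple as ranking each feature's contribution to this applicant's score. A minimal sketch, assuming a logistic-regression-style model with known coefficients (the features, weights, and values are invented for illustration; complex models need attribution techniques such as SHAP):

```python
# Per-decision explanation for a linear credit model: rank each
# feature's contribution to this applicant's score relative to the
# population average. All numbers below are illustrative.
coefficients = {"income": 0.8, "debt_to_income": -1.5, "missed_payments": -2.0}
population_mean = {"income": 0.0, "debt_to_income": 0.0, "missed_payments": 0.0}

# This applicant's standardised feature values (z-scores).
applicant = {"income": -0.4, "debt_to_income": 1.2, "missed_payments": 2.0}

contributions = {
    f: coefficients[f] * (applicant[f] - population_mean[f])
    for f in coefficients
}
for feature, contribution in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"{feature}: {contribution:+.2f}")
# Output shows missed_payments (-4.00) and debt_to_income (-1.80)
# drove this decline — a case-specific answer, not a model overview.
```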
Special Category Data: Extra Restrictions
Article 22(4) imposes an additional layer of restriction for automated decisions based on special category data — which includes:
- Health data
- Racial or ethnic origin
- Political opinions
- Religious or philosophical beliefs
- Trade union membership
- Genetic data
- Biometric data used for identification
- Sexual orientation
Automated decisions that rely on special category data are prohibited unless the individual has given explicit consent or the processing is necessary for reasons of substantial public interest under EU or Member State law, with proportionate safeguards.
This is particularly significant for health tech and insurtech companies. Using health data to automatically set insurance premiums without explicit consent is not just a potential Article 22 issue — it engages the Article 9 prohibition on special category processing combined with Article 22(4).
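An audit can start as a simple check of each model's feature list against known special categories and common proxies. A minimal sketch — the proxy list here is deliberately short and illustrative, and real proxy detection also needs statistical testing for correlation with protected attributes:

```python
# Flag model features that are, or commonly proxy for, Article 9
# special category data. Both lists are illustrative starting points.
SPECIAL_CATEGORY = {"health_condition", "ethnicity", "religion",
                    "trade_union", "genetic_data", "biometric_id",
                    "sexual_orientation", "political_opinion"}
COMMON_PROXIES = {"postcode": "ethnic origin / socio-economic status",
                  "pharmacy_spend": "health data",
                  "first_language": "ethnic origin"}

def audit_features(features: list[str]) -> list[str]:
    findings = []
    for f in features:
        if f in SPECIAL_CATEGORY:
            findings.append(f"{f}: special category data (Article 22(4) applies)")
        elif f in COMMON_PROXIES:
            findings.append(f"{f}: possible proxy for {COMMON_PROXIES[f]}")
    return findings

print(audit_features(["income", "postcode", "pharmacy_spend"]))
```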
Profiling vs Automated Decision Making: Related but Distinct
Article 4(4) defines profiling as "any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person." Profiling is extremely common — it includes any analysis that builds a profile of a user's preferences, behaviour, or characteristics.
Key distinctions:
- Profiling alone (without a significant decision) is not subject to Article 22, but is still subject to general GDPR principles: transparency, lawful basis, data minimisation
- Automated decision making based on profiling is the combination that triggers Article 22's full requirements
- Profiling without significant decisions still requires disclosure under Articles 13/14 and must have a lawful basis
Many companies conflate the two and either over-engineer their profiling compliance or miss the Article 22 obligations that apply when profiling drives meaningful automated decisions.
How the EU AI Act Interacts with Article 22
The EU AI Act, which came into force in August 2024 with provisions applying progressively through 2026 and 2027, adds a parallel compliance layer for AI systems used in high-risk contexts.
High-risk AI systems under the AI Act include AI used for:
- Credit scoring
- Employment recruitment and CV screening
- Benefits administration
- Law enforcement risk assessment
- Education access decisions
For these systems, the AI Act imposes requirements including conformity assessments, technical documentation, human oversight obligations, accuracy and robustness requirements, and transparency to affected persons.
The relationship with GDPR automated decision making is complementary rather than duplicative. Where Article 22 focuses on individual rights (to contest, to human review, to information), the AI Act focuses on system-level obligations on the provider and deployer (documentation, testing, oversight mechanisms).
If you deploy a high-risk AI system that also constitutes GDPR automated decision making, you must comply with both frameworks. The AI Act's human oversight requirements align with but do not replace Article 22(3)'s right to human review.
What Companies Must Implement
To operationalise Article 22 compliance, you need the following:
Privacy policy disclosure — specific, meaningful information about each automated decision system: what data it uses, what decisions it produces, and what the consequences are for individuals.
Lawful basis documentation — for each automated decision process, document which of the three lawful bases applies and why. This should sit in your Record of Processing Activities (RoPA); a minimal record structure is sketched below.
Contest mechanism — a clearly accessible route for individuals to request human review and express their view. This cannot be buried or made deliberately difficult.
Human review process — an internal workflow that routes Article 22 requests to someone with genuine authority to change the decision, with a documented response process.
Special category data audit — an audit of all automated decision systems to identify any that use or proxy for special category data, with additional safeguards or explicit consent mechanisms where required.
Per-decision explanation capability — ideally, the ability to generate a specific explanation for each automated decision, not just a general description of the model.
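A minimal shape for that documentation, per system. RoPA formats vary and the field names here are illustrative assumptions, not a regulator-prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class Article22RoPAEntry:
    """One automated decision system's Article 22 record in the RoPA."""
    system_name: str
    decision_made: str         # e.g. "approve/decline loan application"
    solely_automated: bool
    significant_effects: str   # why the effects meet the threshold
    lawful_basis: str          # "explicit consent" | "contract" | "law"
    lawful_basis_rationale: str
    special_category_data: bool
    human_review_process: str  # where Article 22(3) requests are routed
    logic_disclosed_at: str    # link to the privacy notice section

loan_scoring = Article22RoPAEntry(
    system_name="Loan risk scorer v3",
    decision_made="Automatic approval or decline of consumer loans",
    solely_automated=True,
    significant_effects="Denial of credit affects financial opportunities",
    lawful_basis="contract",
    lawful_basis_rationale="Scoring is necessary to assess the loan contract",
    special_category_data=False,
    human_review_process="Routed to credit officers with override authority",
    logic_disclosed_at="privacy-notice#automated-decisions",
)
```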
Practical Checklist: 7 Steps for Article 22 Compliance
- Map your automated decision systems — identify every system that produces decisions about individuals, including third-party tools (credit bureaus, fraud platforms, HR software)
- Apply the Article 22 test — for each system, assess whether it is (a) solely automated and (b) produces significant legal or similarly significant effects (a rough decision helper is sketched after this checklist)
- Establish lawful basis — document whether you are relying on explicit consent, contractual necessity, or legal authorisation for each covered system
- Update privacy notices — add specific disclosure for each covered system: the logic used, the significance, and the consequences
- Build the human review pathway — create an accessible mechanism for individuals to request review, and map it to an internal process with genuine decision authority
- Audit for special category data — check whether any automated system uses or proxies for health, ethnic origin, or other special category data; apply additional safeguards
- Document everything — record your Article 22 compliance analysis in your RoPA, along with decisions about lawful basis and the design of your human review process
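To make step 2 concrete, the test can be expressed as a short decision helper. This is a rough sketch of the legal test, not a substitute for case-by-case analysis; each boolean input still requires human legal judgment:

```python
def article_22_applies(*, decision_made: bool,
                       meaningful_human_review: bool,
                       legal_or_similar_effects: bool) -> bool:
    """Rough encoding of the Article 22(1) test. This only keeps the
    logic of the test explicit and auditable; deciding each input is
    itself a legal judgment call."""
    return (decision_made
            and not meaningful_human_review
            and legal_or_similar_effects)

# Example: automated CV screening that rejects candidates outright.
print(article_22_applies(decision_made=True,
                         meaningful_human_review=False,
                         legal_or_similar_effects=True))  # True
```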
Map Your Processing Activities — Including Automated Decision Systems
Custodia helps organisations understand and document their data processing activities, including identifying and mapping automated decision systems subject to Article 22. Our platform scans your website for data flows, generates privacy policy content that reflects your actual processing, and maintains a Record of Processing Activities that captures the information regulators expect to see.
If you use any form of automated scoring, screening, or personalisation that affects users in significant ways, now is the time to assess whether Article 22 applies — and to build the disclosure, contestation, and human review processes it requires.
Scan your website and start mapping your processing activities at Custodia →
Last updated: March 27, 2026. This post provides general information about GDPR automated decision making requirements under Article 22. It does not constitute legal advice. For advice specific to your organisation and jurisdiction, consult a qualified privacy professional.