Chhayashree
When Security Failures Become Legal Liabilities: Mapping OWASP Top 10 to GDPR and DPDP

Most developers treat the OWASP Top 10 as a security checklist. Regulators don’t. They treat the same issues as legal violations.

Imagine this scenario:

Attackers divert users from your website to a fraudulent one. Around 500,000 customers have their data exposed — login details, payment information, travel records, even CVV numbers. Soon after, regulators step in. A fine is announced — initially in nine digits, later reduced, but still significant.

At first glance, it looks like a large-scale breach caused by “poor security.” But break it down, and the picture becomes more precise:

  • Users were redirected → a failure in application integrity
  • Sensitive data was exposed → weak data protection controls
  • Data was harvested at scale → lack of monitoring and detection

What appears to be a single incident is actually a chain of well-known failures — many of which are outlined in the OWASP Top 10. And it didn’t stop at a security failure. It became a regulatory one. This is exactly what happened in the 2018 data breach involving British Airways.

Investigations by the UK Information Commissioner’s Office (ICO) found that British Airways had failed to process personal data in a manner that ensured appropriate security, specifically violating Article 5(1)(f) and Article 32 of the GDPR. The airline was found to have used hardcoded passwords in unencrypted plain text files and lacked multi-factor authentication (MFA) for remote access — vulnerabilities that fall squarely under OWASP categories.

For developers and architects, the lesson is clear: code is no longer just logic; it defines your legal exposure. To design a system that survives both hackers and regulators, we must map technical risks to their statutory counterparts.

A01: The “Stranger’s Profile” Problem

Consider a user who discovers that changing a single digit in a URL allows them to view a stranger’s transaction history. This is an Insecure Direct Object Reference (IDOR), a classic example of Broken Access Control. These failures allow users to act outside their intended permissions, leading to the unauthorized disclosure or modification of data.
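
As a minimal sketch of the fix, assuming a Flask app where the session already carries the authenticated user’s ID (the in-memory TRANSACTIONS table is a stand-in for a real database):

```python
from flask import Flask, abort, jsonify, session

app = Flask(__name__)
app.secret_key = "change-me"  # session signing key; load from config in practice

TRANSACTIONS = {  # stand-in for a database table
    101: {"owner_id": 1, "amount": 42.0},
    102: {"owner_id": 2, "amount": 99.0},
}

@app.route("/transactions/<int:txn_id>")
def get_transaction(txn_id):
    txn = TRANSACTIONS.get(txn_id)
    if txn is None:
        abort(404)
    # The IDOR fix: authorize against the session identity,
    # instead of trusting whatever ID appears in the URL.
    if txn["owner_id"] != session.get("user_id"):
        abort(404)  # 404 rather than 403 avoids confirming the record exists
    return jsonify(amount=txn["amount"])
```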

The GDPR addresses this through the principle of Integrity and Confidentiality. Article 5(1)(f) requires that personal data be processed in a manner that ensures protection against “unauthorized or unlawful processing” using appropriate technical measures. India’s DPDP framework is equally direct; Rule 6(1)(b) of the DPDP Rules 2025 mandates “appropriate measures to control access to the computer resources” used for processing to prevent such breaches.

A02: A Statutory Obligation to Harden Systems

Regulators view Security Misconfiguration — such as leaving “debug” modes active or using default passwords — not just as an oversight, but as a failure of the mandatory evaluation process. If an administrator leaves a cloud storage bucket open to the public, the organization has failed to observe the safeguards required by law.
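
A hedged illustration of treating this as process rather than oversight: a start-up guard that refuses to boot with debug mode or well-known defaults, so a misconfiguration fails loudly before any personal data is processed (the environment variable names are assumptions, not a standard):

```python
import os
import sys

FORBIDDEN_DEFAULTS = {"admin", "password", "changeme"}

def assert_hardened() -> None:
    """Fail fast if the deployment still carries development defaults."""
    problems = []
    if os.environ.get("APP_DEBUG", "false").lower() == "true":
        problems.append("APP_DEBUG is enabled in production")
    if os.environ.get("DB_PASSWORD", "") in FORBIDDEN_DEFAULTS:
        problems.append("DB_PASSWORD is a well-known default")
    if problems:
        sys.exit("refusing to start: " + "; ".join(problems))

if __name__ == "__main__":
    assert_hardened()
```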

This ties back to GDPR’s core security obligation under Article 32(1)(d), which requires a “process for regularly testing, assessing and evaluating” security measures. Similarly, DPDP Rule 6(1)(g) requires “appropriate technical and organizational measures to ensure effective observance” of security safeguards.

A03: The Cascading Risk of Dependencies

Modern software isn’t built from scratch; it’s assembled from thousands of third-party blocks. The Software Supply Chain Failures category highlights that a single compromised library can put your entire data ecosystem at risk, as seen with Log4j.
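
One hedged sketch of supply-chain hygiene: pin the exact SHA-256 of every third-party artifact you ship and verify it at start-up (the JSON lockfile format here is invented for illustration; for Python dependencies, `pip install --require-hashes` provides the same guarantee):

```python
import hashlib
import json
from pathlib import Path

def verify_artifacts(lockfile: str) -> None:
    """Compare each vendored file against its pinned SHA-256 digest."""
    pinned = json.loads(Path(lockfile).read_text())  # {"vendor/lib.py": "<sha256>"}
    for relpath, expected in pinned.items():
        actual = hashlib.sha256(Path(relpath).read_bytes()).hexdigest()
        if actual != expected:
            raise RuntimeError(f"{relpath}: digest mismatch, refusing to start")

# Run once at process start-up, before any third-party code is imported:
# verify_artifacts("artifact-hashes.json")
```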

Supply chain integrity is legally anchored in the relationship between the “Controller” and the “Processor.” GDPR Article 28(1) states that a controller must use only processors providing “sufficient guarantees” to implement appropriate security measures. DPDP Rule 6(1)(f) mirrors this by requiring “appropriate provision in the contract” between fiduciaries and processors for taking reasonable safeguards. The lesson for architects is simple: you can outsource code, not responsibility.

A04: Plain Text is a Legal Liability

Storing passwords in unencrypted plain text files was one of the primary findings in the British Airways investigation. Cryptographic Failures involve the lack of encryption or the use of weak algorithms, making sensitive data an easy target for exfiltration.
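
A minimal sketch using only the Python standard library’s scrypt KDF; a production system would typically reach for a maintained library such as argon2 or bcrypt:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a one-way hash; only the salt and digest are ever stored."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("s3cret")
assert verify_password("s3cret", salt, digest)
```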

This is where the law is at its most explicit. GDPR Article 32(1)(a) targets precisely this failure, recommending “the pseudonymization and encryption of personal data.” On the Indian side, DPDP Rule 6(1)(a) mandates “securing of personal data through encryption, obfuscation, masking” or virtual tokens. Encryption is no longer a best practice; it is a statutory baseline.

A05: Protecting the Integrity of the Command

Injection vulnerabilities, like SQL injection (SQLi) or Cross-Site Scripting (XSS), allow untrusted input to be interpreted as part of a command or query, primarily threatening the confidentiality and integrity of data.
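
The standard defence is parameterization, shown here as a small sketch with Python’s built-in sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES ('alice@example.com')")

user_input = "alice@example.com' OR '1'='1"  # hostile input

# Vulnerable: untrusted input spliced into the query itself.
# rows = conn.execute(f"SELECT * FROM users WHERE email = '{user_input}'")

# Safe: the placeholder keeps data as data; the query structure cannot change.
rows = conn.execute("SELECT * FROM users WHERE email = ?", (user_input,))
print(rows.fetchall())  # [] — the injection payload matches nothing
```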

Under the GDPR, this aligns most directly with Article 5(1)(f) (integrity and confidentiality) and Article 32, which require secure processing. In the Indian context, the DPDP Act Section 8(3) adds a secondary layer of concern regarding data accuracy, requiring fiduciaries to make “reasonable efforts” to ensure that personal data is “accurate and complete.” An injection attack that modifies a user’s record is a failure of both security and the legal standard for data accuracy.

A06: When Code Cannot Fix a Flawed Blueprint

Code-level patches cannot compensate for a fundamentally flawed design. Insecure Design focuses on risks related to architectural flaws, such as missing threat modeling or insecure business logic flows.
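
Design-level controls are hard to show in a snippet, but here is one hedged sketch of “by default”: an explicit allowlist serializer, so a newly added field stays private unless someone deliberately exposes it (the field names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class User:
    id: int
    email: str
    phone: str
    password_hash: str

# A design decision, not a patch: fields are private unless explicitly
# allowlisted, so a new column never leaks by accident ("by default").
PUBLIC_FIELDS = {"id"}

def to_public_dict(user: User) -> dict:
    return {name: getattr(user, name) for name in PUBLIC_FIELDS}

print(to_public_dict(User(1, "a@example.com", "555-0100", "x")))  # {'id': 1}
```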

This category maps directly to the legal principle of “Data Protection by Design and by Default.” GDPR Article 25(1) requires organizations to implement measures “designed to implement data-protection principles” at the time the processing means are determined. For larger entities in India, DPDP Rule 13(3) imposes a specific duty to verify that their technical measures, including “algorithmic software,” are not likely to pose a risk to user rights.

A07: The Gatekeeper’s Failure

If an attacker can bypass a login page because the system lacks MFA or has weak session management, the front door is effectively left open. Authentication Failures allow attackers to assume the identities of legitimate users.
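
As a hedged sketch of one MFA building block, here is an RFC 6238 time-based one-time password in pure standard-library Python (a real deployment would use a vetted library, secure secret enrolment, and rate limiting; the base32 secret below is the common documentation test value):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute the current RFC 6238 code for a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(submitted: str, secret_b32: str) -> bool:
    return hmac.compare_digest(submitted, totp(secret_b32))

print(totp("JBSWY3DPEHPK3PXP"))  # second factor alongside the password
```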

Authentication is the foundational mechanism for ensuring that personal data is only accessible to the correct individual. Under GDPR Article 12(1), communication with the data subject must be accessible and transparent, but Article 12(6) adds a critical security layer: if a controller has “reasonable doubts concerning the identity” of a person making a request, they are empowered to request additional information for confirmation. This makes strong authentication a prerequisite for fulfilling the “Right of Access.” More broadly, this is another place where Article 32 becomes enforceable. The DPDP framework follows a similar path; Rule 10(1) requires “appropriate technical and organizational measures” to ensure “verifiable consent” of a parent when handling children’s data.

A08: Resilience Against Unauthorized Updates

A system’s resilience is tested when it accepts software updates or critical data without verification. Software and Data Integrity Failures often lead to Remote Code Execution (RCE) if an application blindly trusts a tampered file.
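
A hedged sketch of an integrity gate: refuse to apply an update unless its digest matches a signature computed with a key the attacker does not hold (HMAC keeps the example standard-library only; real pipelines use asymmetric signing, e.g. GPG or Sigstore):

```python
import hashlib
import hmac
from pathlib import Path

SIGNING_KEY = b"rotate-me"  # shared secret; real pipelines use asymmetric keys

def sign_update(package: Path) -> str:
    return hmac.new(SIGNING_KEY, package.read_bytes(), hashlib.sha256).hexdigest()

def apply_update(package: Path, signature: str) -> None:
    expected = sign_update(package)
    # Integrity gate: a tampered or unsigned file is rejected before execution.
    if not hmac.compare_digest(expected, signature):
        raise RuntimeError(f"{package}: signature mismatch, update rejected")
    print(f"applying verified update {package}")

pkg = Path("update.bin")
pkg.write_bytes(b"new-version")
apply_update(pkg, sign_update(pkg))
```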

The legal requirement for “System Resilience” and “Availability” is directly threatened here. This ties back to GDPR’s core security obligation under Article 32(1)(b), which mandates the “ability to ensure the ongoing confidentiality, integrity, availability and resilience” of systems. DPDP Rule 6(1)(d) echoes this, requiring “reasonable measures for continued processing” in the event that data integrity is compromised, specifically suggesting the use of “data-backups.”

A09: The Forensic Clock is Ticking

Without logs, you don’t detect the breach. And if you don’t detect it, you miss the legal window to report it. Security Logging and Alerting Failures prevent the timely response that regulators expect.
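
A hedged sketch of the visibility Rule 6(1)(c) contemplates: every read of a personal-data record emits a structured, timestamped audit event (the field names are my own, not taken from the Rules):

```python
import json
import logging
import time

audit = logging.getLogger("audit")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_access(actor_id: int, record_id: int, action: str) -> None:
    """Emit one structured audit event per access to personal data."""
    audit.info(json.dumps({
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "actor": actor_id,
        "record": record_id,
        "action": action,
    }))

def read_profile(actor_id: int, record_id: int) -> dict:
    log_access(actor_id, record_id, "read_profile")
    return {"id": record_id}  # stand-in for the real lookup

read_profile(actor_id=7, record_id=101)
```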

This category is critical for meeting notification mandates. GDPR Article 33(1) requires notification to the authority “not later than 72 hours after having become aware” of a breach. The DPDP Rules are even more specific about visibility. Rule 6(1)(c) mandates “visibility on the accessing of such personal data, through appropriate logs,” while Rule 6(1)(e) requires organizations to “retain such logs and personal data for a period of one year” for investigation and remediation.

A10: The Risk of “Failing Open”

Consider a scenario where an external authentication service times out, and the application defaults to granting access because it doesn’t know how to handle an unexpected state. This is a classic example of Mishandling of Exceptional Conditions, which often results in systems “failing open.”
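
A hedged sketch of failing closed, assuming a hypothetical internal authorization service: when the authorizer is unreachable, the answer is “deny,” never “assume yes”:

```python
import urllib.request
from urllib.error import URLError

AUTHZ_URL = "https://authz.internal/check"  # hypothetical authorization service

def is_authorized(user_id: int, resource: str) -> bool:
    try:
        with urllib.request.urlopen(
            f"{AUTHZ_URL}?u={user_id}&r={resource}", timeout=2
        ) as resp:
            return resp.status == 200
    except (URLError, TimeoutError):
        # Fail closed: an unreachable or slow authorizer means "deny",
        # never "grant". Availability degrades; confidentiality does not.
        return False

# Usage: if is_authorized(user_id, "transactions/101"): ...
```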

This risk maps back to the resilience mandates in both laws. GDPR Article 32(1)(b) requires the “ability to ensure the ongoing… resilience of processing systems.” In the Indian context, DPDP Rule 6(1)(d) requires measures for “continued processing” in the event of availability being compromised. A system that crashes and exposes raw data or skips an authorization check has failed its legal requirement to remain resilient under stress.

Summary Reference Guide

| OWASP Category | GDPR | DPDP |
| --- | --- | --- |
| A01 Broken Access Control | Art. 5(1)(f), Art. 32 | Rule 6(1)(b) |
| A02 Security Misconfiguration | Art. 32(1)(d) | Rule 6(1)(g) |
| A03 Software Supply Chain Failures | Art. 28(1) | Rule 6(1)(f) |
| A04 Cryptographic Failures | Art. 32(1)(a) | Rule 6(1)(a) |
| A05 Injection | Art. 5(1)(f), Art. 32 | Act s. 8(3) |
| A06 Insecure Design | Art. 25(1) | Rule 13(3) |
| A07 Authentication Failures | Art. 12(6), Art. 32 | Rule 10(1) |
| A08 Software and Data Integrity Failures | Art. 32(1)(b) | Rule 6(1)(d) |
| A09 Security Logging and Alerting Failures | Art. 33(1) | Rules 6(1)(c), 6(1)(e) |
| A10 Mishandling of Exceptional Conditions | Art. 32(1)(b) | Rule 6(1)(d) |

These mappings are interpretative and illustrate how technical risks may translate into legal obligations, rather than strict one-to-one legal classifications.

The Path Forward: Building for Both Security and Compliance

The OWASP Top 10 highlights how applications fail in practice, while regulations like the GDPR and the Digital Personal Data Protection Act define what those failures cost. Looking at them together changes how we approach system design.

Security is no longer just about preventing exploits. It is about ensuring that systems behave predictably under stress, protect data by default, and provide enough visibility to respond when something goes wrong. For developers and architects, this means shifting from fixing vulnerabilities to engineering systems that are resilient, observable, and controlled from the start.

Because in practice, compliance is not achieved through documentation or policies. It is achieved through how the system is built. Which OWASP category do you think creates the biggest compliance risk in real systems?


This article is an independent analysis connecting application security risks with regulatory frameworks. If you spot gaps or inaccuracies, I’d value the correction — this space evolves, and so should our understanding.
