Dark Patterns in Tech: How Companies Engineer Deception and What Developers Can Do About It [2026]
In 2022, Epic Games paid $520 million to settle FTC charges that Fortnite used dark patterns to trick players — including children — into making unintended purchases. The buttons were designed so that a single accidental tap could charge a credit card. No confirmation screen. No undo. That wasn't a bug. Someone sat in a meeting, looked at the conversion metrics, and decided that was the right design.
Dark patterns in tech are everywhere. They're the reason it takes six clicks to cancel a subscription but one click to sign up. They're why cookie consent banners have a giant green "Accept All" button and a barely visible "Manage Preferences" link buried in gray text. These aren't mistakes. They're engineered, A/B tested, and shipped with full knowledge of what they do to users.
I've been building software for over fourteen years, and I've been in rooms where these decisions get made. Not the cartoonishly evil ones, but the gray-area ones — where someone says "let's just preselect the newsletter checkbox" or "let's make the free tier cancellation flow a little more... thorough." This post is about how those patterns actually work under the hood, why they persist, and what we as developers can stop building.
What Are Dark Patterns and Why Should Developers Care?
Dark patterns — increasingly called "deceptive design patterns" thanks to Harry Brignull, the UX researcher who coined the term in 2010 — are user interface designs crafted to manipulate users into actions they didn't intend to take. They exploit cognitive biases, visual hierarchy, and deliberate friction to serve business metrics at the user's expense.
Here's the taxonomy that matters most if you're the one writing the code:
- Confirmshaming: Guilt-tripping users who decline an offer ("No thanks, I don't want to save money")
- Hard to cancel (aka Roach Motel): Signing up is frictionless. Canceling requires a phone call, six screens, and what feels like an emotional hostage negotiation.
- Preselection: Default-checking boxes that opt users into newsletters, data sharing, or add-on purchases
- Hidden costs: Low price upfront, then surprise fees at checkout after the user has already invested time
- Forced action: Requiring account creation, app downloads, or data sharing to access basic functionality
- Visual interference: Making the option the company wants you to pick visually dominant. The alternative? Deliberately hard to find.
The reason developers should care isn't just ethical. It's legal. The FTC has made dark patterns an enforcement priority. The EU's Digital Services Act explicitly prohibits deceptive interfaces. California's CPRA has provisions targeting manipulative consent flows. If you're the one implementing these patterns, you're not just following orders. You're building evidence.
The Engineering Behind the Deception
What makes dark patterns so effective is that they look like normal product decisions from the inside. I've seen teams ship deceptive flows without anyone using the phrase "dark pattern" once. It's always framed as "optimization" or "reducing churn" or "improving conversion."
Let me walk through how three of the most common patterns actually get built.
The Asymmetric Flow (Hard to Cancel)
Amazon Prime's cancellation flow got so notorious that the FTC filed a lawsuit alleging the company used a process internally nicknamed "Iliad" — as in Homer's epic — because it was so long and convoluted. The engineering is straightforward: the sign-up flow is a single API call with minimal validation. The cancellation flow routes through multiple screens, each with a different retention offer, countdown timer, and carefully worded guilt message. The technical implementation is trivial. The A/B testing that optimized each screen for maximum retention? That's where the real engineering hours went.
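To make the asymmetry concrete, here's a hypothetical sketch in TypeScript. None of these route names or screens come from a real codebase; they just illustrate the shape of the pattern: entry is one call, exit is a funnel.

```typescript
// A stand-in HTTP client so the sketch is self-contained.
const api = {
  async post(path: string, body: unknown): Promise<void> {
    console.log(`POST ${path}`, body); // stub for illustration
  },
};

// Signing up: one call, one state change.
async function signUp(userId: string): Promise<void> {
  await api.post("/subscriptions", { userId, plan: "premium" });
}

// Canceling: a funnel of screens, each one a chance to lose the user.
// The engineering is trivial; the A/B testing behind each step is not.
const cancellationFunnel = [
  "confirm-intent",     // "Are you sure?"
  "retention-offer",    // "Stay for 50% off"
  "feature-reminder",   // "Here's everything you'll lose"
  "pause-instead",      // "Why not pause instead?"
  "exit-survey",        // Mandatory survey
  "final-confirmation", // The actual cancel button, six screens deep
] as const;

async function cancel(userId: string): Promise<void> {
  for (const screen of cancellationFunnel) {
    console.log(`Rendering ${screen} for ${userId}`); // each screen adds friction
  }
  await api.post("/subscriptions/cancel", { userId });
}
```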
Brignull calls this the "symmetry test": if it's harder to get out of something than it was to get into it, the design is likely deceptive. It's a devastatingly simple heuristic. I've used it for years, and it catches almost every subscription dark pattern I've ever encountered.
The Consent Theater (Cookie Banners)
Most cookie consent banners in 2026 are technically compliant and functionally deceptive. The pattern: "Accept All" gets a high-contrast button with a large click target. "Manage Preferences" opens a secondary screen with dozens of toggles, most pre-enabled, requiring individual action to disable. The reject option — if it exists at all — is styled as a text link, not a button. Some implementations go further. The "Accept All" button loads instantly, but the preferences panel introduces a deliberate loading delay.
I've reviewed cookie implementations where the consent management platform was configured to treat closing the banner (clicking X) as implicit consent. That's not a UX decision. That's a legal strategy disguised as a UI component. Look at how companies handle privacy decisions at the browser level and you'll find disturbingly similar patterns.
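These red flags are easy to spot in code review once you know the shape. Here's a hypothetical consent-manager config; the interface is invented and real CMPs differ, but these are the settings worth hunting for:

```typescript
// Invented config shape illustrating the red flags described above.
interface ConsentConfig {
  purposes: { id: string; label: string; defaultEnabled: boolean }[];
  treatDismissAsConsent: boolean; // does closing the banner count as "yes"?
  rejectAllVisible: boolean;      // is there even a reject button?
}

const deceptiveConfig: ConsentConfig = {
  purposes: [
    { id: "analytics", label: "Analytics", defaultEnabled: true },  // pre-enabled
    { id: "ads", label: "Personalized ads", defaultEnabled: true }, // pre-enabled
    { id: "functional", label: "Functional", defaultEnabled: true },
  ],
  treatDismissAsConsent: true, // the legal strategy disguised as a UI component
  rejectAllVisible: false,     // rejection exists only as buried toggles
};

// The honest version is the same config with every flag inverted.
const honestConfig: ConsentConfig = {
  purposes: deceptiveConfig.purposes.map((p) => ({ ...p, defaultEnabled: false })),
  treatDismissAsConsent: false,
  rejectAllVisible: true,
};
```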
The Misleading Default (Preselection)
Windows installation flows are a masterclass in preselection. During setup, telemetry options, advertising identifiers, and data-sharing toggles come pre-enabled. The visual design makes the "Recommended" option (maximum data sharing) look like the normal path, while custom configuration requires clicking through additional screens. Microsoft's approach to settings and defaults has been a recurring issue. The pattern isn't new. It just keeps getting more sophisticated.
The engineering here isn't complex. It's a boolean that defaults to true instead of false. But the product impact is massive: studies consistently show that 80-90% of users never change default settings. When you set a default, you're choosing for hundreds of millions of people.
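Sketched out, the entire "implementation" of a preselection pattern looks like this. The setting names are invented, but the point stands: one set of booleans decides for the 80-90% of users who will never open the settings screen.

```typescript
// Invented setup defaults illustrating the preselection pattern.
interface SetupDefaults {
  sendTelemetry: boolean;
  advertisingId: boolean;
  shareDiagnostics: boolean;
}

// What ships when the conversion metrics win the meeting:
const recommended: SetupDefaults = {
  sendTelemetry: true,    // opted in by default
  advertisingId: true,    // opted in by default
  shareDiagnostics: true, // opted in by default
};

// The honest default is the same code with the booleans flipped.
const honest: SetupDefaults = {
  sendTelemetry: false,
  advertisingId: false,
  shareDiagnostics: false,
};
```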
How Misleading Benchmarks Extend the Pattern
Dark patterns aren't limited to UI tricks. They extend to how companies market their products, especially in tech.
AI benchmark gaming has become an art form. Companies cherry-pick evaluation datasets, optimize specifically for benchmark tasks, or compare against outdated versions of competitors. I've looked at marketing pages where the "performance comparison" chart uses a y-axis that starts at 85% instead of 0%, turning a 2-point improvement into what looks like a blowout. That's not a data visualization choice. That's lying with charts.
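You can quantify the distortion with a few lines of arithmetic. A sketch with made-up numbers and a 300px-tall chart:

```typescript
// How tall a bar renders for a given value and axis range.
function barHeightPx(value: number, axisMin: number, axisMax: number, chartPx = 300): number {
  return ((value - axisMin) / (axisMax - axisMin)) * chartPx;
}

const ours = 89, theirs = 87; // hypothetical benchmark scores

// Honest axis (0-100): 267px vs 261px, visually near-identical.
console.log(barHeightPx(ours, 0, 100), barHeightPx(theirs, 0, 100));

// Truncated axis (85-90): 240px vs 120px, "twice as good" on sight.
console.log(barHeightPx(ours, 85, 90), barHeightPx(theirs, 85, 90));
```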
Same pattern in cloud pricing. "Starting at $0.001 per request" sounds incredible until you discover that number only applies to the first 1,000 requests per month, after which pricing jumps 10x. The pricing page is technically accurate and practically misleading. Having spent years evaluating infrastructure decisions, I can tell you comparing services honestly is harder than it looks. Companies exploit that complexity on purpose.
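Here's the arithmetic behind that teaser rate, as a sketch; the tier boundary and prices are invented to match the pattern described above:

```typescript
// Effective monthly cost under a teaser-tier pricing scheme.
function monthlyCost(requests: number): number {
  const teaserTier = Math.min(requests, 1_000); // first 1,000 at the teaser rate
  const rest = Math.max(requests - 1_000, 0);   // everything after jumps 10x
  return teaserTier * 0.001 + rest * 0.01;
}

console.log(monthlyCost(1_000));     // $1.00: the price on the marketing page
console.log(monthlyCost(1_000_000)); // $9,991.00: the price on your invoice
```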
The most effective dark patterns don't feel like manipulation. They feel like convenience.
Speed claims are another favorite. "Up to 10Gbps" means the theoretical maximum under perfect lab conditions that no real user will ever see. "99.99% uptime" means 52 minutes of downtime per year — unless the SLA defines "downtime" so narrowly that a service can be effectively unusable without technically being "down." I've shipped enough infrastructure to know these numbers are marketing, not engineering.
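The downtime math is worth internalizing, because it's the same every time. A quick helper:

```typescript
// Translate an SLA uptime percentage into allowed downtime per year.
function allowedDowntimeMinutesPerYear(uptimePercent: number): number {
  const minutesPerYear = 365.25 * 24 * 60; // ~525,960
  return minutesPerYear * (1 - uptimePercent / 100);
}

console.log(allowedDowntimeMinutesPerYear(99.99).toFixed(1)); // ~52.6 minutes
console.log(allowedDowntimeMinutesPerYear(99.9).toFixed(1));  // ~526 minutes
// None of this tells you how the SLA defines "down". That's in the fine print.
```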
Are Dark Patterns Illegal?
Increasingly, yes. The FTC's enforcement against Epic Games resulted in a $520 million settlement — one of the largest in the agency's history. The complaint specifically cited interface designs that made it easy for children to make purchases without parental consent.
In Europe, the GDPR and Digital Services Act have given regulators real teeth. France's CNIL fined Google €150 million in 2022 for making cookie rejection harder than acceptance. The Irish Data Protection Commission has gone after Meta for similar reasons.
California's CPRA, effective since 2023, explicitly addresses "dark patterns" by name. It defines them as interfaces "designed or manipulated with the substantial effect of subverting or impairing user autonomy, decision-making, or choice." That's specific language for a statute. Someone on that committee knew exactly what they were targeting.
But here's the reality: enforcement is still way behind implementation. Most dark patterns live in a gray zone. Not clearly illegal, not clearly ethical. And that's exactly where companies want them. The legal risk is low enough to be worth the conversion uplift. For now.
What Developers Can Actually Do About Dark Patterns
Here's the thing nobody's saying about dark patterns: developers aren't just bystanders. We're the ones who build them.
Every deceptive flow was implemented by an engineer. Someone wrote the conditional logic that hides the cancel button. Someone configured the default checkbox state. Someone built the A/B test that optimized for maximum guilt in a confirmshaming modal.
I'm not naive enough to think individual developers can single-handedly fix systemic incentive problems. But I've been in enough orgs to know that engineering pushback works more often than people think. Here's what I've actually seen make a difference:
Apply the symmetry test religiously. Before shipping any flow, ask: is the reverse action equally easy? If signing up takes one click but canceling takes six, flag it. Document it. Make the asymmetry visible in your design review.
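If it helps, the test is simple enough to live in code. A toy version, with hypothetical flow metadata you'd supply from your own design docs:

```typescript
// Minimal symmetry test: is the exit harder than the entry?
interface Flow {
  name: string;
  steps: number; // screens or confirmations a user must pass through
}

function symmetryTest(enter: Flow, exit: Flow): string {
  if (exit.steps > enter.steps) {
    return `FLAG: "${exit.name}" takes ${exit.steps} steps vs ` +
           `${enter.steps} for "${enter.name}". Document the asymmetry.`;
  }
  return "OK: exit is no harder than entry.";
}

console.log(symmetryTest(
  { name: "signup", steps: 1 },
  { name: "cancel", steps: 6 },
));
```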
Name the pattern out loud. When someone proposes preselecting a data-sharing checkbox, don't just say "I'm not comfortable with that." Say "that's a preselection dark pattern, and it's the kind of thing the FTC has fined companies for." Naming it changes the conversation from vibes to risk.
Build ethical defaults into your architecture. Design consent systems that default to opted out, so data is shared only when a user actively opts in. Build the cancellation API as clean as the signup API. Don't wait for someone to ask.
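Here's a minimal sketch of what "the cancellation API as clean as the signup API" means in practice, using Express-style handlers. The storage and routes are invented; adapt to your stack:

```typescript
import express from "express";

const app = express();
app.use(express.json());
const subscriptions = new Set<string>(); // stand-in for a real store

// Entry: one call, one state change.
app.post("/subscriptions", (req, res) => {
  subscriptions.add(req.body.userId);
  res.status(201).json({ status: "active" });
});

// Exit: one call, same shape, same effort. No retention maze.
app.delete("/subscriptions", (req, res) => {
  subscriptions.delete(req.body.userId);
  res.status(200).json({ status: "canceled" });
});
```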
Document the decisions. If your team ships a pattern you've flagged as deceptive, document your objection. Not just as a CYA move (though it doesn't hurt). Written dissent creates institutional memory. The next engineer who inherits that codebase will see it. Ethical concerns in code reviews, like security concerns in AI-generated code, need to be called out explicitly.
Know when to refuse. I realize this is easy to say and hard to do when you have rent to pay. But I've watched senior engineers refuse to implement specific features and seen the company find a less deceptive alternative. It doesn't always work. Silence, though? Silence never works.
The Pattern That Should Worry You Most
The dark patterns of 2026 are getting harder to spot because they're moving from static UI tricks to dynamic, personalized manipulation. ML models can now adjust friction levels, emotional tone of copy, and visual prominence of buttons based on individual user behavior.
Think about what that means concretely. A cancellation flow that's easy for users likely to leave bad reviews and agonizingly hard for users the model predicts will eventually give up. That's not hypothetical. The infrastructure to build it already exists in every major product analytics platform.
This is where the conversation needs to go next. Not just cataloging the patterns we can see, but building detection systems for the ones that are personalized to each user and invisible in aggregate.
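To sketch what such detection might look like, assume you can log how many screens each user actually saw in a given flow. All names and thresholds below are invented; the idea is that if the "one" cancellation flow shows different cohorts materially different amounts of friction, something upstream is personalizing it.

```typescript
interface FlowEvent {
  userId: string;
  screensShown: number; // friction the user actually experienced
  cohort: string;       // e.g. a bucket from a churn-prediction model
}

// Flag cohorts whose average friction exceeds the baseline by a threshold.
function detectPersonalizedFriction(events: FlowEvent[], threshold = 1.5): string[] {
  const byCohort = new Map<string, number[]>();
  for (const e of events) {
    const arr = byCohort.get(e.cohort) ?? [];
    arr.push(e.screensShown);
    byCohort.set(e.cohort, arr);
  }
  const avgs = [...byCohort.entries()].map(
    ([cohort, xs]) => [cohort, xs.reduce((a, b) => a + b, 0) / xs.length] as const,
  );
  const baseline = Math.min(...avgs.map(([, a]) => a));
  return avgs
    .filter(([, a]) => a / baseline > threshold)
    .map(([cohort, a]) => `Cohort "${cohort}" sees ${a.toFixed(1)} screens on average`);
}

console.log(detectPersonalizedFriction([
  { userId: "a", screensShown: 2, cohort: "likely-to-complain" },
  { userId: "b", screensShown: 2, cohort: "likely-to-complain" },
  { userId: "c", screensShown: 6, cohort: "likely-to-give-up" },
  { userId: "d", screensShown: 7, cohort: "likely-to-give-up" },
])); // -> [ 'Cohort "likely-to-give-up" sees 6.5 screens on average' ]
```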
If you're building products in 2026, the question isn't whether you'll encounter pressure to implement a deceptive pattern. You will. The question is whether you'll recognize it, name it, and push back. The engineers who do that consistently are the ones I want to work with. They're the ones building products that survive regulatory scrutiny, earn actual user trust, and don't require a legal team to defend their checkout flow.
This is one of those things where the boring answer is actually the right one: build the thing that respects the user. Make it easy to leave. Make defaults honest. Make the cancel button the same size as the signup button. It's not complicated engineering. It's just uncommon courage.
Originally published on kunalganglani.com