- Me: gabrielanhaia
A Jury Did What Congress Couldn't
Years of Senate hearings. Whistleblower leaks. Outraged op-eds. None of it stuck. Then twelve ordinary people in a courtroom looked at how Instagram and YouTube were built, looked at the harm those designs caused to a young woman, and said: you did this on purpose.
Meta and Google are now legally liable for the addictive design of their platforms. Not for hosting bad content. For how they engineered the experience itself.
That distinction matters. A lot. Because every developer reading this has shipped some version of these exact patterns.
What Was Actually on Trial
The plaintiffs didn't argue that social media is bad. The argument was surgical: Meta and Google knowingly designed features to maximize engagement even after internal research showed those features caused psychological harm to younger users.
Internal documents proved both companies knew. Engineers and researchers inside these organizations had flagged the problems. Leadership picked engagement metrics over user safety anyway. A jury saw those documents and reached the obvious conclusion.
Here's the part that should bother every developer who's been in a sprint planning meeting where "increase session duration" was the top priority: the features on trial aren't exotic. They're the same patterns sitting in your codebase right now.
Show Me the Code
Enough abstraction. Let's look at what "addictive by design" actually looks like when it hits production.
Infinite Scroll: Removing the Exit
The entire point of infinite scroll is to eliminate decision points. Pagination gives users a natural moment to think "do I want more?" Infinite scroll removes that moment.
```javascript
// The pattern that just became a legal liability
class InfiniteFeed {
  constructor(feedElement, api) {
    this.feed = feedElement;
    this.api = api;
    this.loading = false;
    this.cursor = null;
    // There is no end. That's the point.
    this.hasMore = true;
    this.setupObserver();
  }

  setupObserver() {
    // Fire when the user gets within 800px of the bottom.
    // They never see the end coming. They never reach it.
    const observer = new IntersectionObserver(
      (entries) => {
        if (entries[0].isIntersecting && !this.loading) {
          this.loadMore();
        }
      },
      { rootMargin: '800px' }
    );
    observer.observe(this.feed.querySelector('.sentinel'));
  }

  async loadMore() {
    this.loading = true;
    const { posts, nextCursor } = await this.api.getFeed(this.cursor);
    this.cursor = nextCursor;
    // Note what's missing: there's no condition where hasMore
    // becomes false. The API always returns something.
    // Ran out of new content? Serve old content. Ran out of
    // followed accounts? Serve "suggested" content. The feed
    // never ends. The user never gets a reason to stop.
    this.renderPosts(posts);
    this.loading = false;
  }
}
```
That rootMargin: '800px' is doing heavy lifting. It means new content loads before users even realize they're near the bottom. There's no seam in the experience. No breath. No moment of "wait, why am I still here?"
A paginated feed with a "Load More" button would work fine. But it would give users agency. And agency kills session duration.
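For contrast, here's a minimal sketch of a feed that actually ends, written in Python for brevity. Everything here is hypothetical: the `api` object and its `get_page(cursor)` method (returning posts plus a next cursor, or `None` at the end) are assumptions, not any platform's real API.

```python
class PaginatedFeed:
    """Hypothetical counter-pattern: a feed with a real end."""

    def __init__(self, api):
        # Assumed interface: api.get_page(cursor) -> (posts, next_cursor)
        self.api = api
        self.cursor = None
        self.has_more = True  # and it can actually become False

    def load_more(self):
        """Called only when the user clicks 'Load More'."""
        if not self.has_more:
            return []
        posts, next_cursor = self.api.get_page(self.cursor)
        self.cursor = next_cursor
        # The honest part: when the followed content runs out,
        # the feed ends instead of padding with "suggestions".
        if next_cursor is None or not posts:
            self.has_more = False
        return posts
```

The difference is one boolean that's allowed to flip. Every "Load More" click is a decision point the infinite version deliberately removes.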
Autoplay: Stealing the Decision
Autoplay removes the conscious choice to watch the next video. The previous one ends, and before the user's brain processes "I'm done," the next one is already running.
```javascript
class AutoplayEngine {
  constructor(player, recommendationAPI) {
    this.player = player;
    this.recAPI = recommendationAPI;
    this.nextVideo = null;

    // Pre-fetch the next video while the current one plays.
    // By the time this video ends, the next one is cached
    // and ready to fire instantly. Zero delay = zero
    // opportunity to reconsider.
    this.player.on('timeupdate', () => {
      if (this.remainingTime() < 15 && !this.nextVideo) {
        this.prefetchNext();
      }
    });
    this.player.on('ended', () => this.playNext());
  }

  remainingTime() {
    return this.player.duration - this.player.currentTime;
  }

  async prefetchNext() {
    // "Up next" is chosen by engagement prediction, not
    // relevance, not quality, not what the user searched for.
    const recommendations = await this.recAPI.getNext({
      currentVideoId: this.player.videoId,
      watchHistory: this.getSessionHistory(),
      // This flag is real. Engagement score is the sort key.
      sortBy: 'predicted_watch_time',
    });
    this.nextVideo = recommendations[0];
    this.preloadVideo(this.nextVideo);
  }

  playNext() {
    // Countdown overlay: "Next video in 5...4...3..."
    // The user has to actively intervene to NOT watch.
    // Default behavior is continued consumption.
    this.showCountdown(5, () => {
      this.player.load(this.nextVideo);
      this.player.play();
    });
  }
}
```
Read that again: the user has to actively intervene to stop. The default is more content. That's not a UX decision. That's a retention mechanism dressed up as convenience.
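Flipping that default is a small amount of code. Here's a hypothetical sketch (in Python, names invented for illustration) of end-of-video logic where inaction means stopping, and continuous playback only happens if the user explicitly opted in through settings:

```python
def next_action(autoplay_opt_in, next_video_id):
    """
    Hypothetical decision logic with the default flipped:
    doing nothing stops playback. Continuing requires either
    a deliberate click or an explicit opt-in setting.
    """
    if autoplay_opt_in:
        # User chose continuous playback in their settings.
        return {"action": "play", "video_id": next_video_id}
    # Default path: show a static "Up next" card and wait.
    # No countdown, no timer. Inaction equals stopping.
    return {"action": "prompt", "video_id": next_video_id}
```

Same recommendation pipeline, same pre-fetching if you want it. The only thing that changed is which behavior requires intervention.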
Notification Timing: Engineered Urgency
This one is subtle and arguably the most cynical. Notifications aren't sent when the underlying event happens. They're batched and timed to pull users back during predicted low-engagement windows.
```python
class NotificationScheduler:
    """
    Notifications are not alerts. They are re-engagement
    triggers. Timing is everything.
    """

    def schedule(self, user, notification):
        # Don't notify users who are already active.
        # That's a wasted notification. Save it for when
        # they've been gone too long.
        if self.is_currently_active(user):
            return self.queue_for_later(user, notification)

        # Find the moment they're most likely to come back
        optimal_time = self.predict_reengagement_window(user)

        # If they've been gone a while, escalate urgency
        hours_absent = self.hours_since_last_session(user)
        if hours_absent > 24:
            # "You have 14 new notifications" hits different
            # than "Someone liked your post"
            notification = self.batch_and_amplify(
                user,
                min_batch_size=5,
            )
        elif hours_absent > 6:
            # Social proof trigger: "X and 3 others..."
            notification = self.add_social_proof(notification)

        return self.send_at(notification, optimal_time)

    def predict_reengagement_window(self, user):
        """
        ML model trained on the user's historical patterns.
        Knows when they check their phone. Knows their
        commute. Knows their lunch break. Knows when they
        wake up.
        """
        return self.timing_model.predict(
            user_id=user.id,
            features=[
                user.historical_open_times,
                user.timezone,
                user.device_usage_pattern,
                self.get_current_context(user),
            ],
        )
```
That scheduler doesn't care about informing the user. It's optimized for one thing: getting them back in the app. The notification isn't the message. The notification is the hook.
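The inform-first alternative is almost embarrassingly simple. Here's a hypothetical sketch (class and method names invented for illustration): deliver events when they happen, cap daily volume, and never compute an "optimal" re-engagement time:

```python
class InformOnlyNotifier:
    """
    Hypothetical counter-design: deliver when the event
    happens, cap the daily volume, fold overflow into a
    digest. No timing model, no urgency escalation.
    """

    def __init__(self, max_per_day=10):
        self.max_per_day = max_per_day
        self.sent_today = {}  # user_id -> count sent today

    def deliver(self, user_id, event):
        count = self.sent_today.get(user_id, 0)
        if count >= self.max_per_day:
            # Over the cap: queue for tomorrow's digest
            # instead of batching and amplifying urgency.
            return {"status": "digest", "event": event}
        self.sent_today[user_id] = count + 1
        # Sent immediately. The notification is the message.
        return {"status": "sent", "event": event}
```

Notice what's missing here: no `predict_reengagement_window`, no `batch_and_amplify`. A scheduler that exists to inform has no use for either.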
Variable Reward: The Slot Machine in Your Pocket
Pull-to-refresh isn't a technical necessity. APIs can push updates. But pull-to-refresh creates a variable reward loop identical to a slot machine lever.
```javascript
class PullToRefresh {
  onRefresh() {
    // Sometimes 0 new items. Sometimes 12.
    // The unpredictability is the feature, not a bug.
    // Variable ratio reinforcement schedule:
    // the same psychology that makes gambling addictive.
    return this.api.getNewContent().then(items => {
      if (items.length === 0) {
        // Even "nothing new" reinforces the habit.
        // The user pulled, got nothing, and will
        // pull again in 3 minutes to check.
        // That's not failure. That's the near-miss
        // effect keeping the loop alive.
        this.showEmptyState("You're all caught up!");
      } else {
        // Dopamine hit. New content. Random amount.
        // The brain logs this as a successful pull
        // and increases the likelihood of repeating.
        this.renderNewItems(items);
      }
    });
  }
}
```
"You're all caught up!" sounds friendly. But that screen exists to validate the check and encourage the next one. If the product actually wanted users to stop checking, it would say "Nothing new. We'll notify you when there is."
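One way to defuse the lever is to make empty pulls progressively less available, a standard exponential backoff applied to the UI instead of to network retries. This is a hypothetical sketch (all names invented), not something any platform ships:

```python
class RefreshThrottle:
    """
    Hypothetical counter-pattern: after each empty refresh,
    back off exponentially before allowing the next manual
    pull. The lever stops paying out on demand.
    """

    def __init__(self, base_seconds=30, cap_seconds=600):
        self.base = base_seconds
        self.cap = cap_seconds
        self.empty_pulls = 0

    def record_result(self, new_item_count):
        if new_item_count == 0:
            self.empty_pulls += 1
        else:
            self.empty_pulls = 0  # real content resets the backoff

    def cooldown(self):
        """Seconds until the next manual pull is allowed."""
        if self.empty_pulls == 0:
            return 0
        return min(self.base * (2 ** (self.empty_pulls - 1)), self.cap)
```

Paired with an honest empty state ("Nothing new. We'll notify you."), this converts a variable-reward loop back into what refresh was supposed to be: a way to get content that exists.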
The Engagement Score: Where It All Connects
Under the hood, every one of these patterns feeds a single metric. Here's a simplified version of what an engagement scoring system looks like:
```python
def calculate_engagement_score(user, content):
    """
    This function is the business model. Everything else
    is just infrastructure to serve its output.
    """
    score = 0.0

    # Predicted watch/read time (primary signal)
    score += predict_consumption_time(user, content) * 0.4

    # Likelihood of interaction (like, comment, share)
    score += predict_interaction(user, content) * 0.25

    # Emotional activation (high arousal = high engagement)
    # Anger and anxiety score high here. So does outrage.
    # This is where the "ragebait rises to the top" problem
    # lives. It's not a bug. It's a weighted feature.
    score += predict_emotional_activation(user, content) * 0.2

    # Session extension probability
    # "Will this piece of content make them stay longer?"
    score += predict_session_extension(user, content) * 0.15

    return score
```
Notice what's not in that function. User wellbeing. Informational value. Whether the content is actually good. The score optimizes for one thing: keeping eyeballs on screen longer. Everything else is externality.
A jury looked at this math and called it what it is.
"Just Following the Product Spec" Is Dead
There's a story developers tell themselves: product makes the decisions, business sets the goals, engineers just type. Hands on keyboard. Don't shoot the pianist.
This verdict kills that story.
When a jury finds that a product's design is the source of harm, every person in the design chain is relevant. That includes the PM who wrote the spec, the designer who removed pagination, and the engineer who picked predicted_watch_time as the sort key.
Construction engineers can't blame the architect when a building collapses due to known structural flaws they built without objection. Automotive engineers can't hide behind management when they ship a car with a known safety defect.
Software engineers just joined that club.
The Incentive Machine
Here's why this isn't just a Meta and Google problem.
The entire ad-supported tech economy runs on attention. More attention means more ad impressions. More impressions means more revenue. Every feature that increases time-on-site directly increases the company's bottom line.
Promotions go to the team that moved the retention metric. Bonuses tie to DAU growth. Performance reviews celebrate the engineer whose recommendation algorithm bumped average session duration by 12%.
The most profitable product is the most addictive product. That's not a conspiracy. It's a business model. And a jury just ruled that business model can generate legal liability.
What Actually Changes
The verdict's being appealed. Meta and Google have functionally unlimited legal budgets. The precedent will take years to settle.
But the signal is already out. Here's what it means in practice:
Ethics review becomes a release gate. Not because companies found religion. Because their legal teams are now quantifying risk in every engagement feature. Expect "ethics review" to sit alongside security review and accessibility testing in the deployment pipeline.
Internal dissent is now evidence. When engineers raise concerns about harmful features and those concerns are documented, they become plaintiff's exhibits. Companies that ignore internal warnings are pre-building the case against themselves. Smart organizations will create formal channels for ethical objections. Smarter ones will actually listen.
Natural stopping points, transparent recommendation logic, genuine user control over notification frequency -- these aren't feel-good conference talk topics anymore. They're legal shields.
And resumes read differently too. "Built recommendation engine that increased session time 40%" lands differently now than it did two years ago. The market for developers who can build engaging products without exploitative patterns is about to grow.
Note: Open source and indie developers aren't exempt either. The legal principle applies to the design patterns themselves, not just to companies above a certain valuation. Scale affects damages. It doesn't affect liability.
The Line
None of this means engagement is bad. People genuinely enjoy well-designed products. Building something people want to use isn't a crime and shouldn't be.
The line sits between engagement and exploitation. A product people enjoy using vs. one designed to override their ability to stop. Content someone actually values vs. a loop they can't escape.
That line always existed ethically. Now it exists legally.
What Developers Can Do Right Now
If any of that code looked familiar, here's what you can do about it:
Audit your scroll implementation. Does your feed have a natural end? If not, why not? Can you add pagination or a "you've been scrolling for 20 minutes" checkpoint?
Check your autoplay defaults. Is autoplay opt-in or opt-out? Does the user have to act to stop consuming, or to start? Flip the default.
Your notification logic deserves a hard look too. Are notifications timed to inform or to re-engage? Is there a scheduler optimizing for return visits? That's the part a plaintiff's attorney will put on a slide.
Then there's the engagement score. What's being optimized? If the answer is only time-on-site and interaction volume, that's the function a jury just found liable. Add wellbeing signals. Add negative feedback loops. Add something that isn't just "more."
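What does "add wellbeing signals" look like in a scoring function? Here's one hedged sketch. The `signals` dict of pre-computed predictions in [0, 1], the `predicted_regret` signal, and the specific weights are all invented for illustration; the point is structural, wellbeing terms enter with negative weight instead of being externalities:

```python
def calculate_balanced_score(signals):
    """
    Hypothetical rework of an engagement scorer: the same
    kind of engagement inputs, but wellbeing signals are
    first-class negative terms.

    `signals` is an assumed dict of model outputs in [0, 1].
    """
    score = 0.0
    score += signals["consumption_time"] * 0.30
    score += signals["interaction"] * 0.20

    # The part the pure-engagement version omits:
    # high emotional arousal now costs, not pays.
    score -= signals["emotional_activation"] * 0.20

    # Compulsive-use proxy, e.g. predicted late-night
    # sessions the user would report regretting.
    score -= signals["predicted_regret"] * 0.30

    return score
```

Under this function, ragebait with great watch-time numbers can lose to calmer content the user actually values. That inversion is the whole point.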
Finally, document your objections. If you raise a concern about a potentially harmful feature and leadership overrides it, make sure it's on the record. This protects you and creates accountability.
The Uncomfortable Part
Most developers didn't set out to build addictive products. They set out to build good software. But engagement metrics are seductive. Watching your numbers climb feels like winning. It's validating. Addictive, even.
There's an irony in that: the people building the engagement loops are themselves caught in an engagement loop. Ship feature, watch metric climb, get praised, ship more features. The dopamine cycle isn't just in the product. It's in the sprint retrospective.
This verdict is a forced stopping point. A pagination break in an industry that's been infinite-scrolling for fifteen years.
The engineers who build the next generation of products get to decide what they look like. They can build systems that respect users' autonomy. They can push back on specs that treat addiction as a KPI. They can design stopping points instead of removing them.
Or they can keep shipping the same patterns and wait for the subpoena.
This verdict is going to reshape how we build software. If you've ever implemented any of the patterns described above, what would you change? Have you ever pushed back on an engagement feature at work? Drop your experience in the comments.