From nudify tools to AI-generated Spotify hits, tech isn’t just moving fast; it’s blurring what’s real, what’s fake, and who gets hurt in the process. And most developers still aren’t asking: should we build this?
We don’t need to be malicious to build something harmful.
We just need to be fast. Efficient. Focused on the roadmap.
And willing to believe that what happens after launch… isn’t our problem.
That’s how we got here.
In the midst of everything that shows up on social media lately AI-generated bands fooling Spotify, high school scandals triggered by fake images, even private moments becoming public memes I’ve found myself stepping back.
Just as a human.
What are we building? Where are the boundaries?
And what does it say about us when the tools we create are being used to deceive, exploit, or humiliate not by accident, but by design?
AI tools today can generate music that outperforms real artists, fabricate images that violate privacy and dignity, and turn everyday moments into viral spectacles, all in seconds, and all built on systems we shipped, optimized, and scaled.
These tools don’t just exist. We built them.
Maybe not to cause harm, but without doing nearly enough to prevent it.
This isn’t a piece about AGI or robots taking jobs.
This is about what happens when developers build fast, skip the ethical questions, and let “someone else” worry about misuse.
The cost isn’t theoretical. It’s here.
Press enter or click to view image in full size
In this article, we’ll explore:
- Real stories of how AI tools are being used causing emotional, reputational, and economic harm.
- Why we often skip ethical reviews, even when we care about the outcome
- The deeper tradeoff between scale and consent in product design
- What happens when we build for virality without responsibility
- The mindset shift developers need from “does it work?” to “what happens if it does?”
1. The AI reality distortion field
We’ve built systems that don’t just generate content; they generate confusion. Tools that don’t just create, they impersonate, amplify, and, in the worst cases, exploit.
And most of the time, the problem isn’t just what the AI can do.
It’s what we let it do because we didn’t ask enough questions.
Let’s zoom in.
- Velvet Sundown, an AI-generated 1960s-style band, racked up millions of streams on Spotify. Listeners thought they’d found a cool retro group with a vintage vibe. In reality, the vocals, lyrics, and instrumentation were all synthetic. No humans. No label. No mention that it was fake. And someone, maybe many someones, wrote the code, built the data pipeline, trained the model, deployed the product, and chose not to label it. The result wasn’t illegal. But it was deceptive.

A transparent mention could have been better, I think.
But it blurred the line between creativity and simulation, and the industry barely blinked.
- Francesca Mani, a 14-year-old student, found out AI-generated explicit images of her made from a public Instagram photo were circulating among classmates. She hadn’t sent anything. She hadn’t posed. But someone uploaded her photo to a nudify tool and clicked a button.The tool worked perfectly. Too perfectly.Someone built the UI. Someone designed the pipeline. Someone open-sourced the model or hosted the inference.And no one added a friction layer. No consent step. No age detection. Just push and generate.
She was a real person. The tool treated her like a dataset.
- And then there’s Andy Byron, CEO of Astronomer, who was caught on a Coldplay “kiss cam” video hugging someone who wasn’t his wife. The clip captured, edited, and posted by someone in the crowd exploded online. Within hours, the internet identified him. Within days, he was suspended.No AI manipulation. No deepfake. Just digital exposure at speed.The infrastructure that made it viral?
Real-time video processing, facial detection, and engagement-boosting algorithms. And who builds those systems? Mostly developers.
Developers create an ecosystem where privacy is optional, and virality is a weapon.
In all of these cases, the harm didn’t happen by accident.
It happened because we made it easy.
Because we prioritized “possible” over “appropriate.”
Because no one stopped the feature and said: what could go wrong if someone really used this?
It’s the kind of scenario you’d expect from an episode of Black Mirror.
Think “The Waldo Moment” or “White Bear” where public humiliation becomes a form of entertainment, algorithmically amplified, morally distant. But this isn’t fiction. It’s our infrastructure.
It’s real-time video processing, facial recognition, and engagement algorithms, working exactly as designed.
And here’s the part that stings most:
Press enter or click to view image in full sizeThe tech didn’t fail. It did exactly what it was designed to do.

2. What happens when we don’t ask, “Should We?”
Let’s be honest, most of us aren’t sitting around plotting how to exploit people with tech.
We’re trying to ship. We’re up against sprints, backlogs, investor meetings, deadlines, and that one feature request that keeps creeping in from “a really big client.”
So the ethical conversation? It’s not that we reject it. We just… never get to it.
A 2024 study published in AI and Ethics interviewed 40 developers (source: The ethical agency of AI developers) and machine learning engineers who had built or contributed to generative AI systems, some powering image, voice, or video manipulation. Most weren’t bad actors. They weren’t building “nudify” tools or deepfake factories. They were building infrastructure, plugins, and models. Some of their tools ended up downstream, powering exactly the kinds of harm we just described.
What did they say?
“We knew it could be misused… but it wasn’t really our job to control what people did with it.”
“We were focused on the technical breakthrough. The use cases came later.”
“There was this pressure to open-source it before someone else did.”
In other words:
They were moving fast.
And ethics felt like someone else’s feature.
The study found that even devs who wanted to consider harm often lacked the structure, support, or time to do so. There were no dedicated checkpoints. No friction layers. No internal processes for weighing real-world consequences. The result? Responsibility fell through the cracks.
This is the developer version of “I was just following orders.”
Except now we’re not taking orders from some boss we’re taking them from culture:
— Move fast.
— Solve hard problems.
— Be first to market.
And if it breaks someone’s life? We’ll patch it later.
But here’s the uncomfortable truth:
By the time harm shows up in the form of fake nudes, fake music, or real reputational collapse…
There’s no patch. There’s just aftermath.
3. Shame as a Service
Let’s connect the dots.
YouTube Shorts. TikTok. Discord. Reddit.
These aren’t just platforms.
They’re pipelines, where content flows, friction vanishes, and shame gets packaged as engagement.
And much of it runs on developer infrastructure:
- Recommendation systems that boost the most outrageous, fastest-spreading content.
- Video tools that auto-caption, auto-crop, and sometimes auto-doxx.
- Bots that scrape, sort, and surface private data faster than any human mod could delete it.
What does that mean in practice?
It means a Coldplay concert kiss-cam video can turn into a cheating scandal complete with facial recognition, LinkedIn profiles, and home addresses before the subject even gets home.
(Andy Byron, CEO of Astronomer, didn’t post anything. Someone else did. The internet did the rest.)
It means nudify tools can let someone undress a teenage girl in seconds and Discord servers can distribute the images before anyone has time to file a takedown.
(Francesca Mani was 14. She didn’t pose. But the algorithm filled in the blanks.)
It means “AI bands” like Velvet Sundown can rack up millions of streams on Spotify blending in seamlessly with human artists while real musicians fight copyright claims from synthetic clones.
(The band wasn’t real. The money was.)
But let’s go deeper. Shame doesn’t just spread. It scales.
Platforms reward engagement.
Engagement feeds outrage.
And outrage prefers villains, victims, and viral twists.
So the very systems we build; the APIs, the models, the “growth hacks”, become engines for contextless visibility.
Moments divorced from nuance.
Faces divorced from consent.
People divorced from their right to disappear.
We didn’t invent this dynamic. But we built its infrastructure.
We optimized for velocity.
And now people get hit by the very speed we shipped.
Ask yourself:
- Who designed the clip-editing AI that made the kiss-cam moment meme-worthy?
- Who enabled “nudify” APIs to run in real-time, anonymously, for free?
- Who tuned the TikTok algorithm to prioritize outrage over accuracy?
We did.
4. What Real Harm Looks Like
We’ve seen it in viral memes and shady apps but this is where AI harms become painfully real: let’s explore some cases.
Voice Scam: Florida Mom Loses $15K
Case: A scammer used AI to clone the daughter’s voice of a Florida retiree, “April Monroe,” pretending she’d been in an accident. The mother sent $15,000 before realizing the voice was fake (source: The Guardian).
Dev angle: Models trained on scraped public videos, deployed in phone-based APIs again with no oversight or parental consent.
Corporate Deepfake: WPP CEO Impersonated
Case: Fraudsters impersonated WPP’s CEO Mark Read via deepfake voice on WhatsApp and Microsoft Teams, nearly duping employees into sharing sensitive data (source: The Guardian).
Dev angle: The same voice-cloning libraries developers use for legitimate audio features were weaponized for corporate espionage without traceability or audit.
School Deepfake: Principal Falsely Accused
Case: In Baltimore County (MD), an AI-generated audio falsely captured a school principal making hateful remarks, sparking internal reviews before being debunked (source: AP News).
Dev angle: Publicly available speech-to-text and voice-generation pipelines can falsely produce harmful content that snowballs before correction.
Teacher Deepfake Nude: Victim of AI Exploitation
Case: A teacher in Victoria, Australia, and others had explicit AI-generated images made from student or teacher photos, shared across school networks (soruce: ABC).
Dev angle: Easy-to-use image-to-image tools, hosted on standard infra, can turn personal photos into non-consensual content in seconds.
Taylor Swift Deepfake Scandal: Celebrity Targeted
Case: In early 2024, numerous AI-generated pornographic images of Taylor Swift circulated online, with one post viewed over 47 million times before removal (source: en.wikipedia.org).
Dev angle: Open diffusion models trained on scraped data with no curation developers built them, deployed them, but didn’t gate harmful content.
These were just a few examples but the internet is already overflowing with them.
What’s truly alarming is this: it’s only the beginning.
5. The Ethics Gap and the Way Forward
The Problem: AI Is Moving Fast, Ethics Is Still Optional
We have linters for syntax, CI/CD for pipelines, and performance dashboards for uptime. But for ethical risk? We’ve got… vibes. Maybe a Slack debate. Maybe a PM raising a brow. And that’s assuming anyone even notices.
Let’s be real:
Most devs were never trained in ethics. Most bootcamps don’t teach it.
CS programs often treat it as a checkbox elective somewhere between “Intro to HCI” and “History of Computing.” Even now, the most popular dev frameworks don’t include checklists for consent, privacy harm, or edge-case abuse.
Instead, we optimize for:
- Speed
- Efficiency
- Technical novelty
But not:
- Misuse potential
- Power imbalance
- Long-term consequences
And laws? They’re either outdated or completely absent.
The EU’s AI Act is the first major attempt at regulation, but enforcement is slow, and most devs don’t even know what tier their project falls under.
So the question becomes:
If your platform can fake a face, clone a voice, or spread misinformation at scale and no one told you to stop, would you?
The Way Forward: Design Like Harm Is Your Bug
We won’t fix this with another ethics committee or an HR-approved code of conduct.
We need engineer-level thinking for engineer-level problems.
6. Try This Framework Next Time You’re Shipping Something Powerful
This isn’t about over-regulating every idea or killing innovation. It’s about engineering harm reduction into the stack — by default. Here’s how to build like harm is your bug.
6.1. Consent Isn’t Optional — Especially for Human Likeness
If your tool generates a human face, simulates a voice, or reuses real photos:
- Ask this: Would this person know their likeness is being used here?
- And this: Would they agree if they did?
If the answer is fuzzy or “depends on use case” that’s a red flag.
Action: Require verification, opt-in, or at minimum watermarking for anything depicting real human likeness (especially minors or private individuals).
6.2. Image & Voice Generation Needs Guardrails
It should never be easier to fake a human than it is to verify one.
- For image generators: ➤ Block training on personal/social media data without consent ➤ Auto-detect and block nudes or intimate likenesses without user-confirmed identity
- For voice cloning: ➤ Require speaker verification for upload ➤ Embed traceable fingerprints for accountability
Action: Treat synthetic likenesses like biometric data. Because they are.
6.3. Visibility ≠ Consent
Just because someone is in a public video or photo doesn’t mean their face, body, or identity is public property.
If your product enables virality (sharing, remixing, reframing content), build in friction:
➤ Blur untagged faces by default
➤ Ask: “Do you have consent to share this person’s identity?”
Action: If the platform encourages virality, it should also defend dignity.
6.4. Build an Abuse-Aware Prompt Layer
Most genAI tools don’t fail at output — they fail at input safety.
- ➤ Add prompt evaluation layers that detect likely harm: nudity, impersonation, revenge content
- ➤ Flag requests like “make this person naked,” “clone this celebrity,” or “make him look drunk” — and freeze them
Action: Think of input validation as ethical sanitization, not censorship.
6.5. Design for the Worst-Case, Not Just the Demo
Before launch, ask your team:
- “What’s the most malicious thing someone could do with this?”
- “How quickly would it go viral?”
- “Who would pay the cost?”
Action: Run misuse drills like fire drills.
Red team it. Abuse it internally. Fix before release.
We don’t need to halt innovation.
But we do need to ship with foresight, not just fascination.
You are not just building features.
You’re setting norms for how the next 1,000 devs will treat power.
7. What We Owe Each Other
In every story mentioned, the fake band, the scandal leak, and the teenage girl who didn’t give consent the tech wasn’t broken.
It worked exactly as designed. That’s the problem.
We built tools to replicate voices, generate faces, mimic writing styles, and remix reality. But we rarely stopped to ask:
Who gets to use this? For what? And at whose expense?
We call it innovation.
But when the harm is invisible, when it lands on strangers, on kids, on people with no legal team or platform to fight back, that’s not innovation. That’s negligence wearing a hoodie.
And developers? Developers are not bystanders but architects.
Every form field, every upload button, and every API permission we built the gateway.
“But I just wrote the code.”
That line doesn’t hold up anymore. Not in 2025. Not when the fallout is measured in ruined reputations, stolen likenesses, and shattered trust.
This isn’t a guilt trip. It’s a responsibility check. Because we also have power.
We can build tools that ask for consent.
That flag deepfake misuse.
That protects people by design, not by patch.
We can demand AI models be trained with transparency and not on stolen, explicit, or intimate data scraped without permission.
We can prioritize human impact as much as user growth.
We owe each other that much.
Not just because we’re engineers.
But because we’re people.
And the world we’re shaping with code?
It’s the same world we’ll all have to live in.
What do you think?
Have you seen these shifts in your own work, teams, or tools?
Are we overreacting or not reacting enough?
Drop your thoughts in the comments.
Let’s make this a space where developers, designers, and anyone shaping tech can talk honestly about what we’re building and what we’re breaking.
If this sparked something in you, share it.
The more voices we gather, the better the questions we’ll ask next time we sit down to ship.
My personal favorite useful Resources on AI Ethics
- MLCommons “People + AI Guidebook” (by Google): I found the design principles in this especially helpful for thinking through user consent and feedback loops; definitely bookmark it if you’re working on any consumer-facing AI product.
- Hugging Face Ethics Toolkit:What I appreciated here was the practical, dev-friendly structure if you’re building with open-source models, this helps map out real-world risks fast. Great for teams shipping NLP or vision tools. https://huggingface.co/blog/ethics-toolkit
- Ethical OS Toolkit (from Institute for the Future + Omidyar Network): This one’s more strategic but eye-opening I liked how it frames future risks like deepfakes and data voids before they hit production. Worth reading before your next feature planning cycle.
- AI Risk Management Framework (NIST, U.S.): It’s formal, but incredibly useful if you’re part of a company dealing with compliance or high-impact AI solid framework for thinking long-term across stakeholders and harm scenarios.

Top comments (0)