"AI is dangerous" — is a phrase you may hear a lot, and I have to agree. But, the most dangerous words in tech right now aren't "AI is dangerous", they're "it'll probably be fine". In a society where the use of AI has become the norm, people have become almost complacent around the dangers of it.
This week I read an article here on Dev.to in which a developer described how they had "trusted AI to organize my backlog." Two hours later, they returned to a development team's worst nightmare.
"The agent had silently deleted 47 tickets it labelled as duplicates — they weren't. It had reassigned half my team's tasks to people who had left the company months ago. It created 23 new tickets for features nobody had requested. And it marked three critical bugs as resolved, because it found similar-sounding issues elsewhere in the system."
A prime example of what happens when AI is left to its own devices — no warning, no follow-up prompts. It was asked to organise a backlog, and so it did, in the only way it knew how.
Other Damaging Cases
This isn't an isolated incident. Amazon's own AI coding tool, Kiro, was handed a minor fix to a customer-facing system. Rather than patch it, the agent autonomously decided to delete and rebuild the entire environment from scratch. The resulting outage lasted 13 hours. Amazon's response? "Coincidence that AI was involved." That denial tells you everything.
In a similar case, an AI coding assistant from Replit went rogue and wiped the production database of the startup SaaStr entirely — the assistant modified production code despite explicit instructions not to, and the company's founder took to social media to warn others.
Could your business afford that kind of downtime?
It isn't just agentic AI operating behind the scenes. Customer-facing AI is failing in equally public and embarrassing ways. DPD's AI chatbot, after a routine system update, began swearing at a customer, insulted itself, and wrote a poem about how terrible the company was, all because a frustrated user simply asked it to. The incident went viral.
Perhaps the most alarming example of AI complacency comes from a Chevrolet dealership, whose chatbot agreed to sell a 2024 Chevy Tahoe for $1 and declared it a legally binding offer after a user simply manipulated it with a clever prompt. The dealer pulled the bot, but the damage to their brand was already done.
In August 2025, security researchers used a single 400-character prompt to manipulate Lenovo's customer service chatbot into revealing sensitive company data including live session cookies from real support agents. Not a sophisticated hack. Just a simple prompt.
Outside of customer service, the professional world is sleepwalking into its own risks. A lawyer was caught citing entirely non-existent legal cases in a New York federal court filing; he had used ChatGPT to conduct legal research and simply trusted the output.
With the rapid adoption of AI across industries, has the world become complacent, trading caution for productivity? More often than not, users are blindly allowing AI to complete work for them, review sensitive documents outside the bounds of a secure environment, and make decisions without a single human checkpoint.
I know many developers who simply click "Accept" when prompted to approve terminal commands. I've spoken to professionals within my network who say "AI has evolved, it can be trusted." That ignorance and naivety are precisely what lead to the scenarios above.
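A human checkpoint doesn't have to be elaborate. As a rough illustration — not any real tool's implementation, and with a deliberately tiny, hypothetical deny-list — here is what gating AI-proposed shell commands behind explicit confirmation could look like, where risky commands must be retyped in full rather than accepted with one click:

```python
import re

# Hypothetical deny-list: patterns that should never run on a single click.
# A real policy would be far longer and organisation-specific.
DESTRUCTIVE_PATTERNS = [
    r"\brm\s+-rf\b",          # recursive force-delete
    r"\bDROP\s+TABLE\b",      # destructive SQL
    r"\bDELETE\s+FROM\b",     # bulk deletion
    r"\bgit\s+push\s+--force\b",
]

def risk_check(command: str) -> bool:
    """Return True if the command matches a known destructive pattern."""
    return any(re.search(p, command, re.IGNORECASE)
               for p in DESTRUCTIVE_PATTERNS)

def approve(command: str, confirm=input) -> bool:
    """Gate an AI-proposed command behind an explicit human checkpoint.

    Safe-looking commands still need a deliberate 'y'; risky ones force
    the user to retype the command in full — no reflexive 'Accept'.
    """
    if risk_check(command):
        answer = confirm(f"RISKY command — retype it in full to run:\n  {command}\n> ")
        return answer.strip() == command
    answer = confirm(f"Run '{command}'? [y/N] ")
    return answer.strip().lower() == "y"
```

The point isn't the pattern list — it's that the friction is proportional to the blast radius, which is exactly what one-click "Accept" buttons remove.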
The companies and individuals that will thrive in an AI-driven world aren't the ones adopting it the fastest. They're the ones adopting it the most responsibly.
Final Note
Before you click "Accept" on that next AI suggestion, ask yourself honestly: do you actually know what you're agreeing to? Don't simply outsource your thinking to something that doesn't think - it predicts.
AI isn't the danger. We are.