
Samar Rai

Your AI Assistant Might Be Lying to You (and Other Uncomfortable Truths About AI Today)

Let's start with a scenario. You ask an AI chatbot whether a medication is safe to take with alcohol. It answers confidently, without hesitation. You follow its advice. But what if it was wrong? What if the AI simply made up an answer because it was designed to always sound sure of itself — even when it isn't? This isn't science fiction. It's something that researchers, ethicists, and security experts are sounding alarms about right now. Artificial Intelligence has arrived in our lives faster than our ability to fully understand it — and that gap between how powerful these tools are and how well we oversee them is where serious problems begin to grow.
This blog is for anyone who uses AI — which, at this point, is most of us. You don't need a technical background to care about these issues. In fact, the less technical you are, the more important it is that someone explains what's at stake in plain language. So let's get into it.
The Problem with a Machine That Never Says 'I Don't Know'
One of the most well-documented issues in AI today is something researchers call "hallucination." It sounds almost poetic — but the reality is far more unsettling. Hallucination is when an AI confidently makes up information that is completely false.
Ask an AI to recommend a book, and it might invent a title that doesn't exist. Ask it about a legal case, and it may cite a court ruling that was never made. Ask it for a medical fact, and it might blend two unrelated pieces of information into one dangerous answer — delivered with total calm and authority.
88% of AI-generated responses in one Stanford study contained at least one factual inaccuracy when tested across medical and legal topics (2023).
The reason this happens comes down to how these systems are built. Most large AI tools are trained to predict which word (or fragment of a word) should come next, based on enormous amounts of text from the internet. They are extraordinarily good at sounding coherent and knowledgeable, but they don't actually "know" anything in the way humans do. There's no internal fact-checker. No alarm goes off when the AI is guessing.
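To see what that means in practice, here's a minimal sketch using the small, freely available GPT-2 model (an illustrative stand-in; commercial chatbots are far larger but rest on the same mechanism). All the model produces is a probability for every possible next word. Nowhere is there a step that asks whether the likeliest word is true.

```python
# Minimal sketch of next-word prediction with the open-source GPT-2 model,
# via Hugging Face's transformers library. Any causal language model
# behaves the same way: it scores possible continuations, nothing more.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of Australia is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a raw score for every word in the vocabulary

# Turn the scores at the final position into probabilities and show the top 5.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r}: {p.item():.3f}")

# Nothing above checks whether the most probable word is *correct*.
# The model only knows which words tend to follow which in its training text.
```

Run this and GPT-2 may well rank a wrong city highly. It isn't lying in any deliberate sense; it's ranking plausible-sounding continuations, because that is all it was ever built to do.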
And here's the thing: most users don't know this. A survey by the Reuters Institute in 2023 found that over 60% of regular AI users believed the information provided to them was accurate and verified. The trust is high. The guardrails are thin.
Who Is Watching the Watchers?
Here's a question most people never think to ask: who decides how an AI behaves? When you use a chatbot built by a large technology company, there's an entire team of engineers and policy writers who have shaped its personality, its limits, and its values. But those choices are made behind closed doors — and they aren't always made with your best interests in mind.
Take the issue of bias. AI systems learn from human-generated data, and humans carry biases — in their language, their assumptions, and the structures they've built over centuries. An AI trained on this data absorbs those biases quietly, then reproduces them at scale.
"A hiring algorithm used by a major tech firm was found to systematically score resumes from women lower than those from men — because it had been trained on a decade of historical hiring data from a male-dominated industry." — MIT Technology Review, 2022
This wasn't a rogue programmer with a grudge. It was an AI doing exactly what it was trained to do. And the people it was quietly discriminating against had no idea — and no recourse.
The issue of who controls AI also runs deeper than corporate decisions. Governments around the world are scrambling to catch up. The European Union's AI Act, passed in 2024, represents the world's first comprehensive legal framework for regulating AI systems — classifying them by risk level and placing strict rules on the most dangerous applications. It's a promising start, but enforcement remains a serious challenge. The United States, by contrast, still has no single federal AI law on the books.
$150 Billion+ estimated global spending on AI by governments and corporations in 2024, yet only a fraction goes toward AI safety and ethics research (OECD, 2024).
The Privacy You Didn't Know You Were Giving Away
Every time you type a question into an AI chatbot, something happens that most people don't think about: your words are data. They may be stored, analyzed, and, depending on the company's policies, used to train the next version of the model. The embarrassing question you asked at 2 a.m., the medical symptom you described, the business idea you were testing out: all of it may live somewhere on a company's servers.
In 2023, Samsung engineers accidentally leaked confidential source code and internal meeting notes by pasting them into ChatGPT. The information entered OpenAI's systems and could not be retrieved or deleted. Three separate incidents happened within a matter of weeks. Samsung subsequently banned internal use of generative AI tools.
This kind of data exposure isn't unique to corporations. Individuals do it every day without realizing it. A 2023 study by Cyberhaven found that sensitive company data made up roughly 11% of what employees pasted into AI tools, and that most companies had no policies in place to prevent it.
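For teams that want a policy with teeth, even a crude safeguard beats nothing. Here's a minimal sketch of a client-side "scrubber" that could run before any text is sent to an external AI tool. The patterns and the redact() helper are hypothetical illustrations, not a production data-loss-prevention system.

```python
# Hypothetical illustration: strip obviously sensitive strings from a prompt
# before it ever leaves the machine. Real DLP tooling is far more thorough.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "API_KEY": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a sensitive pattern with a placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarize this: contact alice@corp.com, api key sk-abc123def456ghi789"
print(redact(prompt))
# Summarize this: contact [EMAIL REDACTED], api key [API_KEY REDACTED]
```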
When AI Becomes a Weapon
Up to this point, we've talked about AI causing harm by accident — through false information, embedded bias, or careless data handling. But there's a darker chapter to this story: AI being used deliberately to cause harm.
Deepfakes — AI-generated videos or audio that make it look and sound like someone said or did something they never did — have exploded in recent years. In 2024, a finance worker at a Hong Kong firm was tricked into transferring $25 million after attending a video call with what he believed were his company's executives. Every person on that call was a deepfake. The whole meeting was fabricated.
3,000% increase in deepfake fraud attempts reported by businesses between 2022 and 2024 (Onfido Identity Fraud Report, 2024).
Beyond deepfakes, AI is being used to supercharge phishing attacks — those fake emails designed to trick you into clicking a dangerous link or handing over your password. In the past, phishing emails were often easy to spot because of awkward language or obvious errors. Today, AI can generate thousands of personalized, perfectly written phishing messages in minutes, tailored to specific individuals based on their social media profiles and online behavior.
"AI-generated phishing emails have a click-through rate nearly 2x higher than those written by human attackers." — IBM X-Force Threat Intelligence Index, 2024
So What Can Actually Be Done?
Reading all of this, it's tempting to feel helpless. But the situation isn't hopeless — it requires attention, accountability, and collective action. Here's what meaningful progress actually looks like:
Transparency from AI companies is a fundamental starting point. When an AI system makes a decision that affects your life — whether it's rejecting your loan application or influencing what news you see — you deserve to know why. Explainability, as researchers call it, is a design choice. Companies can build it in. Many simply choose not to.
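To show that this is a design choice rather than magic, here's a minimal sketch using the open-source shap library on a toy loan-scoring model. The feature names and data are invented for illustration; real lending systems are vastly more complex, but the principle of attributing a decision to its inputs is the same.

```python
# Toy example of explainability: attribute a model's score to its inputs.
# Features, data, and the "credit score" rule are all invented for illustration.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # invented features: income, debt, years_employed
y = X[:, 0] - X[:, 1]          # toy "credit score": income minus debt

model = RandomForestRegressor(random_state=0).fit(X, y)

# SHAP attributes each individual prediction to the input features, so an
# applicant could be told which factors pushed their score up or down.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])[0]  # one value per feature

for name, value in zip(["income", "debt", "years_employed"], contributions):
    print(f"{name}: {value:+.3f}")
```

Output like this attributes the applicant's score to each input, which is the difference between "the computer said no" and an answer a person can actually question.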
Independent auditing of AI systems is another critical need. Right now, most AI tools are evaluated only by the companies that built them. That's the equivalent of letting a pharmaceutical company be the only one that tests its own drugs. Third-party audits, especially for high-stakes applications in healthcare, law, and finance, are not a luxury — they're a necessity.
And then there's education. The single most powerful tool against AI manipulation, misinformation, and misuse is an informed public. When people understand that AI can hallucinate, that it can be biased, that it isn't a neutral oracle — they engage with it more wisely. Schools, workplaces, and media outlets all have a role to play in closing this literacy gap.
Only 35% of adults in a 2024 global survey by Ipsos reported feeling confident they could identify AI-generated misinformation.
The technology will keep advancing. That's certain. The question is not whether AI will become more powerful — it will. The question is whether we build the wisdom, the laws, and the culture to match that power with responsibility.
Final Thought
We are living through one of the most significant technological shifts in human history — and most of us are doing so without a map. The decisions being made right now about how AI is built, who it serves, and how it's governed will shape the world for generations. Those decisions shouldn't be made only by engineers and executives behind closed doors.
You are part of this story. The AI tools you use, trust, question, or reject send signals to the companies building them. The politicians you vote for will decide whether meaningful regulation ever gets written. The conversations you have — with your family, your colleagues, your community — help determine what kind of AI future we actually build.
"The most dangerous thing about AI is not that it will become too smart. It's that we will stop asking questions about it."
Question to leave you with: The next time an AI gives you a confident answer — about your health, your finances, your rights — will you take it at face value? Or will you stop and ask: how does this machine actually know that, and whose interests does it serve when I believe it without question?
