Evan Lausier

The Lies AI Tools Tell Us (And Why Your Boss Believes Them)

I had a disagreement with a coworker recently. Not a heated one, more of a polite, professional one about whether our application could perform a particular function. He was confident it could. I was confident it couldn't. I had the minor advantage of having actually worked with the application for years, but he had something more powerful: ChatGPT agreed with him.
This is the conversation that followed, more or less:

Him: "ChatGPT says it can do this."

Me: "It can't."

Him: "But ChatGPT says—"

Me: "Ask ChatGPT to show you where in the documentation it says that."

There was a pause. Some typing. A longer pause.

Him: "...now it's saying something different."

And there it was. The moment the oracle blinked when asked to reference Oracle documentation (LOL). ChatGPT, confronted with a request for actual evidence, quietly revised its position to align with what I'd been saying all along. No apology, no acknowledgment that it had just been making things up with complete confidence. Just a pivot, like a politician who's been shown footage of themselves saying the opposite thing.

I use AI tools constantly. Claude Code has genuinely transformed parts of my work life. I don't want to admit how many emails it's answered... LOL. "Nicely respond to this person saying ____"

I've written about the productivity gains elsewhere and I meant every word. But there's a difference between using these tools and trusting these tools, and I'm watching that line get blurred in ways that are starting to concern me.

Here's what I've learned the hard way: AI tools don't know what they don't know. And more importantly, they will never tell you. Ask a human developer if a particular function exists and they'll say "I'm not sure, let me check." Ask an AI and it will describe that function in confident detail, explain how to implement it, and even suggest best practices. All for something that doesn't exist and has never existed.

I've lost count of how many times I've been told that some piece of application functionality was entirely possible, only to discover during testing that it absolutely was not. The explanation sounded plausible. The syntax looked right. The confidence was unwavering. And none of it worked.

This becomes a real problem when you've got junior developers who've grown up with these tools. I'll assign a task, and they'll do what feels natural to their generation—plug it into an AI, get an answer, and run with it. They're not being lazy. They're being efficient in the way they've been taught to be efficient. The issue is that nobody told them this particular tool will lie to their face without flinching.

What I see missing most often is proper testing. And I don't just mean "run it and see if it works." I mean positive and negative testing—verifying that it passes when it should pass AND fails when it should fail. That second part is the one that catches AI-generated nonsense, because AI is remarkably good at producing code that looks like it handles edge cases while actually handling nothing of the sort.
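To make that concrete, here's a minimal sketch in Python with pytest. The `apply_discount` function and its business rules are made up for illustration; the point is the pairing: for every test proving the happy path works, there's a matching test proving that bad input is actually rejected rather than silently accepted.

```python
# A minimal sketch of positive and negative testing with pytest.
# apply_discount and its rules are hypothetical examples, not real app code.
import pytest


def apply_discount(price: float, code: str) -> float:
    """Apply a discount code to a price. Hypothetical business rules."""
    valid_codes = {"SAVE10": 0.10, "SAVE25": 0.25}
    if price < 0:
        raise ValueError("price cannot be negative")
    if code not in valid_codes:
        raise ValueError(f"unknown discount code: {code}")
    return round(price * (1 - valid_codes[code]), 2)


# Positive test: it passes when it should pass.
def test_valid_code_reduces_price():
    assert apply_discount(100.0, "SAVE10") == 90.0


# Negative tests: it fails when it should fail. This is the half that
# catches plausible-looking AI output that "handles" edge cases by
# quietly returning the wrong thing.
def test_unknown_code_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.0, "SAVE99")


def test_negative_price_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(-5.0, "SAVE10")
```

Run it with `pytest` and watch which half of the suite the AI-generated version actually survives.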

The fundamental thing people don't seem to grasp is that the AI is only operating on the parameters you give it. It doesn't know your specific use case. It doesn't know your business logic. It doesn't know that when you said "customer" you meant the franchise owner, not the end consumer. It doesn't know that the "simple" requirement you described has seventeen exceptions that only exist in your head because you've been doing this for years. It takes what you give it, fills in the gaps with statistically plausible guesses, and presents the whole thing with the confidence of someone who has definitely read the documentation.

It hasn't read the documentation. There's a decent chance the documentation doesn't even say what it claims.

The people getting the most out of these tools right now are, paradoxically, the people who need them least. Senior developers who already understand how systems work can spot when the AI is hallucinating. They know what questions to ask. They know when an answer smells wrong. They're using AI to accelerate work they could already do, not to replace understanding they never had.

The danger is the person who pastes output directly into production. The manager who settles technical debates by asking ChatGPT. The developer who skips testing because the code "looked right." These tools have a confidence problem, and it's contagious.

So here's my unsolicited advice for anyone working with AI tools in 2026: Trust but verify. Actually, scratch that—verify, then decide whether to trust. Treat every response like it came from a very enthusiastic intern who might be brilliant but also might be making things up to avoid looking stupid. Ask for sources. Test the edges. And when it tells you something is possible, make it prove it.

Because right now, somewhere, ChatGPT is telling your manager that the system can definitely do something it definitely cannot. And it sounds very confident about it.
