Nerd Snipe

Posted on

The 'AI Can't Do That' Conversation I Keep Having

I had this conversation again last week.

Client calls. Excited. They've seen what ChatGPT can do, read about AI replacing developers, watched their nephew generate a landing page in thirty seconds. They want an AI that can "just handle all the customer support tickets."

Here's the thing — I've been building software for over three decades now, and the gap between what people think AI can do and what it actually delivers in production is... well, it's a problem.

The Demo Effect

Demos are magic. I get it. You type a question, the AI responds perfectly, everyone claps. But production systems don't run on demos. They run on edge cases and angry customers and that one weird data format from 2009 that nobody documented.

I've built a lot of AI systems for clients at this point. Custom agents, tool-using workflows, the whole deal. And honestly? The technology is incredible. But it's not magic. It's software. Which means it breaks in software ways.

What I Actually Tell Clients

When someone comes to me wanting AI to "just do the thing," I ask three questions:

What happens when it's wrong?

Not if. When. Because LLMs hallucinate. They make stuff up. They sound confident while being completely incorrect. If your use case can't tolerate that — and most can't — you need guardrails. Human review. Fallback systems. Which means it's not "just AI" anymore.
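One way to picture those guardrails is code that treats the model's answer as untrusted input. This is a minimal sketch, not a real API — `call_llm`, the topic whitelist, and the length check are all placeholder assumptions:

```python
# Guardrail sketch: every model answer passes through checks,
# and anything unverifiable falls back to a human reviewer.

ALLOWED_TOPICS = {"billing", "shipping", "returns"}  # topics we trust the model on

def call_llm(ticket: str) -> dict:
    # Placeholder for a real model call; returns an answer plus
    # the topic the model believes the ticket is about.
    return {"answer": "Refunds usually post in 5-7 business days.", "topic": "billing"}

def answer_ticket(ticket: str) -> dict:
    result = call_llm(ticket)
    # Guardrail 1: reject anything outside the topics we trust it on.
    if result["topic"] not in ALLOWED_TOPICS:
        return {"status": "needs_human", "draft": result["answer"]}
    # Guardrail 2: empty or suspiciously short answers go to a person too.
    if len(result["answer"].strip()) < 20:
        return {"status": "needs_human", "draft": result["answer"]}
    return {"status": "auto_reply", "draft": result["answer"]}

print(answer_ticket("Where is my refund?"))
```

The point isn't these specific checks — it's that the moment you write them, you've admitted the system is "AI plus review logic," not "just AI."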

Can you describe the edge cases?

Here's what I keep telling my team: the easy stuff works out of the box now. It's the weird stuff that kills you. The customer who types in emoji. The support ticket that's actually three questions disguised as one. The edge case nobody thought about until 3am on a Saturday.

After 30 years of writing code, I still get surprised by how users actually use software. AI doesn't make that go away. It just moves the complexity around.

What's your ground truth?

AI works best when you can verify the output. Search queries? Great — you can show sources. Code generation? Amazing — you can run tests. Customer support? Trickier. How do you know if the response was actually helpful? If it solved the real problem? If it didn't make the customer angrier?

You need measurement. Logging. Feedback loops. The boring infrastructure stuff that makes everything actually work.
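That boring infrastructure can be surprisingly small to start. A sketch, with an in-memory dict standing in for a real datastore and the function names purely illustrative: log every response with an ID, so feedback that arrives later can be joined back to it.

```python
import uuid
from datetime import datetime, timezone

LOG: dict[str, dict] = {}  # stand-in for a real datastore

def log_response(ticket: str, answer: str) -> str:
    """Record an AI response and return an ID for later feedback."""
    rid = str(uuid.uuid4())
    LOG[rid] = {
        "ticket": ticket,
        "answer": answer,
        "ts": datetime.now(timezone.utc).isoformat(),
        "helpful": None,  # filled in when feedback arrives
    }
    return rid

def record_feedback(rid: str, helpful: bool) -> None:
    LOG[rid]["helpful"] = helpful

def helpful_rate() -> float:
    """Share of rated responses the customer marked helpful."""
    rated = [r for r in LOG.values() if r["helpful"] is not None]
    return sum(r["helpful"] for r in rated) / len(rated) if rated else 0.0

rid = log_response("Where is my order?", "It ships tomorrow.")
record_feedback(rid, helpful=True)
print(f"helpful rate: {helpful_rate():.0%}")
```

Even this toy version answers the question the demo never does: is the thing actually helping?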

The Real Win

But here's where it gets interesting.

When you scope it right — when you accept what AI actually is instead of what the hype says it should be — you can build some genuinely useful stuff. I've got clients running AI systems that work. Really work. In production. Making money.

The trick? They're not trying to replace humans entirely. They're augmenting workflows. Handling the repetitive stuff. Surfacing insights. Doing the first pass so humans can focus on the complex cases.

Something about this approach clicked when I tried it on a client project last month. They wanted AI to "write all the documentation." What we actually built: an AI that drafts documentation based on code changes, which then gets reviewed and edited by their team. 80% time savings. Zero hallucinated API endpoints in the final docs.

That's the pattern. AI does the grunt work. Humans do the judgment calls.

The Uncomfortable Truth

Most of what people want AI to do... we could already do with traditional software. We just didn't want to write all the rules by hand. AI lets us skip that part — the model learns the patterns instead of us coding them.

Which is incredible! I'm not downplaying it. But it's also why the same engineering principles still apply. You still need error handling. Monitoring. Testing. Graceful degradation. All the stuff that makes software reliable.

The AI part might be new. The rest? Same problems. Different tools.

So What Do I Actually Build?

These days, most of my AI work falls into a few categories:

  • Classification and routing — AI reads the input, figures out what kind of request it is, routes it to the right place. Super reliable because the output space is constrained.
  • Draft generation — AI creates the first version, human refines it. Works for emails, reports, code, documentation.
  • Structured extraction — Pull specific fields from messy input. Way better than regex. Still needs validation.
  • Search and retrieval — Semantic search is genuinely game-changing. Actually finds relevant stuff even when the keywords don't match.

Notice a pattern? Limited scope. Clear success criteria. Human in the loop when it matters.

The Conversation I Want to Have

Instead of "Can AI do this?" I'd rather talk about:

  • What are you doing manually that's driving you crazy?
  • Where are you spending time on repetitive work?
  • What would get 10x easier with better search or better summarization?

Then we can figure out if AI is the right tool. Sometimes it is. Sometimes a spreadsheet and a Python script would work better.

After all these years, I still believe in using the right tool for the job. AI is a powerful tool. It's not the only tool.

Want to talk about what AI could actually do for your project? Not the hype version — the real version that ships and works and makes your life easier. Hit me up at NerdSnipe.cc and let's figure it out together.
