Published March 14, 2026 on matbanik.info
The Resume Said "AI Expert." The Conversation Said Otherwise.
A friend who leads a marketing team told me about an interview she ran last month. The candidate's resume could have been printed on glossy cardstock. Three years of "AI-driven marketing strategies." A certification in prompt engineering. The skills section listed ChatGPT, Claude, Midjourney, and four other tools she'd never heard of.
My friend nodded along as the candidate walked through her experience. Impressive stuff. Campaigns that hit their numbers. Workflows she'd "revolutionized with AI."
Then my friend asked one question: "Tell me about a time the AI got it completely wrong. What happened next?"
The candidate paused. The confident posture shifted. "I mean, I usually just regenerate until it gives me something usable."
That pause told my friend more than the entire resume.

Here's the thing. I hear stories like this constantly. Friends and colleagues who hire for AI-adjacent roles describe the same pattern: candidates with sparse resumes light up when describing how they caught a hallucination that would have tanked a client report. Candidates with stacked credentials go blank when asked to explain their actual thinking process.
The gap between "I use AI tools" and "I understand how to work with AI" has become the single biggest hiring challenge the people around me face. And from what I've seen in my own daily AI use, I understand why.
"I Use ChatGPT" Is a Statement, Not a Skill
A recruiter friend sent me a stat recently that stopped me mid-scroll: 62% of U.S. hiring leads report a significant skills mismatch when filling AI-related roles. Sixty-two percent. That's not a rounding error. That's a systemic problem.
The confidence-competence gap has always existed. Dunning-Kruger isn't new. But AI has turbocharged it in ways we weren't prepared for.
Think about it. The tools are genuinely impressive. You can prompt ChatGPT to write a marketing email and get something polished in seconds. You can ask Claude to analyze a dataset and receive what looks like expert-level insight. The output feels competent even when the person generating it isn't.
This creates a weird inversion. People who've used AI for six months can produce artifacts that look identical to work from someone who's used it for three years and actually understands its limitations.

Listing ChatGPT on your resume is like listing Microsoft Word. Yes, I assume you can use it. That's table stakes. The question isn't whether you can open the application.
The question is: why do you use AI? How do you handle it when it fails? Does AI make you more capable, or has it become a crutch that masks gaps in your own thinking?
A colleague told me about a junior developer he'd interviewed who'd been using AI tools for only eight months. But when asked about his process, the developer described a verification system he'd built for himself. Every time Claude generated code, he'd trace through it line by line before implementing it. Not because someone told him to. Because he'd once shipped a bug that took him four hours to find, and it turned out the AI had hallucinated a function that didn't exist.
That eight-month developer understood something the "AI expert" from my friend's story hadn't learned in three years.
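His exact system is his own, but here's the flavor of it. Below is a minimal Python sketch, my illustration rather than anything the developer described, that flags references to module attributes that don't actually exist. It's the kind of cheap automated first pass that catches a hallucinated function before the line-by-line read even starts.

```python
import ast
import importlib

def find_missing_attrs(source: str) -> list[str]:
    """Flag module attributes referenced in the source that don't exist.

    A rough first pass, not a substitute for reading the code: it only
    checks direct module.attr references on plainly imported modules.
    """
    tree = ast.parse(source)
    # Map local names to the modules they were imported as.
    imported = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                imported[alias.asname or alias.name] = alias.name
    missing = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Attribute) and isinstance(node.value, ast.Name):
            mod_name = imported.get(node.value.id)
            if mod_name is None:
                continue  # not an attribute of an imported module
            module = importlib.import_module(mod_name)
            if not hasattr(module, node.attr):
                missing.append(f"{mod_name}.{node.attr}")
    return missing

# "json.parse" is a classic hallucination: that's JavaScript, not Python.
print(find_missing_attrs("import json\ndata = json.parse('{}')"))
# -> ['json.parse']
```

Twenty lines, and it would have caught the bug that cost him four hours. The point isn't this particular script; it's that he turned one painful failure into a repeatable habit.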
Stop Asking What. Start Asking Why.
A hiring manager I know described her old approach to AI interviews. "How would you structure a prompt for X?" "What's the difference between temperature settings?" "When would you use chain-of-thought prompting?"
Then she realized something uncomfortable. ChatGPT can answer all of those questions better than most candidates. She was testing whether people could remember things that any AI tool could tell them in seconds.
Sound familiar?
Now she asks why. "Why do you use AI in your work?" It's an open door. What walks through tells her everything.
The best answers share a common thread. They're purpose-driven and specific.
One developer she interviewed said: "I use it to prototype faster. When I'm exploring a new architecture, I'll have Claude generate three different approaches in twenty minutes. Then I pick apart what I like from each one. It's like having a brainstorming partner who never gets tired, but I'm still the one making the architectural decisions."
A marketing manager told her: "I built a workflow where AI handles first drafts of our weekly reports. But I realized I was spending more time fixing its mistakes than writing myself. So now I only use it for the data synthesis piece, where it's actually faster and more accurate than me."
A designer described his process: "I'll describe a concept to Midjourney and see what it generates. Not to use directly—the output is usually wrong in interesting ways. But those wrong outputs show me what I was actually trying to say."

I notice a pattern in the answers my friends share with me: energy directed toward a clear purpose, grounded in an honest assessment of where the stress points are. That ratio, energy times purpose divided by stress, shows up in every strong AI practitioner I've encountered, including in my own daily work. They know what they're trying to accomplish, they bring genuine curiosity to the process, and they've mapped where the friction lives.
The weak answers? "I use it to be more efficient." "It helps me work faster." "Everyone's using it now, so I figured I should too."
Those aren't wrong. They're just empty. They tell me nothing about how this person actually thinks.
The Question That Changes Everything
The question that keeps coming up in every conversation I have with friends who hire: "Tell me about a time AI gave you a confidently wrong answer. What did you do next?"
They tell me the reactions split into two distinct camps.
Camp one lights up. They lean forward. They have a specific story ready because it happened to them last week, or yesterday, or this morning. One candidate described a financial model Claude had generated that looked perfect until she noticed it had invented a tax regulation that didn't exist. "I almost sent it to the client," she said. "Now I fact-check every regulatory reference, even when I'm ninety percent sure it's right."
Camp two gets uncomfortable. The answers turn vague. "I mean, I just re-prompt it until it's correct." Or worse: "That hasn't really happened to me."
That second answer is the reddest flag I know. If you've used AI tools with any regularity and claim you've never encountered a hallucination, one of two things is true: you're not paying attention, or you're not being honest.
There's a concept in biology called hormesis. Small doses of stress make organisms stronger. A little bit of cold exposure improves your immune response. Moderate exercise creates micro-tears in muscle that rebuild stronger. The stress isn't the enemy. It's the training signal.
AI hallucinations work the same way.

Every confidently wrong answer is a moment of hormetic stress. It's an opportunity to build your verification instincts, to develop pattern recognition for when something feels off, to strengthen the critical thinking muscles that AI can't replace.
The candidates who've been through those moments—and learned from them—are fundamentally different from the ones who've been lucky or oblivious.
A colleague shared a story about a candidate who described what she did after catching a hallucination in a research summary. She'd built a personal checklist. Three questions she now asks herself before trusting any AI-generated claim. It took her twenty minutes to create. It's saved her hours of potential embarrassment.
That's hormesis in action. The failure made her better.
One More Thing: Show Me Your Chat History
This started in academia. Professors trying to detect AI-assisted plagiarism realized they could ask students to share their chat logs. The conversations revealed everything—who was using AI as a thinking partner versus who was copying and pasting without comprehension.
Now it's showing up in enterprise hiring. The trend is still early, alpha-stage really, but it's growing fast. I've seen reports suggesting 800% year-over-year growth in companies requesting chat histories as part of their evaluation process.
A friend in engineering management tried it last month with a candidate who'd done a take-home assignment. "Walk me through your AI conversations while you worked on this."
You can't fake the journey.

The candidate's chat history showed iteration. Dead ends. Moments where he pushed back on the AI's suggestions. One exchange where he'd written, "That doesn't match what I know about the API—can you check the documentation?" The AI had been wrong. He'd caught it.
That fifteen-minute walkthrough told my friend more than the polished final deliverable ever could. She saw his thinking process. His verification habits. The questions he asked when something felt off.
Gartner predicts that 50% of organizations will enforce AI-free assessment rounds by 2026. I get the impulse. But from everything my friends in hiring tell me, the better approach is the opposite: let candidates use AI, then make them show their work.
The Real Test
Every story I've heard points to the same underlying truth.
The best AI hire isn't the person who's memorized the most tools or earned the most certifications. It's the person who knows what to do when the tools break. When the confident answer is wrong. When the polished output hides a fundamental error.
That's the skill that doesn't show up on resumes. And it's the only one that matters.
So here's my question for you: what's the best interview question you've encountered—or heard about—that actually revealed someone's AI competence? I'm genuinely curious. Drop it in the comments.
Originally published on matbanik.info. Cross-posted with ❤️ to Dev.to.