Shreeni D
Stop Trying to Prompt Your Way Out of a Hallucination

I learned this the hard way.

I recently built a personal agent to map dependencies across my local repositories. The goal was simple: ask a question like “Which projects use this specific library version?” and get a clean, reliable answer.

What I got instead was something else entirely.

The agent responded with confidence. The formatting was pristine. The explanation was coherent. And the answer was completely fabricated.

It even referenced a directory that didn’t exist.

I spent time chasing that ghost—digging through my filesystem, double-checking paths—before realizing what had happened: the model had guessed. It saw a pattern in my project names and filled in the blanks with something that looked right.

That’s when it clicked.

The Problem Isn’t the Prompt

My first instinct was to tweak the prompt.

Maybe I needed to:

  • Be more explicit
  • Add constraints
  • Tell it to “only use verified data”
  • Emphasize accuracy over completeness

But none of that actually solves the problem.

Because hallucination isn’t a prompt failure. It’s a system design failure.

You’re asking a probabilistic model to behave like a deterministic system—and no amount of prompt engineering will change that.

The Fix: Add a Source of Truth

The real solution wasn’t better wording. It was better architecture.

Instead of asking the model to know, I required it to check.

I had the AI generate a Python script that scans my local repositories and verifies whether a given library version actually exists. Then I wired that script into the agent’s workflow with a hard rule:

The agent must execute the verification step before responding.

If the script finds matches, great—the model can explain and format the results.

If it doesn’t?

The agent is explicitly forced to say:
“I don’t have enough information.”

No guessing. No filling in gaps. No “best effort” answers.
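The workflow above can be sketched in a few lines. This is a minimal illustration, not the author's actual script: the layout (sibling repo directories under one root, each with a `requirements.txt` pinning exact versions) and all names are assumptions.

```python
import re
from pathlib import Path


def scan_repos(root: str, library: str, version: str) -> list[str]:
    """Deterministic check: return repo names whose requirements.txt
    pins exactly `library==version`. Layout is assumed for illustration."""
    pattern = re.compile(
        rf"^{re.escape(library)}\s*==\s*{re.escape(version)}\s*$",
        re.MULTILINE,
    )
    matches = []
    for req in Path(root).glob("*/requirements.txt"):
        text = req.read_text(encoding="utf-8", errors="ignore")
        if pattern.search(text):
            matches.append(req.parent.name)
    return sorted(matches)


def answer(root: str, library: str, version: str) -> str:
    """The hard rule: the verification step runs before any response."""
    matches = scan_repos(root, library, version)
    if not matches:
        # No verified evidence -> no guessing, no "best effort" answer.
        return "I don't have enough information."
    # Only at this point would the LLM be asked to explain and format
    # the verified results.
    return f"Projects using {library}=={version}: {', '.join(matches)}"
```

The key design choice is that `answer` cannot reach the model path without passing through `scan_repos` first; the refusal is enforced by code, not by the prompt.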

Let the Model Build Its Own Guardrails

The interesting part is that I still used the LLM—but not as a source of truth.

I used it to:

  • Generate the verification script
  • Define the workflow
  • Integrate reasoning with execution

In other words, the model helped build the system that limits its own behavior.

That’s the shift:

  • From magic → to mechanism
  • From answers → to process
  • From trusting outputs → to verifying inputs

Reasoning Engine, Not Database

LLMs are incredibly good at reasoning over information.

They are not reliable sources of truth.

If your agent is answering questions about:

  • Your codebase
  • Your infrastructure
  • Your documents
  • Your data

…then the model should never be the final authority.

It should sit on top of a system that can:

  • Retrieve real data
  • Execute checks
  • Enforce constraints

Think of it this way:

Your LLM is the brain.
Your code is the nervous system.
Your data is reality.

If those aren’t connected, you don’t have intelligence—you have improv.

Stop Prompting, Start Designing

When an AI system fails, it’s tempting to stay in the prompt layer. It feels fast, iterative, and controllable.

But that’s often just avoiding the real work.

If your agent:

  • Hallucinates
  • Makes unverifiable claims
  • Invents structure
  • Sounds right but isn’t

…don’t rewrite the prompt.

Fix the architecture.

Add:

  • Deterministic checks
  • Tooling integrations
  • Execution steps
  • Clear failure modes

Reliability doesn’t come from convincing the model to behave.

It comes from building a system where it can’t misbehave without being caught.

A Better Default

Here’s a simple rule that has held up well for me:

If the answer depends on real-world state, the model must verify it before speaking.

That one constraint eliminates an entire class of hallucinations.
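One way to make that constraint reusable is a guard that wraps every state-dependent answer: run a verifier first, and refuse if it produces no evidence. The decorator below is a sketch of that pattern; the `verify` and `generate` functions are hypothetical stand-ins, not part of the author's system.

```python
from typing import Any, Callable, Optional


def grounded(verify: Callable[..., Optional[Any]]):
    """Decorator enforcing the rule: if the answer depends on real-world
    state, `verify` must run (and find evidence) before anything is said."""
    def wrap(generate: Callable[..., str]) -> Callable[..., str]:
        def guarded(*args, **kwargs) -> str:
            evidence = verify(*args, **kwargs)
            if not evidence:
                # Clear failure mode instead of a plausible guess.
                return "I don't have enough information."
            # The generator only ever sees verified evidence.
            return generate(evidence, *args, **kwargs)
        return guarded
    return wrap
```

Anything decorated with `@grounded(...)` physically cannot answer without the check having run, which is the whole point: the guardrail lives in the architecture, not in the wording.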

The Real Question

So now I’m curious:

When your AI fails, do you reach for the prompt…

…or the architecture?
