Eric Rodríguez

How to stop your AI from hallucinating financial data 🛑🤖

Day 69 of my #100DaysOfCloud challenge! Today was all about QA testing my AI Prompts.

If you let a standard LLM loose on financial questions, it will eventually hallucinate fake statistics to sound smart. I wanted to see if the System Prompts I built for my AWS Bedrock agent would hold up.

I asked my agent to compare my food spending against the "national average for 28-year-olds".

The Response:
"You've spent €54.66 on McDonald's and Starbucks this month, but there's no reliable demographic average to compare it with because you're the only 28-year-old I've ever met who orders fast food 28 times a day."

Why this is an architectural win:
The AI correctly pulled the real figure (€54.66) from my DynamoDB table. But because I strictly sandboxed its knowledge base to the transaction context I supply, it recognized that it had no demographic data at all. Instead of inventing a plausible-sounding number, it fell back on its "Tough Love" persona and mocked me.
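The sandboxing idea can be sketched roughly like this: serialize only the user's own transactions into the system prompt, and tell the model that context is the whole world. (This is a minimal illustration, not my actual Bedrock agent config; the merchant/amount fields and the prompt wording are hypothetical stand-ins.)

```python
# Minimal sketch: build a system prompt whose ONLY knowledge is the
# user's own transactions (as if fetched from DynamoDB).
def build_context(transactions):
    """Serialize the user's transactions as the model's entire context."""
    lines = [f"- {t['merchant']}: €{t['amount']:.2f}" for t in transactions]
    return "This month's transactions:\n" + "\n".join(lines)

SYSTEM_PROMPT = (
    "You are a budgeting assistant with a 'Tough Love' persona. "
    "Answer ONLY from the transaction context below. If a question needs "
    "data that is not in the context (averages, benchmarks, other users), "
    "say you don't have it. Never invent numbers.\n\n"
    "{context}"
)

# Hypothetical transactions that sum to the €54.66 from the post.
transactions = [
    {"merchant": "McDonald's", "amount": 32.40},
    {"merchant": "Starbucks", "amount": 22.26},
]
prompt = SYSTEM_PROMPT.format(context=build_context(transactions))
```

The resulting string would then go into the system field of a Bedrock `Converse` call; the point is that no demographic data ever enters the prompt, so the model has nothing to "compare against" except by fabricating.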

Takeaway for builders:
When writing prompts for data-heavy apps, always give the AI an "exit route" (like sarcasm or a hard 'I don't know') for when it lacks data. A funny refusal is infinitely better than a confident lie.
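For the QA-testing side, a cheap tripwire is to scan the model's reply for euro amounts that never appeared in the source data. This is just an illustrative sketch (the regex, the function name, and the fabricated "€203.10 national average" are all hypothetical), but it catches exactly the confident-lie failure mode:

```python
import re

def audit_amounts(reply, known_amounts):
    """Return any euro amount mentioned in the reply that is NOT
    present in the source data - a simple hallucination tripwire."""
    found = [float(m) for m in re.findall(r"€(\d+(?:\.\d+)?)", reply)]
    return [a for a in found if a not in known_amounts]

# A reply that smuggles in a fabricated benchmark figure:
reply = "You've spent €54.66 this month, versus a €203.10 national average."
print(audit_amounts(reply, known_amounts={54.66}))  # → [203.1]
```

Run this over your test prompts in CI and a non-empty result means the exit route failed and the model made something up.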
