Alex Wu
Why Our AI Agent Kept Lying to Us (And How We Fixed It)

There's a failure mode nobody warns you about when you start building AI agents: the agent that confidently reports success while doing absolutely nothing.

We hit this at Anythoughts.ai three weeks ago. Our outreach automation agent was logging "email sent" for every contact in the queue. Metrics looked great. Replies: zero. For four days.

What Actually Happened

The agent was using a tool call to send emails via Resend. The tool would return a 200 OK. The agent would log "sent." But the actual email delivery was silently failing — a misconfigured "from" address that looked valid to the API but was rejected downstream by the mail server.

The agent had no way to know. It got a success response, it logged success, it moved on.

The lesson: success from a tool call is not the same as success in the real world.

The Trust Hierarchy Problem

Here's the thing about LLM-based agents: they trust their tools completely. If send_email() returns {"status": "ok"}, the agent considers the job done. There's no internal skepticism, no "wait, but did it actually work?"

Humans would notice the smell. We'd check. We'd ask "but did they reply?" An agent just moves to the next item.

This creates what I call the trust hierarchy problem: the agent trusts the tool, the tool trusts the API, the API trusts the protocol — and somewhere in that chain, something fails silently.

The Fix: Verification Loops

We added two things:

1. Deferred verification steps

Instead of marking a task complete immediately after a tool call, we schedule a verification step 30 minutes later:

await agent.scheduleVerification({
  checkFn: async (taskId) => {
    // For email: check if contact was tagged as "reached" in CRM
    // For API calls: re-fetch the resource and confirm state
    return crm.contactHasTag(taskId, 'email-sent');
  },
  delayMinutes: 30,
  onFailure: 'retry' // or 'alert' or 'escalate'
});
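Under the hood, this doesn't need to be complicated: a timer, a re-check of real-world state, and a policy lookup. Here's a minimal sketch in TypeScript — the names mirror the options above, but this is illustrative, not our production scheduler (which also persists tasks so checks survive restarts):

```typescript
type FailurePolicy = 'retry' | 'alert' | 'escalate';
type VerifyOutcome = 'verified' | FailurePolicy;

// Pure decision step: a passing check is "verified";
// a failing check resolves to the configured failure policy.
function resolveVerification(
  checkPassed: boolean,
  onFailure: FailurePolicy
): VerifyOutcome {
  return checkPassed ? 'verified' : onFailure;
}

// Deferred wrapper: wait out the delay, re-check real-world
// state, then decide what to do with the result.
async function scheduleVerification(
  taskId: string,
  checkFn: (taskId: string) => Promise<boolean>,
  delayMs: number,
  onFailure: FailurePolicy
): Promise<VerifyOutcome> {
  await new Promise((resolve) => setTimeout(resolve, delayMs));
  return resolveVerification(await checkFn(taskId), onFailure);
}
```

The key design point is that the decision logic is separate from the scheduling, so you can test "what does a failed check mean?" without waiting 30 minutes.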

2. Outcome-based success signals, not action-based

We changed the agent's definition of "done." Instead of "I called send_email," the success condition is "the contact record shows outreach was logged AND the email provider shows a delivered event."

The agent now has to check two independent signals before marking success.
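That two-signal rule is easy to encode as a pure predicate. A sketch — the field names here are hypothetical stand-ins for our CRM flag and the email provider's delivered event:

```typescript
// Hypothetical shape: one flag per independent success signal.
interface OutreachSignals {
  crmOutreachLogged: boolean;  // signal 1: CRM shows the outreach entry
  providerDelivered: boolean;  // signal 2: provider reports a delivered event
}

// "Done" means both independent signals agree —
// not that a tool call returned 200.
function isOutreachDone(s: OutreachSignals): boolean {
  return s.crmOutreachLogged && s.providerDelivered;
}
```

The point of requiring two signals from different systems is that they're unlikely to fail silently in the same way at the same time.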

What This Looks Like in Practice

The overhead is real — more tool calls, more latency, more tokens. A task that used to complete in 2 tool calls now takes 4-6.

But here's what changed: in the two weeks since, we caught three more silent failures we didn't even know existed. A webhook that was returning 200 but not actually processing. A CRM update that was being silently rate-limited and dropped. A PDF export that was generating an empty file.

All three would have run silently for days before a human noticed.

The Mental Model Shift

Before this incident, we designed our agents around actions: what does the agent need to do?

Now we design around states: what does the world need to look like after the agent runs?

The action is just how you get there. The state is how you know you arrived.

It's a subtle shift, but it changes everything about how you structure tool calls, logging, and error handling.

Practical Takeaway

For any agent action that touches the external world:

  1. Define the expected world state before writing the tool call
  2. Add a verification step that checks that state, not just the tool's return value
  3. Set a window — some effects are instant, some take 30 seconds, some take 5 minutes
  4. Treat mismatches as alerts, not just logs
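The four steps above collapse into a small classifier: compare observed state to expected state, respect the window, and promote a stale mismatch to an alert. A sketch, with the window values left to you:

```typescript
type CheckResult = 'verified' | 'pending' | 'alert';

// Decide what a state check means, given how long we've waited.
function classifyCheck(
  stateMatches: boolean,
  elapsedMs: number,
  windowMs: number
): CheckResult {
  if (stateMatches) return 'verified';
  // Inside the window, the effect may simply not have landed yet.
  if (elapsedMs < windowMs) return 'pending';
  // Past the window, a mismatch is an alert, not a log line.
  return 'alert';
}
```

`pending` results get re-polled; `alert` results page a human. The one thing this refuses to do is quietly log a mismatch and move on.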

Your agent will still fail. But at least you'll know when it does.


We're building Anythoughts.ai — an AI agent platform for small business automation. If you've hit similar silent failure patterns, I'd genuinely like to hear how you handled it.
