George Belsky

Why Your AI Agent Shouldn't Block When It Needs Human Approval

Your AI agent asks a question. The human is at lunch.

What happens next depends on your architecture. And most architectures get this wrong.

The Pattern Everyone Uses (and Why It Breaks)

agent: "Should I delete the staging database?"
human: (at lunch)
agent: (blocks)
agent: (still blocking)
agent: (30 minutes later, still blocking)
agent: (timeout, session lost, start over)

This is the default in every framework. LangGraph's interrupt(). CrewAI's human_input=True. OpenAI's run pause. They all do the same thing: block and wait.

It works in demos. It breaks in production because:

  • Humans aren't instant. They're in meetings. On a plane. Asleep. The agent needs an answer in 5 seconds, the human responds in 5 hours.
  • Connections drop. WebSocket disconnects. SSH session times out. Terminal closes. Approval state is gone.
  • Nobody knows the agent is waiting. No push notification. No email. No reminder. The agent sits there silently burning compute.
  • No escalation. If the assigned reviewer is on vacation, the agent waits forever. Nobody else gets notified.
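In raw Python, the blocking pattern boils down to a thread parked on a queue. Here is a minimal, self-contained sketch of that failure mode (human_at_lunch simulates a reviewer who replies after the agent has already given up; all names here are illustrative):

```python
import queue
import threading
import time

def blocking_approval(ask, prompt, timeout_s):
    """Sync approval: the calling thread parks here until a human answers.
    If the process dies or timeout_s elapses, the pending question simply
    evaporates - there is nothing durable to resume from."""
    replies = queue.Queue()
    ask(prompt, replies)  # hand the question to some UI channel
    try:
        return replies.get(timeout=timeout_s)  # blocks the whole agent run
    except queue.Empty:
        raise TimeoutError("human never answered; approval state lost")

def human_at_lunch(prompt, replies):
    # Simulated reviewer who answers 2 seconds too late.
    threading.Thread(target=lambda: (time.sleep(2), replies.put(True)),
                     daemon=True).start()

try:
    blocking_approval(human_at_lunch, "Delete the staging database?",
                      timeout_s=0.2)
    outcome = "approved"
except TimeoutError:
    outcome = "session lost"

print(outcome)  # session lost
```

The human's eventual "yes" arrives into a queue nobody is reading anymore; the agent has to start over.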

What Should Happen Instead

agent -> send_intent("needs_approval", remind=5min, timeout=8h)

5 min later:  platform sends reminder (Slack push / email)
30 min later: platform escalates to backup reviewer
2 hours later: human approves from phone

agent <- resumes with approval result

The agent doesn't poll. Doesn't block. Doesn't hold a connection open. It suspends durably and resumes when the human responds - whether that's 5 seconds or 5 hours later.

The Difference: Sync vs Async Approval

| | Sync (block and wait) | Async (durable) |
| --- | --- | --- |
| Human at lunch | Agent blocks forever | Reminder in 5 min |
| Human doesn't respond | Timeout, session lost | Escalation to backup |
| Connection drops | Approval state lost | State durable in DB |
| Response time | Must respond NOW | Hours or days later |
| Reminder | None | Configurable |
| Escalation | None | Chain: A -> B -> team |
| Task types | Yes/No only | Approval, review, form, override, ... |
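What makes the async column survivable is that the pending question lives in a database row rather than a blocked thread, so any worker can pick it up after a crash or reconnect. A minimal illustration with SQLite (the schema is illustrative, not AXME's internals):

```python
import sqlite3
import time

# Durable approval record: survives process restarts and dropped
# connections because the state is a row, not an in-memory wait.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE approvals (
    id TEXT PRIMARY KEY, status TEXT,
    remind_at REAL, escalate_at REAL, expire_at REAL)""")

now = time.time()
db.execute("INSERT INTO approvals VALUES (?, ?, ?, ?, ?)",
           ("apv-1", "pending", now + 300, now + 1800, now + 28800))
db.commit()

# Five minutes later, any worker - not just the process that asked -
# can query for due reminders and fire them.
five_min_later = now + 301
due = db.execute(
    "SELECT id FROM approvals WHERE status = 'pending' AND remind_at <= ?",
    (five_min_later,)).fetchone()
print(due)  # ('apv-1',)
```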

What This Looks Like in Code

Before: DIY async approval (200+ lines)

import secrets

def request_approval(reviewer, context):
    # db, send_slack_message, and the schedule_* helpers are all
    # infrastructure you have to build and operate yourself.
    token = secrets.token_urlsafe(32)
    db.insert("approvals", token=token, status="pending")
    send_slack_message(reviewer, f"Approval needed: {context}")
    schedule_reminder(token, delay=300)        # custom scheduler
    schedule_escalation(token, delay=1800)     # custom escalation
    schedule_timeout(token, delay=28800)       # custom timeout
    return token

# Plus: reminder cron job, escalation handler, webhook callback,
# polling loop, token expiry, audit logging, DB cleanup...

After: AXME intent lifecycle (4 lines)

import os

from axme import AxmeClient, AxmeClientConfig

client = AxmeClient(AxmeClientConfig(api_key=os.environ["AXME_API_KEY"]))

intent_id = client.send_intent({
    "intent_type": "intent.data.export_approval.v1",
    "to_agent": "agent://myorg/production/data-analyst",
    "payload": {"dataset": "customers_q1", "pii_detected": True},
})
result = client.wait_for(intent_id)

No reminder scheduler. No escalation handler. No webhook endpoint. No polling loop. The platform handles all of it.

Not Just Yes/No

Most approval implementations only support binary yes/no. Real workflows need more:

| Type | Example |
| --- | --- |
| approval | "Deploy to production?" |
| review | "Review this PR summary" (with comments) |
| form | "Fill in budget justification" (structured fields) |
| manual_action | "Flip the DNS switch" (out-of-band action) |
| override | "Override rate limit for VIP customer" |
| clarification | "Which region should I deploy to?" |
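For instance, a clarification task can ride the same intent envelope as the approval example earlier - only the intent_type and payload change. The type name and payload fields below are hypothetical, for illustration only:

```python
# Hypothetical clarification intent; the envelope shape mirrors the
# approval example, but these field names are assumptions, not
# documented AXME API.
clarification = {
    "intent_type": "intent.deploy.clarification.v1",  # hypothetical type
    "to_agent": "agent://myorg/production/deploy-bot",
    "payload": {
        "question": "Which region should I deploy to?",
        "options": ["us-east-1", "eu-west-1"],
    },
}
print(clarification["payload"]["question"])
```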

Works With Any Framework

This isn't framework-specific. The same pattern works whether your agent is built with LangGraph, CrewAI, AutoGen, OpenAI Agents SDK, Google ADK, Pydantic AI, or raw Python.

The agent framework handles reasoning. The coordination layer handles waiting.
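Concretely, the coordination call can be wrapped as a plain function that any of those frameworks registers as a tool. StubClient below is a stand-in with the same send_intent / wait_for shape as the example above, so the sketch runs without credentials; it is not the real AXME client:

```python
class StubClient:
    """Stand-in for a coordination client (e.g. AxmeClient). A real
    client would suspend durably here instead of returning at once."""
    def send_intent(self, intent):
        return "intent-123"
    def wait_for(self, intent_id):
        return {"intent_id": intent_id, "approved": True}

def request_human_approval(client, intent_type, to_agent, payload):
    # Plain function: LangGraph, CrewAI, AutoGen, or raw Python can all
    # call this as a tool. The framework reasons; the client waits.
    intent_id = client.send_intent({
        "intent_type": intent_type,
        "to_agent": to_agent,
        "payload": payload,
    })
    return client.wait_for(intent_id)

result = request_human_approval(
    StubClient(),
    "intent.data.export_approval.v1",
    "agent://myorg/production/data-analyst",
    {"dataset": "customers_q1"},
)
print(result["approved"])  # True
```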

Try It

Full working example with scenario, agent, and approval flow:

github.com/AxmeAI/async-human-approval-for-ai-agents

Built with AXME - a coordination layer for operations that take minutes, hours, or days to complete. Alpha - feedback welcome.
