Most AI-automation posts talk about “agents doing everything.”
Mine didn’t work—until I fixed one microscopic but fatal detail:
Forcing the agent to say “I don’t know.”
This single constraint changed the entire system.
## The Problem: Agents Guessing
When my workflow automation first went live, the planning agent kept inventing missing context:
- wrong deadlines
- imaginary stakeholders
- nonexistent dependencies
Not malicious—just probabilistic.
In automation, a confident guess is more dangerous than a clear error.
## The Fix: A Strict “Blocker-First” Contract
I rewrote the agent’s system prompt around one rule:
If any required field is missing, return:
```json
{
  "blockers": [
    {"field": "X", "question": "What is the deadline?"}
  ]
}
```
Do NOT infer. Do NOT approximate. Do NOT proceed.
And I enforced JSON validation at the orchestration layer.
- If the agent guessed → schema failed → task dropped.
- If the agent asked a question → I answered once → pipeline resumed.
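Concretely, that orchestration-layer check can be a strict schema plus a three-way branch. Here's a minimal sketch using Pydantic; the model names and the `Plan` fields (`deadline`, `stakeholders`, `dependencies`) are illustrative assumptions, not the exact production schema:

```python
# A minimal sketch of the orchestration-layer check, using Pydantic v2.
# Model names and Plan's fields are illustrative, not the production schema.
from pydantic import BaseModel, ConfigDict, ValidationError

class Blocker(BaseModel):
    model_config = ConfigDict(extra="forbid")
    field: str
    question: str

class BlockerReply(BaseModel):
    model_config = ConfigDict(extra="forbid")
    blockers: list[Blocker]

class Plan(BaseModel):
    # extra="forbid" rejects any field the agent invented on its own
    model_config = ConfigDict(extra="forbid")
    deadline: str
    stakeholders: list[str]
    dependencies: list[str]

def handle_reply(raw: str) -> Plan | BlockerReply | None:
    """Classify the agent's JSON reply: valid blockers, valid plan, or neither."""
    for schema in (BlockerReply, Plan):
        try:
            return schema.model_validate_json(raw)
        except ValidationError:
            continue
    return None  # guessed or malformed output: schema failed, task dropped
```

The `extra="forbid"` config is what catches guessing: anything the agent adds beyond the contract fails validation instead of slipping through.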
## Why This Works
This turns the pipeline from a probabilistic predictor into a deterministic gate: every output either matches the contract or halts the task.
- No silent assumptions
- No phantom tasks
- No hallucinated requirements
- No speculative routing
You force the agent into a request-for-information loop, which mirrors how real engineers block work when specs are incomplete.
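That loop is just a retry cycle around the check above. A sketch, reusing `Plan` and `handle_reply` from the previous block; `call_model` and `answer_blocker` are hypothetical stand-ins for the real LLM call and the one-time human answer:

```python
# A sketch of the request-for-information loop, reusing Plan and handle_reply
# from the sketch above. call_model and answer_blocker are hypothetical
# stand-ins for the real LLM call and the one-time human answer.
from typing import Callable

def run_task(
    context: dict,
    call_model: Callable[[dict], str],
    answer_blocker: Callable[[str], str],
) -> Plan | None:
    while True:
        result = handle_reply(call_model(context))
        if isinstance(result, Plan):
            return result          # all required inputs defined: proceed
        if result is None:
            return None            # schema failed: drop the task
        for b in result.blockers:  # blocked: answer each question once, resume
            context[b.field] = answer_blocker(b.question)
```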
## Result
The error rate dropped from 27% to 0% within 48 hours.
Suddenly, my automation became reliable:
- specs were correct
- release notes matched reality
- planning tasks stopped drifting
- nothing moved forward without defined inputs
A single constraint unlocked the entire workflow.