A production AI database agent should not always try harder.
Sometimes the safest answer is no.
Or more precisely:
"I cannot run that query with the current scope, permissions, and context."
That is fail-closed behavior.
It is less exciting than a perfect demo, but it is the difference between useful automation and a system that quietly crosses boundaries.
What fail-open looks like
Fail-open tools keep going when something is unclear.
- the tenant is missing, so the tool runs a broad query
- schema context is stale, so the model guesses
- a result is truncated, so the model summarizes it as complete
- a user asks for a write, so the agent hides it inside a general SQL tool
These failures often look like helpfulness.
They are not helpful in production.
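A minimal sketch of the anti-pattern, with invented names (`run_query`, `fetch_orders`) standing in for a real tool: when scope is missing, the tool silently widens the query instead of stopping.

```python
def run_query(sql, params=()):
    # Stand-in for a real database call; returns the SQL it would execute
    # so the behavior is visible without a database.
    return sql

def fetch_orders(tenant_id=None):
    # Fail-open anti-pattern: a missing tenant silently becomes a
    # broad, cross-tenant read instead of a refusal.
    if tenant_id is None:
        return run_query("SELECT * FROM orders")
    return run_query("SELECT * FROM orders WHERE tenant_id = %s", (tenant_id,))
```

The dangerous part is that nothing errors: the broad query runs, returns plausible rows, and the model summarizes them confidently.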
Fail closed on missing scope
If the workflow requires tenant, account, workspace, or user scope, missing scope should stop execution.
A database tool should not infer scope from a vague prompt.
It should require a trusted server-side value, approved role, or explicit workflow context.
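One way to enforce that, sketched here with a hypothetical `require_scope` helper: the scope values come from trusted server-side session state, never from the prompt, and any missing key stops execution with an explicit error.

```python
class MissingScopeError(Exception):
    """Raised when required scope is absent; execution stops (fail closed)."""

def require_scope(context, keys=("tenant_id",)):
    # `context` is trusted server-side state (session, workflow config),
    # not anything the model or the user typed.
    missing = [k for k in keys if not context.get(k)]
    if missing:
        raise MissingScopeError(f"missing required scope: {', '.join(missing)}")
    return {k: context[k] for k in keys}
```

The tool then runs only with the returned, verified scope values; there is no branch that proceeds without them.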
Fail closed on broad intent
Natural language makes broad requests easy:
- "Show all customers affected by this."
- "Export the failed transactions."
- "Find every user with this email domain."
Some of those may be legitimate.
They should still be classified before execution.
A production MCP database layer should distinguish lookup, aggregate, search, export, write, and broad-read query classes. Each class can have different limits, credentials, and approvals.
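A policy table for those classes might look like the following sketch. The class names match the ones above; the limits and approval flags are invented placeholders, and a real system would load them from configuration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class QueryClass:
    name: str
    max_rows: int
    needs_approval: bool

# Illustrative per-class limits; real values belong in configuration.
POLICY = {
    "lookup":     QueryClass("lookup",     max_rows=1,       needs_approval=False),
    "aggregate":  QueryClass("aggregate",  max_rows=1_000,   needs_approval=False),
    "search":     QueryClass("search",     max_rows=100,     needs_approval=False),
    "export":     QueryClass("export",     max_rows=100_000, needs_approval=True),
    "write":      QueryClass("write",      max_rows=0,       needs_approval=True),
    "broad_read": QueryClass("broad_read", max_rows=10_000,  needs_approval=True),
}

def policy_for(query_class: str) -> QueryClass:
    # Unknown classes fail closed rather than falling back to a
    # permissive default.
    if query_class not in POLICY:
        raise ValueError(f"unclassified query class: {query_class}")
    return POLICY[query_class]
```

Note the last branch: a request the classifier cannot place is refused, not routed to the loosest class.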
Failures need contracts too
A failure result should be structured:
- failure class
- safe user-facing explanation
- whether retry is allowed
- required scope or approval
- policy rule that blocked execution
- audit identifier
- suggested narrower query
That lets the agent be helpful without inventing a workaround.
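The fields above map naturally onto a small structured type. This is a sketch, not a standard MCP shape; the field names and the example values are invented for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FailureResult:
    failure_class: str                  # e.g. "missing_scope", "broad_intent"
    message: str                        # safe, user-facing explanation
    retryable: bool                     # whether retry is allowed
    required: Optional[str] = None      # scope or approval needed to proceed
    policy_rule: Optional[str] = None   # rule that blocked execution
    audit_id: Optional[str] = None      # identifier for the audit trail
    suggestion: Optional[str] = None    # narrower query to try instead

# Hypothetical failure returned when tenant scope is missing.
result = FailureResult(
    failure_class="missing_scope",
    message="This query needs a tenant scope before it can run.",
    retryable=True,
    required="tenant_id",
    policy_rule="scope.tenant_required",
    audit_id="evt-1234",
    suggestion="Ask about a specific tenant or account.",
)
```

Because the agent receives `required` and `suggestion` as data, it can explain the block and propose a narrower request instead of guessing at a workaround.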
Longer version: Fail-closed MCP database tools
The practical rule:
For production AI database access, “unable to run safely” is a feature, not a bug.