The fastest way to get a wrong answer from an AI database agent is to ask a simple business question.
> What was revenue last month?
That sounds easy.
The database has invoices, subscriptions, payments, refunds, credits, discounts, taxes, trials, failed charges, and test accounts.
The model sees tables.
Your business sees definitions.
If those definitions are not part of the system, the model has to guess.
## Valid SQL can still be wrong
A table called `payments` may include failed attempts.
`subscriptions` may include trials.
`amount` may be gross, net, pre-tax, post-tax, or stored in cents.
`created_at` may mean invoice creation, payment capture, or customer signup.
An AI agent can write syntactically valid SQL against all of that and still answer the wrong question.
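For example, both of the following queries run without error, but only the second reflects a defensible revenue definition. The schema here is a hypothetical sketch, not any real billing system:

```sql
-- Naive: valid SQL, wrong answer. Sums failed attempts and test
-- accounts, and assumes `amount` is stored in whole currency units.
SELECT SUM(amount) AS revenue
FROM payments
WHERE created_at >= DATE '2024-01-01'
  AND created_at <  DATE '2024-02-01';

-- Closer to the business definition (refunds, credits, and taxes
-- would still need their own handling):
SELECT SUM(p.amount_cents) / 100.0 AS revenue_usd
FROM payments p
JOIN customers c ON c.id = p.customer_id
WHERE p.status = 'succeeded'              -- drop failed attempts
  AND c.is_test_account = FALSE           -- drop internal test data
  AND p.captured_at >= DATE '2024-01-01'  -- capture date, not signup date
  AND p.captured_at <  DATE '2024-02-01';
```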
This is why natural-language SQL needs metric context, not just schema context.
## Approved views beat clever prompts
A prompt can tell the model how to calculate MRR.
An approved view makes the definition executable.
Instead of exposing raw invoice and payment tables, expose something like:
`reporting.monthly_recurring_revenue`
with reviewed columns, tenant scope, time grain, currency assumptions, and test-account filtering already handled.
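A minimal sketch of such a view, reusing the hypothetical schema from the earlier example:

```sql
-- Illustrative only: the reviewed MRR definition lives in the view,
-- not in a prompt. A production version would also pin currency
-- conversion and a month calendar.
CREATE VIEW reporting.monthly_recurring_revenue AS
SELECT
  s.tenant_id,                                  -- tenant scope built in
  SUM(s.plan_amount_cents) / 100.0 AS mrr_usd   -- cents normalized to USD
FROM subscriptions s
JOIN customers c ON c.id = s.customer_id
WHERE s.status = 'active'                       -- trials and canceled excluded
  AND c.is_test_account = FALSE                 -- test accounts filtered once
GROUP BY s.tenant_id;
```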
The model still helps users ask flexible questions.
But the business definition lives in infrastructure, not in a fragile instruction.
## What should travel with the tool
For AI reporting, the MCP tool should carry context such as:
- metric description
- allowed dimensions
- time zone and grain
- exclusions
- freshness timestamp
- exact vs estimated status
- scope and tenant boundaries
- warnings the final answer must preserve
Otherwise the model may produce a confident answer while hiding the caveats that matter.
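One way to make that context travel is a metadata record the tool reads and returns alongside every result. A sketch, assuming Postgres and hypothetical names:

```sql
-- Hypothetical: one row of context per metric, attached to every answer.
CREATE TABLE reporting.metric_context (
  metric_name        text PRIMARY KEY,
  description        text,
  allowed_dimensions text[],      -- dimensions the model may group by
  time_zone          text,
  time_grain         text,
  exclusions         text,        -- what the metric deliberately leaves out
  freshness_at       timestamptz, -- when the underlying data was last loaded
  is_estimate        boolean,
  tenant_scope       text,
  warnings           text         -- caveats the final answer must preserve
);

INSERT INTO reporting.metric_context VALUES (
  'monthly_recurring_revenue',
  'Active subscription revenue, normalized to USD.',
  ARRAY['tenant_id'],
  'UTC',
  'month',
  'Trials, failed charges, test accounts',
  now(),
  FALSE,
  'Rows are pre-filtered to the calling tenant.',
  'Excludes usage-based overages; treat as recurring-only.'
);
```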
Longer version: Metric definitions for AI database agents
The practical rule:
> If a metric is important enough for a leadership meeting, it is important enough to define before an agent calculates it.