Read-only database access is the right default for AI analytics.
It is also not enough.
That sounds weird until you watch what happens in production.
A team gives an AI agent a role that can only run SELECT. Everyone relaxes because the agent cannot mutate data.
Then it runs an expensive query.
Or returns 50,000 rows when a summary would have been enough.
Or exposes sensitive columns.
Or answers from the wrong table because the schema looked obvious and wasn't.
No data was modified.
The workflow can still be unsafe.
## What SELECT-only actually solves
It prevents writes.
That is important. It should usually be the first boundary.
But it does not decide:
- which rows can be viewed
- which columns are sensitive
- which tables are authoritative
- which queries are too expensive
- which metric definitions are correct
- which answers need an audit trail
Read-only protects data integrity.
It does not automatically protect confidentiality, performance, or answer quality.
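To make that concrete, here is a toy sketch of what a SELECT-only gate amounts to. The function name `select_only_allows` is hypothetical, and real enforcement happens in the database's grant system, not in application code; the point is only what the check can and cannot distinguish.

```python
def select_only_allows(sql: str) -> bool:
    """A toy SELECT-only gate: it blocks writes, and nothing else."""
    return sql.lstrip().lower().startswith("select")

# Blocked: a mutation.
assert not select_only_allows("DELETE FROM orders")

# Allowed: a cheap, bounded summary...
assert select_only_allows("SELECT count(*) FROM orders")

# ...and equally allowed: an unbounded scan over a sensitive column,
# or a cross join that could run for minutes. Same verdict.
assert select_only_allows("SELECT customer_email FROM orders o, events e")
```

Both of the allowed queries pass the gate with identical status, which is exactly the gap the list above describes.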
## Governed read-only is the real goal
For AI analytics, the safer pattern is narrower:
- approved reporting views
- schema context in business language
- row limits and timeouts
- aggregate-first tools
- separate paths for write-capable operations
- audit logs for every meaningful query
The model should not have to remember all of that from a prompt.
The access layer should enforce it.
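A minimal sketch of such an access layer, using SQLite in memory for illustration. All names here (`ALLOWED_VIEWS`, `reporting_orders`, `run_governed_query`) are hypothetical; the idea is that the approved view hides sensitive columns, the row cap and audit log are enforced in the layer itself, and the model never composes raw SQL against base tables.

```python
import sqlite3
import time

ALLOWED_VIEWS = {"reporting_orders"}  # hypothetical approved reporting views
MAX_ROWS = 100
audit_log = []

def run_governed_query(conn, view):
    """Read from an approved view only, with a row cap and an audit entry."""
    if view not in ALLOWED_VIEWS:
        raise PermissionError(f"{view} is not an approved reporting view")
    # Safe to interpolate: `view` was validated against the allowlist above.
    sql = f"SELECT * FROM {view}"
    started = time.time()
    rows = conn.execute(sql).fetchmany(MAX_ROWS + 1)  # fetch one extra to detect overflow
    truncated = len(rows) > MAX_ROWS
    audit_log.append({
        "sql": sql,
        "rows_returned": min(len(rows), MAX_ROWS),
        "truncated": truncated,
        "elapsed_s": round(time.time() - started, 3),
    })
    return rows[:MAX_ROWS], truncated

# Base table has a sensitive column; the approved view excludes it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL, customer_email TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(i, i * 10.0, "x@example.com") for i in range(250)])
conn.execute("CREATE VIEW reporting_orders AS SELECT id, amount FROM orders")

rows, truncated = run_governed_query(conn, "reporting_orders")
```

With 250 rows in the view, the call returns at most 100 rows, flags the truncation, and leaves an audit entry behind; asking for the base `orders` table raises `PermissionError` instead of leaking `customer_email`.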
We wrote more here: "Read-only AI analytics: why SELECT-only is necessary but not enough".
Conexor is built around this exact MCP layer: connecting live databases and APIs to AI clients without turning every prompt into a production risk review.
The practical question is not "can the AI only SELECT?"
It is:
SELECT what, for whom, through which tool, with which context, under which limits, and with what audit trail?
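One way to picture that question is as the shape of a single audit record. Every field name below is a hypothetical illustration, one per clause of the question, not a fixed schema:

```python
# A sketch of one audited query, mapping each field to a clause of the
# question: what, for whom, which tool, which context, which limits, what trail.
audit_record = {
    "object": "reporting_orders",               # SELECT what
    "principal": "analyst@example.com",         # for whom
    "tool": "aggregate_query",                  # through which tool
    "context": "revenue means sum(amount)",     # with which context
    "limits": {"max_rows": 100, "timeout_s": 5},  # under which limits
    "logged_at": "2025-01-01T00:00:00Z",        # with what audit trail
}
```

If the access layer cannot fill in every one of those fields, the answer to "is this workflow safe?" is still "we don't know", no matter how read-only the role is.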