Over the last two months, while building a database tool and adding AI features to it, I realized I was not just shipping product work. I was bumping into a larger question: what happens to databases when software no longer just reads and writes records, but also interprets schemas, suggests queries, explains plans, preserves context, and collaborates with language models?
My current view is simple: databases are not becoming chatbots.
The real change is happening in the layer around the database.
Databases still matter because structured truth still matters. LLMs do not replace the need for schemas, constraints, transactions, and systems of record. If anything, they make those things more valuable.
What AI changes is interpretation, navigation, and action around the database.
That became very concrete to me while building Tabularis, especially once AI, notebooks, visual explainability, MCP, and plugins started living in the same workflow.
Here are the main bets I find myself making:
- AI should not become the source of truth
- AI should stay optional
- preserved context matters more than one-shot generation
- the future is probably composable
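To make the first two bets concrete, here is a minimal sketch of the pattern they imply: the AI proposes a query, the database validates it against the real schema, and nothing runs until a human opts in. The function and table names are illustrative, not taken from Tabularis; it uses SQLite only because it is self-contained.

```python
import sqlite3

def run_suggested_query(conn, suggested_sql, approved=False):
    """Treat AI output as a suggestion, never as the source of truth."""
    # 1. Validate against the real schema without executing:
    #    EXPLAIN parses and plans the statement but touches no rows,
    #    so a hallucinated column or table fails here, loudly.
    conn.execute(f"EXPLAIN {suggested_sql}")
    # 2. AI stays optional: without explicit approval, nothing runs.
    if not approved:
        return {"status": "pending_approval", "sql": suggested_sql}
    # 3. Only the database's answer counts as truth.
    return {"status": "executed", "rows": conn.execute(suggested_sql).fetchall()}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('ada')")

pending = run_suggested_query(conn, "SELECT name FROM users")
executed = run_suggested_query(conn, "SELECT name FROM users", approved=True)
```

The point of the sketch is the shape, not the code: the model sits beside the database, and the schema, not the model, decides what is valid.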
And here are the mistakes I think I may be making:
- using AI where better UX would be enough
- overestimating generation and underestimating control
- underestimating auditability and reproducibility
- designing too much for the future
I wrote the full version here, with more detail on the technical decisions and the mistakes behind them: