Last week I did something that would've made me uncomfortable six months ago. I opened my Claude Desktop config, added an MCP server URL pointing at my production Postgres database, and told Claude to go look at real customer data.
Nothing caught fire.
I've been building QueryBear for a while now, and I'd always been careful to test against staging data, demo databases, seed data. Production was the thing I protected.
But I kept hitting the same wall. I'd be deep in a debugging thread with Claude, connected to Linear and my codebase, and I'd get 90% of the way to understanding a customer issue. Then I'd tab over to my database client, look up the user, write a couple joins, squint at the results, copy them back into chat. Every single time.
It's not hard. It's just friction. After doing it fifty times in a week I started thinking: why am I the bottleneck here?
So I built an MCP server that sits between the AI and my database. The AI doesn't get a connection string. It doesn't get credentials. It gets tool calls, and the server decides what happens.
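The mediation boundary is the whole point: credentials live on the server, and the model only ever sees a tool surface. Here's a minimal sketch of that idea — all names are illustrative (this isn't the MCP SDK or QueryBear's actual code), and the validation here is just a placeholder stub:

```python
# Sketch of the mediation boundary: the model sends tool calls,
# the server holds the credentials. Names are hypothetical.
DATABASE_URL = "postgres://..."  # lives only on the server, never sent to the model

def handle_tool_call(name: str, args: dict) -> dict:
    if name == "run_query":
        sql = args.get("sql", "")
        # Placeholder check only -- the real pipeline parses the SQL properly.
        if not sql.lstrip().upper().startswith("SELECT"):
            return {"error": "query rejected"}
        return {"status": "ok", "sql": sql}  # real server would execute and return rows
    return {"error": f"unknown tool: {name}"}
```

The model can ask for anything it wants; the worst it gets back is an error object.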
The config I added to Claude Desktop was two fields:
```json
{
  "mcpServers": {
    "querybear": {
      "url": "https://mcp.querybear.com/mcp",
      "headers": {
        "Authorization": "Bearer qb_live_xxxxx"
      }
    }
  }
}
```
I restarted Claude and typed: "How many users signed up in the last 7 days?"
Three seconds later I had the answer. From prod. Correct.
I asked it to break signups down by day. Then by referral source. Then I asked it to cross-reference with which users created their first query. Each answer came back fast, and each one was right.
What surprised me wasn't that it worked. I built the thing. What surprised me was how natural it felt. I wasn't context-switching anymore. I was just asking questions inside the same thread where I was already working.
The reason I wasn't scared: every query passes through a security pipeline before it touches the database. SQL gets parsed into an AST, only SELECTs pass. Tables get checked against an allowlist. Sensitive columns get stripped. Row limits get enforced by rewriting the query. Timeouts kill anything that runs too long, server-side. The whole thing runs inside a read-only transaction. And everything gets logged.
The worst case is the AI gets a "query rejected" response. That's a much better failure mode than "oops, that was production."
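The read-only transaction and server-side timeout boil down to a few statements wrapped around each validated query. Here's the shape of it, assuming Postgres — this just builds the statement list; actual execution and logging are omitted:

```python
# Hedged sketch: the SQL a server might issue around each validated query.
def execution_plan(sql: str, timeout_ms: int = 5000) -> list[str]:
    return [
        "BEGIN TRANSACTION READ ONLY",                  # no writes can commit
        f"SET LOCAL statement_timeout = {timeout_ms}",  # server-side kill switch, in ms
        sql,
        "ROLLBACK",                                     # nothing persists either way
    ]
```

`SET LOCAL` scopes the timeout to this transaction only, so a runaway query dies without changing the connection's defaults.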
Before this, debugging a customer issue meant reading the ticket in my AI thread, tabbing to the database, writing queries, copying results back. Now I paste the ticket and say "look up this user's account and tell me what's going on." The tab-over, look-up, query, copy-paste steps just disappeared.
I've also started using it for things I wouldn't have bothered querying before. Quick sanity checks during development. "Did that migration actually backfill the new column?" Instead of writing a throwaway query, I just ask.
I'm still iterating on this. But the core loop of "ask your AI a question about your data and get a real answer" already works, and it's already changed how I work day to day.
If you want to try it, you can set it up at querybear.com in a couple minutes. And if you've already connected AI to your database some other way, I'm curious how you handled the security side. Still figuring out where the line should be.
Top comments (1)
AST parsing for SELECT only is the right call; regex falls apart on CTEs. Worth adding statement_timeout at the Postgres level too — row limits alone won't save you from expensive seq scans on big tables.