Satyabrata

I Built a Safety Layer Between AI Agents and Postgres — Here's Why Raw SQL Access Is a Trap

Let me describe a scenario that should make any developer uncomfortable.

You give an AI agent access to your Postgres database. A user asks it something like "clean up the old test data." The agent generates DELETE FROM users WHERE created_at < '2024-01-01' — and runs it. No preview. No confirmation. Just gone.

This isn't hypothetical. It's the default behavior of almost every AI + database setup being built right now.

I built EnginiQ to fix this.


The Problem with "Just Give the LLM Your Database URL"

When I started experimenting with AI agents doing real database work, the naive setup looked like this:

  1. Pass the database URL to the agent
  2. Let it generate SQL
  3. Run it

It works — until it doesn't. The failure modes aren't always dramatic. Sometimes it's a TRUNCATE on the wrong table. Sometimes it's an ALTER TABLE that takes a lock mid-traffic. Sometimes it's a well-intentioned DELETE missing a WHERE clause.

The problem isn't that LLMs write bad SQL. They're actually quite good at it. The problem is that you can't predict every path through a conversation, and one wrong turn with unrestricted database access is irreversible.

What I wanted was a runtime that made AI agents useful for database work without making them dangerous.


What EnginiQ Actually Does

EnginiQ sits between your AI agent and your Postgres (or Supabase) database. Instead of raw SQL access, agents get a set of structured, guardrailed tools:

import {
  Engine,
  SupabaseConnector,
  listTablesTool,
  getSchemaTool,
} from "enginiq-core";

const engine = new Engine();
const connector = new SupabaseConnector(process.env.DATABASE_URL);

engine.setConnector(connector);
// Register only the tools the agent should have: discovery tools here.
engine.registerTool(listTablesTool(engine));
engine.registerTool(getSchemaTool(engine));

await engine.connect();
const tables = await engine.runTool("list_tables", {});

The agent calls named tools — list_tables, describe_table, create_table, add_column, run_migration — instead of writing arbitrary SQL. This is the core insight: structured tools are auditable, arbitrary SQL is not.

What's always blocked

Some operations are blocked by default, no exceptions:

  • DROP DATABASE / DROP SCHEMA
  • TRUNCATE
  • ALTER ROLE
  • Any mutation to auth_*, storage_*, or supabase_* tables
  • DELETE without a WHERE clause

These aren't configurable because there's no AI agent use case for them that doesn't benefit from a human reviewing first.
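To make the deny rules concrete, here is a minimal sketch of what a statement classifier for this list could look like. This is hypothetical illustration code, not EnginiQ's actual implementation, and the regex-based matching is deliberately simplistic (a real guardrail would parse the SQL).

```typescript
// Hypothetical sketch of the always-blocked rules, NOT EnginiQ's real code.
// Each rule matches one of the operations the post lists as never allowed.
const ALWAYS_BLOCKED: RegExp[] = [
  /^\s*DROP\s+(DATABASE|SCHEMA)\b/i,
  /^\s*TRUNCATE\b/i,
  /^\s*ALTER\s+ROLE\b/i,
  // Any mutation touching auth_*, storage_*, or supabase_* tables.
  /^\s*(INSERT|UPDATE|DELETE)\b[^;]*\b(auth_|storage_|supabase_)/i,
];

function isBlocked(sql: string): boolean {
  // DELETE is allowed in general, but never without a WHERE clause.
  const isDelete = /^\s*DELETE\s+FROM\b/i.test(sql);
  if (isDelete && !/\bWHERE\b/i.test(sql)) return true;
  return ALWAYS_BLOCKED.some((rule) => rule.test(sql));
}
```

The point of the sketch is the shape of the decision: the check runs on every statement before anything reaches the database, and there is no configuration flag that turns it off.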

The approval queue

For mutations that are legitimate — schema changes, migrations, parameterized writes — EnginiQ routes them through an approval queue. The agent proposes the change. A human reviews the SQL preview. The change executes only after approval.

The design principle I kept coming back to: discovery is cheap, mutation is expensive. Let agents read and inspect freely. Make writes explicit and visible.
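The propose/review/execute flow can be sketched in a few lines. The names here (ProposedChange, ApprovalQueue) are illustrative assumptions, not EnginiQ's real API; the point is that the agent-facing method can only enqueue, and execution is reachable only through the human-facing approval path.

```typescript
// Hypothetical sketch of the approval-queue flow, not EnginiQ's actual API.
type ProposedChange = {
  id: number;
  sqlPreview: string; // shown to the human reviewer before anything runs
  status: "pending" | "approved" | "rejected";
};

class ApprovalQueue {
  private nextId = 1;
  private changes = new Map<number, ProposedChange>();

  // The agent can only propose. Nothing executes here.
  propose(sqlPreview: string): ProposedChange {
    const change: ProposedChange = {
      id: this.nextId++,
      sqlPreview,
      status: "pending",
    };
    this.changes.set(change.id, change);
    return change;
  }

  // A human approves, and approval is the only path to execution.
  approve(id: number, execute: (sql: string) => void): void {
    const change = this.changes.get(id);
    if (!change || change.status !== "pending") {
      throw new Error("change is not pending");
    }
    change.status = "approved";
    execute(change.sqlPreview);
  }
}
```

Separating "propose" from "execute" at the type level is what makes the guarantee structural rather than a matter of prompt discipline.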


Why MCP Makes This Cleaner

EnginiQ ships an MCP (Model Context Protocol) server, which means it works natively with Cursor, Claude, and any agent host that supports the protocol.

MCP changes the dynamic in an important way. Instead of the agent improvising its database interface from a prompt, it gets a declared list of tools with typed schemas. The model can see exactly what operations are available and what inputs they expect — no guessing, no prompt engineering around what SQL is "safe."

enginiq-mcp exposes:
→ list_tables       (read, unrestricted)
→ describe_table    (read, unrestricted)
→ get_schema        (read, unrestricted)
→ create_table      (write, goes to approval queue)
→ add_column        (write, goes to approval queue)
→ run_migration     (write, goes to approval queue)

Read tools are fast and open. Write tools go through the guardrail layer. The agent always knows which is which because the tool schema says so.
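A declaration-level sketch of that read/write split, using MCP-style tool shapes (name, input schema, annotations). These specific definitions are illustrative, not copied from enginiq-mcp:

```typescript
// Sketch of MCP-style tool declarations; shapes are illustrative assumptions.
type ToolDecl = {
  name: string;
  inputSchema: { type: "object"; properties: Record<string, unknown> };
  annotations: { readOnlyHint: boolean }; // declares read vs. write up front
};

const tools: ToolDecl[] = [
  {
    name: "list_tables",
    inputSchema: { type: "object", properties: {} },
    annotations: { readOnlyHint: true }, // fast, open read
  },
  {
    name: "run_migration",
    inputSchema: { type: "object", properties: { sql: { type: "string" } } },
    annotations: { readOnlyHint: false }, // routed through the approval queue
  },
];

// The guardrail layer dispatches on the declared hint, not on parsing SQL.
const needsApproval = (t: ToolDecl) => !t.annotations.readOnlyHint;
```

Because the read/write distinction lives in the declaration, both the model and the runtime see the same contract before any call is made.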


Audit Visibility: Trust Comes From Observability

One thing I kept thinking about while building this: you can't trust a system you can't inspect.

Every tool call in EnginiQ gets logged — actor, environment, trust mode, SQL preview, and outcome. That audit trail is what turns "AI helped with the database" into something a team can stand behind.

As a team, you should always be able to answer "what did the agent do to our database this week?" with a clear, complete log. Right now most setups can't answer that question at all.
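A minimal sketch of what such an audit record could look like, using the fields mentioned above (actor, environment, trust mode, SQL preview, outcome). The field names and helper functions here are assumptions for illustration, not EnginiQ's real schema:

```typescript
// Hypothetical audit-record sketch; field names are illustrative assumptions.
type AuditRecord = {
  timestamp: string;
  actor: string; // which agent or user made the call
  environment: string; // e.g. "staging" or "production"
  trustMode: string;
  sqlPreview: string;
  outcome: "executed" | "blocked" | "pending_approval";
};

const auditLog: AuditRecord[] = [];

// Every tool call appends a record; nothing runs unlogged.
function record(entry: Omit<AuditRecord, "timestamp">): AuditRecord {
  const full: AuditRecord = {
    timestamp: new Date().toISOString(),
    ...entry,
  };
  auditLog.push(full);
  return full;
}

// "What did the agent do this week?" becomes a filter over the log.
function thisWeek(now: Date): AuditRecord[] {
  const weekAgo = now.getTime() - 7 * 24 * 60 * 60 * 1000;
  return auditLog.filter((r) => new Date(r.timestamp).getTime() >= weekAgo);
}
```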


The Three Interfaces: SDK, CLI, MCP

One thing I wanted to get right was that EnginiQ shouldn't force you into one workflow.

SDK — for when you're building your own agent or automation and want to call Postgres tools from Node.js directly.

CLI — for developers and CI pipelines. Run migrations, inspect schema, verify setup from the terminal without touching an agent at all. This turned out to be more useful than I expected early on.

MCP server — for Cursor, Claude, and IDE agents. Expose the full guardrailed toolset to your IDE agent so it can help with real database work without ever touching raw SQL.

All three share the same core runtime and the same guardrail rules.


What I'd Do Differently

I built the fun parts first. The MCP server and SDK felt more interesting, so I built those before the CLI. But the CLI has turned out to be one of the most practically useful pieces. Build for the boring workflows first — that's where real adoption happens.

Tests should have been first, not last. The guardrail logic — the code that decides what gets blocked — is the most safety-critical part of the whole system. I added tests to it late. That was backwards.

The approval UX still needs a lot of work. The queue is functional but approving a schema change should feel as natural as merging a PR. It doesn't yet.


Where It Stands

EnginiQ is at v0.1. The core runtime, CLI, and MCP server are live. The hosted dashboard with team approval workflows is in early access — if your team is evaluating AI-safe Postgres tooling, reach out at hello@enginiq.dev.

If you're building anything with AI agents and a real Postgres database, I'd genuinely like to hear what you're running into. The site is enginiq.dev and the GitHub is github.com/enginiq.

The agentic future is happening whether or not the safety layer exists. I'm trying to make sure the safety layer exists.


Satyabrata Mohanty is a software developer studying Data Science at IIT Madras. He previously interned at MagicSlides, where he worked on AI content generation pipelines that scaled to 12,000 daily users.


Tags: postgres ai mcp typescript database security developer-tools supabase agents
