Safe SQL Access for AI with MCP
Giving an AI direct database access sounds useful immediately, and dangerous a few seconds later.
You want the model to inspect schema, understand relationships, run queries, and help with analysis. But you do not want it to mutate data, run unrestricted SQL, or pull back massive result sets because a prompt was vague.
That is the motivation behind ajan-sql.
ajan-sql is a small MCP server for SQL that gives AI clients schema-aware, read-only database access with a guard layer in front of query execution.
What ajan-sql does
ajan-sql runs as an MCP server over stdio and connects through DATABASE_URL.
It currently supports:
- PostgreSQL
- MySQL
- SQLite
It exposes these tools:
- server_info
- list_tables
- describe_table
- list_relationships
- search_schema
- run_readonly_query
- explain_query
- sample_rows
It also exposes these schema resources:
- schema://snapshot
- schema://table/{name}
The goal is simple: give LLMs useful database visibility without handing them unsafe SQL execution.
Why I built it
A lot of AI-plus-database demos jump from "the model can write SQL" straight to "let it query production-like data."
That gap matters.
If you want something usable in practice, you need a few things:
- schema discovery
- safe defaults
- hard query restrictions
- predictable outputs for clients
I wanted something closer to:
psql plus schema awareness plus an AI-safe guard layer
But packaged in a way that works naturally with MCP clients.
The safety model
The core rule is that query execution must stay read-only and constrained.
ajan-sql enforces:
- SELECT only, including guarded WITH ... SELECT queries
- rejection of INSERT, UPDATE, DELETE, DROP, ALTER, and TRUNCATE
- rejection of multi-statement SQL
- rejection of SQL comments
- default LIMIT 100
- max LIMIT 100
- max query timeout of 5 seconds
- max result size checks
That means the model can inspect and analyze, but it cannot mutate the database through this server.
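To make the rules above concrete, here is a minimal sketch of what such a guard layer can look like. This is an illustration of the technique, not ajan-sql's actual implementation; the function name and the exact checks are assumptions.

```python
import re

# Statement keywords the guard rejects outright.
FORBIDDEN = ("INSERT", "UPDATE", "DELETE", "DROP", "ALTER", "TRUNCATE")

def guard_query(sql: str, default_limit: int = 100) -> str:
    """Reject anything that is not a single, comment-free SELECT,
    and append a LIMIT when the query has none."""
    stripped = sql.strip().rstrip(";")
    if ";" in stripped:
        raise ValueError("multi-statement SQL is rejected")
    if "--" in stripped or "/*" in stripped:
        raise ValueError("SQL comments are rejected")
    upper = stripped.upper()
    if not (upper.startswith("SELECT") or upper.startswith("WITH")):
        raise ValueError("only SELECT (or WITH ... SELECT) is allowed")
    for word in FORBIDDEN:
        if re.search(rf"\b{word}\b", upper):
            raise ValueError(f"{word} is rejected")
    if not re.search(r"\bLIMIT\b", upper):
        stripped += f" LIMIT {default_limit}"
    return stripped
```

A real guard would parse the SQL rather than pattern-match it, but the shape is the same: normalize, reject, then clamp.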
Tooling that is actually useful
The schema tools are what make the server practical instead of just being a thin SQL wrapper.
server_info
Returns runtime details such as:
- server version
- active SQL dialect
- available tools
- available resources
- readonly guard settings
list_tables
Returns visible tables and includes metadata such as:
- schema
- table name
- table comment
- estimated row count when available
describe_table
Returns:
- columns
- data types
- nullability
- default values
- primary key metadata
- unique metadata
- foreign key references
- index metadata
list_relationships
Returns foreign key relationships across the schema.
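One plausible way to shape that output is to group raw foreign-key rows (as a database's information_schema would report them) per referencing table. The field names below are illustrative assumptions, not ajan-sql's actual payload.

```python
from collections import defaultdict

def group_relationships(fk_rows):
    """Group flat foreign-key rows into a per-table relationship map."""
    rels = defaultdict(list)
    for row in fk_rows:
        rels[row["table"]].append({
            "column": row["column"],
            "references": f'{row["ref_table"]}.{row["ref_column"]}',
        })
    return dict(rels)
```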
search_schema
Searches table names and column names with a simple substring match, which is useful when a client knows the concept but not the exact table.
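A substring search over a schema snapshot is simple enough to sketch in a few lines. The snapshot shape and result fields here are assumptions for illustration.

```python
def search_schema(snapshot, term):
    """Case-insensitive substring match over table and column names.

    snapshot maps table name -> list of column names."""
    term = term.lower()
    hits = []
    for table, columns in snapshot.items():
        if term in table.lower():
            hits.append({"match": "table", "table": table})
        for col in columns:
            if term in col.lower():
                hits.append({"match": "column", "table": table, "column": col})
    return hits
```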
run_readonly_query
Executes guarded SELECT queries and returns:
- normalized SQL
- row count
- duration
- column metadata
- rows
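As a rough sketch, a result payload with those fields might be assembled like this. The exact field names are assumptions mirroring the list above, not the server's real schema.

```python
import time

def build_query_result(normalized_sql, columns, rows, started_at):
    """Assemble a query result: normalized SQL, row count, duration,
    column metadata, and the rows themselves."""
    return {
        "sql": normalized_sql,
        "rowCount": len(rows),
        "durationMs": round((time.monotonic() - started_at) * 1000, 1),
        "columns": columns,
        "rows": rows,
    }
```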
explain_query
Runs a guarded explain plan and returns:
- query timing
- raw plan
- a lightweight summary of the root plan node
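For PostgreSQL, summarizing the root plan node can be done by reading the top of the EXPLAIN (FORMAT JSON) output; the key names below ("Plan", "Node Type", "Plan Rows", "Total Cost") follow PostgreSQL's own JSON plan format, but the summary shape is an assumption.

```python
def summarize_root_plan(explain_json):
    """Extract a lightweight summary from the root node of a
    PostgreSQL EXPLAIN (FORMAT JSON) result."""
    root = explain_json[0]["Plan"]
    return {
        "nodeType": root["Node Type"],
        "estimatedRows": root.get("Plan Rows"),
        "totalCost": root.get("Total Cost"),
    }
```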
sample_rows
Returns a limited sample from a table and can optionally target selected columns.
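A sketch of how such a tool might build its SQL: quote identifiers, optionally restrict the column list, and always cap the LIMIT. This is illustrative, not ajan-sql's actual query builder.

```python
def build_sample_sql(table, columns=None, limit=10):
    """Build a capped SELECT over a table, optionally for chosen columns."""
    def quote(name):
        # Double-quote identifiers, escaping embedded quotes.
        return '"' + name.replace('"', '""') + '"'
    cols = ", ".join(quote(c) for c in columns) if columns else "*"
    return f"SELECT {cols} FROM {quote(table)} LIMIT {int(limit)}"
```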
Structured outputs matter
One thing I wanted from the beginning was output that works well for both humans and clients.
So the tools return both:
- content for a short human-readable summary
- structuredContent for machine-friendly consumption
That makes the server easier to use from MCP-compatible clients that want stable structured payloads instead of parsing JSON out of plain text.
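For example, a single tool result can carry both fields side by side; content and structuredContent are the field names MCP tool results use, while the values here are made up for illustration.

```python
import json

# One tool result, two audiences: a text summary for humans and a
# structured payload for clients that want stable machine-readable data.
result = {
    "content": [
        {"type": "text", "text": "3 tables found: users, orders, items"}
    ],
    "structuredContent": {"tables": ["users", "orders", "items"]},
}
print(json.dumps(result["structuredContent"]))
```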
Example setup
Install it globally:
```shell
npm install -g ajan-sql
```
Run it with PostgreSQL:
```shell
DATABASE_DIALECT=postgres \
DATABASE_URL=postgres://USER:PASSWORD@HOST:PORT/DB ajan-sql
```
MySQL example:
```shell
DATABASE_DIALECT=mysql \
DATABASE_URL=mysql://USER:PASSWORD@HOST:PORT/DB ajan-sql
```
SQLite example:
```shell
DATABASE_DIALECT=sqlite \
DATABASE_URL=file:/absolute/path/to/database.sqlite ajan-sql
```
DATABASE_DIALECT defaults to postgres, so PostgreSQL users only need to set DATABASE_URL.
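The defaulting behavior described above amounts to something like the following (a sketch of the documented environment handling, not the server's actual code):

```python
def load_config(env):
    """DATABASE_URL is required; DATABASE_DIALECT falls back to postgres."""
    url = env.get("DATABASE_URL")
    if not url:
        raise RuntimeError("DATABASE_URL is required")
    return {"dialect": env.get("DATABASE_DIALECT", "postgres"), "url": url}
```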
Adding ajan-sql to an MCP client
After installing ajan-sql, add it to your MCP client as a stdio server.
If your client can launch globally installed commands directly, use:
```json
{
  "mcpServers": {
    "ajan-sql": {
      "command": "ajan-sql",
      "env": {
        "DATABASE_DIALECT": "postgres",
        "DATABASE_URL": "postgres://USER:PASSWORD@HOST:PORT/DB"
      }
    }
  }
}
```
If you are developing from the repository instead of a global install, point the client at the built CLI:
```json
{
  "mcpServers": {
    "ajan-sql": {
      "command": "node",
      "args": ["/absolute/path/to/ajan-sql/dist/index.js"],
      "env": {
        "DATABASE_DIALECT": "postgres",
        "DATABASE_URL": "postgres://USER:PASSWORD@HOST:PORT/DB"
      }
    }
  }
}
```
For MySQL, set DATABASE_DIALECT to mysql and use a MySQL connection string.
For SQLite, set DATABASE_DIALECT to sqlite and use a file URL such as:
```json
{
  "mcpServers": {
    "ajan-sql": {
      "command": "ajan-sql",
      "env": {
        "DATABASE_DIALECT": "sqlite",
        "DATABASE_URL": "file:/absolute/path/to/database.sqlite"
      }
    }
  }
}
```
Once the server is added, MCP clients can discover the available tools and resources automatically and start using schema inspection, relationship discovery, guarded query execution, explain plans, and sample rows through the standard MCP flow.
Why this shape works well for AI workflows
The point is not to let a model "do anything with SQL."
The point is to give it enough visibility to explore schema, understand relationships, inspect sample data, and answer analytical questions while keeping execution bounded and read-only.
That is the shape I wanted: useful enough for real AI workflows, constrained enough to trust in practice.