DEV Community

Qi

I Built a Free PostgreSQL Health Checker with 23 MCP Tools and CI Integration

After years of working with PostgreSQL in production, I kept running into the same problem: the tooling options are either expensive, complex to set up, or don't fit into a modern AI-assisted workflow.

  • pganalyze — great product, $149/month, cloud-only
  • Grafana + Prometheus — free, but you're maintaining 3 services
  • pgAdmin — good for browsing, useless for automated health checks
  • None of them work with Claude, Cursor, or any MCP-compatible AI assistant

So I built pg-dash — a self-hosted, zero-config PostgreSQL health checker.

One command to start

```shell
npx @indiekitai/pg-dash check postgres://user:pass@host/db
```

It runs 46+ automated checks across performance, maintenance, schema, and security, grades your database A–F, and gives you executable SQL fixes — not just a list of problems.
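The post doesn't show the grading formula, so here is a deliberately hypothetical sketch of how weighted check results might roll up into a letter grade. The severity weights and cutoffs below are my own invention, not pg-dash's actual scoring:

```python
# Hypothetical illustration of rolling weighted check failures into an A-F
# grade. Weights and cutoffs are invented for this sketch; pg-dash's real
# scoring formula is not documented in this post.
WEIGHTS = {"critical": 10, "warning": 3, "info": 1}

def grade(findings):
    """findings: list of severity strings, one per failed check."""
    score = 100 - sum(WEIGHTS[severity] for severity in findings)
    for cutoff, letter in ((90, "A"), (80, "B"), (70, "C"), (60, "D")):
        if score >= cutoff:
            return letter
    return "F"
```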

Three things I haven't seen elsewhere

1. Migration safety checker

Before you run a migration, analyze it:

```shell
pg-dash check-migration ./migrations/015_add_index.sql
```

It detects:

  • CREATE INDEX without CONCURRENTLY — locks the table
  • ADD COLUMN NOT NULL without a DEFAULT — full table rewrite in older PG versions
  • ALTER COLUMN TYPE — another full rewrite
  • RENAME TABLE / RENAME COLUMN — breaks application code
  • DELETE / UPDATE without WHERE — obvious but easy to miss under pressure
  • And 6 more patterns

It also connects to your database to check actual row counts, so it can say "this lock would affect a 2.4M-row table — expect ~30 seconds of downtime" instead of just flagging the pattern.

Exits with code 1 in CI if any unsafe patterns are found. Works great as a pre-flight check on every migration PR.
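A minimal sketch of the kind of lexical pattern matching this involves. The regexes here are my own simplification, not pg-dash's implementation, which certainly handles more edge cases:

```python
import re

# Hypothetical regex-based detector for a few of the unsafe-migration
# patterns listed above. Illustrative only; a real checker needs proper
# SQL parsing to avoid false positives.
RULES = [
    (r"\bCREATE\s+INDEX\b(?!.*\bCONCURRENTLY\b)",
     "CREATE INDEX without CONCURRENTLY locks the table"),
    (r"\bADD\s+COLUMN\b.*\bNOT\s+NULL\b(?!.*\bDEFAULT\b)",
     "NOT NULL column without DEFAULT rewrites the table on older PG"),
    (r"\bALTER\s+COLUMN\b.*\bTYPE\b",
     "ALTER COLUMN TYPE can rewrite the whole table"),
    (r"\b(DELETE\s+FROM|UPDATE)\b(?!.*\bWHERE\b)",
     "DELETE/UPDATE without WHERE"),
]

def check_migration(sql: str):
    """Return a list of warning messages for each flagged statement."""
    findings = []
    for stmt in sql.split(";"):
        flat = " ".join(stmt.split())  # normalize whitespace per statement
        for pattern, message in RULES:
            if re.search(pattern, flat, re.IGNORECASE):
                findings.append(message)
    return findings
```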

2. Production readiness audit

These are the things that slowly kill production databases:

Unused indexes — Indexes with 0 scans since the last stats reset. They waste space and slow down every INSERT/UPDATE/DELETE. pg-dash finds them and generates safe DROP INDEX CONCURRENTLY SQL.

Table bloat — Dead tuple ratio per table. High bloat means autovacuum can't keep up. Surfaces both last_autovacuum and last_vacuum timestamps.

Autovacuum health — Classifies each table as ok / stale / overdue / never. If a table has never been vacuumed and has 500K dead tuples, you want to know.

Lock monitoring — Active lock-wait chains (who is blocking whom) and long-running queries over 5 seconds.

Config recommendations — Audits your PostgreSQL settings: shared_buffers, work_mem, checkpoint_completion_target, max_connections, idle_in_transaction_session_timeout, and more. Flags common misconfigurations with severity levels and documentation links.
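The ok / stale / overdue / never classification presumably keys off pg_stat_user_tables columns like last_autovacuum and n_dead_tup. A rough sketch with invented thresholds:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical staleness classifier for a table's autovacuum state, based
# on fields available in pg_stat_user_tables. The 1-day/7-day and 10%/20%
# thresholds are illustrative; pg-dash's actual cutoffs may differ.
def classify_autovacuum(last_autovacuum, n_dead_tup, n_live_tup, now=None):
    now = now or datetime.now(timezone.utc)
    if last_autovacuum is None:
        return "never" if n_dead_tup > 0 else "ok"
    age = now - last_autovacuum
    dead_ratio = n_dead_tup / max(n_live_tup + n_dead_tup, 1)
    if age > timedelta(days=7) and dead_ratio > 0.2:
        return "overdue"
    if age > timedelta(days=1) and dead_ratio > 0.1:
        return "stale"
    return "ok"
```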

3. 23 MCP tools for AI assistants

This is the part I'm most excited about. Connect pg-dash to Claude Desktop or Cursor:

```json
{
  "mcpServers": {
    "pg-dash": {
      "command": "npx",
      "args": ["-y", "-p", "@indiekitai/pg-dash", "pg-dash-mcp", "postgresql://user:pass@host/db"]
    }
  }
}
```

Now you can ask your AI assistant:

  • "Which indexes on this database are never used?"
  • "Is my autovacuum healthy? Which tables are overdue?"
  • "Check this migration file for lock risks"
  • "Compare the schema between staging and production"
  • "What PostgreSQL config changes would help performance?"

It's the difference between copy-pasting query results into chat and giving your AI direct access. All 23 tools are read-only by default, with a strict allowlist for write operations (VACUUM, ANALYZE, REINDEX, CREATE INDEX CONCURRENTLY, pg_terminate_backend/pg_cancel_backend — no raw DDL/DML).

CI integration

```yaml
- name: PostgreSQL health check
  run: npx @indiekitai/pg-dash check $DATABASE_URL --ci --diff --format md
```

--diff compares against the last saved snapshot. --ci emits ::error:: / ::warning:: GitHub Actions annotations and generates a Markdown PR comment. Catch database regressions on every migration, not when production is on fire.
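The ::error:: / ::warning:: lines follow GitHub's documented workflow-command syntax, which any tool can emit to stdout. A sketch of the formatting (the function and finding fields are mine, not pg-dash's):

```python
# Sketch of emitting GitHub Actions workflow commands, the ::error:: /
# ::warning:: annotations the --ci flag refers to. The line format is
# GitHub's documented syntax; this helper is hypothetical.
def annotate(severity, message, file=None, line=None):
    props = ",".join(f"{k}={v}" for k, v in (("file", file), ("line", line)) if v)
    prefix = f"::{severity} {props}" if props else f"::{severity}"
    return f"{prefix}::{message}"
```

Printing these lines during a workflow step makes the finding show up inline on the PR diff.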

Everything stays local

No cloud, no signup, no telemetry. Metrics are stored in SQLite on the machine running pg-dash. The dashboard (live metrics, EXPLAIN tree visualization, query trends, disk growth prediction with linear regression) is optional — the CLI is the main workflow.
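Disk growth prediction with linear regression is straightforward to sketch: fit size-over-time samples by least squares and extrapolate to a capacity limit. This is my own illustration, not pg-dash's code:

```python
# Least-squares sketch of the dashboard's "disk growth prediction with
# linear regression": fit (day_index, size_bytes) samples and find the day
# index at which the fitted line crosses a capacity limit.
def predicted_full_day(samples, limit_bytes):
    """samples: list of (day_index, size_bytes); returns None if not growing."""
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    sxx = sum((x - mean_x) ** 2 for x, _ in samples)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in samples)
    slope = sxy / sxx                  # bytes per day
    if slope <= 0:
        return None                    # flat or shrinking: never fills
    intercept = mean_y - slope * mean_x
    return (limit_bytes - intercept) / slope
```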

Try it

```shell
# Health check
npx @indiekitai/pg-dash check postgres://user:pass@host/db

# Migration safety check
npx @indiekitai/pg-dash check-migration ./migrations/add_users_index.sql

# Compare two environments
npx @indiekitai/pg-dash diff-env --source postgres://local/db --target postgres://staging/db

# Start dashboard
npx @indiekitai/pg-dash postgres://user:pass@host/db
```

GitHub: indiekitai/pg-dash
npm: @indiekitai/pg-dash


Built with Hono, React, SQLite, and TypeScript. 362 tests. Feedback welcome.
