I got tired of EXPLAIN tools that sent my query plans to AI services or hallucinated index suggestions. So I built something different.
## The problem
Last month I was debugging a slow query at work. The EXPLAIN output was 400 lines of JSON. I tried three different "AI-powered" analyzers:
- Two suggested adding an index that already existed
- One hallucinated a table name that wasn't in my database
I just wanted to know: where did the time actually go?
## The realization
Current tools fall into two camps:
- AI wrappers – Send your plan to OpenAI, get generic advice.
- Visualizers – Pretty graphs, but you still interpret them.
Neither tells you "4.5 seconds were spent in triggers, 0.03ms in the scan."
## What I built
PlanCheck is a client-side PostgreSQL EXPLAIN analyzer with one rule: truth only.
- No AI. No API calls. No "smart" suggestions.
- 100% client-side. Your query plans never leave the browser.
- Hardcoded rules based on actual PostgreSQL behavior.
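To make "hardcoded rules" concrete: each rule is just a pure function over the parsed plan tree that emits a measured fact or nothing. Here's a simplified sketch of the idea (the `reportSeqScans` name and shape are illustrative, not PlanCheck's actual code); `Node Type`, `Actual Rows`, and `Plans` are the real keys PostgreSQL emits in `FORMAT JSON`:

```typescript
// Sketch of a "truth only" rule: walk the EXPLAIN (FORMAT JSON) tree
// and report only what was measured -- no suggestions, no guessing.
interface PlanNode {
  "Node Type": string;
  "Actual Rows"?: number;
  Plans?: PlanNode[];
}

function reportSeqScans(node: PlanNode, findings: string[] = []): string[] {
  if (node["Node Type"] === "Seq Scan" && node["Actual Rows"] !== undefined) {
    // A fact, not advice: we report the scan, you decide if it's a problem.
    findings.push(`Scanned ${node["Actual Rows"]} rows sequentially`);
  }
  for (const child of node.Plans ?? []) {
    reportSeqScans(child, findings);
  }
  return findings;
}
```

Because rules never call out, the whole analyzer stays a deterministic function from plan text to findings, which is also what makes it testable against the edge cases below.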
## 18 edge cases I tested
| Issue | What it looks like |
|---|---|
| Trigger amplification | 4.5s in triggers, 0.03ms in scan |
| JIT overhead | 700ms compilation to save 100ms |
| Bitmap recheck failure | Index returns 15K rows, all discarded |
| Recursive CTE runaway | 100K iterations with 99.9% filter rejection |
| Partition pruning failure | 53M rows scanned, 1 returned |
| ... plus 13 more | (too many to list, but you get the idea) |
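The trigger-amplification check, for example, reduces to comparing two numbers PostgreSQL already measured: total trigger time versus the plan's own time. A rough sketch (the 10x and 100ms cutoffs here are illustrative, not PlanCheck's actual thresholds); `Triggers`, `Time`, and `Actual Total Time` are the real JSON keys:

```typescript
// EXPLAIN ANALYZE reports trigger time separately from the plan tree.
interface TriggerEntry {
  "Trigger Name": string;
  Time: number;  // ms
  Calls: number;
}

interface ExplainResult {
  Plan: { "Actual Total Time"?: number };  // ms
  Triggers?: TriggerEntry[];
}

// Flag trigger amplification as a measured fact.
// Thresholds (100ms floor, 10x ratio) are illustrative.
function triggerAmplification(result: ExplainResult): string | null {
  const triggerMs = (result.Triggers ?? []).reduce((sum, t) => sum + t.Time, 0);
  const planMs = result.Plan["Actual Total Time"] ?? 0;
  if (triggerMs > 100 && triggerMs > planMs * 10) {
    return `${triggerMs}ms in triggers, ${planMs}ms in the plan itself`;
  }
  return null;
}
```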
## The technical challenge
Parsing EXPLAIN output in the browser sounds easy. It's not.
PostgreSQL has 40+ node types. Each has different fields, metrics, and edge cases. I had to handle:
- `TEXT` format (messy indentation, inconsistent fields)
- JSON with `null` values (missing timing data)
- "Never executed" nodes that shouldn't be flagged
- Deep nesting (4+ levels of nested loops)
All while keeping the bundle under 200KB.
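The core of the `TEXT`-format problem is turning indentation into a tree. A minimal sketch of the approach, assuming the usual `->  Node` child markers (real plans add detail lines like `Filter:` and `Rows Removed by Filter:`, parallel-worker sections, and other quirks this ignores):

```typescript
// Sketch: rebuild the plan tree from TEXT-format indentation.
// Deeper "->" markers mean a child; equal or shallower means a sibling
// of some ancestor, found by popping the stack.
interface TextNode {
  label: string;
  depth: number;
  children: TextNode[];
}

function parseTextPlan(lines: string[]): TextNode {
  const root: TextNode = { label: lines[0].trim(), depth: -1, children: [] };
  const stack: TextNode[] = [root];
  for (const line of lines.slice(1)) {
    const m = line.match(/^(\s*)->\s+(.*)$/);
    if (!m) continue;  // detail lines (Filter:, Buffers:) handled elsewhere
    const depth = m[1].length;
    const node: TextNode = { label: m[2], depth, children: [] };
    // Pop until the top of the stack is this node's parent.
    while (stack.length > 1 && stack[stack.length - 1].depth >= depth) {
      stack.pop();
    }
    stack[stack.length - 1].children.push(node);
    stack.push(node);
  }
  return root;
}
```

The stack-based pass is O(n) in plan lines, which matters once people paste 400-line plans.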
## What "truth only" actually means
When a tool suggests "add an index," it's guessing your intent. Maybe you want that full table scan for analytics. Maybe the index exists but PostgreSQL chose not to use it.
So PlanCheck only reports measurable facts:
- "Scanned 1M rows sequentially"
- "Trigger took 4.5s, scan took 0.03ms"
- "Index returned 15k rows, recheck discarded all"
You decide what to fix.
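"Where did the time actually go" falls out of one subtraction per node: a node's self time is its inclusive `Actual Total Time` minus its children's. A simplified sketch that assumes `Actual Loops` is 1 everywhere (real plans report per-loop times, so each side needs multiplying by its loop count):

```typescript
// Sketch: attribute time to the node where it was actually spent.
// Assumes "Actual Loops" = 1 everywhere; real plans need
// time * loops on both the node and its children.
interface TimedNode {
  "Node Type": string;
  "Actual Total Time": number;  // inclusive ms
  Plans?: TimedNode[];
}

function selfTime(node: TimedNode): number {
  const childMs = (node.Plans ?? []).reduce(
    (sum, c) => sum + c["Actual Total Time"],
    0,
  );
  // Clamp at 0: rounding in EXPLAIN output can make children sum
  // to slightly more than the parent.
  return Math.max(0, node["Actual Total Time"] - childMs);
}
```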
## Try it
Paste your `EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON)` output. No signup. No data collection.
## What should I add next?
I'm looking for edge cases I haven't tested yet. If you have a query plan that breaks it, I want to see it. Drop a comment or reach out!