# PostgreSQL Slow Queries: How to Find and Fix Them
Your app is slow. You've ruled out the server, the network, the app code. It's the database.
Here's how to find and fix slow PostgreSQL queries systematically.
## Step 1: Enable Slow Query Logging
```sql
-- Check current settings
SHOW log_min_duration_statement;
SHOW log_directory;

-- Enable logging for queries over 1 second
ALTER SYSTEM SET log_min_duration_statement = 1000;
SELECT pg_reload_conf();
```
Or in `postgresql.conf`:

```ini
log_min_duration_statement = 1000   # milliseconds
log_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d,app=%a,client=%h '
```
Now any query taking over 1 second appears in your PostgreSQL logs:

```sh
sudo tail -f /var/log/postgresql/postgresql-*.log | grep duration
```
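With the `log_line_prefix` above, a slow query shows up in the log roughly like this (the timestamp, PID, and query text here are illustrative):

```
2024-06-01 10:23:45 UTC [12345]: [7-1] user=app,db=shop,app=web,client=10.0.0.5 LOG:  duration: 2315.342 ms  statement: SELECT * FROM orders WHERE ...
```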
## Step 2: Find Slow Queries with pg_stat_statements
```sql
-- Enable the extension (once per database)
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Find your worst queries
SELECT
    round(mean_exec_time::numeric, 2) AS avg_ms,
    calls,
    round(total_exec_time::numeric, 2) AS total_ms,
    query
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
```
This shows the average execution time, the call count, and the total time for each normalized query. Note that the module must also be listed in `shared_preload_libraries` in `postgresql.conf` (which requires a restart) before the view will return data.
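The statistics are cumulative since the last reset, so after shipping a fix it helps to clear the counters and measure fresh numbers:

```sql
-- Clear accumulated statistics so new measurements aren't diluted
-- by history (requires superuser or an explicitly granted privilege)
SELECT pg_stat_statements_reset();
```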
## Step 3: EXPLAIN ANALYZE the Slow Query
Take the worst query from above and run `EXPLAIN ANALYZE`:

```sql
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT u.*, COUNT(o.id) AS order_count
FROM users u
LEFT JOIN orders o ON o.user_id = u.id
WHERE u.created_at > NOW() - INTERVAL '30 days'
GROUP BY u.id;
```
Real output to watch for:

```
Seq Scan on users  (cost=0.00..4521.00 rows=180000) (actual time=0.042..892.341 rows=180000 loops=1)
```
A Seq Scan means PostgreSQL read the entire table: either no usable index exists, or the planner judged the filter too unselective for one to help. When your WHERE clause should match only a small fraction of rows, a Seq Scan is almost always the problem.
Compare to what you want:

```
Index Scan using users_created_at_idx on users  (cost=0.43..12.45 rows=10) (actual time=0.021..0.089 rows=10 loops=1)
```
## Step 4: Add the Missing Index
```sql
-- Index the filtered column
CREATE INDEX CONCURRENTLY idx_users_created_at ON users (created_at);

-- Index the join column
CREATE INDEX CONCURRENTLY idx_orders_user_id ON orders (user_id);

-- Composite index for queries filtering by multiple columns
CREATE INDEX CONCURRENTLY idx_orders_user_status ON orders (user_id, status);
```
`CONCURRENTLY` builds the index without taking a lock that blocks writes, so use it in production. Two caveats: it cannot run inside a transaction block, and the build takes longer than a plain `CREATE INDEX`.
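One more thing worth knowing: if a concurrent build fails partway (a deadlock, a unique-constraint violation), it leaves behind an INVALID index that consumes space and write overhead without ever being used. You can check for and clean up leftovers:

```sql
-- Find indexes left INVALID by a failed concurrent build
SELECT indexrelid::regclass AS index_name
FROM pg_index
WHERE NOT indisvalid;

-- Drop the broken index, then retry the CREATE INDEX CONCURRENTLY:
-- DROP INDEX CONCURRENTLY idx_users_created_at;
```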
After adding the index, run `EXPLAIN ANALYZE` again. You should see an Index Scan instead of a Seq Scan.
## Step 5: Common Slow Query Patterns and Fixes
### Pattern 1: N+1 Query
```sql
-- Bad: running this once per user, in a loop
SELECT * FROM orders WHERE user_id = $1;

-- Fix: fetch everything in one batched query
SELECT u.*, o.*
FROM users u
LEFT JOIN orders o ON o.user_id = u.id
WHERE u.id = ANY($1);  -- $1 is an array of user IDs
```
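If the application wants one row per user with that user's orders attached, you can also aggregate the join in SQL instead of regrouping rows in app code. A sketch (the `name` column is assumed for illustration):

```sql
-- One row per user, orders collected into a JSON array;
-- the FILTER clause keeps users with no orders as '[]' rather than [null]
SELECT u.id, u.name,
       COALESCE(json_agg(o) FILTER (WHERE o.id IS NOT NULL), '[]'::json) AS orders
FROM users u
LEFT JOIN orders o ON o.user_id = u.id
WHERE u.id = ANY($1)
GROUP BY u.id, u.name;
```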
### Pattern 2: SELECT *
```sql
-- Bad: fetching all columns, including large blobs
SELECT * FROM products;

-- Fix: select only what you need
SELECT id, name, price FROM products;
```
### Pattern 3: LIKE with leading wildcard
```sql
-- Bad: a leading wildcard defeats a B-tree index
SELECT * FROM users WHERE email LIKE '%@gmail.com';

-- Better: a trigram index (pg_trgm) can serve leading-wildcard LIKE queries
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX idx_users_email_trgm ON users USING GIN (email gin_trgm_ops);
```
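For pure suffix matches like the Gmail example, another option is an expression index on the reversed string, which turns the leading wildcard into a trailing one that a B-tree can serve. A sketch (index and query must use the same `reverse()` expression):

```sql
-- Index the reversed email; text_pattern_ops makes LIKE prefix
-- matching work regardless of the database collation
CREATE INDEX idx_users_email_rev ON users (reverse(email) text_pattern_ops);

-- '%@gmail.com' becomes a prefix match on the reversed string
SELECT * FROM users
WHERE reverse(email) LIKE reverse('@gmail.com') || '%';
```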
## Quick Diagnostic Checklist
```sql
-- 1. Find foreign-key columns with no index
--    (the indexdef LIKE match is approximate, but good enough for triage)
SELECT
    tc.table_name,
    kcu.column_name
FROM information_schema.table_constraints tc
JOIN information_schema.key_column_usage kcu
  ON tc.constraint_name = kcu.constraint_name
WHERE tc.constraint_type = 'FOREIGN KEY'
  AND NOT EXISTS (
    SELECT 1 FROM pg_indexes
    WHERE tablename = tc.table_name
      AND indexdef LIKE '%' || kcu.column_name || '%'
  );
```
```sql
-- 2. Find bloated tables (candidates for VACUUM)
SELECT schemaname, tablename,
       n_live_tup, n_dead_tup,
       round(n_dead_tup::numeric / NULLIF(n_live_tup + n_dead_tup, 0) * 100, 1) AS dead_pct
FROM pg_stat_user_tables
WHERE n_dead_tup > 1000
ORDER BY dead_pct DESC;

-- 3. Reclaim dead tuples and refresh planner statistics
VACUUM ANALYZE your_table_name;
```
I built ARIA to solve exactly this.
Try it free at step2dev.com (no credit card needed).