PostgreSQL Slow Query Log: Finding & Fixing Your Slowest Queries
Here's something that surprises a lot of developers: PostgreSQL doesn't log slow queries by default. The log_min_duration_statement parameter ships at -1 (disabled), which means your database could be running queries that take 5 seconds, 10 seconds, even 30 seconds -- and nobody knows unless someone complains. Performance regressions happen silently, accumulate over months, and by the time they cross the pain threshold, the root cause is buried under layers of schema changes and data growth.
Let's fix that.
The Problem
The most common scenario is a query that gradually degrades. It took 50ms six months ago, takes 500ms today, and nobody noticed the drift because slow query logging was never enabled. The table grew, statistics shifted, maybe an index got dropped during a migration. Without logging, there's no trail to follow.
Even teams that enable slow query logging often misconfigure it. Three common mistakes:
Too high a threshold. Setting log_min_duration_statement to 10 seconds catches only catastrophic queries. The steady stream of 1-2 second queries that collectively dominate database load goes completely unnoticed.
Too low a threshold. Setting it to 1ms floods the logs with noise. Finding meaningful signals in gigabytes of log output is impractical.
Optimizing outliers instead of total load. Slow query logs show individual executions. A query that runs once at 800ms is less important than a query running 10,000 times per hour at 50ms each (500 seconds of total load). Without aggregation via pg_stat_statements, you're chasing the wrong queries.
How to Detect It
First, check what's currently configured:
-- Check current slow query log configuration
SELECT
    name,
    setting,
    unit,
    short_desc
FROM pg_settings
WHERE name IN (
    'log_min_duration_statement',
    'log_statement',
    'log_duration',
    'log_line_prefix',
    'auto_explain.log_min_duration'
)
ORDER BY name;
If log_min_duration_statement shows -1, slow query logging is completely disabled. A value of 0 logs everything (useful for debugging, too noisy for production). A good production starting point is 250 to 1000 milliseconds.
Use pg_stat_statements to find the queries that matter most -- sorted by total execution time, not individual execution time:
-- Top queries by total execution time (cumulative load)
SELECT
    substring(query, 1, 100) AS query_preview,
    calls,
    round(total_exec_time::numeric, 1) AS total_time_ms,
    round(mean_exec_time::numeric, 1) AS avg_time_ms,
    round(max_exec_time::numeric, 1) AS max_time_ms,
    round(stddev_exec_time::numeric, 1) AS stddev_ms,
    rows
FROM pg_stat_statements
WHERE calls > 10
ORDER BY total_exec_time DESC
LIMIT 20;
The total_exec_time column is the most actionable metric. A query averaging 5ms called 1 million times (5,000 seconds total) deserves more attention than one averaging 2 seconds called 10 times (20 seconds total). The stddev_exec_time reveals plan instability -- queries where some executions are fast and others are slow.
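To pull those unstable queries out directly, the same view can be sorted by relative variance instead of total time. A sketch -- the call-count cutoff and the `variability` ratio are arbitrary choices, not standard columns:

```sql
-- Queries with high run-to-run variance (possible plan instability)
SELECT
    substring(query, 1, 100) AS query_preview,
    calls,
    round(mean_exec_time::numeric, 1) AS avg_time_ms,
    round(stddev_exec_time::numeric, 1) AS stddev_ms,
    -- coefficient of variation: stddev relative to the mean
    round((stddev_exec_time / nullif(mean_exec_time, 0))::numeric, 2) AS variability
FROM pg_stat_statements
WHERE calls > 100
ORDER BY variability DESC NULLS LAST
LIMIT 20;
```

A high `variability` often points at a query that flips between plans depending on parameter values -- a good candidate for a closer look with EXPLAIN.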
Setting Up Slow Query Logging
Enable with an appropriate threshold:
-- 250ms is a good starting point for web/API workloads
ALTER SYSTEM SET log_min_duration_statement = '250ms';
SELECT pg_reload_conf();
-- Verify
SHOW log_min_duration_statement;
For analytical workloads where multi-second queries are normal, set it higher (2-5 seconds). You can always lower it later once you've processed the initial findings.
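If OLTP and analytical workloads share one cluster, the threshold can also be scoped per database or per role instead of globally. A sketch -- the database and role names here are placeholders:

```sql
-- Stricter threshold for the transactional database
ALTER DATABASE app_db SET log_min_duration_statement = '250ms';

-- Looser threshold for a reporting role that runs long queries by design
ALTER ROLE reporting_user SET log_min_duration_statement = '5s';
```

These overrides take effect on new sessions, so existing connections keep the old value until they reconnect.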
Add useful context to log lines:
ALTER SYSTEM SET log_line_prefix = '%m [%p] %q%u@%d ';
-- %m = timestamp with milliseconds
-- %p = process ID
-- %u = user name
-- %d = database name
-- Log bind parameter values when a statement errors out (PostgreSQL 13+)
ALTER SYSTEM SET log_parameter_max_length_on_error = 1024;
SELECT pg_reload_conf();
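With that prefix, a slow query entry in the log looks roughly like this (illustrative line -- the user, database, and timings are made up, not captured from a real server):

```
2026-02-15 14:03:21.532 UTC [12345] app_user@app_db LOG:  duration: 812.450 ms  statement: SELECT * FROM orders WHERE customer_id = 42
```

The timestamp, PID, and user@database fields come from %m, %p, and %u@%d respectively, which makes the lines easy to correlate with application traces.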
Enable auto_explain for automatic plan capture:
-- In postgresql.conf (restart required): shared_preload_libraries = 'pg_stat_statements, auto_explain'
ALTER SYSTEM SET auto_explain.log_min_duration = '500ms';
ALTER SYSTEM SET auto_explain.log_analyze = off; -- on = actual times (adds overhead)
ALTER SYSTEM SET auto_explain.log_buffers = on;
ALTER SYSTEM SET auto_explain.log_format = 'json';
ALTER SYSTEM SET auto_explain.log_nested_statements = on;
SELECT pg_reload_conf();
With auto_explain.log_analyze = off, only the plan is logged (as with plain EXPLAIN); the query still executes normally, but without the per-node timing instrumentation that makes log_analyze expensive. Turn it on only during targeted debugging.
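For a one-off investigation, auto_explain can also be loaded and tuned in a single session (typically as a superuser) instead of cluster-wide. A sketch:

```sql
-- Load the module for this session only
LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;   -- log every statement in this session
SET auto_explain.log_analyze = on;       -- include actual times and row counts
SET auto_explain.log_buffers = on;

-- Now run the suspect query; its full plan appears in the server log
```

This keeps the heavier log_analyze instrumentation confined to your session while production traffic is unaffected.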
On AWS RDS/Aurora, configure via parameter groups:
log_min_duration_statement = 250
log_statement = none
shared_preload_libraries = pg_stat_statements,auto_explain
auto_explain.log_min_duration = 500
RDS logs go to CloudWatch. Enable "Publish to CloudWatch" in the RDS console to export PostgreSQL logs automatically.
Enable pg_stat_statements if not active:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
ALTER SYSTEM SET pg_stat_statements.max = 10000;  -- takes effect only after a server restart
ALTER SYSTEM SET pg_stat_statements.track = 'all';
SELECT pg_reload_conf();
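A quick sanity check that the extension is installed and actually accumulating statistics -- a sketch:

```sql
-- Non-zero count means the extension is loaded and tracking queries
SELECT count(*) AS tracked_statements FROM pg_stat_statements;

-- Confirm the effective settings
SELECT name, setting
FROM pg_settings
WHERE name LIKE 'pg_stat_statements%';
```

If the first query errors with "relation does not exist", the extension was never created in this database; if it errors about the shared library, shared_preload_libraries is missing the entry.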
Analyzing Logs at Scale
For historical analysis, pgBadger generates HTML reports from PostgreSQL log files:
# Generate an HTML report
pgbadger /var/log/postgresql/postgresql-*.log -o slow_query_report.html
# For a specific date range
pgbadger --begin "2026-02-01 00:00:00" --end "2026-02-28 23:59:59" \
    /var/log/postgresql/postgresql-*.log -o february_report.html
pgBadger normalizes queries, groups them by pattern, and shows total execution time, call frequency, and hourly distribution. It's an excellent complement to real-time monitoring.
Prevention Strategy
Establish a slow query budget as part of your performance SLA. Define what "slow" means for your app -- for a web API, over 100ms might be unacceptable; for a batch pipeline, 5 seconds might be fine. Set log_min_duration_statement to match your SLA and treat every logged query as a performance bug.
Monitor trends, not just individual slow queries. A query degrading from 10ms to 50ms over three months won't trigger a 250ms threshold, but it's a 5x regression that will eventually become a problem.
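One lightweight way to catch that kind of drift is a periodic snapshot of pg_stat_statements into a history table, then diffing between snapshots. A sketch -- the table name and the scheduling mechanism are assumptions, not a standard feature:

```sql
-- Hypothetical history table; the WHERE false copies structure only, no rows
CREATE TABLE IF NOT EXISTS stat_statements_history AS
SELECT now() AS captured_at, queryid, calls, total_exec_time, mean_exec_time
FROM pg_stat_statements
WHERE false;

-- Run on a schedule (cron, pg_cron, etc.)
INSERT INTO stat_statements_history
SELECT now(), queryid, calls, total_exec_time, mean_exec_time
FROM pg_stat_statements;
```

Comparing mean_exec_time for the same queryid across captured_at values surfaces slow regressions long before they cross the logging threshold.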
Include EXPLAIN ANALYZE in code reviews. Before merging any PR that adds or modifies a database query, run it against a production-sized dataset and verify the plan uses indexes appropriately.
Reset pg_stat_statements after major releases (SELECT pg_stat_statements_reset()) to get a clean baseline. Cumulative stats make it hard to isolate the impact of recent changes.
Originally published at mydba.dev/blog/postgres-slow-query-log