DEV Community

Thesius Code

Posted on • Originally published at datanest-stores.pages.dev

PostgreSQL Optimization Kit


Turn your slow PostgreSQL database into a fast one. This kit provides query tuning scripts that identify exactly which queries need attention, index analysis tools that find missing and unused indexes, vacuum configuration templates tuned for your workload size, connection pooling setup with PgBouncer, and monitoring dashboards that track performance trends over time. Every tool includes the diagnostic query and the fix.

Key Features

  • Query performance analyzer using pg_stat_statements to rank queries by total time, calls, mean time, and rows returned
  • Index advisor scripts that detect missing indexes from sequential scans, unused indexes wasting disk, and duplicate indexes
  • VACUUM and AUTOVACUUM tuning with per-table configuration templates for OLTP, analytics, and mixed workloads
  • Connection pooling setup with PgBouncer configs for transaction-mode pooling, connection limits, and failover
  • EXPLAIN ANALYZE interpreter with a guide to reading execution plans and identifying common bottlenecks
  • Table bloat estimator that calculates dead tuple ratios and recommends VACUUM FULL or pg_repack based on severity
  • Lock monitoring queries showing active locks, blocked queries, and lock wait chains for deadlock investigation
  • Performance baseline capture to track query latency percentiles, cache hit ratios, and checkpoint frequency over time
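The bloat-severity decision mentioned in the feature list can be sketched as a simple dead-tuple-ratio check. This is a minimal illustration; the thresholds below are placeholders, not the kit's actual values:

```python
def bloat_recommendation(live_tuples: int, dead_tuples: int) -> str:
    """Recommend a maintenance action from the dead-tuple ratio.

    Thresholds are illustrative: a real decision should also weigh
    table size, free disk, and whether downtime is acceptable.
    """
    total = live_tuples + dead_tuples
    if total == 0:
        return "no data"
    ratio = dead_tuples / total
    if ratio < 0.10:
        return "ok: routine autovacuum is keeping up"
    if ratio < 0.30:
        return "run VACUUM (or tune autovacuum to trigger sooner)"
    if ratio < 0.50:
        return "use pg_repack: rewrites the table without a long exclusive lock"
    return "consider VACUUM FULL in a maintenance window (takes ACCESS EXCLUSIVE lock)"

print(bloat_recommendation(1_000_000, 50_000))   # healthy ratio (~5%)
print(bloat_recommendation(100_000, 900_000))    # severe: full rewrite needed
```

pg_repack sits in the middle band because it rebuilds the table online, while VACUUM FULL locks it for the duration.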

Quick Start

unzip postgresql-optimization-kit.zip
cd postgresql-optimization-kit/

# 1. Enable pg_stat_statements (also requires shared_preload_libraries = 'pg_stat_statements' in postgresql.conf, plus a restart)
psql -h localhost -U postgres -c "CREATE EXTENSION IF NOT EXISTS pg_stat_statements;"

# 2. Run the top-queries analyzer
psql -h localhost -U postgres -d myapp -f src/queries/top_slow_queries.sql

# 3. Run the index advisor
psql -h localhost -U postgres -d myapp -f src/indexes/missing_indexes.sql

# 4. Check table bloat
psql -h localhost -U postgres -d myapp -f src/maintenance/table_bloat.sql

Quick diagnostic — find your top 10 slowest queries:

SELECT
    LEFT(query, 80) AS query_preview,
    calls,
    ROUND(total_exec_time::numeric, 1) AS total_ms,
    ROUND(mean_exec_time::numeric, 1) AS mean_ms,
    ROUND((100.0 * shared_blks_hit / NULLIF(shared_blks_hit + shared_blks_read, 0))::numeric, 1)
        AS cache_hit_pct,
    rows
FROM pg_stat_statements
WHERE calls > 10
ORDER BY total_exec_time DESC
LIMIT 10;
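Why order by `total_exec_time` rather than mean time? A small sketch with made-up pg_stat_statements-style counters makes the point: a 90-second report that runs once matters less than a fast query called 50,000 times.

```python
# Hypothetical rows: (query, calls, total_exec_time_ms, shared_blks_hit, shared_blks_read)
stats = [
    ("SELECT * FROM orders WHERE ...",   50_000, 120_000.0, 9_500_000, 500_000),
    ("REFRESH MATERIALIZED VIEW ...",         1,  90_000.0,    10_000, 990_000),
    ("SELECT id FROM users WHERE ...",  800_000,  40_000.0, 7_990_000,  10_000),
]

def summarize(row):
    """Compute the same derived columns as the SQL above."""
    query, calls, total_ms, hit, read = row
    mean_ms = total_ms / calls
    cache_hit_pct = 100.0 * hit / (hit + read) if hit + read else None
    return query[:40], calls, round(total_ms, 1), round(mean_ms, 1), round(cache_hit_pct, 1)

# Rank by total time: the hot 2.4 ms query burning 120 s in aggregate
# outranks the once-a-day 90 s query.
for row in sorted(stats, key=lambda r: r[2], reverse=True):
    print(summarize(row))
```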

Architecture / How It Works

postgresql-optimization-kit/
├── src/
│   ├── queries/
│   │   ├── top_slow_queries.sql      # Rank queries by total execution time
│   │   ├── most_called_queries.sql   # Highest call frequency queries
│   │   └── cache_miss_queries.sql    # Queries with worst cache hit ratios
│   ├── indexes/
│   │   ├── missing_indexes.sql       # Tables doing sequential scans
│   │   ├── unused_indexes.sql        # Indexes with zero scans
│   │   ├── duplicate_indexes.sql     # Redundant overlapping indexes
│   │   └── index_size_report.sql     # Index storage consumption
│   ├── maintenance/
│   │   ├── table_bloat.sql           # Dead tuple ratio per table
│   │   ├── vacuum_config.sql         # Per-table autovacuum tuning
│   │   └── checkpoint_analysis.sql   # Checkpoint frequency and duration
│   ├── locks/
│   │   ├── active_locks.sql          # Current lock holders and waiters
│   │   └── lock_wait_chain.sql       # Deadlock chain visualization
│   ├── connections/
│   │   ├── pgbouncer.ini             # PgBouncer configuration
│   │   └── connection_stats.sql      # Per-user/database connection counts
│   └── baseline/
│       └── capture_baseline.sql      # Snapshot current performance metrics
├── examples/
│   ├── explain_analyze_guide.md
│   └── optimization_workflow.md
└── config.example.yaml

Usage Examples

Find missing indexes (tables doing too many sequential scans):

SELECT
    schemaname,
    relname AS table_name,
    seq_scan,
    seq_tup_read,
    idx_scan,
    ROUND(
        100.0 * COALESCE(idx_scan, 0) / NULLIF(COALESCE(idx_scan, 0) + seq_scan, 0), 1
    ) AS idx_scan_pct,
    pg_size_pretty(pg_relation_size(relid)) AS table_size
FROM pg_stat_user_tables
WHERE seq_scan > 100
  AND pg_relation_size(relid) > 10485760  -- tables > 10MB only
  AND COALESCE(100.0 * idx_scan / NULLIF(idx_scan + seq_scan, 0), 0) < 90  -- NULL idx_scan (no indexes at all) counts as 0%
ORDER BY seq_tup_read DESC
LIMIT 20;
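The WHERE clause above encodes a three-part heuristic: enough sequential scans to matter, a table big enough to care about, and poor index usage. Expressed as a hypothetical Python predicate (same thresholds, simplified):

```python
def is_missing_index_candidate(seq_scan, idx_scan, size_bytes,
                               min_seq_scans=100,
                               min_size_bytes=10 * 1024 * 1024,
                               idx_pct_threshold=90.0):
    """Mirror the SQL filter: flag big tables that mostly seq-scan."""
    if seq_scan <= min_seq_scans or size_bytes <= min_size_bytes:
        return False
    idx = idx_scan or 0  # idx_scan is NULL for tables with no indexes
    idx_pct = 100.0 * idx / (idx + seq_scan)
    return idx_pct < idx_pct_threshold

print(is_missing_index_candidate(5_000, 200, 50 * 1024**2))      # mostly seq scans -> True
print(is_missing_index_candidate(150, 100_000, 50 * 1024**2))    # index well used -> False
```

The size floor matters: Postgres deliberately seq-scans tiny tables because reading a few pages is cheaper than an index lookup, so small tables would be false positives.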

Tune autovacuum for a high-churn table:

-- Default autovacuum triggers at 20% dead tuples, which is too late
-- for tables with millions of rows. Tune per table:
ALTER TABLE orders SET (
    autovacuum_vacuum_threshold = 1000,         -- start after 1000 dead rows
    autovacuum_vacuum_scale_factor = 0.01,      -- plus 1% of table (not 20%)
    autovacuum_analyze_threshold = 500,
    autovacuum_analyze_scale_factor = 0.005,
    autovacuum_vacuum_cost_delay = 2            -- ms; lower = faster vacuum
);

-- Verify current autovacuum settings per table
SELECT
    relname,
    n_dead_tup,
    last_autovacuum,
    last_autoanalyze,
    pg_size_pretty(pg_relation_size(relid)) AS size
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
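The ALTER TABLE settings above shift the point at which autovacuum fires. The trigger is `autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor × reltuples`; a minimal sketch (the 10M row count is illustrative) shows why the defaults are too lax for large tables:

```python
def autovacuum_trigger(reltuples, threshold=50, scale_factor=0.2):
    """Dead tuples that must accumulate before autovacuum fires.

    Defaults match PostgreSQL's autovacuum_vacuum_threshold (50)
    and autovacuum_vacuum_scale_factor (0.2).
    """
    return threshold + scale_factor * reltuples

rows = 10_000_000
default = autovacuum_trigger(rows)
tuned = autovacuum_trigger(rows, threshold=1000, scale_factor=0.01)
print(f"default: {default:,.0f} dead tuples")  # 2,000,050
print(f"tuned:   {tuned:,.0f} dead tuples")    # 101,000
```

At the defaults, a 10M-row table tolerates two million dead tuples before any cleanup; the tuned settings cut that by roughly 20x.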

PgBouncer configuration for transaction pooling:

; pgbouncer.ini — transaction-mode pooling
[databases]
myapp = host=127.0.0.1 port=5432 dbname=myapp

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt

pool_mode = transaction        ; release connection after each transaction
default_pool_size = 25         ; connections per user/database pair
max_client_conn = 200          ; total client connections allowed
reserve_pool_size = 5          ; extra connections for burst traffic
reserve_pool_timeout = 3       ; seconds before using reserve pool

server_idle_timeout = 300      ; close idle server connections after 5 min
query_timeout = 30             ; cancel queries running over 30s (pair with a slightly lower server-side statement_timeout)
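A sanity check worth running against any PgBouncer config: in transaction mode, server-side connections are bounded by the number of (user, database) pools times `default_pool_size + reserve_pool_size`, and that bound must fit under Postgres's `max_connections`. A minimal sketch (the `max_connections = 100` figure is an assumption, not from the kit):

```python
def max_server_connections(pools, default_pool_size=25, reserve_pool_size=5):
    """Upper bound on backend connections PgBouncer may open.

    Each distinct (user, database) pair gets its own pool, so `pools`
    is the number of such pairs your clients actually use.
    """
    return pools * (default_pool_size + reserve_pool_size)

needed = max_server_connections(pools=1)  # one app user, one database
postgres_max_connections = 100            # assumed server setting
print(needed, "backend connections at peak;",
      "fits" if needed < postgres_max_connections else "raise max_connections")
```

Note that `max_client_conn = 200` does not change this bound: 200 clients share the 30 backend connections, which is the whole point of pooling.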

Configuration

# config.example.yaml
postgresql:
  host: localhost
  port: 5432
  database: myapp
  user: postgres
  password: YOUR_PASSWORD_HERE

optimization:
  pg_stat_statements: true       # requires extension
  min_table_size_mb: 10          # ignore small tables in reports
  min_query_calls: 10            # ignore rarely-executed queries
  seq_scan_threshold_pct: 90     # flag tables below this index usage

vacuum:
  high_churn_scale_factor: 0.01  # for tables with > 1M rows
  normal_scale_factor: 0.05      # for medium tables
  cost_delay_ms: 2               # aggressive vacuum (default is 20)

pgbouncer:
  pool_mode: transaction
  default_pool_size: 25
  max_client_conn: 200

Best Practices

  1. Reset pg_stat_statements after major changes with SELECT pg_stat_statements_reset(); to get clean performance data for your new baseline.
  2. Don't add indexes blindly. Each index slows writes and consumes disk. Verify with EXPLAIN ANALYZE that the index actually changes the plan.
  3. Set autovacuum_vacuum_scale_factor to 0.01 on tables with over 1 million rows. The default 0.2 means vacuum waits until 200K dead tuples accumulate.
  4. Use PgBouncer in transaction mode for web applications. Session mode wastes connections; statement mode breaks prepared statements.
  5. Monitor checkpoint frequency. If checkpoints happen more than once per minute, increase max_wal_size to reduce I/O spikes.
  6. Profile before you optimize. The slowest query by mean time may only run once a day. Focus on total_exec_time for overall impact.
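For point 5, a rough way to size `max_wal_size` is WAL generation rate times the desired checkpoint interval, plus headroom so you don't sit exactly at the trigger point. This is a back-of-the-envelope sketch with made-up numbers, not a formula from the kit:

```python
def suggested_max_wal_size_mb(wal_mb_per_min, target_interval_min=5, headroom=1.5):
    """Rough max_wal_size so WAL-triggered checkpoints land ~target_interval apart.

    Checkpoints also fire on checkpoint_timeout, so this only controls
    the WAL-volume trigger; headroom > 1 leaves slack for write bursts.
    """
    return wal_mb_per_min * target_interval_min * headroom

# If the server writes ~200 MB of WAL per minute (an assumed figure):
print(f"max_wal_size = {suggested_max_wal_size_mb(200):.0f}MB")  # 1500MB
```

Measure the actual WAL rate first (e.g. by sampling `pg_current_wal_lsn()` over an interval) rather than guessing.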

Troubleshooting

| Problem | Cause | Fix |
| --- | --- | --- |
| pg_stat_statements returns empty | Extension not in shared_preload_libraries | Add it to postgresql.conf and restart (a reload is not sufficient) |
| High bloat despite autovacuum running | Long-running transactions holding back xmin | Find and terminate idle-in-transaction sessions: `SELECT * FROM pg_stat_activity WHERE state = 'idle in transaction';` |
| PgBouncer "no more connections" error | max_client_conn exceeded | Increase the limit or investigate a connection leak in the application |
| EXPLAIN shows Seq Scan despite an existing index | Query applies a function to the indexed column | Create a functional index: `CREATE INDEX ON tbl (LOWER(email));` |

This is 1 of 9 resources in the Database Admin Pro toolkit. Get the complete [PostgreSQL Optimization Kit] with all files, templates, and documentation for $49.

Get the Full Kit →

Or grab the entire Database Admin Pro bundle (9 products) for $109 — save 30%.

Get the Complete Bundle →

