When your PostgreSQL database grows beyond a few gigabytes, pg_dump can go from a quick operation to a multi-hour ordeal that strains server resources and disrupts maintenance windows. Slow backups aren't just inconvenient — they increase the risk of incomplete dumps, create wider recovery point gaps and can impact production performance during peak hours. This guide covers five proven techniques to dramatically reduce pg_dump execution time on large databases.
1. Enable Parallel Dumping with Directory Format
The single most impactful optimization is enabling parallel processing. By default, pg_dump runs as a single process, dumping one table at a time. With the directory format and the -j flag, multiple tables are dumped simultaneously across separate worker processes, cutting backup time roughly in proportion to the number of workers.
| Cores Used | Typical Speedup | Best For |
|---|---|---|
| 2 jobs | 1.8x faster | Small servers, shared hosting |
| 4 jobs | 3.2x faster | Standard production servers |
| 8 jobs | 5-6x faster | Dedicated database servers |
| 16+ jobs | 7-10x faster | High-performance infrastructure |
# Parallel dump with 4 workers
pg_dump -F d -j 4 -d myapp -f /backup/myapp_dir/
# Parallel dump with compression
pg_dump -F d -j 8 -Z 6 -d myapp -f /backup/myapp_dir/
Set the number of jobs to roughly match your available CPU cores minus one or two for the database server itself. For a 16-core server, -j 12 or -j 14 typically delivers optimal throughput without starving PostgreSQL of resources.
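If your backups run on hosts with different core counts, you can derive the worker count at runtime instead of hard-coding it. This is a minimal shell sketch, assuming GNU coreutils (for nproc) and a host with at least four cores:
# Derive the worker count from visible cores, leaving two for PostgreSQL itself
JOBS=$(( $(nproc) - 2 ))
pg_dump -F d -j "$JOBS" -d myapp -f /backup/myapp_dir/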
2. Exclude Large Non-Critical Tables
Many production databases contain tables that don't need to be in every backup — audit logs, session data, analytics events, temporary processing tables. These tables often account for 50-80% of total database size while being either regenerable or non-essential for disaster recovery. The -T flag lets you skip them entirely.
# Exclude logging and analytics tables
pg_dump -d myapp -T audit_logs -T event_tracking -T sessions -f backup.dump
# Exclude tables matching patterns
pg_dump -d myapp -T '*_logs' -T '*_archive' -T 'temp_*' -f backup.dump
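If you want a table's definition in the backup but not its rows (so it is recreated empty on restore), pg_dump also provides --exclude-table-data. The table patterns below are illustrative:
# Keep the schema for log and archive tables but skip their data
pg_dump -F c -d myapp --exclude-table-data='audit_logs' --exclude-table-data='*_archive' -f backup.dump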
Before excluding tables, audit your database to identify candidates:
- Tables with millions of rows that grow daily (logs, events, metrics)
- Historical archive tables that have separate backup procedures
- Temporary or staging tables used for ETL processes
- Session and cache tables that are transient by nature
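A quick way to surface these candidates is to rank tables by total on-disk size, including indexes and TOAST. This psql one-liner is a sketch assuming a database named myapp:
# List the 20 largest tables by total size
psql -d myapp -c "SELECT relname, pg_size_pretty(pg_total_relation_size(relid)) AS total_size FROM pg_statio_user_tables ORDER BY pg_total_relation_size(relid) DESC LIMIT 20;"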
Excluding even one large table can cut backup time in half while keeping your recovery-critical data fully protected.
3. Reduce Compression Level for Speed
Compression is a trade-off between file size and CPU time. The default compression level of 6 provides good balance, but for large databases where backup speed is the priority, lowering compression can yield significant time savings — especially when storage space isn't constrained.
| Compression | Relative Speed | File Size Impact | When to Use |
|---|---|---|---|
| `-Z 0` | Fastest | 3-5x larger | SSD storage, speed is critical |
| `-Z 1` | Very fast | 2-3x larger | Fast daily backups |
| `-Z 3` | Fast | 1.5-2x larger | Balanced frequent backups |
| `-Z 6` | Moderate | Baseline | Default, good for most cases |
| `-Z 9` | Slowest | 10-20% smaller | Archival, storage is expensive |
# Minimal compression for maximum speed
pg_dump -F c -Z 1 -d myapp -f backup.dump
# No compression when storage is cheap
pg_dump -F c -Z 0 -d myapp -f backup.dump
For databases over 100GB, dropping from -Z 6 to -Z 1 can reduce backup time by 30-40% while only increasing file size by 50-70%. If you're storing backups on fast SSD storage or have ample disk space, this trade-off often makes sense for daily backups.
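Because the payoff depends on how compressible your data is, it can be worth timing a few levels before committing, ideally against a staging copy or a representative subset rather than the full production database. A rough bash sketch (paths and levels are illustrative):
# Time a dump at several compression levels and compare file sizes afterwards
for z in 0 1 6; do
  echo "Compression level $z:"
  time pg_dump -F c -Z "$z" -d myapp -f "/backup/bench_Z${z}.dump"
done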
4. Dump During Low-Activity Periods
Even with technical optimizations, pg_dump performance is heavily influenced by database activity. Running backups during peak hours means competing with production queries for I/O bandwidth, memory and CPU cycles. Scheduling dumps during low-activity windows can reduce execution time by 20-50% without changing any dump parameters.
Identify your low-activity periods by checking:
- Query activity patterns from `pg_stat_activity`
- Connection counts throughout the day
- I/O utilization metrics from your monitoring system
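For a quick point-in-time reading, the query below counts currently active backends; sampling it hourly over a few days gives a usable activity profile. A sketch assuming the myapp database:
# Count active (non-idle) sessions right now
psql -d myapp -c "SELECT count(*) FROM pg_stat_activity WHERE state = 'active';"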
# Schedule via cron for 3 AM (typical low-activity window)
0 3 * * * pg_dump -F d -j 4 -d myapp -f /backup/myapp_$(date +\%Y\%m\%d)/
Beyond timing, consider these server-side adjustments during backup windows:
- Temporarily increase `maintenance_work_mem` for the dump session
- Ensure `checkpoint_completion_target` allows smooth I/O distribution
- Pause non-critical background jobs and maintenance tasks
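To see where those settings currently stand before adjusting anything, SHOW works from any psql session; a minimal sketch:
# Inspect current values before the backup window
psql -d myapp -c "SHOW maintenance_work_mem;"
psql -d myapp -c "SHOW checkpoint_completion_target;"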
The combination of off-peak scheduling and reduced system contention often delivers more improvement than any single technical flag change.
5. Use Custom Format Over Plain SQL
The plain SQL format (-F p) generates human-readable output but is significantly slower to dump and, especially, to restore for large databases. The custom format (-F c) writes a binary archive that supports built-in compression and enables selective and parallel restore later.
# Slow: Plain SQL format
pg_dump -F p -d myapp -f backup.sql
# Fast: Custom format with compression
pg_dump -F c -d myapp -f backup.dump
# Fastest: Directory format with parallel workers
pg_dump -F d -j 4 -d myapp -f /backup/myapp_dir/
Custom format backups are also more flexible during restoration — you can selectively restore specific tables, reorder operations and use parallel restore with pg_restore -j. For databases over 10GB, always prefer custom (-F c) or directory (-F d) format over plain SQL.
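To use that flexibility on the restore side, pg_restore supports both parallel and per-table restores. A sketch, assuming the target database already exists and that orders stands in for one of your tables:
# Parallel restore from a directory-format dump
pg_restore -j 4 -d myapp_restored /backup/myapp_dir/
# Restore a single table from a custom-format dump
pg_restore -d myapp_restored -t orders backup.dump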
A Simpler Alternative: Postgresus
Managing pg_dump optimizations across multiple databases, schedules and retention policies quickly becomes complex. Postgresus is a PostgreSQL-focused backup tool suitable for both individuals and enterprise teams. It handles parallel dumping, intelligent scheduling, automatic compression tuning and multi-destination storage (S3, Google Drive, Dropbox, NAS) through a clean web interface, with no scripting required. For databases under 500GB where you don't have dedicated DBAs, Postgresus offers a more practical path than pgBackRest's enterprise-focused complexity while remaining robust and production-ready.
Conclusion
Speeding up pg_dump on large PostgreSQL databases comes down to five key strategies: enable parallel dumping with directory format, exclude non-critical tables, reduce compression when speed matters more than size, schedule backups during low-activity periods and use binary formats instead of plain SQL. Combining these techniques — for example, pg_dump -F d -j 8 -Z 1 -T '*_logs' -d myapp -f /backup/dir/ — can reduce backup time from hours to minutes. Whether you implement these optimizations manually or through an automated solution like Postgresus, faster backups mean smaller recovery windows, less server strain and more reliable data protection.
