PostgreSQL's pg_dump utility is the go-to tool for creating logical backups, but its true power lies in the dozens of command-line options that let you customize exactly what gets backed up and how. Knowing which options to use — and when — can mean the difference between a backup that saves your project and one that falls short during a crisis. This guide covers the 10 most essential pg_dump options every developer and DBA should master.
1. Format Selection with -F / --format
The format option determines how your backup is stored and what restoration capabilities you'll have. Choosing the right format upfront saves time and headaches during recovery. This single option affects compression, parallel restore support and file compatibility.
| Format | Flag | Extension | Compression | Parallel Restore | Best For |
|---|---|---|---|---|---|
| Plain | -F p | .sql | No | No | Small DBs, manual review |
| Custom | -F c | .dump | Yes | Yes | Most production use cases |
| Directory | -F d | folder | Yes | Yes | Very large databases |
| Tar | -F t | .tar | No | No | Archive compatibility |
```bash
# Custom format (recommended for most cases)
pg_dump -F c -d myapp -f backup.dump

# Directory format for parallel operations
pg_dump -F d -d myapp -f backup_dir/
```
For databases under 500GB, the custom format (-F c) provides the best balance of compression, flexibility and restoration speed.
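To see why the custom format pays off at restore time, here is a minimal sketch of restoring the `backup.dump` file from the example above with pg_restore:

```bash
# Restore a custom-format dump into an existing database
pg_restore -d myapp backup.dump

# Custom and directory formats also support parallel restore
pg_restore -j 4 -d myapp backup.dump
```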
2. Parallel Dump with -j / --jobs
The jobs option enables parallel dumping by specifying the number of concurrent processes. This dramatically reduces backup time for large databases by dumping multiple tables simultaneously. Note that this option only works with the directory format (-F d).
```bash
# Dump using 4 parallel processes
pg_dump -F d -j 4 -d myapp -f backup_dir/
```
A good rule of thumb is to set the number of jobs to match your CPU cores, but leave at least one core free for the database server itself. For a server with 8 cores, -j 6 or -j 7 typically yields optimal performance without starving other processes.
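A sketch of the full round trip following that rule of thumb, assuming an 8-core server:

```bash
# Dump with 6 workers, leaving cores free for the database server itself
pg_dump -F d -j 6 -d myapp -f backup_dir/

# pg_restore accepts -j as well (custom and directory formats only)
pg_restore -j 6 -d myapp backup_dir/
```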
3. Table Selection with -t / --table
The table option lets you back up specific tables instead of the entire database. This is invaluable when you need to quickly back up critical tables before a risky migration or when creating targeted backups for testing. You can specify multiple tables by repeating the flag.
```bash
# Backup single table
pg_dump -d myapp -t users -f users_backup.sql

# Backup multiple tables
pg_dump -d myapp -t users -t orders -t products -f critical_tables.sql

# Use wildcards for pattern matching
pg_dump -d myapp -t 'public.user_*' -f user_tables.sql
```
Table names are case-sensitive and should include the schema prefix when working with non-public schemas (e.g., -t sales.orders).
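If your schema uses mixed-case identifiers, the quoting gets fiddly; a sketch with a hypothetical `UserAccounts` table:

```bash
# Mixed-case names must be double-quoted, and the quotes protected from the shell
pg_dump -d myapp -t '"UserAccounts"' -f user_accounts.sql

# Non-public schema: include the schema prefix
pg_dump -d myapp -t sales.orders -f sales_orders.sql
```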
4. Table Exclusion with -T / --exclude-table
The exclude-table option is the inverse of -t — it backs up everything except the specified tables. This is perfect for skipping large log tables, session data or temporary tables that don't need to be preserved. Like -t, you can repeat this flag for multiple exclusions.
```bash
# Exclude log and session tables
pg_dump -d myapp -T logs -T sessions -T temp_data -f backup.sql

# Exclude tables matching a pattern
pg_dump -d myapp -T 'public.*_log' -T 'public.*_temp' -f backup.sql
```
Using -T in your regular backups can reduce backup size by 50% or more when your database contains large audit or logging tables that can be regenerated or aren't critical for recovery.
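A related option worth knowing: if you want those log tables to exist after a restore but come back empty, pg_dump's --exclude-table-data keeps the table definitions while skipping their rows:

```bash
# Table structure is dumped; the rows are not
pg_dump -d myapp --exclude-table-data 'public.*_log' -f backup.sql
```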
5. Schema-Only with --schema-only
The schema-only option exports just the database structure — tables, indexes, constraints, functions and triggers — without any row data. This is essential for version control, documentation, creating empty database replicas or comparing schema changes between environments.
```bash
# Export complete schema
pg_dump -d myapp --schema-only -f schema.sql

# Schema for specific tables only
pg_dump -d myapp --schema-only -t users -t orders -f tables_schema.sql
```
Schema-only backups are typically just a few hundred kilobytes regardless of database size, making them perfect for storing in Git repositories alongside your application code.
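A sketch of that workflow, assuming a `db/` directory inside your application repository (the path is illustrative):

```bash
# Regenerate the schema snapshot and commit it alongside the code
pg_dump -d myapp --schema-only -O -f db/schema.sql
git add db/schema.sql
git commit -m "Update database schema snapshot"
```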
6. Data-Only with --data-only
The data-only option exports just the row data without any schema definitions. This is useful when you need to refresh data in an existing database structure, migrate data between environments with identical schemas or create data snapshots for testing.
```bash
# Export all data without schema
pg_dump -d myapp --data-only -f data.sql

# Data-only with INSERT statements (more portable)
pg_dump -d myapp --data-only --inserts -f data_inserts.sql
```
When using --data-only, the target database must already have the correct schema in place, including all tables, constraints and sequences.
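A typical refresh sequence might look like the following, with `myapp_staging` as a hypothetical target. The --disable-triggers flag is relevant only for data-only dumps; it disables triggers (including foreign-key checks) around the load, though it generally requires superuser rights at restore time:

```bash
# 1. Recreate the structure in the target database (schema.sql from section 5)
psql -d myapp_staging -f schema.sql

# 2. Dump and load the data; triggers are disabled around the load
pg_dump -d myapp --data-only --disable-triggers -f data.sql
psql -d myapp_staging -f data.sql
```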
7. Compression Level with -Z / --compress
The compress option controls the compression level for custom and directory formats, ranging from 0 (no compression) to 9 (maximum compression). Higher levels produce smaller files but take longer to create. This option directly impacts both backup storage costs and backup duration.
| Level | Speed | Size Reduction | Best For |
|---|---|---|---|
| 0 | Fastest | None | Testing, fast recovery priority |
| 1-3 | Fast | Moderate | Daily backups, balanced approach |
| 4-6 | Medium | Good | Standard production backups |
| 7-9 | Slow | Maximum | Long-term archival, storage-limited |
```bash
# Maximum compression for archival
pg_dump -F c -Z 9 -d myapp -f backup.dump

# Fast compression for frequent backups
pg_dump -F c -Z 1 -d myapp -f backup.dump
```
For most production scenarios, -Z 6 (the default) offers an excellent compression-to-speed ratio.
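Compression behavior depends heavily on your data, so it is worth measuring rather than guessing; a rough benchmark loop:

```bash
# Compare duration and file size at a few representative levels
for level in 0 1 6 9; do
  time pg_dump -F c -Z "$level" -d myapp -f "backup_z${level}.dump"
  ls -lh "backup_z${level}.dump"
done
```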
8. Clean Objects with -c / --clean
The clean option adds DROP statements before CREATE statements in the backup file. This ensures that restoration to an existing database replaces objects rather than failing on conflicts. It's particularly useful for refreshing staging or development environments from production backups.
```bash
# Backup with DROP statements
pg_dump -d myapp -c -f backup.sql

# Combine with --if-exists to avoid errors on missing objects
pg_dump -d myapp -c --if-exists -f backup.sql
```
Always pair -c with --if-exists when restoring to databases that might not have all objects — this prevents errors when DROP statements target non-existent objects.
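Note that per the pg_dump docs, -c is ignored when writing the custom or directory formats; for those, the clean behavior is requested at restore time instead:

```bash
# Dump normally, then ask pg_restore to drop-and-recreate on restore
pg_dump -F c -d myapp -f backup.dump
pg_restore -c --if-exists -d myapp backup.dump
```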
9. Create Database with -C / --create
The create option includes a CREATE DATABASE statement and connection command in the backup. This makes the backup fully self-contained — you can restore it without first creating the target database. The backup will also include database-level settings like encoding and collation.
```bash
# Include CREATE DATABASE statement
pg_dump -d myapp -C -f backup.sql

# Full backup with both clean and create
pg_dump -d myapp -C -c --if-exists -f backup.sql
```
When restoring a backup made with -C, connect to a maintenance database like postgres rather than the target database, since the backup will create and switch to the database automatically.
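In practice the restore looks like this; the plain-format script is fed to psql, and pg_restore offers an equivalent -C flag for archive formats:

```bash
# Plain format: connect to a maintenance DB; the script creates myapp itself
psql -d postgres -f backup.sql

# Archive format: pg_restore -C likewise starts from a maintenance DB
pg_restore -C -d postgres backup.dump
```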
10. No Owner with -O / --no-owner
The no-owner option omits commands that set object ownership in the backup. This is essential when restoring to a database where the original owner doesn't exist or when you want the restoring user to own all objects. It prevents restoration failures due to missing roles.
```bash
# Backup without ownership information
pg_dump -d myapp -O -f backup.sql

# Combine with --no-privileges for maximum portability
pg_dump -d myapp -O --no-privileges -f backup.sql
```
Using -O along with --no-privileges creates the most portable backup that can be restored to any PostgreSQL instance regardless of its user configuration.
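One caveat from the pg_dump docs: -O only affects plain-text output; with archive formats, ownership handling is decided at restore time. A sketch with a hypothetical `app_user` role:

```bash
# Plain format: whoever runs the script ends up owning the objects
psql -U app_user -d myapp -f backup.sql

# Archive format: request no-owner/no-privileges at restore time instead
pg_restore -O -x -d myapp backup.dump
```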
A Modern Alternative: Postgresus
While mastering pg_dump options gives you fine-grained control over backups, managing these options across multiple databases and schedules becomes complex quickly. Postgresus is a popular PostgreSQL backup tool suited to both individuals and enterprise teams. It uses pg_dump under the hood but provides a clean web interface for scheduling, automatic retention, multi-destination storage (S3, Google Drive, Dropbox, NAS), AES-256-GCM encryption and instant notifications, all without writing scripts. Where pgBackRest targets large enterprises with dedicated DBAs and databases over 500GB, Postgresus covers the majority of use cases with far less complexity.
Quick Reference: Option Combinations
Knowing individual options is valuable, but combining them effectively is where expertise shows. Here are battle-tested combinations for common scenarios:
- Production daily backup: `pg_dump -F c -Z 6 -d myapp -f backup.dump`
- Fast development refresh: `pg_dump -F c -Z 1 -d myapp -f dev.dump`, restored with `pg_restore -c --if-exists -O -d myapp dev.dump` (with archive formats, clean and no-owner take effect at restore time)
- Portable migration backup: `pg_dump -F p -C -c --if-exists -O --no-privileges -d myapp -f migrate.sql`
- Large database parallel dump: `pg_dump -F d -j 4 -Z 6 -d myapp -f backup_dir/`
- Schema versioning: `pg_dump -d myapp --schema-only -O -f schema.sql`
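Once you have settled on a combination, automating it is usually a one-line cron entry; a sketch assuming backups land in /var/backups and that credentials come from peer auth or ~/.pgpass (note that % must be escaped in crontab):

```bash
# Hypothetical crontab line: daily 02:00 custom-format backup with a dated name
0 2 * * * pg_dump -F c -Z 6 -d myapp -f /var/backups/myapp_$(date +\%F).dump
```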
Conclusion
Mastering these 10 pg_dump options — format, parallel jobs, table selection, exclusion, schema-only, data-only, compression, clean, create and no-owner — covers 95% of backup scenarios you'll encounter. The key is matching the right combination of options to your specific use case: fast parallel dumps for large databases, portable options for migrations and targeted backups for critical tables. Whether you run pg_dump directly or through an automated solution like Postgresus, understanding these options ensures your backups are efficient, reliable and ready when disaster strikes.
