Grigory Pshekovich

How to Back Up a PostgreSQL Database Using pg_dump

PostgreSQL is one of the most reliable and feature-rich open-source databases, powering everything from small projects to enterprise applications. However, even the most robust database needs a solid backup strategy. The pg_dump utility is PostgreSQL's built-in tool for creating logical backups, and understanding how to use it effectively is essential for any developer or database administrator.

What Is pg_dump?

pg_dump is a command-line utility that comes bundled with PostgreSQL. It creates a consistent snapshot of your database at a specific point in time, exporting the data and schema into a file that can be used for restoration. Unlike physical backups that copy raw data files, pg_dump creates logical backups — SQL statements or archive files that represent your database structure and contents.

The utility is particularly valuable because it works while the database is online and doesn't block other users from accessing the data. This makes it suitable for production environments where downtime is not an option.

Basic pg_dump Syntax and Usage

The fundamental syntax for pg_dump is straightforward:

pg_dump -h hostname -p port -U username -d database_name > backup.sql

Parameter  Description                  Example
-h         Database host address        localhost or 192.168.1.100
-p         Port number                  5432 (default)
-U         Username for authentication  postgres
-d         Database name to back up     myapp_production
-F         Output format (p, c, d, t)   -F c for custom format
-f         Output file path             -f /backups/mydb.dump

To create a simple SQL backup, run:

pg_dump -h localhost -U postgres -d myapp_production > backup_2024.sql

For a compressed custom-format backup (recommended for larger databases):

pg_dump -h localhost -U postgres -F c -d myapp_production -f backup_2024.dump

The custom format (-F c) provides compression and allows selective restoration of specific tables or schemas.
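
One practical consequence: you can see exactly what a custom-format archive contains before touching any database, using pg_restore's -l flag:

# Print the archive's table of contents without restoring anything
pg_restore -l backup_2024.dump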

Output Formats Explained

pg_dump supports four output formats, each with distinct advantages:

Format     Flag  Extension  Compression  Parallel Restore  Best For
Plain SQL  -F p  .sql       No           No                Small DBs, manual review
Custom     -F c  .dump      Yes          Yes               Most production use cases
Directory  -F d  folder     Yes          Yes               Very large databases
Tar        -F t  .tar       No           No                Compatibility needs

For most scenarios, the custom format offers the best balance of compression, flexibility and restoration speed.
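
If you do need the directory format for a very large database, it also supports dumping with parallel worker jobs via -j; the job count and output path below are illustrative:

# Directory-format dump using 4 parallel worker jobs
pg_dump -h localhost -U postgres -F d -j 4 -d myapp_production -f /backups/myapp_dir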

Common pg_dump Examples

Back up a single table:

pg_dump -h localhost -U postgres -d myapp -t users > users_table.sql

Back up the schema only (no data):

pg_dump -h localhost -U postgres -d myapp --schema-only > schema.sql

Back up the data only (no schema):

pg_dump -h localhost -U postgres -d myapp --data-only > data.sql

Exclude specific tables:

pg_dump -h localhost -U postgres -d myapp --exclude-table=logs --exclude-table=sessions > backup.sql

Back up with compression:

pg_dump -h localhost -U postgres -d myapp | gzip > backup.sql.gz

These commands cover the majority of backup scenarios you'll encounter in day-to-day operations.
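
All of these commands prompt for a password interactively. For scripts and cron jobs, PostgreSQL client tools read credentials from a ~/.pgpass file (one hostname:port:database:username:password entry per line), which must not be readable by other users:

# Add a credentials entry (placeholder password) and lock down permissions
echo 'localhost:5432:myapp_production:postgres:your_password_here' >> ~/.pgpass
chmod 600 ~/.pgpass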

Restoring from pg_dump Backups

Restoration depends on the format you used during backup. For plain SQL files:

psql -h localhost -U postgres -d target_database < backup.sql
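
If the plain dump was compressed with gzip (as in the compression example earlier), you can stream it straight back into psql without decompressing to disk:

# Restore a gzipped plain-SQL dump via a pipe
gunzip -c backup.sql.gz | psql -h localhost -U postgres -d target_database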

For custom or directory formats, use pg_restore:

pg_restore -h localhost -U postgres -d target_database backup.dump
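
pg_restore also supports parallel restoration with -j (custom and directory formats only) and can drop existing objects first via --clean --if-exists; the job count here is illustrative:

# Drop existing objects, then restore with 4 parallel jobs
pg_restore -h localhost -U postgres -d target_database --clean --if-exists -j 4 backup.dump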

To restore specific tables from a custom format backup:

pg_restore -h localhost -U postgres -d target_database -t users backup.dump

Always test your restoration process on a non-production environment before relying on backups for disaster recovery.
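
A minimal way to do that is to restore into a throwaway database and run a quick sanity check; this sketch assumes the backup contains a users table:

# Restore into a scratch database and verify before trusting the backup
createdb -h localhost -U postgres restore_test
pg_restore -h localhost -U postgres -d restore_test backup.dump
psql -h localhost -U postgres -d restore_test -c "SELECT count(*) FROM users;"
dropdb -h localhost -U postgres restore_test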

Limitations of Manual pg_dump Scripts

While pg_dump is powerful, managing backups manually comes with significant challenges:

  • No built-in scheduling — you must configure cron jobs or Task Scheduler yourself
  • No automatic retention — old backups accumulate unless you write cleanup scripts
  • No notifications — failures go unnoticed without custom monitoring
  • No encryption — backup files are stored in plain format by default
  • No cloud storage integration — uploading to S3, Google Drive or other destinations requires additional scripting
  • No web interface — everything happens via command line

For teams and production environments, these limitations often lead to forgotten backups, storage issues or undetected failures that only surface during a crisis.
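
In practice, a DIY setup ends up looking something like the sketch below: cron handles scheduling, a find command handles retention, and failures are merely logged. Every path, the retention window and the log-based "alerting" here are illustrative assumptions, not a standard PostgreSQL tool:

#!/usr/bin/env bash
# Illustrative nightly backup wrapper -- paths and retention are assumptions.
# Credentials are read from ~/.pgpass (see above).
set -euo pipefail

BACKUP_DIR=/backups
DB_NAME=myapp_production
FILE="$BACKUP_DIR/${DB_NAME}_$(date +%Y-%m-%d).dump"

if pg_dump -h localhost -U postgres -F c -d "$DB_NAME" -f "$FILE"; then
    # Crude retention: delete dumps older than 14 days
    find "$BACKUP_DIR" -name "${DB_NAME}_*.dump" -mtime +14 -delete
else
    # Without real monitoring, a failure only leaves a line in a log file
    echo "$(date): pg_dump failed for $DB_NAME" >> "$BACKUP_DIR/errors.log"
    exit 1
fi

# In crontab -e: run the wrapper every night at 02:00
0 2 * * * /usr/local/bin/pg_backup.sh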

A Better Alternative: Postgresus

For developers and teams who want the reliability of pg_dump without the operational overhead, Postgresus offers a modern, UI-driven approach to PostgreSQL backup. It uses pg_dump under the hood but wraps it with scheduling, notifications, multiple storage destinations (S3, Google Drive, Dropbox, NAS), AES-256-GCM encryption and a clean web interface — all deployable in under 2 minutes via Docker. Unlike pgBackRest, which targets large enterprises with dedicated DBAs and databases over 500GB, Postgresus is designed for the majority of use cases: individual developers, startups and teams managing databases up to hundreds of gigabytes who need robust backups without complexity.

Feature        pg_dump (manual)       Postgresus
Scheduling     Requires cron/scripts  Built-in (hourly to monthly)
Notifications  Manual setup           Slack, Telegram, Discord, Email
Cloud storage  Requires scripting     S3, Google Drive, Dropbox, NAS
Encryption     Not included           AES-256-GCM
Web UI         None                   Full dashboard
Restore        Command line           One-click restore
Team access    N/A                    Role-based permissions

Best Practices for pg_dump Backups

Regardless of whether you use pg_dump directly or through a tool like Postgresus, follow these practices:

  1. Test restorations regularly — a backup is only valuable if you can restore from it
  2. Store backups off-site — keep copies in a different location than your database server
  3. Use compression — custom format or gzip significantly reduces storage requirements
  4. Schedule during low-traffic periods — minimize impact on production performance
  5. Monitor backup success — set up alerts for failures
  6. Implement retention policies — automatically remove old backups to manage storage

These practices ensure your backup strategy remains reliable and sustainable over time.
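
As a concrete example of practice 2, a freshly created dump can be copied off-site immediately after creation; this sketch assumes the AWS CLI is installed and configured, and the bucket name and filename are hypothetical:

# Push the dump to an off-site S3 bucket right after creating it
aws s3 cp /backups/myapp_production_2024-01-15.dump s3://example-backups/postgres/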

Conclusion

pg_dump remains the foundational tool for PostgreSQL logical backups, offering flexibility and reliability that has stood the test of time. For simple, one-off backups or development environments, running pg_dump directly is perfectly adequate. However, for production systems, teams and anyone who values their time, automating the process with a dedicated backup solution eliminates the risks of manual management. Whether you choose to script your own solution or adopt a tool like Postgresus, the key is ensuring your backups are consistent, tested and ready when you need them most.
