Sometimes you don't need a full database backup — you just need one table. Whether you're about to run a risky migration, creating a test dataset, or recovering from an accidental DELETE, knowing how to back up and restore individual tables with pg_dump is an essential skill. This guide walks through the exact commands, options, and workflows for single-table operations in PostgreSQL.
## Why Back Up a Single Table?
Full database backups are essential for disaster recovery, but they're overkill for many day-to-day scenarios. Single-table backups are faster, smaller, and more targeted — perfect for surgical operations on your data. Common use cases include protecting critical tables before schema changes, creating lightweight test fixtures, migrating specific data between environments, and recovering from application-level data corruption.
| Use Case | Full Backup | Single-Table Backup |
|---|---|---|
| Disaster recovery | ✅ Required | ❌ Insufficient |
| Pre-migration safety | ⚠️ Slow | ✅ Fast and targeted |
| Test data creation | ⚠️ Overkill | ✅ Lightweight |
| Selective data migration | ⚠️ Complex filtering | ✅ Simple and direct |
| Accidental DELETE recovery | ⚠️ Restores everything | ✅ Restores only affected table |
Single-table backups complement your full backup strategy — they don't replace it.
## Backing Up a Single Table with pg_dump
The `-t` (or `--table`) flag tells pg_dump to export only the specified table. The command captures the table structure, data, indexes, constraints, and triggers — everything needed to recreate that table elsewhere.
```bash
# Basic single-table backup (plain SQL format)
pg_dump -d myapp -t users -f users_backup.sql

# Custom format for compression and flexible restore
pg_dump -d myapp -F c -t users -f users_backup.dump

# Include schema prefix for non-public schemas
pg_dump -d myapp -t sales.orders -f orders_backup.sql
```
The plain SQL format creates a human-readable file you can inspect and edit. The custom format (`-F c`) compresses automatically and supports selective restoration — choose based on your needs.
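One practical difference: a custom-format archive isn't human-readable, but pg_restore can tell you what's inside without touching any database. A quick sketch, using the `users_backup.dump` file created above:

```bash
# List the contents of a custom-format archive (table definition, data, indexes, ...)
pg_restore -l users_backup.dump

# Convert the archive back to a plain SQL script for inspection;
# with -f and no -d, pg_restore writes SQL to a file instead of restoring
pg_restore -f users_backup_inspect.sql users_backup.dump
```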
## Backing Up Multiple Tables
When you need several related tables — like users, orders, and order_items — repeat the -t flag for each table. This keeps related data together in a single backup file while still avoiding a full database dump.
```bash
# Multiple specific tables
pg_dump -d myapp -t users -t orders -t order_items -f related_tables.dump -F c

# Pattern matching with wildcards
pg_dump -d myapp -t 'public.order_*' -f order_tables.dump -F c
```
Pattern matching with wildcards is powerful for tables that share naming conventions. The pattern 'public.order_*' captures order_items, order_history, order_logs, and any other table starting with order_ in the public schema.
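Before dumping with a wildcard, it can be worth previewing exactly which tables the pattern matches — pg_dump's `-t` patterns follow the same rules as psql's `\d` commands, so psql can act as a dry run:

```bash
# Preview which tables the wildcard would capture
# (same pattern syntax as pg_dump's -t flag)
psql -d myapp -c '\dt public.order_*'
```

If the pattern matches nothing, pg_dump exits with an error rather than producing an empty dump, so a quick preview saves a confusing failure later.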
## Schema-Only vs Data-Only Backups
Sometimes you need just the structure or just the data. The `--schema-only` and `--data-only` flags give you precise control over what gets exported. Schema-only backups are perfect for version control and documentation, while data-only backups work when the target already has the correct table structure.
```bash
# Structure only (CREATE TABLE, indexes, constraints)
pg_dump -d myapp -t users --schema-only -f users_schema.sql

# Data only (INSERT statements)
pg_dump -d myapp -t users --data-only -f users_data.sql
```
Schema-only backups are typically just a few kilobytes regardless of table size — ideal for tracking database changes in Git alongside your application code.
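A minimal sketch of that Git workflow, assuming a `db/schema/` directory in your repository (a hypothetical path, adjust to your layout):

```bash
# Re-export the table structure into the repo and see what changed
# since the last commit (db/schema/ is a hypothetical directory)
pg_dump -d myapp -t users --schema-only -f db/schema/users.sql
git diff db/schema/users.sql
```

Run this after each migration and the diff becomes a readable changelog of your table's evolution.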
## Restoring a Single Table
Restoration depends on the backup format you used. Plain SQL files restore with psql, while custom format files require pg_restore. Both approaches support restoring to the original database or a completely different one.
```bash
# Restore plain SQL format
psql -d target_db -f users_backup.sql

# Restore custom format
pg_restore -d target_db users_backup.dump

# Restore to a different database with verbose output
pg_restore -d staging_db -v users_backup.dump
```
If the table already exists in the target database, the restore will error on the CREATE TABLE statement. Worse, a plain psql restore keeps going past errors by default, so the COPY that follows appends the dumped rows on top of the existing ones. Handle this with the approaches in the next section.
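Custom-format archives have one more trick: you can pull a single table out of a larger dump at restore time. A sketch, assuming a full-database archive named `full_backup.dump` (a hypothetical filename):

```bash
# Restore only the users table from a full custom-format dump
# (full_backup.dump is a hypothetical full-database archive)
pg_restore -d target_db -t users full_backup.dump

# Combine with --data-only if the table structure already exists
pg_restore -d target_db -t users --data-only full_backup.dump
```

This means a nightly full backup in custom format can double as a source for single-table recovery, even if you never made a table-specific dump.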
## Handling Existing Tables During Restore
When restoring to a database where the table already exists, you have two options: drop the existing table first, or restore only the data. Your choice depends on whether you want to preserve the current table structure or replace it entirely.
| Scenario | Approach | Command |
|---|---|---|
| Replace table completely | Use `-c` flag during backup | `pg_dump -d myapp -t users -c -f backup.sql` |
| Keep structure, replace data | Truncate, then restore data-only | `TRUNCATE users;` then `psql -f users_data.sql` |
| Merge data (append) | Restore data-only without truncate | `psql -f users_data.sql` |
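The middle row of the table can be sketched as a two-step workflow, assuming the data-only dump `users_data.sql` from the earlier section:

```bash
# Step 1: empty the existing table but keep its structure, indexes, and grants
# (add CASCADE if other tables reference users via foreign keys)
psql -d target_db -c "TRUNCATE TABLE users;"

# Step 2: load the saved rows back in
psql -d target_db -f users_data.sql
```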
```bash
# Backup with DROP statement included
pg_dump -d myapp -t users -c --if-exists -f users_backup.sql

# Restore will drop existing table first
psql -d target_db -f users_backup.sql
```
The `--if-exists` flag prevents errors when the DROP statement runs against a database where the table doesn't exist yet.
## Portable Backups Across Environments
Moving table data between development, staging, and production often fails due to different user configurations. The `-O` (`--no-owner`) and `--no-privileges` flags create portable backups that restore cleanly regardless of which users exist in the target database.
```bash
# Maximum portability
pg_dump -d production -t users -O --no-privileges -f users_portable.sql

# Portable with compression
pg_dump -d production -F c -t users -O --no-privileges -f users_portable.dump
```
Without these flags, restoration fails with errors like `role "prod_user" does not exist` — frustrating when you just need the data.
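If the dump was already made without these flags, a custom-format archive lets you apply the same options at restore time instead — pg_restore accepts `-O`/`--no-owner` and `-x`/`--no-privileges`:

```bash
# Strip ownership and GRANT/REVOKE commands while restoring
# the earlier custom-format archive
pg_restore -d staging_db -O -x users_backup.dump
```

Plain SQL dumps don't get this second chance, since psql simply executes the file as written — one more reason to favor custom format when you expect to move data between environments.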
## Common Pitfalls and Solutions
Single-table backups have a few gotchas that catch developers off guard. Foreign key constraints, sequences, and dependent objects can all cause restoration failures if not handled properly.
- Foreign key violations — Restore parent tables before child tables, or disable triggers temporarily with `--disable-triggers` during a data-only restore
- Sequence values not updated — After restoring, reset sequences with `SELECT setval('users_id_seq', (SELECT MAX(id) FROM users))`
- Missing dependent objects — If your table uses custom types or functions, back those up separately with `--schema-only` on the full database
Most issues stem from dependencies between objects. When in doubt, include related tables in your backup to ensure everything restores cleanly.
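The first two pitfalls can be sketched as concrete commands, assuming a serial `id` column on `users` backed by the sequence `users_id_seq`:

```bash
# Data-only restore with triggers (including foreign key checks) disabled;
# --disable-triggers requires superuser privileges on the target
pg_restore -d target_db --data-only --disable-triggers users_backup.dump

# Bring the owned sequence back in line with the restored rows;
# COALESCE guards against MAX(id) being NULL on an empty table
psql -d target_db -c \
  "SELECT setval('users_id_seq', COALESCE((SELECT MAX(id) FROM users), 1));"
```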
## A Simpler Alternative: Postgresus
While pg_dump gives you complete control over single-table operations, managing backups across multiple databases and schedules requires scripting and maintenance. Dedicated PostgreSQL backup tools such as Postgresus handle scheduling, retention, encryption, and multi-destination storage through a web interface, suitable for individuals and teams alike.
## Conclusion
Single-table backups with pg_dump are straightforward once you know the right flags: `-t` for table selection, `-c` for clean restores, `-O` for portability, and `--schema-only` or `--data-only` for targeted exports. These surgical backups save time before risky operations and simplify data movement between environments. Whether you run these commands manually, script them into your workflow, or use a dedicated backup tool, mastering single-table operations makes you more effective at protecting and managing your PostgreSQL data.