Docker has become the standard for running PostgreSQL in development and production environments. While containers provide consistency and portability, they introduce unique challenges for database backups. When a container is removed, all data inside it disappears unless it has been properly persisted. Understanding how to back up PostgreSQL running in Docker containers is essential for data safety and business continuity.
Understanding PostgreSQL Docker backup challenges
Running PostgreSQL in Docker containers changes how you approach backups compared to traditional installations. Containers are ephemeral by design, meaning they can be created and destroyed at any moment. The key difference is that you need to consider both the database files stored in Docker volumes and the container's lifecycle when planning your backup strategy.
Docker volumes persist data outside the container filesystem, making them the primary target for backups. However, you still need to ensure consistency when backing up active databases. Additionally, you must choose between backing up raw database files or using logical dumps, each with distinct trade-offs for Docker environments.
Network connectivity adds another layer of complexity. Your PostgreSQL container may be isolated in a Docker network, requiring specific configuration to allow backup tools to connect. Whether you run backup commands from inside the container, from the host, or from another container impacts your backup architecture significantly.
Method 1: Using pg_dump inside Docker containers
The most straightforward approach is running pg_dump directly inside your PostgreSQL container. This method works regardless of how your container is configured and provides consistent, logical backups that are portable across different PostgreSQL versions and platforms.
To create a backup using pg_dump inside a running container:
docker exec postgres-container pg_dump -U postgres mydatabase > backup.sql
For compressed backups that save disk space:
docker exec postgres-container pg_dump -U postgres -Fc mydatabase > backup.dump
To back up all databases in the container at once:
docker exec postgres-container pg_dumpall -U postgres > backup-all.sql
The main advantage of this approach is simplicity. You don't need to install PostgreSQL tools on your host system, and the backup process works the same way across different operating systems. Note that the commands above deliberately omit docker exec's -t flag: allocating a pseudo-TTY rewrites line endings in the output stream and can corrupt redirected dumps, especially custom-format ones. The main limitation is that the container must be running, so you cannot back up a stopped container with this technique.
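To restore these backups, feed them back through docker exec (the -i flag attaches stdin; the container and database names match the examples above):
docker exec -i postgres-container psql -U postgres mydatabase < backup.sql
docker exec -i postgres-container pg_restore -U postgres -d mydatabase < backup.dump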
Method 2: Using Databasus for PostgreSQL Docker backups
While pg_dump works well for manual backups, production environments need automated scheduling, storage management, encryption and notifications. PostgreSQL backup tools like Databasus provide a comprehensive solution designed specifically for containerized databases.
Databasus is a backup management tool that simplifies PostgreSQL backups in Docker environments. It handles scheduling, storage management, encryption, notifications and retention policies through a web interface. The tool runs as a Docker container itself, making it a natural fit for containerized infrastructure.
Installing Databasus with Docker
The simplest way to get started is using Docker run:
docker run -d \
--name databasus \
-p 4005:4005 \
-v ./databasus-data:/databasus-data \
--restart unless-stopped \
databasus/databasus:latest
For production environments, use Docker Compose:
services:
  databasus:
    container_name: databasus
    image: databasus/databasus:latest
    ports:
      - "4005:4005"
    volumes:
      - ./databasus-data:/databasus-data
    restart: unless-stopped
Start the service:
docker compose up -d
Creating your first PostgreSQL backup with Databasus
After installation, open your browser and navigate to http://localhost:4005. The setup process is straightforward:
Step 1: Add your PostgreSQL database
Click "New Database" and enter your PostgreSQL connection details:
- Host: Your PostgreSQL container name or IP address
- Port: 5432 (or your custom port)
- Database name: The database you want to back up
- Username: PostgreSQL user with read permissions
- Password: User password
If your PostgreSQL container is in the same Docker network as Databasus, use the container name as the host. Otherwise, ensure network connectivity between containers.
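If the two containers are not yet on a shared network, you can create one and attach both to it (the network and container names here are illustrative):
docker network create backup-net
docker network connect backup-net postgres-container
docker network connect backup-net databasus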
Step 2: Select storage destination
Choose where to store your backups:
- Local storage: Keep backups on the same server
- S3-compatible storage: AWS S3, Cloudflare R2, MinIO
- Google Drive: Cloud storage with Google account
- SFTP/FTP: Remote server storage
- Rclone: Support for 40+ storage providers
Databasus encrypts backups before uploading them to cloud storage, ensuring your data remains secure even if storage credentials are compromised.
Step 3: Configure backup schedule
Select your backup frequency:
- Hourly: For critical databases with frequent changes
- Daily: Most common choice for production databases
- Weekly: For less critical or slowly changing data
- Monthly: Archive backups for compliance
- Custom cron: Advanced scheduling with cron expressions
You can also set specific times for backups to run during low-traffic periods.
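For example, the custom cron expression 30 3 * * 0 runs a backup every Sunday at 3:30 AM, comfortably outside typical business hours.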
Step 4: Set up notifications (optional)
Configure alerts to know when backups succeed or fail:
- Email: SMTP notifications to your team
- Telegram: Instant messages to your phone
- Slack: Integration with team channels
- Discord: Notifications to Discord servers
- Webhooks: Custom integrations with other systems
Step 5: Create backup
Review your settings and click "Create Backup". Databasus will validate the connection, test storage access and schedule your first backup. You can also trigger manual backups at any time from the dashboard.
The dashboard provides backup history, success rates, storage usage and upcoming scheduled backups. You can restore backups directly from the interface or download backup files for manual restoration.
Method 3: Running pg_dump from host system
If you have PostgreSQL client tools installed on your host machine, you can connect to the container's exposed port and run pg_dump locally. This approach separates the backup process from the container lifecycle and provides more flexibility in scheduling and automation.
First, ensure your PostgreSQL container exposes port 5432:
docker run -d \
--name postgres-container \
-p 5432:5432 \
-e POSTGRES_PASSWORD=secretpassword \
-v postgres-data:/var/lib/postgresql/data \
postgres:16
Then run pg_dump from your host:
pg_dump -h localhost -p 5432 -U postgres mydatabase > backup.sql
This method works well when you have multiple PostgreSQL containers running on different ports. You can back up all of them with the same installed pg_dump version without entering each container. The downside is that you must maintain PostgreSQL client tools on your host system and keep their version compatible with your servers.
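As a sketch, assuming two containers published on ports 5432 and 5433 and credentials supplied via ~/.pgpass or the PGPASSWORD variable (ports and database name are illustrative), a host-side loop could look like this:
#!/bin/bash
# Dump the same database from several containers exposed on different host ports
for port in 5432 5433; do
  pg_dump -h localhost -p "$port" -U postgres mydatabase > "backup_port_${port}.sql"
done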
Method 4: Using a separate backup container
A more Docker-native approach is running backups from a separate container with PostgreSQL tools pre-installed. This method keeps your environment completely containerized and doesn't require installing anything on the host system.
Here's how to back up using a separate container (PGPASSWORD is passed so pg_dump can authenticate non-interactively; the value matches the password from the earlier example):
docker run --rm \
--network container:postgres-container \
-e PGPASSWORD=secretpassword \
postgres:16 \
pg_dump -h localhost -U postgres mydatabase > backup.sql
For automated backups, you can create a dedicated backup container that runs on a schedule. Create a simple backup script and package it in a Docker image:
#!/bin/bash
set -euo pipefail

# PGPASSWORD (or a mounted .pgpass file) must be supplied via the container environment
BACKUP_DIR="/backups"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
pg_dump -h postgres-container -U postgres mydatabase > "$BACKUP_DIR/backup_$TIMESTAMP.sql"
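One way to package the script is a small Dockerfile based on the official postgres image, so pg_dump is already available (the paths are assumptions):
FROM postgres:16
COPY backup-script.sh /usr/local/bin/backup-script.sh
RUN chmod +x /usr/local/bin/backup-script.sh
ENTRYPOINT ["/usr/local/bin/backup-script.sh"]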
This approach provides excellent isolation and reproducibility. Your backup process becomes part of your infrastructure as code, making it easy to version control and deploy consistently across environments.
Method 5: Docker volume backups
Instead of using logical dumps, you can back up the underlying Docker volume that stores the PostgreSQL data files. This method creates an exact copy of the database files and can be faster for very large databases.
To back up a Docker volume, first stop the PostgreSQL container to ensure data consistency:
docker stop postgres-container
Then create a volume backup using a helper container:
docker run --rm \
-v postgres-data:/source \
-v $(pwd):/backup \
ubuntu tar czf /backup/postgres-volume-backup.tar.gz -C /source .
Start the container again:
docker start postgres-container
To restore from a volume backup:
docker run --rm \
-v postgres-data:/target \
-v $(pwd):/backup \
ubuntu tar xzf /backup/postgres-volume-backup.tar.gz -C /target
Volume backups are faster than pg_dump for large databases but have limitations. You must stop the database to ensure consistency, and the backups are less portable between different PostgreSQL versions or architectures. Additionally, volume backups consume more storage space compared to compressed pg_dump outputs.
Comparison of PostgreSQL Docker backup methods
| Method | Pros | Cons | Best for |
|---|---|---|---|
| pg_dump inside container | Simple, no extra setup, version-matched tools | Requires running container, less flexible scheduling | Quick manual backups, small databases |
| Databasus | Automated scheduling, web UI, encryption, notifications, multi-storage support | Requires additional container, learning curve for first setup | Production environments, teams, enterprise |
| pg_dump from host | Flexible scheduling, multi-container support | Needs PostgreSQL tools on host, version compatibility issues | Development environments, multiple databases |
| Separate backup container | Fully containerized, reproducible, no host dependencies | More complex setup, network configuration required | Infrastructure as code, custom automation |
| Volume backups | Fast for large databases, complete file copy | Requires container stop, less portable, larger size | Cold backups, disaster recovery |
Docker Compose backup strategies
When using Docker Compose for PostgreSQL, you can integrate backup strategies directly into your compose configuration. This approach makes backups part of your application stack and ensures they're deployed consistently with your database.
Example Docker Compose setup with automatic backups:
services:
  postgres:
    image: postgres:16
    container_name: postgres-db
    environment:
      POSTGRES_PASSWORD: secretpassword
      POSTGRES_DB: mydatabase
    volumes:
      - postgres-data:/var/lib/postgresql/data
    networks:
      - db-network

  backup:
    image: postgres:16
    container_name: postgres-backup
    depends_on:
      - postgres
    environment:
      POSTGRES_HOST: postgres
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: secretpassword
    volumes:
      - ./backups:/backups
      - ./backup-script.sh:/backup-script.sh
    networks:
      - db-network

volumes:
  postgres-data:

networks:
  db-network:
This configuration creates a dedicated backup service that can run scheduled backups. The backup container shares the same network as PostgreSQL, enabling seamless connectivity without exposing ports to the host.
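The compose file above mounts a backup-script.sh that isn't shown; a minimal sketch, assuming a simple sleep-based loop rather than cron and a hardcoded database name, could look like this:
#!/bin/bash
# Uses the environment variables defined for the backup service in docker-compose.yml
export PGPASSWORD="$POSTGRES_PASSWORD"

# Take a timestamped dump once a day; the database name matches POSTGRES_DB above
while true; do
  TIMESTAMP=$(date +%Y%m%d_%H%M%S)
  pg_dump -h "$POSTGRES_HOST" -U "$POSTGRES_USER" mydatabase > "/backups/backup_$TIMESTAMP.sql"
  sleep 86400
done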
Automating PostgreSQL Docker backups with cron
For production systems, manual backups are not sufficient. You need automated, scheduled backups that run without human intervention. The most common approach is using cron jobs on the host system to trigger Docker backup commands.
Create a backup script:
#!/bin/bash
set -euo pipefail

BACKUP_DIR="/var/backups/postgresql"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="$BACKUP_DIR/backup_$TIMESTAMP.sql.gz"

mkdir -p "$BACKUP_DIR"
docker exec postgres-container pg_dump -U postgres mydatabase | gzip > "$BACKUP_FILE"

# Keep only the last 7 days of backups
find "$BACKUP_DIR" -name "backup_*.sql.gz" -mtime +7 -delete
Add to crontab to run daily at 2 AM:
0 2 * * * /usr/local/bin/docker-postgres-backup.sh
This automated approach ensures regular backups without manual intervention. However, you still need to monitor backup success and manage retention policies. For enterprise environments, you may want more sophisticated backup orchestration.
Best practices for PostgreSQL Docker backups
Implementing a solid backup strategy requires more than just choosing a method. Following best practices ensures your backups are reliable, secure and actually recoverable when needed.
Test your backups regularly
The worst time to discover backup problems is during a disaster. Test restore procedures monthly by creating a separate PostgreSQL container and restoring backups into it. Verify that data integrity is maintained and all critical data is present. Automated testing should be part of your backup workflow, not an afterthought.
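A minimal restore test, assuming a plain-SQL dump and illustrative container and database names, might look like this:
# Spin up a throwaway PostgreSQL container for the restore test
docker run -d --name restore-test -e POSTGRES_PASSWORD=testpassword postgres:16

# Wait for the server to accept connections, then create the target database and restore
sleep 10
docker exec restore-test createdb -U postgres mydatabase
docker exec -i restore-test psql -U postgres mydatabase < backup.sql

# Run a sanity query, then clean up
docker exec restore-test psql -U postgres -d mydatabase -c "SELECT count(*) FROM pg_tables;"
docker rm -f restore-test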
Implement the 3-2-1 backup rule
Keep three copies of your data: the original database, a local backup and an off-site backup. Store backups on two different storage types, such as local disk and cloud storage. Maintain one copy off-site to protect against physical disasters like fire or flood. This strategy ensures you can recover data even if multiple systems fail simultaneously.
Encrypt sensitive data
Database backups often contain sensitive information like user credentials, personal data and business secrets. Always encrypt backups before storing them, especially in cloud storage. Use strong encryption like AES-256 and store encryption keys separately from backups. Databasus handles encryption automatically, but manual backup scripts should implement encryption explicitly.
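As an illustration, a dump can be encrypted with symmetric AES-256 using GPG before it is written to disk (the passphrase file path is an assumption; in practice load it from a secret store):
# Encrypt the compressed dump with AES-256; the passphrase is read from a root-only file
docker exec postgres-container pg_dump -U postgres mydatabase | gzip | \
  gpg --batch --yes --pinentry-mode loopback --passphrase-file /root/.backup-passphrase \
  --symmetric --cipher-algo AES256 -o backup.sql.gz.gpg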
Backup retention and storage management
Keeping every backup forever is neither practical nor cost-effective. A well-designed retention policy balances storage costs with recovery needs. Different backup types serve different purposes and should have different retention periods.
| Backup frequency | Retention period | Purpose |
|---|---|---|
| Hourly | 24-48 hours | Recent data recovery, quick rollback |
| Daily | 7-30 days | Common restore scenarios, weekly issues |
| Weekly | 3-6 months | Monthly reporting, longer-term recovery |
| Monthly | 1-7 years | Compliance, annual audits, legal requirements |
Implement automatic cleanup to remove old backups and prevent storage from filling up. Your backup scripts should include retention logic, or use a backup management tool that handles this automatically. Monitor storage usage and set alerts when space runs low.
Monitoring and alerting for Docker backups
Backups fail silently more often than you might expect. Network issues, disk space problems, permission errors and Docker container restarts can all cause backup failures. Without monitoring, you might discover missing backups only when it is too late to recover.
Set up alerts for backup failures that trigger immediately when something goes wrong. Monitor backup duration to detect performance degradation or growing database size. Track backup file sizes to identify anomalies that might indicate incomplete backups. Use health checks to verify backup systems are running and accessible.
For automated monitoring, consider sending backup completion status to external monitoring services. Tools like Healthchecks.io, UptimeRobot or Prometheus can track backup job success and alert you when scheduled backups don't run. This external monitoring provides an additional safety layer independent of your backup system.
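For example, a cron-driven backup script can ping a Healthchecks.io check only when the dump pipeline succeeds (the check UUID in the URL is a placeholder):
# Ping the monitoring service only if the backup pipeline completed successfully
docker exec postgres-container pg_dump -U postgres mydatabase | gzip > "$BACKUP_FILE" \
  && curl -fsS --retry 3 https://hc-ping.com/your-check-uuid > /dev/null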
Security considerations for PostgreSQL Docker backups
Database backups are high-value targets for attackers because they contain complete copies of sensitive data. Securing backups is just as important as securing the production database itself. Docker environments add specific security considerations that traditional backups don't face.
Restrict network access to PostgreSQL containers. Don't expose port 5432 to the public internet unless absolutely necessary. Use Docker networks to isolate database containers from untrusted networks. If backups run from external systems, use SSH tunnels or VPNs instead of direct database exposure.
Store backup credentials securely. Never hardcode passwords in Docker Compose files or backup scripts. Use Docker secrets for Swarm deployments or environment variables loaded from secure sources. Rotate database passwords regularly and ensure backup configurations are updated accordingly.
Limit database user permissions. Create dedicated backup users with read-only access rather than using superuser accounts. This principle of least privilege prevents a compromised backup pipeline from modifying or deleting data. PostgreSQL's fine-grained permission system makes it straightforward to create backup-specific roles.
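On PostgreSQL 14 and later, a dedicated read-only backup role can be created along these lines (role name and password are placeholders):
-- Login role for backups with read-only access to all tables
CREATE ROLE backup_user LOGIN PASSWORD 'choose-a-strong-password';
GRANT pg_read_all_data TO backup_user;
GRANT CONNECT ON DATABASE mydatabase TO backup_user;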
Conclusion
Backing up PostgreSQL in Docker containers requires understanding both PostgreSQL backup tools and Docker's architecture. You can use pg_dump inside containers for simplicity, run backups from the host for flexibility, use separate backup containers for better isolation, or perform volume backups for speed. Each method has trade-offs between convenience, performance and portability.
For production systems, automated scheduling, retention policies, encryption and monitoring are essential. While custom scripts can implement these features, specialized backup tools like Databasus provide comprehensive solutions designed for containerized databases. Choose the approach that matches your team's skills, infrastructure complexity and recovery requirements.
The most important factor is testing your backups regularly. A backup strategy is only as good as your ability to restore data when needed. Document your restore procedures, practice them regularly and verify that your team knows how to execute recovery in high-pressure situations.
