In modern production environments, database replication and automated cloud backups are essential for ensuring high availability, fault tolerance, and disaster recovery. In this blog, I’ll walk through a step-by-step approach to set up a PostgreSQL replication system using Docker, create automated backups, and upload them to cloud storage using Rclone.
This guide uses a dummy project structure to maintain confidentiality while illustrating a real-world implementation.
Why This Setup is Useful
Imagine a growing SaaS application that cannot afford downtime. You want to:
- Ensure continuous replication of your main database to multiple secondary databases.
- Automate daily backups to prevent data loss.
- Securely store backups on cloud storage like OneDrive, Google Drive, or S3.
- Have a system where restoring the database is straightforward if Docker containers fail.
This approach is ideal for:
- Multi-environment deployments (staging, production).
- Teams managing sensitive data that requires offsite backups.
- Applications where downtime translates to business loss.
Project Structure
project_root/
├── app/
│ └── some_module/
├── backups/
├── backup.sh
├── backup_to_cloud.sh
├── db/
│ ├── parent/
│ │ ├── postgresql.conf
│ │ └── pg_hba.conf
│ ├── child1/
│ │ └── postgresql.conf
│ ├── child2/
│ │ └── postgresql.conf
│ └── child3/
│ └── postgresql.conf
├── db_backups/
├── docker-init-scripts/
├── entrypoint.sh
├── full_backup.sql.gz
├── backup.sql
├── postgresql.conf
├── test_cloud_backup.sh
├── manage.py
├── docker-compose.yml
└── requirements.txt
Explanation:
- db/childX: Config files for replicated child databases.
- backups/: Local storage for .sql or .dump files.
- Backup scripts are scheduled with cron to run periodically (see Step 3).
- backup.sh: Handles local database backup.
- backup_to_cloud.sh: Uploads backups to cloud storage via Rclone.
Step 1: Setting Up Docker Containers for PostgreSQL Replication
We set up one parent database and three child databases. Each child replicates from the parent to ensure redundancy.
services:
  postgres_parent:
    image: postgres:15
    container_name: postgres_parent
    restart: always
    environment:
      POSTGRES_DB: mydb
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: admin
    volumes:
      - postgres_parent_data:/var/lib/postgresql/data
      - ./db/parent/postgresql.conf:/etc/postgresql/postgresql.conf
      - ./db/parent/pg_hba.conf:/etc/postgresql/pg_hba.conf
    command: postgres -c config_file=/etc/postgresql/postgresql.conf
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "postgres"]
      interval: 10s
      retries: 5
      timeout: 5s

  postgres_child1:
    image: postgres:15
    container_name: postgres_child1
    restart: always
    environment:
      POSTGRES_DB: mydb
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: admin
      PARENT_HOST: postgres_parent
    volumes:
      - postgres_child1_data:/var/lib/postgresql/data
      - ./db/child1/postgresql.conf:/etc/postgresql/postgresql.conf
    command: postgres -c config_file=/etc/postgresql/postgresql.conf
    depends_on:
      postgres_parent:
        condition: service_healthy

  postgres_child2:
    image: postgres:15
    container_name: postgres_child2
    restart: always
    environment:
      POSTGRES_DB: mydb
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: admin
      PARENT_HOST: postgres_parent
    volumes:
      - postgres_child2_data:/var/lib/postgresql/data
      - ./db/child2/postgresql.conf:/etc/postgresql/postgresql.conf
    command: postgres -c config_file=/etc/postgresql/postgresql.conf
    depends_on:
      postgres_parent:
        condition: service_healthy

  postgres_child3:
    image: postgres:15
    container_name: postgres_child3
    restart: always
    environment:
      POSTGRES_DB: mydb
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: admin
      PARENT_HOST: postgres_parent
    volumes:
      - postgres_child3_data:/var/lib/postgresql/data
      - ./db/child3/postgresql.conf:/etc/postgresql/postgresql.conf
    command: postgres -c config_file=/etc/postgresql/postgresql.conf
    depends_on:
      postgres_parent:
        condition: service_healthy

volumes:
  postgres_parent_data:
  postgres_child1_data:
  postgres_child2_data:
  postgres_child3_data:
Key Points:
- Each child container depends on the parent.
- Custom PostgreSQL configs are mounted to ensure proper replication settings (a sketch of what these typically contain follows below).
- The parent's healthcheck, combined with depends_on: condition: service_healthy, ensures the parent is accepting connections before the children start.
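The replication-specific settings live in the mounted config files, which this post doesn't reproduce. As a rough sketch of what they typically contain (values are illustrative, and reusing the postgres superuser for replication is an assumption made to keep the example short):
# db/parent/postgresql.conf (excerpt)
listen_addresses = '*'
wal_level = replica        # emit enough WAL for streaming replication
max_wal_senders = 10       # allow all three children (plus headroom) to connect
wal_keep_size = 256MB      # retain WAL so a lagging child can catch up

# db/parent/pg_hba.conf (excerpt) -- tighten the address range in production
host    replication    postgres    0.0.0.0/0    scram-sha-256

# db/childX/postgresql.conf (excerpt)
hot_standby = on           # allow read-only queries while replaying WAL
Each child also has to be initialized as a standby before its first start, typically by cloning the parent with pg_basebackup; this is the kind of step the entrypoint.sh in the tree would handle (the real script isn't shown here):
# sketch: run once against an empty child data directory
pg_basebackup -h "$PARENT_HOST" -U postgres -D /var/lib/postgresql/data -R -X stream
# -R writes standby.signal and primary_conninfo so the child starts as a streaming replica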
Step 2: Local Database Backups
To safeguard against data loss, we create daily backups.
backup.sh Example:
#!/bin/bash
# Assumes the parent database is reachable from the host (e.g., its port is published);
# otherwise, wrap the pg_dump call in `docker exec postgres_parent ...`.
BACKUP_DIR="./backups"
DATE=$(date +"%Y-%m-%d_%H-%M-%S")
DB_NAME="mydb"
DB_USER="postgres"   # avoid overriding the shell's built-in USER variable
mkdir -p "$BACKUP_DIR"
pg_dump -U "$DB_USER" "$DB_NAME" > "$BACKUP_DIR/${DB_NAME}_backup_$DATE.sql"
echo "Backup created at $BACKUP_DIR/${DB_NAME}_backup_$DATE.sql"
- Stores timestamped .sql backups in backups/.
- Can be triggered manually or via cron.
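The project tree above also lists a full_backup.sql.gz, so a compressed dump is likely kept as well. A minimal sketch of that variant, reusing the variables from backup.sh (the filename is illustrative):
# compressed variant: pipe the dump through gzip to save local disk space
pg_dump -U "$DB_USER" "$DB_NAME" | gzip > "$BACKUP_DIR/${DB_NAME}_backup_$DATE.sql.gz"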
Step 3: Automating Backups with Cron
Set up a cron job to run the backup script daily.
# crontab -e
0 2 * * * /path/to/my_project/backup.sh >> /path/to/my_project/logs/backup.log 2>&1
- Runs every day at 2 AM.
- Logs are stored for auditing purposes.
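To keep the local backups/ directory from growing without bound, a retention cleanup can be scheduled the same way. The 7-day window below is an arbitrary choice for illustration, not a value from the original setup:
# crontab -e
# delete local .sql backups older than 7 days, every day at 3 AM
0 3 * * * find /path/to/my_project/backups -name "*.sql" -mtime +7 -delete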
Step 4: Uploading Backups to Cloud with Rclone
We use Rclone to upload backups to OneDrive.
backup_to_cloud.sh Example:
#!/bin/bash
BACKUP_DIR="./backups"
REMOTE_NAME="onedrive"
REMOTE_PATH="DB_Backups/$(date +%Y-%m-%d)/"
rclone copy "$BACKUP_DIR" "$REMOTE_NAME:$REMOTE_PATH" --progress
Rclone Setup Steps:
- Install Rclone on the server.
- Run rclone config.
- Create a new remote (e.g., onedrive) and authorize it.
- Test connectivity using rclone lsf onedrive:.
- Tip: Always test rclone copy manually before automating.
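Because the upload runs unattended from cron, it also helps to fail loudly when Rclone returns a non-zero exit code. One possible sketch, reusing the log file from Step 3 (paths and the remote name are placeholders):
#!/bin/bash
BACKUP_DIR="./backups"
REMOTE_NAME="onedrive"
REMOTE_PATH="DB_Backups/$(date +%Y-%m-%d)/"
LOG_FILE="/path/to/my_project/logs/backup.log"
# upload and record the outcome so failures show up in the audit log
if rclone copy "$BACKUP_DIR" "$REMOTE_NAME:$REMOTE_PATH" --progress; then
    echo "$(date): cloud upload succeeded" >> "$LOG_FILE"
else
    echo "$(date): cloud upload FAILED" >> "$LOG_FILE"
    exit 1
fi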
Step 5: Handling Failures and Recovery
Scenario: Docker or a container fails.
Recovery Steps:
- Stop and remove the failed containers:
docker-compose down
- Recreate the containers and wait for the parent to become healthy:
docker-compose up -d
- Restore the database from the latest backup:
psql -U postgres -d mydb < ./backups/mydb_backup_YYYY-MM-DD_HH-MM-SS.sql
- Verify replication status on the parent:
SELECT * FROM pg_stat_replication;
Key Points:
- Keep backups outside Docker volumes to prevent accidental loss.
- Use versioned backups and cloud storage to recover from critical failures.
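After a recovery, it's also worth confirming that the children really are acting as replicas again. Two standard checks, run with psql against the parent and a child respectively (the column selection is just for readability):
-- on the parent: one row per connected child, including how far each has replayed
SELECT client_addr, state, replay_lsn FROM pg_stat_replication;
-- on a child: returns true while the instance is running as a standby
SELECT pg_is_in_recovery();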
Real-World Example
Scenario: A SaaS application stores customer activity data.
- Parent database: Accepts writes.
- Child databases: Provide read replicas for analytics and reporting.
- Backup scripts: Run nightly, stored locally and in the cloud.
- Failure recovery: If the parent container crashes, the latest backup can restore the database, minimizing downtime.
This setup ensures high availability, data durability, and quick disaster recovery.
Summary
In this blog, we covered:
- Dockerized PostgreSQL replication with one parent and multiple children.
- Automated local backups using scripts and cron jobs.
- Cloud backup using Rclone and OneDrive.
- Recovery procedure in case Docker or the database fails.
- Best practices for backup and replication management.
This approach is scalable, secure, and production-ready, suitable for both small teams and enterprise environments.
✅ Next Steps:
- Extend Rclone scripts to support multiple cloud providers.
- Add monitoring alerts if backup or replication fails.
- Schedule incremental backups to optimize storage.
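As a starting point for the first item, backup_to_cloud.sh could simply loop over several configured remotes; the remote names below are placeholders, not remotes from this setup:
#!/bin/bash
BACKUP_DIR="./backups"
REMOTE_PATH="DB_Backups/$(date +%Y-%m-%d)/"
# push the same backup set to every configured Rclone remote
for REMOTE_NAME in onedrive gdrive s3backup; do
    rclone copy "$BACKUP_DIR" "$REMOTE_NAME:$REMOTE_PATH" --progress
done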