Local backups are better than no backups, but they share a critical vulnerability with your database — if the server fails catastrophically, you lose both. Cloud storage solves this by keeping backup copies in a separate location, protected from hardware failures, ransomware and data center incidents. This guide covers five practical approaches to automatically upload your pg_dump backups to cloud storage, from simple scripts to enterprise-ready solutions.
1. Direct Pipe to AWS S3 with AWS CLI
The most straightforward approach is piping pg_dump output directly to the AWS CLI, which uploads the backup to S3 without ever writing to local disk. This eliminates the need for temporary storage; because the data is streamed in chunks rather than staged on disk, memory usage stays modest, and the main requirement is a network connection that holds up for the full duration of the dump.
The AWS CLI handles chunked uploads automatically, making this suitable for backups up to several terabytes. You'll need the aws CLI installed and configured with appropriate IAM credentials that have s3:PutObject permission on your target bucket.
# Direct pipe to S3 (custom format is compressed by default)
pg_dump -F c -d myapp | aws s3 cp - s3://my-bucket/backups/myapp_$(date +%Y%m%d_%H%M%S).dump
# With custom format and compression level
pg_dump -F c -Z 6 -d myapp | aws s3 cp - s3://my-bucket/backups/daily/myapp.dump
# Plain SQL with gzip for maximum compatibility
pg_dump -d myapp | gzip | aws s3 cp - s3://my-bucket/backups/myapp_$(date +%Y%m%d).sql.gz
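One caveat with streaming: the CLI cannot infer the final object size from stdin, so for very large streamed backups give it a size hint with --expected-size so it can pick a suitable multipart chunk size. A sketch, where the 100 GB value (in bytes) is just an illustrative estimate:
# Size hint for very large streams; value is in bytes (~100 GB here)
pg_dump -F c -d myapp | aws s3 cp - s3://my-bucket/backups/myapp.dump --expected-size 107374182400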
- No local disk required — backup streams directly to S3
- Automatic chunked uploads — handles large files seamlessly
- Cost-effective — S3 Standard costs ~$0.023/GB/month
For production use, wrap this in a script that checks the exit code and sends notifications on failure. The main limitation is that streaming prevents retry on network interruptions mid-upload.
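A minimal wrapper along those lines might look like this (the bucket path and notification address are placeholders; set -o pipefail is what makes a pg_dump failure visible through the pipe):
#!/bin/bash
# stream-backup.sh — minimal sketch of a streaming backup with failure notification
set -o pipefail

BUCKET="s3://my-bucket/backups"
STAMP=$(date +%Y%m%d_%H%M%S)

# Stream the dump straight to S3; pipefail makes a pg_dump error fail the whole pipe
if ! pg_dump -F c -d myapp | aws s3 cp - "$BUCKET/myapp_${STAMP}.dump"; then
    echo "Streaming backup of myapp failed at $STAMP" \
        | mail -s "PostgreSQL Backup Failed" admin@example.com
    exit 1
fi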
2. Rclone for Multi-Cloud Flexibility
Rclone is a command-line tool that supports over 40 cloud storage providers with a unified interface. Unlike provider-specific CLIs, rclone lets you switch between S3, Google Cloud Storage, Azure Blob, Dropbox, Google Drive and dozens of others without changing your backup scripts.
Install rclone and configure a remote using rclone config. Each remote gets a name you reference in commands. This approach gives you vendor independence and the ability to replicate backups across multiple clouds simultaneously.
# Configure remotes (one-time setup)
rclone config
# Follow prompts to add 's3remote', 'gdrive', 'azure', etc.
# Backup and upload to any configured remote
pg_dump -F c -d myapp -f /tmp/backup.dump && \
rclone copy /tmp/backup.dump s3remote:my-bucket/backups/ && \
rm /tmp/backup.dump
# Sync to multiple destinations for redundancy
pg_dump -F c -d myapp -f /tmp/backup.dump && \
rclone copy /tmp/backup.dump s3remote:bucket/backups/ && \
rclone copy /tmp/backup.dump gdrive:PostgreSQL/backups/ && \
rm /tmp/backup.dump
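If you would rather skip the temporary file entirely, rclone can also accept a stream on stdin via rclone rcat. A sketch, using the same 's3remote' remote configured above:
# Stream the dump straight to the remote without writing a local file
pg_dump -F c -d myapp | rclone rcat s3remote:my-bucket/backups/myapp_$(date +%Y%m%d_%H%M%S).dump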
| Provider | Rclone Remote Type | Typical Cost | Best For |
|---|---|---|---|
| AWS S3 | s3 | $0.023/GB/month | Enterprise, high durability |
| Cloudflare R2 | s3 (compatible) | $0.015/GB/month | Cost-conscious, no egress |
| Google Drive | drive | Free up to 15GB | Personal projects |
| Backblaze B2 | b2 | $0.006/GB/month | Budget-friendly archival |
| Azure Blob | azureblob | $0.018/GB/month | Microsoft ecosystem |
Rclone also supports encryption, bandwidth limiting and automatic retries — features that make it production-ready for automated backup pipelines.
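A sketch of what that looks like in practice (the 10M bandwidth cap and retry count are arbitrary values, and 'encrypted-remote' assumes a crypt remote you have already set up through rclone config):
# Throttled, retried upload through an encrypting crypt remote
rclone copy /tmp/backup.dump encrypted-remote:backups/ --bwlimit 10M --retries 5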
3. Cron + Upload Script with Retention Management
A shell script scheduled via cron gives you full control over the backup process, including local retention, cloud upload, cleanup of old backups and error handling. This approach is the most common for teams that want customization without adopting a full backup management tool.
The script below demonstrates a production-ready pattern with timestamped filenames, retention policies and basic error notification.
#!/bin/bash
# backup-postgres.sh

DB_NAME="myapp"
BACKUP_DIR="/var/backups/postgres"
S3_BUCKET="s3://my-bucket/postgres-backups"
RETENTION_DAYS=7
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="$BACKUP_DIR/${DB_NAME}_${TIMESTAMP}.dump"

# Create backup
pg_dump -F c -Z 6 -d "$DB_NAME" -f "$BACKUP_FILE"

if [ $? -eq 0 ]; then
    # Upload to S3
    aws s3 cp "$BACKUP_FILE" "$S3_BUCKET/"

    # Clean up old local backups
    find "$BACKUP_DIR" -name "*.dump" -mtime +$RETENTION_DAYS -delete

    # Clean up old S3 backups (optional)
    aws s3 ls "$S3_BUCKET/" | while read -r line; do
        file_date=$(echo "$line" | awk '{print $1}')
        if [[ $(date -d "$file_date" +%s) -lt $(date -d "-$RETENTION_DAYS days" +%s) ]]; then
            file_name=$(echo "$line" | awk '{print $4}')
            aws s3 rm "$S3_BUCKET/$file_name"
        fi
    done
else
    echo "Backup failed for $DB_NAME" | mail -s "PostgreSQL Backup Failed" admin@example.com
fi
Schedule with cron for automated execution:
# Run daily at 3 AM
0 3 * * * /usr/local/bin/backup-postgres.sh >> /var/log/postgres-backup.log 2>&1
This approach requires maintenance as your infrastructure evolves, but provides maximum flexibility for custom requirements.
4. Docker-Based Backup with Volume Mounts
When PostgreSQL runs in Docker, the cleanest approach is a separate backup container that connects to the database, runs pg_dump and uploads to cloud storage. This keeps your backup logic isolated and reproducible across environments.
The backup container can be scheduled via a cron job on the host, Kubernetes CronJobs or external orchestration. The key is injecting cloud credentials securely and ensuring network connectivity to both the database and cloud storage.
# docker-compose.yml
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: secret
    volumes:
      - pgdata:/var/lib/postgresql/data

  backup:
    image: postgres:16
    depends_on:
      - postgres
    environment:
      PGHOST: postgres
      PGUSER: postgres
      PGPASSWORD: secret
      AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
      AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
    entrypoint: >
      sh -c "apt-get update && apt-get install -y awscli &&
      pg_dump -F c -d myapp | aws s3 cp - s3://bucket/backup_$$(date +%Y%m%d).dump"
    profiles:
      - backup

volumes:
  pgdata:
Run the backup manually or schedule it:
# Manual backup trigger
docker compose --profile backup up backup
# Or use a cron job on the host
0 3 * * * docker compose --profile backup up backup
For Kubernetes environments, replace this with a CronJob resource that spawns a backup pod on schedule.
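One way to scaffold that manifest is kubectl create cronjob with --dry-run. The image name below is a placeholder for one that bundles pg_dump and the AWS CLI, and you would still add database and AWS credentials (for example via a Secret) to the generated YAML before applying it:
# Generate a CronJob manifest to review, edit and apply; image and bucket are placeholders
kubectl create cronjob pg-backup \
  --image=ghcr.io/example/pg-backup:latest \
  --schedule="0 3 * * *" \
  --dry-run=client -o yaml \
  -- sh -c "pg_dump -F c -d myapp | aws s3 cp - s3://bucket/backup.dump" \
  > pg-backup-cronjob.yaml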
5. Postgresus — Automated pg_dump with Built-in Cloud Storage
Managing pg_dump scripts, cron schedules, cloud credentials and retention policies across multiple databases quickly becomes a maintenance burden. Postgresus is a dedicated PostgreSQL backup tool designed for both individuals and enterprise teams. It uses pg_dump internally but provides a web interface for configuring schedules, connecting multiple storage destinations (S3, Cloudflare R2, Google Drive, Dropbox, NAS) and receiving notifications via Email, Telegram, Slack or Discord — all without writing scripts. AES-256-GCM encryption ensures backups remain secure even on shared storage, and workspace-based access control lets teams manage databases with proper permissions.
| Feature | Manual Scripts | Postgresus |
|---|---|---|
| Setup time | Hours | Minutes |
| Multi-cloud support | Custom per provider | Built-in for 10+ providers |
| Encryption | DIY implementation | AES-256-GCM included |
| Notifications | Custom scripting | Email, Telegram, Slack, Discord |
| Retention management | Manual cleanup | Automatic policies |
| Team access control | N/A | Workspaces with role-based access |
Choosing the Right Approach
The best method depends on your team size, number of databases and operational maturity. Single-database setups can often get by with a simple cron script, while teams managing multiple databases across environments benefit from dedicated tooling.
- Single database, simple needs — Direct pipe to S3 or rclone script
- Multiple databases, custom requirements — Cron scripts with retention management
- Containerized environments — Docker-based backup containers
- Teams and production systems — Postgresus for centralized management
Whichever approach you choose, the critical principle remains: backups that exist only on the same server as the database aren't truly protecting your data. Cloud storage adds the geographic and infrastructure separation that turns backups from a checkbox item into genuine disaster recovery capability.
Conclusion
Combining pg_dump with cloud storage transforms local backups into resilient disaster recovery assets. Direct piping to S3 offers simplicity for single databases, rclone provides multi-cloud flexibility, cron scripts give full customization control and Docker containers keep backup logic portable. For teams that want robust backups without maintaining scripts, Postgresus automates the entire pipeline from pg_dump execution to cloud upload with encryption and notifications included. The key is getting your backups off the same infrastructure as your database — once that separation exists, you're protected against the failures that matter most.
