Keeping database backups on the same server as the database itself is like storing a spare house key under the doormat. If the server goes down, you lose both the database and the backup. S3-compatible object storage solves this by giving you a remote, durable destination that survives hardware failures, accidental deletions and even full datacenter outages.
This guide walks through two approaches: manually uploading pg_dump output to S3 using the AWS CLI, and using Databasus, an open source tool for PostgreSQL backups, to automate the entire process with a UI.
## Why S3 for PostgreSQL backups
Object storage services like AWS S3, Cloudflare R2, MinIO and DigitalOcean Spaces expose the same S3 API and offer several properties that make them a natural fit for backups. AWS advertises 99.999999999% (eleven nines) durability, meaning your files are replicated across multiple physical locations, and other managed providers make similar claims. You pay only for what you store, and lifecycle rules can automatically move older backups to cheaper tiers or delete them after a retention period.
| Feature | Local storage | S3-compatible storage |
|---|---|---|
| Survives server failure | No | Yes |
| Geographic redundancy | No (unless you set it up manually) | Built-in across availability zones |
| Cost model | Fixed disk cost regardless of usage | Pay per GB stored + transfer |
| Retention automation | Manual scripts | Native lifecycle policies |
| Encryption at rest | Depends on disk setup | Enabled by default on most providers |
Storing backups in S3 also decouples your backup storage from your compute infrastructure. You can tear down a server, spin up a new one and pull the latest backup from S3 without any dependency on the old machine.
## Manual approach with pg_dump and AWS CLI
The most straightforward way to get a PostgreSQL backup into S3 is to dump the database with pg_dump, optionally compress it, then upload it with the AWS CLI. This works with any S3-compatible provider — you just need to point the CLI at the right endpoint.
### Prerequisites
You need three things installed on the machine that will run the backup:
- pg_dump (ships with PostgreSQL client packages)
- AWS CLI v2
- Valid S3 credentials (access key and secret key)
### Step 1: Create the backup

Run pg_dump against your database. The custom format (`-Fc`) produces a compressed binary file that supports parallel restore later.

```shell
pg_dump -h localhost -U postgres -d mydb -Fc -f /tmp/mydb_backup.dump
```
If you prefer a plain SQL file with gzip compression:
```shell
pg_dump -h localhost -U postgres -d mydb | gzip > /tmp/mydb_backup.sql.gz
```
Both approaches work fine. The custom format is generally better for large databases because pg_restore can selectively restore individual tables and run parallel jobs.
### Step 2: Configure AWS CLI credentials
If you haven't configured the CLI yet:
```shell
aws configure
```
It will ask for your access key ID, secret access key, default region and output format. For non-AWS providers like Cloudflare R2 or MinIO, you'll also need to specify a custom endpoint in your commands.
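Newer AWS CLI v2 releases (2.13 and later) can also store the endpoint per profile, so you don't have to repeat `--endpoint-url` on every command. A sketch of what `~/.aws/config` might look like for a hypothetical Cloudflare R2 setup — the profile name, account ID and endpoint are placeholders:

```ini
[profile r2-backups]
region = auto
# Placeholder endpoint: substitute your own account's S3-compatible URL
endpoint_url = https://<account-id>.r2.cloudflarestorage.com
```

Commands then pick it up with `--profile r2-backups`.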
### Step 3: Upload to S3

```shell
aws s3 cp /tmp/mydb_backup.dump s3://my-backup-bucket/postgres/mydb_backup_$(date +%Y%m%d_%H%M%S).dump
```
For non-AWS providers, add the endpoint flag:
```shell
aws s3 cp /tmp/mydb_backup.dump s3://my-backup-bucket/postgres/mydb_backup_$(date +%Y%m%d_%H%M%S).dump \
  --endpoint-url https://your-s3-endpoint.example.com
```
The `$(date +%Y%m%d_%H%M%S)` part timestamps each file so you don't overwrite previous backups.
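A useful side effect of this fixed-width format is that plain string ordering matches chronological ordering, so `aws s3 ls` listings come out in time order. A quick check using GNU date (the `-d` flag; BSD/macOS date uses different options):

```shell
# Fixed-width %Y%m%d_%H%M%S timestamps sort lexicographically in
# the same order as chronologically (GNU date shown).
a=$(date -u -d '2026-03-05 03:00:00' +%Y%m%d_%H%M%S)
b=$(date -u -d '2026-03-06 03:00:00' +%Y%m%d_%H%M%S)
echo "$a"
[ "$a" \< "$b" ] && echo "string order matches time order"
```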
### Step 4: Automate with cron
Create a shell script that combines the dump and upload:
```shell
#!/bin/bash
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="/tmp/mydb_backup_${TIMESTAMP}.dump"
pg_dump -h localhost -U postgres -d mydb -Fc -f "$BACKUP_FILE"
aws s3 cp "$BACKUP_FILE" "s3://my-backup-bucket/postgres/$(basename "$BACKUP_FILE")"
rm "$BACKUP_FILE"
```
Then add it to cron to run daily at 3 AM:
```shell
0 3 * * * /opt/scripts/backup_to_s3.sh >> /var/log/pg_backup.log 2>&1
```
This approach works, but it has some gaps. There's no built-in retry on failure, no notifications if a backup doesn't complete, no retention management and no encryption beyond what S3 provides at rest. You'd need to build all of that yourself.
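Some of those gaps can be patched by hand before reaching for a dedicated tool. Here is a minimal sketch of a retry wrapper and a failure-notification hook you could fold into the script above; the function names and the webhook URL are placeholders of my own, not part of any standard tooling:

```shell
#!/bin/bash
set -euo pipefail

# Retry a command up to N times, pausing between attempts.
retry() {
  local attempts=$1; shift
  local i
  for ((i = 1; i <= attempts; i++)); do
    "$@" && return 0
    echo "attempt $i/$attempts failed: $*" >&2
    sleep 2
  done
  return 1
}

# Failure hook: swap in Slack, email, PagerDuty, etc.
# The webhook URL below is a placeholder.
notify() {
  curl -fsS -X POST -d "{\"text\": \"$1\"}" \
    "https://hooks.example.com/backup-alerts" || true
}

# Usage inside the backup script:
# retry 3 aws s3 cp "$BACKUP_FILE" "s3://my-backup-bucket/postgres/$(basename "$BACKUP_FILE")" \
#   || notify "PostgreSQL backup upload failed on $(hostname)"
```

This still leaves retention and encryption unsolved, but it covers the most common failure mode: a transient network error during upload.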
## Restoring from S3
Downloading and restoring a backup is the reverse of the upload process.
```shell
aws s3 cp s3://my-backup-bucket/postgres/mydb_backup_20260305_030000.dump /tmp/restore.dump
pg_restore -h localhost -U postgres -d mydb -Fc /tmp/restore.dump
```
For a plain SQL backup:
```shell
aws s3 cp s3://my-backup-bucket/postgres/mydb_backup_20260305_030000.sql.gz /tmp/restore.sql.gz
gunzip /tmp/restore.sql.gz
psql -h localhost -U postgres -d mydb -f /tmp/restore.sql
```
Test your restores periodically. A backup that can't be restored is not a backup.
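Short of a full restore drill, a cheap automated check is to confirm that the latest backup file exists, is plausibly sized and is fresh. A hypothetical helper with illustrative thresholds (the function name and limits are my own; `stat` flags differ between GNU and BSD, so both are tried):

```shell
#!/bin/bash
# Fail if a backup file is missing, suspiciously small, or stale.
# Thresholds are illustrative; tune them to your database and schedule.
verify_backup() {
  local file=$1 min_bytes=$2 max_age_hours=$3
  [ -f "$file" ] || { echo "missing: $file" >&2; return 1; }
  local size mtime age_seconds
  size=$(stat -c %s "$file" 2>/dev/null || stat -f %z "$file")
  [ "$size" -ge "$min_bytes" ] || { echo "too small: $size bytes" >&2; return 1; }
  mtime=$(stat -c %Y "$file" 2>/dev/null || stat -f %m "$file")
  age_seconds=$(( $(date +%s) - mtime ))
  [ "$age_seconds" -le $((max_age_hours * 3600)) ] || { echo "stale: $file" >&2; return 1; }
  echo "ok: $file ($size bytes)"
}
```

Run it right after the download step, or against the local dump before uploading, and wire its failure into whatever alerting you already have.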
## Common S3 configuration mistakes
A few things trip people up when setting up S3 backups for the first time.
- Wrong region in the CLI config. If your bucket is in eu-west-1 but the CLI defaults to us-east-1, uploads may fail or go to the wrong place.
- Missing bucket policy for cross-account access. If the backup server runs under a different AWS account than the bucket owner, you need an explicit bucket policy granting access.
- No lifecycle rules. Without lifecycle rules, old backups pile up forever. Set an expiration policy that matches your retention requirements.
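For example, on AWS a rule that expires everything under the `postgres/` prefix after 30 days can be defined as JSON and applied with `aws s3api put-bucket-lifecycle-configuration`. A sketch — the bucket name is a placeholder, and the apply step is commented out because it needs real credentials:

```shell
# Lifecycle rule: expire objects under the postgres/ prefix after 30 days.
cat > /tmp/lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "expire-old-postgres-backups",
      "Status": "Enabled",
      "Filter": { "Prefix": "postgres/" },
      "Expiration": { "Days": 30 }
    }
  ]
}
EOF

# Apply it (needs credentials with s3:PutLifecycleConfiguration):
# aws s3api put-bucket-lifecycle-configuration \
#   --bucket my-backup-bucket \
#   --lifecycle-configuration file:///tmp/lifecycle.json
```

Match the `Days` value to your retention policy, and remember that lifecycle rules act on the whole prefix, so keep backups for different databases under separate prefixes if they need different retention.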
## Automated backups to S3 with Databasus
Databasus is an open source tool for PostgreSQL backups, aimed at both individual developers and teams. Instead of writing and maintaining shell scripts, it gives you scheduled backups to S3 with retention policies, encryption, compression and failure notifications — all configured through a web UI.
### Install Databasus
With Docker:
```shell
docker run -d \
  --name databasus \
  -p 4005:4005 \
  -v ./databasus-data:/databasus-data \
  --restart unless-stopped \
  databasus/databasus:latest
```
Or with Docker Compose — create a docker-compose.yml:
```yaml
services:
  databasus:
    container_name: databasus
    image: databasus/databasus:latest
    ports:
      - "4005:4005"
    volumes:
      - ./databasus-data:/databasus-data
    restart: unless-stopped
```
Then run:
```shell
docker compose up -d
```
### Set up a backup to S3
Open http://localhost:4005 in your browser and follow these steps:
1. Add your database. Click "New Database" and enter your PostgreSQL connection details — host, port, username, password and database name.
2. Select storage. Choose S3 as the storage destination. Enter your bucket name, region, access key and secret key. For non-AWS providers like Cloudflare R2 or MinIO, specify the custom endpoint URL.
3. Select a schedule. Pick how often backups should run — hourly, daily, weekly, monthly or a custom cron expression. Set the specific time if you want backups during off-peak hours.
4. Click "Create backup." Databasus validates the connection settings and starts the backup schedule.
Databasus handles compression, retry logic, retention policies (time-based, count-based or GFS) and optional AES-256-GCM encryption automatically. You can also configure notifications through Slack, Telegram, Discord or email so you know immediately if something fails.
## Choosing an S3-compatible provider
Not every project needs AWS. Several providers offer S3-compatible APIs at different price points and with different tradeoffs.
| Provider | Free tier | Egress fees | Notable feature |
|---|---|---|---|
| AWS S3 | 5 GB for 12 months | Yes, per GB | Most mature ecosystem |
| Cloudflare R2 | 10 GB storage, 10M requests/month | No egress fees | Great for cost-sensitive setups |
| MinIO | Self-hosted, unlimited | N/A (self-hosted) | Full control over infrastructure |
| DigitalOcean Spaces | 250 GB included with $5/mo plan | 1 TB included | Simple pricing |
| Backblaze B2 | 10 GB storage | Free with Cloudflare | Lowest per-GB storage cost |
For most PostgreSQL backup use cases, the storage amounts are small enough that the cost differences between providers are negligible. Pick based on what else you're already using — if your app runs on AWS, use S3. If you want zero egress fees, go with R2.
## Security considerations
Backups contain all your data, so treat them with the same care as the database itself.
- Enable server-side encryption on your S3 bucket. Most providers do this by default, but verify it.
- Use IAM policies with minimal permissions. The backup user should only have `s3:PutObject` and `s3:GetObject` on the specific bucket — not full S3 admin access.
- Enable bucket versioning. This protects against accidental overwrites or deletions of backup files.
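As an illustration, a minimal IAM policy scoped that tightly might look like the following. The bucket name is a placeholder, and you may also need `s3:ListBucket` on the bucket itself if your tooling lists existing backups:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::my-backup-bucket/*"
    }
  ]
}
```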
Rotating access keys periodically also limits the blast radius if a key ever leaks. And for sensitive data, consider client-side encryption before upload. Tools like Databasus support AES-256-GCM encryption before the data leaves your server, so even if someone gains access to the bucket, the files are unreadable without the encryption key.
Spending an hour on security setup now saves you from a painful incident later.
