When a client’s database is the heart of their business, a simple local backup isn't enough. If the server catches fire, the local backup burns with it. What they really needed was a Disaster Recovery (DR) System—a way to ensure that even in a worst-case scenario, their data is safe, off-site, and ready to be restored.
Standard cloud recovery plans can cost hundreds of dollars a month. However, after analyzing the client's data volume, we realized we could build a high-resilience system for $0/month using Docker, Google Drive, and rclone.
The Strategy: Off-Site Redundancy
A true Disaster Recovery plan follows the 3-2-1 Rule: 3 copies of data, on 2 different media, with 1 copy off-site.
Because our client had modest storage needs, we recommended skipping expensive enterprise vaults and instead using their existing 15GB of free Google Drive storage as their off-site DR site. It’s secure, global, and provides the redundancy needed to survive a local server disaster.
Step 1: The "Vault" (Docker)
We encapsulated the recovery tools into a Docker container. This ensures that the recovery process is portable; if the primary server fails, we can spin up this same "vault" on any other machine in minutes to begin the restoration.
To make it bulletproof on their Windows server, we baked the settings directly into the image. For example, here is a typical Dockerfile for this DR setup:
# Example Dockerfile
FROM alpine:3.18
RUN apk add --no-cache postgresql-client rclone bash ca-certificates
RUN mkdir -p /backups /config
COPY ./config /config
COPY ./scripts/backup-db.sh /scripts/backup-db.sh
RUN chmod +x /scripts/backup-db.sh
CMD ["/bin/bash", "-c", "while true; do /scripts/backup-db.sh; sleep 86400; done"]
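Assuming the Dockerfile above sits alongside the config/ and scripts/ folders it copies, bringing the vault up takes two commands. The image and container name dr-vault is our own illustrative choice:

```shell
# Build the DR image from the Dockerfile above ("dr-vault" is an illustrative name)
docker build -t dr-vault .

# Run it detached; --restart unless-stopped brings the vault back up with the server
docker run -d --name dr-vault --restart unless-stopped dr-vault
```

The restart policy matters here: on a Windows server that reboots for updates, it ensures the daily backup loop resumes without anyone touching it.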
Step 2: The "Bridge" (Rclone)
To move data to our off-site DR location, we used rclone.
The Challenge: Google service accounts have no Drive storage quota of their own, so uploads into a personal account's free 15GB fail.
The Fix: We authorized rclone with an OAuth token instead, allowing the system to act as the client's own account and use their full personal storage quota for disaster protection.
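In practice this means running rclone's interactive `rclone authorize "drive"` on any machine with a browser, then pasting the result into the remote's configuration. The finished entry in rclone.conf looks roughly like this (token redacted; the remote name gdrive matches the transfer commands later in this post):

```ini
# /config/rclone.conf
[gdrive]
type = drive
scope = drive
token = {"access_token":"...","refresh_token":"...","token_type":"Bearer","expiry":"..."}
```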
Important Security Tip for Windows Users:
If Windows blocks the connection while you're generating your token, another service may be occupying the local port rclone needs for its OAuth callback. Temporarily stop that service, complete the authorization, and restart it immediately afterwards so the rest of the system keeps working correctly.
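Rather than guessing which service is in the way, you can check which process owns the port rclone listens on for the OAuth callback (127.0.0.1:53682 by default). From an elevated PowerShell, for example (the PID 1234 below is a placeholder for whatever the first command reports):

```shell
# rclone's local OAuth redirect defaults to port 53682
netstat -ano | findstr :53682

# Look up the owning process by the PID shown in the last column
tasklist /FI "PID eq 1234"
```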
Step 3: Making it Permanent (The "Forever" Fix)
In a disaster recovery scenario, the last thing you want is a "broken link." Google tokens expire every 7 days in "Testing" mode. To ensure the recovery pipeline is always active, we "Published" the app in the Google Cloud Console. This ensures the refresh token lasts indefinitely, making the DR system truly "set-and-forget."
Step 4: The "Recovery Engine" (The Script)
We wrote a script that automates the daily protection cycle. It doesn't just copy files; it ensures data integrity by creating compressed, timestamped snapshots of the entire database.
# Example logic for the DR engine:
# 1. Snapshot the data for integrity (custom format, restorable with pg_restore)
pg_dump -h db_host -U user -d database -Fc --file=backup.dump
# 2. Compress for faster off-site transfer
tar -czf backup.tar.gz backup.dump
# 3. Securely transfer to the off-site DR folder
rclone --config /config/rclone.conf copy backup.tar.gz gdrive:nordible/db-backups
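Fleshed out, the core of backup-db.sh can be sketched as two small shell functions. This is a sketch, not the client's exact script: db_host, user, and database are placeholders for the real connection details, and the timestamp format is our own choice:

```shell
#!/bin/bash
# Sketch of the DR engine (backup-db.sh); connection details are placeholders
set -euo pipefail

# Build a timestamped archive name, e.g. backup-2024-05-01_030000.tar.gz
snapshot_name() {
  echo "backup-$(date +%Y-%m-%d_%H%M%S).tar.gz"
}

run_backup() {
  local archive
  archive=$(snapshot_name)
  # 1. Snapshot in pg_dump's custom format (-Fc), restorable with pg_restore
  pg_dump -h db_host -U user -d database -Fc --file=/backups/backup.dump
  # 2. Compress for a faster off-site transfer
  tar -czf "/backups/${archive}" -C /backups backup.dump
  # 3. Ship the snapshot to the off-site DR folder
  rclone --config /config/rclone.conf copy "/backups/${archive}" gdrive:nordible/db-backups
  # 4. Clean up local artifacts to keep the container lean
  rm -f /backups/backup.dump "/backups/${archive}"
}

# The script ends by calling: run_backup
```

Because each archive carries its own timestamp, a botched upload never overwrites a good snapshot from the day before.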
The "Bonus" Feature: Automatic Retention
A good DR system stays lean. As a bonus, we added a cleanup rule that automatically deletes snapshots older than 5 days. This keeps the recovery site organized and ensures the client never hits a storage limit during a crisis.
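rclone can enforce this retention window directly with its --min-age filter, so the cleanup is a single line appended to the daily cycle:

```shell
# Prune off-site snapshots older than 5 days after each successful upload
rclone --config /config/rclone.conf delete --min-age 5d gdrive:nordible/db-backups
```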
The Result: Business Resilience
By combining these free tools, we achieved a professional-grade Disaster Recovery system.
- Cost: $0/month.
- Off-site Redundancy: Fully Automated.
- Resilience: If the server goes down, the data is safe in the cloud.
Now, our client can sleep soundly knowing that even if disaster strikes, their business data is just a few clicks away from a full recovery.