A developer's journey from backup anxiety to peaceful sleep
Picture this: It's 2 AM, and you're lying awake wondering, "What if our database crashes tomorrow?" If you're running a MongoDB instance in production, this scenario has probably crossed your mind more than once.
As developers, we know backups are crucial, but let's be honest – manual backups are a recipe for disaster. We get busy, we forget, and suddenly we're one hardware failure away from losing everything our users trust us with.
That's exactly where I found myself six months ago. Our startup was growing, our MongoDB database was becoming more critical by the day, and I realized we needed a backup solution that worked even when I was on vacation (or simply human and forgot to run it manually).
The Solution That Changed Everything
After researching various approaches, I built an automated backup system that creates daily snapshots of our MongoDB database and stores them safely in Google Drive. The best part? It runs completely hands-off, and I sleep better knowing our data is protected.
Here's how you can build the same peace of mind into your infrastructure:
What You'll Need
Before we dive in, make sure you have:
- A MongoDB instance running in Docker with port 27017 published to the host (a minimal example follows this list)
- MongoDB Database Tools installed on your Ubuntu server
- A Google account for cloud storage
- About 30 minutes to set this up once
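If you're starting from scratch, the rest of this guide assumes a container along these lines. Treat it as a sketch: the container name, credentials, volume, and image tag are placeholders, so substitute your own.
docker run -d \
  --name mongodb \
  -p 27017:27017 \
  -e MONGO_INITDB_ROOT_USERNAME=your_admin_user \
  -e MONGO_INITDB_ROOT_PASSWORD=your_super_strong_password \
  -v mongo_data:/data/db \
  mongo:7
And if mongodump isn't on the server yet, the MongoDB Database Tools are typically installed from MongoDB's own apt repository rather than Ubuntu's default one.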
Step 1: Setting Up rclone (Your Cloud Storage Bridge)
The magic happens with a tool called rclone – think of it as the Swiss Army knife for cloud storage management. Installation is surprisingly simple:
curl https://rclone.org/install.sh | sudo bash
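Once the install script finishes, a quick sanity check confirms rclone is on your PATH:
rclone version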
Step 2: Connecting to Google Drive (The Tricky Part Made Simple)
This step used to intimidate me, but it's actually straightforward once you know the process. Since most servers don't have web browsers, we'll use what's called "headless" authentication:
- Start the configuration: run rclone config on your server and choose n to create a new remote. Name it gdrive so it matches the backup script below.
- Follow the prompts: pick Google Drive as the storage type, leave the client ID and secret blank, and grant full access to your Drive.
- The authentication dance: when rclone asks whether it can open a web browser, say no (we're on a headless server). It will then tell you how to finish the sign-in from a machine that does have a browser and paste the resulting token back into the server prompt.
- Confirm and you're done! Accept the summary, quit the config tool, and your gdrive remote is ready (a sample session is sketched below).
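The exact prompts differ a little between rclone versions, so treat this as a rough sketch of the headless session rather than a word-for-word transcript:
rclone config
# n) New remote                 -> name it "gdrive" (the script below expects this name)
# Storage                       -> "drive" (Google Drive)
# client_id / client_secret     -> leave blank
# scope                         -> full access
# Edit advanced config?         -> n
# Use auto config / web browser? -> n (headless server)
# rclone now prints instructions: run  rclone authorize "drive"  on any machine
# with a browser, sign in to Google, and paste the token it prints back into
# the server prompt.
# Configure this as a Shared Drive? -> n
# Keep this "gdrive" remote?        -> y, then q to quit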
Pro tip: This one-time setup might feel tedious, but you'll thank yourself later when backups are running silently in the background.
Step 3: The Backup Script That Does the Heavy Lifting
Here's where the real magic happens. I created a script that handles everything: dumping the database, uploading to the cloud, and cleaning up afterwards.
Create your script file:
nano ~/backup_mongo.sh
Here's the complete script (remember to update the credentials):
#!/bin/bash
# --- Configuration ---
# Database credentials (replace with your actual values)
MONGO_USER="your_admin_user"
MONGO_PASS="your_super_strong_password"
# Local backup directory
BACKUP_DIR="/home/mongo_backups"
# rclone Configuration
RCLONE_REMOTE="gdrive"
DRIVE_BACKUP_FOLDER="MongoDB_Backups"
# --- The Magic Happens Here ---
echo "Starting MongoDB backup..."
mkdir -p "$BACKUP_DIR"
TIMESTAMP=$(date +"%Y-%m-%d_%H-%M-%S")
BACKUP_FILE="$BACKUP_DIR/mongodb_backup_$TIMESTAMP.gz"

# Create the database dump
echo "Creating database dump: $BACKUP_FILE"
mongodump \
  --host=localhost \
  --port=27017 \
  --username="$MONGO_USER" \
  --password="$MONGO_PASS" \
  --authenticationDatabase=admin \
  --archive="$BACKUP_FILE" \
  --gzip

# Check if the dump worked
if [ $? -eq 0 ]; then
  echo "Database dump successful."

  # Upload to Google Drive
  echo "Uploading to Google Drive..."
  rclone copy "$BACKUP_FILE" "${RCLONE_REMOTE}:${DRIVE_BACKUP_FOLDER}" --progress

  if [ $? -eq 0 ]; then
    echo "Upload successful."
    # Clean up local file to save space
    rm "$BACKUP_FILE"
    echo "Backup complete!"
  else
    echo "rclone upload failed!"
  fi
else
  echo "mongodump failed!"
fi
Make it executable:
chmod +x ~/backup_mongo.sh
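Before handing this off to cron, I recommend one manual run so you can watch it work end to end, then confirm the archive actually landed in Drive (the folder name matches DRIVE_BACKUP_FOLDER from the script):
~/backup_mongo.sh
rclone ls gdrive:MongoDB_Backups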
Step 4: Set It and Forget It with Cron
The final piece is automation. I set mine to run at 3 AM Bangladesh time (when our traffic is lowest):
crontab -e
Add this line:
0 21 * * * /home/revuers/backup_mongo.sh > /home/revuers/mongo_backup.log 2>&1
Why 21:00? Bangladesh is UTC+6, so 21:00 UTC is 3:00 AM local time – perfect timing for minimal disruption. (This assumes your server's clock is set to UTC; cron uses the server's local timezone, so adjust the hour if yours differs.)
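Two quick checks I find worthwhile: confirm the entry was saved, and peek at the log after the first scheduled run (the paths match the cron line above):
crontab -l
tail -n 20 /home/revuers/mongo_backup.log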
The Peace of Mind Payoff
Six months later, this system has:
- Created 180+ automated backups without me lifting a finger
- Survived server maintenance, power outages, and my vacation to Cox's Bazar
- Saved me countless hours of manual backup procedures
Lessons Learned
- Start simple, then optimize: My first version was much more complex. This stripped-down approach has proven more reliable.
- Monitor but don't micromanage: The log file lets me spot-check without obsessing over every backup.
- Test your restores: Having backups means nothing if you can't restore from them. Test this process before you need it (a restore sketch follows this list).
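For reference, restoring is essentially the dump in reverse. The file name below is only a placeholder – point these commands at whichever archive you pulled down, and ideally at a staging instance rather than production:
# Pull an archive back from Google Drive (replace <timestamp> with a real backup's timestamp)
rclone copy gdrive:MongoDB_Backups/mongodb_backup_<timestamp>.gz /tmp/restore/

# Restore it into MongoDB
mongorestore \
  --host=localhost \
  --port=27017 \
  --username=your_admin_user \
  --password=your_super_strong_password \
  --authenticationDatabase=admin \
  --gzip \
  --archive=/tmp/restore/mongodb_backup_<timestamp>.gz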
What's Next?
I'm considering adding Slack notifications for backup failures and implementing a retention policy to automatically delete old backups. But honestly? Sometimes the simple solution that just works is perfect as-is.
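If you want to experiment before I get to it, both ideas can stay simple. The 30-day window and the webhook URL below are placeholders, not recommendations:
# Retention: delete archives older than 30 days from the Drive folder
rclone delete gdrive:MongoDB_Backups --min-age 30d

# Failure alert: a plain webhook call from the script's failure branches
curl -X POST -H 'Content-type: application/json' \
  --data '{"text":"MongoDB backup failed!"}' \
  https://hooks.slack.com/services/YOUR/WEBHOOK/URL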
Have you implemented automated backups for your databases? What challenges did you face, and what solutions worked for you? I'd love to hear about your experiences in the comments below.
If this helped you sleep better at night knowing your data is safe, give it a 👍 and share it with your developer friends who might need this peace of mind too.