Piter Adyson

MongoDB automated backups — Setting up automated MongoDB backup schedules

Backing up MongoDB manually works fine until you forget to do it. And that usually happens right before something breaks. Automated backups remove the human factor from the equation and ensure your data is protected consistently. This guide covers several approaches to automating MongoDB backups, from simple cron jobs to dedicated backup tools.


Why automate MongoDB backups

Manual backups are unreliable. People get busy, forget, or assume someone else is handling it. Automation solves this by running backups on a schedule without any human involvement. You set it up once and it runs until you change something.

| Benefit | Description |
|---|---|
| Consistency | Backups run at the same time every hour, day, or week |
| Reliability | No missed backups due to human error |
| Recovery confidence | You know exactly when your last backup was created |
| Reduced workload | No need to remember or manually trigger backups |

Automated backups also make it easier to implement retention policies. You can keep daily backups for a week, weekly backups for a month, and monthly backups for a year — all without manual intervention.
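For example, if the backup script sorts dumps into daily, weekly and monthly directories (a layout assumed here, not created by the scripts below), the retention tiers reduce to three find commands:

# Tiered retention, assuming dumps are sorted into per-tier directories
find /var/backups/mongodb/daily   -mindepth 1 -maxdepth 1 -mtime +7   -exec rm -rf {} +
find /var/backups/mongodb/weekly  -mindepth 1 -maxdepth 1 -mtime +31  -exec rm -rf {} +
find /var/backups/mongodb/monthly -mindepth 1 -maxdepth 1 -mtime +365 -exec rm -rf {} +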

Using cron with mongodump

The most common approach to automating MongoDB backups on Linux is combining mongodump with cron. This method is straightforward and works well for single-server deployments.

Basic mongodump backup script

First, create a backup script that handles the dump and cleanup:

#!/bin/bash

# Configuration
BACKUP_DIR="/var/backups/mongodb"
MONGO_HOST="localhost"
MONGO_PORT="27017"
MONGO_USER="backup_user"
MONGO_PASS="your_password"  # note: passwords passed on the CLI are visible in the process list
RETENTION_DAYS=7

# Create timestamp
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_PATH="$BACKUP_DIR/$TIMESTAMP"

# Create backup directory
mkdir -p "$BACKUP_PATH"

# Run mongodump
mongodump --host "$MONGO_HOST" --port "$MONGO_PORT" \
  --username "$MONGO_USER" --password "$MONGO_PASS" \
  --authenticationDatabase admin \
  --out "$BACKUP_PATH" \
  --gzip

# Check if backup succeeded
if [ $? -eq 0 ]; then
  echo "Backup completed: $BACKUP_PATH"
else
  echo "Backup failed" >&2
  exit 1
fi

# Remove old backups (restrict to top-level timestamp directories so
# neither $BACKUP_DIR itself nor nested database subdirectories match)
find "$BACKUP_DIR" -mindepth 1 -maxdepth 1 -type d -mtime +$RETENTION_DAYS -exec rm -rf {} +

echo "Cleanup completed. Removed backups older than $RETENTION_DAYS days."

Save this script as /usr/local/bin/mongodb-backup.sh and make it executable:

chmod +x /usr/local/bin/mongodb-backup.sh

Setting up the cron job

Open the crontab editor:

crontab -e

Add a line to run the backup at your preferred time. For daily backups at 3 AM:

0 3 * * * /usr/local/bin/mongodb-backup.sh >> /var/log/mongodb-backup.log 2>&1

The cron syntax works as follows: minute, hour, day of month, month, day of week. Some common schedules:

  • 0 3 * * * — Daily at 3:00 AM
  • 0 */6 * * * — Every 6 hours
  • 0 3 * * 0 — Weekly on Sunday at 3:00 AM
  • 0 3 1 * * — Monthly on the 1st at 3:00 AM

This approach is simple but has limitations. You need to handle error notifications separately, manage storage manually, and there's no built-in monitoring.
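One way to close the notification gap is to chain a webhook call to the cron job so it fires whenever the script exits non-zero. A sketch using a Slack-style incoming webhook (the URL is a placeholder; a crontab entry must stay on one line):

0 3 * * * /usr/local/bin/mongodb-backup.sh >> /var/log/mongodb-backup.log 2>&1 || curl -s -X POST -H 'Content-type: application/json' --data "{\"text\":\"MongoDB backup failed on $(hostname)\"}" https://hooks.slack.com/services/XXX/YYY/ZZZ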

Uploading backups to cloud storage

Keeping backups on the same server as your database is risky. If the server fails, you lose both the database and the backups. Uploading to cloud storage adds an extra layer of protection.

Backup script with S3 upload

Here's an extended script that uploads backups to AWS S3 (it assumes the AWS CLI is installed and configured with credentials):

#!/bin/bash

# Configuration
BACKUP_DIR="/var/backups/mongodb"
S3_BUCKET="s3://your-bucket-name/mongodb-backups"
MONGO_HOST="localhost"
MONGO_PORT="27017"
MONGO_USER="backup_user"
MONGO_PASS="your_password"
RETENTION_DAYS=7

# Create timestamp
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="mongodb_backup_$TIMESTAMP.gz"
BACKUP_PATH="$BACKUP_DIR/$BACKUP_FILE"

# Create backup directory
mkdir -p "$BACKUP_DIR"

# Run mongodump with archive to single file
mongodump --host "$MONGO_HOST" --port "$MONGO_PORT" \
  --username "$MONGO_USER" --password "$MONGO_PASS" \
  --authenticationDatabase admin \
  --archive="$BACKUP_PATH" \
  --gzip

# Check if backup succeeded
if [ $? -ne 0 ]; then
  echo "Backup failed" >&2
  exit 1
fi

echo "Backup completed: $BACKUP_PATH"

# Upload to S3
aws s3 cp "$BACKUP_PATH" "$S3_BUCKET/$BACKUP_FILE"

if [ $? -eq 0 ]; then
  echo "Upload to S3 completed"
  # Remove local backup after successful upload
  rm "$BACKUP_PATH"
else
  echo "S3 upload failed" >&2
  exit 1
fi

# Clean up old S3 backups (optional - requires lifecycle policy or manual cleanup)
echo "Backup process completed successfully"

This script creates a single compressed archive file instead of a directory structure, making it easier to upload and manage.
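The same archive restores with mongorestore's --archive flag. A sketch, with an illustrative filename and the credentials from the script above:

mongorestore --host localhost --port 27017 \
  --username backup_user --password your_password \
  --authenticationDatabase admin \
  --gzip --archive="/var/backups/mongodb/mongodb_backup_20240101_030000.gz"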

For S3 lifecycle policies, configure retention in the AWS console or via AWS CLI to automatically delete old backups.
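For example, this applies a rule that expires objects under the backup prefix after 30 days (bucket name and prefix are illustrative):

cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "expire-mongodb-backups",
      "Filter": { "Prefix": "mongodb-backups/" },
      "Status": "Enabled",
      "Expiration": { "Days": 30 }
    }
  ]
}
EOF

aws s3api put-bucket-lifecycle-configuration \
  --bucket your-bucket-name \
  --lifecycle-configuration file://lifecycle.json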

Automating replica set backups

Backing up a replica set requires additional considerations. You should back up from a secondary node to avoid impacting the primary's performance.

Replica set backup script

#!/bin/bash

# Configuration
BACKUP_DIR="/var/backups/mongodb"
MONGO_URI="mongodb://backup_user:password@node1:27017,node2:27017,node3:27017/?replicaSet=rs0&authSource=admin"
RETENTION_DAYS=7

# Create timestamp
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_PATH="$BACKUP_DIR/$TIMESTAMP"

mkdir -p "$BACKUP_PATH"

# Backup from secondary with oplog for point-in-time recovery
mongodump --uri="$MONGO_URI" \
  --readPreference=secondary \
  --oplog \
  --out "$BACKUP_PATH" \
  --gzip

if [ $? -eq 0 ]; then
  echo "Replica set backup completed: $BACKUP_PATH"
else
  echo "Backup failed" >&2
  exit 1
fi

# Cleanup old backups (top-level timestamp directories only)
find "$BACKUP_DIR" -mindepth 1 -maxdepth 1 -type d -mtime +$RETENTION_DAYS -exec rm -rf {} +

The --oplog flag captures oplog entries written while the dump runs, enabling a consistent point-in-time restore with mongorestore --oplogReplay. Note that --oplog requires dumping all databases; it cannot be combined with --db or --collection. The --readPreference=secondary option directs the dump to a secondary node.
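To restore from such a dump, replay the captured oplog with mongorestore (a sketch; the backup path is illustrative):

mongorestore --uri="$MONGO_URI" \
  --gzip \
  --oplogReplay \
  /var/backups/mongodb/20240101_030000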

Using Databasus for scheduled MongoDB backups

Writing and maintaining backup scripts takes time. Databasus takes a simpler approach: a web interface, built-in scheduling, multiple storage options and notifications. It works for both individual developers and enterprise teams.

Installing Databasus

Using Docker:

docker run -d \
  --name databasus \
  -p 4005:4005 \
  -v ./databasus-data:/databasus-data \
  --restart unless-stopped \
  databasus/databasus:latest

Or with Docker Compose:

services:
  databasus:
    container_name: databasus
    image: databasus/databasus:latest
    ports:
      - "4005:4005"
    volumes:
      - ./databasus-data:/databasus-data
    restart: unless-stopped

Start the container:

docker compose up -d

Configuring automated backups

  1. Add your database — Open http://localhost:4005, click "New Database" and enter your MongoDB connection details. Databasus supports standalone instances, replica sets and sharded clusters.

  2. Select storage — Choose where to store backups. Options include local storage, AWS S3, Google Drive, Cloudflare R2, SFTP and more. You can configure multiple storage destinations for redundancy.

  3. Select schedule — Pick a backup frequency: hourly, daily, weekly, monthly or custom cron expression. Set the specific time when backups should run.

  4. Click "Create backup" — Databasus validates your settings and starts the backup schedule. You'll see the next scheduled backup time on the dashboard.

Databasus handles compression, retention policies, error notifications and backup verification automatically. You can also set up notifications via Slack, Discord, Telegram or email to know immediately if a backup fails.

Monitoring backup health

Automated backups are only useful if they actually work. You need monitoring to catch failures early.

Basic monitoring checklist

  • Verify backup files exist and have reasonable sizes
  • Check backup logs for errors
  • Test restores periodically
  • Monitor disk space on backup storage
  • Set up alerts for failed backups

Simple monitoring script

#!/bin/bash

BACKUP_DIR="/var/backups/mongodb"
MIN_SIZE=1000000  # 1MB minimum
ALERT_EMAIL="admin@example.com"

# Find the latest backup directory (timestamp names sort chronologically,
# so reverse-sort and take the first entry)
LATEST=$(find "$BACKUP_DIR" -mindepth 1 -maxdepth 1 -type d | sort -r | head -1)

if [ -z "$LATEST" ]; then
  echo "No backups found" | mail -s "MongoDB Backup Alert" "$ALERT_EMAIL"
  exit 1
fi

# Check backup age (25 hours gives a daily job one hour of slack)
BACKUP_AGE=$(( ($(date +%s) - $(stat -c %Y "$LATEST")) / 3600 ))

if [ $BACKUP_AGE -gt 25 ]; then
  echo "Latest backup is $BACKUP_AGE hours old" | mail -s "MongoDB Backup Alert" "$ALERT_EMAIL"
  exit 1
fi

# Check backup size
BACKUP_SIZE=$(du -sb "$LATEST" | cut -f1)

if [ $BACKUP_SIZE -lt $MIN_SIZE ]; then
  echo "Backup size ($BACKUP_SIZE bytes) is suspiciously small" | mail -s "MongoDB Backup Alert" "$ALERT_EMAIL"
  exit 1
fi

echo "Backup health check passed"

Schedule this monitoring script to run after your backup job completes.
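For example, if the backup starts at 3 AM, this crontab entry (the health-check script path is assumed) gives the dump half an hour to finish before the check runs:

30 3 * * * /usr/local/bin/mongodb-backup-check.sh >> /var/log/mongodb-backup-check.log 2>&1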

Backup schedule recommendations

The right backup frequency depends on your data change rate and recovery requirements.

| Scenario | Recommended schedule | Retention |
|---|---|---|
| Low-traffic application | Daily at off-peak hours | 7-14 days |
| E-commerce or SaaS | Every 6 hours | 7 days |
| Financial or compliance | Hourly | 30 days |
| Development/staging | Daily | 3-5 days |

Consider your Recovery Point Objective (RPO) — how much data loss is acceptable. If losing 24 hours of data is unacceptable, daily backups aren't enough.

Testing your automated backups

An untested backup is not a backup. Schedule regular restore tests to verify your backups actually work.

#!/bin/bash

# Test restore to a separate database. BACKUP_PATH assumes you maintain a
# "latest" symlink pointing at the newest backup; "production" below
# stands in for your database name.
TEST_DB="restore_test_$(date +%Y%m%d)"
BACKUP_PATH="/var/backups/mongodb/latest"

mongorestore --host localhost --port 27017 \
  --username admin --password your_password \
  --authenticationDatabase admin \
  --nsFrom="production.*" \
  --nsTo="$TEST_DB.*" \
  --gzip \
  "$BACKUP_PATH"

if [ $? -eq 0 ]; then
  echo "Restore test passed"
  # Drop the test database (pass credentials if authentication is enabled)
  mongosh -u admin -p your_password --authenticationDatabase admin \
    --eval "db.getSiblingDB('$TEST_DB').dropDatabase()"
else
  echo "Restore test FAILED" >&2
  exit 1
fi

Run restore tests at least monthly, or after any significant changes to your backup configuration.
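To put that on a schedule too, a crontab entry like this (script path assumed) runs the test on the 1st of each month at 4 AM:

0 4 1 * * /usr/local/bin/mongodb-restore-test.sh >> /var/log/mongodb-restore-test.log 2>&1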

Conclusion

Automated MongoDB backups protect your data without requiring daily attention. Whether you use simple cron scripts or a dedicated tool like Databasus, the key is consistency and monitoring. Set up your automation, configure alerts for failures, and test your restores regularly. A backup system you can trust is one you've verified works.
