MongoDB is a critical component in modern application architectures, and protecting your data is essential. The mongodump utility provides a straightforward approach to creating backups of your MongoDB databases. This tutorial will guide you through everything you need to know about using mongodump effectively, from basic backup commands to advanced techniques for production environments.
Understanding mongodump
mongodump is MongoDB's official backup utility that creates binary exports of your database data. It connects to your MongoDB instance and exports collections in BSON format, preserving data types and structures exactly as they exist in your database. Understanding how mongodump works helps you choose the right backup strategy for your infrastructure.
What mongodump does
mongodump creates logical backups by reading data from MongoDB and writing it to disk in BSON format. The tool captures collections, indexes and metadata, making it suitable for database migrations, archiving and disaster recovery. Unlike file system snapshots, mongodump works at the database level, allowing selective backups of specific databases or collections.
The utility produces two files for each collection: a .bson file containing the actual data and a .metadata.json file describing indexes and collection options. This structure makes mongodump backups portable across different MongoDB versions and architectures, though some features may not transfer between significantly different versions.
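For example, dumping a hypothetical shop database containing a single users collection yields a layout like:
dump/
└── shop/
    ├── users.bson
    └── users.metadata.json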
When to use mongodump
mongodump works well for several scenarios:
- Small to medium databases (under 100GB)
- Development and staging environments
- Database migrations between servers
- Selective collection backups
- Cross-platform database transfers
For large production databases, consider filesystem snapshots or MongoDB Atlas automated backups instead. mongodump performance degrades significantly with database size, and the backup process can impact production performance on busy systems.
Installing mongodump
mongodump is part of the MongoDB Database Tools package. Since MongoDB 4.4, the database tools are distributed separately from the MongoDB server.
Installing on Linux
Ubuntu/Debian:
wget https://fastdl.mongodb.org/tools/db/mongodb-database-tools-ubuntu2204-x86_64-100.9.4.tgz
tar -zxvf mongodb-database-tools-*.tgz
sudo cp mongodb-database-tools-*/bin/* /usr/local/bin/
RHEL/CentOS:
wget https://fastdl.mongodb.org/tools/db/mongodb-database-tools-rhel80-x86_64-100.9.4.tgz
tar -zxvf mongodb-database-tools-*.tgz
sudo cp mongodb-database-tools-*/bin/* /usr/local/bin/
Installing on macOS
Use Homebrew for the easiest installation:
brew tap mongodb/brew
brew install mongodb-database-tools
Installing on Windows
Download the MSI installer from the MongoDB Download Center and run it. The installer adds mongodump to your system PATH automatically.
Verifying installation
Check that mongodump is installed correctly:
mongodump --version
You should see the version information, confirming mongodump is ready to use.
Basic mongodump usage
The simplest mongodump command creates a backup of all databases on your local MongoDB instance.
Creating your first backup
Connect to localhost and back up all databases:
mongodump
This creates a dump directory in your current location containing all databases. Each database gets its own subdirectory with BSON and metadata files for every collection.
Backing up a specific database
Specify a database name to back up only that database:
mongodump --db production
For a specific collection within a database:
mongodump --db production --collection users
Specifying output location
Choose where to store backups with the --out parameter:
mongodump --db production --out /backups/mongodb/2026-01-12
Organizing backups by date makes it easier to find specific backup versions later.
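One way to stamp the path automatically is with the shell's date command:
mongodump --db production --out "/backups/mongodb/$(date +%F)"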
Connection options
mongodump supports various connection methods for local and remote MongoDB instances.
Connecting to remote MongoDB
Specify host and port for remote connections:
mongodump --host mongodb.example.com --port 27017 --db production
Using connection strings
MongoDB connection strings provide a compact way to specify connection details:
mongodump --uri="mongodb://username:password@mongodb.example.com:27017/production"
For replica sets:
mongodump --uri="mongodb://user:pass@host1:27017,host2:27017,host3:27017/production?replicaSet=rs0"
Authentication
Provide credentials for authenticated MongoDB instances:
mongodump --host mongodb.example.com \
--username backup_user \
--password your_password \
--authenticationDatabase admin \
--db production
For production scripts, omit the --password flag entirely so mongodump prompts for the password securely, or read credentials from environment variables.
Advanced mongodump options
Production environments require more sophisticated backup configurations. These options help optimize performance, reduce backup size and control what data gets backed up.
Compression options
Enable gzip compression to reduce backup size:
mongodump --db production --gzip --out /backups/compressed
Compression typically reduces backup size by 70-80%, though it adds CPU overhead during both backup and restore operations.
Query-based backups
Back up only documents matching specific criteria:
mongodump --db production \
--collection orders \
--query '{"status": "completed", "created_at": {"$gte": {"$date": "2026-01-01T00:00:00Z"}}}'
This technique helps create partial backups for archival or analysis purposes.
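For longer filters, the --queryFile option reads the same extended JSON query from a file instead of the command line (the file name here is arbitrary):
echo '{"status": "completed"}' > completed_orders.json
mongodump --db production --collection orders --queryFile completed_orders.json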
Parallel collection dumping
Speed up backups of databases with many collections:
mongodump --db production --numParallelCollections=4 --out /backups/parallel
The --numParallelCollections parameter controls how many collections mongodump processes simultaneously. Higher values speed up backups but increase load on your MongoDB server.
Oplog capture for point-in-time backups
Capture oplog entries during backup for point-in-time recovery:
mongodump --oplog --out /backups/with-oplog
The --oplog option requires a replica set (standalone instances have no oplog) and only works when backing up the entire instance, not individual databases. It creates an oplog.bson file recording the write operations that occur while the dump runs, so mongorestore --oplogReplay can produce a consistent snapshot of the instance as of the end of the backup window.
Excluding collections
Skip specific collections during backup:
mongodump --db production \
--excludeCollection=logs \
--excludeCollection=temp_data \
--out /backups/filtered
Use this to avoid backing up large collections that contain temporary or regenerable data.
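If the collections you want to skip share a naming prefix, the related --excludeCollectionsWithPrefix flag drops them all at once:
mongodump --db production --excludeCollectionsWithPrefix=tmp_ --out /backups/filtered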
Restoring with mongorestore
mongodump backups are restored using the mongorestore utility, which reads BSON files and recreates collections in your MongoDB instance.
Basic restore
Restore an entire backup directory:
mongorestore /backups/mongodb/2026-01-12
This restores all databases found in the backup directory.
Restoring a specific database
Restore to a different database name:
mongorestore --db production_restored /backups/mongodb/2026-01-12/production
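On current Database Tools releases, the namespace-mapping flags perform the same rename and are what the tools documentation recommends over the older --db option:
mongorestore --nsFrom='production.*' --nsTo='production_restored.*' /backups/mongodb/2026-01-12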
Restoring a single collection
Restore one collection without affecting others:
mongorestore --db production \
--collection users \
/backups/mongodb/2026-01-12/production/users.bson
Drop existing collections before restore
Replace existing data completely:
mongorestore --drop /backups/mongodb/2026-01-12
The --drop flag removes existing collections before restoring, ensuring a clean state. Without it, mongorestore adds documents to existing collections, which can cause duplicate key errors.
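When in doubt, mongorestore's --dryRun flag walks the backup and reports what would be restored without writing anything:
mongorestore --dryRun --verbose /backups/mongodb/2026-01-12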
Restoring compressed backups
mongorestore automatically detects and decompresses gzipped backups:
mongorestore --gzip /backups/compressed
Point-in-time restore with oplog
Apply oplog entries to restore to a specific moment:
mongorestore --oplogReplay --oplogLimit=1768176000:1 /backups/with-oplog
The --oplogLimit parameter takes a <seconds-since-epoch>:<increment> timestamp (the value above corresponds to 2026-01-12 00:00:00 UTC) and stops replay at that point, enabling recovery to just before a data corruption event.
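To pick the right timestamp, you can inspect the captured oplog with bsondump (shipped in the same Database Tools package) and read the ts field of the last operation you want applied:
bsondump /backups/with-oplog/oplog.bson | tail -n 5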
Production backup strategies
Running mongodump in production requires careful planning to balance backup frequency, storage requirements and performance impact.
Scheduling backups with cron
Create a backup script (/usr/local/bin/mongodb-backup.sh):
#!/bin/bash
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR="/backups/mongodb/$DATE"
MONGODB_PASSWORD=$(cat /secure/mongodb-backup.password)
mongodump \
  --uri="mongodb://backup_user:${MONGODB_PASSWORD}@localhost:27017" \
  --oplog \
  --gzip \
  --out "$BACKUP_DIR"
# Keep only last 7 days of backups
find /backups/mongodb -mindepth 1 -maxdepth 1 -type d -mtime +7 -exec rm -rf {} +
Make it executable and add to crontab:
chmod +x /usr/local/bin/mongodb-backup.sh
crontab -e
Add this line for daily 2 AM backups:
0 2 * * * /usr/local/bin/mongodb-backup.sh >> /var/log/mongodb-backup.log 2>&1
Backup retention policies
Implement a retention strategy that balances storage costs with recovery needs:
| Backup age | Frequency | Retention |
|---|---|---|
| 0-7 days | Daily | Keep all |
| 7-30 days | Weekly | Keep Sunday backups |
| 30+ days | Monthly | Keep first of month |
This provides granular recent backups while managing long-term storage costs.
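A minimal GNU bash sketch of this policy, assuming backup directories named YYYYMMDD_HHMMSS as created by the cron script above (it would replace that script's simple seven-day find cleanup):
#!/bin/bash
# Tiered-retention sketch: dailies for 7 days, Sunday weeklies for 30 days,
# first-of-month backups kept beyond that. Requires GNU date.
now=$(date +%s)
for dir in /backups/mongodb/*/; do
  [[ -d "$dir" ]] || continue
  name=$(basename "$dir")
  day=${name%%_*}                                   # strip the _HHMMSS suffix
  age=$(( (now - $(date -d "$day" +%s)) / 86400 ))  # age in days
  dow=$(date -d "$day" +%u)                         # 7 = Sunday
  dom=$(date -d "$day" +%d)                         # day of month
  (( age <= 7 )) && continue                        # keep all dailies
  (( age <= 30 )) && [[ $dow == 7 ]] && continue    # keep Sunday weeklies
  [[ $dom == "01" ]] && continue                    # keep monthly firsts
  rm -rf "$dir"
done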
Minimizing production impact
Reduce backup load on production systems:
- Run backups during low-traffic periods
- Use --readPreference=secondary to back up from replica set secondaries
- Enable compression to reduce I/O
- Limit parallel collections on busy systems
- Monitor backup duration and adjust accordingly
For replica sets, always prefer backing up from secondary nodes:
mongodump --uri="mongodb://host1,host2,host3/production?replicaSet=rs0&readPreference=secondary"
Automated backups with Databasus
Manual backup scripts work for simple setups, but production environments benefit from dedicated backup management tools. Databasus is a free, open source backup solution that automates MongoDB backups with scheduling, multiple storage destinations and team notifications.
Installing Databasus
Install Databasus using Docker:
docker run -d \
--name databasus \
-p 4005:4005 \
-v ./databasus-data:/databasus-data \
--restart unless-stopped \
databasus/databasus:latest
Or with Docker Compose:
services:
  databasus:
    container_name: databasus
    image: databasus/databasus:latest
    ports:
      - "4005:4005"
    volumes:
      - ./databasus-data:/databasus-data
    restart: unless-stopped
Start the service:
docker compose up -d
Access the web interface at http://localhost:4005 and create your account.
Configuring MongoDB backups in Databasus
- Add your database: Click "New Database" and select MongoDB as the database type
- Enter connection details: Provide your MongoDB host, port, database name and authentication credentials
- Select storage: Choose from local storage, S3, Google Cloud Storage, Dropbox, SFTP or other supported destinations
- Configure schedule: Set hourly, daily, weekly, monthly or custom cron-based backup intervals
- Add notifications (optional): Configure Slack, Discord, Telegram or email alerts for backup status
- Create backup: Databasus validates your configuration and begins the backup schedule
Databasus handles compression, encryption and retention automatically. The platform provides backup history, restoration tools and audit logs for team environments.
Backup encryption and security
MongoDB backups contain sensitive data and require the same security measures as your production database.
Encrypting backup files
Encrypt mongodump output immediately after creation:
mongodump --db production --archive | \
openssl enc -aes-256-cbc -salt -pbkdf2 -pass file:/etc/mongodb/backup.key \
-out /backups/production_encrypted_$(date +%Y%m%d).archive.enc
Decrypt during restore:
openssl enc -aes-256-cbc -d -pbkdf2 -pass file:/etc/mongodb/backup.key \
-in /backups/production_encrypted_20260112.archive.enc | \
mongorestore --archive
Store encryption keys separately from backup files. Use key management services like AWS KMS, HashiCorp Vault or Azure Key Vault for production systems.
Secure credential management
Never hardcode passwords in backup scripts. Use environment variables or credential files with restricted permissions:
export MONGODB_PASSWORD=$(cat /secure/mongodb-backup.password)
mongodump --uri="mongodb://backup_user:$MONGODB_PASSWORD@localhost:27017/production"
Set file permissions to prevent unauthorized access:
chmod 600 /secure/mongodb-backup.password
Network security
For backups over untrusted networks, use TLS connections:
mongodump --uri="mongodb://user:pass@mongodb.example.com:27017/production?tls=true&tlsCAFile=/etc/ssl/mongodb-ca.crt"
Enable MongoDB client certificate authentication for additional security.
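Client certificates can be supplied through the same URI style; tlsCertificateKeyFile points at the client's PEM file (the paths below are illustrative):
mongodump --uri="mongodb://user:pass@mongodb.example.com:27017/production?tls=true&tlsCAFile=/etc/ssl/mongodb-ca.crt&tlsCertificateKeyFile=/etc/ssl/backup-client.pem"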
Monitoring and testing backups
Creating backups is only half the solution. Regular testing ensures backups remain restorable and meet your recovery requirements.
Verifying backup integrity
Test backups regularly in a separate environment:
# Restore to test database
mongorestore --db production_test /backups/mongodb/2026-01-12/production
# Run validation
mongosh production_test --eval "db.runCommand({validate: 'users'})"
# Clean up
mongosh production_test --eval "db.dropDatabase()"
Schedule monthly restoration tests to verify backup quality and train team members on recovery procedures.
Backup monitoring
Track backup metrics to detect issues early:
| Metric | Why it matters | Alert threshold |
|---|---|---|
| Backup duration | Detects performance degradation | 50% increase from baseline |
| Backup size | Identifies unexpected data growth | 100% increase week over week |
| Success rate | Catches configuration issues | Any failure |
| Last successful backup | Prevents stale backups | More than 48 hours old |
Implement monitoring using backup script exit codes and logging. Send alerts to your team through monitoring systems like Prometheus, Grafana or Datadog.
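A minimal sketch of exit-code-based alerting, assuming a webhook URL in SLACK_WEBHOOK_URL (the endpoint and message format are placeholders for your own monitoring system):
#!/bin/bash
LOG=/var/log/mongodb-backup.log
if mongodump --gzip --out "/backups/mongodb/$(date +%F)"; then
  echo "$(date -Is) backup OK" >> "$LOG"
else
  echo "$(date -Is) backup FAILED" >> "$LOG"
  # Alert the team; swap this for your monitoring system's ingestion endpoint
  curl -s -X POST -H 'Content-Type: application/json' \
    -d '{"text":"MongoDB backup failed on '"$(hostname)"'"}' \
    "$SLACK_WEBHOOK_URL"
fi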
Recovery time objectives
Document and test your recovery metrics:
- RTO (Recovery Time Objective): Maximum acceptable downtime
- RPO (Recovery Point Objective): Maximum acceptable data loss
If your RTO is 30 minutes, ensure you can restore from backup within that timeframe. If your RPO is 1 hour, run backups at least hourly.
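A simple way to validate the RTO is to time a full restore into a scratch database during your scheduled tests:
time mongorestore --db production_test /backups/mongodb/2026-01-12/production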
Common mongodump issues and solutions
Even experienced administrators encounter problems with mongodump. Here are solutions to common issues.
Out of memory errors
Large collections can exhaust available memory during a dump. The --forceTableScan option, which reads documents in natural order rather than via the _id index, can reduce memory pressure in some cases:
mongodump --db production --forceTableScan --out /backups/low-memory
Alternatively, back up collections individually to control memory consumption.
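A sketch of that per-collection loop (the collection names are placeholders for your own):
for coll in users orders sessions; do
  mongodump --db production --collection "$coll" --out /backups/low-memory
done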
Slow backup performance
If backups run slower than expected, try these adjustments:
- Back up from secondary nodes in replica sets
- Enable compression to reduce I/O
- Reduce --numParallelCollections if it overloads the server
- Use faster storage for the backup destination
- Avoid backing up during peak traffic periods
Connection timeouts
Increase timeout values for slow networks or large databases. The database tools read the socketTimeoutMS option from the connection string:
mongodump --uri="mongodb://localhost:27017/production?socketTimeoutMS=300000" --out /backups/timeout-fix
The socketTimeoutMS value is specified in milliseconds; 300000 allows five minutes per socket operation.
Backup size concerns
Large backups consume significant storage. Reduce size by:
- Enabling gzip compression with --gzip
- Excluding unnecessary collections with --excludeCollection
- Implementing tiered retention (keep daily for 7 days, weekly for 30 days)
- Archiving old data to separate databases
Backup best practices
Following these best practices ensures reliable, maintainable backup operations.
The 3-2-1 backup rule
Always implement the 3-2-1 strategy:
- 3 copies of your data: production database, local backup and remote backup
- 2 different media types: local disk and cloud storage
- 1 offsite location: different physical location from production
This protects against hardware failures, natural disasters and ransomware attacks.
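For the offsite copy, syncing the backup directory to object storage after each run is a common pattern; a sketch using the AWS CLI (the bucket name is a placeholder):
aws s3 sync /backups/mongodb "s3://example-backups/mongodb/$(hostname)" --storage-class STANDARD_IA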
Automate everything
Manual backups fail due to human error. Automate:
- Backup execution with cron or orchestration tools
- Backup verification through automated restore tests
- Monitoring and alerting for backup failures
- Retention policy enforcement
- Offsite backup transfers
Document procedures
Create runbooks documenting:
- Backup schedules and retention policies
- Restoration procedures with step-by-step commands
- Contact information for escalations
- Recovery time objectives and test results
Train multiple team members on backup and restoration to avoid single points of failure.
Conclusion
mongodump provides a reliable, straightforward approach to MongoDB backups. While it has limitations with very large databases, it excels in portability, simplicity and selective backup capabilities. Start with basic daily backups, add compression and encryption as you gain confidence, and implement automation to eliminate human error.
Remember that backups are insurance against data loss. Test restoration procedures regularly, maintain offsite copies and document recovery processes. When disaster strikes, proper backup preparation determines whether you experience brief inconvenience or catastrophic data loss.
