In the world of DevOps, there's a golden rule: If it’s not backed up, it doesn’t exist.
While there are many enterprise-grade backup solutions available, sometimes you need something lightweight, highly customizable, and easy to integrate into your existing workflows. That's where a well-crafted Bash script comes in.
In this post, I'll walk you through a Robust File Backup Script I built that handles compression, remote transfers, retention policies, and detailed logging—all while following core DevOps principles.
Key Features
Our backup script isn't just a simple `cp` command. It's designed to be production-ready with features like:
- Smart Compression: Uses `tar` and `gzip` to minimize storage space.
- Configuration Decoupling: All settings live in a separate `backup.conf` file (Infrastructure as Code lite).
- Dry Run Mode: A `-d` flag to see exactly what would happen without actually doing it.
- Automatic Retention: Keeps your disk clean by deleting local backups older than `N` days.
- Secure Remote Transfer: Optionally sends your archives to a remote server via `scp`.
- Comprehensive Logging: Every action, warning, and error is timestamped and logged for auditing.
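As a rough sketch of the compression step, the core is a timestamped `tar` archive of every configured source path. The `demo_src` directory below is purely illustrative; in the real script, `SOURCE_PATHS` and `BACKUP_DIR` come from `backup.conf`:

```bash
# Demo stand-ins for values normally loaded from backup.conf.
mkdir -p demo_src && echo "hello" > demo_src/file.txt
SOURCE_PATHS=("demo_src")
BACKUP_DIR="./backups"
mkdir -p "$BACKUP_DIR"

# A timestamped name keeps archives unique and sortable.
BACKUP_NAME="backup_$(date +%Y%m%d_%H%M%S).tar.gz"

# -c create, -z gzip-compress, -f write to the named file.
tar -czf "$BACKUP_DIR/$BACKUP_NAME" "${SOURCE_PATHS[@]}"
```

The `date`-based naming is also what makes the retention step later possible: old archives can be matched by pattern and age.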
🛠 The Technical Breakdown
1. Separation of Concerns
We keep our logic (backup.sh) separate from our settings (backup.conf). This makes the script portable across different environments (Dev, Stage, Prod) without modification.
backup.conf example:
```bash
SOURCE_PATHS=(
  "/var/www/html"
  "/etc/nginx/conf.d"
)
BACKUP_DIR="./backups"
RETENTION_DAYS=7
ENABLE_REMOTE="true"
REMOTE_HOST="backup-server.local"
```
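Inside `backup.sh`, loading this file can be as simple as a sanity check followed by `source`. A minimal sketch (the heredoc at the top writes a demo config and stands in for your real `backup.conf`):

```bash
# Demo setup: write a minimal config (your real backup.conf replaces this).
cat > backup.conf <<'EOF'
BACKUP_DIR="./backups"
RETENTION_DAYS=7
EOF

CONFIG_FILE="${CONFIG_FILE:-./backup.conf}"

# Fail fast and loudly if the config is missing.
if [[ ! -f "$CONFIG_FILE" ]]; then
  echo "ERROR: config file '$CONFIG_FILE' not found" >&2
  exit 1
fi

# shellcheck source=/dev/null
source "$CONFIG_FILE"
echo "Retention: $RETENTION_DAYS days"
```

Because the config is plain Bash, switching environments is just a matter of passing a different file path.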
2. Flexible Argument Parsing
Using `getopts`, the script behaves like a professional CLI tool: you can point it at a custom config file or trigger a dry run with a single flag. Handling the `\?` case means an unknown flag fails loudly instead of being silently ignored.

```bash
while getopts ":c:dh" opt; do
  case ${opt} in
    c ) CONFIG_FILE=$OPTARG ;;
    d ) DRY_RUN=true ;;
    h ) usage ;;
    \? ) echo "Invalid option: -$OPTARG" >&2; usage ;;
  esac
done
```
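The `usage` function referenced above isn't shown in the post; one plausible minimal version (the exact wording and flags here are an assumption based on the options the script parses) is:

```bash
# Hypothetical usage() helper matching the getopts string ":c:dh".
usage() {
  cat <<'EOF'
Usage: backup.sh [-c config_file] [-d] [-h]
  -c FILE  Use FILE instead of the default backup.conf
  -d       Dry run: print planned actions without executing them
  -h       Show this help text
EOF
  exit 0
}
```

Printing the help via a quoted heredoc keeps the text readable in the source and immune to accidental variable expansion.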
3. The Power of find for Retention
Managing disk space is crucial. We use `find` with the `-mtime` flag to identify and remove archives older than the retention window automatically.

```bash
find "$BACKUP_DIR" -type f -name "backup_*.tar.gz" -mtime +"$RETENTION_DAYS" -exec rm {} \;
```
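This is also where the dry-run flag earns its keep: the same `find` expression can either preview or delete. A sketch (the two `touch` lines fabricate an old and a new archive for demonstration; `-delete` is used here as a common alternative to `-exec rm {} \;`):

```bash
# Demo setup: one archive backdated 10 days, one fresh (GNU touch -d).
BACKUP_DIR="./backups"; RETENTION_DAYS=7
mkdir -p "$BACKUP_DIR"
touch -d "10 days ago" "$BACKUP_DIR/backup_old.tar.gz"
touch "$BACKUP_DIR/backup_new.tar.gz"

if [[ "${DRY_RUN:-false}" == "true" ]]; then
  # Preview only: print what would be removed.
  find "$BACKUP_DIR" -type f -name "backup_*.tar.gz" -mtime +"$RETENTION_DAYS" -print
else
  # Really remove archives older than the retention window.
  find "$BACKUP_DIR" -type f -name "backup_*.tar.gz" -mtime +"$RETENTION_DAYS" -delete
fi
```

Note that `-mtime +7` matches files strictly older than 7 full days, which is usually what you want for "keep 7 days of backups."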
4. Secure Remote Offloading
A backup on the same disk isn't a true backup. Our script supports secure transfer to a remote host:
```bash
scp "$BACKUP_DIR/$BACKUP_NAME" "$REMOTE_USER@$REMOTE_HOST:$REMOTE_PATH"
```
📖 Lessons Learned & DevOps Principles
Building this project reinforced several key concepts:
- Automation over Manual Work: Human error is a leading cause of data loss. Automating the backup removes the "I forgot" factor.
- Idempotency & Resilience: The script checks if directories exist and if source paths are valid before starting.
- Visibility: "In DevOps, if it wasn't logged, it didn't happen." Detailed logs are essential for debugging scheduled cron jobs.
- Safety First: The Dry Run mode is a lifesaver when testing new configurations on a production server.
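To make the Visibility point concrete, here is one way a timestamped logger could look. The `log` helper and the `LOG_FILE` default are illustrative, not the script's exact implementation:

```bash
LOG_FILE="${LOG_FILE:-./backup.log}"

# Timestamped logger: prints to stdout and appends to the log file,
# so both interactive runs and cron jobs leave an audit trail.
log() {
  local level="$1"; shift
  printf '%s [%s] %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$level" "$*" | tee -a "$LOG_FILE"
}

log INFO  "Backup started"
log ERROR "Example error entry"
```

Routing everything through one function means a single change (say, adding syslog output) upgrades every log line at once.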
How to Use It
- Clone the Repo: [Link to your repo here]
- Configure: Edit `backup.conf` with your paths.
- Make it Executable: `chmod +x backup.sh`
- Test it: `./backup.sh -d` (dry run)
- Run it: `./backup.sh`
- Schedule it: Add it to your `crontab` to run nightly!

```bash
0 2 * * * /path/to/backup.sh >> /path/to/logs/cron.log 2>&1
```
Future Enhancements
- Adding AWS S3 support using the AWS CLI.
- Implementing Slack/Email notifications on failure.
- Adding Checksum verification to ensure data integrity after transfer.
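For the checksum idea, one lightweight approach is to write a `sha256sum` file next to each archive and verify it after transfer. A sketch under that assumption (the `echo` line fabricates a stand-in archive; in practice you'd run the verification step on the remote host over `ssh`):

```bash
# Demo setup: a stand-in archive (the real one comes from tar).
BACKUP_DIR="./backups"; mkdir -p "$BACKUP_DIR"
BACKUP_NAME="backup_demo.tar.gz"
echo "demo" > "$BACKUP_DIR/$BACKUP_NAME"

# Record a checksum next to the archive...
( cd "$BACKUP_DIR" && sha256sum "$BACKUP_NAME" > "$BACKUP_NAME.sha256" )

# ...and verify it; exits non-zero on any mismatch.
( cd "$BACKUP_DIR" && sha256sum -c "$BACKUP_NAME.sha256" )
```

Generating the checksum inside the backup directory keeps the recorded paths relative, so the same `.sha256` file verifies cleanly wherever the pair of files ends up.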
What does your backup strategy look like? Do you prefer simple scripts or complex tools? Let’s discuss in the comments! 👇