Production apps generate thousands of log lines every day. If not managed properly:
- Logs eat up disk space.
- Critical errors get buried in noise.
- Developers find out about issues too late.
So, I built a shell-powered automation that:
- Rotates and archives logs daily
- Extracts ERROR logs from gzipped archives
- Emails those errors to developers
- Sends success/failure status to Datadog Logs
Use Case:
Imagine your app (Invoice) runs in production and writes logs to /var/log/invoice/. Every night, you want to:
- Rotate those logs and gzip them
- Save them to /var/backups/logs/invoice/YYYY-MM-DD
- Extract only ERROR lines
- Email the developers a copy of that error log
- Push a status message to Datadog for audit and alerts
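At its core, the nightly flow is just `gzip`, `mv`, and `zgrep`. Here is a minimal, self-contained sketch of the rotate-and-extract steps using illustrative local paths (the real script uses `/var/log` and `/var/backups` and adds email plus Datadog on top):

```shell
# Minimal sketch of the nightly rotation step (paths are illustrative).
LOG_DIR="./demo/log/invoice"
ARCHIVE_DIR="./demo/backups/logs/invoice"
TODAY=$(date +%F)

mkdir -p "$LOG_DIR" "$ARCHIVE_DIR/$TODAY"
echo "2024-01-01 ERROR something broke" > "$LOG_DIR/app.log"

# Compress every *.log in place, then move the .gz files into the dated archive.
gzip "$LOG_DIR"/*.log
mv "$LOG_DIR"/*.gz "$ARCHIVE_DIR/$TODAY/"

# Pull only the ERROR lines back out of the compressed archives.
zgrep "ERROR" "$ARCHIVE_DIR/$TODAY"/*.gz
```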
The Script

```bash
#!/bin/bash
set -euo pipefail

APP_NAME="invoice"
LOG_DIR="/var/log/$APP_NAME"
ARCHIVE_DIR="/var/backups/logs/$APP_NAME"
TODAY=$(date +%F)
ERROR_FILE="/tmp/${APP_NAME}_errors_$TODAY.log"
EMAIL_SUBJECT="[$APP_NAME] Errors Detected - $TODAY"
DEVELOPER_EMAILS=("dev1@example.com" "dev2@example.com")
DD_API_KEY="your_datadog_api_key"
DD_LOG_TAG="error-alert"

mkdir -p "$ARCHIVE_DIR/$TODAY"

# Push a one-line status message to Datadog's HTTP log intake.
# The trailing `|| true` keeps `set -e` from aborting the whole run
# if the intake call fails.
log_to_datadog() {
    local message="$1"
    curl -X POST "https://http-intake.logs.datadoghq.com/v1/input" \
        -H "Content-Type: application/json" \
        -H "DD-API-KEY: $DD_API_KEY" \
        -d "{
            \"ddsource\": \"shell-script\",
            \"service\": \"$APP_NAME\",
            \"hostname\": \"$(hostname)\",
            \"message\": \"$message\",
            \"tags\": [\"env:prod\",\"task:$DD_LOG_TAG\"]
        }" >/dev/null 2>&1 || true
}

# Mail the collected ERROR lines to every developer on the list
# (-a attaches the file; supported by s-nail and Heirloom mailx).
send_error_email() {
    for email in "${DEVELOPER_EMAILS[@]}"; do
        mailx -s "$EMAIL_SUBJECT" -a "$ERROR_FILE" "$email" <<EOF
Hi Dev,

Errors were detected in today's logs for [$APP_NAME].
Please check the attached error log file for investigation.

Regards,
Automated Log Monitor
EOF
    done
}

echo "Rotating logs for $APP_NAME..."
if compgen -G "$LOG_DIR/*.log" > /dev/null; then
    find "$LOG_DIR" -type f -name "*.log" -exec gzip {} \;
    mv "$LOG_DIR"/*.gz "$ARCHIVE_DIR/$TODAY"
    echo "Rotated logs to $ARCHIVE_DIR/$TODAY"
    log_to_datadog "[$APP_NAME] Logs rotated successfully at $TODAY"
else
    echo "No logs to rotate."
    log_to_datadog "[$APP_NAME] No logs found for rotation on $TODAY"
fi

echo "Extracting errors..."
# zgrep searches inside the gzipped archives; `|| true` keeps `set -e`
# happy when no ERROR lines are found.
find "$ARCHIVE_DIR/$TODAY" -name "*.gz" -exec zgrep "ERROR" {} \; > "$ERROR_FILE" || true

if [ -s "$ERROR_FILE" ]; then
    echo "Found error logs. Sending alert..."
    send_error_email
    log_to_datadog "[$APP_NAME] Errors found and emailed to developers on $TODAY"
else
    echo "No errors found."
    log_to_datadog "[$APP_NAME] No errors found in logs on $TODAY"
    rm -f "$ERROR_FILE"
fi
```
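One caveat with the hand-built JSON in `log_to_datadog`: a message containing quotes or newlines would break the payload. A hedged alternative, assuming `jq` is available on the host, is to let it do the escaping (variable names mirror the script):

```shell
# Build the Datadog payload with jq so special characters in the
# message are escaped correctly.
APP_NAME="invoice"
DD_LOG_TAG="error-alert"
message='Errors "found" in rotation'

payload=$(jq -n \
  --arg source "shell-script" \
  --arg service "$APP_NAME" \
  --arg host "$(hostname)" \
  --arg msg "$message" \
  --arg tag "task:$DD_LOG_TAG" \
  '{ddsource: $source, service: $service, hostname: $host,
    message: $msg, tags: ["env:prod", $tag]}')

echo "$payload"
```

You would then pass `-d "$payload"` to `curl` instead of the inline string.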
Schedule It With Cron

```bash
# /etc/cron.d/log-monitor  (cron.d entries require a user field)
0 2 * * * root /opt/scripts/rotate-and-alert-errors.sh >> /var/log/log-monitor.log 2>&1
```
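If a run ever overlaps (a slow night plus a manual invocation, say), two copies could race to gzip and move the same files. A hedged refinement, assuming `flock` from util-linux is installed, is to serialize runs with a lock file (the lock path here is illustrative):

```shell
# Serialize runs with flock: -n makes a second invocation exit
# immediately instead of waiting if the lock is already held.
LOCK_FILE="/tmp/log-monitor.lock"   # illustrative lock path

out=$(flock -n "$LOCK_FILE" -c 'echo "rotation ran"')
echo "$out"

# In cron this becomes:
# 0 2 * * * root flock -n /tmp/log-monitor.lock /opt/scripts/rotate-and-alert-errors.sh >> /var/log/log-monitor.log 2>&1
```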
How This Helps DevOps & Developers
- Proactive Monitoring: no more "surprise bugs"; devs get error logs before users notice.
- Disk-space Hygiene: logs are rotated and compressed, preventing overflow.
- Observability: every operation is visible in Datadog Logs, so you can build monitors, dashboards, or anomaly alerts on top.
Conclusion
- Build once, run forever
- Alert smartly, not noisily
- Give developers what they need, when they need it