Alan Varghese

Automate Your Server Maintenance: A Simple Bash Script for Log Cleanup

Tired of logs eating up your disk space? Learn how to build a simple, automated log cleanup script using Bash.

Every developer or sysadmin has been there: you log into a server to investigate an issue, only to find that the disk is 100% full. More often than not, the culprit is a mountain of old log files that haven't been touched in months.

While tools like logrotate are the industry standard, sometimes you need a lightweight, custom solution that you can understand and deploy in seconds.

In this post, I'll walk you through a simple Bash script I built to automate log cleanup and keep your storage breathing easy.

The Problem

Applications generate logs—lots of them. Whether it's access logs, error logs, or debug traces, these files grow silently. Without a retention policy, they will eventually consume all available disk space, potentially crashing your applications.
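Before writing any cleanup script, it helps to confirm that logs really are the culprit. A minimal sketch using `du`; the `/tmp/demo_logs` directory and its 100 KB file are created purely for the demo:

```shell
#!/bin/bash
# Create a throwaway directory with one 100 KB "log" for the demo
mkdir -p /tmp/demo_logs
dd if=/dev/zero of=/tmp/demo_logs/app.log bs=1024 count=100 2>/dev/null

# Total space the directory consumes, human-readable
du -sh /tmp/demo_logs

# Largest entries first, to spot the worst offenders
du -a /tmp/demo_logs | sort -rn | head -n 5
```

In a real investigation you would point `du` at your actual log directory (e.g. `/var/log`) instead of the demo path.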

The Solution: auto_clean_log.sh

I created a script that identifies .log files older than a specific number of days, deletes them, and generates a report of the actions taken.

The Code

Here is the core of the script:

```shell
#!/bin/bash

# Configuration
LOG_DIR="/path/to/your/logs"
RETENTION_DAYS=7
REPORT="cleanup_report.log"

echo "------ Log Cleanup Report $(date) ---------" >> "$REPORT"

COUNT=0

# Find files older than RETENTION_DAYS; -print0 plus a null-delimited
# read loop handles filenames containing spaces safely
while IFS= read -r -d '' file
do
    echo "Deleting $file" >> "$REPORT"
    rm -f -- "$file"
    COUNT=$((COUNT + 1))
done < <(find "$LOG_DIR" -name "*.log" -type f -mtime +"$RETENTION_DAYS" -print0)

if [ "$COUNT" -eq 0 ]
then
    echo "No old log files found." >> "$REPORT"
else
    echo "$COUNT files deleted." >> "$REPORT"
fi
echo "---------------------------------------------------" >> "$REPORT"
```

How It Works

  1. find $LOG_DIR -name "*.log" -type f -mtime +$RETENTION_DAYS: This is the heart of the script.
    • -name "*.log": Targets only files ending in .log.
    • -type f: Matches regular files only, not directories.
    • -mtime +$RETENTION_DAYS: Matches files last modified more than $RETENTION_DAYS days ago.
  2. The Loop: The script iterates over every matched file, deletes it with rm -f, and increments a counter.
  3. Reporting: Every action is appended to cleanup_report.log, giving you a clear audit trail of what was deleted and when.
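Before trusting the script with `rm`, you can preview exactly what the `find` expression will match. A dry run under a throwaway `/tmp/demo_logs` directory, using GNU `touch -d` to backdate one file past the retention window:

```shell
#!/bin/bash
# Dry run: list what would be deleted, without deleting anything.
# /tmp/demo_logs is a throwaway directory used purely for illustration.
LOG_DIR="/tmp/demo_logs"
RETENTION_DAYS=7

mkdir -p "$LOG_DIR"
touch "$LOG_DIR/fresh.log"                   # modified now -> not matched
touch -d "10 days ago" "$LOG_DIR/stale.log"  # backdated -> matched

find "$LOG_DIR" -name "*.log" -type f -mtime +"$RETENTION_DAYS"
# → /tmp/demo_logs/stale.log
```

Once the dry run prints only files you genuinely want gone, the deletion loop can be enabled with confidence.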

Setup & Automation

To make this truly "set it and forget it," follow these steps:

1. Make it Executable

```shell
chmod +x auto_clean_log.sh
```
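If you're unsure whether the executable bit took, you can verify it before handing the script to cron. A self-contained check using a stub at `/tmp/auto_clean_log.sh` (a placeholder path for the demo, not the real script):

```shell
#!/bin/bash
# Create a stub script standing in for auto_clean_log.sh
printf '#!/bin/bash\necho ok\n' > /tmp/auto_clean_log.sh
chmod +x /tmp/auto_clean_log.sh

# test -x confirms the executable bit before we try to run it
test -x /tmp/auto_clean_log.sh && /tmp/auto_clean_log.sh
# → ok
```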

2. Automate with Cron

You shouldn't have to run this manually. Add it to your crontab to run every night at midnight:

```shell
crontab -e
```

Add this line:

```shell
0 0 * * * /path/to/your/script/auto_clean_log.sh
```
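One caveat: cron runs jobs with a minimal environment and either mails or discards their output. Redirecting stdout and stderr into a file keeps failures visible; the log path here is a suggestion of mine, not part of the original script:

```shell
# Nightly at 00:00; append all output to a log so cron errors aren't lost
0 0 * * * /path/to/your/script/auto_clean_log.sh >> "$HOME/auto_clean_cron.log" 2>&1
```

You can confirm the entry was saved with `crontab -l`.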

Why This Matters

  • Performance: Prevents disk I/O issues related to full partitions.
  • Cost: Reduces storage costs on cloud providers.
  • Security/Compliance: Helps adhere to data retention policies by ensuring old logs aren't kept indefinitely.

Conclusion

Automation doesn't always require complex tools or heavy frameworks. Sometimes, a 20-line Bash script is exactly what you need to solve a recurring headache.

Feel free to check out the full project on my GitHub!

How do you handle log management in your environment? Let me know in the comments!
