From file cleanups to server health checks, these scripts are your new sidekicks.
Introduction
Let’s face it: DevOps is equal parts adrenaline and monotony. One moment you’re deploying microservices like a wizard; the next, you’re manually clearing logs or restarting crashed containers for the hundredth time. That’s when you realize: you didn’t sign up to be a human cron job.
Enter bash scripting: your underrated sidekick.
Bash isn’t flashy. It won’t win “Dev Tool of the Year” on Reddit. But when you’re trying to automate that one annoying workflow (you know the one), bash is there. Quiet. Reliable. Fast. Like an old-school sysadmin in a hoodie.
In this article, I’m dropping 15 battle-tested bash scripts that every beginner DevOps engineer should have in their toolkit. These aren’t copy-paste Stack Overflow snippets; these are practical, ready-to-deploy scripts you can actually use to automate your real-life nightmares.
You’ll find scripts to:
- Monitor your system’s health
- Keep your disk from exploding
- Deploy containers on command
- Automate backups before you cry
- And yes, even send alerts before everything catches fire
Whether you’re new to DevOps or just tired of wasting brain cells on routine tasks, these scripts will save you hours, reduce errors, and maybe, just maybe, give you time to finally finish Elden Ring.
Why bash? Still relevant in 2025?
You’d think by now we’d have replaced bash with AI that types for us while we sip coffee and talk Kubernetes at conferences. But nah, bash still slaps.
Why?
Because when you need to:
- check disk space fast
- automate a backup
- restart a crashed process
- or clean up files at 3AM before your logs eat the server alive…
you don’t want to spin up Python or wait for your Ansible playbook to warm up. You want a script that just runs.
Bash is fast, dirty, and already installed.
It’s the closest thing to a universal language in Linux-land. Whether you’re in AWS, GCP, DigitalOcean, or some random VPS running CentOS 7 from 2014, bash is there waiting like:
“Say less. I got you.”
Also, bash scripting forces you to think like a system. You learn how files, processes, and memory work together. And if you’re in DevOps, that’s the name of the game.
So yes, bash might not be cool like Rust or slick like Go. But it’s your best friend when s*** breaks. And in DevOps, that’s half the job.
Script 1: check system health like a boss
Ever SSH into a server and feel that “something’s off” vibe? Maybe it’s a high load, or maybe the memory’s leaking faster than your weekend energy. Either way, you want answers fast.
This bash script gives you a quick snapshot of system health: CPU, memory, and disk usage in one neat summary.
Script: check_system_health.sh
#!/bin/bash
echo "📊 System Health Report - $(date)"
echo "------------------------------------"
echo "🧠 Memory Usage:"
free -h
echo
echo "🔥 CPU Load:"
uptime
echo
echo "💾 Disk Space:"
df -h /
echo
echo "✅ Top 5 memory-hungry processes:"
ps -eo pid,comm,%mem,%cpu --sort=-%mem | head -n 6
What it does:
- Shows free and used memory in human-readable form
- Displays current CPU load and user sessions
- Lists how much disk space is left on /
- Ranks the top 5 processes hogging memory
How to use it:
- Save as check_system_health.sh
- Make it executable: chmod +x check_system_health.sh
- Run it: ./check_system_health.sh
You can even drop it into a cron job and get a daily report emailed to you. Or better, run it before you Slack your infra team saying “the server feels slow”.
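Something like this crontab entry would do it (the path and address are placeholders, and the emailed report assumes your box can actually send mail):
# daily system health report at 08:00
MAILTO=you@example.com
0 8 * * * /opt/scripts/check_system_health.sh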
Script 2: find the biggest files eating your disk
Ever had your server crash because it ran out of disk space and all you could say was “how the hell did this happen?”
Spoiler: it’s always some sneaky log file or rogue .tar.gz from 6 months ago.
This script helps you track down disk hogs in seconds.
Script: biggest_files.sh
#!/bin/bash
DIR=${1:-.}
echo "🔎 Searching for the biggest files in: $DIR"
echo "--------------------------------------------"
find "$DIR" -type f -exec du -h {} + 2>/dev/null | sort -rh | head -n 10
What it does:
- Scans the given directory (or current one if none is specified)
- Calculates file sizes
- Sorts them from largest to smallest
- Lists the top 10 chonkers on your server
Usage:
chmod +x biggest_files.sh
./biggest_files.sh /var/log
Boom. Instant clarity. Now you know exactly what’s bloating your box.
Bonus tip:
Hook this script into a cron job that emails you weekly. Prevention is cheaper than a panic reboot.
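A rough sketch of that cron entry, assuming a working mail command (e.g. from mailutils) and with the path and address as placeholders:
# Mondays at 07:00: mail the week's biggest files under /var/log
0 7 * * 1 /opt/scripts/biggest_files.sh /var/log | mail -s "Weekly disk hogs" you@example.com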
Script 3: clean log files older than X days
Old logs are like expired milk in your fridge. They sit there quietly until one day, boom, you’re out of disk and everything smells like panic.
This script is your auto janitor. It wipes log files older than a set number of days from a target directory. Because nobody needs a 2GB nginx-access.log from 2021.
Script: clean_old_logs.sh
#!/bin/bash
TARGET_DIR=${1:-/var/log}
DAYS_OLD=${2:-7}
echo "🧼 Cleaning up logs in $TARGET_DIR older than $DAYS_OLD days..."
find "$TARGET_DIR" -type f -name ".log" -mtime +"$DAYS_OLD" -exec rm -v {} \;
echo "✅ Cleanup complete!"
What it does:
- Looks inside /var/log by default (or any directory you pass)
- Finds .log files older than 7 days (or any number you specify)
- Deletes them with a nice verbose output so you know what died
Usage:
chmod +x clean_old_logs.sh
./clean_old_logs.sh /your/logs 14
Pro tip:
Automate this via cron weekly, and you’ll never get those 3AM alerts from your monitoring tool about “Disk 99% full 🔴”.
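Before you hand the deleting over to cron, a dry run never hurts: the same find expression with -print instead of rm shows exactly what would die, without touching anything.
# Dry run: list the victims first, delete nothing
find /var/log -type f -name "*.log" -mtime +14 -print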
Script 4: auto-deploy a Docker container like a lazy genius
You’ve got a new build. You want it running. But you’re too tired to docker pull, docker stop, docker rm, and docker run every time.
This script makes it one-command easy to pull, stop, and redeploy your Docker container. Because your time is better spent debugging YAML files, right?
Script: auto_deploy_docker.sh
#!/bin/bash
IMAGE_NAME="your-image-name:latest"
CONTAINER_NAME="my-app"
echo "🐳 Pulling latest image: $IMAGE_NAME"
docker pull "$IMAGE_NAME"
echo "🛑 Stopping and removing old container (if it exists)..."
docker stop "$CONTAINER_NAME" 2>/dev/null
docker rm "$CONTAINER_NAME" 2>/dev/null
echo "🚀 Starting new container..."
docker run -d --name "$CONTAINER_NAME" -p 80:80 "$IMAGE_NAME"
echo "✅ Deployment complete."
What it does:
- Pulls the latest version of your Docker image
- Stops and removes the running container (if any)
- Deploys a fresh instance
Usage:
chmod +x auto_deploy_docker.sh
./auto_deploy_docker.sh
Want to get fancier? Add --env-file or bind volumes. Or even better, integrate it with your CI/CD pipeline for hands-free glory.
Note: Don’t run this in prod without checks. Add healthchecks or pre-deploy tests if this touches customer stuff. You’ve been warned. ⚠️
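If you do go fancier, the extended run might look like this (./app.env and the host path are made up for illustration):
# hypothetical: ./app.env holds KEY=value pairs, /srv/my-app/data is persistent storage
docker run -d \
  --name "$CONTAINER_NAME" \
  -p 80:80 \
  --env-file ./app.env \
  -v /srv/my-app/data:/app/data \
  "$IMAGE_NAME"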
Script 5: ping test for network troubleshooting
So your app suddenly can’t reach the database. Or maybe your server in Frankfurt is playing hide and seek again. Whatever the case, a quick ping can tell you if it's alive or ghosted.
This script lets you ping a host repeatedly, log the results, and highlight packet loss or downtime.
Script: ping_test.sh
#!/bin/bash
HOST=${1:-8.8.8.8}
LOGFILE="ping_log_$(date +%F).log"
echo "🌐 Pinging $HOST... logging to $LOGFILE"
PING_OUTPUT=$(ping -c 10 "$HOST")
echo "$PING_OUTPUT" | tee -a "$LOGFILE"
LOSS=$(echo "$PING_OUTPUT" | grep -oP '\d+(?=% packet loss)')
if [ "${LOSS:-0}" -gt 0 ]; then
echo "⚠️ Warning: $LOSS% packet loss detected for $HOST" | tee -a $LOGFILE
else
echo "✅ Network to $HOST is stable." | tee -a $LOGFILE
fi
What it does:
- Pings a host (default: Google DNS)
- Logs all results to a dated .log file
- Alerts you if there’s any packet loss
Usage:
chmod +x ping_test.sh
./ping_test.sh yourdomain.com
Bonus use-case:
Run it via cron every hour and grep the logs for packet loss to troubleshoot intermittent network issues. Great for proving “it’s not your app, it’s the internet.”
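A sketch of that setup (the script path and host are placeholders):
# crontab: ping the database host every hour
0 * * * * /opt/scripts/ping_test.sh db.internal.example.com
# later, hunt for bad hours in the logs
grep "Warning" ping_log_*.log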
Script 6: automate git commits and pushes, because you forget every time
Raise your hand if you’ve ever made 5 changes and then realized you haven’t committed in 2 days. 🙋
This script helps you auto-commit and push your changes with a timestamp or a random message. Perfect for solo projects, scripts, or when you’re just moving fast and breaking things.
Script: git_push_auto.sh
#!/bin/bash
COMMIT_MSG=${1:-"auto-commit: $(date)"}
echo "📁 Adding changes..."
git add .
echo "📝 Committing with message: $COMMIT_MSG"
git commit -m "$COMMIT_MSG"
echo "🚀 Pushing to remote..."
git push
echo "✅ Done!"
What it does:
- Adds all changes in the current repo
- Commits with either a custom or auto-generated message
- Pushes to the current branch
Usage:
chmod +x git_push_auto.sh
./git_push_auto.sh "🔥 quick fix before lunch"
Pro tip:
Use a random commit message generator like what-the-commit for chaotic good.
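If you want that wired straight into the script, swapping the default message for a random one could look like this (assuming whatthecommit.com still serves plain-text messages at /index.txt):
# hypothetical: pull a random commit message instead of the timestamp default
COMMIT_MSG=${1:-"$(curl -s https://whatthecommit.com/index.txt)"}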
Script 7: backup your PostgreSQL database before disaster strikes
If you’ve ever whispered “please let me restore this” to your terminal after dropping a table… congrats, you’ve earned your DevOps badge.
Now let’s never go through that again. This script helps you automate PostgreSQL backups with timestamped files for easy recovery.
Script: pg_backup.sh
#!/bin/bash
DB_NAME=${1:-mydatabase}
BACKUP_DIR=${2:-/backups}
USER_NAME=${3:-postgres}
TIMESTAMP=$(date +"%F_%T")
FILENAME="$BACKUP_DIR/${DB_NAME}_backup_$TIMESTAMP.sql"
mkdir -p "$BACKUP_DIR"
echo "💾 Backing up database '$DB_NAME' to $FILENAME"
pg_dump -U "$USER_NAME" "$DB_NAME" > "$FILENAME"
if [ $? -eq 0 ]; then
echo "✅ Backup successful!"
else
echo "❌ Backup failed!"
fi
What it does:
- Creates a timestamped .sql backup of your PostgreSQL database
- Saves it in a custom or default directory
- Fails gracefully and lets you know if something goes wrong
Usage:
chmod +x pg_backup.sh
./pg_backup.sh yourdb /your/backup/folder your_pg_user
Bonus idea:
Wrap this into a nightly cron job and send the .sql to S3 or a remote server. Because if you don’t have backups, you’re just playing production on hard mode.
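A sketch of that nightly setup, assuming the AWS CLI is installed and configured (the bucket name is made up):
# crontab: back up at 01:30, sync the backup folder to S3 at 01:45
30 1 * * * /opt/scripts/pg_backup.sh mydatabase /backups postgres
45 1 * * * aws s3 sync /backups/ s3://my-backup-bucket/postgres/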
Script 8: compress and archive directories like a storage wizard
Disk getting tight? Have folders full of random reports, exports, or logs?
Before you hit the panic button or start deleting things you might need later, archive them like a pro.
This script helps you compress any directory into a timestamped .tar.gz archive. Great for backups, file rotation, or just decluttering the /tmp jungle.
Script: archive_directory.sh
#!/bin/bash
DIR_TO_ARCHIVE=${1:-/var/log}
DEST_DIR=${2:-~/archives}
TIMESTAMP=$(date +"%F_%H-%M-%S")
ARCHIVE_NAME="$(basename "$DIR_TO_ARCHIVE")_$TIMESTAMP.tar.gz"
mkdir -p "$DEST_DIR"
echo "📦 Archiving $DIR_TO_ARCHIVE into $DEST_DIR/$ARCHIVE_NAME..."
tar -czf "$DEST_DIR/$ARCHIVE_NAME" "$DIR_TO_ARCHIVE"
if [ $? -eq 0 ]; then
echo "✅ Archive created successfully!"
else
echo "❌ Archive failed."
fi
What it does:
- Takes the folder you specify (default: /var/log)
- Compresses it into a .tar.gz with a timestamp
- Drops it neatly into a chosen directory (~/archives by default)
Usage:
chmod +x archive_directory.sh
./archive_directory.sh /etc /backups
Bonus use-case:
Chain this with your PostgreSQL backup or log cleaner for weekly scheduled maintenance. Store archives off-server for extra safety.
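One way to chain them: a hypothetical wrapper you’d call from a weekly cron job (paths assume the scripts live in /opt/scripts):
#!/bin/bash
# weekly_maintenance.sh - back up the DB, archive the logs, then clean the old ones
/opt/scripts/pg_backup.sh mydatabase /backups postgres
/opt/scripts/archive_directory.sh /var/log /backups
/opt/scripts/clean_old_logs.sh /var/log 7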
Script 9: detect file changes in a directory (like a sneaky watchdog)
Ever had a config file magically change and nobody on the team “remembers doing it”? Yeah, sure.
This script helps you detect file changes in real time using inotifywait, perfect for monitoring sensitive directories like /etc, /var/www, or your secret bash scripts folder 👀.
Script: watch_directory.sh
#!/bin/bash
WATCH_DIR=${1:-/etc}
EVENTS="modify,create,delete,move"
LOGFILE="watch$(basename $WATCH_DIR)$(date +%F).log"
echo "👁️ Watching $WATCH_DIR for changes..."
inotifywait -m -r -e $EVENTS "$WATCH_DIR" --format '%T %w%f %e' --timefmt '%F %T' | tee -a "$LOGFILE"
What it does:
- Monitors a directory and its subfolders for file changes
- Logs timestamped events (modified, created, deleted, or moved files)
- Outputs to both screen and log file
Requirements:
You need inotify-tools installed:
sudo apt install inotify-tools
Usage:
chmod +x watch_directory.sh
./watch_directory.sh /your/important/folder
Pro tip:
Run this script in a tmux or screen session. You’ll never have to wonder who touched the config again. You’ll have receipts.
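For example, with tmux (the session name is arbitrary, the script path a placeholder):
# start the watcher detached so it survives your SSH session
tmux new-session -d -s watcher '/opt/scripts/watch_directory.sh /etc'
# reattach later to collect the receipts
tmux attach -t watcher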
Script 10: detect failed SSH login attempts (and catch the sneaky bois)
Your logs are full of weird usernames trying to SSH into your server at 3AM: admin, test, root, and even asdf.
Welcome to the internet.
This script helps you monitor failed SSH login attempts so you can stay one step ahead of brute-force bots (or suspicious interns).
Script: detect_ssh_failures.sh
#!/bin/bash
LOG_FILE="/var/log/auth.log"
OUTPUT_FILE="ssh_failures_$(date +%F).log"
echo "🛡️ Detecting failed SSH login attempts..."
grep "Failed password" "$LOG_FILE" | awk '{print $1, $2, $3, $11}' | sort | uniq -c | sort -nr > "$OUTPUT_FILE"
echo "📄 Report saved to $OUTPUT_FILE"
cat "$OUTPUT_FILE"
What it does:
- Scans /var/log/auth.log for failed login attempts
- Extracts the offending IP addresses
- Counts and sorts the attempts by frequency
- Saves the result to a nice readable log file
Usage:
chmod +x detect_ssh_failures.sh
./detect_ssh_failures.sh
Bonus move:
Hook this to a Slack webhook or an email alert if an IP fails too many times. You’ll be the security hero your server needs.
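A minimal sketch of that alert, assuming a Slack incoming webhook (the URL and the threshold of 20 are placeholders):
#!/bin/bash
# alert_ssh_failures.sh - post any IP with more than 20 failed attempts to Slack
SLACK_WEBHOOK="https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder webhook URL
REPORT="ssh_failures_$(date +%F).log"
awk '$1 > 20' "$REPORT" | while read -r LINE; do
  curl -s -X POST -H 'Content-type: application/json' \
    --data "{\"text\": \"🚨 SSH brute force? $LINE\"}" "$SLACK_WEBHOOK"
done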
Script 11: check if a service is running (and restart it like a boss if it’s not)
Services crash. It’s not personal; it’s just Linux being Linux. Whether it’s Nginx, MySQL, or your custom API, you don’t want to find out it died… hours later… from your CEO.
This script checks if a service is running and automatically restarts it if it’s not. It’s your silent uptime guardian.
Script: watchdog_service.sh
#!/bin/bash
SERVICE_NAME=${1:-nginx}
echo "🔍 Checking if $SERVICE_NAME is running..."
if systemctl is-active --quiet "$SERVICE_NAME"; then
echo "✅ $SERVICE_NAME is running fine."
else
echo "⚠️ $SERVICE_NAME is not running. Attempting restart..."
systemctl restart "$SERVICE_NAME"
if systemctl is-active --quiet "$SERVICE_NAME"; then
echo "✅ Restart successful."
else
echo "❌ Restart failed. Check logs ASAP."
fi
fi
What it does:
- Checks if a systemd service is active
- If not, tries to restart it
- Prints the result so you know whether things went smoothly or sideways
Usage:
chmod +x watchdog_service.sh
./watchdog_service.sh mysql
Set it up with cron to check every 5 or 10 minutes. Combine it with alerts (like a Slack message or email) if the restart fails, and you’ve got yourself a mini self-healing system.
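The cron half might look like this; systemctl restarts generally need root, so it belongs in root's crontab (the paths are placeholders):
# check nginx every 5 minutes, append results to a log
*/5 * * * * /opt/scripts/watchdog_service.sh nginx >> /var/log/watchdog.log 2>&1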
Script 12: remove dangling Docker images and volumes (because disk space is precious)
Docker is awesome until it starts hoarding your disk space like a dragon.
Dangling images? Orphaned volumes? Useless containers from two weeks ago?
This script helps you wipe the junk and keep your system lean.
Script: docker_cleanup.sh
#!/bin/bash
echo "🧹 Cleaning up dangling Docker images..."
docker image prune -f
echo "🧺 Removing unused Docker volumes..."
docker volume prune -f
echo "🗑️ Removing stopped containers..."
docker container prune -f
echo "✅ Docker cleanup complete."
What it does:
- Removes dangling Docker images (those with <none> tags)
- Deletes orphaned volumes that aren’t used by any container
- Clears out containers that exited or were never started properly
Usage:
chmod +x docker_cleanup.sh
./docker_cleanup.sh
Pro tip:
Set this script on a weekly cron schedule. Or even better, run it after CI/CD builds that leave clutter behind.
Just don’t go wild in production unless you know exactly what’s safe to remove.
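If you'd rather do it in one surgical command, docker system prune takes an age filter; just remember -a removes all unused images, not only dangling ones:
# remove unused Docker objects older than a week (168h) - read twice before prod
docker system prune -af --filter "until=168h"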
Script 13: update and upgrade all packages safely, without nuking your system
You know that nervous moment when you run apt upgrade on a live server and just hope it doesn’t break everything?
This script helps you update your system the smart way, logging what’s updated so you can roll back (or blame the update) if something goes sideways.
Script: safe_update.sh
#!/bin/bash
LOGFILE="update_log_$(date +%F_%H-%M-%S).log"
echo "⚙️ Updating package lists..."
sudo apt update | tee "$LOGFILE"
echo "⬆️ Upgrading installed packages..."
sudo apt upgrade -y | tee -a "$LOGFILE"
echo "🧽 Autoremoving unused packages..."
sudo apt autoremove -y | tee -a "$LOGFILE"
echo "📄 Update log saved to $LOGFILE"
What it does:
- Updates the apt package index
- Upgrades all packages non-interactively
- Removes unused dependencies
- Logs everything to a timestamped file for tracking
Usage:
chmod +x safe_update.sh
./safe_update.sh
Bonus tip:
Keep a backup or snapshot before running this in production. Automation is great until an update bricks your app and all you have is regrets.
Script 14: test a URL and alert if it fails, because downtime sucks
Your website is up… until it’s not. And unless you’ve got fancy monitoring tools hooked up, you might not find out until your users do.
This script helps you ping a URL and alert you if it’s down. Super handy for simple uptime checks, cron jobs, or catching flaky APIs.
Script: url_health_check.sh
#!/bin/bash
URL=${1:-https://example.com}
LOGFILE="url_check_$(date +%F).log"
STATUS_CODE=$(curl -s -o /dev/null -w "%{http_code}" "$URL")
echo "$(date) - Checking $URL - Status code: $STATUS_CODE" | tee -a "$LOGFILE"
if [ "$STATUS_CODE" -ne 200 ]; then
echo "🚨 ALERT: $URL is down or returned $STATUS_CODE!" | tee -a "$LOGFILE"
# Optionally send an alert (email, Slack, webhook)
fi
What it does:
- Uses curl to ping a URL and fetch the HTTP status code
- Logs the result with timestamps
- Alerts you in the console (and optionally via integrations) if the site isn’t returning a 200 OK
Usage:
chmod +x url_health_check.sh
./url_health_check.sh https://yourdomain.com
Upgrade idea:
Tie it into a Slack webhook or email trigger for real-time alerts. Pair it with watch or a cron job every few minutes for continuous monitoring without breaking the bank.
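Two quick options (the cron path is a placeholder):
# re-run in a terminal every 60 seconds
watch -n 60 ./url_health_check.sh https://yourdomain.com
# or a crontab entry, every 5 minutes
*/5 * * * * /opt/scripts/url_health_check.sh https://yourdomain.com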
Script 15: rotate logs and archive them before they bury your server
Logs are essential until they take over your disk like an invasive species.
Instead of deleting them blindly, this script helps you rotate and archive your logs so you keep the important stuff, compress the old stuff, and avoid full-disk meltdowns.
Script: rotate_logs.sh
#!/bin/bash
LOG_DIR=${1:-/var/log}
ROTATE_DIR=${2:-/var/log/archived}
DAYS_OLD=${3:-7}
TIMESTAMP=$(date +%F)
mkdir -p "$ROTATE_DIR"
echo "🔁 Rotating logs in $LOG_DIR older than $DAYS_OLD days..."
find "$LOG_DIR" -type f -name ".log" -mtime +"$DAYS_OLD" | while read FILE; do
BASENAME=$(basename "$FILE")
ARCHIVE_NAME="${BASENAME%.*}-$TIMESTAMP.log.gz"
gzip -c "$FILE" > "$ROTATE_DIR/$ARCHIVE_NAME" && rm "$FILE"
echo "📦 Archived $FILE → $ROTATE_DIR/$ARCHIVE_NAME"
done
echo "✅ Log rotation complete."
What it does:
- Finds .log files older than X days (default: 7)
- Compresses them into .gz archives with date-stamped names
- Stores them in a separate directory
- Deletes the original logs after archiving
Usage:
chmod +x rotate_logs.sh
./rotate_logs.sh /var/log /var/log/archived 5
Pro tip:
Add a weekly cron job for this. It’s like Marie Kondo for your log files: tidy, minimal, and it doesn’t kill production.
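For the record, that weekly entry could look like this (placeholder path again):
# Sundays at 03:00: rotate /var/log, keep 7 days
0 3 * * 0 /opt/scripts/rotate_logs.sh /var/log /var/log/archived 7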

Conclusion: automate more, cry less
Let’s be honest: no one gets into DevOps because they love running the same terminal commands every day.
These 15 bash scripts are your starter pack to stop doing boring stuff manually and start using your brain for actual problem-solving (or meme creation, we don’t judge).
Here’s what you gain:
- Peace of mind with backups and service checks
- Clean systems that won’t crash from log overload
- Fast fixes without logging into 5 servers at once
- More time to focus on complex infra, monitoring, and architecture
- Fewer “why didn’t you check this” moments
Automation doesn’t mean you’re lazy. It means you’re smart enough to make your future self grateful.
So clone these scripts, tweak them, add your own flair, and plug them into your daily workflows. Your uptime and your sanity will thank you.
Helpful resources
Here are some solid tools and links to level up your bash scripting and DevOps game:
- Explainshell: paste a bash command and it explains every part
- Bash Cheat Sheet: a minimal and fast syntax guide
- tldr.sh: community-powered simplified man pages
- Linux Command Library: categorized command references
- The Phoenix Project: a DevOps classic for workflow geeks
- Bash Pitfalls: a great list of mistakes to avoid