Start your Bash scripting journey here. This guide takes you from "What is this?" to writing functional automation scripts.
What is a Bash Script?
Think of a Bash script as a recipe file. Instead of typing commands one by one into the terminal (mkdir folder, cd folder, touch file), you write them all in a text file. When you run the file, Linux executes them in order, top to bottom.
Step 1: The "Hello World" Anatomy
Every script starts with a specific setup.
1. The Shebang (#!)
The very first line of your script tells Linux which interpreter to use to run the code.
- `#!/bin/bash` → Use Bash (the standard for DevOps).
- `#!/usr/bin/python3` → Use Python.
2. Create Your First Script
Create a file named hello.sh:
```bash
#!/bin/bash
# This is a comment. Linux ignores lines starting with #
echo "Hello, Future DevOps Engineer!"
```
3. Permissions (Crucial Step)
By default, Linux does not let you run text files as programs for security reasons. You must make it "executable."
Command:

```bash
chmod +x hello.sh
```

- `chmod`: Change Mode
- `+x`: Add eXecutable permission

Run it:

```bash
./hello.sh
```

- `./`: Means "look in the current directory" for this file.
Step 2: Variables (The Storage Boxes)
Variables allow you to store data to reuse later.
Rules:
- No spaces around the `=` sign. (Good: `NAME="Gemini"`, Bad: `NAME = "Gemini"`)
- Access the variable using `$`.
```bash
#!/bin/bash
# Defining variables
NAME="Amit"
ROLE="DevOps Engineer"

# Using variables
echo "My name is $NAME and I am a $ROLE."

# --- Advanced: Command Substitution ---
# Storing the OUTPUT of a Linux command in a variable
CURRENT_DATE=$(date)
MY_LOCATION=$(pwd)
echo "I am currently at $MY_LOCATION on $CURRENT_DATE"
```
Step 3: Input & Arguments (Talking to the Script)
You don't always want to hard-code values. You want to pass them in.
Method A: Interactive Input (read)
```bash
#!/bin/bash
echo "What is your name?"
read USERNAME
echo "Welcome, $USERNAME!"
```
Method B: Command Line Arguments (Better for Automation)
These are values you pass when you run the script (e.g., ./script.sh value1 value2).
- `$1`: The first argument.
- `$2`: The second argument.
- `$0`: The name of the script itself.
```bash
#!/bin/bash
# Run this as: ./deploy.sh app-v1 production
APP_NAME=$1
ENV=$2
echo "Deploying $APP_NAME to the $ENV environment..."
```
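Before trusting `$1` and `$2`, scripts usually validate that they were actually provided. A small sketch using two more special variables, `$#` (the number of arguments) and `$*` (all arguments joined); the `check_args` function name is just for illustration:

```shell
#!/bin/bash
# check_args: refuse to continue unless at least 2 arguments were given.
check_args() {
    if [ $# -lt 2 ]; then
        echo "Usage: <app-name> <environment>"
        return 1
    fi
    echo "Received $# arguments: $*"
}

check_args "app-v1" "production"
# prints: Received 2 arguments: app-v1 production
```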
Step 4: Logic (If / Else)
Automation is about making decisions. "If the file exists, delete it. If not, create it."
Key Syntax: Notice the spaces inside the brackets [ ... ]. They are required!
```bash
#!/bin/bash
echo "Enter a filename to check:"
read FILENAME

# -f checks if a file exists
if [ -f "$FILENAME" ]; then
    echo "✅ File $FILENAME exists."
else
    echo "❌ File not found. Creating it now..."
    touch "$FILENAME"
fi

# Common Checks:
# -f : File exists
# -d : Directory exists
# -z : String is empty
# "$A" == "$B" : Strings are equal
```
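The other checks from the list above follow the same pattern. A quick sketch (the `INPUT` and `ENV` variables are made up for the demo, and `/tmp` is assumed to exist, as it does on standard Linux systems):

```shell
#!/bin/bash
INPUT=""   # hypothetical empty variable for the -z demo

# -d : does this directory exist?
if [ -d "/tmp" ]; then
    echo "/tmp is a directory."
fi

# -z : is the string empty?
if [ -z "$INPUT" ]; then
    echo "INPUT is empty."
fi

# String comparison
ENV="prod"
if [ "$ENV" == "prod" ]; then
    echo "Running in production mode."
fi
```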
Step 5: Loops (Doing things repeatedly)
DevOps is often about doing the same thing to 50 different servers or files.
The for Loop:
```bash
#!/bin/bash
# Loop through a list of names
for SERVER in "web01" "db01" "cache01"
do
    echo "Pinging $SERVER..."
    # Imagine a ping command here
done

# Loop through numbers (1 to 5)
for i in {1..5}
do
    echo "Iteration number: $i"
done
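Besides `for`, Bash has a `while` loop, which pairs naturally with `read` to process a file line by line (a very common DevOps pattern for server inventories). The `servers.txt` file here is a hypothetical example created just for the demo:

```shell
#!/bin/bash
# Create a demo input file (hypothetical server list)
printf "web01\ndb01\n" > servers.txt

# Read the file one line at a time
while read -r SERVER; do
    echo "Checking $SERVER..."
done < servers.txt

rm servers.txt   # clean up the demo file
```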
Step 6: Exit Codes (The DevOps Traffic Light)
How does a CI/CD pipeline know if your script failed? Exit Codes.
- 0: Success.
- 1-255: Error.
Every command sends an exit code. You can see the last one using $?.
```bash
#!/bin/bash
mkdir /root/test_folder
# Only root can write there, so as a normal user this will fail.
STATUS=$?   # Capture immediately: every command (even the [ ] test) overwrites $?

if [ $STATUS -eq 0 ]; then
    echo "Success!"
else
    echo "Failed to create folder. (Exit Code: $STATUS)"
fi
```
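Exit codes also power the `&&` ("run the next command only on success") and `||` ("only on failure") operators, a common shorthand in one-liners. A sketch (`/tmp/demo_folder` and the nonexistent path are made-up examples):

```shell
#!/bin/bash
# && runs the second command only if the first succeeded (exit code 0)
mkdir -p /tmp/demo_folder && echo "Folder ready."

# || runs the second command only if the first failed (non-zero exit code)
ls /nonexistent_path 2>/dev/null || echo "Path not found, moving on."
```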
Summary: A Real-World "Backup" Script
Let's combine everything into a useful script you might actually use.
Scenario: Back up a specific folder and add a timestamp.
```bash
#!/bin/bash
# 1. Variables
SOURCE_DIR="/home/user/documents"
BACKUP_DIR="/home/user/backups"
DATE=$(date +%Y-%m-%d)
BACKUP_NAME="backup-$DATE.tar.gz"

# 2. Check if the backup directory exists; if not, create it
if [ ! -d "$BACKUP_DIR" ]; then
    echo "Creating backup directory..."
    mkdir -p "$BACKUP_DIR"
fi

# 3. Create the backup (compressing the folder)
echo "Backing up $SOURCE_DIR to $BACKUP_DIR/$BACKUP_NAME..."
tar -czf "$BACKUP_DIR/$BACKUP_NAME" "$SOURCE_DIR" 2>/dev/null

# 4. Check status
if [ $? -eq 0 ]; then
    echo "✅ Backup Successful!"
else
    echo "❌ Backup Failed!"
    exit 1
fi
```
Cron is the time-based job scheduler in Linux. It allows you to run scripts automatically at specific times or intervals (e.g., "Every Friday at 5 PM" or "Every 5 minutes").
1. The Syntax (The "Five Stars")
The configuration line for a cron job looks cryptic at first because it consists of five asterisks followed by the command.
```bash
* * * * * /path/to/script.sh
```
Each asterisk represents a specific unit of time:
| Position | Unit | Allowed Values |
|---|---|---|
| 1st | Minute | 0 - 59 |
| 2nd | Hour | 0 - 23 (24-hour clock) |
| 3rd | Day of Month | 1 - 31 |
| 4th | Month | 1 - 12 |
| 5th | Day of Week | 0 - 7 (0 and 7 are Sunday) |
2. Common Examples
| Schedule | Syntax | Meaning |
|---|---|---|
| Every Minute | `* * * * *` | Runs every single minute. |
| Daily at 2 AM | `0 2 * * *` | Runs at 02:00 every day. |
| Every Monday | `0 8 * * 1` | Runs at 08:00, only on Mondays. |
| Every Hour | `0 * * * *` | Runs at the top of every hour (1:00, 2:00, ...). |
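The intro mentioned "every 5 minutes"; cron expresses intervals like that with the `/` step operator. A couple of illustrative lines (the script path is a placeholder):

```bash
# Every 5 minutes
*/5 * * * * /path/to/script.sh

# Every 2 hours, on the hour
0 */2 * * * /path/to/script.sh
```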
3. How to Set It Up
Step 1: Open the Crontab Editor
Run this command in your terminal:
```bash
crontab -e
```
(If it asks you to choose an editor, pick nano if you are a beginner—it's the easiest).
Step 2: Add Your Job
Scroll to the bottom of the file and add your line. Let's schedule the backup script we wrote previously to run every day at 2:30 AM.
```bash
30 2 * * * /home/user/backup.sh
```
Step 3: Save and Exit
- In `nano`: Press `Ctrl+O` (then Enter) to save, then `Ctrl+X` to exit.
- Success: You should see the message `crontab: installing new crontab`.
4. The "Senior" Tip: Logging Output
By default, if your script prints text (echo "Success"), Cron tries to email it to the system user (which you likely won't check).
You should redirect the output to a log file so you can debug it later.
```bash
# syntax: command >> log_file 2>&1
30 2 * * * /home/user/backup.sh >> /var/log/backup.log 2>&1
```
- `>>`: Appends output to the file (doesn't overwrite).
- `2>&1`: Redirects "Standard Error" (2) to the same place as "Standard Output" (1). This ensures you catch both success messages and crash errors in the same log file.
This is the difference between a "hacky script" and a "production tool."
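You can verify the `>> file 2>&1` pattern without waiting for cron. A quick sketch (`demo.log` is a throwaway filename for the demo):

```shell
#!/bin/bash
# One message to stdout, one to stderr
{
    echo "normal output"
    echo "an error message" >&2
} >> demo.log 2>&1

cat demo.log   # both lines landed in the same file
rm demo.log
```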
In a senior DevOps environment, scripts run unattended in CI/CD pipelines. They must fail gracefully, tell you exactly what went wrong, and be able to read complex configuration files.
1. Advanced Error Handling in Bash
By default, Bash keeps running even if a command fails. This is dangerous. You need "Strict Mode".
A. The Safety Switch (set -euo pipefail)
Put this at the top of every script you write.
```bash
#!/bin/bash
set -euo pipefail
# set -e: Exit immediately if a command exits with a non-zero status.
# set -u: Exit if you try to use a variable that doesn't exist (prevents rm -rf /$BAD_VAR).
# set -o pipefail: In 'cmd1 | cmd2', fail if ANY command in the pipeline fails
#                  (by default, only the LAST command's status counts).
```
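You can see exactly what `pipefail` changes with a small experiment, run without `set -e` so the script survives the failing pipeline:

```shell
#!/bin/bash
# Without pipefail: the pipeline's status is the LAST command's status.
set +o pipefail
false | true
echo "default status: $?"    # the failure of 'false' is hidden

# With pipefail: any failing command fails the whole pipeline.
set -o pipefail
false | true
echo "pipefail status: $?"
```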
B. The "Trap" (Cleanup on Exit)
If your script creates temporary files and then crashes, those files stay there forever. trap allows you to run a cleanup function automatically, even if the script crashes or the user hits Ctrl+C.
```bash
#!/bin/bash
# Create a temp file
TEMP_FILE=$(mktemp)

# Define a cleanup function
cleanup() {
    echo "🧹 Cleaning up temporary files..."
    rm -f "$TEMP_FILE"
}

# Trap signals: EXIT (normal finish), SIGINT (Ctrl+C), SIGTERM (kill command)
trap cleanup EXIT SIGINT SIGTERM

echo "Working with $TEMP_FILE..."

# Simulate a crash
exit 1
```
Result: Even though it crashed with exit 1, the cleanup function still runs.
2. Professional Logging
Don't just use echo. You need timestamps and log levels (INFO, WARN, ERROR) to debug effectively in production logs (like CloudWatch or Splunk).
```bash
#!/bin/bash
log() {
    local LEVEL=$1
    shift                  # Remove the first argument (level), leaving the message
    local MSG="$*"
    local TIMESTAMP=$(date +"%Y-%m-%d %H:%M:%S")
    echo "[$TIMESTAMP] [$LEVEL] $MSG"
}

# Usage
log "INFO" "Starting backup process..."
log "WARN" "Disk space is below 20%."

# Example: error handling with logging
cp /source /dest || { log "ERROR" "Copy failed!"; exit 1; }
```
Output: [2026-01-17 17:45:00] [INFO] Starting backup process...
3. Parsing JSON in Bash (using jq)
Bash is text-based; it doesn't understand JSON objects. To handle JSON, you use a tool called jq. It is installed on almost every DevOps agent.
Scenario: You have a config.json file.
```json
{
  "database": {
    "host": "db.prod.local",
    "port": 5432
  }
}
```
Script to read it:
```bash
#!/bin/bash
CONFIG_FILE="config.json"

# Extract values using jq (quote the variable in case the path has spaces)
DB_HOST=$(jq -r '.database.host' "$CONFIG_FILE")
DB_PORT=$(jq -r '.database.port' "$CONFIG_FILE")

echo "Connecting to $DB_HOST on port $DB_PORT..."
```
- `-r`: Raw output (removes quotes around strings).
4. Parsing Arguments & YAML in Python
When logic gets complex (like needing flagged arguments --file or --dry-run), switch to Python.
A. Argument Parsing (argparse)
This creates professional CLI tools with help menus automatically.
```python
import argparse
import sys

# 1. Set up the parser
parser = argparse.ArgumentParser(description="Deploy App Tool")

# 2. Add arguments
parser.add_argument('--env', required=True, help="Target environment (dev/prod)")
parser.add_argument('--replicas', type=int, default=1, help="Number of instances")
parser.add_argument('--dry-run', action='store_true', help="Don't actually deploy")

# 3. Parse arguments
args = parser.parse_args()

# 4. Use them
print(f"Targeting Environment: {args.env}")
print(f"Replica Count: {args.replicas}")
if args.dry_run:
    print("⚠️ DRY RUN MODE: No changes made.")
    sys.exit(0)
```
Run it: python3 script.py --env prod --replicas 3 --dry-run
B. Parsing YAML (PyYAML)
YAML is the language of Kubernetes and Ansible. You often need to read a YAML config in your script.
(Requires: pip install pyyaml)
File: `deploy.yaml`
```yaml
app_name: my-web-app
version: 1.2
features:
  - dark_mode
  - beta_users
```
Script:
```python
import yaml

with open("deploy.yaml", "r") as file:
    config = yaml.safe_load(file)

print(f"Deploying {config['app_name']} version {config['version']}")

if "dark_mode" in config['features']:
    print("🌙 Dark mode enabled.")
```
Summary Table: When to use what?
| Task | Tool/Library | Notes |
|---|---|---|
| Simple Flags | Bash (`getopts`) | Good for `-f` or `-v`. Bad for `--long-flags`. |
| Complex Flags | Python (`argparse`) | Handles help menus (`--help`) and types (int/str) automatically. |
| Reading JSON | Bash (`jq`) | The industry standard for JSON in shell. |
| Reading YAML | Python (PyYAML) | Bash has no native YAML parser; use `yq` or Python. |
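Since the table recommends `getopts` for simple flags, here is a minimal sketch of short-flag parsing. The flag names (`-f`, `-v`) and the `parse_flags` wrapper are illustrative; `FILE` and `VERBOSE` are deliberately left global so the caller can read them:

```shell
#!/bin/bash
# parse_flags: handle -f <file> and -v using getopts
parse_flags() {
    local OPTIND=1 opt     # reset getopts state so the function can be called again
    FILE=""
    VERBOSE=0
    while getopts "f:v" opt; do
        case $opt in
            f) FILE=$OPTARG ;;     # -f takes a value (the colon in "f:")
            v) VERBOSE=1 ;;        # -v is a simple on/off switch
            *) echo "Usage: -f <file> [-v]"; return 1 ;;
        esac
    done
}

parse_flags -f config.txt -v
echo "File: $FILE, Verbose: $VERBOSE"
# prints: File: config.txt, Verbose: 1
```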
Here are the best websites to learn and practice Bash scripting, categorized by how they teach (gamified, challenge-based, or interactive).
The "Gamified" Approach (Best for Beginners)
If you find reading manuals boring, start here. These turn Linux/Bash into a game.
OverTheWire (Bandit Wargame): overthewire.org
- Why it's #1: It is the industry standard for learning Linux/Bash security and navigation. You start at Level 0 and must use Bash commands to find the password for Level 1, then Level 2, and so on.
- Focus: Navigation, SSH, piping, permissions, and grep.
- Cost: Free.