Learn how to build a robust, multi-server SSH command runner using Bash, Docker, and parallel processing.
As developers or system administrators, we've all been there: You need to check the disk space, uptime, or service status on 10 different servers.
The "manual" way is painful:
- ssh user@server1 -> df -h -> exit
- ssh user@server2 -> df -h -> exit
- ...repeat 8 more times. 😫
Sure, tools like Ansible exist, but sometimes you just want a lightweight, zero-dependency script to fire off a quick command and see what's happening right now.
In this post, I'll walk you through how I built a Multi-Server SSH Executor using pure Bash. We'll explore parallel processing, robust file parsing, and how to simulate a server cluster locally using Docker.
🎯 The Goal
We want a script that takes a command (e.g., uptime) and runs it on a list of servers defined in a config file.
Requirements:
- Parallel Execution: Use background processes (Bash's &) so checking 10 servers takes as long as the slowest one, not the sum of all of them.
- Robust Config Parsing: Handle comments, weird whitespace, and different ports/users.
- Local Testing Ground: A way to test this without buying 5 VPS instances (spoiler: we use Docker).
🏗️ The Architecture
The project consists of three main parts:
- servers.conf: A simple file defining our target servers.
- multi_ssh.sh: The brains of the operation.
- docker-compose.yml: A simulated lab environment with 4 SSH-enabled containers.
1. The Configuration
I wanted a simple format that's easy to read but flexible:
name:hostname:port:username
# Production Servers
web01:192.168.1.10:22:admin
db01:192.168.1.20:22:dbadmin
# Docker Lab (Localhost mapped ports)
web1:localhost:2221:root
web2:localhost:2222:root
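Before the parser ever sees the file, it helps to strip comments and blank lines in one pass. Here's a minimal sketch (it writes a small sample config first so it runs standalone; the filename servers.conf matches the format above):

```shell
# Write a sample config so this snippet is self-contained.
cat > servers.conf <<'EOF'
# Production Servers
web01:192.168.1.10:22:admin

db01:192.168.1.20:22:dbadmin
EOF

# Drop full-line comments (even indented ones) and blank lines.
grep -vE '^[[:space:]]*(#|$)' servers.conf
```

The filtered output can then be piped straight into the parsing loop shown later.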
2. The Simulation (Docker Lab)
Testing SSH scripts on production servers is... brave. Instead, I used docker-compose to spin up lightweight Ubuntu containers running sshd.
services:
  web1:
    image: rastasheep/ubuntu-sshd:18.04
    ports: ["2221:22"]
  web2:
    image: rastasheep/ubuntu-sshd:18.04
    ports: ["2222:22"]
Now I have "real" servers running on localhost ports 2221, 2222, etc.
⚡ The "Secret Sauce": Parallelism in Bash
The core challenge is running commands simultaneously. In Bash, we do this by putting a command in the background with &.
Here is the simplified logic:
# Loop through servers
for server in "${servers[@]}"; do
    # Run SSH in the background (quote the expansion to survive odd values)
    ssh "$user@$host" "$command" > "/tmp/result_$server.txt" &
    # Save the Process ID (PID)
    pids+=($!)
done
# Wait for all background jobs to finish
wait
This simple trick reduces total execution time from the sum of every server's runtime (N * Timeout in the worst case) to roughly the slowest single server (max(Timeout)).
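You can verify the speedup with a toy demonstration, using sleep as a stand-in for a slow SSH call:

```shell
#!/usr/bin/env bash
# Three "servers" that each take 1 second, checked in parallel.
# Sequentially this would take ~3s; with & and wait it takes ~1s.
start=$SECONDS
pids=()
for server in web1 web2 web3; do
    sleep 1 &          # stand-in for: ssh "$user@$server" "$command"
    pids+=($!)
done
wait                   # block until every background job exits
echo "Elapsed: $(( SECONDS - start ))s"
```

On my machine this prints an elapsed time of about one second, not three.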
🧠 Lessons Learned & "Gotchas"
Writing the script revealed a few common Bash pitfalls that I had to fix to make it production-ready.
Lesson 1: for loops vs. while read
Initially, I used a for loop to read lines from the config file.
The Trap: If a line has spaces (like a description), for splits it into multiple items.
The Fix: Use a while loop with a custom Internal Field Separator (IFS).
# Robust way to read lines
while IFS=':' read -r name hostname port username || [[ -n "$name" ]]; do
    # Process server...
done < "$config_file"
Note the || [[ -n "$name" ]] part: this ensures we don't skip the last line if the file doesn't end with a newline character!
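You can see the difference with a toy file whose last line has no trailing newline:

```shell
#!/usr/bin/env bash
# printf (unlike echo) lets us omit the final newline deliberately.
printf 'web01:host1:22:admin\ndb01:host2:22:root' > no_newline.conf

count=0
while IFS=':' read -r name host port user || [[ -n "$name" ]]; do
    count=$((count + 1))
done < no_newline.conf
echo "Parsed $count servers"   # -> Parsed 2 servers
```

Without the || guard, read returns non-zero on the final newline-less line and the loop body never runs for it, so db01 would silently vanish.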
Lesson 2: Race Conditions & Temp Files
When running parallel jobs, you can't just write to output.txt. Multiple processes will write at the same time, garbling the text.
The Fix: Give each process its own temporary file (e.g., /tmp/ssh_result_web1.txt), let them finish, and then aggregate the results sequentially.
I used mktemp to ensure my temporary files never collided with other running instances of the script.
SERVERS_LIST_TMP=$(mktemp /tmp/ssh_multi_servers.XXXXXX)
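Putting both ideas together, each background job writes to its own temp file and the parent aggregates only after wait. A sketch, with echo standing in for the real SSH call:

```shell
#!/usr/bin/env bash
# Each parallel job gets a private file under a mktemp directory,
# so concurrent writes can never interleave.
tmpdir=$(mktemp -d /tmp/ssh_multi.XXXXXX)

for server in web1 web2 web3; do
    # echo stands in for: ssh "$user@$server" "$command"
    echo "result from $server" > "$tmpdir/$server.txt" &
done
wait

# Aggregate sequentially -- no race condition now.
for server in web1 web2 web3; do
    cat "$tmpdir/$server.txt"
done
rm -rf "$tmpdir"
```

The aggregation loop also gives you a natural place to add per-server headers or color-coding.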
Lesson 3: SSH is picky
Running SSH non-interactively requires specific flags to avoid hanging:
- -o BatchMode=yes: Fail instead of asking for a password.
- -o ConnectTimeout=X: Don't wait forever if a server is down.
- -o StrictHostKeyChecking=no: Crucial for automated environments where IPs might change (like Docker containers).
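These options are easiest to manage as a Bash array, so every SSH call in the script stays consistent. A sketch (the demo values and the 5-second timeout are my choices, not requirements; in the real script they come from servers.conf):

```shell
#!/usr/bin/env bash
# Demo values standing in for fields parsed from servers.conf.
port=2221 user=root host=localhost command=uptime

# One array holds every non-interactive option.
SSH_OPTS=(
    -o BatchMode=yes             # fail instead of prompting for a password
    -o ConnectTimeout=5          # give up on dead hosts after 5 seconds
    -o StrictHostKeyChecking=no  # don't block on unknown host keys
    -p "$port"
)

# Dry run: print the command that would be executed.
echo ssh "${SSH_OPTS[@]}" "$user@$host" "$command"
```

Dropping the leading echo turns the dry run into the real call.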
📊 The Result
Running ./multi_ssh.sh "df -h" gives me a beautiful, color-coded summary of disk space across my entire fleet in seconds.
📥 Try It Yourself
I've open-sourced this tool along with the setup script that automatically generates SSH keys and configures the Docker containers for you.
Prerequisites:
- Docker & Docker Compose
- sshpass (for the initial setup script)
Installation:
git clone https://github.com/alanvarghese-dev/Bash_Scripting.git
cd Bash_Scripting/ssh_multi_server_executor
./ssh_install.sh # Sets up the Docker lab
./multi_ssh.sh "uptime"
Let me know in the comments if you prefer Bash for these tasks or if you stick to heavier tools like Ansible!
Happy scripting! 💻✨