Introduction
Have you ever wondered how professional developers deploy their applications to servers automatically? In this comprehensive guide, I'll walk you through creating a powerful Bash script that automates the entire deployment process for Docker-based applications. By the end of this article, you'll have a script that can:
✅ Clone your Git repository
✅ Set up a remote server environment
✅ Build and deploy Docker containers
✅ Configure Nginx as a reverse proxy
✅ Validate your deployment
✅ Handle cleanup and log management
Best of all: You'll understand every step, even if you're just starting your DevOps journey!
What is Automated Deployment and Why Do We Need It?
In the early days of web development, deploying an application meant manually copying files to a server, installing dependencies, and configuring everything by hand. This process was:
- Time-consuming: Could take hours for a single deployment
- Error-prone: Easy to forget a step or misconfigure something
- Not repeatable: Hard to deploy the same way twice
- Not scalable: Imagine doing this for 10 different applications!
Automated deployment solves all these problems by using scripts to perform all deployment steps consistently and reliably.
Understanding the Deployment Workflow
Before we dive into the code, let's understand the big picture of what happens during deployment:
- Your local machine runs the deployment script
- The script clones your application code from Git
- It connects to your remote server via SSH
- It sets up Docker and Nginx on the server
- It builds your Docker container and runs it
- It configures Nginx to route traffic to your app
- It validates everything is working correctly
Prerequisites
Before you start, make sure you have:
- Basic understanding of:
  - Linux command line basics
  - Git version control
  - What Docker is (don't worry, we'll explain as we go!)
- Technical requirements:
  - A Linux/Mac machine (or WSL on Windows)
  - A remote server (AWS EC2, DigitalOcean, etc.)
  - SSH key access to your server
  - A Git repository with your application code
[!TIP]
If you're new to SSH keys, think of them as a secure password alternative that uses cryptographic keys instead of text passwords.
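If you don't have a dedicated deployment key yet, generating one takes a single command (shown here with ed25519, which the best-practices section later in this article also recommends; the file name is just an example):

```bash
# Generate a dedicated deployment key pair; the public half (.pub) goes on the server
ssh-keygen -t ed25519 -f ~/.ssh/deploy-key -C "deploy-key"
```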
The 7 Stages of Our Deployment Script
Our deployment script is organized into 7 distinct stages. Let's explore each one in detail!
Stage 0.5: Setup and Housekeeping
What it does: Sets up logging, error handling, and cleanup functionality.
Key concepts:
#!/usr/bin/env bash
set -o errexit # Exit on any error
set -o nounset # Exit on undefined variable
set -o pipefail # Exit if any command in a pipe fails
These three lines are crucial safety features:
- `errexit`: If any command fails, the script stops immediately
- `nounset`: Catches typos in variable names (using an unset variable aborts the script)
- `pipefail`: Ensures a failure anywhere in a pipeline isn't hidden by a later command succeeding
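To see why `pipefail` matters, here's a tiny standalone example (not part of the deployment script):

```bash
#!/usr/bin/env bash
set -o pipefail
# grep fails (the file doesn't exist) but tee succeeds; without pipefail the
# pipeline's exit status would be tee's 0 and the failure would be invisible.
grep "pattern" /nonexistent/file | tee /dev/null
echo "Pipeline exit status: $?"   # non-zero thanks to pipefail
```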
Logging system:
TIMESTAMP="$(date +%Y%m%d_%H%M%S)"
LOGDIR="./logs"
mkdir -p "${LOGDIR}"   # ensure the log directory exists before the first write
LOGFILE="${LOGDIR}/deploy_${TIMESTAMP}.log"
log() {
printf "%s [%s] %s\n" "$(date '+%Y-%m-%d %H:%M:%S')" "$1" "$2" | tee -a "${LOGFILE}"
}
This creates a timestamped log file for every deployment, so you can troubleshoot issues later.
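Using it looks like this (the output line is illustrative):

```bash
log "INFO" "Starting deployment"
# Prints to the terminal and appends to the log file something like:
# 2025-01-15 14:32:07 [INFO] Starting deployment
```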
Cleanup mode:
The script includes a special --cleanup flag that removes all deployed containers, images, and configurations. This is useful for starting fresh or removing old deployments.
./deploy.sh --cleanup
[!CAUTION]
Cleanup mode will remove ALL Docker containers and images on your server. Always backup important data first!
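The flag handling itself can be as simple as inspecting the first argument; here's a minimal sketch (the real script's cleanup logic is more involved):

```bash
CLEANUP_MODE=false
if [ "${1:-}" = "--cleanup" ]; then
  CLEANUP_MODE=true
fi

if [ "${CLEANUP_MODE}" = true ]; then
  log "INFO" "Cleanup mode: removing deployed containers, images, and configs..."
  # ...stop containers, prune images, remove the Nginx site config...
  exit 0
fi
```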
Stage 1: Collect Parameters and Basic Validation
What it does: Gathers all the information needed for deployment through interactive prompts.
What you'll be asked:
- Git Repository URL: Where is your code?
  - Example: `https://github.com/yourusername/your-app.git`
- Personal Access Token (PAT): For private repositories
  - Entered securely (won't show on screen)
- Branch Name: Which branch to deploy?
  - Default: `main`
- SSH Details: How to connect to your server
  - Username: `ubuntu`
  - Host: `54.123.45.67` (your server IP)
  - SSH Key Path: `~/.ssh/your-key.pem`
- Application Port: What port does your app use?
  - Example: `3000` for Node.js, `8080` for many others
Example interaction:
Git repository URL: https://github.com/john/awesome-app.git
Branch name: main
Remote SSH username: ubuntu
Remote SSH host/IP: 54.123.45.67
Path to local SSH private key: ~/.ssh/deploy-key.pem
Application internal container port: 3000
Why validation matters:
The script validates your inputs immediately:
case "${APP_PORT}" in
''|*[!0-9]* ) die 12 "Invalid port";;
esac
This ensures the port is actually a number, preventing errors later.
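For completeness, the prompts themselves are plain `read` calls; here's a minimal sketch using the same variable names as the rest of this article (`-s` keeps the PAT off the screen):

```bash
read -rp  "Git repository URL: " GIT_REPO_URL
read -rsp "Personal Access Token (leave empty for public repos): " GIT_PAT; echo
read -rp  "Branch name [main]: " BRANCH
BRANCH="${BRANCH:-main}"
read -rp  "Remote SSH username: " REMOTE_USER
read -rp  "Remote SSH host/IP: " REMOTE_HOST
read -rp  "Path to local SSH private key: " SSH_KEY_PATH
read -rp  "Application internal container port: " APP_PORT
```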
Stage 2: Repository Clone and Validation
What it does: Downloads your code and verifies connectivity to the server.
The cloning process:
WORKDIR="./workspace_${TIMESTAMP}"
mkdir -p "${WORKDIR}"
git clone -b "${BRANCH}" "${GIT_REPO_URL}" "${WORKDIR}/repo"
This creates a timestamped workspace directory and clones your specific branch into it.
Handling authentication:
For private repositories with HTTPS:
# Inject PAT into the URL securely
AUTH_URL="$(echo "${GIT_REPO_URL}" | sed -E "s#https://#https://${GIT_PAT}@#")"
git clone -b "${BRANCH}" "${AUTH_URL}" "${WORKDIR}/repo"
For SSH repositories:
GIT_SSH_COMMAND="ssh -i ${SSH_KEY_PATH} -o StrictHostKeyChecking=no" \
git clone -b "${BRANCH}" "${GIT_REPO_URL}" "${WORKDIR}/repo"
Docker setup detection:
if [ -f "${WORKDIR}/repo/Dockerfile" ]; then
log "INFO" "Found Dockerfile."
elif [ -f "${WORKDIR}/repo/docker-compose.yml" ]; then
log "INFO" "Found docker-compose.yml."
else
log "WARN" "No Docker setup found. Skipping container checks."
fi
The script intelligently detects whether you're using a simple Dockerfile or a complex docker-compose.yml setup.
SSH connectivity test:
Before transferring files, the script tests the SSH connection:
ssh -i "${SSH_KEY_PATH}" -o BatchMode=yes -o ConnectTimeout=10 \
"${REMOTE_USER}@${REMOTE_HOST}" "echo 'SSH_OK'" > /dev/null 2>&1
If this fails, you'll know immediately that there's an SSH problem, saving you time debugging later.
Dry-run file transfer:
rsync -avz --dry-run -e "ssh -i ${SSH_KEY_PATH}" \
"${WORKDIR}/repo/" "${REMOTE_USER}@${REMOTE_HOST}:/tmp/deploy-test/"
This simulates the file transfer without actually copying anything, ensuring the process will work when we do it for real.
[!NOTE]
`rsync` is better than `scp` for deployments because it only transfers changed files, making updates much faster.
Stage 3: Prepare Remote Environment
What it does: Installs and configures all necessary software on your server.
This is where the magic happens! The script automatically sets up:
- Docker: The containerization platform
- Docker Compose: For managing multi-container applications
- Nginx: The web server and reverse proxy
The installation process:
# Update package lists
sudo apt-get update -y
# Install Docker if not present
if ! command -v docker &>/dev/null; then
echo "Installing Docker..."
# Add Docker's official GPG key
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
# Set up Docker repository
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker
sudo apt-get update -y
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
fi
User permissions:
# Add current user to Docker group
if ! groups $USER | grep -q docker; then
sudo usermod -aG docker $USER
fi
This allows you to run Docker commands without sudo, which is a security best practice.
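One gotcha worth knowing: the group change only applies to new login sessions. A quick way to pick it up immediately and verify (not part of the script itself):

```bash
newgrp docker                  # open a shell with the new group membership active
docker run --rm hello-world    # should now work without sudo
```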
Service management:
# Enable and start services
sudo systemctl enable docker
sudo systemctl start docker
sudo systemctl enable nginx
sudo systemctl start nginx
systemctl enable ensures services start automatically when the server reboots.
Verification:
echo "--- Installed Versions ---"
docker --version
docker-compose --version
nginx -v
This confirms everything installed correctly and logs the versions for troubleshooting.
[!IMPORTANT]
This stage is idempotent, meaning you can run it multiple times safely. If Docker is already installed, it won't reinstall it.
Stage 4: Deploy Application on Remote
What it does: Transfers your code to the server, builds Docker images, and runs containers.
File synchronization:
APP_NAME=$(basename "${GIT_REPO_URL}" .git)
REMOTE_APP_PATH="/opt/${APP_NAME}"
# Prepare directory with correct permissions
ssh -i "${SSH_KEY_PATH}" "${REMOTE_USER}@${REMOTE_HOST}" \
"sudo mkdir -p '${REMOTE_APP_PATH}' && \
sudo chown -R \$(whoami):\$(whoami) '${REMOTE_APP_PATH}'"
# Transfer files
rsync -avz -e "ssh -i ${SSH_KEY_PATH}" --delete \
"${WORKDIR}/repo/" "${REMOTE_USER}@${REMOTE_HOST}:${REMOTE_APP_PATH}/"
The --delete flag removes files on the server that don't exist in your repository, keeping everything in sync.
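One optional refinement (not in the script as shown): exclude the `.git` directory so repository metadata never lands on the server:

```bash
rsync -avz --delete --exclude '.git/' -e "ssh -i ${SSH_KEY_PATH}" \
  "${WORKDIR}/repo/" "${REMOTE_USER}@${REMOTE_HOST}:${REMOTE_APP_PATH}/"
```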
Docker Compose deployment:
if [ -f docker-compose.yml ]; then
sudo docker compose down || true # Stop old containers
sudo docker compose build # Build new images
sudo docker compose up -d # Start containers in background
fi
Single Dockerfile deployment:
elif [ -f Dockerfile ]; then
APP_TAG=$(basename "$(pwd)")
# Build image
sudo docker build --build-arg PORT='${APP_PORT}' -t "${APP_TAG}:latest" .
# Remove old container
sudo docker stop "${APP_TAG}" || true
sudo docker rm "${APP_TAG}" || true
# Run new container
sudo docker run -d \
-p '${APP_PORT}':'${APP_PORT}' \
-e PORT='${APP_PORT}' \
--name "${APP_TAG}" \
"${APP_TAG}:latest"
fi
Health check:
echo "Waiting for app to start..."
sleep 5
if curl -fs "http://localhost:${APP_PORT}" > /dev/null 2>&1; then
echo "Application reachable on port ${APP_PORT}."
else
echo "WARNING: Application not responding yet."
fi
This gives your application a few seconds to start up, then tests if it's responding.
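If your app takes longer than a few seconds to boot, a small retry loop is more forgiving than a single sleep (a sketch, not part of the script as shown):

```bash
# Poll for up to 60 seconds (12 attempts, 5 seconds apart)
for attempt in $(seq 1 12); do
  if curl -fs "http://localhost:${APP_PORT}" > /dev/null 2>&1; then
    echo "Application reachable on port ${APP_PORT}."
    break
  fi
  echo "Waiting for app to start... (${attempt}/12)"
  sleep 5
done
```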
[!TIP]
The `-d` flag in `docker run` means "detached mode": the container runs in the background instead of blocking your terminal.
Stage 5: Configure Nginx Reverse Proxy
What it does: Sets up Nginx to route external traffic (port 80) to your Docker container.
Why use a reverse proxy?
Without Nginx, users would need to access your app like:
http://your-server.com:3000
With Nginx as a reverse proxy, they can use:
http://your-server.com
Much cleaner! Plus, Nginx handles:
- SSL/TLS certificates (HTTPS)
- Load balancing
- Static file serving
- DDoS protection
The Nginx configuration:
server {
listen 80;
server_name _;
location / {
proxy_pass http://localhost:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
Breaking it down:
- `listen 80`: Nginx listens on port 80 (HTTP)
- `server_name _`: Matches any domain/IP (use your domain here in production)
- `proxy_pass http://localhost:3000`: Forwards requests to your app
- The `proxy_set_header` lines: Pass client information to your app
Deployment process:
NGINX_CONFIG_NAME="${APP_NAME}.conf"
NGINX_PATH="/etc/nginx/sites-available/${NGINX_CONFIG_NAME}"
NGINX_LINK="/etc/nginx/sites-enabled/${NGINX_CONFIG_NAME}"
# Backup old config
if [ -f "$NGINX_PATH" ]; then
sudo mv "$NGINX_PATH" "${NGINX_PATH}.bak_$(date +%s)"
fi
# Write new config
sudo bash -c "cat > $NGINX_PATH" <<'CONF'
# ... nginx config here ...
CONF
# Enable site by creating symlink
sudo ln -sf "$NGINX_PATH" "$NGINX_LINK"
# Test configuration and reload
sudo nginx -t && sudo systemctl reload nginx
The nginx -t command tests the configuration before applying it, preventing syntax errors from breaking your web server.
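One subtlety if you want the port to come from `${APP_PORT}` rather than being hard-coded: the heredoc delimiter must be left unquoted so the shell substitutes the variable, and Nginx's own `$variables` then need escaping so the shell leaves them alone. A minimal sketch of that variant:

```bash
sudo tee "$NGINX_PATH" > /dev/null <<CONF
server {
    listen 80;
    server_name _;
    location / {
        proxy_pass http://localhost:${APP_PORT};
        proxy_set_header Host \$host;
        proxy_set_header X-Real-IP \$remote_addr;
        proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto \$scheme;
    }
}
CONF
```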
Stage 6: Validate Deployment
What it does: Runs automated checks to ensure everything is working correctly.
Service status checks:
# Check if Docker is running
sudo systemctl is-active --quiet docker && \
echo "Docker: Active" || echo "Docker: Inactive"
# Check if Nginx is running
sudo systemctl is-active --quiet nginx && \
echo "Nginx: Active" || echo "Nginx: Inactive"
Container health:
sudo docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
This shows all running containers with their status and port mappings.
End-to-end test:
if curl -fs "http://localhost" > /dev/null 2>&1; then
echo "SUCCESS: Application reachable via Nginx on port 80!"
else
echo "ERROR: Application not responding through Nginx!"
fi
This simulates a real user accessing your application through Nginx.
[!NOTE]
The `-f` flag in curl makes it fail silently on HTTP errors, and `-s` makes it quiet. Perfect for scripting!
Stage 7: Final Cleanup and Log Management
What it does: Performs maintenance tasks to keep your server clean and efficient.
Remove broken symlinks:
if [ -d /etc/nginx/sites-enabled ]; then
find /etc/nginx/sites-enabled -xtype l -delete || true
fi
Clean up old Docker resources:
# Remove all stopped (exited) containers
docker ps -a --filter "status=exited" --format '{{.ID}}' | \
xargs -r -n1 docker rm -v || true
# Remove unused images
docker image prune -f || true
# Remove unused networks
docker network prune -f || true
This prevents your server from filling up with old Docker artifacts.
Log rotation:
# Compress current log
gzip -c "${LOGFILE}" > "${LOGFILE}.gz" || true
# Delete logs older than 30 days
find "${LOGDIR}" -type f -name "deploy_*.log.gz" -mtime +30 -exec rm -f {} \;
This keeps your deployment history while managing disk space.
Advanced Features
Idempotency
What is idempotency?
A script is idempotent if running it multiple times produces the same result as running it once. Our script is idempotent because:
- `rsync --delete` ensures remote files match local files exactly
- Old Docker containers are removed before creating new ones
- Nginx symlinks use `-sf` (force) to overwrite existing links
- Software installations check whether each tool is already installed
Why it matters: You can safely re-run the deployment without worrying about duplicate resources or conflicts.
Error Handling
The script includes comprehensive error handling:
die() {
log "ERROR" "$2"
exit "${1:-1}"
}
# Usage example:
if [ -z "${GIT_REPO_URL}" ]; then
die 10 "Git repository URL is required"
fi
Each error has a unique exit code for easier troubleshooting:
- 10-19: Parameter validation errors
- 20-29: Repository and SSH errors
- 30-39: Remote setup errors
- 40-49: Deployment errors
- 50-59: Nginx configuration errors
- 60+: Validation errors
Comprehensive Logging
Every action is logged with timestamps:
log "INFO" "Starting Stage 3: remote environment preparation..."
Logs are:
- Written to timestamped files
- Compressed automatically
- Rotated after 30 days
- Displayed in real-time during deployment
Common Issues and Troubleshooting
Issue 1: SSH Connection Fails
Symptoms:
ERROR: Cannot SSH into remote. Check SSH key, username, host IP, or firewall.
Solutions:
- Verify SSH key permissions: `chmod 600 ~/.ssh/your-key.pem`
- Test SSH manually: `ssh -i ~/.ssh/your-key.pem user@host`
- Check that the server firewall allows port 22
- Verify the SSH key is added to the server's `~/.ssh/authorized_keys`
Issue 2: Docker Build Fails
Symptoms:
ERROR: The command '/bin/sh -c npm install' returned a non-zero code
Solutions:
- Check your `Dockerfile` syntax
- Verify your app's dependencies are correct
- Look at the full error in the log file
- Try building locally first: `docker build -t test .`
Issue 3: Application Not Responding
Symptoms:
WARNING: Application not responding yet.
Solutions:
- Check container logs: `docker logs <container-name>`
- Verify the port number is correct
- Ensure your app is binding to `0.0.0.0`, not `localhost`
- Check if the container is running: `docker ps`
Issue 4: Nginx Configuration Error
Symptoms:
nginx: [emerg] unexpected "}" in /etc/nginx/sites-available/app.conf:10
Solutions:
- The script tests the configuration before applying it: `nginx -t`
- Check for syntax errors in the Nginx config section
- Verify the `APP_PORT` variable is correctly substituted
Best Practices and Security Considerations
1. SSH Key Management
Do:
- Use separate SSH keys for deployment (not your personal key)
- Restrict key permissions: `chmod 600 key.pem`
- Use SSH key passphrases for extra security
Don't:
- Commit SSH keys to Git
- Share deployment keys between projects
- Use weak key types (prefer ed25519 or 4096-bit RSA)
2. Secrets Management
Do:
- Use environment variables for sensitive data
- Consider using secret management tools (HashiCorp Vault, AWS Secrets Manager)
- Rotate credentials regularly
Don't:
- Hardcode passwords or API keys in your code
- Commit `.env` files to Git
- Log sensitive information
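As a concrete example of the environment-variable approach, a hypothetical extension of the script could fall back to prompting only when the PAT isn't already set:

```bash
# Hypothetical extension: read the PAT from the environment if present, otherwise prompt for it
if [ -z "${GIT_PAT:-}" ]; then
  read -rsp "Personal Access Token: " GIT_PAT; echo
fi
```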
3. Server Security
Do:
- Keep server software updated: `sudo apt update && sudo apt upgrade`
- Use firewall rules (UFW or security groups)
- Enable automatic security updates
- Implement SSL/TLS with Let's Encrypt
Don't:
- Run containers as root unnecessarily
- Expose Docker daemon to the internet
- Use default passwords
4. Deployment Strategy
Do:
- Test deployments in a staging environment first
- Implement health checks
- Keep backups before deploying
- Use version tags for Docker images
Don't:
- Deploy directly to production without testing
- Deploy during peak traffic hours
- Skip validation checks
Taking It Further: Next Steps
Now that you have a working deployment script, consider these enhancements:
1. Add CI/CD Integration
Integrate with GitHub Actions or GitLab CI:
# .github/workflows/deploy.yml
name: Deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Deploy to server
        run: |
          echo "${{ secrets.SSH_KEY }}" > key.pem
          chmod 600 key.pem
          ./deploy.sh
2. Implement Blue-Green Deployment
Run two versions of your app and switch between them:
# Start new version on different port
docker run -d -p 3001:3000 --name app-blue app:v2
# After validation, update Nginx to point to new version
# Then remove old version
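Switching traffic can be as simple as rewriting the upstream port in the Nginx config and reloading. A sketch (here `app-green` stands in for whatever your currently live container is called, and `app.conf` for your site config):

```bash
# Point Nginx at the new version on 3001 instead of the old one on 3000
sudo sed -i 's|proxy_pass http://localhost:3000;|proxy_pass http://localhost:3001;|' \
  /etc/nginx/sites-available/app.conf
sudo nginx -t && sudo systemctl reload nginx

# Once traffic is confirmed on app-blue, retire the old container
docker stop app-green && docker rm app-green
```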
3. Add Monitoring
Use tools like:
- Prometheus for metrics
- Grafana for visualization
- Sentry for error tracking
- Uptime Robot for availability monitoring
4. Database Management
Add database backup and migration steps:
# Backup before deployment
docker exec postgres pg_dump -U user dbname > backup.sql
# Run migrations
docker exec app npm run migrate
5. Multi-Environment Support
Extend the script to handle dev/staging/production:
read -rp "Environment (dev/staging/prod): " ENVIRONMENT
# Use different configs based on environment
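From there, a simple `case` block can select per-environment settings (the hostnames, branches, and exit code below are placeholders, not values from the original script):

```bash
case "${ENVIRONMENT}" in
  dev)     REMOTE_HOST="dev.example.com";     BRANCH="develop" ;;
  staging) REMOTE_HOST="staging.example.com"; BRANCH="main" ;;
  prod)    REMOTE_HOST="prod.example.com";    BRANCH="main" ;;
  *)       die 13 "Unknown environment: ${ENVIRONMENT}" ;;
esac
```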
Conclusion
Congratulations! You now have a comprehensive understanding of automated Docker deployment using Bash scripts. Let's recap what we've covered:
✅ The basics: What deployment automation is and why it matters
✅ The workflow: How code gets from your computer to a live server
✅ The 7 stages: Every step of the deployment process in detail
✅ Advanced concepts: Idempotency, error handling, and logging
✅ Troubleshooting: Common issues and how to fix them
✅ Best practices: Security and deployment strategies
✅ Next steps: How to enhance your deployment pipeline
Key Takeaways
- Automation saves time: What used to take hours now takes minutes
- Consistency matters: The same process every time reduces errors
- Logging is crucial: Good logs make troubleshooting 10x easier
- Security is paramount: Never compromise on security practices
- Iteration is key: Start simple, then add features as needed
Your Deployment Journey
This script is a foundation, not a final destination. As you grow in your DevOps journey:
- Customize it for your specific needs
- Add features that solve your problems
- Share your improvements with the community
- Keep learning and experimenting
Additional Resources
Want to dive deeper? Check out these resources:
- Docker Documentation: docs.docker.com
- Nginx Beginner's Guide: nginx.org/en/docs/beginners_guide.html
- Bash Scripting Guide: tldp.org/LDP/abs/html/
- DevOps Roadmap: roadmap.sh/devops
Get Started Today!
Ready to deploy your first application automatically? Here's your action plan:
- Clone the script from the repository
- Set up a test server (DigitalOcean has a $5/month droplet)
- Create a simple Dockerized app (a "Hello World" Node.js app works great)
- Run the deployment script following this guide
- Celebrate when you see your app live!
Remember: Every expert was once a beginner. Don't be afraid to experiment, break things (in a test environment!), and learn from mistakes.
Questions or Feedback?
I'd love to hear about your deployment journey! Did this guide help you? What challenges did you face? What would you like to see covered in a follow-up article?
Drop your thoughts in the comments below, and happy deploying!
About This Article
This guide was created as part of the HNG Internship Stage 1 DevOps task. The script implements best practices for production-ready deployments while remaining accessible to beginners.
Connect with me: [Your social links here]
Want to learn more about DevOps? Check out the HNG Internship program for hands-on learning opportunities.
Tags: #DevOps #Docker #Nginx #BashScripting #Deployment #Automation #Linux #CloudComputing #WebDevelopment #Tutorial


