Most Teams Think They Have CI/CD. They Don’t.
Most teams say they have CI/CD. But if someone is still SSH-ing into a server and running Docker commands by hand, the system is not truly automated.
Most teams automate steps. Very few automate the system.
That gap is what this article is about.
This is not a theory post; it is based on a real working lab setup.
This article breaks down how GitHub Actions actually works, through both a real-world analogy and a technical walkthrough, based on a hands-on lab in which a Dockerized nginx application is deployed to an EC2 instance on AWS.
🧠 Real-World View
Think of GitHub Actions like a diligent assistant who watches your mailbox.
Your house (EC2 instance) sits inside a gated community (VPC), so only authorised people can access it.
• The house is on a street (public subnet)
• The main gate (Internet Gateway) is the only way in and out
• A traffic controller (route table) directs visitors correctly
• The front door lock (security group – ports 22 and 80) controls who can enter
📬 What Happens When Code Changes?
Every time a new letter arrives in your mailbox (code merged to main):
• The assistant drives to your house (connects to EC2 via SSH)
• Picks up the old furniture (stops and removes old container)
• Brings in new furniture (pulls and runs new Docker image)
• Sends you a message when done (workflow success notification)
The assistant has a spare key (SSH key pair) stored in a secure lockbox (GitHub Secrets), so it can always access your house without asking you every time.
👉 You never have to go there yourself.
That’s what modern deployments should feel like:
repeatable, reliable, and hands-off.
🖼️ Real World → Technical Mapping

| Real world | Technical |
| --- | --- |
| Gated community | VPC |
| Street | Public subnet |
| Main gate | Internet Gateway |
| Traffic controller | Route table |
| Front door lock | Security group (ports 22 and 80) |
| House | EC2 instance |
| Spare key | SSH key pair |
| Secure lockbox | GitHub Secrets |
| Diligent assistant | GitHub Actions workflow |
⚙️ Technical View
Think of GitHub Actions like a senior DevOps engineer who automated repetitive work.
❌ Before Automation
• Engineers SSH into EC2 for every deployment
• Run docker stop, docker pull, docker run manually
• Repeat the same steps every time
• Risk outages due to small mistakes
✅ After GitHub Actions
That same institutional knowledge is now encoded in deploy.yml.
• Pipeline triggers automatically on push to main
• A runner (ubuntu-latest VM) executes the steps
• The process is consistent and repeatable
👉 No manual intervention required
⚙️ Infrastructure as Code (Terraform)
The entire environment is provisioned using Infrastructure as Code:
• Network boundary → VPC (aws_vpc)
• Network segment → Public Subnet (aws_subnet)
• External access → Internet Gateway (aws_internet_gateway)
• Traffic routing → Route Table (aws_route_table)
• Firewall → Security Group (aws_security_group)
• Secure access → SSH Key Pair (aws_key_pair)
• Compute → EC2 Instance (aws_instance)
👉 If it’s not in version control, it doesn’t exist.
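The resource list above might look like the following minimal Terraform sketch. This is illustrative only: the CIDR ranges, AMI ID, key name, and file paths are assumptions, not the lab's actual values.

```hcl
# Minimal sketch of the lab environment — values are illustrative, not the lab's.
resource "aws_vpc" "lab" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.lab.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.lab.id
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.lab.id

  # Send all outbound traffic through the Internet Gateway
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

resource "aws_route_table_association" "public" {
  subnet_id      = aws_subnet.public.id
  route_table_id = aws_route_table.public.id
}

resource "aws_security_group" "web" {
  vpc_id = aws_vpc.lab.id

  ingress {                     # SSH — tighten the source range in practice
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {                     # HTTP for the nginx container
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_key_pair" "deploy" {
  key_name   = "lab-deploy"                  # assumed name
  public_key = file("~/.ssh/id_ed25519.pub") # assumed path
}

resource "aws_instance" "web" {
  ami                    = "ami-xxxxxxxx"    # placeholder AMI ID
  instance_type          = "t3.micro"
  subnet_id              = aws_subnet.public.id
  vpc_security_group_ids = [aws_security_group.web.id]
  key_name               = aws_key_pair.deploy.key_name
}
```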
🔄 Deployment Flow
In this lab setup:
1. Code is merged into main
2. GitHub Actions triggers automatically
3. The runner reads instructions from deploy.yml
4. It connects to EC2 via SSH using stored credentials
5. Stops and removes the old container
6. Pulls the latest Docker image
7. Runs the new container on port 80
8. Logs the result in workflow history
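The flow above can be sketched as a deploy.yml along these lines. This is a minimal sketch, not the lab's exact file: it assumes the community appleboy/ssh-action for the SSH step, and the secret names, login user, and image name are illustrative.

```yaml
# .github/workflows/deploy.yml — minimal sketch; names and image are assumptions
name: Deploy

on:
  push:
    branches: [main]          # trigger only on merges to main

jobs:
  deploy:
    runs-on: ubuntu-latest    # the runner VM that executes the steps
    steps:
      - name: Deploy container over SSH
        uses: appleboy/ssh-action@v1    # common community action for SSH steps
        with:
          host: ${{ secrets.EC2_HOST }}     # assumed secret name
          username: ubuntu                  # assumed login user
          key: ${{ secrets.EC2_SSH_KEY }}   # assumed secret name
          script: |
            docker stop web || true         # stop old container if running
            docker rm web || true           # remove it
            docker pull nginx:latest        # illustrative image
            docker run -d --name web -p 80:80 nginx:latest
```

The `|| true` guards keep the first deployment from failing when there is no old container to stop.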
👉 A manual process becomes a fully automated release pipeline
🔐 Security Controls
• EC2 access restricted by security group (ports 22 and 80 only)
• SSH authentication uses key pair (aws_key_pair)
• Private key and host stored securely in GitHub Secrets
• Deployment executed only through workflow
• No direct manual access required during normal deployments
🎯 Key Insight
Terraform builds the system.
GitHub Actions runs the system.
Together, they eliminate manual deployments.
🚀 Final Thought
If your deployment still requires:
• SSH
• Manual commands
• “That one person”
You don’t have CI/CD.
You have automation on top of manual work.
🎥 Related Video
Watch the full GitHub Actions breakdown on Cloud AIOps Hub:
👉 https://www.youtube.com/@CloudAIopsHub
#devops #cicd #aws #terraform #docker #githubactions #beginners
