DEV Community

Vected Technologies
From Zero to Your First AWS Deployment: A Practical DevOps Walkthrough for 2026

Every AWS tutorial shows you how to click through the console. Almost none of them show you what a real DevOps engineer actually does — the architecture decisions, the IAM headaches, the pipeline failures at 2 AM, and the mental model that makes it all click. This one does.

Why Most AWS Tutorials Leave You More Confused Than When You Started
The AWS console has over 200 services.
Most beginner tutorials pick 3 of them — EC2, S3, and maybe RDS — and walk you through creating resources by clicking buttons. By the end you've launched an instance you don't fully understand, stored a file in a bucket, and feel approximately 0% more confident about working with AWS professionally.
The problem isn't the tutorial content. The problem is that button-clicking is not how real DevOps engineers think about AWS.
Real DevOps engineers think in architectures — how services connect, what fails when something goes down, how to make systems recoverable, and how to do all of it without the AWS bill becoming a startup-ending surprise.
This walkthrough is built around that mental model. We'll deploy a real application — not a demo — while building the thinking that makes everything else in AWS make sense.

The Mental Model First: What DevOps Actually Means
Before any commands or console clicks — let's establish what we're actually trying to do.
DevOps is the practice of making software delivery fast, reliable, and repeatable. That breaks down into three core concerns:
Infrastructure — The servers, networks, databases, and services your application runs on. In AWS, this means knowing which services to use, how to configure them securely, and how to provision them consistently (ideally through code, not manual clicks).
CI/CD — Continuous Integration and Continuous Deployment. The automated pipelines that take code from a developer's laptop to production without manual intervention. Every commit triggers automated tests. Every passing build can be deployed. Human error in deployment goes to near zero.
Observability — Logs, metrics, and alerts that tell you what your system is doing and when something goes wrong. A deployed application nobody is monitoring is a liability waiting to become an incident.
Everything in AWS DevOps serves one of these three concerns. Keeping that in mind stops you from drowning in service documentation.

The Stack We're Building
For this walkthrough we're going to deploy a simple Python Flask API to AWS using a proper DevOps setup. Here's what we'll use and why:
- **Application:** Python Flask API (simple REST endpoints)
- **Compute:** AWS EC2 (t2.micro — free tier)
- **Networking:** VPC + Security Groups + Elastic IP
- **Storage:** S3 (for static assets)
- **CI/CD:** GitHub Actions → EC2 deployment
- **Process management:** Gunicorn + Nginx
- **Monitoring:** CloudWatch basic metrics
- **IAM:** Least-privilege service roles
This is a real, production-appropriate stack for a small application — not a toy setup. Everything here is something you'd actually use.
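To make the rest of the walkthrough concrete, here's a minimal sketch of the kind of Flask app it assumes. The routes and placeholder data are my own; the one thing that matters is the module-level `app` object, which is what Gunicorn loads later as `app:app`:

```python
# app.py — a minimal Flask API in the shape this walkthrough assumes.
# The module-level `app` object is what Gunicorn will load as `app:app`.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/health")
def health():
    # Lightweight endpoint for smoke tests and monitoring probes
    return jsonify(status="ok")

@app.route("/api/items")
def list_items():
    # Placeholder data — swap in your real data source
    return jsonify(items=["alpha", "beta"])

if __name__ == "__main__":
    # Local development only; Gunicorn runs the app in production
    app.run(host="127.0.0.1", port=8000)
```

Anything with this shape works; the deployment steps below don't care what the endpoints actually do.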

Step 1 — IAM Setup (Don't Skip This)
The most common beginner mistake in AWS: using root credentials or admin access for everything.
Create a deployment IAM user with only the permissions it needs:
```bash
# Via the AWS CLI (install it first: pip install awscli)
aws iam create-user --user-name vsa-deploy-user
```

Create a policy document and save it as `deploy-policy.json`:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:StartInstances",
        "ec2:StopInstances",
        "s3:GetObject",
        "s3:PutObject",
        "cloudwatch:PutMetricData"
      ],
      "Resource": "*"
    }
  ]
}
```

Attach the policy to the user:

```bash
aws iam put-user-policy \
  --user-name vsa-deploy-user \
  --policy-name DeploymentPolicy \
  --policy-document file://deploy-policy.json
```

(In a real environment, scope `Resource` down to specific ARNs rather than `*`; the wildcard here keeps the walkthrough short.)
Why this matters: Least privilege IAM is the single most important security practice in AWS. If your deployment credentials are compromised, a narrow permission set limits the blast radius dramatically.
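A habit worth building early: sanity-check policy documents before attaching them. Here's a small stdlib-only sketch of the idea — the `audit_policy` function and its rules are my own illustration, not an AWS tool — that flags statements broad enough to defeat least privilege:

```python
# check_policy.py — an illustrative least-privilege lint for IAM policy
# documents. Flags wildcard actions; not an official AWS utility.
import json

def audit_policy(policy: dict) -> list[str]:
    """Return warnings for overly broad Allow statements."""
    warnings = []
    for i, stmt in enumerate(policy.get("Statement", [])):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        for action in actions:
            if action == "*":
                warnings.append(f"Statement {i}: allows ALL actions")
            elif action.endswith(":*"):
                warnings.append(f"Statement {i}: allows all of {action[:-2]}")
    return warnings

if __name__ == "__main__":
    with open("deploy-policy.json") as f:
        for warning in audit_policy(json.load(f)):
            print("WARNING:", warning)
```

Run against the policy above it stays quiet; run against an `"Action": "s3:*"` statement it complains, which is exactly the moment to stop and narrow the permissions.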

Step 2 — VPC and Networking Setup
Default VPCs work. They're also not how real applications should be structured.
```bash
# Create a VPC
aws ec2 create-vpc --cidr-block 10.0.0.0/16

# Create a public subnet
aws ec2 create-subnet \
  --vpc-id vpc-XXXXXXXX \
  --cidr-block 10.0.1.0/24 \
  --availability-zone ap-south-1a

# Create an Internet Gateway and attach it
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway \
  --vpc-id vpc-XXXXXXXX \
  --internet-gateway-id igw-XXXXXXXX

# Route table — without a default route to the IGW,
# the subnet has no internet connectivity
aws ec2 create-route-table --vpc-id vpc-XXXXXXXX
aws ec2 create-route \
  --route-table-id rtb-XXXXXXXX \
  --destination-cidr-block 0.0.0.0/0 \
  --gateway-id igw-XXXXXXXX
aws ec2 associate-route-table \
  --route-table-id rtb-XXXXXXXX \
  --subnet-id subnet-XXXXXXXX

# Security group — only allow what you need
aws ec2 create-security-group \
  --group-name vsa-app-sg \
  --description "App security group" \
  --vpc-id vpc-XXXXXXXX

# Allow HTTP, HTTPS, and SSH (restrict SSH to your IP only)
aws ec2 authorize-security-group-ingress \
  --group-id sg-XXXXXXXX \
  --protocol tcp --port 80 --cidr 0.0.0.0/0

aws ec2 authorize-security-group-ingress \
  --group-id sg-XXXXXXXX \
  --protocol tcp --port 443 --cidr 0.0.0.0/0

aws ec2 authorize-security-group-ingress \
  --group-id sg-XXXXXXXX \
  --protocol tcp --port 22 --cidr YOUR_IP/32
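The `/32` on the SSH rule matters: it pins access to exactly one address. If you ever script your security-group rules, a tiny stdlib check can refuse to apply an SSH rule wider than a single host. The `ssh_rule_is_safe` helper below is my own illustration, not part of any AWS tooling:

```python
# Illustrative guard: refuse SSH ingress rules wider than a single host.
# `ssh_rule_is_safe` is a hypothetical helper, not an AWS API.
import ipaddress

def ssh_rule_is_safe(cidr: str) -> bool:
    """True only if the CIDR covers exactly one IPv4 address (a /32)."""
    network = ipaddress.ip_network(cidr, strict=False)
    return network.num_addresses == 1

# 0.0.0.0/0 would expose SSH to the whole internet — reject it
assert not ssh_rule_is_safe("0.0.0.0/0")
# a single-host /32 is what the rule above intends
assert ssh_rule_is_safe("203.0.113.7/32")
```

Opening port 22 to `0.0.0.0/0` is one of the most commonly exploited misconfigurations on AWS, so a guard like this pays for itself quickly.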

Step 3 — EC2 Instance + Application Setup
```bash
# Launch the EC2 instance.
# AMI IDs are region-specific — this one is Amazon Linux 2023 in ap-south-1.
aws ec2 run-instances \
  --image-id ami-0f5ee92e2d63afc18 \
  --instance-type t2.micro \
  --key-name your-key-pair \
  --security-group-ids sg-XXXXXXXX \
  --subnet-id subnet-XXXXXXXX \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=vsa-app}]'

# SSH in to set up the application
ssh -i your-key.pem ec2-user@YOUR_ELASTIC_IP
```

On the instance:

```bash
sudo yum update -y
sudo yum install python3 python3-pip nginx git -y

# Clone your app and install dependencies system-wide,
# so systemd can find Gunicorn at /usr/local/bin
git clone https://github.com/yourusername/your-flask-app.git
cd your-flask-app
sudo pip3 install -r requirements.txt gunicorn

# Set up Gunicorn as a systemd service
sudo nano /etc/systemd/system/flaskapp.service
```
```ini
[Unit]
Description=Gunicorn Flask App
After=network.target

[Service]
User=ec2-user
WorkingDirectory=/home/ec2-user/your-flask-app
ExecStart=/usr/local/bin/gunicorn --workers 3 --bind 0.0.0.0:8000 app:app
Restart=always

[Install]
WantedBy=multi-user.target
```
```bash
# Reload systemd so it picks up the new unit file, then enable and start
sudo systemctl daemon-reload
sudo systemctl enable flaskapp
sudo systemctl start flaskapp
```
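Before moving on to Nginx, it's worth verifying the app actually answers. A small stdlib polling script you could run on the instance — hypothetical, and it assumes your app exposes a `/health` route (adjust the URL otherwise):

```python
# wait_for_app.py — poll an HTTP endpoint until it responds or time runs out.
# Assumes your Flask app exposes /health; adjust the URL if it doesn't.
import time
import urllib.error
import urllib.request

def wait_for_healthy(url: str, timeout: float = 30.0,
                     interval: float = 1.0) -> bool:
    """Return True once `url` answers with HTTP 200, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # not up yet — keep polling
        time.sleep(interval)
    return False

if __name__ == "__main__":
    # Gunicorn binds to port 8000 in the unit file above
    ok = wait_for_healthy("http://127.0.0.1:8000/health")
    print("app is healthy" if ok else "app did not come up in time")
```

If this times out, `sudo journalctl -u flaskapp` is the first place to look.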

Configure Nginx as a reverse proxy:

```bash
sudo nano /etc/nginx/conf.d/flaskapp.conf
```

```nginx
server {
    listen 80;
    server_name YOUR_ELASTIC_IP;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

```bash
# Validate the config before restarting — a syntax error takes the site down
sudo nginx -t
sudo systemctl enable nginx
sudo systemctl restart nginx
```

Step 4 — GitHub Actions CI/CD Pipeline
This is where DevOps gets real. Every push to main should deploy automatically.
```yaml
# .github/workflows/deploy.yml
name: Deploy to AWS EC2

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Deploy to EC2
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.EC2_HOST }}
          username: ec2-user
          key: ${{ secrets.EC2_SSH_KEY }}
          script: |
            cd /home/ec2-user/your-flask-app
            git pull origin main
            sudo pip3 install -r requirements.txt
            sudo systemctl restart flaskapp
            echo "Deployment successful ✅"
```

Set these GitHub Secrets (repo Settings → Secrets and variables → Actions):

- `EC2_HOST` — your Elastic IP
- `EC2_SSH_KEY` — the contents of your private key file

Now every git push to main triggers a deployment. No manual SSH. No manual restarts. That's CI/CD.

Step 5 — Basic CloudWatch Monitoring
```bash
# Install the CloudWatch agent on the EC2 instance
sudo yum install amazon-cloudwatch-agent -y

# Generate a basic config interactively
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard
```

Key metrics to monitor:

- CPUUtilization (alert at >80%)
- mem_used_percent, published by the agent (alert at >85%)
- disk_used_percent, published by the agent (alert at >90%)
- Application error logs

Set up a CloudWatch alarm via the console or the CLI:

```bash
aws cloudwatch put-metric-alarm \
  --alarm-name "HighCPU" \
  --metric-name CPUUtilization \
  --namespace AWS/EC2 \
  --dimensions Name=InstanceId,Value=i-XXXXXXXX \
  --statistic Average \
  --period 300 \
  --threshold 80 \
  --comparison-operator GreaterThanThreshold \
  --evaluation-periods 2 \
  --alarm-actions arn:aws:sns:REGION:ACCOUNT:your-sns-topic
```
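The `--period 300 --evaluation-periods 2` pair means the alarm fires only after two consecutive 5-minute averages breach the threshold, which guards against one noisy spike paging you at 2 AM. Here's that evaluation logic sketched in plain Python (my own illustration of the semantics, not AWS code):

```python
# Illustrative sketch of CloudWatch alarm evaluation: the alarm fires
# only when `evaluation_periods` consecutive datapoints breach.
def alarm_fires(datapoints: list[float], threshold: float,
                evaluation_periods: int) -> bool:
    """True if the most recent `evaluation_periods` averages all exceed threshold."""
    if len(datapoints) < evaluation_periods:
        return False
    recent = datapoints[-evaluation_periods:]
    return all(avg > threshold for avg in recent)

# One 5-minute spike to 95% does not fire a 2-period alarm...
assert not alarm_fires([40.0, 95.0, 42.0], threshold=80, evaluation_periods=2)
# ...but two consecutive breaching periods do
assert alarm_fires([40.0, 85.0, 91.0], threshold=80, evaluation_periods=2)
```

Tuning these two knobs is how you trade alert latency against false alarms.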

What You've Just Built
Let's take stock:
✅ VPC with proper network isolation
✅ EC2 instance with least-privilege IAM
✅ Application running behind Nginx reverse proxy
✅ Automated deployments via GitHub Actions
✅ Basic monitoring with CloudWatch alerts
This is a real DevOps setup. Not a tutorial demo — an actual architecture you could use in production for a small application.
The next steps from here: add HTTPS via AWS Certificate Manager + Load Balancer, containerize with Docker, orchestrate with ECS or EKS, and implement infrastructure-as-code with Terraform. Each of those is a full article in itself.

Learning DevOps With Real Mentorship in Indore
The challenge with learning AWS/DevOps from tutorials is that you never know when your setup is actually correct versus when it works by accident. Real learning happens when someone who has run production AWS environments at scale can look at what you've built and tell you exactly what would break at 10,000 users.
Vector Skill Academy in Indore offers AWS/DevOps training led by an ex-Amazon professional — someone who's been on the inside of the infrastructure that handles millions of requests. The program is built around real deployments, real problems, and interview preparation that reflects what companies are actually testing in 2026.
If you're serious about AWS/DevOps as a career path:
🌐 www.vectorskillacademy.com
