<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mart Young</title>
    <description>The latest articles on DEV Community by Mart Young (@mart_young_ce778e4c31eb33).</description>
    <link>https://dev.to/mart_young_ce778e4c31eb33</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2785230%2F09bb2b0e-91c2-4a71-ae74-34155ec3b326.png</url>
      <title>DEV Community: Mart Young</title>
      <link>https://dev.to/mart_young_ce778e4c31eb33</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mart_young_ce778e4c31eb33"/>
    <language>en</language>
    <item>
      <title>🧪 Selling Digital Privacy - My GDPR Toolkit Journey</title>
      <dc:creator>Mart Young</dc:creator>
      <pubDate>Fri, 12 Dec 2025 09:25:24 +0000</pubDate>
      <link>https://dev.to/mart_young_ce778e4c31eb33/selling-digital-privacy-my-gdrp-toolkit-journey-k25</link>
      <guid>https://dev.to/mart_young_ce778e4c31eb33/selling-digital-privacy-my-gdrp-toolkit-journey-k25</guid>
      <description>&lt;p&gt;&lt;strong&gt;Two months ago, I shelved my GDRP toolkit project halfway.&lt;/strong&gt; Thoughts like "who cares?", "How accurate enough can this be", and mostly, "Who would want to buy?" was constantly what i had to deal with every step of the way. &lt;/p&gt;

&lt;p&gt;Well, it's live on Gumroad now. The idea of helping others navigate compliance while generating a trickle of passive income is a reality.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;How I built, launched, and learned from my first digital product, focused on one of the internet's most intimidating acronyms: &lt;strong&gt;GDPR&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Why GDPR
&lt;/h2&gt;

&lt;p&gt;I built a &lt;strong&gt;GDPR-compliant AWS infrastructure setup using Terraform&lt;/strong&gt;, packaged with detailed documentation, reusable Terraform modules, and deployment templates. Think of it as a &lt;strong&gt;plug-and-play toolkit&lt;/strong&gt; for DevOps engineers, founders, or freelancers who want to launch EU-friendly cloud products without hiring a privacy lawyer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why GDPR?&lt;/strong&gt; As a DevOps engineer who has worked across cloud infrastructure and compliance-heavy industries, I have witnessed first-hand how intimidating data privacy laws can be, especially for small teams. It's not that teams &lt;em&gt;don't care&lt;/em&gt;; they are overwhelmed, confused, or resource-strapped. &lt;br&gt;
And that was my in.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Sparked This Project
&lt;/h2&gt;

&lt;p&gt;One thing is quite clear: &lt;strong&gt;GDPR is notoriously vague&lt;/strong&gt;, and AWS is notoriously complex. Mix the two and they leave chaos in their wake.&lt;/p&gt;

&lt;p&gt;As a DevOps engineer and cloud consultant, I kept noticing a pattern: &lt;strong&gt;most clients knew they needed GDPR compliance&lt;/strong&gt; but rarely knew where or how to start. Some would copy-paste outdated checklists; others would trust their cloud setup's defaults. One client even said, &lt;em&gt;"We're GDPR-ready because we use AWS. Right?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Spoiler alert: &lt;strong&gt;Wrong&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;That's when it hit me: &lt;strong&gt;what if I could codify compliance&lt;/strong&gt; into reusable infrastructure that meets GDPR standards &lt;strong&gt;by design&lt;/strong&gt;?&lt;/p&gt;




&lt;h2&gt;
  
  
  The Die Was Cast (and I Nearly Gave Up)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Research &amp;amp; Validation
&lt;/h3&gt;

&lt;p&gt;Before writing a single line of Terraform, I asked around in &lt;em&gt;freelancer communities, Reddit, and WhatsApp groups&lt;/em&gt;. I posted a simple question: &lt;strong&gt;"Would a pre-built GDPR-friendly AWS starter kit help you or your clients?"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The responses were quite encouraging. Some even offered to pay for an early copy.&lt;/p&gt;

&lt;h3&gt;
  
  
  🛠️ Tools I Used
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Terraform&lt;/strong&gt;: For the infrastructure-as-code modules (VPCs, ECS, RDS, KMS, S3, CloudTrail, CloudWatch, IAM; all privacy-hardened).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Notion&lt;/strong&gt;: For organizing my checklist and writing the compliance documentation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;VS Code &amp;amp; GitHub&lt;/strong&gt;: My development and version control environment.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  ⏳ Development Time
&lt;/h3&gt;

&lt;p&gt;I worked on this project mostly late in the evenings and on weekends. It took about &lt;strong&gt;35 hours over 4 weeks&lt;/strong&gt;, mostly spent on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Testing and re-testing deployments.&lt;/li&gt;
&lt;li&gt;Writing clean, expandable Terraform code.&lt;/li&gt;
&lt;li&gt;Making sure every module met GDPR principles: data encryption, logging, auditability, and region restrictions.&lt;/li&gt;
&lt;/ul&gt;
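For a flavor of what "privacy-hardened" looks like in Terraform, here is a minimal, hedged sketch of the kinds of defaults such modules can bake in; all resource and bucket names here are illustrative, not the toolkit's actual code. It pins an EU region, enables KMS encryption at rest with key rotation, and turns on CloudTrail for auditability.

```hcl
# Illustrative sketch, not the toolkit's actual modules.
provider "aws" {
  region = "eu-west-1" # keep personal data in an EU region by default
}

# Customer-managed key with rotation enabled, for encryption at rest
resource "aws_kms_key" "data" {
  description         = "CMK for user data"
  enable_key_rotation = true
}

resource "aws_s3_bucket" "user_data" {
  bucket = "example-gdpr-user-data" # hypothetical bucket name
}

# Enforce KMS encryption on every object written to the bucket
resource "aws_s3_bucket_server_side_encryption_configuration" "user_data" {
  bucket = aws_s3_bucket.user_data.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = aws_kms_key.data.arn
    }
  }
}

# Auditability: record all API activity
resource "aws_cloudtrail" "audit" {
  name           = "gdpr-audit-trail"
  s3_bucket_name = "example-gdpr-audit-logs" # hypothetical, pre-created log bucket
}
```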

&lt;h3&gt;
  
  
  Biggest Challenge?
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Legal confidence.&lt;/strong&gt; As you already know, I'm not a lawyer, so I had to make sure the toolkit aligned with the &lt;em&gt;technical requirements&lt;/em&gt; of GDPR: not just the spirit, but the letter. I reviewed the &lt;a href="https://ico.org.uk/" rel="noopener noreferrer"&gt;ICO guidance&lt;/a&gt;, read AWS whitepapers extensively, and had a privacy advisor look over the docs (worth every penny).&lt;/p&gt;

&lt;h2&gt;
  
  
  Launching on Gumroad
&lt;/h2&gt;

&lt;h3&gt;
  
  
  🛒 Why?
&lt;/h3&gt;

&lt;p&gt;I chose &lt;strong&gt;Gumroad&lt;/strong&gt; for three reasons:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;No upfront cost&lt;/strong&gt;, perfect for testing the waters.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simple setup&lt;/strong&gt;, I could launch in minutes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Digital product friendly&lt;/strong&gt;, Gumroad's audience &lt;em&gt;gets&lt;/em&gt; toolkits, templates, and niche SaaS assets.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  💰 Pricing Strategy
&lt;/h3&gt;

&lt;p&gt;I decided on &lt;strong&gt;$59 for personal use&lt;/strong&gt; and &lt;strong&gt;$99 for basic commercial/agency use&lt;/strong&gt;. &lt;br&gt;
Why?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It undercut most legal templates.&lt;/li&gt;
&lt;li&gt;The value was in &lt;em&gt;saving time&lt;/em&gt; and &lt;em&gt;avoiding legal risk&lt;/em&gt;, a nightmare for freelancers and bootstrapped startups.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;h3&gt;
  
  
  📈 Results
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Over $600 in sales&lt;/strong&gt; within the first 2 weeks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Most traffic came from X (formerly Twitter)&lt;/strong&gt;, followed by direct links shared via email.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;11 purchases&lt;/strong&gt;, mostly for personal use.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Most customers were freelancers&lt;/strong&gt;, not startup founders. That flipped my assumptions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gumroad discovery is weak&lt;/strong&gt;. Most sales came from &lt;em&gt;my own network&lt;/em&gt;, not their marketplace.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;"Compliance" sounds boring.&lt;/strong&gt; I got better results using phrases like &lt;em&gt;"data privacy-ready"&lt;/em&gt; and &lt;em&gt;"cloud compliance toolkit"&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Lessons Learned
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Overdeliver with clarity.&lt;/strong&gt; Documentation matters more than design.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;People pay for simplicity.&lt;/strong&gt; Just making AWS + GDPR less intimidating is valuable.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  What's Next?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Expanding the kit&lt;/strong&gt; to include support for other cloud providers like Azure &amp;amp; GCP.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Creating a micro-course&lt;/strong&gt; on GDPR-compliant cloud design.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Exploring privacy laws beyond GDPR&lt;/strong&gt;, like California's CCPA or Brazil's LGPD.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I am also testing &lt;strong&gt;Lemon Squeezy and Payhip&lt;/strong&gt; for better visibility.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;I did not set out to "start a business"; I just wanted to solve a problem I noticed over and over again. If you are thinking about launching your own digital product, here's my nudge: &lt;strong&gt;pick a niche, validate it fast, and ship even if it's not perfect.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Want to check out the toolkit? &lt;a href="https://martyoung.gumroad.com/l/swhae" rel="noopener noreferrer"&gt;Find it here&lt;/a&gt;.&lt;br&gt;
Got any questions? I'd love to hear your thoughts.&lt;/p&gt;

</description>
      <category>privacy</category>
      <category>showdev</category>
      <category>terraform</category>
      <category>aws</category>
    </item>
    <item>
      <title>Build a Blue/Green deployment with Nginx Auto-Failover</title>
      <dc:creator>Mart Young</dc:creator>
      <pubDate>Tue, 09 Dec 2025 15:54:05 +0000</pubDate>
      <link>https://dev.to/mart_young_ce778e4c31eb33/build-a-bluegreen-deployment-with-nginx-auto-failover-4fji</link>
      <guid>https://dev.to/mart_young_ce778e4c31eb33/build-a-bluegreen-deployment-with-nginx-auto-failover-4fji</guid>
      <description>&lt;p&gt;&lt;em&gt;Imagine you run two identical kitchens: Blue and Green. One serves customers, the other is warmed up and ready. If the active kitchen has trouble, you quietly switch orders to the standby and nobody notices. That’s Blue/Green. In this post we’ll build it ourselves, line by line, with Nginx doing the instant handoff—no prior code or prebuilt images required.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  How It Fits Together (Quick Map)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Nginx&lt;/strong&gt;: Front door. Sends traffic to the main pool, retries fast, and falls back to backup if the main one misbehaves. Logs everything as JSON.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Apps (Blue &amp;amp; Green)&lt;/strong&gt;: Same Node.js app, two copies. Env vars label which is which. They expose &lt;code&gt;/healthz&lt;/code&gt;, &lt;code&gt;/version&lt;/code&gt;, and chaos endpoints so we can test.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dockerfile&lt;/strong&gt;: Builds the app once; both Blue and Green use it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;docker-compose.yaml&lt;/strong&gt;: Starts both apps, Nginx, and (if you want) the Slack watcher. Sets ports and health checks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;nginx.conf.template&lt;/strong&gt;: Tells Nginx who’s primary, who’s backup, and to be impatient with failures.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;watcher.py&lt;/strong&gt;: Reads Nginx logs and posts to Slack when failover or high errors happen (optional, but helpful).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;.env&lt;/strong&gt;: One place to pick the active pool and set labels/alert thresholds.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  What You’ll Learn
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Blue/Green basics (two identical apps, one live, one ready).&lt;/li&gt;
&lt;li&gt;How Nginx routes to a primary and instantly falls back to a backup.&lt;/li&gt;
&lt;li&gt;Why health checks, short timeouts, and retries make failover fast.&lt;/li&gt;
&lt;li&gt;How to add chaos endpoints to &lt;em&gt;prove&lt;/em&gt; failover works.&lt;/li&gt;
&lt;li&gt;How to read structured logs (and send Slack alerts) so you know which pool served traffic.&lt;/li&gt;
&lt;li&gt;How to wire it all together with Docker Compose—no Kubernetes needed.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Docker + Docker Compose.&lt;/li&gt;
&lt;li&gt;Node.js (so we can build the tiny app locally).&lt;/li&gt;
&lt;li&gt;(Optional) Slack webhook URL if you want alerts.&lt;/li&gt;
&lt;li&gt;A terminal and a text editor. That’s it.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  1) Create the Project from Scratch
&lt;/h2&gt;

&lt;p&gt;Let’s start with nothing and build every file ourselves. Copy/paste is fine—understanding why each piece exists is the real goal.&lt;/p&gt;

&lt;h3&gt;
  
  
  1.1 package.json
&lt;/h3&gt;

&lt;p&gt;This defines our minimal Node app and its dependencies.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; package.json &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;'
{
  "name": "blue-green-app",
  "version": "1.0.0",
  "main": "app.js",
  "license": "MIT",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "express": "^4.18.2"
  }
}
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  1.2 app.js (with health + chaos endpoints)
&lt;/h3&gt;

&lt;p&gt;This tiny server:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Responds to &lt;code&gt;/healthz&lt;/code&gt; so Nginx can decide if we’re alive.&lt;/li&gt;
&lt;li&gt;Responds to &lt;code&gt;/version&lt;/code&gt; with headers that tell us which pool handled the request.&lt;/li&gt;
&lt;li&gt;Has chaos endpoints so we can intentionally break one pool and watch traffic fail over.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; app.js &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;'
const express = require('express');
const app = express();

const APP_POOL = process.env.APP_POOL || 'unknown';
const RELEASE_ID = process.env.RELEASE_ID || 'unknown';
const PORT = process.env.PORT || 3000;

let chaosMode = false;
let chaosType = 'error'; // 'error' or 'timeout'

// Add headers for tracing
app.use((req, res, next) =&amp;gt; {
  res.setHeader('X-App-Pool', APP_POOL);
  res.setHeader('X-Release-Id', RELEASE_ID);
  next();
});

app.get('/', (req, res) =&amp;gt; {
  res.json({
    service: 'Blue/Green Demo',
    pool: APP_POOL,
    releaseId: RELEASE_ID,
    status: chaosMode ? 'chaos' : 'healthy',
    chaosMode,
    chaosType: chaosMode ? chaosType : null,
    timestamp: new Date().toISOString(),
    endpoints: { version: '/version', health: '/healthz', chaos: '/chaos/start, /chaos/stop' }
  });
});

app.get('/healthz', (req, res) =&amp;gt; {
  res.status(200).json({ status: 'healthy', pool: APP_POOL });
});

app.get('/version', (req, res) =&amp;gt; {
  if (chaosMode &amp;amp;&amp;amp; chaosType === 'error') return res.status(500).json({ error: 'Chaos: server error' });
  if (chaosMode &amp;amp;&amp;amp; chaosType === 'timeout') return; // simulate hang
  res.json({ version: '1.0.0', pool: APP_POOL, releaseId: RELEASE_ID, timestamp: new Date().toISOString() });
});

app.post('/chaos/start', (req, res) =&amp;gt; {
  const mode = req.query.mode || 'error';
  chaosMode = true;
  chaosType = mode;
  res.json({ message: 'Chaos started', mode, pool: APP_POOL });
});

app.post('/chaos/stop', (req, res) =&amp;gt; {
  chaosMode = false;
  chaosType = 'error';
  res.json({ message: 'Chaos stopped', pool: APP_POOL });
});

app.listen(PORT, '0.0.0.0', () =&amp;gt; {
  console.log(`App (&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;APP_POOL&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;) listening on &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PORT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;`);
  console.log(`Release ID: &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;RELEASE_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;`);
});
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  1.3 Dockerfile (build the app image)
&lt;/h3&gt;

&lt;p&gt;We’ll build the same image for Blue and Green; only the environment variables differ.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Dockerfile &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;'
FROM node:18-alpine
WORKDIR /app

# Install dependencies
COPY package*.json ./
RUN npm install --only=production

# Copy app code
COPY . .

EXPOSE 3000
CMD ["npm", "start"]
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  2) Nginx Config (Auto-Failover Upstreams)
&lt;/h2&gt;

&lt;p&gt;Nginx is our traffic director. We template it so a single env var (&lt;code&gt;ACTIVE_POOL&lt;/code&gt;) chooses who is primary. Create &lt;code&gt;nginx.conf.template&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; nginx.conf.template &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;'
events {
    worker_connections 1024;
}

http {
    # Structured JSON access logs
    log_format custom_json '{"time":"&lt;/span&gt;&lt;span class="nv"&gt;$time_iso8601&lt;/span&gt;&lt;span class="sh"&gt;"'
                          ',"remote_addr":"&lt;/span&gt;&lt;span class="nv"&gt;$remote_addr&lt;/span&gt;&lt;span class="sh"&gt;"'
                          ',"method":"&lt;/span&gt;&lt;span class="nv"&gt;$request_method&lt;/span&gt;&lt;span class="sh"&gt;"'
                          ',"uri":"&lt;/span&gt;&lt;span class="nv"&gt;$request_uri&lt;/span&gt;&lt;span class="sh"&gt;"'
                          ',"status":&lt;/span&gt;&lt;span class="nv"&gt;$status&lt;/span&gt;&lt;span class="sh"&gt;'
                          ',"bytes_sent":&lt;/span&gt;&lt;span class="nv"&gt;$bytes_sent&lt;/span&gt;&lt;span class="sh"&gt;'
                          ',"request_time":&lt;/span&gt;&lt;span class="nv"&gt;$request_time&lt;/span&gt;&lt;span class="sh"&gt;'
                          ',"upstream_response_time":"&lt;/span&gt;&lt;span class="nv"&gt;$upstream_response_time&lt;/span&gt;&lt;span class="sh"&gt;"'
                          ',"upstream_status":"&lt;/span&gt;&lt;span class="nv"&gt;$upstream_status&lt;/span&gt;&lt;span class="sh"&gt;"'
                          ',"upstream_addr":"&lt;/span&gt;&lt;span class="nv"&gt;$upstream_addr&lt;/span&gt;&lt;span class="sh"&gt;"'
                          ',"pool":"&lt;/span&gt;&lt;span class="nv"&gt;$sent_http_x_app_pool&lt;/span&gt;&lt;span class="sh"&gt;"'
                          ',"release":"&lt;/span&gt;&lt;span class="nv"&gt;$sent_http_x_release_id&lt;/span&gt;&lt;span class="sh"&gt;"}';

    upstream blue_pool {
        server app-blue:3000 max_fails=1 fail_timeout=3s;
        server app-green:3000 backup;
    }

    upstream green_pool {
        server app-green:3000 max_fails=1 fail_timeout=3s;
        server app-blue:3000 backup;
    }

    server {
        listen 80;
        server_name localhost;

        # Write JSON logs (shared volume)
        access_log /var/log/nginx/access.json custom_json;

        # Health check for LB
        location /healthz {
            access_log off;
            return 200 "healthy&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;";
            add_header Content-Type text/plain;
        }

        location / {
            proxy_pass http://&lt;/span&gt;&lt;span class="nv"&gt;$UPSTREAM_POOL&lt;/span&gt;&lt;span class="sh"&gt;;

            proxy_set_header Host &lt;/span&gt;&lt;span class="nv"&gt;$host&lt;/span&gt;&lt;span class="sh"&gt;;
            proxy_set_header X-Real-IP &lt;/span&gt;&lt;span class="nv"&gt;$remote_addr&lt;/span&gt;&lt;span class="sh"&gt;;
            proxy_set_header X-Forwarded-For &lt;/span&gt;&lt;span class="nv"&gt;$proxy_add_x_forwarded_for&lt;/span&gt;&lt;span class="sh"&gt;;
            proxy_set_header X-Forwarded-Proto &lt;/span&gt;&lt;span class="nv"&gt;$scheme&lt;/span&gt;&lt;span class="sh"&gt;;

            proxy_connect_timeout 1s;
            proxy_send_timeout 3s;
            proxy_read_timeout 3s;

            proxy_next_upstream error timeout http_500 http_502 http_503 http_504;
            proxy_next_upstream_tries 2;
            proxy_next_upstream_timeout 10s;

            proxy_pass_request_headers on;
            proxy_hide_header X-Powered-By;
        }
    }
}
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why these settings? (plain English)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;max_fails=1 fail_timeout=3s&lt;/code&gt;: one bad request is enough to say “try the other one” for a few seconds.&lt;/li&gt;
&lt;li&gt;Short timeouts (1s connect, 3s send/read): don’t wait around; switch fast.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;proxy_next_upstream&lt;/code&gt; + retries: if the main one errors or stalls, immediately try the backup within ~10s total.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;What just happened? Nginx knows who’s main, who’s backup, and to give up quickly on a slow/broken main.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  3) Optional Alerts: watcher.py + requirements.txt
&lt;/h2&gt;

&lt;p&gt;Think of this as a friendly pager: it reads Nginx’s JSON logs and pings Slack when failover happens or errors spike. If you don’t want alerts, you can skip this section and remove the watcher service later.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;requirements.txt&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; requirements.txt &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;'
requests==2.32.3
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;watcher.py&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; watcher.py &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;'
import json, os, time, requests
from collections import deque
from datetime import datetime, timezone

LOG_PATH = os.environ.get("NGINX_LOG_FILE", "/var/log/nginx/access.json")
SLACK_WEBHOOK_URL = os.environ.get("SLACK_WEBHOOK_URL", "")
SLACK_PREFIX = os.environ.get("SLACK_PREFIX", "from: @Watcher")
ACTIVE_POOL = os.environ.get("ACTIVE_POOL", "blue")
ERROR_RATE_THRESHOLD = float(os.environ.get("ERROR_RATE_THRESHOLD", "2"))
WINDOW_SIZE = int(os.environ.get("WINDOW_SIZE", "200"))
ALERT_COOLDOWN_SEC = int(os.environ.get("ALERT_COOLDOWN_SEC", "300"))
MAINTENANCE_MODE = os.environ.get("MAINTENANCE_MODE", "false").lower() == "true"

def now_iso(): return datetime.now(timezone.utc).isoformat()

def post_to_slack(text: str):
    if not SLACK_WEBHOOK_URL:
        return
    try:
        requests.post(SLACK_WEBHOOK_URL, json={"text": f"{SLACK_PREFIX} | {text}"}, timeout=5).raise_for_status()
    except Exception:
        pass

def parse(line: str):
    try:
        data = json.loads(line.strip())
        return {
            "pool": data.get("pool"),
            "release": data.get("release"),
            "status": int(data["status"]) if data.get("status") else None,
            "upstream_status": str(data.get("upstream_status") or ""),
            "upstream_addr": data.get("upstream_addr"),
        }
    except Exception:
        return None

class AlertState:
    def __init__(self):
        self.last_pool = ACTIVE_POOL
        self.window = deque(maxlen=WINDOW_SIZE)
        self.cooldowns = {}
    def cooldown_ok(self, key):
        now = time.time()
        last = self.cooldowns.get(key)
        if last is None or (now - last) &amp;gt;= ALERT_COOLDOWN_SEC:
            self.cooldowns[key] = now
            return True
        return False
    def error_rate_pct(self):
        if not self.window: return 0.0
        err = 0
        for evt in self.window:
            if any(s.startswith("5") for s in evt.get("upstream_status","").split(",") if s):
                err += 1
            elif evt.get("status") and 500 &amp;lt;= int(evt["status"]) &amp;lt;= 599:
                err += 1
        return (err / len(self.window)) * 100.0
    def handle(self, evt):
        self.window.append(evt)
        if MAINTENANCE_MODE:
            return
        pool = evt.get("pool")
        if pool and self.last_pool and pool != self.last_pool:
            if self.cooldown_ok(f"failover_to_{pool}"):
                post_to_slack(f"*Failover Detected*: {self.last_pool} → {pool}&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;• time: {now_iso()}&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;• error_rate: {self.error_rate_pct():.2f}%&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;• upstream: {evt.get('upstream_addr')}")
            self.last_pool = pool
        if len(self.window) &amp;gt;= max(10, int(WINDOW_SIZE * 0.5)):
            rate = self.error_rate_pct()
            if rate &amp;gt; ERROR_RATE_THRESHOLD and self.cooldown_ok(f"error_rate_{int(round(rate))}"):
                post_to_slack(f"*High Error Rate*: {rate:.2f}% over last {len(self.window)} requests&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;• time: {now_iso()}&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;• active_pool: {pool or self.last_pool}")

def tail(path):
    with open(path, "r") as f:
        f.seek(0, os.SEEK_END)
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.2)
                continue
            yield line

def main():
    state = AlertState()
    while not os.path.exists(LOG_PATH):
        time.sleep(0.5)
    for line in tail(LOG_PATH):
        evt = parse(line)
        if evt: state.handle(evt)

if __name__ == "__main__":
    main()
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
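To build intuition for the sliding-window error rate, here is a standalone sketch of the same counting rule the watcher uses, run against a few fabricated log events (the sample values are made up for illustration):

```python
from collections import deque

def error_rate_pct(window):
    # Same rule as AlertState.error_rate_pct: an event counts as an error
    # if any upstream attempt returned 5xx, or the final status was 5xx.
    if not window:
        return 0.0
    err = 0
    for evt in window:
        if any(s.startswith("5") for s in evt.get("upstream_status", "").split(",") if s):
            err += 1
        elif evt.get("status") and 500 <= int(evt["status"]) <= 599:
            err += 1
    return (err / len(window)) * 100.0

window = deque(maxlen=200)
window.extend([
    {"status": 200, "upstream_status": "200"},
    {"status": 502, "upstream_status": "500,502"},  # both attempts failed
    {"status": 200, "upstream_status": "500,200"},  # failover masked the first 500
    {"status": 200, "upstream_status": "200"},
])
print(error_rate_pct(window))  # 50.0: two of four requests touched a 5xx upstream
```

Note the third event: the client got a 200 because the backup answered, yet it still counts as an error. That is exactly why the watcher inspects `upstream_status` first; successful failovers would otherwise hide a sick primary.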






&lt;h2&gt;
  
  
  4) docker-compose.yaml (Build + Orchestrate)
&lt;/h2&gt;

&lt;p&gt;Compose glues everything together: it builds the single app image, runs it twice (Blue/Green), starts Nginx, and (optionally) the Slack watcher. This is the “one file to rule them all.”&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; docker-compose.yaml &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;'
version: '3.8'

services:
  app-blue:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: blue-app
    environment:
      - APP_POOL=blue
      - RELEASE_ID=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;RELEASE_ID_BLUE&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;
      - PORT=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PORT&lt;/span&gt;&lt;span class="k"&gt;:-&lt;/span&gt;&lt;span class="nv"&gt;3000&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;
    ports:
      - "8081:3000"
    healthcheck:
      test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider http://127.0.0.1:3000/healthz || exit 1"]
      interval: 5s
      timeout: 3s
      retries: 3
      start_period: 10s

  app-green:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: green-app
    environment:
      - APP_POOL=green
      - RELEASE_ID=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;RELEASE_ID_GREEN&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;
      - PORT=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PORT&lt;/span&gt;&lt;span class="k"&gt;:-&lt;/span&gt;&lt;span class="nv"&gt;3000&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;
    ports:
      - "8082:3000"
    healthcheck:
      test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider http://127.0.0.1:3000/healthz || exit 1"]
      interval: 5s
      timeout: 3s
      retries: 3
      start_period: 10s

  nginx:
    image: nginx:alpine
    container_name: nginx-lb
    ports:
      - "8080:80"
    environment:
      - ACTIVE_POOL=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;ACTIVE_POOL&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;
      - UPSTREAM_POOL=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;ACTIVE_POOL&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;_pool
    volumes:
      - ./nginx.conf.template:/etc/nginx/nginx.conf.template:ro
      - nginx_logs:/var/log/nginx
    depends_on:
      - app-blue
      - app-green
    command: &amp;gt;
      sh -c "
        envsubst '&lt;/span&gt;&lt;span class="nv"&gt;$$&lt;/span&gt;&lt;span class="sh"&gt;UPSTREAM_POOL' &amp;lt; /etc/nginx/nginx.conf.template &amp;gt; /etc/nginx/nginx.conf &amp;amp;&amp;amp;
        nginx -g 'daemon off;'
      "

  alert_watcher:
    image: python:3.11-slim
    container_name: alert-watcher
    depends_on:
      - nginx
    environment:
      - SLACK_WEBHOOK_URL=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;SLACK_WEBHOOK_URL&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;
      - SLACK_PREFIX=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;SLACK_PREFIX&lt;/span&gt;&lt;span class="k"&gt;:-&lt;/span&gt;&lt;span class="nv"&gt;from&lt;/span&gt;:&lt;span class="p"&gt; @Watcher&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;
      - ACTIVE_POOL=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;ACTIVE_POOL&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;
      - ERROR_RATE_THRESHOLD=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;ERROR_RATE_THRESHOLD&lt;/span&gt;&lt;span class="k"&gt;:-&lt;/span&gt;&lt;span class="nv"&gt;2&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;
      - WINDOW_SIZE=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;WINDOW_SIZE&lt;/span&gt;&lt;span class="k"&gt;:-&lt;/span&gt;&lt;span class="nv"&gt;200&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;
      - ALERT_COOLDOWN_SEC=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;ALERT_COOLDOWN_SEC&lt;/span&gt;&lt;span class="k"&gt;:-&lt;/span&gt;&lt;span class="nv"&gt;300&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;
      - MAINTENANCE_MODE=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;MAINTENANCE_MODE&lt;/span&gt;&lt;span class="k"&gt;:-&lt;/span&gt;&lt;span class="nv"&gt;false&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;
      - NGINX_LOG_FILE=/var/log/nginx/access.json
    volumes:
      - nginx_logs:/var/log/nginx
      - ./watcher.py:/opt/watcher/watcher.py:ro
      - ./requirements.txt:/opt/watcher/requirements.txt:ro
    command: &amp;gt;
      sh -c "pip install --no-cache-dir -r /opt/watcher/requirements.txt &amp;amp;&amp;amp; python /opt/watcher/watcher.py"

volumes:
  nginx_logs:
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Want it ultra-minimal? Comment out/remove &lt;code&gt;alert_watcher&lt;/code&gt; if you don’t need Slack alerts. The stack still works without it.&lt;/p&gt;

&lt;p&gt;What just happened? We wired four pieces: one shared app image, two containers (Blue/Green) with different env vars, Nginx in front, and an optional watcher that shares Nginx logs.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  5) .env (Wire It All Up)
&lt;/h2&gt;

&lt;p&gt;One place for all the knobs: which pool is primary, release labels, and alert thresholds. Changing &lt;code&gt;ACTIVE_POOL&lt;/code&gt; later lets you flip who is “live” without touching code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; .env &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;'
# Which pool is primary (blue or green)
ACTIVE_POOL=blue

# Release IDs (just labels for observability)
RELEASE_ID_BLUE=release-v1.0.0-blue
RELEASE_ID_GREEN=release-v1.0.0-green

# App port inside the container
PORT=3000

# Optional Slack alerts
SLACK_WEBHOOK_URL=
SLACK_PREFIX=from: @YourName
ERROR_RATE_THRESHOLD=2
WINDOW_SIZE=200
ALERT_COOLDOWN_SEC=300
MAINTENANCE_MODE=false
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  6) Run Everything
&lt;/h2&gt;

&lt;p&gt;Bring the whole stack up. Compose will build the image once and reuse it for both Blue and Green, then start Nginx and the watcher.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
docker compose ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see containers for blue, green, nginx, and (optionally) alert-watcher.&lt;/p&gt;




&lt;h2&gt;
  
  
  7) Sanity Checks
&lt;/h2&gt;

&lt;p&gt;These calls prove traffic flows and headers are set so you can tell which pool responded.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Through Nginx (main entry)&lt;/span&gt;
curl http://localhost:8080/version

&lt;span class="c"&gt;# Direct to Blue&lt;/span&gt;
curl http://localhost:8081/version

&lt;span class="c"&gt;# Direct to Green&lt;/span&gt;
curl http://localhost:8082/version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see JSON with &lt;code&gt;pool&lt;/code&gt; and &lt;code&gt;releaseId&lt;/code&gt;. By default, Blue is active.&lt;/p&gt;
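&lt;p&gt;For illustration, a response might look roughly like this (the exact shape depends on your app; the values here are the &lt;code&gt;.env&lt;/code&gt; defaults from step 5):&lt;/p&gt;

```json
{ "pool": "blue", "releaseId": "release-v1.0.0-blue" }
```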

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9mkjqrbpn5n7q3zkivng.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9mkjqrbpn5n7q3zkivng.png" alt="Live Endpoint" width="800" height="282"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  8) Prove Auto-Failover (Chaos Testing)
&lt;/h2&gt;

&lt;p&gt;Time to break things on purpose. We’ll poison Blue and watch Nginx slide traffic to Green without customers seeing errors.&lt;/p&gt;

&lt;p&gt;1) Baseline (Blue active):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://localhost:8080/version
&lt;span class="c"&gt;# Expect X-App-Pool: blue&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2) Break Blue:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST http://localhost:8081/chaos/start?mode&lt;span class="o"&gt;=&lt;/span&gt;error
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3) Check via Nginx:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://localhost:8080/version
&lt;span class="c"&gt;# Expect X-App-Pool: green (failover)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;4) Heal Blue:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST http://localhost:8081/chaos/stop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;5) Try timeout chaos:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST http://localhost:8081/chaos/start?mode&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;timeout&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;6) Light load test (responses should stay 200, mostly served by the active pool):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;for &lt;/span&gt;i &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;1..50&lt;span class="o"&gt;}&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do &lt;/span&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; http://localhost:8080/version &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/dev/null&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;What just happened? We proved failover under two kinds of pain: errors and timeouts. Nginx noticed, retried, and shifted traffic to keep responses healthy.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F66kgjbywkbvlf36l33k1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F66kgjbywkbvlf36l33k1.png" alt="Chaos Mode" width="800" height="215"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  9) Switch Pools Manually
&lt;/h2&gt;

&lt;p&gt;Edit &lt;code&gt;.env&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ACTIVE_POOL=green
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose down
docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Nginx will now route to Green as primary, Blue as backup.&lt;/p&gt;
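&lt;p&gt;If you prefer not to edit the file by hand, a one-liner can flip the pool. A minimal sketch (it creates a stub &lt;code&gt;.env&lt;/code&gt; if one is missing, so it is safe to try standalone):&lt;/p&gt;

```shell
# Flip ACTIVE_POOL to green in .env; a stub file is created if none exists
# so this sketch is safe to run outside the project directory.
[ -f .env ] || printf 'ACTIVE_POOL=blue\n' > .env
sed -i 's/^ACTIVE_POOL=.*/ACTIVE_POOL=green/' .env
grep '^ACTIVE_POOL=' .env   # prints ACTIVE_POOL=green
```

&lt;p&gt;Then restart the stack as in the step above for the change to take effect.&lt;/p&gt;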




&lt;h2&gt;
  
  
  10) Slack Alerts (If Enabled)
&lt;/h2&gt;

&lt;p&gt;If you kept &lt;code&gt;alert_watcher&lt;/code&gt;, set &lt;code&gt;SLACK_WEBHOOK_URL&lt;/code&gt; in &lt;code&gt;.env&lt;/code&gt;, then:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Trigger chaos on Blue:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST http://localhost:8081/chaos/start?mode&lt;span class="o"&gt;=&lt;/span&gt;error
&lt;span class="k"&gt;for &lt;/span&gt;i &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;1..50&lt;span class="o"&gt;}&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do &lt;/span&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; http://localhost:8080/version &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/dev/null&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;done
&lt;/span&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST http://localhost:8081/chaos/stop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see Slack messages for the failover and, if the error rate crosses the threshold, a high-error-rate alert. Tune thresholds in &lt;code&gt;.env&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1jph2sgkpiynwe9w7sqf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1jph2sgkpiynwe9w7sqf.png" alt="Slack Alert" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;What just happened? The watcher tailed Nginx’s JSON logs, spotted failover/high-error signals, and pinged Slack so humans know immediately.&lt;/p&gt;
&lt;/blockquote&gt;
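&lt;p&gt;The watcher's core check can be sketched in a few lines of shell. This is an assumed simplification, not the shipped &lt;code&gt;watcher.py&lt;/code&gt;: pull status codes out of the JSON access log and compute a 5xx error rate in percent.&lt;/p&gt;

```shell
# Assumed simplification of the watcher's check (not the real watcher.py):
# count 5xx entries in a JSON access log and print the error rate in percent.
printf '%s\n' '{"status":200}' '{"status":502}' '{"status":200}' '{"status":500}' > access.json
awk -F'"status":' '{ s = $2 + 0; total++; if (s >= 500) errs++ }
  END { printf "%.0f\n", 100 * errs / total }' access.json   # prints 50
```

&lt;p&gt;The real watcher additionally applies a sliding window (&lt;code&gt;WINDOW_SIZE&lt;/code&gt;) and a cooldown (&lt;code&gt;ALERT_COOLDOWN_SEC&lt;/code&gt;) before pinging Slack.&lt;/p&gt;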




&lt;h2&gt;
  
  
  11) Cleanup
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose down
&lt;span class="c"&gt;# Full clean (images/volumes):&lt;/span&gt;
docker compose down &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="nt"&gt;--rmi&lt;/span&gt; all
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;What just happened? We shut down everything, and if you ran the full clean, you also removed images and volumes for a fresh slate next time.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Troubleshooting Quick Hits
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Ports busy&lt;/strong&gt;: Free 8080/8081/8082 or change mappings in compose.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No failover&lt;/strong&gt;: Check health endpoints (&lt;code&gt;/healthz&lt;/code&gt;), timeouts, and chaos mode.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Headers missing&lt;/strong&gt;: Ensure app sets &lt;code&gt;X-App-Pool&lt;/code&gt;/&lt;code&gt;X-Release-Id&lt;/code&gt; and Nginx passes headers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Slack silent&lt;/strong&gt;: Verify webhook, internet egress, and watcher logs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Slow failover&lt;/strong&gt;: Tighten &lt;code&gt;proxy_connect_timeout&lt;/code&gt; and tune &lt;code&gt;max_fails&lt;/code&gt;/&lt;code&gt;fail_timeout&lt;/code&gt; on the upstream servers.&lt;/li&gt;
&lt;/ul&gt;
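&lt;p&gt;For the "slow failover" item, the relevant knobs live in the Nginx upstream and proxy config. An illustrative shape only; the names and values here are examples, not the exact config from this stack:&lt;/p&gt;

```nginx
# Illustrative only - adjust names and values to your nginx.conf.template
upstream app_pool {
    server app-blue:3000  max_fails=2 fail_timeout=5s;  # primary
    server app-green:3000 backup;                       # takes over on failure
}
# In the server/location block, fail fast and retry the next upstream:
#   proxy_connect_timeout 2s;
#   proxy_next_upstream error timeout http_500 http_502 http_503;
```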




&lt;h2&gt;
  
  
  Why This Matters
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Zero downtime&lt;/strong&gt;: Swap versions or survive failures without users noticing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Confidence&lt;/strong&gt;: Chaos testing proves failover actually works.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Clarity&lt;/strong&gt;: Structured logs + headers show exactly who served each request.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simplicity&lt;/strong&gt;: Docker Compose + Nginx — no Kubernetes required.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Add CI to build/push the app image, tag blue/green releases.&lt;/li&gt;
&lt;li&gt;Add canary routing (gradual traffic shift) on top of blue/green.&lt;/li&gt;
&lt;li&gt;Ship logs to ELK/Datadog, add dashboards.&lt;/li&gt;
&lt;li&gt;Extend alerts to email/PagerDuty.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;You just built Blue/Green with automatic failover, chaos testing, and optional Slack alerts — from scratch. Happy shipping! 🚀&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>node</category>
      <category>architecture</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>From Zero to Production: A Complete Guide to Deploying Microservices with Terraform, Ansible and CI/CD</title>
      <dc:creator>Mart Young</dc:creator>
      <pubDate>Tue, 09 Dec 2025 15:30:22 +0000</pubDate>
      <link>https://dev.to/mart_young_ce778e4c31eb33/from-zero-to-production-a-complete-guide-to-deploying-microservices-with-terraform-ansible-and-3nbf</link>
      <guid>https://dev.to/mart_young_ce778e4c31eb33/from-zero-to-production-a-complete-guide-to-deploying-microservices-with-terraform-ansible-and-3nbf</guid>
      <description>&lt;p&gt;&lt;em&gt;How I built a production-ready DevOps pipeline for a microservices TODO application - and how you can too, even if you're just starting out.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Introduction: Why This Matters
&lt;/h2&gt;

&lt;p&gt;If you're reading this, you've probably heard terms like "DevOps," "Infrastructure as Code," and "CI/CD" thrown around, but maybe you're not entirely sure what they mean or how they fit together. That's exactly where I was when I started.&lt;/p&gt;

&lt;p&gt;This guide isn't just about completing a task - it's about understanding the &lt;strong&gt;why&lt;/strong&gt; behind each decision, learning from common mistakes, and building something you can be proud of. By the end, you'll have deployed a real application to the cloud with automated infrastructure, proper security, and a professional workflow.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What you'll build:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A microservices application with 5 different services (Vue.js, Go, Node.js, Java, Python)&lt;/li&gt;
&lt;li&gt;Automated cloud infrastructure using Terraform&lt;/li&gt;
&lt;li&gt;Server configuration and deployment with Ansible&lt;/li&gt;
&lt;li&gt;CI/CD pipelines that detect when things go wrong&lt;/li&gt;
&lt;li&gt;Multi-environment setup (dev, staging, production)&lt;/li&gt;
&lt;li&gt;Secure HTTPS with automatic SSL certificates&lt;/li&gt;
&lt;li&gt;A single command that deploys everything&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What you'll learn:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How containerization actually works (beyond just "docker run")&lt;/li&gt;
&lt;li&gt;Why infrastructure as code matters (and how it saves you from disasters)&lt;/li&gt;
&lt;li&gt;How to think about security in a cloud environment&lt;/li&gt;
&lt;li&gt;The importance of automation and what happens when you skip it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's dive in.&lt;/p&gt;




&lt;h2&gt;
  
  
  Part 1: Understanding What We're Building
&lt;/h2&gt;

&lt;p&gt;Before we start writing code, let's understand what we're actually building. This isn't just a TODO app - it's a &lt;strong&gt;microservices architecture&lt;/strong&gt;, which means instead of one big application, we have multiple small services that work together.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Architecture
&lt;/h3&gt;

&lt;p&gt;Think of it like a restaurant:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Frontend (Vue.js)&lt;/strong&gt; - The dining room where customers interact&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auth API (Go)&lt;/strong&gt; - The host who checks if you have a reservation (authentication)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Todos API (Node.js)&lt;/strong&gt; - The waiter who takes your order (manages your todos)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Users API (Java)&lt;/strong&gt; - The manager who knows all the customers (user management)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Log Processor (Python)&lt;/strong&gt; - The kitchen staff who process orders (background processing)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Redis&lt;/strong&gt; - The order board where everyone can see what's happening (message queue)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each service runs in its own container, which is like giving each part of the restaurant its own kitchen. If the waiter (Todos API) has a problem, it doesn't crash the whole restaurant.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Containerization?
&lt;/h3&gt;

&lt;p&gt;You might be thinking: "Why not just run everything on one server?" Great question! Here's why containers matter:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Isolation&lt;/strong&gt;: If one service crashes, others keep running&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consistency&lt;/strong&gt;: "It works on my machine" becomes "it works everywhere"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt;: Need more power? Spin up more containers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Portability&lt;/strong&gt;: Move from AWS to Azure? Just change where you run containers&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  The Challenge
&lt;/h3&gt;

&lt;p&gt;The real challenge isn't just getting containers to run - it's:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Making sure they can talk to each other&lt;/li&gt;
&lt;li&gt;Securing them with HTTPS&lt;/li&gt;
&lt;li&gt;Automating deployment so you don't manually SSH into servers&lt;/li&gt;
&lt;li&gt;Detecting when someone changes things manually (drift detection)&lt;/li&gt;
&lt;li&gt;Managing multiple environments without chaos&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's what makes this a real-world project.&lt;/p&gt;




&lt;h2&gt;
  
  
  Part 2: Setting Up Your Development Environment
&lt;/h2&gt;

&lt;p&gt;Before we write any code, let's make sure you have everything you need. Don't worry if some of these are new - I'll explain what each one does.&lt;/p&gt;

&lt;h3&gt;
  
  
  Required Accounts
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;GitHub Account&lt;/strong&gt; (Free)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This is where your code lives and where CI/CD runs&lt;/li&gt;
&lt;li&gt;Think of it as your code's home and your automation's brain&lt;/li&gt;
&lt;li&gt;Sign up at github.com if you don't have one&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;AWS Account&lt;/strong&gt; (Free tier available)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This is where your servers will run&lt;/li&gt;
&lt;li&gt;AWS has a free tier that's perfect for learning&lt;/li&gt;
&lt;li&gt;You'll need a credit card, but we'll stay within free limits&lt;/li&gt;
&lt;li&gt;Sign up at aws.amazon.com&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Domain Name&lt;/strong&gt; (Optional but recommended - ~$10-15/year)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This is your website's address (like &lt;code&gt;yourname.com&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;You can use services like Namecheap, GoDaddy, or Cloudflare&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Why you need it&lt;/strong&gt;: Let's Encrypt (free SSL) requires a real domain&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Alternative&lt;/strong&gt;: You can test with &lt;code&gt;localhost&lt;/code&gt; but won't get real SSL&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Installing Required Tools
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Docker &amp;amp; Docker Compose&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# On Ubuntu/Debian&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install &lt;/span&gt;docker.io docker-compose-plugin

&lt;span class="c"&gt;# Verify installation&lt;/span&gt;
docker &lt;span class="nt"&gt;--version&lt;/span&gt;
docker compose version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What is Docker?&lt;/strong&gt; Think of it as a shipping container for software. Just like shipping containers standardize how goods are transported, Docker standardizes how applications run.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Terraform&lt;/strong&gt; (Version 1.5.0 or higher)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Download from hashicorp.com or use package manager&lt;/span&gt;
wget https://releases.hashicorp.com/terraform/1.5.0/terraform_1.5.0_linux_amd64.zip
unzip terraform_1.5.0_linux_amd64.zip
&lt;span class="nb"&gt;sudo mv &lt;/span&gt;terraform /usr/local/bin/

&lt;span class="c"&gt;# Verify&lt;/span&gt;
terraform version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What is Terraform?&lt;/strong&gt; It's like a blueprint for your cloud infrastructure. Instead of clicking buttons in AWS console (which you'll forget), you write code that describes what you want, and Terraform makes it happen.&lt;/p&gt;
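&lt;p&gt;To make that concrete, here is a minimal, illustrative Terraform snippet (the AMI ID is a placeholder, not one from this project): a few lines of code describe a server, and &lt;code&gt;terraform apply&lt;/code&gt; creates it.&lt;/p&gt;

```hcl
# Minimal illustration: one EC2 instance described as code.
# "ami-0abcdef1234567890" is a placeholder; use a real AMI for your region.
resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890"
  instance_type = "t2.micro"

  tags = {
    Name = "demo-server"
  }
}
```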

&lt;p&gt;&lt;strong&gt;Ansible&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install &lt;/span&gt;ansible

&lt;span class="c"&gt;# Verify&lt;/span&gt;
ansible &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What is Ansible?&lt;/strong&gt; Think of it as a remote control for servers. Instead of SSHing into each server and typing commands, you write a "playbook" that tells Ansible what to do, and it does it on all your servers.&lt;/p&gt;
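&lt;p&gt;As a taste of what a playbook looks like (illustrative; the group name and task are placeholders, not taken from this project):&lt;/p&gt;

```yaml
# Illustrative playbook: install Docker on every host in the "webservers" group
- hosts: webservers
  become: true
  tasks:
    - name: Ensure Docker is installed
      ansible.builtin.apt:
        name: docker.io
        state: present
```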

&lt;p&gt;&lt;strong&gt;AWS CLI&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="s2"&gt;"https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip"&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="s2"&gt;"awscliv2.zip"&lt;/span&gt;
unzip awscliv2.zip
&lt;span class="nb"&gt;sudo&lt;/span&gt; ./aws/install

&lt;span class="c"&gt;# Configure with your credentials&lt;/span&gt;
aws configure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What is AWS CLI?&lt;/strong&gt; It's a command-line interface to AWS. Instead of using the web console, you can control AWS from your terminal.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setting Up AWS
&lt;/h3&gt;

&lt;p&gt;This is where many beginners get stuck, so let's go through it step by step.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 1: Create an IAM User
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Why not use your root account?&lt;/strong&gt; Security best practice - root account has unlimited power. If it gets compromised, your entire AWS account is at risk.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to AWS Console → IAM → Users&lt;/li&gt;
&lt;li&gt;Click "Create user"&lt;/li&gt;
&lt;li&gt;Name it something like &lt;code&gt;terraform-user&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Important&lt;/strong&gt;: Check "Provide user access to the AWS Management Console" if you want console access, OR just programmatic access for CLI/API&lt;/li&gt;
&lt;li&gt;Attach policies:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;AmazonEC2FullAccess&lt;/code&gt; (for creating servers)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;AmazonS3FullAccess&lt;/code&gt; (for storing Terraform state)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;AmazonDynamoDBFullAccess&lt;/code&gt; (for state locking)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;AmazonSESFullAccess&lt;/code&gt; (for email notifications)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Save the &lt;strong&gt;Access Key ID&lt;/strong&gt; and &lt;strong&gt;Secret Access Key&lt;/strong&gt; - you'll need these!&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Step 2: Create S3 Bucket for Terraform State
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;What is Terraform state?&lt;/strong&gt; Terraform needs to remember what it created. This "memory" is stored in a state file. We put it in S3 so it's:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Backed up automatically&lt;/li&gt;
&lt;li&gt;Accessible from anywhere&lt;/li&gt;
&lt;li&gt;Versioned (can see history)&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Go to S3 → Create bucket&lt;/li&gt;
&lt;li&gt;Name it something like &lt;code&gt;yourname-terraform-state&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Important settings&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Region: Choose one (remember which one!)&lt;/li&gt;
&lt;li&gt;Block Public Access: Keep all enabled (security)&lt;/li&gt;
&lt;li&gt;Versioning: &lt;strong&gt;Enable this&lt;/strong&gt; (so you can recover if state gets corrupted)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Click Create&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Step 3: Create DynamoDB Table for State Locking
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Why do we need locking?&lt;/strong&gt; Imagine two people trying to deploy at the same time. Without locking, they might both try to create the same server, causing conflicts. DynamoDB prevents this.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to DynamoDB → Create table&lt;/li&gt;
&lt;li&gt;Table name: &lt;code&gt;terraform-state-lock&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Partition key: &lt;code&gt;LockID&lt;/code&gt; (type: String)&lt;/li&gt;
&lt;li&gt;Table settings: Use default&lt;/li&gt;
&lt;li&gt;Capacity: On-demand (pay per request - perfect for this use case)&lt;/li&gt;
&lt;li&gt;Click Create&lt;/li&gt;
&lt;/ol&gt;
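&lt;p&gt;With the bucket and table in place, you point Terraform at them via a backend block. Roughly like this; the bucket name and region must match what you created, and the &lt;code&gt;key&lt;/code&gt; path is just an example:&lt;/p&gt;

```hcl
# Backend sketch: store state in the S3 bucket from Step 2 and lock with the
# DynamoDB table from Step 3. Replace bucket/region with your own values.
terraform {
  backend "s3" {
    bucket         = "yourname-terraform-state"
    key            = "todo-app/terraform.tfstate"
    region         = "us-east-1"   # must match your bucket's region
    dynamodb_table = "terraform-state-lock"
    encrypt        = true
  }
}
```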

&lt;h4&gt;
  
  
  Step 4: Create EC2 Key Pair
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;What is a key pair?&lt;/strong&gt; It's like a password, but more secure. Instead of typing a password, you use a private key file to authenticate.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to EC2 → Key Pairs → Create key pair&lt;/li&gt;
&lt;li&gt;Name: &lt;code&gt;my-terraform-key&lt;/code&gt; (or whatever you prefer)&lt;/li&gt;
&lt;li&gt;Key pair type: RSA&lt;/li&gt;
&lt;li&gt;Private key file format: &lt;code&gt;.pem&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Click Create&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IMPORTANT&lt;/strong&gt;: The &lt;code&gt;.pem&lt;/code&gt; file downloads automatically. Save it somewhere safe! You'll need it to SSH into your servers.&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Step 5: Verify AWS CLI Works
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Test your credentials&lt;/span&gt;
aws sts get-caller-identity

&lt;span class="c"&gt;# Should show your user ARN&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If this works, you're all set! If not, check your &lt;code&gt;aws configure&lt;/code&gt; settings.&lt;/p&gt;




&lt;h2&gt;
  
  
  Part 3: Containerizing Your Application
&lt;/h2&gt;

&lt;p&gt;Now that your environment is set up, let's containerize the application. This is where the magic happens.&lt;/p&gt;

&lt;h3&gt;
  
  
  Understanding Dockerfiles
&lt;/h3&gt;

&lt;p&gt;A Dockerfile is like a recipe. It tells Docker:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;What base image to start with (like choosing an operating system)&lt;/li&gt;
&lt;li&gt;What files to copy&lt;/li&gt;
&lt;li&gt;What commands to run&lt;/li&gt;
&lt;li&gt;What port to expose&lt;/li&gt;
&lt;li&gt;What command to run when the container starts&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Creating Dockerfiles for All Services
&lt;/h3&gt;

&lt;p&gt;Now, you might be wondering: "How do I know what Dockerfile to create for each service?" Great question! Let me show you the pattern.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The rule of thumb:&lt;/strong&gt; Each service folder needs its own Dockerfile. Look at your project structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DevOps-deployment/
├── frontend/          → Needs Dockerfile (Vue.js)
├── auth-api/          → Needs Dockerfile (Go) 
├── todos-api/         → Needs Dockerfile (Node.js)
├── users-api/         → Needs Dockerfile (Java)
└── log-message-processor/ → Needs Dockerfile (Python)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;How to figure out what each service needs:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Check what language/framework it uses (look for &lt;code&gt;package.json&lt;/code&gt;, &lt;code&gt;pom.xml&lt;/code&gt;, &lt;code&gt;requirements.txt&lt;/code&gt;, &lt;code&gt;go.mod&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Find the entry point (usually &lt;code&gt;server.js&lt;/code&gt;, &lt;code&gt;main.go&lt;/code&gt;, &lt;code&gt;main.py&lt;/code&gt;, or a compiled JAR)&lt;/li&gt;
&lt;li&gt;Determine the port it runs on (check the code or config files)&lt;/li&gt;
&lt;li&gt;Follow the pattern for that language&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Frontend Dockerfile (Vue.js)
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;First, check the service:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;frontend
&lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-la&lt;/span&gt;
&lt;span class="c"&gt;# You'll see: package.json, src/, public/&lt;/span&gt;
&lt;span class="c"&gt;# This tells you: It's a Vue.js app that needs to be built&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Vue.js apps are special&lt;/strong&gt; - they compile to static HTML/CSS/JS files that need a web server. We use a "multi-stage build":&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Build stage&lt;/strong&gt;: Use Node.js to compile the Vue app&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Runtime stage&lt;/strong&gt;: Use nginx (lightweight web server) to serve the compiled files
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;# Step 1: Build the application&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;node:18-alpine&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;build&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package*.json ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm run build

&lt;span class="c"&gt;# Step 2: Serve it with nginx&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; nginx:alpine&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=build /app/dist /usr/share/nginx/html&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; nginx.conf /etc/nginx/conf.d/default.conf&lt;/span&gt;
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Breaking it down:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;FROM node:18-alpine AS build&lt;/code&gt; - Start with Node.js for building (the &lt;code&gt;AS build&lt;/code&gt; names this stage)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;COPY package*.json ./&lt;/code&gt; - Copy dependency files first (Docker caching optimization!)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;RUN npm install&lt;/code&gt; - Install dependencies&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;RUN npm run build&lt;/code&gt; - Compile Vue.js to static files (creates &lt;code&gt;dist/&lt;/code&gt; folder)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;FROM nginx:alpine&lt;/code&gt; - Start a NEW stage with nginx (much smaller image)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;COPY --from=build&lt;/code&gt; - Copy the built files from the build stage&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;EXPOSE 80&lt;/code&gt; - nginx serves on port 80&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why two stages?&lt;/strong&gt; The build stage has Node.js + all build tools (~500MB). The runtime stage only has nginx + static files (~20MB). This makes the final image 25x smaller!&lt;/p&gt;

&lt;h4&gt;
  
  
  Auth API Dockerfile (Go)
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Check the service:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;auth-api
&lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-la&lt;/span&gt;
&lt;span class="c"&gt;# You'll see: go.mod, main.go&lt;/span&gt;
&lt;span class="c"&gt;# This tells you: It's Go, entry point is main.go&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Go is special&lt;/strong&gt; - it compiles to a single binary, so no runtime is needed. We still use a multi-stage build:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;# Build stage&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;golang:1.21-alpine&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;build&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; go.mod go.sum ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;go mod download
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nv"&gt;CGO_ENABLED&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 &lt;span class="nv"&gt;GOOS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;linux go build &lt;span class="nt"&gt;-o&lt;/span&gt; auth-api

&lt;span class="c"&gt;# Runtime stage&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; alpine:latest&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;apk &lt;span class="nt"&gt;--no-cache&lt;/span&gt; add ca-certificates
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=build /app/auth-api /auth-api&lt;/span&gt;
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 8081&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["/auth-api"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Breaking it down:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;FROM golang:1.21-alpine AS build&lt;/code&gt; - Go compiler for building&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;RUN go mod download&lt;/code&gt; - Download Go dependencies&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;RUN go build -o auth-api&lt;/code&gt; - Compile to a single binary file&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;FROM alpine:latest&lt;/code&gt; - Tiny Linux (only 5MB!)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;COPY --from=build /app/auth-api&lt;/code&gt; - Copy the compiled binary&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;CMD ["/auth-api"]&lt;/code&gt; - Run the binary&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key differences from Vue.js:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go compiles to a single binary (no runtime needed!)&lt;/li&gt;
&lt;li&gt;Final image is super small (~10MB vs ~500MB for Node.js)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;CGO_ENABLED=0&lt;/code&gt; creates a static binary (no external dependencies)&lt;/li&gt;
&lt;/ul&gt;
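
&lt;p&gt;Because &lt;code&gt;CGO_ENABLED=0&lt;/code&gt; produces a fully static binary, you could shrink the runtime stage even further with the empty &lt;code&gt;scratch&lt;/code&gt; base image. A sketch, assuming the service makes outbound TLS calls and therefore needs CA certificates copied in from the build stage:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;# Build stage (same as before)
FROM golang:1.21-alpine AS build
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN apk add --no-cache ca-certificates
RUN CGO_ENABLED=0 GOOS=linux go build -o auth-api

# Runtime stage: nothing but the binary and the CA certs
FROM scratch
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=build /app/auth-api /auth-api
EXPOSE 8081
CMD ["/auth-api"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The trade-off: &lt;code&gt;scratch&lt;/code&gt; has no shell, so you can't &lt;code&gt;docker exec&lt;/code&gt; into the container to debug - which is why many teams stick with &lt;code&gt;alpine&lt;/code&gt;.&lt;/p&gt;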

&lt;h4&gt;
  
  
  Todos API Dockerfile (Node.js)
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Check the service:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;todos-api
&lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-la&lt;/span&gt;
&lt;span class="c"&gt;# You'll see: package.json, server.js, routes.js&lt;/span&gt;
&lt;span class="c"&gt;# This tells you: It's Node.js, entry point is server.js&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Check package.json for the start command:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"scripts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"start"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"node server.js"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Node.js API Dockerfile&lt;/strong&gt; (simpler than Vue.js - no build step needed):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; node:18-alpine&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;

&lt;span class="c"&gt;# Copy dependency files first (Docker caching optimization)&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package*.json ./&lt;/span&gt;

&lt;span class="c"&gt;# Install dependencies (production only for smaller image)&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm ci &lt;span class="nt"&gt;--only&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;production

&lt;span class="c"&gt;# Copy application code&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;

&lt;span class="c"&gt;# Expose the port (check server.js to see which port)&lt;/span&gt;
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 8082&lt;/span&gt;

&lt;span class="c"&gt;# Start the application&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["node", "server.js"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why this pattern?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;npm ci --only=production&lt;/code&gt; - Faster and more reliable than &lt;code&gt;npm install&lt;/code&gt;, and skips dev dependencies (newer npm versions prefer the equivalent &lt;code&gt;--omit=dev&lt;/code&gt; flag)&lt;/li&gt;
&lt;li&gt;Copy &lt;code&gt;package*.json&lt;/code&gt; first - If dependencies don't change, Docker reuses the cached layer&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;node:18-alpine&lt;/code&gt; - Lightweight Node.js image&lt;/li&gt;
&lt;/ul&gt;
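
&lt;p&gt;One caveat with &lt;code&gt;COPY . .&lt;/code&gt;: it copies everything in the folder, including a local &lt;code&gt;node_modules/&lt;/code&gt; if you've run &lt;code&gt;npm install&lt;/code&gt; on your machine. A &lt;code&gt;.dockerignore&lt;/code&gt; file keeps the build context small and the cache stable - a minimal sketch (adjust to what your repo actually contains):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;node_modules
npm-debug.log
.git
.env
Dockerfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;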

&lt;p&gt;&lt;strong&gt;How to test it:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Build the image&lt;/span&gt;
docker build &lt;span class="nt"&gt;-t&lt;/span&gt; todos-api ./todos-api

&lt;span class="c"&gt;# Run it&lt;/span&gt;
docker run &lt;span class="nt"&gt;-p&lt;/span&gt; 8082:8082 todos-api

&lt;span class="c"&gt;# Test it&lt;/span&gt;
curl http://localhost:8082
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Users API Dockerfile (Java Spring Boot)
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Check the service:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;users-api
&lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-la&lt;/span&gt;
&lt;span class="c"&gt;# You'll see: pom.xml, src/&lt;/span&gt;
&lt;span class="c"&gt;# This tells you: It's Java with Maven, needs to be compiled&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Java services need two stages:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Build stage - Compile the code&lt;/li&gt;
&lt;li&gt;Runtime stage - Run the compiled JAR
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;# Stage 1: Build&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;maven:3.9-eclipse-temurin-17&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;build&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;

&lt;span class="c"&gt;# Copy Maven config first (caching optimization)&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; pom.xml .&lt;/span&gt;
&lt;span class="c"&gt;# Download dependencies (cached if pom.xml doesn't change)&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;mvn dependency:go-offline

&lt;span class="c"&gt;# Copy source code&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; src ./src&lt;/span&gt;

&lt;span class="c"&gt;# Build the application&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;mvn clean package &lt;span class="nt"&gt;-DskipTests&lt;/span&gt;

&lt;span class="c"&gt;# Stage 2: Runtime&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; eclipse-temurin:17-jre-alpine&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;

&lt;span class="c"&gt;# Install JAXB dependencies (needed for Java 17+)&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;apk add &lt;span class="nt"&gt;--no-cache&lt;/span&gt; wget &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /app/lib &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    wget &lt;span class="nt"&gt;-q&lt;/span&gt; &lt;span class="nt"&gt;-O&lt;/span&gt; /app/lib/jaxb-api.jar https://repo1.maven.org/maven2/javax/xml/bind/jaxb-api/2.3.1/jaxb-api-2.3.1.jar &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    wget &lt;span class="nt"&gt;-q&lt;/span&gt; &lt;span class="nt"&gt;-O&lt;/span&gt; /app/lib/jaxb-runtime.jar https://repo1.maven.org/maven2/org/glassfish/jaxb/jaxb-runtime/2.3.1/jaxb-runtime-2.3.1.jar &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    apk del wget

&lt;span class="c"&gt;# Copy the built JAR from build stage&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=build /app/target/*.jar app.jar&lt;/span&gt;

&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 8083&lt;/span&gt;

&lt;span class="c"&gt;# Run the Spring Boot application&lt;/span&gt;
&lt;span class="k"&gt;ENTRYPOINT&lt;/span&gt;&lt;span class="s"&gt; ["java", \&lt;/span&gt;
    "--add-opens", "java.base/java.lang=ALL-UNNAMED", \
    "--add-opens", "java.base/java.lang.reflect=ALL-UNNAMED", \
    "--add-opens", "java.base/java.util=ALL-UNNAMED", \
    "-cp", "app.jar:/app/lib/*", \
    "org.springframework.boot.loader.JarLauncher"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why this is complex:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Java needs compilation (Maven does this)&lt;/li&gt;
&lt;li&gt;Spring Boot creates a "fat JAR" (includes everything)&lt;/li&gt;
&lt;li&gt;Newer Java versions (11+) removed some libraries (JAXB), so we add them back&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;--add-opens&lt;/code&gt; flags are needed for Java 17+ module system&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Don't worry if this looks complicated&lt;/strong&gt; - Java Dockerfiles are the most complex. The pattern is always:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Build stage: Install dependencies, compile&lt;/li&gt;
&lt;li&gt;Runtime stage: Copy compiled artifact, run it&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Log Message Processor Dockerfile (Python)
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Check the service:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;log-message-processor
&lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-la&lt;/span&gt;
&lt;span class="c"&gt;# You'll see: requirements.txt, main.py&lt;/span&gt;
&lt;span class="c"&gt;# This tells you: It's Python, entry point is main.py&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Python Dockerfile:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; python:3.11-slim&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;

&lt;span class="c"&gt;# Install build dependencies (needed to compile some Python packages)&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nt"&gt;--no-install-recommends&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    gcc &lt;span class="se"&gt;\
&lt;/span&gt;    g++ &lt;span class="se"&gt;\
&lt;/span&gt;    python3-dev &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; /var/lib/apt/lists/&lt;span class="k"&gt;*&lt;/span&gt;

&lt;span class="c"&gt;# Copy and install Python dependencies&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; requirements.txt .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--no-cache-dir&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt

&lt;span class="c"&gt;# Remove build dependencies (they're not needed at runtime)&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get purge &lt;span class="nt"&gt;-y&lt;/span&gt; gcc g++ python3-dev &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    apt-get autoremove &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    apt-get clean &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; /var/lib/apt/lists/&lt;span class="k"&gt;*&lt;/span&gt;

&lt;span class="c"&gt;# Copy application code&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;

&lt;span class="c"&gt;# Run the application&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["python", "main.py"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why install then remove build dependencies?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Some Python packages need to compile C extensions&lt;/li&gt;
&lt;li&gt;We install &lt;code&gt;gcc&lt;/code&gt;, &lt;code&gt;g++&lt;/code&gt; to compile them&lt;/li&gt;
&lt;li&gt;Removing them afterwards saves ~200MB - but only if the install and the purge happen in the same &lt;code&gt;RUN&lt;/code&gt; instruction; each &lt;code&gt;RUN&lt;/code&gt; creates an immutable layer, so files deleted in a later layer still count towards the image size&lt;/li&gt;
&lt;li&gt;The compiled packages still work without the compilers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Simpler alternative (if no C extensions needed):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; python:3.11-slim&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; requirements.txt .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--no-cache-dir&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["python", "main.py"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  The Pattern: How to Create Any Dockerfile
&lt;/h3&gt;

&lt;p&gt;Here's the mental model:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Identify the language&lt;/strong&gt; → Check for language-specific files&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;package.json&lt;/code&gt; → Node.js&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;pom.xml&lt;/code&gt; or &lt;code&gt;build.gradle&lt;/code&gt; → Java&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;requirements.txt&lt;/code&gt; → Python&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;go.mod&lt;/code&gt; → Go&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Cargo.toml&lt;/code&gt; → Rust&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Find the base image&lt;/strong&gt; → Use official images&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Node.js → &lt;code&gt;node:18-alpine&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Java → &lt;code&gt;eclipse-temurin:17-jre-alpine&lt;/code&gt; (runtime) or &lt;code&gt;maven:3.9-eclipse-temurin-17&lt;/code&gt; (build)&lt;/li&gt;
&lt;li&gt;Python → &lt;code&gt;python:3.11-slim&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Go → &lt;code&gt;golang:1.21-alpine&lt;/code&gt; (build) or &lt;code&gt;alpine:latest&lt;/code&gt; (runtime)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Copy dependencies first&lt;/strong&gt; → Docker caching optimization&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Copy &lt;code&gt;package.json&lt;/code&gt; / &lt;code&gt;pom.xml&lt;/code&gt; / &lt;code&gt;requirements.txt&lt;/code&gt; / &lt;code&gt;go.mod&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Install dependencies&lt;/li&gt;
&lt;li&gt;Then copy application code&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Expose the port&lt;/strong&gt; → Check the code for which port it uses&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Set the command&lt;/strong&gt; → How to start the application&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
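
&lt;p&gt;Put together, steps 1-5 give you a template you can adapt to almost any service. A sketch (the base image, port, and start command are placeholders - substitute the ones for your language):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;# 1-2. Identify the language, pick the official base image
FROM node:18-alpine
WORKDIR /app

# 3. Copy dependency files first (cached layer), then install, then the code
COPY package*.json ./
RUN npm ci --only=production
COPY . .

# 4. The port your code listens on
EXPOSE 8080

# 5. How to start the application
CMD ["node", "server.js"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;For compiled languages (Go, Java, Rust), wrap the same five steps in a build stage and add a second, minimal runtime stage that copies only the built artifact.&lt;/p&gt;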

&lt;h3&gt;
  
  
  Testing Each Dockerfile
&lt;/h3&gt;

&lt;p&gt;Before adding to docker-compose, test each one:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Test Frontend&lt;/span&gt;
&lt;span class="nb"&gt;cd &lt;/span&gt;frontend
docker build &lt;span class="nt"&gt;-t&lt;/span&gt; frontend-test &lt;span class="nb"&gt;.&lt;/span&gt;
docker run &lt;span class="nt"&gt;-p&lt;/span&gt; 8080:80 frontend-test
curl http://localhost:8080

&lt;span class="c"&gt;# Test Auth API&lt;/span&gt;
&lt;span class="nb"&gt;cd&lt;/span&gt; ../auth-api
docker build &lt;span class="nt"&gt;-t&lt;/span&gt; auth-api-test &lt;span class="nb"&gt;.&lt;/span&gt;
docker run &lt;span class="nt"&gt;-p&lt;/span&gt; 8081:8081 auth-api-test
curl http://localhost:8081/health

&lt;span class="c"&gt;# Test Todos API&lt;/span&gt;
&lt;span class="nb"&gt;cd&lt;/span&gt; ../todos-api
docker build &lt;span class="nt"&gt;-t&lt;/span&gt; todos-api-test &lt;span class="nb"&gt;.&lt;/span&gt;
docker run &lt;span class="nt"&gt;-p&lt;/span&gt; 8082:8082 todos-api-test
curl http://localhost:8082

&lt;span class="c"&gt;# Test Users API&lt;/span&gt;
&lt;span class="nb"&gt;cd&lt;/span&gt; ../users-api
docker build &lt;span class="nt"&gt;-t&lt;/span&gt; users-api-test &lt;span class="nb"&gt;.&lt;/span&gt;
docker run &lt;span class="nt"&gt;-p&lt;/span&gt; 8083:8083 users-api-test
curl http://localhost:8083/health

&lt;span class="c"&gt;# Test Log Processor&lt;/span&gt;
&lt;span class="nb"&gt;cd&lt;/span&gt; ../log-message-processor
docker build &lt;span class="nt"&gt;-t&lt;/span&gt; log-processor-test &lt;span class="nb"&gt;.&lt;/span&gt;
docker run log-processor-test
&lt;span class="c"&gt;# (This might not have HTTP endpoint, check logs)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Common issues and fixes:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;"Module not found" or "Package not found"&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Make sure you copied dependency files before installing&lt;/li&gt;
&lt;li&gt;Check that &lt;code&gt;requirements.txt&lt;/code&gt; / &lt;code&gt;package.json&lt;/code&gt; is in the right place&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;"Port already in use"&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Another container is using that port&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;docker ps&lt;/code&gt; to see what's running&lt;/li&gt;
&lt;li&gt;Stop it with &lt;code&gt;docker stop &amp;lt;container-id&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;"Cannot connect to database" or "Connection refused"&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Services need to be in the same Docker network&lt;/li&gt;
&lt;li&gt;Use service names (e.g., &lt;code&gt;redis&lt;/code&gt;) not &lt;code&gt;localhost&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Wait for dependencies to be ready - plain &lt;code&gt;depends_on&lt;/code&gt; in docker-compose only controls start order; combine it with a healthcheck (&lt;code&gt;condition: service_healthy&lt;/code&gt;) to wait for actual readiness&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Image too large&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use multi-stage builds (build in one stage, copy artifacts to smaller runtime stage)&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;alpine&lt;/code&gt; or &lt;code&gt;slim&lt;/code&gt; base images&lt;/li&gt;
&lt;li&gt;Remove build dependencies after installation&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Creating docker-compose.yml
&lt;/h3&gt;

&lt;p&gt;Now we need to orchestrate all these containers. That's where Docker Compose comes in.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Think of docker-compose.yml as a conductor's score&lt;/strong&gt; - it tells all the musicians (containers) when to play, how to play together, and in what order.&lt;/p&gt;

&lt;p&gt;Let's build it piece by piece:&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 1: The Reverse Proxy (Traefik)
&lt;/h4&gt;

&lt;p&gt;Traefik is like a smart receptionist at a hotel:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It receives all incoming requests (guests)&lt;/li&gt;
&lt;li&gt;It looks at the URL and decides which service should handle it (which room)&lt;/li&gt;
&lt;li&gt;It automatically gets SSL certificates from Let's Encrypt (security badges)&lt;/li&gt;
&lt;li&gt;It handles HTTPS redirects (escorts HTTP guests to HTTPS)
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="c1"&gt;# Reverse Proxy - Routes traffic to the right service&lt;/span&gt;
  &lt;span class="na"&gt;traefik&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;traefik:latest&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;traefik&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--api.insecure=true"&lt;/span&gt;  &lt;span class="c1"&gt;# Enable dashboard (for debugging)&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--providers.docker=true"&lt;/span&gt;  &lt;span class="c1"&gt;# Watch Docker containers&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--providers.docker.exposedbydefault=false"&lt;/span&gt;  &lt;span class="c1"&gt;# Only expose containers with labels&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--entrypoints.web.address=:80"&lt;/span&gt;  &lt;span class="c1"&gt;# HTTP entry point&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--entrypoints.websecure.address=:443"&lt;/span&gt;  &lt;span class="c1"&gt;# HTTPS entry point&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--certificatesresolvers.letsencrypt.acme.httpchallenge=true"&lt;/span&gt;  &lt;span class="c1"&gt;# Use HTTP challenge&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--certificatesresolvers.letsencrypt.acme.httpchallenge.entrypoint=web"&lt;/span&gt;  &lt;span class="c1"&gt;# Challenge on port 80&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--certificatesresolvers.letsencrypt.acme.email=${LETSENCRYPT_EMAIL:-your-email@example.com}"&lt;/span&gt;  &lt;span class="c1"&gt;# Email for cert&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json"&lt;/span&gt;  &lt;span class="c1"&gt;# Where to store certs&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;80:80"&lt;/span&gt;    &lt;span class="c1"&gt;# HTTP&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;443:443"&lt;/span&gt;  &lt;span class="c1"&gt;# HTTPS&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;8080:8080"&lt;/span&gt;  &lt;span class="c1"&gt;# Traefik dashboard (for debugging)&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/var/run/docker.sock:/var/run/docker.sock:ro&lt;/span&gt;  &lt;span class="c1"&gt;# Let Traefik see other containers&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./letsencrypt:/letsencrypt&lt;/span&gt;  &lt;span class="c1"&gt;# Store SSL certificates&lt;/span&gt;
    &lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;app-network&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;unless-stopped&lt;/span&gt;  &lt;span class="c1"&gt;# Auto-restart if it crashes&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Key concepts:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;ports: "80:80"&lt;/code&gt; means "map host port 80 to container port 80"&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;volumes: /var/run/docker.sock&lt;/code&gt; lets Traefik discover other containers automatically&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;networks: app-network&lt;/code&gt; puts Traefik on the same network as other services&lt;/li&gt;
&lt;/ul&gt;
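
&lt;p&gt;One thing the service definitions reference but don't declare: the network itself. At the bottom of &lt;code&gt;docker-compose.yml&lt;/code&gt; you also need a top-level &lt;code&gt;networks&lt;/code&gt; block - a minimal sketch (if you later add stateful services like Redis, a top-level &lt;code&gt;volumes&lt;/code&gt; block follows the same shape):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;networks:
  app-network:
    driver: bridge  # default driver; containers on it reach each other by service name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;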

&lt;h4&gt;
  
  
  Step 2: The Frontend Service
&lt;/h4&gt;

&lt;p&gt;The Vue.js frontend needs to be built and served:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="c1"&gt;# Frontend - Vue.js application&lt;/span&gt;
  &lt;span class="na"&gt;frontend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./frontend&lt;/span&gt;  &lt;span class="c1"&gt;# Where the Dockerfile is&lt;/span&gt;
      &lt;span class="na"&gt;dockerfile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Dockerfile&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;frontend&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;PORT=80&lt;/span&gt;  &lt;span class="c1"&gt;# Port the app runs on inside container&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;AUTH_API_ADDRESS=http://auth-api:8081&lt;/span&gt;  &lt;span class="c1"&gt;# Use service name, not localhost!&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;TODOS_API_ADDRESS=http://todos-api:8082&lt;/span&gt;
    &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="c1"&gt;# Tell Traefik to route traffic to this service&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.enable=true"&lt;/span&gt;
      &lt;span class="c1"&gt;# Route requests for your domain to this service&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.routers.frontend.rule=Host(`${DOMAIN:-yourdomain.com}`)"&lt;/span&gt;
      &lt;span class="c1"&gt;# Use HTTPS&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.routers.frontend.entrypoints=websecure"&lt;/span&gt;
      &lt;span class="c1"&gt;# Get SSL certificate automatically&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.routers.frontend.tls.certresolver=letsencrypt"&lt;/span&gt;
      &lt;span class="c1"&gt;# Frontend runs on port 80 inside container&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.services.frontend.loadbalancer.server.port=80"&lt;/span&gt;
      &lt;span class="c1"&gt;# Redirect HTTP to HTTPS&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.routers.frontend-redirect.rule=Host(`${DOMAIN:-yourdomain.com}`)"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.routers.frontend-redirect.entrypoints=web"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.routers.frontend-redirect.middlewares=redirect-to-https"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https"&lt;/span&gt;
    &lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;app-network&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;auth-api&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;todos-api&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;users-api&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Important points:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;build: context: ./frontend&lt;/code&gt; tells Docker to build from the frontend folder&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;environment:&lt;/code&gt; sets variables the app can read&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;AUTH_API_ADDRESS=http://auth-api:8081&lt;/code&gt; - Notice we use &lt;code&gt;auth-api&lt;/code&gt; (the service name), not &lt;code&gt;localhost&lt;/code&gt;!&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;depends_on:&lt;/code&gt; ensures these services are started before the frontend (start order only - it does not wait for them to be ready)&lt;/li&gt;
&lt;li&gt;Labels tell Traefik how to route traffic&lt;/li&gt;
&lt;/ul&gt;
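&lt;p&gt;Compose also accepts a map form for &lt;code&gt;environment:&lt;/code&gt;, equivalent to the list form used above - a sketch using the one entry quoted in the bullets (any other keys would follow the same pattern):&lt;/p&gt;

```yaml
environment:
  AUTH_API_ADDRESS: http://auth-api:8081  # service name, not localhost
```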

&lt;h4&gt;
  
  
  Step 3: The Auth API (Go)
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="c1"&gt;# Auth API - Handles authentication&lt;/span&gt;
  &lt;span class="na"&gt;auth-api&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./auth-api&lt;/span&gt;
      &lt;span class="na"&gt;dockerfile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Dockerfile&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;auth-api&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;AUTH_API_PORT=8081&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;USERS_API_ADDRESS=http://users-api:8083&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;JWT_SECRET=${JWT_SECRET:-myfancysecret}&lt;/span&gt;  &lt;span class="c1"&gt;# Secret key for tokens&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;REDIS_URL=redis://redis:6379&lt;/span&gt;  &lt;span class="c1"&gt;# Redis connection&lt;/span&gt;
    &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.enable=true"&lt;/span&gt;
      &lt;span class="c1"&gt;# Route /api/auth requests to this service&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.routers.auth.rule=Host(`${DOMAIN:-yourdomain.com}`)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;amp;&amp;amp;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;PathPrefix(`/api/auth`)"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.routers.auth.entrypoints=websecure"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.routers.auth.tls.certresolver=letsencrypt"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.services.auth.loadbalancer.server.port=8081"&lt;/span&gt;
      &lt;span class="c1"&gt;# Also handle /login route (frontend calls this)&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.routers.auth-login.rule=Host(`${DOMAIN:-yourdomian.com}`)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;amp;&amp;amp;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;(Path(`/login`)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;||&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;PathPrefix(`/login/`))"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.routers.auth-login.entrypoints=websecure"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.routers.auth-login.tls.certresolver=letsencrypt"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.routers.auth-login.service=auth"&lt;/span&gt;
    &lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;app-network&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;redis&lt;/span&gt;  &lt;span class="c1"&gt;# Needs Redis for session storage&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Routing explained:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;PathPrefix(`/api/auth`)&lt;/code&gt; means any URL starting with &lt;code&gt;/api/auth&lt;/code&gt; goes here&lt;/li&gt;
&lt;li&gt;Example: &lt;code&gt;https://your-domain.com/api/auth/login&lt;/code&gt; → auth-api&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Path(`/login`)&lt;/code&gt; matches exactly &lt;code&gt;/login&lt;/code&gt;, and &lt;code&gt;PathPrefix(`/login/`)&lt;/code&gt; catches anything beneath it&lt;/li&gt;
&lt;/ul&gt;
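&lt;p&gt;Both routers share the same host, so rules can overlap. Traefik breaks ties by rule length (longer, more specific rules win), and when that is not enough you can set an explicit priority label - a hedged sketch, the numbers are arbitrary:&lt;/p&gt;

```yaml
labels:
  # Higher priority wins when two routers match the same request
  - "traefik.http.routers.auth.priority=100"   # specific API route
  - "traefik.http.routers.frontend.priority=1" # catch-all frontend
```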

&lt;h4&gt;
  
  
  Step 4: The Todos API (Node.js)
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="c1"&gt;# Todos API - Manages todo items&lt;/span&gt;
  &lt;span class="na"&gt;todos-api&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./todos-api&lt;/span&gt;
      &lt;span class="na"&gt;dockerfile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Dockerfile&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;todos-api&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;PORT=8082&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;AUTH_API_URL=http://auth-api:8081&lt;/span&gt;  &lt;span class="c1"&gt;# To validate tokens&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;JWT_SECRET=${JWT_SECRET:-myfancysecret}&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;REDIS_URL=redis://redis:6379&lt;/span&gt;
    &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.enable=true"&lt;/span&gt;
      &lt;span class="c1"&gt;# Route /api/todos requests here&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.routers.todos.rule=Host(`${DOMAIN:-yourdomain.com}`)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;amp;&amp;amp;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;PathPrefix(`/api/todos`)"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.routers.todos.entrypoints=websecure"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.routers.todos.tls.certresolver=letsencrypt"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.services.todos.loadbalancer.server.port=8082"&lt;/span&gt;
    &lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;app-network&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;redis&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;auth-api&lt;/span&gt;  &lt;span class="c1"&gt;# Needs auth-api to validate tokens&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 5: The Users API (Java)
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="c1"&gt;# Users API - Manages user accounts&lt;/span&gt;
  &lt;span class="na"&gt;users-api&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./users-api&lt;/span&gt;
      &lt;span class="na"&gt;dockerfile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Dockerfile&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;users-api&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;SERVER_PORT=8083&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;JWT_SECRET=${JWT_SECRET:-myfancysecret}&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;REDIS_URL=redis://redis:6379&lt;/span&gt;
    &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.enable=true"&lt;/span&gt;
      &lt;span class="c1"&gt;# Route /api/users requests here&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.routers.users.rule=Host(`${DOMAIN:-yourdomian.com}`)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;amp;&amp;amp;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;PathPrefix(`/api/users`)"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.routers.users.entrypoints=websecure"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.routers.users.tls.certresolver=letsencrypt"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.services.users.loadbalancer.server.port=8083"&lt;/span&gt;
    &lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;app-network&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;redis&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 6: The Log Processor (Python)
&lt;/h4&gt;

&lt;p&gt;This service doesn't need Traefik routing - it's a background worker:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="c1"&gt;# Log Processor - Background worker that processes messages&lt;/span&gt;
  &lt;span class="na"&gt;log-message-processor&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./log-message-processor&lt;/span&gt;
      &lt;span class="na"&gt;dockerfile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Dockerfile&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;log-message-processor&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;REDIS_HOST=redis&lt;/span&gt;  &lt;span class="c1"&gt;# Use service name&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;REDIS_PORT=6379&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;REDIS_CHANNEL=log-messages&lt;/span&gt;
    &lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;app-network&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;redis&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;unless-stopped&lt;/span&gt;  &lt;span class="c1"&gt;# Keep it running&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why no Traefik labels?&lt;/strong&gt; This service doesn't serve HTTP requests - it just listens to Redis for messages.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 7: Supporting Services
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="c1"&gt;# Redis - Message queue and cache&lt;/span&gt;
  &lt;span class="na"&gt;redis&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis:7-alpine&lt;/span&gt;  &lt;span class="c1"&gt;# Use pre-built image, no Dockerfile needed&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;6379:6379"&lt;/span&gt;  &lt;span class="c1"&gt;# Expose for debugging (optional)&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;redis-data:/data&lt;/span&gt;  &lt;span class="c1"&gt;# Persist data&lt;/span&gt;
    &lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;app-network&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;unless-stopped&lt;/span&gt;

  &lt;span class="c1"&gt;# Zipkin handler - Service for /zipkin endpoint&lt;/span&gt;
  &lt;span class="na"&gt;zipkin-handler&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:alpine&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;zipkin-handler&lt;/span&gt;
    &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.enable=true"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.routers.zipkin.rule=Host(`${DOMAIN:-yourdomain.com}`)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;amp;&amp;amp;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;PathPrefix(`/zipkin`)"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.routers.zipkin.entrypoints=websecure"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.routers.zipkin.tls.certresolver=letsencrypt"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.services.zipkin.loadbalancer.server.port=80"&lt;/span&gt;
    &lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;app-network&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="s"&gt;sh -c "echo 'server {&lt;/span&gt;
        &lt;span class="s"&gt;listen 80;&lt;/span&gt;
        &lt;span class="s"&gt;location / {&lt;/span&gt;
          &lt;span class="s"&gt;return 200 \"OK\";&lt;/span&gt;
          &lt;span class="s"&gt;add_header Content-Type text/plain;&lt;/span&gt;
        &lt;span class="s"&gt;}&lt;/span&gt;
      &lt;span class="s"&gt;}' &amp;gt; /etc/nginx/conf.d/default.conf &amp;amp;&amp;amp; nginx -g 'daemon off;'"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 8: Networks and Volumes
&lt;/h4&gt;

&lt;p&gt;At the end of the file, define shared resources:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Networks - How containers communicate&lt;/span&gt;
&lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;app-network&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;driver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bridge&lt;/span&gt;  &lt;span class="c1"&gt;# Default network type&lt;/span&gt;

&lt;span class="c1"&gt;# Volumes - Persistent storage&lt;/span&gt;
&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;redis-data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  &lt;span class="c1"&gt;# Named volume for Redis data&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why networks?&lt;/strong&gt; Containers on the same network can talk to each other using service names (like &lt;code&gt;auth-api&lt;/code&gt; instead of IP addresses).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why volumes?&lt;/strong&gt; Data in containers is lost when they're removed. Volumes persist data.&lt;/p&gt;
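&lt;p&gt;For contrast, the two common ways to persist data - the named volume this stack uses, and a bind mount (the &lt;code&gt;./redis-backup&lt;/code&gt; path is hypothetical):&lt;/p&gt;

```yaml
services:
  redis:
    volumes:
      - redis-data:/data         # named volume: Docker manages where it lives
      # - ./redis-backup:/data   # bind mount: maps a host folder you choose

volumes:
  redis-data:
```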

&lt;h3&gt;
  
  
  Complete docker-compose.yml Structure
&lt;/h3&gt;

&lt;p&gt;Here's the mental model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-compose.yml
├── services (all your containers)
│   ├── traefik (reverse proxy)
│   ├── frontend (Vue.js app)
│   ├── auth-api (Go service)
│   ├── todos-api (Node.js service)
│   ├── users-api (Java service)
│   ├── log-message-processor (Python worker)
│   ├── redis (database/queue)
│   └── zipkin-handler (dummy endpoint)
├── networks (how they connect)
│   └── app-network
└── volumes (persistent storage)
    └── redis-data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Understanding Traefik Labels (Deep Dive)
&lt;/h3&gt;

&lt;p&gt;Labels are how you tell Traefik what to do. Let's break down a complex example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.enable=true"&lt;/span&gt;  &lt;span class="c1"&gt;# Step 1: Enable Traefik for this service&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.routers.auth.rule=Host(`example.com`)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;amp;&amp;amp;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;PathPrefix(`/api/auth`)"&lt;/span&gt;  &lt;span class="c1"&gt;# Step 2: Define routing rule&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.routers.auth.entrypoints=websecure"&lt;/span&gt;  &lt;span class="c1"&gt;# Step 3: Use HTTPS&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.routers.auth.tls.certresolver=letsencrypt"&lt;/span&gt;  &lt;span class="c1"&gt;# Step 4: Get SSL cert&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.services.auth.loadbalancer.server.port=8081"&lt;/span&gt;  &lt;span class="c1"&gt;# Step 5: Which port to forward to&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Breaking it down:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Router&lt;/strong&gt; = A set of rules for routing traffic&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rule&lt;/strong&gt; = Conditions that must match (domain + path)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Entrypoint&lt;/strong&gt; = Which port/protocol (web = HTTP, websecure = HTTPS)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Service&lt;/strong&gt; = The actual container and port&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Middleware&lt;/strong&gt; = Transformations (redirects, rewrites, etc.)&lt;/li&gt;
&lt;/ol&gt;
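&lt;p&gt;The only middleware used so far is the HTTP→HTTPS redirect. Another common one is &lt;code&gt;stripprefix&lt;/code&gt;, which removes a path prefix before the request reaches your container - a hedged sketch, only needed if an app does not expect its &lt;code&gt;/api/...&lt;/code&gt; prefix:&lt;/p&gt;

```yaml
labels:
  # Hypothetical: forward /api/todos/123 to the container as /123
  - "traefik.http.middlewares.strip-todos.stripprefix.prefixes=/api/todos"
  - "traefik.http.routers.todos.middlewares=strip-todos"
```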

&lt;p&gt;&lt;strong&gt;Example flow:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;User visits &lt;code&gt;https://example.com/api/auth/login&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Traefik receives request on port 443 (websecure entrypoint)&lt;/li&gt;
&lt;li&gt;Traefik checks its rules: "Does this match &lt;code&gt;Host(`example.com`) &amp;amp;&amp;amp; PathPrefix(`/api/auth`)&lt;/code&gt;?" → Yes!&lt;/li&gt;
&lt;li&gt;Traefik forwards to &lt;code&gt;auth&lt;/code&gt; service on port 8081&lt;/li&gt;
&lt;li&gt;Auth-api container handles the request&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Testing Your docker-compose.yml
&lt;/h3&gt;

&lt;p&gt;Before deploying to the cloud, test locally:&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 1: Create Environment File
&lt;/h4&gt;

&lt;p&gt;Create a &lt;code&gt;.env&lt;/code&gt; file in the root directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; .env &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
DOMAIN=localhost
LETSENCRYPT_EMAIL=your-email@example.com
JWT_SECRET=test-secret-key-change-this-in-production
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What's in .env?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;DOMAIN&lt;/code&gt; - Your domain name (use &lt;code&gt;localhost&lt;/code&gt; for local testing)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;LETSENCRYPT_EMAIL&lt;/code&gt; - Email for SSL certificate notifications&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;JWT_SECRET&lt;/code&gt; - Secret key for JWT tokens (use a strong random string in production)&lt;/li&gt;
&lt;/ul&gt;
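&lt;p&gt;For the production &lt;code&gt;JWT_SECRET&lt;/code&gt;, generate a random value rather than inventing one. The snippet below also demonstrates the &lt;code&gt;${VAR:-default}&lt;/code&gt; fallback the compose file relies on:&lt;/p&gt;

```shell
# Generate a strong secret (44 base64 characters from 32 random bytes)
JWT_SECRET=$(openssl rand -base64 32)
echo "JWT_SECRET=${JWT_SECRET}"

# Compose-style fallback: the default applies only when the variable
# is unset or empty
unset DEMO_SECRET
echo "${DEMO_SECRET:-myfancysecret}"   # prints: myfancysecret
```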

&lt;h4&gt;
  
  
  Step 2: Start All Services
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Build and start all containers in the background&lt;/span&gt;
docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;

&lt;span class="c"&gt;# Watch all logs in real-time&lt;/span&gt;
docker compose logs &lt;span class="nt"&gt;-f&lt;/span&gt;

&lt;span class="c"&gt;# Or watch specific service logs&lt;/span&gt;
docker compose logs &lt;span class="nt"&gt;-f&lt;/span&gt; frontend
docker compose logs &lt;span class="nt"&gt;-f&lt;/span&gt; auth-api
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What &lt;code&gt;-d&lt;/code&gt; means:&lt;/strong&gt; Detached mode - runs in the background so you can use your terminal.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 3: Verify Everything is Running
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Check status of all containers&lt;/span&gt;
docker compose ps

&lt;span class="c"&gt;# You should see something like:&lt;/span&gt;
&lt;span class="c"&gt;# NAME                    STATUS              PORTS&lt;/span&gt;
&lt;span class="c"&gt;# traefik                 Up 2 minutes        0.0.0.0:80-&amp;gt;80/tcp, 0.0.0.0:443-&amp;gt;443/tcp&lt;/span&gt;
&lt;span class="c"&gt;# frontend                Up 2 minutes        &lt;/span&gt;
&lt;span class="c"&gt;# auth-api                Up 2 minutes        &lt;/span&gt;
&lt;span class="c"&gt;# todos-api               Up 2 minutes        &lt;/span&gt;
&lt;span class="c"&gt;# users-api               Up 2 minutes        &lt;/span&gt;
&lt;span class="c"&gt;# log-message-processor   Up 2 minutes        &lt;/span&gt;
&lt;span class="c"&gt;# redis                   Up 2 minutes        0.0.0.0:6379-&amp;gt;6379/tcp&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 4: Test Each Service
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Test frontend (should return HTML)&lt;/span&gt;
curl http://localhost

&lt;span class="c"&gt;# Test auth API (should return "Not Found" - that's expected!)&lt;/span&gt;
curl http://localhost/api/auth

&lt;span class="c"&gt;# Test todos API (should return "Invalid Token" - also expected!)&lt;/span&gt;
curl http://localhost/api/todos

&lt;span class="c"&gt;# Test users API&lt;/span&gt;
curl http://localhost/api/users

&lt;span class="c"&gt;# Check Traefik dashboard (optional)&lt;/span&gt;
&lt;span class="c"&gt;# Open http://localhost:8080 in your browser&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Expected responses:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Frontend: HTML page (login screen)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/api/auth&lt;/code&gt; without path: "Not Found" (correct - needs specific endpoint)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/api/todos&lt;/code&gt; without auth: "Invalid Token" (correct - needs authentication)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/api/users&lt;/code&gt; without auth: "Missing or invalid Authorization header" (correct)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Step 5: Test with Browser
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Open &lt;code&gt;http://localhost&lt;/code&gt; in your browser&lt;/li&gt;
&lt;li&gt;You should see the login page&lt;/li&gt;
&lt;li&gt;Try logging in (if you have test credentials)&lt;/li&gt;
&lt;li&gt;Check browser console (F12) for any errors&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Step 6: Check Logs for Errors
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# View logs for a specific service&lt;/span&gt;
docker compose logs frontend
docker compose logs auth-api
docker compose logs traefik

&lt;span class="c"&gt;# View last 100 lines&lt;/span&gt;
docker compose logs &lt;span class="nt"&gt;--tail&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;100 traefik

&lt;span class="c"&gt;# Follow logs in real-time&lt;/span&gt;
docker compose logs &lt;span class="nt"&gt;-f&lt;/span&gt; traefik
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What to look for:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ "Server started" or "Listening on port X" = Good!&lt;/li&gt;
&lt;li&gt;❌ "Connection refused" = Service dependency not ready&lt;/li&gt;
&lt;li&gt;❌ "Module not found" = Missing dependency in Dockerfile&lt;/li&gt;
&lt;li&gt;❌ "Port already in use" = Another service is using that port&lt;/li&gt;
&lt;/ul&gt;
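&lt;p&gt;To scan a whole stack's logs for those failure patterns at once, pipe them through &lt;code&gt;grep&lt;/code&gt;. The filter is demonstrated on a captured snippet so you can see what a hit looks like; against a live stack you would feed it &lt;code&gt;docker compose logs --no-color&lt;/code&gt; instead:&lt;/p&gt;

```shell
# Against a live stack:
#   docker compose logs --no-color | grep -iE 'connection refused|module not found|port already in use'
# Same filter on a saved snippet:
printf '%s\n' \
  'auth-api  | Listening on port 8081' \
  'todos-api | Error: connection refused' |
  grep -iE 'connection refused|module not found|port already in use'
# prints: todos-api | Error: connection refused
```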

&lt;h4&gt;
  
  
  Step 7: Stop Everything
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Stop all containers&lt;/span&gt;
docker compose down

&lt;span class="c"&gt;# Stop and remove volumes (clean slate)&lt;/span&gt;
docker compose down &lt;span class="nt"&gt;-v&lt;/span&gt;

&lt;span class="c"&gt;# Stop and remove images too (full cleanup)&lt;/span&gt;
docker compose down &lt;span class="nt"&gt;--rmi&lt;/span&gt; all &lt;span class="nt"&gt;-v&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Common Issues and Solutions
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Issue 1: "Port 80 already in use"
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Problem:&lt;/strong&gt; Another service (like Apache, Nginx, or another Docker container) is using port 80.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Find what's using port 80&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;lsof &lt;span class="nt"&gt;-i&lt;/span&gt; :80
&lt;span class="c"&gt;# or&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;netstat &lt;span class="nt"&gt;-tulpn&lt;/span&gt; | &lt;span class="nb"&gt;grep&lt;/span&gt; :80

&lt;span class="c"&gt;# Stop the conflicting service&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl stop apache2  &lt;span class="c"&gt;# or nginx, or whatever it is&lt;/span&gt;

&lt;span class="c"&gt;# Or change Traefik ports in docker-compose.yml:&lt;/span&gt;
ports:
  - &lt;span class="s2"&gt;"8080:80"&lt;/span&gt;   &lt;span class="c"&gt;# Use 8080 instead of 80&lt;/span&gt;
  - &lt;span class="s2"&gt;"8443:443"&lt;/span&gt;  &lt;span class="c"&gt;# Use 8443 instead of 443&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Issue 2: "Build failed" or "Module not found"
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Problem:&lt;/strong&gt; Dockerfile has issues or dependencies are missing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Build a specific service to see detailed errors&lt;/span&gt;
docker compose build frontend

&lt;span class="c"&gt;# Check the Dockerfile syntax&lt;/span&gt;
&lt;span class="c"&gt;# Make sure COPY commands are in the right order&lt;/span&gt;
&lt;span class="c"&gt;# Make sure RUN commands install dependencies before copying code&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Issue 3: "Container keeps restarting"
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Problem:&lt;/strong&gt; The application is crashing on startup.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Check why it's restarting&lt;/span&gt;
docker compose logs &amp;lt;service-name&amp;gt;

&lt;span class="c"&gt;# Common causes:&lt;/span&gt;
&lt;span class="c"&gt;# - Missing environment variables&lt;/span&gt;
&lt;span class="c"&gt;# - Database/Redis not ready (add depends_on)&lt;/span&gt;
&lt;span class="c"&gt;# - Port conflict&lt;/span&gt;
&lt;span class="c"&gt;# - Missing files or dependencies&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Issue 4: "Cannot connect to auth-api" or "Connection refused"
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Problem:&lt;/strong&gt; Services can't find each other.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Use service names (e.g., &lt;code&gt;http://auth-api:8081&lt;/code&gt;), not &lt;code&gt;localhost&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;✅ Make sure all services are on the same network (&lt;code&gt;app-network&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;✅ Check &lt;code&gt;depends_on&lt;/code&gt; - services might be starting before dependencies are ready&lt;/li&gt;
&lt;li&gt;✅ Add health checks or wait scripts if needed&lt;/li&gt;
&lt;/ul&gt;
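&lt;p&gt;The last point deserves a sketch: plain &lt;code&gt;depends_on&lt;/code&gt; only orders container startup. With a health check, Compose can make a service wait until Redis actually answers - standard Compose syntax, shown here for the &lt;code&gt;redis&lt;/code&gt; and &lt;code&gt;auth-api&lt;/code&gt; services:&lt;/p&gt;

```yaml
services:
  redis:
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]  # healthy once Redis answers PONG
      interval: 5s
      timeout: 3s
      retries: 5
  auth-api:
    depends_on:
      redis:
        condition: service_healthy  # wait for a passing health check
```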

&lt;h4&gt;
  
  
  Issue 5: "SSL certificate error" or "Let's Encrypt failed"
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Problem:&lt;/strong&gt; Let's Encrypt can't verify your domain.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For local testing: Use &lt;code&gt;localhost&lt;/code&gt; and HTTP only (remove HTTPS redirect)&lt;/li&gt;
&lt;li&gt;For production: Make sure DNS points to your server&lt;/li&gt;
&lt;li&gt;Make sure ports 80 and 443 are open in firewall&lt;/li&gt;
&lt;li&gt;Check Traefik logs: &lt;code&gt;docker compose logs traefik&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
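&lt;p&gt;For the local-testing case, a hedged sketch of HTTP-only Traefik routing (label names assume Traefik v2; the router name &lt;code&gt;frontend&lt;/code&gt; and entrypoint name &lt;code&gt;web&lt;/code&gt; are examples, so align them with your Traefik static configuration):&lt;/p&gt;

```yaml
# Sketch: serve over plain HTTP on localhost while testing, so Let's Encrypt
# is never triggered. Names are illustrative, not taken from the article's stack.
services:
  frontend:
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.frontend.rule=Host(`localhost`)"
      - "traefik.http.routers.frontend.entrypoints=web"  # HTTP entrypoint only
      # Omit the HTTPS router, certresolver, and any redirect middleware for now
```

&lt;p&gt;Once DNS and the firewall are sorted in production, you add the HTTPS router and certificate resolver back.&lt;/p&gt;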

&lt;h3&gt;
  
  
  Quick Reference: docker-compose Commands
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Start services&lt;/span&gt;
docker compose up              &lt;span class="c"&gt;# Start and show logs&lt;/span&gt;
docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;           &lt;span class="c"&gt;# Start in background&lt;/span&gt;

&lt;span class="c"&gt;# Stop services&lt;/span&gt;
docker compose stop            &lt;span class="c"&gt;# Stop but don't remove&lt;/span&gt;
docker compose down            &lt;span class="c"&gt;# Stop and remove containers&lt;/span&gt;

&lt;span class="c"&gt;# View logs&lt;/span&gt;
docker compose logs            &lt;span class="c"&gt;# All services&lt;/span&gt;
docker compose logs &lt;span class="nt"&gt;-f&lt;/span&gt;         &lt;span class="c"&gt;# Follow (live updates)&lt;/span&gt;
docker compose logs &amp;lt;service&amp;gt;  &lt;span class="c"&gt;# Specific service&lt;/span&gt;

&lt;span class="c"&gt;# Rebuild&lt;/span&gt;
docker compose build           &lt;span class="c"&gt;# Build all&lt;/span&gt;
docker compose build &amp;lt;service&amp;gt; &lt;span class="c"&gt;# Build specific service&lt;/span&gt;
docker compose up &lt;span class="nt"&gt;--build&lt;/span&gt;      &lt;span class="c"&gt;# Build and start&lt;/span&gt;

&lt;span class="c"&gt;# Status&lt;/span&gt;
docker compose ps              &lt;span class="c"&gt;# Show running containers&lt;/span&gt;
docker compose top             &lt;span class="c"&gt;# Show running processes&lt;/span&gt;

&lt;span class="c"&gt;# Execute commands&lt;/span&gt;
docker compose &lt;span class="nb"&gt;exec&lt;/span&gt; &amp;lt;service&amp;gt; &amp;lt;&lt;span class="nb"&gt;command&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;  &lt;span class="c"&gt;# Run command in container&lt;/span&gt;
docker compose &lt;span class="nb"&gt;exec &lt;/span&gt;frontend sh           &lt;span class="c"&gt;# Get shell in frontend container&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Part 4: Infrastructure as Code with Terraform
&lt;/h2&gt;

&lt;p&gt;Now comes the infrastructure part. This is where many people get intimidated, but it's actually simpler than it seems.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Infrastructure as Code?
&lt;/h3&gt;

&lt;p&gt;Imagine you're building a house. You could:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Manual approach&lt;/strong&gt;: Tell the builder "put a window here, a door there" every time&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Blueprint approach&lt;/strong&gt;: Draw a blueprint once, builder follows it every time&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Infrastructure as Code is the blueprint approach. Benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reproducible&lt;/strong&gt;: Same code = same infrastructure, every time&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Version controlled&lt;/strong&gt;: See what changed and when&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Testable&lt;/strong&gt;: Try changes without breaking production&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Documented&lt;/strong&gt;: The code IS the documentation&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Understanding Terraform Basics
&lt;/h3&gt;

&lt;p&gt;Terraform uses a language called HCL (HashiCorp Configuration Language). It's designed to be human-readable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Basic structure:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_instance"&lt;/span&gt; &lt;span class="s2"&gt;"todo_app"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;ami&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ami-12345"&lt;/span&gt;
  &lt;span class="nx"&gt;instance_type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"t3.medium"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This says: "Create an AWS EC2 instance resource, call it 'todo_app', with these properties."&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating Your First Terraform Configuration
&lt;/h3&gt;

&lt;p&gt;Let's build it step by step:&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 1: Provider Configuration
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;terraform&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;required_version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&amp;gt;= 1.5.0"&lt;/span&gt;

  &lt;span class="nx"&gt;required_providers&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;aws&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;source&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"hashicorp/aws"&lt;/span&gt;
      &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"~&amp;gt; 5.0"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;backend&lt;/span&gt; &lt;span class="s2"&gt;"s3"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;# We'll configure this during init&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"aws"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_region&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What's happening:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;required_version&lt;/code&gt; - Ensures everyone uses compatible Terraform&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;required_providers&lt;/code&gt; - Tells Terraform which plugins to download&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;backend "s3"&lt;/code&gt; - Where to store state (we'll configure this later)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;provider "aws"&lt;/code&gt; - Which cloud provider to use&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Step 2: Data Sources (Getting Information)
&lt;/h4&gt;

&lt;p&gt;Before creating resources, we often need to look things up:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"aws_ami"&lt;/span&gt; &lt;span class="s2"&gt;"ubuntu"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;most_recent&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;owners&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"099720109477"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;  &lt;span class="c1"&gt;# Canonical (Ubuntu's publisher)&lt;/span&gt;

  &lt;span class="nx"&gt;filter&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"name"&lt;/span&gt;
    &lt;span class="nx"&gt;values&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"*ubuntu-jammy-22.04-amd64-server*"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What is an AMI?&lt;/strong&gt; Amazon Machine Image - it's like a template for a virtual machine. This code finds the latest Ubuntu 22.04 image.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why use data sources?&lt;/strong&gt; AMI IDs differ from region to region and change whenever a new image is published. Instead of hardcoding &lt;code&gt;ami-12345&lt;/code&gt;, we let Terraform look up the right one at plan time.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 3: Security Group (Firewall Rules)
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_security_group"&lt;/span&gt; &lt;span class="s2"&gt;"todo_app"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"todo-app-sg-${var.environment}"&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Security group for TODO application"&lt;/span&gt;

  &lt;span class="nx"&gt;ingress&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"HTTP"&lt;/span&gt;
    &lt;span class="nx"&gt;from_port&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;
    &lt;span class="nx"&gt;to_port&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;
    &lt;span class="nx"&gt;protocol&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"tcp"&lt;/span&gt;
    &lt;span class="nx"&gt;cidr_blocks&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;  &lt;span class="c1"&gt;# Allow from anywhere&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;ingress&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"HTTPS"&lt;/span&gt;
    &lt;span class="nx"&gt;from_port&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;443&lt;/span&gt;
    &lt;span class="nx"&gt;to_port&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;443&lt;/span&gt;
    &lt;span class="nx"&gt;protocol&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"tcp"&lt;/span&gt;
    &lt;span class="nx"&gt;cidr_blocks&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;ingress&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"SSH"&lt;/span&gt;
    &lt;span class="nx"&gt;from_port&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;22&lt;/span&gt;
    &lt;span class="nx"&gt;to_port&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;22&lt;/span&gt;
    &lt;span class="nx"&gt;protocol&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"tcp"&lt;/span&gt;
    &lt;span class="nx"&gt;cidr_blocks&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ssh_cidr&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;  &lt;span class="c1"&gt;# Only from your IP (security!)&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;egress&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;from_port&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
    &lt;span class="nx"&gt;to_port&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
    &lt;span class="nx"&gt;protocol&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"-1"&lt;/span&gt;  &lt;span class="c1"&gt;# All protocols&lt;/span&gt;
    &lt;span class="nx"&gt;cidr_blocks&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;  &lt;span class="c1"&gt;# Allow all outbound&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;Name&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"todo-app-sg-${var.environment}"&lt;/span&gt;
    &lt;span class="nx"&gt;Environment&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;environment&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What is a security group?&lt;/strong&gt; It's AWS's firewall. It controls what traffic can reach your server.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Breaking it down:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;ingress&lt;/code&gt; - Incoming traffic rules&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;egress&lt;/code&gt; - Outgoing traffic rules&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;cidr_blocks = ["0.0.0.0/0"]&lt;/code&gt; - From anywhere (0.0.0.0/0 means "everywhere")&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;var.ssh_cidr&lt;/code&gt; - A variable (we'll set this to your IP for security)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Security tip&lt;/strong&gt;: In production, restrict SSH to your IP only! Use a service like &lt;code&gt;whatismyip.com&lt;/code&gt; to find your IP, then set &lt;code&gt;ssh_cidr = "YOUR_IP/32"&lt;/code&gt;.&lt;/p&gt;
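&lt;p&gt;You can also do this from the terminal. A small sketch (the IP below is a documentation placeholder; in practice you'd fetch your real one, e.g. with &lt;code&gt;curl&lt;/code&gt; against a what's-my-IP service):&lt;/p&gt;

```shell
# Build a /32 CIDR for the ssh_cidr variable.
# 203.0.113.10 is a placeholder; in practice something like:
#   MY_IP=$(curl -s https://checkip.amazonaws.com)
MY_IP="203.0.113.10"
SSH_CIDR="${MY_IP}/32"
echo "$SSH_CIDR"    # 203.0.113.10/32
```

&lt;p&gt;The &lt;code&gt;/32&lt;/code&gt; suffix means "exactly this one address", so only your machine can reach port 22.&lt;/p&gt;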

&lt;h4&gt;
  
  
  Step 4: EC2 Instance (Your Server)
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_instance"&lt;/span&gt; &lt;span class="s2"&gt;"todo_app"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;ami&lt;/span&gt;                    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_ami&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ubuntu&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;instance_type&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;instance_type&lt;/span&gt;
  &lt;span class="nx"&gt;key_name&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;key_pair_name&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_security_group_ids&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;aws_security_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;todo_app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

  &lt;span class="nx"&gt;user_data&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;-&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
    #!/bin/bash
    apt-get update
    apt-get install -y python3 python3-pip
&lt;/span&gt;&lt;span class="no"&gt;  EOF

&lt;/span&gt;  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;Name&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"todo-app-server-${var.environment}"&lt;/span&gt;
    &lt;span class="nx"&gt;Environment&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;environment&lt;/span&gt;
    &lt;span class="nx"&gt;Project&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"hngi13-stage6"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;lifecycle&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;create_before_destroy&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What's happening:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;ami&lt;/code&gt; - Which OS image to use (from our data source)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;instance_type&lt;/code&gt; - Server size (t3.medium = 2 vCPU, 4GB RAM)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;key_name&lt;/code&gt; - Which SSH key to install&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;vpc_security_group_ids&lt;/code&gt; - Which firewall rules to apply&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;user_data&lt;/code&gt; - Script that runs when server starts (bootstrap script)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;lifecycle&lt;/code&gt; - Terraform behavior (create new before destroying old = zero downtime)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Step 5: Variables (Making It Flexible)
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"aws_region"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"AWS region"&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
  &lt;span class="nx"&gt;default&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east-1"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"instance_type"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"EC2 instance type"&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
  &lt;span class="nx"&gt;default&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"t3.medium"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"key_pair_name"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"AWS Key Pair name"&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;  &lt;span class="c1"&gt;# Required - no default&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"environment"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Environment name (dev, stg, prod)"&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
  &lt;span class="nx"&gt;default&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"dev"&lt;/span&gt;

  &lt;span class="nx"&gt;validation&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;condition&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;contains&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="s2"&gt;"dev"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"stg"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"prod"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;environment&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nx"&gt;error_message&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Environment must be one of: dev, stg, prod"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why variables?&lt;/strong&gt; Makes your code reusable. Same code works for dev, staging, and production - just change the variables!&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 6: Outputs (Getting Information Back)
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;output&lt;/span&gt; &lt;span class="s2"&gt;"server_ip"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Public IP of the server"&lt;/span&gt;
  &lt;span class="nx"&gt;value&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_instance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;todo_app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;public_ip&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;output&lt;/span&gt; &lt;span class="s2"&gt;"ansible_inventory_path"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Path to generated Ansible inventory"&lt;/span&gt;
  &lt;span class="nx"&gt;value&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;local_file&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ansible_inventory&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;filename&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What are outputs?&lt;/strong&gt; After Terraform creates resources, you often need information about them (like the server IP). Outputs make that information available.&lt;/p&gt;
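&lt;p&gt;A usage sketch of reading outputs back after &lt;code&gt;terraform apply&lt;/code&gt; (the SSH key path is an example value, not something Terraform knows about):&lt;/p&gt;

```shell
# Read outputs after apply:
terraform output                 # list all outputs
terraform output server_ip       # one output, quoted
terraform output -raw server_ip  # raw value, handy in scripts

# Example: SSH straight to the new server using the raw output
ssh -i ~/.ssh/my-terraform-key.pem ubuntu@$(terraform output -raw server_ip)
```

&lt;p&gt;The &lt;code&gt;-raw&lt;/code&gt; flag strips the quotes, which is what you want when piping the value into other commands.&lt;/p&gt;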

&lt;h3&gt;
  
  
  Environment-Specific Configuration
&lt;/h3&gt;

&lt;p&gt;Create separate &lt;code&gt;.tfvars&lt;/code&gt; files for each environment. This is crucial - you'll have three files, one for each environment:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;terraform.dev.tfvars:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;environment&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"dev"&lt;/span&gt;
&lt;span class="nx"&gt;aws_region&lt;/span&gt;  &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east-1"&lt;/span&gt;
&lt;span class="nx"&gt;instance_type&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"t3.small"&lt;/span&gt;  &lt;span class="c1"&gt;# Smaller for dev (saves money)&lt;/span&gt;
&lt;span class="nx"&gt;key_pair_name&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"my-terraform-key"&lt;/span&gt;
&lt;span class="nx"&gt;ssh_key_path&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"~/.ssh/my-terraform-key.pem"&lt;/span&gt;
&lt;span class="nx"&gt;ssh_cidr&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;  &lt;span class="c1"&gt;# Less restrictive for dev&lt;/span&gt;
&lt;span class="nx"&gt;server_user&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ubuntu"&lt;/span&gt;
&lt;span class="nx"&gt;skip_ansible_provision&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;  &lt;span class="c1"&gt;# Run Ansible automatically&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;terraform.stg.tfvars:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;environment&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"stg"&lt;/span&gt;
&lt;span class="nx"&gt;aws_region&lt;/span&gt;  &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east-1"&lt;/span&gt;
&lt;span class="nx"&gt;instance_type&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"t3.small"&lt;/span&gt;  &lt;span class="c1"&gt;# Can be same size as dev for staging&lt;/span&gt;
&lt;span class="nx"&gt;key_pair_name&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"my-terraform-key"&lt;/span&gt;
&lt;span class="nx"&gt;ssh_key_path&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"~/.ssh/my-terraform-key.pem"&lt;/span&gt;
&lt;span class="nx"&gt;ssh_cidr&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;  &lt;span class="c1"&gt;# Can be less restrictive than prod&lt;/span&gt;
&lt;span class="nx"&gt;server_user&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ubuntu"&lt;/span&gt;
&lt;span class="nx"&gt;skip_ansible_provision&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;terraform.prod.tfvars:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;environment&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"prod"&lt;/span&gt;
&lt;span class="nx"&gt;aws_region&lt;/span&gt;  &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east-1"&lt;/span&gt;
&lt;span class="nx"&gt;instance_type&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"t3.medium"&lt;/span&gt;  &lt;span class="c1"&gt;# More power for production&lt;/span&gt;
&lt;span class="nx"&gt;key_pair_name&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"my-terraform-key"&lt;/span&gt;
&lt;span class="nx"&gt;ssh_key_path&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"~/.ssh/my-terraform-key.pem"&lt;/span&gt;
&lt;span class="nx"&gt;ssh_cidr&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"YOUR_IP/32"&lt;/span&gt;  &lt;span class="c1"&gt;# Restrict SSH to your IP only!&lt;/span&gt;
&lt;span class="nx"&gt;server_user&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ubuntu"&lt;/span&gt;
&lt;span class="nx"&gt;skip_ansible_provision&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why three separate files?&lt;/strong&gt; Different environments have different needs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Dev&lt;/strong&gt;: Smaller instance, less security (for quick testing)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Staging&lt;/strong&gt;: Similar to dev, but closer to production setup (for pre-production testing)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prod&lt;/strong&gt;: Larger instance, maximum security (for real users)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;File structure:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;infra/terraform/
├── main.tf
├── variables.tf
├── outputs.tf
├── terraform.dev.tfvars    ← Development environment
├── terraform.stg.tfvars    ← Staging environment
└── terraform.prod.tfvars   ← Production environment
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Remote State Configuration
&lt;/h3&gt;

&lt;p&gt;Remember the S3 bucket we created? Now we use it. &lt;strong&gt;Important&lt;/strong&gt;: Each environment needs its own state file path!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For Development:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform init &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"bucket=yourname-terraform-state"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"key=terraform-state/dev/terraform.tfstate"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"region=us-east-1"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"dynamodb_table=terraform-state-lock"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"encrypt=true"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;For Staging:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform init &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"bucket=yourname-terraform-state"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"key=terraform-state/stg/terraform.tfstate"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"region=us-east-1"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"dynamodb_table=terraform-state-lock"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"encrypt=true"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;For Production:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform init &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"bucket=yourname-terraform-state"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"key=terraform-state/prod/terraform.tfstate"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"region=us-east-1"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"dynamodb_table=terraform-state-lock"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"encrypt=true"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What this does:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;bucket&lt;/code&gt; - Where to store state (same bucket for all environments)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;key&lt;/code&gt; - File path in bucket (&lt;strong&gt;different per environment!&lt;/strong&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;dynamodb_table&lt;/code&gt; - For locking (prevents conflicts when multiple people run Terraform)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;encrypt=true&lt;/code&gt; - Encrypt state at rest (security)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why separate keys per environment?&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;terraform-state/dev/terraform.tfstate&lt;/code&gt; → Development infrastructure&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;terraform-state/stg/terraform.tfstate&lt;/code&gt; → Staging infrastructure
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;terraform-state/prod/terraform.tfstate&lt;/code&gt; → Production infrastructure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are &lt;strong&gt;completely separate files&lt;/strong&gt;. This means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Dev, staging, and prod infrastructure are isolated&lt;/li&gt;
&lt;li&gt;✅ You can destroy dev without affecting staging or prod&lt;/li&gt;
&lt;li&gt;✅ Each environment has its own state history&lt;/li&gt;
&lt;li&gt;✅ No risk of accidentally modifying the wrong environment&lt;/li&gt;
&lt;/ul&gt;
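To keep those keys consistent, a tiny helper (hypothetical, not part of the toolkit) can derive the state key from the environment name and reject anything it doesn't recognise, so a typo can't silently create a fourth state file:

```shell
#!/bin/bash
# Map an environment name to its Terraform state file key.
state_key() {
  case "$1" in
    dev|stg|prod) echo "terraform-state/$1/terraform.tfstate" ;;
    *) echo "unknown environment: $1" >&2; return 1 ;;
  esac
}

state_key dev   # → terraform-state/dev/terraform.tfstate
state_key stg   # → terraform-state/stg/terraform.tfstate
```

The result can be passed straight to `-backend-config="key=$(state_key dev)"`.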

&lt;h3&gt;
  
  
  Your First Terraform Run
&lt;/h3&gt;

&lt;p&gt;Let's deploy to development first (always start with dev!):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# 1. Initialize (downloads providers, sets up backend)&lt;/span&gt;
terraform init &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"bucket=yourname-terraform-state"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"key=terraform-state/dev/terraform.tfstate"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"region=us-east-1"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"dynamodb_table=terraform-state-lock"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"encrypt=true"&lt;/span&gt;

&lt;span class="c"&gt;# 2. Plan (see what will be created - SAFE, doesn't change anything)&lt;/span&gt;
terraform plan &lt;span class="nt"&gt;-var-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;terraform.dev.tfvars

&lt;span class="c"&gt;# 3. Apply (actually create resources)&lt;/span&gt;
terraform apply &lt;span class="nt"&gt;-var-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;terraform.dev.tfvars
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What happens:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;terraform init&lt;/code&gt; - Downloads AWS provider, configures backend for dev environment&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;terraform plan&lt;/code&gt; - Shows you what will be created/changed/destroyed (dry run)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;terraform apply&lt;/code&gt; - Actually creates the resources&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Pro tip&lt;/strong&gt;: Always run &lt;code&gt;plan&lt;/code&gt; first. It is a free dry run; review its output carefully before applying.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For other environments&lt;/strong&gt;, repeat the same steps but:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use the appropriate &lt;code&gt;-backend-config="key=terraform-state/ENV/terraform.tfstate"&lt;/code&gt; &lt;/li&gt;
&lt;li&gt;Use the matching &lt;code&gt;-var-file=terraform.ENV.tfvars&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example for staging:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform init &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"bucket=yourname-terraform-state"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"key=terraform-state/stg/terraform.tfstate"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"region=us-east-1"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"dynamodb_table=terraform-state-lock"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"encrypt=true"&lt;/span&gt;

terraform plan &lt;span class="nt"&gt;-var-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;terraform.stg.tfvars
terraform apply &lt;span class="nt"&gt;-var-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;terraform.stg.tfvars
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
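Typing the five `-backend-config` flags for every environment invites copy-paste mistakes. A small generator can keep them in one place; the function below is my own sketch (the bucket name is the same placeholder used above):

```shell
#!/bin/bash
# Print the backend flags for one environment, space-separated.
# None of the values contain spaces, so the word splitting in the
# commented usage lines below is safe.
backend_flags() {
  local env="$1"
  printf -- '-backend-config=%s ' \
    "bucket=yourname-terraform-state" \
    "key=terraform-state/${env}/terraform.tfstate" \
    "region=us-east-1" \
    "dynamodb_table=terraform-state-lock" \
    "encrypt=true"
}

# Then, for any environment:
#   terraform init $(backend_flags stg)
#   terraform plan -var-file=terraform.stg.tfvars
backend_flags stg
```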



&lt;h3&gt;
  
  
  Understanding Drift Detection (Critical for Safety!)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What is drift?&lt;/strong&gt; Imagine you have a blueprint for a house (Terraform code), but someone goes and changes the actual house (AWS infrastructure) without updating the blueprint. That's drift - your code and reality don't match anymore.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real-world example:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You deploy infrastructure with Terraform ✅&lt;/li&gt;
&lt;li&gt;Later, you manually add a tag to your EC2 instance in AWS Console 🏷️&lt;/li&gt;
&lt;li&gt;Terraform doesn't know about this change&lt;/li&gt;
&lt;li&gt;Next time you run Terraform, it sees the difference → &lt;strong&gt;DRIFT DETECTED!&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Why drift is dangerous:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🔴 &lt;strong&gt;Security risk&lt;/strong&gt;: Someone might have changed something maliciously&lt;/li&gt;
&lt;li&gt;🔴 &lt;strong&gt;Data loss&lt;/strong&gt;: Terraform might try to "fix" things and delete your changes&lt;/li&gt;
&lt;li&gt;🔴 &lt;strong&gt;Confusion&lt;/strong&gt;: You don't know what changed or why&lt;/li&gt;
&lt;li&gt;🔴 &lt;strong&gt;Breaking changes&lt;/strong&gt;: Manual changes might break your application&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;How drift detection works:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Think of it like a security guard checking your house:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Terraform Plan&lt;/strong&gt; = Security guard walks around and notes what's different&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Check Git History&lt;/strong&gt; = Did you change the blueprint (Terraform files)?

&lt;ul&gt;
&lt;li&gt;✅ &lt;strong&gt;If yes&lt;/strong&gt; → Expected changes (you updated the code)&lt;/li&gt;
&lt;li&gt;❌ &lt;strong&gt;If no&lt;/strong&gt; → DRIFT! (Someone changed infrastructure without updating code)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faejrh796svhskcjbbfqj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faejrh796svhskcjbbfqj.png" alt="Drift Log" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;
&lt;strong&gt;Alert&lt;/strong&gt; = Security guard calls you immediately&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff1w02ff3scp93x0b2t30.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff1w02ff3scp93x0b2t30.png" alt="Email Alert" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;
&lt;strong&gt;Approval&lt;/strong&gt; = You review and decide what to do&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyh18go7igm1vn8swqzk9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyh18go7igm1vn8swqzk9.png" alt="Github Issue" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="5"&gt;
&lt;li&gt;
&lt;strong&gt;Action&lt;/strong&gt; = Apply changes or investigate further&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;The detection logic:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Step 1: Run terraform plan to see what's different&lt;/span&gt;
terraform plan &lt;span class="nt"&gt;-out&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;tfplan

&lt;span class="c"&gt;# Step 2: Check if Terraform files changed in this commit&lt;/span&gt;
git diff HEAD~1 HEAD &lt;span class="nt"&gt;--name-only&lt;/span&gt; | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-E&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="s2"&gt;tf&lt;/span&gt;&lt;span class="nv"&gt;$|&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="s2"&gt;tfvars$"&lt;/span&gt;

&lt;span class="c"&gt;# Step 3: Determine the type of change&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; no terraform files changed &lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; plan shows changes &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"🚨 DRIFT DETECTED!"&lt;/span&gt;
  &lt;span class="c"&gt;# Send email, create GitHub issue, wait for approval&lt;/span&gt;
&lt;span class="k"&gt;else
  &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"✅ Expected changes (code was updated)"&lt;/span&gt;
  &lt;span class="c"&gt;# Proceed automatically&lt;/span&gt;
&lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
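The pseudocode above can be made concrete: `terraform plan -detailed-exitcode` exits 0 when nothing changed and 2 when changes exist, so the decision reduces to a function you can test without touching AWS. The sketch below is mine, not the toolkit's exact workflow code:

```shell
#!/bin/bash
# Classify a run: plan_exit is the status of
# `terraform plan -detailed-exitcode` (0 = no changes, 2 = changes);
# code_changed is whether *.tf/*.tfvars files changed in the last commit.
classify_change() {
  local plan_exit="$1" code_changed="$2"
  if [ "$plan_exit" -eq 2 ] && [ "$code_changed" = "false" ]; then
    echo "🚨 DRIFT DETECTED!"
  elif [ "$plan_exit" -eq 2 ]; then
    echo "✅ Expected changes (code was updated)"
  else
    echo "✅ No changes"
  fi
}

# In CI you would feed it real values:
#   terraform plan -detailed-exitcode -out=tfplan; plan_exit=$?
#   git diff HEAD~1 HEAD --name-only | grep -qE '\.tf$|\.tfvars$' \
#     && code_changed=true || code_changed=false
classify_change 2 false   # → 🚨 DRIFT DETECTED!
classify_change 2 true    # → ✅ Expected changes (code was updated)
```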



&lt;h3&gt;
  
  
  Setting Up Email Notifications for Drift
&lt;/h3&gt;

&lt;p&gt;When drift is detected, you need to know immediately! That's where email notifications come in.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 1: Verify Your Email in AWS SES
&lt;/h4&gt;

&lt;p&gt;AWS SES (Simple Email Service) is like a post office for your applications. First, you need to verify your email address:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Go to AWS Console&lt;/strong&gt; → SES (Simple Email Service)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Click "Verified identities"&lt;/strong&gt; → "Create identity"&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Choose "Email address"&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enter your email&lt;/strong&gt; (e.g., &lt;code&gt;your-email@gmail.com&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Click "Create identity"&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Check your email&lt;/strong&gt; and click the verification link&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Why verify?&lt;/strong&gt; AWS prevents spam by requiring you to verify you own the email address.&lt;/p&gt;
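If you prefer the terminal, `aws ses verify-email-identity` sends the same verification link as the console flow. The wrapper and its crude email sanity check below are my own additions:

```shell
#!/bin/bash
# Request SES verification for an address after a rough format check
# (not full RFC validation, just a guard against obvious typos).
verify_ses_email() {
  local addr="$1"
  case "$addr" in
    *@*.*) ;;  # looks plausibly like an email address
    *) echo "not an email address: ${addr}" >&2; return 1 ;;
  esac
  aws ses verify-email-identity \
    --email-address "$addr" \
    --region "${AWS_REGION:-us-east-1}"
}

# verify_ses_email your-email@gmail.com   # then click the link AWS sends
```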

&lt;h4&gt;
  
  
  Step 2: Create the Email Notification Script
&lt;/h4&gt;

&lt;p&gt;Create &lt;code&gt;infra/ci-cd/scripts/email-notification.sh&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;
&lt;span class="c"&gt;# Email Notification Script for Terraform Drift&lt;/span&gt;
&lt;span class="c"&gt;# Sends email alert when infrastructure drift is detected&lt;/span&gt;

&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt;

&lt;span class="nv"&gt;DRIFT_SUMMARY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;1&lt;/span&gt;&lt;span class="k"&gt;:-}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-z&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DRIFT_SUMMARY&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Error: Drift summary not provided"&lt;/span&gt;
  &lt;span class="nb"&gt;exit &lt;/span&gt;1
&lt;span class="k"&gt;fi&lt;/span&gt;

&lt;span class="c"&gt;# Email configuration from environment variables&lt;/span&gt;
&lt;span class="nv"&gt;EMAIL_TO&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;EMAIL_TO&lt;/span&gt;&lt;span class="k"&gt;:-}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nv"&gt;EMAIL_FROM&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;EMAIL_FROM&lt;/span&gt;&lt;span class="k"&gt;:-}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nv"&gt;AWS_REGION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;AWS_REGION&lt;/span&gt;&lt;span class="k"&gt;:-&lt;/span&gt;&lt;span class="nv"&gt;us&lt;/span&gt;&lt;span class="p"&gt;-east-1&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="c"&gt;# GitHub Actions variables for workflow link&lt;/span&gt;
&lt;span class="nv"&gt;GITHUB_SERVER_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;GITHUB_SERVER_URL&lt;/span&gt;&lt;span class="k"&gt;:-&lt;/span&gt;&lt;span class="nv"&gt;https&lt;/span&gt;://github.com&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nv"&gt;GITHUB_REPOSITORY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;GITHUB_REPOSITORY&lt;/span&gt;&lt;span class="k"&gt;:-}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nv"&gt;GITHUB_RUN_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;GITHUB_RUN_ID&lt;/span&gt;&lt;span class="k"&gt;:-}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="c"&gt;# Check if email is configured&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-z&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$EMAIL_TO&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-z&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$EMAIL_FROM&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"⚠️  Email not configured. Skipping email notification."&lt;/span&gt;
  &lt;span class="nb"&gt;exit &lt;/span&gt;0
&lt;span class="k"&gt;fi&lt;/span&gt;

&lt;span class="c"&gt;# Build workflow run URL if GitHub variables are available&lt;/span&gt;
&lt;span class="nv"&gt;WORKFLOW_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$GITHUB_REPOSITORY&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$GITHUB_RUN_ID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nv"&gt;WORKFLOW_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;GITHUB_SERVER_URL&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;GITHUB_REPOSITORY&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/actions/runs/&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;GITHUB_RUN_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="k"&gt;fi&lt;/span&gt;

&lt;span class="c"&gt;# Create email body&lt;/span&gt;
&lt;span class="nv"&gt;SUBJECT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"🚨 Terraform Drift Detected - Action Required"&lt;/span&gt;
&lt;span class="nv"&gt;BODY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
Terraform infrastructure drift has been detected.

This means infrastructure was changed OUTSIDE of Terraform (e.g., manually in AWS Console).

Please review the changes and approve the deployment in GitHub Actions.

&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$WORKFLOW_URL&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"🔗 View Workflow Run: &lt;/span&gt;&lt;span class="nv"&gt;$WORKFLOW_URL&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;fi&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="sh"&gt;

Drift Summary:
&lt;/span&gt;&lt;span class="nv"&gt;$DRIFT_SUMMARY&lt;/span&gt;&lt;span class="sh"&gt;

---
This is an automated message from GitHub Actions.
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# Send email via AWS SES&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"📧 Sending drift alert email via AWS SES..."&lt;/span&gt;
aws ses send-email &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--region&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$AWS_REGION&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--from&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$EMAIL_FROM&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--to&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$EMAIL_TO&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--subject&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SUBJECT&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--text&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$BODY&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"⚠️  Failed to send email. Check AWS credentials and SES configuration."&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"✅ Email notification sent!"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Make it executable:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;chmod&lt;/span&gt; +x infra/ci-cd/scripts/email-notification.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What this script does:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Takes the drift summary as input&lt;/li&gt;
&lt;li&gt;Checks if email is configured&lt;/li&gt;
&lt;li&gt;Builds a nice email message with the drift details&lt;/li&gt;
&lt;li&gt;Sends it via AWS SES&lt;/li&gt;
&lt;li&gt;Includes a link to the GitHub workflow run&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Step 3: Add GitHub Secrets
&lt;/h4&gt;

&lt;p&gt;In your GitHub repository, go to &lt;strong&gt;Settings&lt;/strong&gt; → &lt;strong&gt;Secrets and variables&lt;/strong&gt; → &lt;strong&gt;Actions&lt;/strong&gt;, and add:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;EMAIL_TO&lt;/code&gt; - Your email address (where to send alerts)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;EMAIL_FROM&lt;/code&gt; - Your verified SES email (must match the one you verified in AWS)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;AWS_ACCESS_KEY_ID&lt;/code&gt; - Your AWS access key&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;AWS_SECRET_ACCESS_KEY&lt;/code&gt; - Your AWS secret key&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;AWS_REGION&lt;/code&gt; - Your AWS region (e.g., &lt;code&gt;us-east-1&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Security tip&lt;/strong&gt;: Never commit these values to your repository! Always use GitHub Secrets.&lt;/p&gt;
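The same secrets can also be set from the terminal with the GitHub CLI, which keeps the values out of the browser entirely (assumes `gh` is installed and authenticated; all values below are placeholders):

```shell
# Set each secret without pasting values into the web UI.
gh secret set EMAIL_TO --body "you@example.com"
gh secret set EMAIL_FROM --body "you@example.com"
gh secret set AWS_REGION --body "us-east-1"

# Read the AWS keys from files so they never enter shell history:
gh secret set AWS_ACCESS_KEY_ID < access_key_id.txt
gh secret set AWS_SECRET_ACCESS_KEY < secret_access_key.txt
```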

&lt;h3&gt;
  
  
  Understanding GitHub Issue Approval
&lt;/h3&gt;

&lt;p&gt;When drift is detected, the workflow creates a GitHub issue and waits for your approval. This is like a safety checkpoint.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How it works:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Drift detected&lt;/strong&gt; → Workflow pauses&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub issue created&lt;/strong&gt; → Contains:

&lt;ul&gt;
&lt;li&gt;What changed&lt;/li&gt;
&lt;li&gt;Why it's drift (no code changes)&lt;/li&gt;
&lt;li&gt;Link to workflow run&lt;/li&gt;
&lt;li&gt;Plan summary&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;You review&lt;/strong&gt; → Check the issue&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;You approve&lt;/strong&gt; → Comment "approve" or click approve button&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Workflow continues&lt;/strong&gt; → Terraform applies the changes&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Example GitHub Issue:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;🚨 REAL DRIFT DETECTED - Infrastructure Changed Outside Terraform (dev)

⚠️ CRITICAL: Real Infrastructure Drift Detected

Infrastructure has been modified outside of Terraform. This is unexpected.

Environment: dev

What happened:
- Terraform code files were NOT modified
- But infrastructure plan shows changes
- This indicates manual changes or changes from another process

Action Required:
1. Review the plan below
2. Investigate what caused the drift
3. Approve if changes are intentional, or revert if unauthorized

Plan Summary:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# aws_instance.todo_app will be updated in-place
~ resource "aws_instance" "todo_app" {
    ~ tags = {
      - "ManualTag" = "test" -&amp;gt; null
    }
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Workflow Run:
🔗 View Workflow Run: https://github.com/yourusername/repo/actions/runs/123456

Next Steps:
- Approve to apply these changes
- Or investigate and revert unauthorized changes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;How to approve:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Go to the GitHub issue&lt;/strong&gt; (you'll get a notification)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Review the changes&lt;/strong&gt; carefully&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;If changes are OK&lt;/strong&gt;: Comment "approve" or click the approve button&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;If changes are suspicious&lt;/strong&gt;: Investigate first, then approve or revert&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Why this matters:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Prevents accidental changes&lt;/li&gt;
&lt;li&gt;✅ Gives you time to investigate&lt;/li&gt;
&lt;li&gt;✅ Creates an audit trail (who approved what, when)&lt;/li&gt;
&lt;li&gt;✅ Protects production from unauthorized changes&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Testing Drift Detection
&lt;/h3&gt;

&lt;p&gt;Want to test if drift detection works? Here's how:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Deploy infrastructure normally&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform apply &lt;span class="nt"&gt;-var-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;terraform.dev.tfvars
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2: Manually change something in AWS Console&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to AWS Console → EC2 → Instances&lt;/li&gt;
&lt;li&gt;Find your instance&lt;/li&gt;
&lt;li&gt;Click "Tags" → "Manage tags"&lt;/li&gt;
&lt;li&gt;Add a new tag: &lt;code&gt;TestTag = "drift-test"&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Save&lt;/li&gt;
&lt;/ol&gt;
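The same manual change can be made from the CLI. This helper is hypothetical, and its lookup assumes you give each instance a unique `Name` tag:

```shell
#!/bin/bash
# Find a running instance by its Name tag and add the drift-test tag.
drift_tag_instance() {
  local name_tag="$1"
  [ -n "$name_tag" ] || { echo "usage: drift_tag_instance <Name-tag-value>" >&2; return 1; }
  local id
  id=$(aws ec2 describe-instances \
        --filters "Name=tag:Name,Values=${name_tag}" \
                  "Name=instance-state-name,Values=running" \
        --query "Reservations[].Instances[].InstanceId" \
        --output text)
  [ -n "$id" ] || { echo "no running instance named ${name_tag}" >&2; return 1; }
  aws ec2 create-tags --resources "$id" --tags Key=TestTag,Value=drift-test
}

# drift_tag_instance my-dev-instance
```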

&lt;p&gt;In my test, the manual change was opening port 8080 in the AWS Console instead (any out-of-band change triggers drift):&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbpu7x6q04nj8fsn4w4zi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbpu7x6q04nj8fsn4w4zi.png" alt="ADD Port-8080" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Trigger the workflow&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to GitHub Actions&lt;/li&gt;
&lt;li&gt;Run the "Infrastructure Deployment" workflow&lt;/li&gt;
&lt;li&gt;Select "dev" environment&lt;/li&gt;
&lt;li&gt;Watch it detect drift!&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Check your email&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You should receive an email alert&lt;/li&gt;
&lt;li&gt;Check spam folder if you don't see it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbuw0zgamal61m8ddtsnp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbuw0zgamal61m8ddtsnp.png" alt="Email Alert" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Approve in GitHub&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A GitHub issue should be created&lt;/li&gt;
&lt;li&gt;Review and approve&lt;/li&gt;
&lt;li&gt;Watch Terraform apply the changes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The approved GitHub issue:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faihhioyuk4wgw5e4y82u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faihhioyuk4wgw5e4y82u.png" alt="Github Issue" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Destroying Infrastructure (When You Need to Start Over)
&lt;/h3&gt;

&lt;p&gt;Sometimes you need to tear everything down and start fresh. Here's how to do it safely.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;⚠️ WARNING&lt;/strong&gt;: Destroying infrastructure will &lt;strong&gt;DELETE EVERYTHING&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your EC2 instance&lt;/li&gt;
&lt;li&gt;All data on the server&lt;/li&gt;
&lt;li&gt;Security groups&lt;/li&gt;
&lt;li&gt;Everything created by Terraform&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Make sure you:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Have backups if you need data&lt;/li&gt;
&lt;li&gt;✅ Are destroying the right environment (dev, not prod!)&lt;/li&gt;
&lt;li&gt;✅ Really want to delete everything&lt;/li&gt;
&lt;/ul&gt;
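One extra guard worth adding (my own habit, not part of the toolkit): make destroying prod require typing the environment name back before Terraform even sees the command:

```shell
#!/bin/bash
# Refuse to destroy prod unless the operator confirms interactively.
safe_destroy() {
  local env="$1"
  if [ "$env" = "prod" ]; then
    local answer
    read -r -p "Type 'prod' to confirm destroying PRODUCTION: " answer
    [ "$answer" = "prod" ] || { echo "aborted"; return 1; }
  fi
  terraform destroy -var-file="terraform.${env}.tfvars"
}

# safe_destroy dev    # destroys immediately
# safe_destroy prod   # asks first
```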
&lt;h4&gt;
  
  
  Method 1: Destroy via Command Line
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Step 1: Initialize Terraform (if not already done)&lt;/span&gt;
&lt;span class="nb"&gt;cd &lt;/span&gt;infra/terraform
terraform init &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"bucket=yourname-terraform-state"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"key=terraform-state/dev/terraform.tfstate"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"region=us-east-1"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"dynamodb_table=terraform-state-lock"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"encrypt=true"&lt;/span&gt;

&lt;span class="c"&gt;# Step 2: Plan the destruction (see what will be deleted)&lt;/span&gt;
terraform plan &lt;span class="nt"&gt;-destroy&lt;/span&gt; &lt;span class="nt"&gt;-var-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;terraform.dev.tfvars

&lt;span class="c"&gt;# Step 3: Review the plan carefully!&lt;/span&gt;
&lt;span class="c"&gt;# Make sure it's only deleting what you want&lt;/span&gt;

&lt;span class="c"&gt;# Step 4: Destroy everything&lt;/span&gt;
terraform destroy &lt;span class="nt"&gt;-var-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;terraform.dev.tfvars
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;What happens:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Terraform reads the state file&lt;/li&gt;
&lt;li&gt;Plans what needs to be destroyed&lt;/li&gt;
&lt;li&gt;Shows you the plan (review it!)&lt;/li&gt;
&lt;li&gt;Asks for confirmation (type &lt;code&gt;yes&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Deletes everything in reverse order&lt;/li&gt;
&lt;/ol&gt;
&lt;h4&gt;
  
  
  Method 2: Destroy via GitHub Actions
&lt;/h4&gt;

&lt;p&gt;If you have a destroy workflow set up:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Go to GitHub Actions&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxyzoersxwui4yukdzmj5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxyzoersxwui4yukdzmj5.png" alt="Terraform Destroy" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Find "Destroy Infrastructure" workflow&lt;/strong&gt; (if you have one)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Click "Run workflow"&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Select environment&lt;/strong&gt; (be careful - don't destroy prod!)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Confirm&lt;/strong&gt; and run
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpvo052va4kgi2csf06xm.png" alt="Destroy Action" width="800" height="390"&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Example destroy workflow:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Destroy Infrastructure&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;workflow_dispatch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Environment&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;to&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;destroy'&lt;/span&gt;
        &lt;span class="na"&gt;required&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;choice&lt;/span&gt;
        &lt;span class="na"&gt;options&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;dev&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;stg&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;prod&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;destroy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Configure AWS credentials&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-actions/configure-aws-credentials@v4&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;aws-access-key-id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_ACCESS_KEY_ID }}&lt;/span&gt;
          &lt;span class="na"&gt;aws-secret-access-key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_SECRET_ACCESS_KEY }}&lt;/span&gt;
          &lt;span class="na"&gt;aws-region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_REGION }}&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Setup Terraform&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hashicorp/setup-terraform@v3&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform Init&lt;/span&gt;
        &lt;span class="na"&gt;working-directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;infra/terraform&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;terraform init \&lt;/span&gt;
            &lt;span class="s"&gt;-backend-config="bucket=${{ secrets.TERRAFORM_STATE_BUCKET }}" \&lt;/span&gt;
            &lt;span class="s"&gt;-backend-config="key=terraform-state/${{ github.event.inputs.environment }}/terraform.tfstate" \&lt;/span&gt;
            &lt;span class="s"&gt;-backend-config="region=${{ secrets.AWS_REGION }}" \&lt;/span&gt;
            &lt;span class="s"&gt;-backend-config="dynamodb_table=${{ secrets.TERRAFORM_STATE_LOCK_TABLE }}" \&lt;/span&gt;
            &lt;span class="s"&gt;-backend-config="encrypt=true"&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform Plan Destroy&lt;/span&gt;
        &lt;span class="na"&gt;working-directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;infra/terraform&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;terraform plan -destroy \&lt;/span&gt;
            &lt;span class="s"&gt;-var-file=terraform.${{ github.event.inputs.environment }}.tfvars \&lt;/span&gt;
            &lt;span class="s"&gt;-var="key_pair_name=${{ secrets.TERRAFORM_KEY_PAIR_NAME }}" \&lt;/span&gt;
            &lt;span class="s"&gt;-out=tfplan&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Manual Approval&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;trstringer/manual-approval@v1&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;secret&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ github.TOKEN }}&lt;/span&gt;
          &lt;span class="na"&gt;approvers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ github.actor }}&lt;/span&gt;
          &lt;span class="na"&gt;minimum-approvals&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
          &lt;span class="na"&gt;issue-title&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;⚠️&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;DESTROY&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Infrastructure&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;-&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;${{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;github.event.inputs.environment&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
          &lt;span class="na"&gt;issue-body&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
            &lt;span class="s"&gt;**⚠️ WARNING: Infrastructure Destruction Requested**&lt;/span&gt;

            &lt;span class="s"&gt;This will **DELETE ALL INFRASTRUCTURE** for environment: **${{ github.event.inputs.environment }}**&lt;/span&gt;

            &lt;span class="s"&gt;**This action cannot be undone!**&lt;/span&gt;

            &lt;span class="s"&gt;Review the plan and approve only if you're sure.&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform Destroy&lt;/span&gt;
        &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;steps.manual-approval.outcome == 'success'&lt;/span&gt;
        &lt;span class="na"&gt;working-directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;infra/terraform&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;terraform apply -auto-approve tfplan&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Safety features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Manual approval required (can't destroy by accident)&lt;/li&gt;
&lt;li&gt;✅ Shows what will be destroyed&lt;/li&gt;
&lt;li&gt;✅ Creates GitHub issue for review&lt;/li&gt;
&lt;li&gt;✅ Environment selection (prevents destroying wrong env)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Method 3: Destroy Specific Resources
&lt;/h4&gt;

&lt;p&gt;Don't want to destroy everything? You can target specific resources:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Destroy only the EC2 instance (keep security group)&lt;/span&gt;
terraform destroy &lt;span class="nt"&gt;-target&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;aws_instance.todo_app &lt;span class="nt"&gt;-var-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;terraform.dev.tfvars

&lt;span class="c"&gt;# Destroy only the security group&lt;/span&gt;
terraform destroy &lt;span class="nt"&gt;-target&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;aws_security_group.todo_app &lt;span class="nt"&gt;-var-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;terraform.dev.tfvars
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;When to use this:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You want to recreate just one resource&lt;/li&gt;
&lt;li&gt;Something is broken and you want to rebuild it&lt;/li&gt;
&lt;li&gt;You're testing changes&lt;/li&gt;
&lt;/ul&gt;
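&lt;p&gt;If the goal is to recreate one broken resource, newer Terraform versions can do the destroy-and-recreate in a single step with &lt;code&gt;-replace&lt;/code&gt;. A sketch, reusing the resource address from the examples above:&lt;/p&gt;

```shell
# Recreating one resource in a single step: modern Terraform supports
# -replace, which destroys and recreates the target in one apply.
# (Resource address and tfvars file are taken from this tutorial.)
target="aws_instance.todo_app"
cmd="terraform apply -replace=$target -var-file=terraform.dev.tfvars"
echo "Review before running: $cmd"
```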

&lt;h4&gt;
  
  
  After Destruction
&lt;/h4&gt;

&lt;p&gt;After destroying, your state file still exists in S3, but it no longer lists any resources. You can:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Start fresh&lt;/strong&gt;: Run &lt;code&gt;terraform apply&lt;/code&gt; again to recreate everything&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Clean up state&lt;/strong&gt;: Delete the state file from S3 (optional)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Keep state&lt;/strong&gt;: Leave it (Terraform will just create new resources)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Best practice&lt;/strong&gt;: Keep the state file. It's useful for history and doesn't cost much.&lt;/p&gt;
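&lt;p&gt;If you do decide to clean up, the state key follows the pattern from the backend config earlier. A sketch that builds the path and prints the cleanup command instead of running it (the bucket name is the same placeholder used above):&lt;/p&gt;

```shell
# Build the state-file path used in this tutorial's backend config and
# print the cleanup command for review (bucket name is a placeholder).
bucket="yourname-terraform-state"
env="dev"
key="terraform-state/$env/terraform.tfstate"
echo "aws s3 rm s3://$bucket/$key"
```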

&lt;h3&gt;
  
  
  Summary: The Complete Terraform Workflow
&lt;/h3&gt;

&lt;p&gt;Here's the full picture of how everything works together:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌─────────────────────────────────────────────────────────┐
│ 1. You make changes to Terraform code                   │
│    (or someone changes infrastructure manually)         │
└─────────────────┬───────────────────────────────────────┘
                  │
                  ▼
┌─────────────────────────────────────────────────────────┐
│ 2. GitHub Actions workflow runs                         │
│    - Checks out code                                    │
│    - Runs terraform plan                                │
└─────────────────┬───────────────────────────────────────┘
                  │
                  ▼
┌─────────────────────────────────────────────────────────┐
│ 3. Drift Detection                                      │
│    - Did Terraform files change?                        │
│    - Does plan show changes?                            │
│    - If no code changes + plan changes = DRIFT!         │
└─────────────────┬───────────────────────────────────────┘
                  │
        ┌─────────┴─────────┐
        │                   │
        ▼                   ▼
┌──────────────┐   ┌───────────────────┐
│ DRIFT        │   │ EXPECTED CHANGES  │
│ DETECTED     │   │ (Code updated)    │
└──────┬───────┘   └────────┬──────────┘
       │                    │
       ▼                    ▼
┌──────────────────┐   ┌──────────────────┐
│ Send Email       │   │ Apply directly   │
│ Create Issue     │   │ (No approval)    │
│ Wait for Approval│   └──────────────────┘
└────────┬─────────┘
         │
         ▼
┌──────────────────┐
│ You Review       │
│ &amp;amp; Approve        │
└────────┬─────────┘
         │
         ▼
┌───────────────────┐
│ Apply Changes     │
│ (terraform apply) │
└───────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Key takeaways:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Always run &lt;code&gt;terraform plan&lt;/code&gt; first (see what will happen)&lt;/li&gt;
&lt;li&gt;✅ Drift detection protects you from unexpected changes&lt;/li&gt;
&lt;li&gt;✅ Email notifications keep you informed&lt;/li&gt;
&lt;li&gt;✅ Manual approval prevents accidents&lt;/li&gt;
&lt;li&gt;✅ Destroy carefully - it's permanent!&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Part 5: Server Configuration with Ansible
&lt;/h2&gt;

&lt;p&gt;Terraform created your server, but it's just a blank Ubuntu machine. Now we need to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install Docker&lt;/li&gt;
&lt;li&gt;Clone your code&lt;/li&gt;
&lt;li&gt;Start the application&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That's where Ansible comes in.&lt;/p&gt;

&lt;h3&gt;
  
  
  Understanding Ansible
&lt;/h3&gt;

&lt;p&gt;Ansible is like having a robot assistant that can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SSH into your servers&lt;/li&gt;
&lt;li&gt;Run commands&lt;/li&gt;
&lt;li&gt;Install software&lt;/li&gt;
&lt;li&gt;Copy files&lt;/li&gt;
&lt;li&gt;Start services&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why Ansible over SSH scripts?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Idempotent&lt;/strong&gt;: Run it multiple times safely (won't break if run twice)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Declarative&lt;/strong&gt;: You say "what" you want, not "how" to do it&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Organized&lt;/strong&gt;: Roles and playbooks give your automation a clear structure&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reusable&lt;/strong&gt;: Write once, use for dev/stg/prod&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Ansible Playbook Structure
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Configure TODO Application Server&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;all&lt;/span&gt;
  &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;  &lt;span class="c1"&gt;# Use sudo&lt;/span&gt;
  &lt;span class="na"&gt;gather_facts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;  &lt;span class="c1"&gt;# Collect info about the server&lt;/span&gt;

  &lt;span class="na"&gt;vars&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app_user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu&lt;/span&gt;
    &lt;span class="na"&gt;app_dir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/opt/todo-app&lt;/span&gt;

  &lt;span class="na"&gt;roles&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dependencies&lt;/span&gt;  &lt;span class="c1"&gt;# Install Docker, etc.&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deploy&lt;/span&gt;        &lt;span class="c1"&gt;# Deploy the application&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Breaking it down:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;hosts: all&lt;/code&gt; - Run on all servers in inventory&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;become: yes&lt;/code&gt; - Use sudo (needed for installing packages)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;gather_facts&lt;/code&gt; - Ansible learns about the server (OS, IP, etc.)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;roles&lt;/code&gt; - Reusable collections of tasks&lt;/li&gt;
&lt;/ul&gt;
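&lt;p&gt;For reference, this is how a playbook like the one above is typically invoked (the playbook, inventory, and key file names here are assumptions, not files from this project):&lt;/p&gt;

```shell
# Typical invocation for a playbook like the one above. The file names
# (site.yml, inventory/dev.ini, the key path) are assumptions.
playbook="site.yml"
inventory="inventory/dev.ini"
cmd="ansible-playbook -i $inventory $playbook --private-key ~/.ssh/todo-app.pem"
echo "$cmd"
```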

&lt;h3&gt;
  
  
  Creating the Dependencies Role
&lt;/h3&gt;

&lt;p&gt;This role installs everything the server needs:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;roles/dependencies/tasks/main.yml:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Update apt cache&lt;/span&gt;
  &lt;span class="na"&gt;apt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;update_cache&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
    &lt;span class="na"&gt;cache_valid_time&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3600&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install required packages&lt;/span&gt;
  &lt;span class="na"&gt;apt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;git&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;curl&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;python3&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;python3-pip&lt;/span&gt;
    &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;present&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Check if Docker is already installed&lt;/span&gt;
  &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker --version&lt;/span&gt;
  &lt;span class="na"&gt;register&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker_check&lt;/span&gt;
  &lt;span class="na"&gt;changed_when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="na"&gt;failed_when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="na"&gt;ignore_errors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install Docker (only if not installed)&lt;/span&gt;
  &lt;span class="na"&gt;apt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;docker-ce&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;docker-ce-cli&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;containerd.io&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;docker-compose-plugin&lt;/span&gt;
    &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;present&lt;/span&gt;
  &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker_check.rc != &lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;  &lt;span class="c1"&gt;# Only if Docker not found&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Add user to docker group&lt;/span&gt;
  &lt;span class="na"&gt;user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;app_user&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;groups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker&lt;/span&gt;
    &lt;span class="na"&gt;append&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Start and enable Docker&lt;/span&gt;
  &lt;span class="na"&gt;systemd&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker&lt;/span&gt;
    &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;started&lt;/span&gt;
    &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Key concepts:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;register&lt;/code&gt; - Save command output to a variable&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;when&lt;/code&gt; - Conditional execution (only if condition is true)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;changed_when: false&lt;/code&gt; - This task never "changes" anything (just checks)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;state: present&lt;/code&gt; - Ensure package is installed (idempotent!)&lt;/li&gt;
&lt;/ul&gt;
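&lt;p&gt;The &lt;code&gt;register&lt;/code&gt; + &lt;code&gt;when&lt;/code&gt; pair is the Ansible version of a very common shell idiom: probe first, then act only if the probe failed. In plain shell it looks like this:&lt;/p&gt;

```shell
# The register/when pattern above mirrors this everyday shell idiom:
# probe first, act only when the probe says the tool is missing.
if command -v docker >/dev/null 2>&1; then
  docker_missing=0
  echo "docker already installed - skipping install"
else
  docker_missing=1
  echo "docker not found - an install step would run here"
fi
```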

&lt;h3&gt;
  
  
  Creating the Deploy Role
&lt;/h3&gt;

&lt;p&gt;This role actually deploys your application:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;roles/deploy/tasks/main.yml:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Create application directory&lt;/span&gt;
  &lt;span class="na"&gt;file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;app_dir&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;directory&lt;/span&gt;
    &lt;span class="na"&gt;owner&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;app_user&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;group&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;app_user&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;0755'&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Clone repository&lt;/span&gt;
  &lt;span class="na"&gt;git&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;repo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;repo_url&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;dest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;app_dir&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;repo_branch&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;default('main')&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;update&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
  &lt;span class="na"&gt;register&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;git_pull_result&lt;/span&gt;
  &lt;span class="na"&gt;changed_when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;git_pull_result.changed&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Create .env file&lt;/span&gt;
  &lt;span class="na"&gt;copy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
      &lt;span class="s"&gt;DOMAIN="{{ domain }}"&lt;/span&gt;
      &lt;span class="s"&gt;LETSENCRYPT_EMAIL="{{ letsencrypt_email }}"&lt;/span&gt;
      &lt;span class="s"&gt;JWT_SECRET="{{ jwt_secret }}"&lt;/span&gt;
      &lt;span class="s"&gt;# ... other variables&lt;/span&gt;
    &lt;span class="na"&gt;dest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;app_dir&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}/.env"&lt;/span&gt;
    &lt;span class="na"&gt;owner&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;app_user&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;0600'&lt;/span&gt;
  &lt;span class="na"&gt;register&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;env_file_result&lt;/span&gt;
  &lt;span class="na"&gt;changed_when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;env_file_result.changed&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Determine if rebuild is needed&lt;/span&gt;
  &lt;span class="na"&gt;set_fact&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;needs_rebuild&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;git_pull_result.changed&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;default(false)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;or&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;env_file_result.changed&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;default(false)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build images if code/config changed&lt;/span&gt;
  &lt;span class="na"&gt;shell&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker compose build&lt;/span&gt;
  &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;chdir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;app_dir&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
  &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;needs_rebuild | default(false)&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Start/update containers&lt;/span&gt;
  &lt;span class="na"&gt;shell&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker compose up -d&lt;/span&gt;
  &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;chdir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;app_dir&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Making it idempotent:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Only rebuilds if code or config changed&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;docker compose up -d&lt;/code&gt; is idempotent (it only recreates containers whose image or configuration changed)&lt;/li&gt;
&lt;li&gt;Safe to run multiple times&lt;/li&gt;
&lt;/ul&gt;
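
&lt;p&gt;The &lt;code&gt;needs_rebuild&lt;/code&gt; guard above reduces to a boolean OR over the registered task results. As a plain-shell sketch (the function name and hardcoded inputs are just for illustration):&lt;/p&gt;

```shell
# Sketch of the playbook's rebuild guard: build only when the git pull or the
# .env template reported a change (argument names mirror the registered results)
needs_rebuild() {
  local git_pull_changed="$1" env_file_changed="$2"
  if [ "$git_pull_changed" = true ] || [ "$env_file_changed" = true ]; then
    echo rebuild
  else
    echo skip
  fi
}

needs_rebuild false true   # the .env file changed, so rebuild
needs_rebuild false false  # nothing changed, safe to run again
```

&lt;p&gt;Running it twice with unchanged inputs takes the &lt;code&gt;skip&lt;/code&gt; path both times, which is exactly the idempotence the playbook relies on.&lt;/p&gt;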

&lt;h3&gt;
  
  
  Environment-Specific Variables
&lt;/h3&gt;

&lt;p&gt;Just like Terraform, Ansible needs separate configuration files for each environment. Create three files:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;group_vars/dev/vars.yml:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;domain&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;dev.yourdomain.com"&lt;/span&gt;
&lt;span class="na"&gt;letsencrypt_email&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;dev-email@example.com"&lt;/span&gt;
&lt;span class="na"&gt;jwt_secret&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;dev-secret-key"&lt;/span&gt;
&lt;span class="na"&gt;repo_url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://github.com/yourusername/path-to-codebase.git"&lt;/span&gt;
&lt;span class="na"&gt;repo_branch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;dev"&lt;/span&gt;  &lt;span class="c1"&gt;# Use dev branch for development&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;group_vars/stg/vars.yml:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;domain&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;stg.yourdomain.com"&lt;/span&gt;
&lt;span class="na"&gt;letsencrypt_email&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;staging-email@example.com"&lt;/span&gt;
&lt;span class="na"&gt;jwt_secret&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;staging-secret-key"&lt;/span&gt;  &lt;span class="c1"&gt;# Different from dev!&lt;/span&gt;
&lt;span class="na"&gt;repo_url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://github.com/yourusername/path-to-codebase.git"&lt;/span&gt;
&lt;span class="na"&gt;repo_branch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;staging"&lt;/span&gt;  &lt;span class="c1"&gt;# Use staging branch for staging&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;group_vars/prod/vars.yml:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;domain&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;yourdomain.com"&lt;/span&gt;
&lt;span class="na"&gt;letsencrypt_email&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;prod-email@example.com"&lt;/span&gt;
&lt;span class="na"&gt;jwt_secret&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;super-secure-production-secret"&lt;/span&gt;  &lt;span class="c1"&gt;# Different per environment!&lt;/span&gt;
&lt;span class="na"&gt;repo_url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://github.com/yourusername/path-to-codebase.git"&lt;/span&gt;
&lt;span class="na"&gt;repo_branch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;main"&lt;/span&gt;  &lt;span class="c1"&gt;# Use main branch for production&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why three separate files?&lt;/strong&gt; Each environment needs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Different domains&lt;/strong&gt;: &lt;code&gt;dev.yourdomain.com&lt;/code&gt;, &lt;code&gt;stg.yourdomain.com&lt;/code&gt;, &lt;code&gt;yourdomain.com&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Different secrets&lt;/strong&gt;: If dev gets compromised, staging and prod are still safe&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Different branches&lt;/strong&gt;: 

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;dev&lt;/code&gt; branch → development environment (experimental features)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;staging&lt;/code&gt; branch → staging environment (testing before production)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;main&lt;/code&gt; branch → production environment (stable, tested code)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
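
&lt;p&gt;One caution: committing real secrets like &lt;code&gt;jwt_secret&lt;/code&gt; to Git in plain text undermines the isolation these files provide. A safer pattern is to generate a distinct random secret per environment and store it encrypted, for example with &lt;code&gt;ansible-vault&lt;/code&gt;. A minimal generation sketch (the loop and variable names are hypothetical):&lt;/p&gt;

```shell
# Generate a distinct 64-character hex secret per environment, so leaking one
# environment's secret never compromises the others (names are hypothetical)
for env in dev stg prod; do
  secret=$(head -c 32 /dev/urandom | od -An -tx1 | tr -d ' \n')
  echo "jwt_secret for $env has ${#secret} hex chars"
done
```

&lt;p&gt;Each value could then be encrypted before committing with &lt;code&gt;ansible-vault encrypt_string&lt;/code&gt;.&lt;/p&gt;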

&lt;p&gt;&lt;strong&gt;File structure:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;infra/ansible/
├── playbook.yml
├── inventory/
│   ├── dev.yml
│   ├── stg.yml
│   └── prod.yml
└── group_vars/
    ├── dev/
    │   └── vars.yml      ← Development variables
    ├── stg/
    │   └── vars.yml      ← Staging variables
    └── prod/
        └── vars.yml      ← Production variables
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Branch strategy explained:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Development&lt;/strong&gt; (&lt;code&gt;dev&lt;/code&gt; branch): Where you experiment and develop new features&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Staging&lt;/strong&gt; (&lt;code&gt;staging&lt;/code&gt; branch): Where you test features before they go to production&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Production&lt;/strong&gt; (&lt;code&gt;main&lt;/code&gt; branch): The stable code that real users interact with&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This way, you can test changes in dev/staging without affecting production!&lt;/p&gt;
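
&lt;p&gt;The promotion flow between these branches can be exercised end to end. This self-contained sketch builds a throwaway repository and walks a change from &lt;code&gt;dev&lt;/code&gt; through &lt;code&gt;staging&lt;/code&gt; to &lt;code&gt;main&lt;/code&gt; (the file name and commit messages are made up for the demo):&lt;/p&gt;

```shell
# Demo of the dev → staging → main promotion flow in a throwaway repository
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
git checkout -qb main
echo v1 > app.txt
git add app.txt && git commit -qm "stable release"

git checkout -qb dev                 # experiment on dev
echo v2 > app.txt
git commit -qam "experimental feature"

git checkout -qb staging main        # staging starts from the last stable point
git merge -q dev                     # promote dev work for testing

git checkout -q main
git merge -q staging                 # after testing, promote to production
cat app.txt                          # prints "v2": main carries the change
```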

&lt;h3&gt;
  
  
  Generating Inventory
&lt;/h3&gt;

&lt;p&gt;Terraform automatically generates the Ansible inventory:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;templates/inventory.tpl:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;all:
  hosts:
    todo-app-server:
      ansible_host: ${server_ip}
      ansible_user: ${server_user}
      ansible_ssh_private_key_file: ${ssh_key_path}
      ansible_ssh_common_args: '-o StrictHostKeyChecking=no'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This gets generated as &lt;code&gt;ansible/inventory/dev.yml&lt;/code&gt; (or &lt;code&gt;stg.yml&lt;/code&gt;, &lt;code&gt;prod.yml&lt;/code&gt;) with the actual server IP.&lt;/p&gt;
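
&lt;p&gt;Under the hood, Terraform's &lt;code&gt;templatefile()&lt;/code&gt; function simply substitutes the &lt;code&gt;${...}&lt;/code&gt; placeholders with values from its outputs. The effect can be approximated in plain shell (the IP, user, and key path are made-up example values):&lt;/p&gt;

```shell
# Approximate the templatefile() substitution locally (example values only;
# in the real pipeline Terraform fills these in from its outputs)
server_ip="203.0.113.10"
server_user="ubuntu"
ssh_key_path="$HOME/.ssh/todo-app-dev.pem"

rendered="all:
  hosts:
    todo-app-server:
      ansible_host: ${server_ip}
      ansible_user: ${server_user}
      ansible_ssh_private_key_file: ${ssh_key_path}
      ansible_ssh_common_args: '-o StrictHostKeyChecking=no'"

printf '%s\n' "$rendered"
```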

&lt;h3&gt;
  
  
  Running Ansible
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# From the ansible directory&lt;/span&gt;
&lt;span class="nb"&gt;cd &lt;/span&gt;infra/ansible

&lt;span class="c"&gt;# Run the playbook&lt;/span&gt;
ansible-playbook &lt;span class="nt"&gt;-i&lt;/span&gt; inventory/dev.yml playbook.yml

&lt;span class="c"&gt;# With verbose output (for debugging)&lt;/span&gt;
ansible-playbook &lt;span class="nt"&gt;-i&lt;/span&gt; inventory/dev.yml playbook.yml &lt;span class="nt"&gt;-vvv&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What happens:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Ansible connects to your server via SSH&lt;/li&gt;
&lt;li&gt;Runs the dependencies role (installs Docker)&lt;/li&gt;
&lt;li&gt;Runs the deploy role (clones code, starts containers)&lt;/li&gt;
&lt;li&gt;Your application is live!&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Part 6: CI/CD with GitHub Actions
&lt;/h2&gt;

&lt;p&gt;Now we automate everything. Instead of running commands manually, GitHub Actions does it for us.&lt;/p&gt;

&lt;h3&gt;
  
  
  Understanding CI/CD
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;CI (Continuous Integration)&lt;/strong&gt;: Automatically test and build when code changes&lt;br&gt;
&lt;strong&gt;CD (Continuous Deployment)&lt;/strong&gt;: Automatically deploy when tests pass&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why CI/CD?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Consistency&lt;/strong&gt;: Same process every time&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Speed&lt;/strong&gt;: Deploy in minutes, not hours&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Safety&lt;/strong&gt;: Automated tests catch bugs before production&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;History&lt;/strong&gt;: See what was deployed when&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Setting Up GitHub Secrets
&lt;/h3&gt;

&lt;p&gt;Before workflows can run, they need credentials:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to your GitHub repo → Settings → Secrets and variables → Actions&lt;/li&gt;
&lt;li&gt;Add these secrets:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;AWS_ACCESS_KEY_ID&lt;/code&gt; - From your IAM user&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;AWS_SECRET_ACCESS_KEY&lt;/code&gt; - From your IAM user&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;TERRAFORM_STATE_BUCKET&lt;/code&gt; - Your S3 bucket name&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;TERRAFORM_STATE_LOCK_TABLE&lt;/code&gt; - Your DynamoDB table name&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;TERRAFORM_KEY_PAIR_NAME&lt;/code&gt; - Your EC2 key pair name&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;SSH_PRIVATE_KEY&lt;/code&gt; - Contents of your &lt;code&gt;.pem&lt;/code&gt; file&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;EMAIL_TO&lt;/code&gt; - Where to send drift alerts&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;EMAIL_FROM&lt;/code&gt; - Your verified SES email&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  Infrastructure Workflow
&lt;/h3&gt;

&lt;p&gt;This workflow runs when infrastructure code changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Infrastructure Deployment&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;infra/terraform/**'&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;infra/ansible/**'&lt;/span&gt;
  &lt;span class="na"&gt;workflow_dispatch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Environment&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;(dev,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;stg,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;prod)'&lt;/span&gt;
        &lt;span class="na"&gt;required&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;choice&lt;/span&gt;
        &lt;span class="na"&gt;options&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;dev&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;stg&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;prod&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;terraform-plan&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform Plan &amp;amp; Drift Detection&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Configure AWS&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-actions/configure-aws-credentials@v4&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;aws-access-key-id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_ACCESS_KEY_ID }}&lt;/span&gt;
          &lt;span class="na"&gt;aws-secret-access-key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_SECRET_ACCESS_KEY }}&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Setup Terraform&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hashicorp/setup-terraform@v3&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform Init&lt;/span&gt;
        &lt;span class="na"&gt;working-directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;infra/terraform&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;terraform init -backend-config=...&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform Plan&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;terraform plan -out=tfplan&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Check for Drift&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;drift-check&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;# Detect if this is drift (infrastructure changed outside Terraform)&lt;/span&gt;
          &lt;span class="s"&gt;# vs expected changes (Terraform code changed)&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Send Drift Email&lt;/span&gt;
        &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;steps.drift-check.outputs.change_type == 'drift'&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./infra/ci-cd/scripts/email-notification.sh "$(cat drift_summary.txt)"&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Manual Approval&lt;/span&gt;
        &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;steps.drift-check.outputs.change_type == 'drift'&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;trstringer/manual-approval@v1&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform Apply&lt;/span&gt;
        &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;steps.manual-approval.outcome == 'success'&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;terraform apply -auto-approve tfplan&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Drift Detection in CI/CD (Quick Reference)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: For a detailed explanation of drift detection, email setup, and GitHub approval, see the "Understanding Drift Detection" section earlier in this guide.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quick summary:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Drift = Infrastructure changed outside Terraform&lt;/li&gt;
&lt;li&gt;Detected automatically in CI/CD&lt;/li&gt;
&lt;li&gt;Email sent + GitHub issue created&lt;/li&gt;
&lt;li&gt;Manual approval required before applying&lt;/li&gt;
&lt;/ul&gt;
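
&lt;p&gt;The classification the (abbreviated) drift-check step performs can be sketched as a small function. Its two inputs, whether the plan found changes and whether Terraform code changed in the push, reflect an assumption about how such a step would be wired up:&lt;/p&gt;

```shell
# Sketch of drift classification: plan changes with no corresponding code
# change means someone modified the infrastructure outside Terraform
classify_change() {
  local plan_has_changes="$1" code_changed="$2"
  if [ "$plan_has_changes" = true ] && [ "$code_changed" = false ]; then
    echo drift      # alert + manual approval path
  elif [ "$plan_has_changes" = true ]; then
    echo expected   # normal deploy path
  else
    echo none       # nothing to apply
  fi
}

classify_change true false   # prints "drift"
classify_change true true    # prints "expected"
```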

&lt;h3&gt;
  
  
  Application Deployment Workflow
&lt;/h3&gt;

&lt;p&gt;Separate workflow for application code changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Application Deployment&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;frontend/**'&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;auth-api/**'&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;todos-api/**'&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;docker-compose.yml'&lt;/span&gt;
  &lt;span class="na"&gt;workflow_dispatch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Environment'&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;choice&lt;/span&gt;
        &lt;span class="na"&gt;options&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;dev&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;stg&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;prod&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Get Server IP&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;# Find server by tag&lt;/span&gt;
          &lt;span class="s"&gt;INSTANCE_ID=$(aws ec2 describe-instances ...)&lt;/span&gt;
          &lt;span class="s"&gt;SERVER_IP=$(aws ec2 describe-instances ...)&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy with Ansible&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;ansible-playbook -i inventory/${ENV}.yml playbook.yml --tags deploy&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why separate workflows?&lt;/strong&gt; Infrastructure changes are rare and need careful review. Application changes are frequent and should deploy quickly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Infrastructure Destruction Workflow
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;⚠️ CRITICAL&lt;/strong&gt;: This workflow &lt;strong&gt;DESTROYS EVERYTHING&lt;/strong&gt;. Use with extreme caution!&lt;/p&gt;

&lt;p&gt;The destroy workflow is separate from the deployment workflow for safety. It has multiple confirmation steps to prevent accidental destruction.&lt;/p&gt;

&lt;h4&gt;
  
  
  How the Destroy Workflow Works
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Manual Trigger Only&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Only runs when you manually trigger it (no automatic triggers)&lt;/li&gt;
&lt;li&gt;Requires you to select the environment&lt;/li&gt;
&lt;li&gt;Requires you to type "DESTROY" to confirm&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Validation&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Checks that you typed "DESTROY" correctly (case-sensitive)&lt;/li&gt;
&lt;li&gt;Prevents typos from accidentally destroying infrastructure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 3: State File Handling&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tries to download the most recent state file from workflow artifacts&lt;/li&gt;
&lt;li&gt;Falls back to the S3 remote backend if no artifact exists&lt;/li&gt;
&lt;li&gt;Imports existing resources if the state is missing entirely&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Destroy Plan&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Shows you exactly what will be destroyed&lt;/li&gt;
&lt;li&gt;Review this carefully before proceeding&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Destruction&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deletes all resources in the correct order&lt;/li&gt;
&lt;li&gt;Handles dependencies (e.g., detaches volumes before deleting)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 6: Verification&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Checks that everything was destroyed&lt;/li&gt;
&lt;li&gt;Cleans up orphaned resources&lt;/li&gt;
&lt;li&gt;Provides a summary&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  The Complete Destroy Workflow
&lt;/h4&gt;

&lt;p&gt;Here's what the actual workflow looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Infrastructure Destruction&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;workflow_dispatch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  &lt;span class="c1"&gt;# Manual trigger only - safe!&lt;/span&gt;
    &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Environment&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;to&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;destroy&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;(dev,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;stg,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;prod)'&lt;/span&gt;
        &lt;span class="na"&gt;required&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;choice&lt;/span&gt;
        &lt;span class="na"&gt;options&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;dev&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;stg&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;prod&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
        &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;dev'&lt;/span&gt;
      &lt;span class="na"&gt;confirm_destroy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Type&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;"DESTROY"&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;to&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;confirm&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;(case-sensitive)'&lt;/span&gt;
        &lt;span class="na"&gt;required&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;validate-destroy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Validate Destruction Request&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Validate confirmation&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;if [ "${{ github.event.inputs.confirm_destroy }}" != "DESTROY" ]; then&lt;/span&gt;
            &lt;span class="s"&gt;echo "❌ Invalid confirmation. You must type 'DESTROY' to proceed."&lt;/span&gt;
            &lt;span class="s"&gt;exit 1&lt;/span&gt;
          &lt;span class="s"&gt;fi&lt;/span&gt;
          &lt;span class="s"&gt;echo "✅ Destruction confirmed. Proceeding..."&lt;/span&gt;

  &lt;span class="na"&gt;terraform-destroy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Destroy Infrastructure&lt;/span&gt;
    &lt;span class="na"&gt;needs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;validate-destroy&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout code&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Configure AWS Credentials&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-actions/configure-aws-credentials@v4&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;aws-access-key-id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_ACCESS_KEY_ID }}&lt;/span&gt;
          &lt;span class="na"&gt;aws-secret-access-key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_SECRET_ACCESS_KEY }}&lt;/span&gt;
          &lt;span class="na"&gt;aws-region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_REGION }}&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Setup Terraform&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hashicorp/setup-terraform@v3&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform Init (with S3 backend)&lt;/span&gt;
        &lt;span class="na"&gt;working-directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;infra/terraform&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;terraform init \&lt;/span&gt;
            &lt;span class="s"&gt;-backend-config="bucket=${{ secrets.TERRAFORM_STATE_BUCKET }}" \&lt;/span&gt;
            &lt;span class="s"&gt;-backend-config="key=terraform-state/${{ github.event.inputs.environment }}/terraform.tfstate" \&lt;/span&gt;
            &lt;span class="s"&gt;-backend-config="region=${{ secrets.AWS_REGION }}" \&lt;/span&gt;
            &lt;span class="s"&gt;-backend-config="dynamodb_table=${{ secrets.TERRAFORM_STATE_LOCK_TABLE }}" \&lt;/span&gt;
            &lt;span class="s"&gt;-backend-config="encrypt=true"&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform Plan Destroy&lt;/span&gt;
        &lt;span class="na"&gt;working-directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;infra/terraform&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;terraform plan -destroy \&lt;/span&gt;
            &lt;span class="s"&gt;-var-file=terraform.${{ github.event.inputs.environment }}.tfvars \&lt;/span&gt;
            &lt;span class="s"&gt;-var="key_pair_name=${{ secrets.TERRAFORM_KEY_PAIR_NAME }}" \&lt;/span&gt;
            &lt;span class="s"&gt;-out=destroy.tfplan&lt;/span&gt;
          &lt;span class="s"&gt;echo ""&lt;/span&gt;
          &lt;span class="s"&gt;echo "⚠️ DESTRUCTION PLAN SUMMARY:"&lt;/span&gt;
          &lt;span class="s"&gt;terraform show -no-color destroy.tfplan | head -100&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform Destroy&lt;/span&gt;
        &lt;span class="na"&gt;working-directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;infra/terraform&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;echo "🔥 Starting infrastructure destruction..."&lt;/span&gt;
          &lt;span class="s"&gt;terraform destroy -auto-approve \&lt;/span&gt;
            &lt;span class="s"&gt;-var-file=terraform.${{ github.event.inputs.environment }}.tfvars \&lt;/span&gt;
            &lt;span class="s"&gt;-var="key_pair_name=${{ secrets.TERRAFORM_KEY_PAIR_NAME }}"&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Verify Destruction&lt;/span&gt;
        &lt;span class="na"&gt;working-directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;infra/terraform&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;echo "🔍 Verifying all resources are destroyed..."&lt;/span&gt;
          &lt;span class="s"&gt;# Check for orphaned resources and clean them up&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  How to Use the Destroy Workflow
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Go to GitHub Actions&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open your repository on GitHub&lt;/li&gt;
&lt;li&gt;Click the &lt;strong&gt;"Actions"&lt;/strong&gt; tab&lt;/li&gt;
&lt;li&gt;Find &lt;strong&gt;"Infrastructure Destruction"&lt;/strong&gt; in the workflow list&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Run the Workflow&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click &lt;strong&gt;"Run workflow"&lt;/strong&gt; button (top right)&lt;/li&gt;
&lt;li&gt;Select the &lt;strong&gt;environment&lt;/strong&gt; you want to destroy:

&lt;ul&gt;
&lt;li&gt;⚠️ &lt;strong&gt;Be very careful&lt;/strong&gt; - make sure you select the right one!&lt;/li&gt;
&lt;li&gt;Dev is usually safe to destroy&lt;/li&gt;
&lt;li&gt;Staging should be destroyed carefully&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;NEVER destroy production unless absolutely necessary!&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;In the &lt;strong&gt;"Type DESTROY to confirm"&lt;/strong&gt; field, type exactly: &lt;code&gt;DESTROY&lt;/code&gt;

&lt;ul&gt;
&lt;li&gt;Must be all caps&lt;/li&gt;
&lt;li&gt;Must be exactly "DESTROY" (no extra spaces)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;"Run workflow"&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Watch It Run&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The workflow will start with a &lt;strong&gt;validation job&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Checks that you typed "DESTROY" correctly&lt;/li&gt;
&lt;li&gt;If wrong, workflow fails immediately (safe!)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Then the &lt;strong&gt;terraform-destroy job&lt;/strong&gt; runs:

&lt;ul&gt;
&lt;li&gt;Initializes Terraform with the correct backend&lt;/li&gt;
&lt;li&gt;Creates a destroy plan (shows what will be deleted)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Review the plan carefully!&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Destroys all resources&lt;/li&gt;
&lt;li&gt;Verifies everything is gone&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
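Under the hood, that validation job is just a strict string comparison; a minimal sketch (the function name and error wording are illustrative, not the workflow's exact code):

```shell
# Fail fast unless the operator typed exactly "DESTROY" (case-sensitive, no spaces).
validate_confirmation() {
  local input="$1"
  if [ "$input" != "DESTROY" ]; then
    echo "ERROR: confirmation was '${input}', expected 'DESTROY'. Aborting." >&2
    return 1
  fi
  echo "Confirmation accepted."
}
```

In the workflow itself, the input would come from something like `${{ github.event.inputs.confirm_destroy }}` (the input name is a placeholder for whatever your workflow calls it).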

&lt;p&gt;&lt;strong&gt;Step 4: Review the Results&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check the workflow logs&lt;/li&gt;
&lt;li&gt;Verify in AWS Console that resources are gone&lt;/li&gt;
&lt;li&gt;Check that compute costs drop to near zero (the state bucket and lock table remain, at negligible cost)&lt;/li&gt;
&lt;/ul&gt;
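If you prefer to verify from a terminal, something like the sketch below works, assuming your instances carry an `Environment` tag (adjust the filter to your own tagging scheme):

```shell
# Confirm no tagged EC2 instances survived the destroy (in any non-terminated state).
verify_destroyed() {
  local env="$1" count
  count=$(aws ec2 describe-instances \
    --filters "Name=tag:Environment,Values=${env}" \
              "Name=instance-state-name,Values=running,pending,stopping,stopped" \
    --query 'length(Reservations[].Instances[])' --output text)
  if [ "$count" = "0" ]; then
    echo "OK: no ${env} instances remain"
  else
    echo "WARNING: ${count} ${env} instance(s) still exist" >&2
    return 1
  fi
}
```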

&lt;h4&gt;
  
  
  Safety Features
&lt;/h4&gt;

&lt;p&gt;The destroy workflow has multiple safety features:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Manual trigger only&lt;/strong&gt; - Can't be triggered automatically&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Confirmation required&lt;/strong&gt; - Must type "DESTROY" exactly&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Environment selection&lt;/strong&gt; - Prevents destroying wrong environment&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Plan before destroy&lt;/strong&gt; - Shows you what will be deleted&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Validation job&lt;/strong&gt; - Double-checks confirmation before proceeding&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;State file handling&lt;/strong&gt; - Works with remote state (S3)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Verification&lt;/strong&gt; - Checks that everything was destroyed&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  What Gets Destroyed
&lt;/h4&gt;

&lt;p&gt;When you run the destroy workflow, it deletes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ &lt;strong&gt;EC2 instance&lt;/strong&gt; - Your server and everything on it&lt;/li&gt;
&lt;li&gt;✅ &lt;strong&gt;Security groups&lt;/strong&gt; - Firewall rules&lt;/li&gt;
&lt;li&gt;✅ &lt;strong&gt;EBS volumes&lt;/strong&gt; - The root volume and any extra attached storage&lt;/li&gt;
&lt;li&gt;✅ &lt;strong&gt;All containers&lt;/strong&gt; - Docker containers running on the instance&lt;/li&gt;
&lt;li&gt;✅ &lt;strong&gt;All data&lt;/strong&gt; - Everything on the server is permanently lost&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What stays:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ &lt;strong&gt;S3 bucket&lt;/strong&gt; - Your Terraform state bucket (not deleted)&lt;/li&gt;
&lt;li&gt;✅ &lt;strong&gt;DynamoDB table&lt;/strong&gt; - State locking table (not deleted)&lt;/li&gt;
&lt;li&gt;✅ &lt;strong&gt;GitHub repository&lt;/strong&gt; - Your code (not deleted)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  When to Use the Destroy Workflow
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Good reasons to destroy:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ You're done with the project and want to stop costs&lt;/li&gt;
&lt;li&gt;✅ You want to start completely fresh&lt;/li&gt;
&lt;li&gt;✅ You're testing and need to clean up&lt;/li&gt;
&lt;li&gt;✅ You're moving to a different AWS account&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Bad reasons to destroy:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;❌ Just to restart services (use Ansible instead)&lt;/li&gt;
&lt;li&gt;❌ To fix a small issue (fix the issue, don't destroy)&lt;/li&gt;
&lt;li&gt;❌ Because something isn't working (debug first)&lt;/li&gt;
&lt;li&gt;❌ In production without a backup plan&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  After Destruction
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;What happens:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;All infrastructure is deleted&lt;/li&gt;
&lt;li&gt;State file in S3 is updated (shows no resources)&lt;/li&gt;
&lt;li&gt;You stop paying for the compute resources&lt;/li&gt;
&lt;li&gt;All data is permanently lost&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;To recreate:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Run the &lt;strong&gt;Infrastructure Deployment&lt;/strong&gt; workflow again&lt;/li&gt;
&lt;li&gt;It will create everything from scratch&lt;/li&gt;
&lt;li&gt;You'll need to redeploy your application&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Important notes:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;State file history is preserved in S3&lt;/li&gt;
&lt;li&gt;You can see what was destroyed in the workflow logs&lt;/li&gt;
&lt;li&gt;GitHub Actions artifacts are kept for 90 days&lt;/li&gt;
&lt;li&gt;You can manually delete artifacts if needed&lt;/li&gt;
&lt;/ul&gt;
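Since the state bucket has versioning enabled, the preserved history can be inspected with the real `aws s3api list-object-versions` command (the bucket name below is a placeholder):

```shell
# List every saved version of one environment's state file.
list_state_versions() {
  local env="$1"
  aws s3api list-object-versions \
    --bucket "my-terraform-state-bucket" \
    --prefix "terraform-state/${env}/terraform.tfstate" \
    --query 'Versions[].{id:VersionId,when:LastModified}' \
    --output table
}
```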

&lt;h4&gt;
  
  
  Destroy Workflow vs Manual Destroy
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Use the workflow when:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ You want safety features (confirmation, validation)&lt;/li&gt;
&lt;li&gt;✅ You want to destroy from anywhere (don't need local setup)&lt;/li&gt;
&lt;li&gt;✅ You want an audit trail (GitHub Actions logs)&lt;/li&gt;
&lt;li&gt;✅ You're working with a team (everyone can see what happened)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Use manual destroy when:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ You need to destroy specific resources only&lt;/li&gt;
&lt;li&gt;✅ You're debugging and need more control&lt;/li&gt;
&lt;li&gt;✅ You don't have GitHub Actions set up&lt;/li&gt;
&lt;/ul&gt;
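For the "specific resources only" case, Terraform's `-target` flag scopes a manual destroy; a sketch, reusing the `aws_instance.todo_app` resource name from the null_resource example in this post:

```shell
# Destroy just the EC2 instance, leaving the security group etc. in place.
destroy_instance_only() {
  local env="$1"
  terraform destroy \
    -target="aws_instance.todo_app" \
    -var-file="terraform.${env}.tfvars"
}
```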

&lt;h4&gt;
  
  
  Example: Destroying Development Environment
&lt;/h4&gt;

&lt;p&gt;Let's walk through destroying a dev environment:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Go to Actions&lt;/strong&gt; → &lt;strong&gt;Infrastructure Destruction&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Click "Run workflow"&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Select environment&lt;/strong&gt;: &lt;code&gt;dev&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Type confirmation&lt;/strong&gt;: &lt;code&gt;DESTROY&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Click "Run workflow"&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;What you'll see:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;✅ validate-destroy: Validation passed
✅ terraform-destroy: 
   - Terraform Init: Success
   - Terraform Plan Destroy: Shows what will be deleted
   - Terraform Destroy: Deleting resources...
   - Verify Destruction: All resources destroyed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;After completion:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check AWS Console → EC2 → No instances&lt;/li&gt;
&lt;li&gt;Check AWS Console → Security Groups → No project security groups (the default group remains)&lt;/li&gt;
&lt;li&gt;Check AWS Billing → Compute costs should drop to near zero&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Troubleshooting Destroy Workflow
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Issue: "Invalid confirmation"&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Problem&lt;/strong&gt;: You didn't type "DESTROY" exactly&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Solution&lt;/strong&gt;: Type exactly &lt;code&gt;DESTROY&lt;/code&gt; (all caps, no spaces)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Issue: "State file not found"&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Problem&lt;/strong&gt;: State file is missing or in wrong location&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Solution&lt;/strong&gt;: The workflow will try to import the resources automatically&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Issue: "Resources still exist after destroy"&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Problem&lt;/strong&gt;: Some resources might be stuck&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Solution&lt;/strong&gt;: Check the verification step - it will try to clean up orphaned resources&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Issue: "Can't destroy because of dependencies"&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Problem&lt;/strong&gt;: Resources have dependencies (e.g., volume attached)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Solution&lt;/strong&gt;: The workflow handles this automatically (it detaches volumes first)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Best Practices for Destruction
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Always destroy dev first&lt;/strong&gt; - Test the workflow in dev before using in staging/prod&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Review the plan&lt;/strong&gt; - Check what will be destroyed before confirming&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backup important data&lt;/strong&gt; - If you need any data, back it up first&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Destroy during off-hours&lt;/strong&gt; - If others are using the environment&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Document why&lt;/strong&gt; - Add a comment in the workflow run explaining why you destroyed it&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Verify after&lt;/strong&gt; - Check AWS Console to confirm everything is gone&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Clean up artifacts&lt;/strong&gt; - Delete GitHub Actions artifacts if you want&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Part 7: Single Command Deployment
&lt;/h2&gt;

&lt;p&gt;The ultimate goal: one command that does everything.&lt;/p&gt;

&lt;h3&gt;
  
  
  How It Works
&lt;/h3&gt;

&lt;p&gt;When you run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform apply &lt;span class="nt"&gt;-var-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;terraform.dev.tfvars &lt;span class="nt"&gt;-auto-approve&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's what happens behind the scenes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Terraform provisions infrastructure&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creates security group&lt;/li&gt;
&lt;li&gt;Launches EC2 instance&lt;/li&gt;
&lt;li&gt;Waits for instance to be ready&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Terraform generates Ansible inventory&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creates &lt;code&gt;ansible/inventory/dev.yml&lt;/code&gt; with server IP&lt;/li&gt;
&lt;li&gt;Ready for Ansible to use&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Terraform triggers Ansible&lt;/strong&gt; (via null_resource)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Waits for SSH to be available&lt;/li&gt;
&lt;li&gt;Runs Ansible playbook&lt;/li&gt;
&lt;li&gt;Installs Docker&lt;/li&gt;
&lt;li&gt;Clones repository&lt;/li&gt;
&lt;li&gt;Starts containers&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Traefik gets SSL certificate&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Contacts Let's Encrypt&lt;/li&gt;
&lt;li&gt;Verifies domain ownership&lt;/li&gt;
&lt;li&gt;Gets certificate&lt;/li&gt;
&lt;li&gt;Enables HTTPS&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Application is live!&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Frontend accessible at &lt;code&gt;https://yourdomain.com&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;APIs at &lt;code&gt;https://yourdomain.com/api/*&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
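The generated inventory in step 2 is just a small YAML file; the sketch below writes an equivalent one by hand (the IP is a documentation placeholder, and your Terraform-generated layout may differ):

```shell
# Write a minimal Ansible inventory like the one Terraform generates.
SERVER_IP="203.0.113.10"   # placeholder; Terraform fills in the real public IP
mkdir -p ansible/inventory
cat > ansible/inventory/dev.yml <<EOF
all:
  hosts:
    todo_app:
      ansible_host: ${SERVER_IP}
      ansible_user: ubuntu
EOF
```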

&lt;h3&gt;
  
  
  The Magic: null_resource
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"null_resource"&lt;/span&gt; &lt;span class="s2"&gt;"ansible_provision"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;triggers&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;instance_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_instance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;todo_app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;provisioner&lt;/span&gt; &lt;span class="s2"&gt;"local-exec"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;command&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;-&lt;/span&gt;&lt;span class="no"&gt;EOT&lt;/span&gt;&lt;span class="sh"&gt;
      # Wait for SSH
      until ssh ... 'echo "ready"'; do sleep 10; done

      # Run Ansible
      cd ../ansible
      ansible-playbook -i inventory/${var.environment}.yml playbook.yml
&lt;/span&gt;&lt;span class="no"&gt;    EOT
&lt;/span&gt;  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What is null_resource?&lt;/strong&gt; It's a Terraform resource that doesn't create anything in AWS. It just runs a command. Perfect for triggering Ansible!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why the wait?&lt;/strong&gt; EC2 instances take 30-60 seconds to boot. We wait for SSH to be ready before running Ansible.&lt;/p&gt;
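The `ssh ...` ellipsis in the snippet above hides the wait loop; expanded, it looks roughly like this (the key path, retry count, and SSH options are assumptions):

```shell
# Poll SSH until the new instance answers, giving up after ~5 minutes.
wait_for_ssh() {
  local host="$1" tries=0
  until ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no \
        -i ~/.ssh/your-key.pem "ubuntu@${host}" 'echo ready' 2>/dev/null; do
    tries=$((tries + 1))
    [ "$tries" -ge 30 ] && return 1   # 30 attempts x 10s sleep
    sleep 10
  done
}
```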

&lt;h3&gt;
  
  
  Testing the Single Command
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Make sure you're in the terraform directory&lt;/span&gt;
&lt;span class="nb"&gt;cd &lt;/span&gt;infra/terraform

&lt;span class="c"&gt;# Initialize (one-time setup)&lt;/span&gt;
terraform init &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;...

&lt;span class="c"&gt;# The magic command&lt;/span&gt;
terraform apply &lt;span class="nt"&gt;-var-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;terraform.dev.tfvars &lt;span class="nt"&gt;-auto-approve&lt;/span&gt;

&lt;span class="c"&gt;# Watch it work!&lt;/span&gt;
&lt;span class="c"&gt;# You'll see:&lt;/span&gt;
&lt;span class="c"&gt;# 1. Security group created&lt;/span&gt;
&lt;span class="c"&gt;# 2. EC2 instance launching&lt;/span&gt;
&lt;span class="c"&gt;# 3. Waiting for SSH...&lt;/span&gt;
&lt;span class="c"&gt;# 4. Running Ansible...&lt;/span&gt;
&lt;span class="c"&gt;# 5. Application deployed!&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Pro tip&lt;/strong&gt;: The first run takes 5-10 minutes. Subsequent runs are faster (only changes what's needed).&lt;/p&gt;




&lt;h2&gt;
  
  
  Part 8: Multi-Environment Setup
&lt;/h2&gt;

&lt;p&gt;Real applications need multiple environments. Here's how to set it up properly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Multiple Environments?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Dev&lt;/strong&gt;: Where you experiment (break things safely)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Staging&lt;/strong&gt;: Mirror of production (test before going live)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Production&lt;/strong&gt;: The real thing (users depend on it)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Environment Isolation
&lt;/h3&gt;

&lt;p&gt;Each environment is completely separate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Different EC2 instances&lt;/li&gt;
&lt;li&gt;Different security groups&lt;/li&gt;
&lt;li&gt;Different state files in S3&lt;/li&gt;
&lt;li&gt;Different domains&lt;/li&gt;
&lt;li&gt;Different secrets&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why isolation matters:&lt;/strong&gt; If dev gets hacked, staging and prod are still safe. If you break dev, staging and prod keep running. This is why we have three separate environments!&lt;/p&gt;

&lt;h3&gt;
  
  
  Setting Up Per-Environment Configuration
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Terraform:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;terraform.dev.tfvars&lt;/code&gt; - Dev configuration&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;terraform.stg.tfvars&lt;/code&gt; - Staging configuration
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;terraform.prod.tfvars&lt;/code&gt; - Production configuration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Ansible:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;group_vars/dev/vars.yml&lt;/code&gt; - Dev variables&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;group_vars/stg/vars.yml&lt;/code&gt; - Staging variables&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;group_vars/prod/vars.yml&lt;/code&gt; - Production variables&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;State files:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;terraform-state/dev/terraform.tfstate&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;terraform-state/stg/terraform.tfstate&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;terraform-state/prod/terraform.tfstate&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
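Switching between these state files means re-initialising the backend with the matching key; a helper sketch (the environment variable names mirror the GitHub secrets used earlier):

```shell
# Re-point the Terraform backend at one environment's state file.
# TERRAFORM_STATE_BUCKET and AWS_REGION mirror the GitHub secrets shown earlier.
init_env() {
  local env="$1"
  terraform init -reconfigure \
    -backend-config="bucket=${TERRAFORM_STATE_BUCKET}" \
    -backend-config="key=terraform-state/${env}/terraform.tfstate" \
    -backend-config="region=${AWS_REGION}" \
    -backend-config="encrypt=true"
}
```

`-reconfigure` tells Terraform to discard the previously configured backend rather than try to migrate state between keys.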

&lt;h3&gt;
  
  
  Deploying to Different Environments
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Via GitHub Actions:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to Actions → Infrastructure Deployment&lt;/li&gt;
&lt;li&gt;Click "Run workflow"&lt;/li&gt;
&lt;li&gt;Select environment (dev/stg/prod)&lt;/li&gt;
&lt;li&gt;Click "Run workflow"&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Via command line:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Dev&lt;/span&gt;
terraform apply &lt;span class="nt"&gt;-var-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;terraform.dev.tfvars &lt;span class="nt"&gt;-auto-approve&lt;/span&gt;

&lt;span class="c"&gt;# Staging&lt;/span&gt;
terraform apply &lt;span class="nt"&gt;-var-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;terraform.stg.tfvars &lt;span class="nt"&gt;-auto-approve&lt;/span&gt;

&lt;span class="c"&gt;# Production&lt;/span&gt;
terraform apply &lt;span class="nt"&gt;-var-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;terraform.prod.tfvars &lt;span class="nt"&gt;-auto-approve&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt;: Always test in dev first! Never deploy to prod without testing.&lt;/p&gt;




&lt;h2&gt;
  
  
  Part 9: Common Issues and Solutions
&lt;/h2&gt;

&lt;p&gt;Every project has issues. Here are the ones you'll likely encounter and how to fix them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Issue 1: Let's Encrypt Certificate Errors
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Symptom:&lt;/strong&gt; &lt;code&gt;Timeout during connect (likely firewall problem)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Causes:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;DNS not pointing to your server&lt;/li&gt;
&lt;li&gt;Security group blocking ports 80/443&lt;/li&gt;
&lt;li&gt;Firewall on the server blocking ports&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Solutions:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# 1. Verify DNS&lt;/span&gt;
dig yourdomain.com
&lt;span class="c"&gt;# Should show your server IP&lt;/span&gt;

&lt;span class="c"&gt;# 2. Check security group&lt;/span&gt;
&lt;span class="c"&gt;# AWS Console → EC2 → Security Groups&lt;/span&gt;
&lt;span class="c"&gt;# Verify ports 80 and 443 allow 0.0.0.0/0&lt;/span&gt;

&lt;span class="c"&gt;# 3. Check server firewall&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ufw status
&lt;span class="nb"&gt;sudo &lt;/span&gt;ufw allow 80/tcp
&lt;span class="nb"&gt;sudo &lt;/span&gt;ufw allow 443/tcp

&lt;span class="c"&gt;# 4. Switch to HTTP challenge (more reliable)&lt;/span&gt;
&lt;span class="c"&gt;# In docker-compose.yml:&lt;/span&gt;
- &lt;span class="s2"&gt;"--certificatesresolvers.letsencrypt.acme.httpchallenge=true"&lt;/span&gt;
- &lt;span class="s2"&gt;"--certificatesresolvers.letsencrypt.acme.httpchallenge.entrypoint=web"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Issue 2: Terraform State Lock
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Symptom:&lt;/strong&gt; &lt;code&gt;Error acquiring the state lock&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cause:&lt;/strong&gt; Another Terraform run is in progress, or a previous run crashed and left a stale lock&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Check what's locking&lt;/span&gt;
aws dynamodb scan &lt;span class="nt"&gt;--table-name&lt;/span&gt; terraform-state-lock

&lt;span class="c"&gt;# If you're sure no one else is running Terraform:&lt;/span&gt;
terraform force-unlock &amp;lt;LOCK_ID&amp;gt;

&lt;span class="c"&gt;# Be careful! Only do this if you're certain.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Issue 3: Ansible Connection Failed
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Symptom:&lt;/strong&gt; &lt;code&gt;SSH connection failed&lt;/code&gt; or &lt;code&gt;Permission denied&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Causes:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Security group doesn't allow SSH from your IP&lt;/li&gt;
&lt;li&gt;Wrong key pair&lt;/li&gt;
&lt;li&gt;Server not ready yet&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Solutions:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# 1. Test SSH manually&lt;/span&gt;
ssh &lt;span class="nt"&gt;-i&lt;/span&gt; ~/.ssh/your-key.pem ubuntu@&amp;lt;server-ip&amp;gt;

&lt;span class="c"&gt;# 2. Check security group&lt;/span&gt;
&lt;span class="c"&gt;# Make sure it allows port 22 from your IP&lt;/span&gt;

&lt;span class="c"&gt;# 3. Verify key pair name matches&lt;/span&gt;
&lt;span class="c"&gt;# AWS Console → EC2 → Key Pairs&lt;/span&gt;
&lt;span class="c"&gt;# Should match what's in terraform.tfvars&lt;/span&gt;

&lt;span class="c"&gt;# 4. Wait longer (server might still be booting)&lt;/span&gt;
&lt;span class="c"&gt;# EC2 instances take 1-2 minutes to be ready&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Issue 4: Drift Detection Not Working
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Symptom:&lt;/strong&gt; You made changes manually, but no drift was detected&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Did the Terraform files change? (That's an expected change, not drift)&lt;/li&gt;
&lt;li&gt;Is the state in S3? (Local state isn't visible to the workflow)&lt;/li&gt;
&lt;li&gt;Check drift detection logic in workflow&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Test drift:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Manually add a tag to your EC2 instance in AWS Console&lt;/li&gt;
&lt;li&gt;Run the infrastructure workflow&lt;/li&gt;
&lt;li&gt;The workflow should detect the drift and send an email alert&lt;/li&gt;
&lt;/ol&gt;
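You can also test drift locally with Terraform's `-detailed-exitcode` flag, which makes `plan` exit 0 for no changes, 1 for errors, and 2 when changes (including drift) are pending:

```shell
# Report drift using terraform plan's detailed exit codes.
check_drift() {
  local env="$1" rc=0
  terraform plan -detailed-exitcode -input=false \
    -var-file="terraform.${env}.tfvars" > /dev/null || rc=$?
  case "$rc" in
    0) echo "no changes" ;;
    2) echo "drift or pending changes detected" ;;
    *) echo "plan failed (exit ${rc})" >&2; return 1 ;;
  esac
}
```

Note that exit code 2 also fires for changes you made to the `.tf` files yourself, so a drift-detection workflow still has to separate "expected" changes from true drift.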

&lt;h3&gt;
  
  
  Issue 5: Containers Keep Restarting
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Symptom:&lt;/strong&gt; &lt;code&gt;docker ps&lt;/code&gt; shows containers restarting constantly&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Debug:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Check logs&lt;/span&gt;
docker logs &amp;lt;container-name&amp;gt;

&lt;span class="c"&gt;# Check all containers&lt;/span&gt;
docker compose logs

&lt;span class="c"&gt;# Common causes:&lt;/span&gt;
&lt;span class="c"&gt;# - Configuration error in .env&lt;/span&gt;
&lt;span class="c"&gt;# - Port conflict&lt;/span&gt;
&lt;span class="c"&gt;# - Missing environment variables&lt;/span&gt;
&lt;span class="c"&gt;# - Application crash on startup&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Part 10: Best Practices and Security
&lt;/h2&gt;

&lt;p&gt;Now that everything works, let's make it production-ready.&lt;/p&gt;

&lt;h3&gt;
  
  
  Security Best Practices
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Never commit secrets&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use GitHub Secrets&lt;/li&gt;
&lt;li&gt;Use environment variables&lt;/li&gt;
&lt;li&gt;Add &lt;code&gt;.env&lt;/code&gt; to &lt;code&gt;.gitignore&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Restrict SSH access&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In production, set &lt;code&gt;ssh_cidr&lt;/code&gt; to your IP only&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;YOUR_IP/32&lt;/code&gt; format (e.g., &lt;code&gt;1.2.3.4/32&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Use different secrets per environment&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dev JWT secret ≠ Staging JWT secret ≠ Prod JWT secret&lt;/li&gt;
&lt;li&gt;If dev is compromised, staging and prod are still safe&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Enable MFA&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On AWS account&lt;/li&gt;
&lt;li&gt;On GitHub account&lt;/li&gt;
&lt;li&gt;Extra layer of protection&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Regular updates&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Keep Docker images updated&lt;/li&gt;
&lt;li&gt;Keep system packages updated&lt;/li&gt;
&lt;li&gt;Security patches are important!&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
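For point 2, one way to build the `YOUR_IP/32` value is to ask checkip.amazonaws.com (a real AWS endpoint that echoes your public IP) and append the mask:

```shell
# Build a /32 CIDR for ssh_cidr from your current public IP.
my_ssh_cidr() {
  local ip
  ip=$(curl -s https://checkip.amazonaws.com) || return 1
  echo "${ip}/32"
}
```

Paste the result into your prod tfvars file (in whatever shape your `ssh_cidr` variable expects), and remember to update it if your ISP rotates your IP.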

&lt;h3&gt;
  
  
  Infrastructure Best Practices
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Always use remote state&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;S3 + DynamoDB&lt;/li&gt;
&lt;li&gt;Never commit state files&lt;/li&gt;
&lt;li&gt;Enable versioning on S3 bucket&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Separate state per environment&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Different S3 keys&lt;/li&gt;
&lt;li&gt;Complete isolation&lt;/li&gt;
&lt;li&gt;Can't accidentally affect prod from dev&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Use version constraints&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In Terraform: &lt;code&gt;version = "~&amp;gt; 5.0"&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Prevents unexpected breaking changes&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Tag everything&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Makes it easy to find resources&lt;/li&gt;
&lt;li&gt;Helps with cost tracking&lt;/li&gt;
&lt;li&gt;Keeps your account organized as it grows&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Deployment Best Practices
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Test in dev first&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Always deploy to dev → staging → prod (in that order!)&lt;/li&gt;
&lt;li&gt;Catch issues early before they reach production&lt;/li&gt;
&lt;li&gt;Dev is for breaking things&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Review drift alerts&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Don't ignore them!&lt;/li&gt;
&lt;li&gt;Investigate unexpected changes&lt;/li&gt;
&lt;li&gt;Could be security issue&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Use idempotent deployments&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Safe to run multiple times&lt;/li&gt;
&lt;li&gt;Ansible should be idempotent&lt;/li&gt;
&lt;li&gt;Terraform is idempotent by design&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Monitor your infrastructure&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set up CloudWatch alarms&lt;/li&gt;
&lt;li&gt;Monitor costs&lt;/li&gt;
&lt;li&gt;Watch for unusual activity&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Cost Optimization
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Right-size instances&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dev: t3.small (saves money)&lt;/li&gt;
&lt;li&gt;Prod: t3.medium (enough power)&lt;/li&gt;
&lt;li&gt;Don't over-provision&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Stop dev when not in use&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dev doesn't need to run 24/7&lt;/li&gt;
&lt;li&gt;Stop instances when not testing&lt;/li&gt;
&lt;li&gt;Saves ~70% of costs&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Clean up unused resources&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Delete old instances&lt;/li&gt;
&lt;li&gt;Remove unused security groups&lt;/li&gt;
&lt;li&gt;Regular cleanup prevents waste&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
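Point 2 - stopping rather than destroying dev - can be scripted with the real `aws ec2 stop-instances` command; a sketch assuming the same `Environment` tag convention as elsewhere:

```shell
# Stop (not terminate) every running instance tagged Environment=<env>.
stop_env() {
  local env="$1" ids
  ids=$(aws ec2 describe-instances \
    --filters "Name=tag:Environment,Values=${env}" \
              "Name=instance-state-name,Values=running" \
    --query 'Reservations[].Instances[].InstanceId' --output text)
  if [ -n "$ids" ]; then
    aws ec2 stop-instances --instance-ids $ids   # unquoted: word-splitting intended
  else
    echo "no running ${env} instances"
  fi
}
```

A stopped instance keeps its EBS volume (a small storage charge remains) but stops the hourly compute billing; `aws ec2 start-instances` brings it back.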




&lt;h2&gt;
  
  
  Part 11: Going Further
&lt;/h2&gt;

&lt;p&gt;You've built a solid foundation. Here's where to go next.&lt;/p&gt;

&lt;h3&gt;
  
  
  Monitoring and Observability
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Add CloudWatch:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Monitor CPU, memory, disk&lt;/li&gt;
&lt;li&gt;Set up alarms&lt;/li&gt;
&lt;li&gt;Track costs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Add Application Monitoring:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prometheus + Grafana&lt;/li&gt;
&lt;li&gt;ELK stack for logs&lt;/li&gt;
&lt;li&gt;APM tools (New Relic, Datadog)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Scaling
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Horizontal Scaling:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add load balancer&lt;/li&gt;
&lt;li&gt;Multiple instances&lt;/li&gt;
&lt;li&gt;Auto-scaling groups&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Vertical Scaling:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Larger instance types&lt;/li&gt;
&lt;li&gt;More CPU/RAM&lt;/li&gt;
&lt;li&gt;Better performance&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Backup and Disaster Recovery
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Backup Strategy:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Database backups&lt;/li&gt;
&lt;li&gt;State file backups (S3 versioning)&lt;/li&gt;
&lt;li&gt;Configuration backups&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Disaster Recovery:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multi-region deployment&lt;/li&gt;
&lt;li&gt;Automated failover&lt;/li&gt;
&lt;li&gt;Recovery procedures&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Advanced Topics
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Kubernetes&lt;/strong&gt;: Container orchestration at scale&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Terraform Modules&lt;/strong&gt;: Reusable infrastructure code&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ansible Roles&lt;/strong&gt;: Shareable configuration&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitOps&lt;/strong&gt;: Git as source of truth&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure Testing&lt;/strong&gt;: Test your infrastructure code&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Conclusion: What You've Accomplished
&lt;/h2&gt;

&lt;p&gt;Let's take a moment to appreciate what you've built:&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;A microservices application&lt;/strong&gt; running in containers&lt;br&gt;
✅ &lt;strong&gt;Automated infrastructure&lt;/strong&gt; with Terraform&lt;br&gt;
✅ &lt;strong&gt;Automated deployment&lt;/strong&gt; with Ansible&lt;br&gt;
✅ &lt;strong&gt;CI/CD pipelines&lt;/strong&gt; that detect problems&lt;br&gt;
✅ &lt;strong&gt;Multi-environment setup&lt;/strong&gt; (dev/staging/prod)&lt;br&gt;
✅ &lt;strong&gt;Secure HTTPS&lt;/strong&gt; with automatic certificates&lt;br&gt;
✅ &lt;strong&gt;Single-command deployment&lt;/strong&gt; that just works&lt;br&gt;
✅ &lt;strong&gt;Production-ready practices&lt;/strong&gt; and security&lt;/p&gt;

&lt;p&gt;This isn't just a tutorial project - this is &lt;strong&gt;real infrastructure&lt;/strong&gt; that follows industry best practices. You can use this as a foundation for actual production applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Takeaways
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure as Code&lt;/strong&gt; saves time and prevents mistakes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automation&lt;/strong&gt; is your friend - manual processes break&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security&lt;/strong&gt; isn't optional - build it in from the start&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Testing&lt;/strong&gt; in dev/staging prevents production disasters&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Documentation&lt;/strong&gt; (this blog post!) helps you and others&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Next Steps
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Deploy your own project&lt;/strong&gt; using this as a template&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Experiment&lt;/strong&gt; - break things in dev, learn from it&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Share&lt;/strong&gt; - help others learn what you've learned&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Iterate&lt;/strong&gt; - improve based on real-world experience&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Resources
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.terraform.io/docs" rel="noopener noreferrer"&gt;Terraform Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.ansible.com/" rel="noopener noreferrer"&gt;Ansible Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://doc.traefik.io/traefik/" rel="noopener noreferrer"&gt;Traefik Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.github.com/en/actions" rel="noopener noreferrer"&gt;GitHub Actions Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/architecture/well-architected/" rel="noopener noreferrer"&gt;AWS Well-Architected Framework&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;Thank you for reading!&lt;/strong&gt; If this helped you, please share it with others who might benefit. And if you have questions or run into issues, don't hesitate to reach out.&lt;/p&gt;

&lt;p&gt;Happy deploying! 🚀&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This guide was written as part of the HNG Internship Stage 6 DevOps task. The complete implementation is available on &lt;a href="https://github.com/Donkross360/DevOps-Stage-6" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cicd</category>
      <category>terraform</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Building Your Own Virtual Private Cloud (VPC) on Linux: A Complete Beginner's Guide</title>
      <dc:creator>Mart Young</dc:creator>
      <pubDate>Mon, 10 Nov 2025 10:06:31 +0000</pubDate>
      <link>https://dev.to/mart_young_ce778e4c31eb33/building-your-own-virtual-private-cloud-vpc-on-linux-a-complete-beginners-guide-1pnj</link>
      <guid>https://dev.to/mart_young_ce778e4c31eb33/building-your-own-virtual-private-cloud-vpc-on-linux-a-complete-beginners-guide-1pnj</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Have you ever wondered how cloud providers like AWS, Google Cloud, or Azure create isolated virtual networks? How do they ensure that your resources are completely isolated from others while still allowing controlled communication?&lt;/p&gt;

&lt;p&gt;In this comprehensive guide, we'll build our own Virtual Private Cloud (VPC) from scratch on Linux using nothing but native Linux networking tools. By the end of this journey, you'll understand the fundamental building blocks of network virtualization and have a working VPC implementation that you can use for learning, testing, or even production workloads.&lt;/p&gt;

&lt;h3&gt;
  
  
  What You'll Learn
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;How to create isolated virtual networks using Linux network namespaces&lt;/li&gt;
&lt;li&gt;How to connect networks using Linux bridges and virtual Ethernet pairs&lt;/li&gt;
&lt;li&gt;How to implement routing between subnets&lt;/li&gt;
&lt;li&gt;How to enable internet access using NAT (Network Address Translation)&lt;/li&gt;
&lt;li&gt;How to enforce network isolation between different VPCs&lt;/li&gt;
&lt;li&gt;How to connect VPCs using peering&lt;/li&gt;
&lt;li&gt;How to implement firewall rules (Security Groups)&lt;/li&gt;
&lt;li&gt;How to automate VPC management using the &lt;code&gt;vpcctl&lt;/code&gt; CLI tool&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Our Approach: Learn by Doing, Then Automate
&lt;/h3&gt;

&lt;p&gt;This guide follows a two-part approach:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Manual Implementation&lt;/strong&gt; (Parts 1-10): We'll build VPCs manually using Linux commands. This helps you understand exactly what's happening under the hood.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automation with vpcctl&lt;/strong&gt; (Part 11): Once you understand the fundamentals, we'll introduce &lt;code&gt;vpcctl&lt;/code&gt;, a CLI tool that automates all the manual steps. This is what you'll use in practice!&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  What is vpcctl?
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;vpcctl&lt;/code&gt; is a command-line tool written in Python that automates VPC creation and management. Instead of running dozens of commands manually, you can create a VPC with a single command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Manual way (what we'll learn first)&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns add my-ns
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link &lt;/span&gt;add veth0 &lt;span class="nb"&gt;type &lt;/span&gt;veth peer name veth1
&lt;span class="c"&gt;# ... many more commands ...&lt;/span&gt;

&lt;span class="c"&gt;# Automated way (using vpcctl)&lt;/span&gt;
&lt;span class="nb"&gt;sudo&lt;/span&gt; ./vpcctl create &lt;span class="nt"&gt;--name&lt;/span&gt; myvpc &lt;span class="nt"&gt;--cidr&lt;/span&gt; 10.0.0.0/16
&lt;span class="nb"&gt;sudo&lt;/span&gt; ./vpcctl add-subnet &lt;span class="nt"&gt;--vpc&lt;/span&gt; myvpc &lt;span class="nt"&gt;--name&lt;/span&gt; public &lt;span class="nt"&gt;--cidr&lt;/span&gt; 10.0.1.0/24 &lt;span class="nt"&gt;--type&lt;/span&gt; public
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Don't worry - we'll get to &lt;code&gt;vpcctl&lt;/code&gt; after you understand the fundamentals!&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;Before we begin, you'll need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Linux system (Ubuntu 20.04+, Debian, or Pop!_OS recommended)&lt;/li&gt;
&lt;li&gt;Root or sudo access&lt;/li&gt;
&lt;li&gt;Basic familiarity with the command line&lt;/li&gt;
&lt;li&gt;Git (to clone the repository)&lt;/li&gt;
&lt;li&gt;Python 3.6 or higher (for vpcctl)&lt;/li&gt;
&lt;li&gt;Curiosity and patience! 🚀&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Tools We'll Use
&lt;/h3&gt;

&lt;p&gt;All the tools we'll use are built into Linux:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;ip&lt;/code&gt;&lt;/strong&gt; - Modern Linux networking tool&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;brctl&lt;/code&gt;&lt;/strong&gt; - Legacy bridge inspection utility (from the &lt;code&gt;bridge-utils&lt;/code&gt; package)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;iptables&lt;/code&gt;&lt;/strong&gt; - Firewall and NAT tool&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;ping&lt;/code&gt;&lt;/strong&gt; - Network connectivity testing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;curl&lt;/code&gt;&lt;/strong&gt; - HTTP client for testing web servers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;vpcctl&lt;/code&gt;&lt;/strong&gt; - Our automation tool (we'll get this from the repository)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's get started!&lt;/p&gt;




&lt;h2&gt;
  
  
  Part 1: Understanding the Building Blocks
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What is a VPC?
&lt;/h3&gt;

&lt;p&gt;A Virtual Private Cloud (VPC) is essentially a virtual network that you can use to isolate your resources. Think of it like creating a private room in a shared building - you have your own space with controlled access points.&lt;/p&gt;

&lt;p&gt;In our implementation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;VPC&lt;/strong&gt; = A virtual network with its own IP address range&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Subnet&lt;/strong&gt; = A smaller network within the VPC (like a room within the private space)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bridge&lt;/strong&gt; = A virtual switch that connects subnets (with an IP assigned, it also acts as their gateway)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Namespace&lt;/strong&gt; = An isolated network environment (like a separate room)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The Linux Primitives
&lt;/h3&gt;

&lt;p&gt;Before we dive in, let's understand the Linux networking primitives we'll be using:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Network Namespaces&lt;/strong&gt;: Isolated network environments - think of them as separate computers on the same physical machine&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;veth Pairs&lt;/strong&gt;: Virtual Ethernet cables - they come in pairs, like a cable with two ends&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Linux Bridges&lt;/strong&gt;: Virtual switches that connect multiple networks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Routing Tables&lt;/strong&gt;: Maps that tell packets where to go&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;iptables&lt;/strong&gt;: Firewall and NAT rules&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Don't worry if this sounds complex - we'll learn by doing!&lt;/p&gt;




&lt;h2&gt;
  
  
  Part 2: Setting Up Your Environment
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Clone the Repository
&lt;/h3&gt;

&lt;p&gt;First, let's get the project code which includes the &lt;code&gt;vpcctl&lt;/code&gt; tool and example scripts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Clone the repository&lt;/span&gt;
git clone https://github.com/Donkross360/vpc-project.git
&lt;span class="nb"&gt;cd &lt;/span&gt;vpc-project

&lt;span class="c"&gt;# Alternatively, if you prefer to download:&lt;/span&gt;
&lt;span class="c"&gt;# wget https://github.com/Donkross360/vpc-project/archive/main.zip&lt;/span&gt;
&lt;span class="c"&gt;# unzip main.zip&lt;/span&gt;
&lt;span class="c"&gt;# cd vpc-project-main&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The repository contains:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;vpcctl&lt;/code&gt; - The CLI tool for automating VPC management (Python script)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;examples/&lt;/code&gt; - Example scripts and a demo web application&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;README.md&lt;/code&gt; - Complete documentation&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Makefile&lt;/code&gt; - Automation for common tasks&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;cleanup.sh&lt;/code&gt; - Script to clean up all VPC resources&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;policies/&lt;/code&gt; - Example firewall policy files&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 2: Install Required Tools
&lt;/h3&gt;

&lt;p&gt;Now, let's make sure we have all the necessary tools installed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Update package list&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update

&lt;span class="c"&gt;# Install required tools&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    iproute2 &lt;span class="se"&gt;\&lt;/span&gt;
    bridge-utils &lt;span class="se"&gt;\&lt;/span&gt;
    iptables &lt;span class="se"&gt;\&lt;/span&gt;
    python3 &lt;span class="se"&gt;\&lt;/span&gt;
    curl &lt;span class="se"&gt;\&lt;/span&gt;
    git

&lt;span class="c"&gt;# Make vpcctl executable&lt;/span&gt;
&lt;span class="nb"&gt;chmod&lt;/span&gt; +x vpcctl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Quick Start with vpcctl (Optional)
&lt;/h3&gt;

&lt;p&gt;Want to see &lt;code&gt;vpcctl&lt;/code&gt; in action right away? You can skip ahead to Part 11, or follow along manually to understand how it works. For now, here's a quick preview:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create a VPC with one command&lt;/span&gt;
&lt;span class="nb"&gt;sudo&lt;/span&gt; ./vpcctl create &lt;span class="nt"&gt;--name&lt;/span&gt; demo-vpc &lt;span class="nt"&gt;--cidr&lt;/span&gt; 10.0.0.0/16

&lt;span class="c"&gt;# Add a public subnet&lt;/span&gt;
&lt;span class="nb"&gt;sudo&lt;/span&gt; ./vpcctl add-subnet &lt;span class="nt"&gt;--vpc&lt;/span&gt; demo-vpc &lt;span class="nt"&gt;--name&lt;/span&gt; public &lt;span class="nt"&gt;--cidr&lt;/span&gt; 10.0.1.0/24 &lt;span class="nt"&gt;--type&lt;/span&gt; public

&lt;span class="c"&gt;# List all VPCs&lt;/span&gt;
&lt;span class="nb"&gt;sudo&lt;/span&gt; ./vpcctl list

&lt;span class="c"&gt;# View VPC details&lt;/span&gt;
&lt;span class="nb"&gt;sudo&lt;/span&gt; ./vpcctl show demo-vpc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But wait! Before we automate, let's understand what's happening under the hood by building manually first.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Enable IP Forwarding
&lt;/h3&gt;

&lt;p&gt;IP forwarding allows Linux to act as a router, forwarding packets between networks. This is essential for our VPC to work:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Enable IP forwarding temporarily&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;sysctl &lt;span class="nt"&gt;-w&lt;/span&gt; net.ipv4.ip_forward&lt;span class="o"&gt;=&lt;/span&gt;1

&lt;span class="c"&gt;# Make it permanent (so it survives reboots)&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"net.ipv4.ip_forward=1"&lt;/span&gt; | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; &lt;span class="nt"&gt;-a&lt;/span&gt; /etc/sysctl.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What this does&lt;/strong&gt;: This tells the Linux kernel to forward IP packets between network interfaces, which is necessary for routing traffic between our virtual networks.&lt;/p&gt;
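&lt;p&gt;You can confirm the setting took effect without &lt;code&gt;sudo&lt;/code&gt; - &lt;code&gt;/proc&lt;/code&gt; exposes the live kernel value:&lt;/p&gt;

```shell
# /proc/sys/net/ipv4/ip_forward mirrors the sysctl: 1 means the kernel
# will route packets between interfaces, 0 means it won't.
if [ "$(cat /proc/sys/net/ipv4/ip_forward)" = "1" ]; then
  echo "IP forwarding is enabled"
else
  echo "IP forwarding is disabled - run: sudo sysctl -w net.ipv4.ip_forward=1"
fi
```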

&lt;h3&gt;
  
  
  Step 5: Identify Your Internet Interface
&lt;/h3&gt;

&lt;p&gt;We need to know which network interface connects your machine to the internet. This will be used for NAT (Network Address Translation) later:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Find your default internet interface&lt;/span&gt;
&lt;span class="nv"&gt;DEFAULT_INTERFACE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;ip route show default | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'/default/ {print $5}'&lt;/span&gt; | &lt;span class="nb"&gt;head&lt;/span&gt; &lt;span class="nt"&gt;-1&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Internet interface: &lt;/span&gt;&lt;span class="nv"&gt;$DEFAULT_INTERFACE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="c"&gt;# Verify the interface exists and is UP&lt;/span&gt;
ip &lt;span class="nb"&gt;link &lt;/span&gt;show &lt;span class="nv"&gt;$DEFAULT_INTERFACE&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Expected output&lt;/strong&gt;: You should see something like &lt;code&gt;enp0s25&lt;/code&gt;, &lt;code&gt;eth0&lt;/code&gt;, or &lt;code&gt;wlan0&lt;/code&gt;. This is your connection to the outside world.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Modern Linux systems use "predictable naming" like &lt;code&gt;enp0s25&lt;/code&gt; instead of the traditional &lt;code&gt;eth0&lt;/code&gt;. This is normal and expected!&lt;/p&gt;
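&lt;p&gt;If the &lt;code&gt;awk&lt;/code&gt; one-liner prints nothing (for example, on a machine with no default route), you can enumerate every interface straight from sysfs - this works regardless of the naming scheme:&lt;/p&gt;

```shell
# Every network interface appears as a directory under /sys/class/net;
# its operstate file reports up/down/unknown for that interface.
for dev in /sys/class/net/*; do
  printf '%-12s %s\n' "$(basename "$dev")" "$(cat "$dev/operstate")"
done
```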




&lt;h2&gt;
  
  
  Part 3: Creating Your First Network Namespace
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Understanding Namespaces
&lt;/h3&gt;

&lt;p&gt;A network namespace is like a separate computer with its own network stack. It has its own:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Network interfaces&lt;/li&gt;
&lt;li&gt;IP addresses&lt;/li&gt;
&lt;li&gt;Routing tables&lt;/li&gt;
&lt;li&gt;Firewall rules&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's create our first namespace to see how it works:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create a network namespace&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns add test-ns-1

&lt;span class="c"&gt;# List all namespaces&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns list

&lt;span class="c"&gt;# Verify the namespace was created&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;test-ns-1 ip &lt;span class="nb"&gt;link &lt;/span&gt;show
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What you'll see&lt;/strong&gt;: The namespace only has a &lt;code&gt;lo&lt;/code&gt; (loopback) interface, which is like a network interface that points back to itself. It's completely isolated from your host system!&lt;/p&gt;

&lt;h3&gt;
  
  
  Bringing Up the Loopback Interface
&lt;/h3&gt;

&lt;p&gt;Even though the namespace has a loopback interface, it's not active by default. Let's bring it up:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Bring up the loopback interface in the namespace&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;test-ns-1 ip &lt;span class="nb"&gt;link set &lt;/span&gt;lo up

&lt;span class="c"&gt;# Verify it's up&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;test-ns-1 ip &lt;span class="nb"&gt;link &lt;/span&gt;show lo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Success indicator&lt;/strong&gt;: You should see &lt;code&gt;state UNKNOWN&lt;/code&gt; or &lt;code&gt;state UP&lt;/code&gt; instead of &lt;code&gt;state DOWN&lt;/code&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Part 4: Creating Virtual Ethernet Pairs
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What are veth Pairs?
&lt;/h3&gt;

&lt;p&gt;A veth (virtual Ethernet) pair is like a virtual network cable with two ends. Whatever you send into one end comes out the other end. We use these to connect namespaces to the host or to bridges.&lt;/p&gt;

&lt;p&gt;Let's create our first veth pair:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create a veth pair&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link &lt;/span&gt;add veth-host &lt;span class="nb"&gt;type &lt;/span&gt;veth peer name veth-ns

&lt;span class="c"&gt;# Verify both ends were created&lt;/span&gt;
ip &lt;span class="nb"&gt;link &lt;/span&gt;show &lt;span class="nb"&gt;type &lt;/span&gt;veth
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What you'll see&lt;/strong&gt;: Two interfaces - &lt;code&gt;veth-host&lt;/code&gt; and &lt;code&gt;veth-ns&lt;/code&gt;. They're connected like two ends of a cable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Moving One End to a Namespace
&lt;/h3&gt;

&lt;p&gt;Now let's move one end into our namespace. This creates a connection between the host and the namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create a namespace for testing&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns add test-ns-2

&lt;span class="c"&gt;# Move veth-ns to the namespace&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link set &lt;/span&gt;veth-ns netns test-ns-2

&lt;span class="c"&gt;# Verify veth-ns is now in the namespace&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;test-ns-2 ip &lt;span class="nb"&gt;link &lt;/span&gt;show

&lt;span class="c"&gt;# Verify veth-host is still on the host&lt;/span&gt;
ip &lt;span class="nb"&gt;link &lt;/span&gt;show veth-host
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What happened&lt;/strong&gt;: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;veth-ns&lt;/code&gt; is now inside &lt;code&gt;test-ns-2&lt;/code&gt; and no longer visible on the host&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;veth-host&lt;/code&gt; is still on the host&lt;/li&gt;
&lt;li&gt;They're still connected - data sent through one end will come out the other!&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Assigning IP Addresses
&lt;/h3&gt;

&lt;p&gt;Now let's give both ends IP addresses so they can communicate:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Assign IP to host side&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip addr add 10.0.1.1/24 dev veth-host
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link set &lt;/span&gt;veth-host up

&lt;span class="c"&gt;# Assign IP to namespace side&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;test-ns-2 ip addr add 10.0.1.10/24 dev veth-ns
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;test-ns-2 ip &lt;span class="nb"&gt;link set &lt;/span&gt;veth-ns up

&lt;span class="c"&gt;# Bring up loopback in namespace&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;test-ns-2 ip &lt;span class="nb"&gt;link set &lt;/span&gt;lo up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Understanding the IP addresses&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;10.0.1.1/24&lt;/code&gt; means IP address &lt;code&gt;10.0.1.1&lt;/code&gt; with subnet mask &lt;code&gt;255.255.255.0&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;/24&lt;/code&gt; means the first 24 bits are the network part&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;10.0.1.0/24&lt;/code&gt; is the network, &lt;code&gt;10.0.1.1&lt;/code&gt; and &lt;code&gt;10.0.1.10&lt;/code&gt; are hosts in that network&lt;/li&gt;
&lt;/ul&gt;
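&lt;p&gt;The CIDR math above can be done with plain shell arithmetic. A small, self-contained sketch - nothing VPC-specific, just deriving the netmask and network address for &lt;code&gt;10.0.1.10/24&lt;/code&gt;:&lt;/p&gt;

```shell
# Convert a dotted quad read from stdin into a 32-bit integer.
ip_to_int() {
  IFS=. read -r a b c d
  echo $(( ((a * 256 + b) * 256 + c) * 256 + d ))
}

# Convert a 32-bit integer back into dotted-quad form.
int_to_ip() {
  echo "$(( $1 / 16777216 % 256 )).$(( $1 / 65536 % 256 )).$(( $1 / 256 % 256 )).$(( $1 % 256 ))"
}

prefix=24
host_bits=$(( 32 - prefix ))
mask=$(( 4294967295 - (2 ** host_bits - 1) ))   # /24 -> 255.255.255.0
host=$(echo 10.0.1.10 | ip_to_int)
network=$(( host - host % (2 ** host_bits) ))    # zero out the host bits

echo "netmask: $(int_to_ip "$mask")"
echo "network: $(int_to_ip "$network")"
```

&lt;p&gt;Running it prints &lt;code&gt;netmask: 255.255.255.0&lt;/code&gt; and &lt;code&gt;network: 10.0.1.0&lt;/code&gt; - the same subnet boundaries we just assigned by hand.&lt;/p&gt;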

&lt;h3&gt;
  
  
  Testing Connectivity
&lt;/h3&gt;

&lt;p&gt;Let's test if the host and namespace can communicate:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Ping from host to namespace&lt;/span&gt;
ping &lt;span class="nt"&gt;-c&lt;/span&gt; 2 10.0.1.10

&lt;span class="c"&gt;# Ping from namespace to host&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;test-ns-2 ping &lt;span class="nt"&gt;-c&lt;/span&gt; 2 10.0.1.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Success!&lt;/strong&gt; If both pings work, you've successfully created your first virtual network connection! 🎉&lt;/p&gt;

&lt;h3&gt;
  
  
  Cleanup
&lt;/h3&gt;

&lt;p&gt;Let's clean up before moving on:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Bring interfaces down&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link set &lt;/span&gt;veth-host down
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link &lt;/span&gt;delete veth-host

&lt;span class="c"&gt;# Delete namespace (this also removes veth-ns)&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns delete test-ns-2
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns delete test-ns-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Part 5: Creating a Linux Bridge (Your First Router)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What is a Linux Bridge?
&lt;/h3&gt;

&lt;p&gt;A Linux bridge is like a virtual network switch. It can connect multiple network interfaces and forward traffic between them. In our VPC, the bridge will act as a router, connecting multiple subnets.&lt;/p&gt;

&lt;p&gt;Let's create our first bridge:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create a bridge&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link &lt;/span&gt;add br-test &lt;span class="nb"&gt;type &lt;/span&gt;bridge

&lt;span class="c"&gt;# Bring it up&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link set &lt;/span&gt;br-test up

&lt;span class="c"&gt;# Verify it was created&lt;/span&gt;
ip &lt;span class="nb"&gt;link &lt;/span&gt;show br-test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Connecting a Namespace to the Bridge
&lt;/h3&gt;

&lt;p&gt;Now let's connect a namespace to our bridge using a veth pair:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create namespace&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns add test-ns-3

&lt;span class="c"&gt;# Create veth pair&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link &lt;/span&gt;add veth-br-host &lt;span class="nb"&gt;type &lt;/span&gt;veth peer name veth-br-ns

&lt;span class="c"&gt;# Move namespace end to namespace&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link set &lt;/span&gt;veth-br-ns netns test-ns-3

&lt;span class="c"&gt;# Connect host end to bridge (IMPORTANT: Do this before bringing it up!)&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link set &lt;/span&gt;veth-br-host master br-test

&lt;span class="c"&gt;# Bring host end up&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link set &lt;/span&gt;veth-br-host up

&lt;span class="c"&gt;# Configure namespace&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;test-ns-3 ip &lt;span class="nb"&gt;link set &lt;/span&gt;lo up
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;test-ns-3 ip addr add 10.0.1.10/24 dev veth-br-ns
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;test-ns-3 ip &lt;span class="nb"&gt;link set &lt;/span&gt;veth-br-ns up

&lt;span class="c"&gt;# Assign IP to bridge (this acts as the gateway)&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip addr add 10.0.1.1/24 dev br-test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Key points&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The bridge IP (&lt;code&gt;10.0.1.1&lt;/code&gt;) acts as the gateway for the namespace&lt;/li&gt;
&lt;li&gt;The namespace and bridge must be in the same subnet (&lt;code&gt;10.0.1.0/24&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;We attach the veth to the bridge BEFORE bringing it up&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Adding a Default Route
&lt;/h3&gt;

&lt;p&gt;The namespace needs to know where to send packets that aren't in its local network. Let's add a default route:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Add default route in namespace (points to bridge as gateway)&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;test-ns-3 ip route add default via 10.0.1.1 dev veth-br-ns

&lt;span class="c"&gt;# Verify the route&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;test-ns-3 ip route show
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Testing Connectivity
&lt;/h3&gt;

&lt;p&gt;Let's test if the namespace can reach the bridge:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Ping bridge from namespace&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;test-ns-3 ping &lt;span class="nt"&gt;-c&lt;/span&gt; 2 10.0.1.1

&lt;span class="c"&gt;# Verify bridge sees the connection&lt;/span&gt;
brctl show br-test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Expected output&lt;/strong&gt;: The bridge should show &lt;code&gt;veth-br-host&lt;/code&gt; as a connected interface, and the ping should succeed!&lt;/p&gt;

&lt;h3&gt;
  
  
  Connecting Multiple Namespaces
&lt;/h3&gt;

&lt;p&gt;Now let's add a second namespace to the same bridge to create a small network:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create second namespace&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns add test-ns-4

&lt;span class="c"&gt;# Create second veth pair&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link &lt;/span&gt;add veth-br-host-2 &lt;span class="nb"&gt;type &lt;/span&gt;veth peer name veth-br-ns-2

&lt;span class="c"&gt;# Move namespace end to namespace&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link set &lt;/span&gt;veth-br-ns-2 netns test-ns-4

&lt;span class="c"&gt;# Connect to bridge&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link set &lt;/span&gt;veth-br-host-2 master br-test
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link set &lt;/span&gt;veth-br-host-2 up

&lt;span class="c"&gt;# Configure second namespace&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;test-ns-4 ip &lt;span class="nb"&gt;link set &lt;/span&gt;lo up
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;test-ns-4 ip addr add 10.0.1.20/24 dev veth-br-ns-2
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;test-ns-4 ip &lt;span class="nb"&gt;link set &lt;/span&gt;veth-br-ns-2 up
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;test-ns-4 ip route add default via 10.0.1.1 dev veth-br-ns-2

&lt;span class="c"&gt;# Test connectivity between namespaces&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;test-ns-3 ping &lt;span class="nt"&gt;-c&lt;/span&gt; 2 10.0.1.20
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;test-ns-4 ping &lt;span class="nt"&gt;-c&lt;/span&gt; 2 10.0.1.10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Success!&lt;/strong&gt; Both namespaces can now communicate with each other through the bridge! This is the foundation of our VPC.&lt;/p&gt;
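&lt;p&gt;One housekeeping note before moving on: the next part reuses the &lt;code&gt;10.0.1.0/24&lt;/code&gt; range for the VPC's public subnet, so it's worth tearing down this test setup first to avoid address conflicts. A minimal cleanup, assuming the names used above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Deleting a namespace also destroys the veth ends inside it&lt;/span&gt;
sudo ip netns del test-ns-3
sudo ip netns del test-ns-4

&lt;span class="c"&gt;# Remove the test bridge and any leftover host-side veth ends&lt;/span&gt;
sudo ip link del br-test
sudo ip link del veth-br-host 2&amp;gt;/dev/null || true
sudo ip link del veth-br-host-2 2&amp;gt;/dev/null || true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;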




&lt;h2&gt;
  
  
  Part 6: Building Your First VPC
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Understanding VPC Structure
&lt;/h3&gt;

&lt;p&gt;A VPC consists of:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;A bridge&lt;/strong&gt; that acts as the VPC router&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multiple subnets&lt;/strong&gt; (network namespaces)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;veth pairs&lt;/strong&gt; connecting subnets to the bridge&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Routing tables&lt;/strong&gt; directing traffic&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;NAT rules&lt;/strong&gt; for internet access (for public subnets)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let's build a complete VPC with public and private subnets!&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Create the VPC Bridge
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create VPC bridge&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link &lt;/span&gt;add br-vpc1 &lt;span class="nb"&gt;type &lt;/span&gt;bridge
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link set &lt;/span&gt;br-vpc1 up

&lt;span class="c"&gt;# Assign bridge IP addresses for each subnet&lt;/span&gt;
&lt;span class="c"&gt;# The bridge needs an IP in each subnet to act as the gateway&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip addr add 10.0.1.1/24 dev br-vpc1  &lt;span class="c"&gt;# Gateway for public subnet&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip addr add 10.0.2.1/24 dev br-vpc1  &lt;span class="c"&gt;# Gateway for private subnet&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip addr add 10.0.0.1/16 dev br-vpc1  &lt;span class="c"&gt;# VPC router IP (optional)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why multiple IPs?&lt;/strong&gt; The bridge needs to be in the same subnet as each namespace it serves as a gateway for. This allows the namespaces to reach the gateway using ARP (Address Resolution Protocol).&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Create Public Subnet
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create public subnet namespace&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns add ns-vpc1-public

&lt;span class="c"&gt;# Create veth pair for public subnet&lt;/span&gt;
&lt;span class="c"&gt;# Note: Interface names must be 15 characters or less (Linux limit)&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link &lt;/span&gt;add veth-vpc1-pub-h &lt;span class="nb"&gt;type &lt;/span&gt;veth peer name veth-vpc1-pub-n

&lt;span class="c"&gt;# Move namespace end to namespace&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link set &lt;/span&gt;veth-vpc1-pub-n netns ns-vpc1-public

&lt;span class="c"&gt;# Connect host end to bridge&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link set &lt;/span&gt;veth-vpc1-pub-h master br-vpc1
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link set &lt;/span&gt;veth-vpc1-pub-h up

&lt;span class="c"&gt;# Configure public subnet&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc1-public ip &lt;span class="nb"&gt;link set &lt;/span&gt;lo up
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc1-public ip addr add 10.0.1.10/24 dev veth-vpc1-pub-n
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc1-public ip &lt;span class="nb"&gt;link set &lt;/span&gt;veth-vpc1-pub-n up

&lt;span class="c"&gt;# Add routes&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc1-public ip route add default via 10.0.1.1 dev veth-vpc1-pub-n
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc1-public ip route add 10.0.2.0/24 via 10.0.1.1 dev veth-vpc1-pub-n
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Create Private Subnet
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create private subnet namespace&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns add ns-vpc1-private

&lt;span class="c"&gt;# Create veth pair for private subnet&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link &lt;/span&gt;add veth-vpc1-prv-h &lt;span class="nb"&gt;type &lt;/span&gt;veth peer name veth-vpc1-prv-n

&lt;span class="c"&gt;# Move namespace end to namespace&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link set &lt;/span&gt;veth-vpc1-prv-n netns ns-vpc1-private

&lt;span class="c"&gt;# Connect host end to bridge&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link set &lt;/span&gt;veth-vpc1-prv-h master br-vpc1
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link set &lt;/span&gt;veth-vpc1-prv-h up

&lt;span class="c"&gt;# Configure private subnet&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc1-private ip &lt;span class="nb"&gt;link set &lt;/span&gt;lo up
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc1-private ip addr add 10.0.2.10/24 dev veth-vpc1-prv-n
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc1-private ip &lt;span class="nb"&gt;link set &lt;/span&gt;veth-vpc1-prv-n up

&lt;span class="c"&gt;# Add routes&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc1-private ip route add default via 10.0.2.1 dev veth-vpc1-prv-n
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc1-private ip route add 10.0.1.0/24 via 10.0.2.1 dev veth-vpc1-prv-n
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 4: Enable Proxy ARP
&lt;/h3&gt;

&lt;p&gt;Proxy ARP allows the bridge to respond to ARP requests for IP addresses in different subnets. This is essential for inter-subnet routing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Enable proxy ARP on bridge and veth interfaces&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;sysctl &lt;span class="nt"&gt;-w&lt;/span&gt; net.ipv4.conf.br-vpc1.proxy_arp&lt;span class="o"&gt;=&lt;/span&gt;1
&lt;span class="nb"&gt;sudo &lt;/span&gt;sysctl &lt;span class="nt"&gt;-w&lt;/span&gt; net.ipv4.conf.veth-vpc1-pub-h.proxy_arp&lt;span class="o"&gt;=&lt;/span&gt;1
&lt;span class="nb"&gt;sudo &lt;/span&gt;sysctl &lt;span class="nt"&gt;-w&lt;/span&gt; net.ipv4.conf.veth-vpc1-prv-h.proxy_arp&lt;span class="o"&gt;=&lt;/span&gt;1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What is Proxy ARP?&lt;/strong&gt; Normally, a device only responds to ARP requests for IPs on its own interface. Proxy ARP allows the bridge to respond for IPs in connected subnets, enabling routing.&lt;/p&gt;
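&lt;p&gt;You can confirm the settings took effect by reading the values back from &lt;code&gt;/proc&lt;/code&gt; — each file should print &lt;code&gt;1&lt;/code&gt; when proxy ARP is enabled:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Verify proxy ARP is enabled (1 = on, 0 = off)&lt;/span&gt;
cat /proc/sys/net/ipv4/conf/br-vpc1/proxy_arp
cat /proc/sys/net/ipv4/conf/veth-vpc1-pub-h/proxy_arp
cat /proc/sys/net/ipv4/conf/veth-vpc1-prv-h/proxy_arp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;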

&lt;h3&gt;
  
  
  Step 5: Test Inter-Subnet Communication
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Test: public to private&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc1-public ping &lt;span class="nt"&gt;-c&lt;/span&gt; 2 10.0.2.10

&lt;span class="c"&gt;# Test: private to public&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc1-private ping &lt;span class="nt"&gt;-c&lt;/span&gt; 2 10.0.1.10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Success!&lt;/strong&gt; Your subnets can now communicate with each other! 🎉&lt;/p&gt;




&lt;h2&gt;
  
  
  Part 7: Enabling Internet Access with NAT
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Understanding NAT
&lt;/h3&gt;

&lt;p&gt;Network Address Translation (NAT) allows private IP addresses to access the internet by translating them to a public IP address. This is how your home router works!&lt;/p&gt;
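&lt;p&gt;One prerequisite worth double-checking here: NAT (and routing between subnets) only works if the host is forwarding IPv4 packets. If you didn't enable this earlier in the series, the MASQUERADE rule below will silently do nothing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Check whether IPv4 forwarding is on (1 = enabled)&lt;/span&gt;
sysctl net.ipv4.ip_forward

&lt;span class="c"&gt;# Enable it for the current boot if needed&lt;/span&gt;
sudo sysctl &lt;span class="nt"&gt;-w&lt;/span&gt; net.ipv4.ip_forward=1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;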

&lt;h3&gt;
  
  
  Step 1: Enable NAT for Public Subnet
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Get your internet interface (we identified this earlier)&lt;/span&gt;
&lt;span class="nv"&gt;DEFAULT_INTERFACE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;ip route show default | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'/default/ {print $5}'&lt;/span&gt; | &lt;span class="nb"&gt;head&lt;/span&gt; &lt;span class="nt"&gt;-1&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# Add NAT rule for public subnet&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;iptables &lt;span class="nt"&gt;-t&lt;/span&gt; nat &lt;span class="nt"&gt;-A&lt;/span&gt; POSTROUTING &lt;span class="nt"&gt;-s&lt;/span&gt; 10.0.1.0/24 &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;$DEFAULT_INTERFACE&lt;/span&gt; &lt;span class="nt"&gt;-j&lt;/span&gt; MASQUERADE

&lt;span class="c"&gt;# Verify the rule&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;iptables &lt;span class="nt"&gt;-t&lt;/span&gt; nat &lt;span class="nt"&gt;-L&lt;/span&gt; POSTROUTING &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What this does&lt;/strong&gt;: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;-s 10.0.1.0/24&lt;/code&gt; - Source is the public subnet&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-o $DEFAULT_INTERFACE&lt;/code&gt; - Outgoing interface is your internet connection&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-j MASQUERADE&lt;/code&gt; - Masquerade (NAT) the traffic&lt;/li&gt;
&lt;/ul&gt;
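&lt;p&gt;If you need to adjust or remove the rule later, listing the NAT table with line numbers lets you delete a rule by position — a small convenience, not required for the walkthrough:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# List NAT rules with their positions&lt;/span&gt;
sudo iptables &lt;span class="nt"&gt;-t&lt;/span&gt; nat &lt;span class="nt"&gt;-L&lt;/span&gt; POSTROUTING &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="nt"&gt;--line-numbers&lt;/span&gt;

&lt;span class="c"&gt;# Delete a rule by its line number (example: rule 1)&lt;/span&gt;
&lt;span class="c"&gt;# sudo iptables -t nat -D POSTROUTING 1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;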

&lt;h3&gt;
  
  
  Step 2: Configure DNS
&lt;/h3&gt;

&lt;p&gt;Namespaces need DNS to resolve domain names:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create DNS configuration for namespace&lt;/span&gt;
&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /etc/netns/ns-vpc1-public
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"nameserver 8.8.8.8"&lt;/span&gt; | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/netns/ns-vpc1-public/resolv.conf
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"nameserver 8.8.4.4"&lt;/span&gt; | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; &lt;span class="nt"&gt;-a&lt;/span&gt; /etc/netns/ns-vpc1-public/resolv.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Test Internet Access
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Test: ping Google's DNS&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc1-public ping &lt;span class="nt"&gt;-c&lt;/span&gt; 2 8.8.8.8

&lt;span class="c"&gt;# Test: ping Google by domain name&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc1-public ping &lt;span class="nt"&gt;-c&lt;/span&gt; 2 google.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Success!&lt;/strong&gt; Your public subnet can now access the internet! 🌐&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Verify Private Subnet Isolation
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Private subnet should NOT be able to access internet (no NAT)&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc1-private ping &lt;span class="nt"&gt;-c&lt;/span&gt; 2 8.8.8.8
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Expected result&lt;/strong&gt;: The ping should fail or time out. This is correct - there is no NAT rule for &lt;code&gt;10.0.2.0/24&lt;/code&gt;, so the private subnet has no path to the internet.&lt;/p&gt;
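&lt;p&gt;In AWS, a NAT gateway is what gives private subnets outbound-only access. If you wanted to replicate that here, the same MASQUERADE pattern would apply - shown only as a sketch, since this tutorial deliberately leaves the private subnet offline:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Sketch: outbound-only access for the private subnet,&lt;/span&gt;
&lt;span class="c"&gt;# mirroring the rule used for the public subnet&lt;/span&gt;
&lt;span class="nv"&gt;DEFAULT_INTERFACE&lt;/span&gt;=$(ip route show default | awk &lt;span class="s1"&gt;'/default/ {print $5}'&lt;/span&gt; | head &lt;span class="nt"&gt;-1&lt;/span&gt;)
sudo iptables &lt;span class="nt"&gt;-t&lt;/span&gt; nat &lt;span class="nt"&gt;-A&lt;/span&gt; POSTROUTING &lt;span class="nt"&gt;-s&lt;/span&gt; 10.0.2.0/24 &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;$DEFAULT_INTERFACE&lt;/span&gt; &lt;span class="nt"&gt;-j&lt;/span&gt; MASQUERADE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;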




&lt;h2&gt;
  
  
  Part 8: Creating Multiple VPCs and Enforcing Isolation
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Creating a Second VPC
&lt;/h3&gt;

&lt;p&gt;Let's create a second VPC to demonstrate isolation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create second VPC bridge&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link &lt;/span&gt;add br-vpc2 &lt;span class="nb"&gt;type &lt;/span&gt;bridge
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link set &lt;/span&gt;br-vpc2 up

&lt;span class="c"&gt;# Assign bridge IPs&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip addr add 172.16.1.1/24 dev br-vpc2  &lt;span class="c"&gt;# Gateway for public subnet&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip addr add 172.16.2.1/24 dev br-vpc2  &lt;span class="c"&gt;# Gateway for private subnet&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip addr add 172.16.0.1/16 dev br-vpc2  &lt;span class="c"&gt;# VPC router IP&lt;/span&gt;

&lt;span class="c"&gt;# Create public subnet for VPC2&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns add ns-vpc2-public
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link &lt;/span&gt;add veth-vpc2-pub-h &lt;span class="nb"&gt;type &lt;/span&gt;veth peer name veth-vpc2-pub-n
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link set &lt;/span&gt;veth-vpc2-pub-n netns ns-vpc2-public
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link set &lt;/span&gt;veth-vpc2-pub-h master br-vpc2
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link set &lt;/span&gt;veth-vpc2-pub-h up
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc2-public ip &lt;span class="nb"&gt;link set &lt;/span&gt;lo up
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc2-public ip addr add 172.16.1.10/24 dev veth-vpc2-pub-n
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc2-public ip &lt;span class="nb"&gt;link set &lt;/span&gt;veth-vpc2-pub-n up
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc2-public ip route add default via 172.16.1.1 dev veth-vpc2-pub-n
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc2-public ip route add 172.16.2.0/24 via 172.16.1.1 dev veth-vpc2-pub-n

&lt;span class="c"&gt;# Create private subnet for VPC2&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns add ns-vpc2-private
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link &lt;/span&gt;add veth-vpc2-prv-h &lt;span class="nb"&gt;type &lt;/span&gt;veth peer name veth-vpc2-prv-n
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link set &lt;/span&gt;veth-vpc2-prv-n netns ns-vpc2-private
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link set &lt;/span&gt;veth-vpc2-prv-h master br-vpc2
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link set &lt;/span&gt;veth-vpc2-prv-h up
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc2-private ip &lt;span class="nb"&gt;link set &lt;/span&gt;lo up
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc2-private ip addr add 172.16.2.10/24 dev veth-vpc2-prv-n
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc2-private ip &lt;span class="nb"&gt;link set &lt;/span&gt;veth-vpc2-prv-n up
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc2-private ip route add default via 172.16.2.1 dev veth-vpc2-prv-n
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc2-private ip route add 172.16.1.0/24 via 172.16.2.1 dev veth-vpc2-prv-n

&lt;span class="c"&gt;# Enable proxy ARP&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;sysctl &lt;span class="nt"&gt;-w&lt;/span&gt; net.ipv4.conf.br-vpc2.proxy_arp&lt;span class="o"&gt;=&lt;/span&gt;1
&lt;span class="nb"&gt;sudo &lt;/span&gt;sysctl &lt;span class="nt"&gt;-w&lt;/span&gt; net.ipv4.conf.veth-vpc2-pub-h.proxy_arp&lt;span class="o"&gt;=&lt;/span&gt;1
&lt;span class="nb"&gt;sudo &lt;/span&gt;sysctl &lt;span class="nt"&gt;-w&lt;/span&gt; net.ipv4.conf.veth-vpc2-prv-h.proxy_arp&lt;span class="o"&gt;=&lt;/span&gt;1

&lt;span class="c"&gt;# Add NAT for VPC2 public subnet&lt;/span&gt;
&lt;span class="nv"&gt;DEFAULT_INTERFACE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;ip route show default | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'/default/ {print $5}'&lt;/span&gt; | &lt;span class="nb"&gt;head&lt;/span&gt; &lt;span class="nt"&gt;-1&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;iptables &lt;span class="nt"&gt;-t&lt;/span&gt; nat &lt;span class="nt"&gt;-A&lt;/span&gt; POSTROUTING &lt;span class="nt"&gt;-s&lt;/span&gt; 172.16.1.0/24 &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;$DEFAULT_INTERFACE&lt;/span&gt; &lt;span class="nt"&gt;-j&lt;/span&gt; MASQUERADE

&lt;span class="c"&gt;# Configure DNS&lt;/span&gt;
&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /etc/netns/ns-vpc2-public
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"nameserver 8.8.8.8"&lt;/span&gt; | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/netns/ns-vpc2-public/resolv.conf
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"nameserver 8.8.4.4"&lt;/span&gt; | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; &lt;span class="nt"&gt;-a&lt;/span&gt; /etc/netns/ns-vpc2-public/resolv.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Enforcing VPC Isolation
&lt;/h3&gt;

&lt;p&gt;Because the host holds routes to both bridges and is forwarding IPv4 packets, Linux will happily route traffic between the two VPCs. We need to block this explicitly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Block traffic between VPC1 and VPC2&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;iptables &lt;span class="nt"&gt;-A&lt;/span&gt; FORWARD &lt;span class="nt"&gt;-s&lt;/span&gt; 10.0.0.0/16 &lt;span class="nt"&gt;-d&lt;/span&gt; 172.16.0.0/16 &lt;span class="nt"&gt;-j&lt;/span&gt; DROP
&lt;span class="nb"&gt;sudo &lt;/span&gt;iptables &lt;span class="nt"&gt;-A&lt;/span&gt; FORWARD &lt;span class="nt"&gt;-s&lt;/span&gt; 172.16.0.0/16 &lt;span class="nt"&gt;-d&lt;/span&gt; 10.0.0.0/16 &lt;span class="nt"&gt;-j&lt;/span&gt; DROP

&lt;span class="c"&gt;# Verify isolation&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc1-public ping &lt;span class="nt"&gt;-c&lt;/span&gt; 2 172.16.1.10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Expected result&lt;/strong&gt;: The ping should fail! VPCs are now isolated from each other. 🔒&lt;/p&gt;




&lt;h2&gt;
  
  
  Part 9: VPC Peering (Connecting VPCs)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What is VPC Peering?
&lt;/h3&gt;

&lt;p&gt;VPC peering allows you to connect two VPCs so they can communicate with each other. This is useful when you want controlled communication between different environments.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Create Peering Connection
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create veth pair for peering&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link &lt;/span&gt;add veth-peer-1 &lt;span class="nb"&gt;type &lt;/span&gt;veth peer name veth-peer-2

&lt;span class="c"&gt;# Connect one end to VPC1 bridge&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link set &lt;/span&gt;veth-peer-1 master br-vpc1
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link set &lt;/span&gt;veth-peer-1 up
&lt;span class="c"&gt;# Use a shared /30 subnet for peering (192.168.255.0/30)&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip addr add 192.168.255.1/30 dev veth-peer-1

&lt;span class="c"&gt;# Connect other end to VPC2 bridge&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link set &lt;/span&gt;veth-peer-2 master br-vpc2
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip &lt;span class="nb"&gt;link set &lt;/span&gt;veth-peer-2 up
&lt;span class="c"&gt;# Same subnet - this is the peer IP&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip addr add 192.168.255.2/30 dev veth-peer-2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why a /30 subnet?&lt;/strong&gt; A /30 subnet provides exactly 2 usable IPs (&lt;code&gt;.1&lt;/code&gt; and &lt;code&gt;.2&lt;/code&gt;), perfect for a point-to-point link between two VPCs.&lt;/p&gt;
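&lt;p&gt;The arithmetic behind that claim is easy to check in the shell: a /30 leaves 2 host bits, giving 4 addresses, and two of those are reserved for the network and broadcast addresses:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# 2^(32-30) = 4 total addresses; minus network and broadcast = 2 usable&lt;/span&gt;
echo $(( (1 &amp;lt;&amp;lt; (32 - 30)) - 2 ))  &lt;span class="c"&gt;# prints 2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;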

&lt;h3&gt;
  
  
  Step 2: Add Routes
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Add route on host to reach VPC2 through peering link&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip route add 172.16.0.0/16 via 192.168.255.2 dev veth-peer-1

&lt;span class="c"&gt;# Add route on host to reach VPC1 through peering link&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip route add 10.0.0.0/16 via 192.168.255.1 dev veth-peer-2

&lt;span class="c"&gt;# Remove isolation rules (peering should allow communication)&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;iptables &lt;span class="nt"&gt;-D&lt;/span&gt; FORWARD &lt;span class="nt"&gt;-s&lt;/span&gt; 10.0.0.0/16 &lt;span class="nt"&gt;-d&lt;/span&gt; 172.16.0.0/16 &lt;span class="nt"&gt;-j&lt;/span&gt; DROP 2&amp;gt;/dev/null &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true
sudo &lt;/span&gt;iptables &lt;span class="nt"&gt;-D&lt;/span&gt; FORWARD &lt;span class="nt"&gt;-s&lt;/span&gt; 172.16.0.0/16 &lt;span class="nt"&gt;-d&lt;/span&gt; 10.0.0.0/16 &lt;span class="nt"&gt;-j&lt;/span&gt; DROP 2&amp;gt;/dev/null &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true&lt;/span&gt;

&lt;span class="c"&gt;# Add routes in namespaces to reach the peer VPC&lt;/span&gt;
&lt;span class="c"&gt;# In VPC1 namespaces, add route to VPC2&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc1-public ip route add 172.16.0.0/16 via 10.0.1.1 dev veth-vpc1-pub-n
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc1-private ip route add 172.16.0.0/16 via 10.0.2.1 dev veth-vpc1-prv-n

&lt;span class="c"&gt;# In VPC2 namespaces, add route to VPC1&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc2-public ip route add 10.0.0.0/16 via 172.16.1.1 dev veth-vpc2-pub-n
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc2-private ip route add 10.0.0.0/16 via 172.16.2.1 dev veth-vpc2-prv-n
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Test Peering
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Test VPC1 to VPC2 connectivity&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc1-public ping &lt;span class="nt"&gt;-c&lt;/span&gt; 2 172.16.1.10

&lt;span class="c"&gt;# Test VPC2 to VPC1 connectivity&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc2-public ping &lt;span class="nt"&gt;-c&lt;/span&gt; 2 10.0.1.10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Success!&lt;/strong&gt; VPCs can now communicate through peering! 🌉&lt;/p&gt;




&lt;h2&gt;
  
  
  Part 10: Implementing Firewall Rules (Security Groups)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What are Security Groups?
&lt;/h3&gt;

&lt;p&gt;Security Groups are firewall rules that control inbound and outbound traffic. In our implementation, we'll use iptables in each namespace to enforce these rules.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Create a Firewall Policy
&lt;/h3&gt;

&lt;p&gt;Create a JSON file with your firewall rules:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; firewall-policy.json &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
{
  "subnet": "10.0.1.0/24",
  "ingress": [
    {"port": 80, "protocol": "tcp", "action": "allow"},
    {"port": 443, "protocol": "tcp", "action": "allow"},
    {"port": 22, "protocol": "tcp", "action": "deny"}
  ],
  "egress": [
    {"port": 0, "protocol": "all", "action": "allow"}
  ]
}
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Apply Firewall Rules
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Allow HTTP (port 80)&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc1-public iptables &lt;span class="nt"&gt;-A&lt;/span&gt; INPUT &lt;span class="nt"&gt;-p&lt;/span&gt; tcp &lt;span class="nt"&gt;--dport&lt;/span&gt; 80 &lt;span class="nt"&gt;-j&lt;/span&gt; ACCEPT

&lt;span class="c"&gt;# Allow HTTPS (port 443)&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc1-public iptables &lt;span class="nt"&gt;-A&lt;/span&gt; INPUT &lt;span class="nt"&gt;-p&lt;/span&gt; tcp &lt;span class="nt"&gt;--dport&lt;/span&gt; 443 &lt;span class="nt"&gt;-j&lt;/span&gt; ACCEPT

&lt;span class="c"&gt;# Deny SSH (port 22)&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc1-public iptables &lt;span class="nt"&gt;-A&lt;/span&gt; INPUT &lt;span class="nt"&gt;-p&lt;/span&gt; tcp &lt;span class="nt"&gt;--dport&lt;/span&gt; 22 &lt;span class="nt"&gt;-j&lt;/span&gt; DROP

&lt;span class="c"&gt;# Allow established connections&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc1-public iptables &lt;span class="nt"&gt;-A&lt;/span&gt; INPUT &lt;span class="nt"&gt;-m&lt;/span&gt; state &lt;span class="nt"&gt;--state&lt;/span&gt; ESTABLISHED,RELATED &lt;span class="nt"&gt;-j&lt;/span&gt; ACCEPT

&lt;span class="c"&gt;# Allow loopback&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc1-public iptables &lt;span class="nt"&gt;-A&lt;/span&gt; INPUT &lt;span class="nt"&gt;-i&lt;/span&gt; lo &lt;span class="nt"&gt;-j&lt;/span&gt; ACCEPT

&lt;span class="c"&gt;# Set default policy to DROP&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc1-public iptables &lt;span class="nt"&gt;-P&lt;/span&gt; INPUT DROP
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Deploy a Web Server
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Deploy a simple HTTP server in the public subnet&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;ns-vpc1-public python3 &lt;span class="nt"&gt;-m&lt;/span&gt; http.server 80 &amp;amp;

&lt;span class="c"&gt;# Test HTTP access (should work)&lt;/span&gt;
curl http://10.0.1.10:80

&lt;span class="c"&gt;# Test SSH access (should be blocked)&lt;/span&gt;
nc &lt;span class="nt"&gt;-zv&lt;/span&gt; 10.0.1.10 22
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Expected results&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;HTTP works ✅&lt;/li&gt;
&lt;li&gt;SSH is blocked ❌&lt;/li&gt;
&lt;/ul&gt;
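&lt;p&gt;When you're done testing, remember the Python server from Step 3 is still running in the background. Assuming it's the only &lt;code&gt;http.server&lt;/code&gt; process on the host, you can stop it with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Stop the background test server&lt;/span&gt;
sudo pkill &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"http.server 80"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;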




&lt;h2&gt;
  
  
  Part 11: Automating with vpcctl
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Why Automate?
&lt;/h3&gt;

&lt;p&gt;By now, you've learned how to manually create VPCs using Linux commands. You've seen how namespaces, bridges, veth pairs, routing, and NAT all work together. This knowledge is invaluable!&lt;/p&gt;

&lt;p&gt;However, creating VPCs manually requires dozens of commands and is error-prone. That's where &lt;code&gt;vpcctl&lt;/code&gt; comes in - it's a CLI tool that automates all the steps we've learned, making VPC management as simple as a single command.&lt;/p&gt;

&lt;h3&gt;
  
  
  Getting Started with vpcctl
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt;: Before using &lt;code&gt;vpcctl&lt;/code&gt;, make sure you've cloned the repository as shown in Part 2, Step 1. The &lt;code&gt;vpcctl&lt;/code&gt; tool is included in the repository.&lt;/p&gt;

&lt;p&gt;If you haven't done so already:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Clone the repository&lt;/span&gt;
git clone https://github.com/Donkross360/vpc-project.git
&lt;span class="nb"&gt;cd &lt;/span&gt;vpc-project

&lt;span class="c"&gt;# Make vpcctl executable&lt;/span&gt;
&lt;span class="nb"&gt;chmod&lt;/span&gt; +x vpcctl

&lt;span class="c"&gt;# Verify it works&lt;/span&gt;
./vpcctl &lt;span class="nt"&gt;--help&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Understanding What vpcctl Does
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;vpcctl&lt;/code&gt; is a Python CLI tool that automates everything we did manually:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Creates VPCs&lt;/strong&gt; - Sets up bridges with proper IP addresses&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Adds Subnets&lt;/strong&gt; - Creates namespaces, veth pairs, and configures routing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enables NAT&lt;/strong&gt; - Automatically configures NAT for public subnets&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Manages Peering&lt;/strong&gt; - Creates VPC peering connections&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Applies Firewall Rules&lt;/strong&gt; - Reads JSON policies and applies iptables rules&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enforces Isolation&lt;/strong&gt; - Ensures VPCs are isolated by default&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maintains State&lt;/strong&gt; - Tracks all VPCs in a JSON file&lt;/li&gt;
&lt;/ul&gt;
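&lt;p&gt;To make the firewall step concrete, here is a minimal sketch of how a JSON policy could translate into &lt;code&gt;iptables&lt;/code&gt; commands. The schema below is a hypothetical example for illustration only; the real format is whatever the policy files in the repository define (e.g. &lt;code&gt;policies/public-subnet.json&lt;/code&gt;).&lt;/p&gt;

```python
# Hypothetical sketch: translating a JSON firewall policy into iptables
# command lines. The policy schema here is an assumption for illustration,
# not vpcctl's actual format.
import json

policy_json = '''
{
  "rules": [
    {"port": 80, "protocol": "tcp", "action": "allow"},
    {"port": 22, "protocol": "tcp", "action": "deny"}
  ]
}
'''

def policy_to_iptables(policy: dict, chain: str = "FORWARD") -> list:
    """Build iptables command lines (as strings) from a parsed policy."""
    commands = []
    for rule in policy["rules"]:
        target = "ACCEPT" if rule["action"] == "allow" else "DROP"
        commands.append(
            f"iptables -A {chain} -p {rule['protocol']} "
            f"--dport {rule['port']} -j {target}"
        )
    return commands

for cmd in policy_to_iptables(json.loads(policy_json)):
    print(cmd)
```

&lt;p&gt;This mirrors the HTTP-allowed / SSH-blocked behaviour we verified manually earlier.&lt;/p&gt;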

&lt;h3&gt;
  
  
  Basic Usage
&lt;/h3&gt;

&lt;p&gt;Now that you understand how VPCs work manually, let's see how &lt;code&gt;vpcctl&lt;/code&gt; simplifies everything:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create a VPC (replaces all the manual bridge creation)&lt;/span&gt;
&lt;span class="nb"&gt;sudo&lt;/span&gt; ./vpcctl create &lt;span class="nt"&gt;--name&lt;/span&gt; myvpc &lt;span class="nt"&gt;--cidr&lt;/span&gt; 10.0.0.0/16

&lt;span class="c"&gt;# Add a public subnet (replaces namespace, veth, routing setup)&lt;/span&gt;
&lt;span class="nb"&gt;sudo&lt;/span&gt; ./vpcctl add-subnet &lt;span class="nt"&gt;--vpc&lt;/span&gt; myvpc &lt;span class="nt"&gt;--name&lt;/span&gt; public &lt;span class="nt"&gt;--cidr&lt;/span&gt; 10.0.1.0/24 &lt;span class="nt"&gt;--type&lt;/span&gt; public

&lt;span class="c"&gt;# Add a private subnet&lt;/span&gt;
&lt;span class="nb"&gt;sudo&lt;/span&gt; ./vpcctl add-subnet &lt;span class="nt"&gt;--vpc&lt;/span&gt; myvpc &lt;span class="nt"&gt;--name&lt;/span&gt; private &lt;span class="nt"&gt;--cidr&lt;/span&gt; 10.0.2.0/24 &lt;span class="nt"&gt;--type&lt;/span&gt; private

&lt;span class="c"&gt;# List all VPCs&lt;/span&gt;
&lt;span class="nb"&gt;sudo&lt;/span&gt; ./vpcctl list

&lt;span class="c"&gt;# Show VPC details (namespace, IPs, etc.)&lt;/span&gt;
&lt;span class="nb"&gt;sudo&lt;/span&gt; ./vpcctl show myvpc

&lt;span class="c"&gt;# Apply firewall rules from a JSON policy&lt;/span&gt;
&lt;span class="nb"&gt;sudo&lt;/span&gt; ./vpcctl apply-firewall &lt;span class="nt"&gt;--vpc&lt;/span&gt; myvpc &lt;span class="nt"&gt;--subnet&lt;/span&gt; public &lt;span class="nt"&gt;--policy&lt;/span&gt; policies/public-subnet.json

&lt;span class="c"&gt;# Create VPC peering (connects two VPCs)&lt;/span&gt;
&lt;span class="nb"&gt;sudo&lt;/span&gt; ./vpcctl peer &lt;span class="nt"&gt;--vpc1&lt;/span&gt; myvpc &lt;span class="nt"&gt;--vpc2&lt;/span&gt; another-vpc

&lt;span class="c"&gt;# Deploy an application (using helper script)&lt;/span&gt;
./examples/deploy-web-app.sh myvpc public 8000

&lt;span class="c"&gt;# Delete a VPC (cleans up everything)&lt;/span&gt;
&lt;span class="nb"&gt;sudo&lt;/span&gt; ./vpcctl delete &lt;span class="nt"&gt;--name&lt;/span&gt; myvpc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Using the Makefile (Even Easier!)
&lt;/h3&gt;

&lt;p&gt;The repository includes a &lt;code&gt;Makefile&lt;/code&gt; that makes common tasks even simpler:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# One command to set up everything and deploy a demo&lt;/span&gt;
make demo

&lt;span class="c"&gt;# Run comprehensive tests&lt;/span&gt;
make &lt;span class="nb"&gt;test&lt;/span&gt;

&lt;span class="c"&gt;# Clean up all VPCs&lt;/span&gt;
make clean

&lt;span class="c"&gt;# Show all available targets&lt;/span&gt;
make &lt;span class="nb"&gt;help&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;make demo&lt;/code&gt; command will:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Set up IP forwarding&lt;/li&gt;
&lt;li&gt;Create a demo VPC&lt;/li&gt;
&lt;li&gt;Add a public subnet&lt;/li&gt;
&lt;li&gt;Deploy a web application&lt;/li&gt;
&lt;li&gt;Show you the URL to access it&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Try it out!&lt;/p&gt;

&lt;h3&gt;
  
  
  What vpcctl Does Under the Hood
&lt;/h3&gt;

&lt;p&gt;Remember all those manual commands we ran? &lt;code&gt;vpcctl&lt;/code&gt; does the same things automatically:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;When you run &lt;code&gt;sudo ./vpcctl create --name myvpc --cidr 10.0.0.0/16&lt;/code&gt;:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creates a Linux bridge (&lt;code&gt;br-myvpc&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Assigns an IP address to the bridge&lt;/li&gt;
&lt;li&gt;Enables the bridge&lt;/li&gt;
&lt;li&gt;Stores VPC information in &lt;code&gt;.vpcctl/vpcs.json&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;When you run &lt;code&gt;sudo ./vpcctl add-subnet --vpc myvpc --name public --cidr 10.0.1.0/24 --type public&lt;/code&gt;:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creates a network namespace (&lt;code&gt;ns-myvpc-public&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Creates a veth pair&lt;/li&gt;
&lt;li&gt;Moves one end into the namespace&lt;/li&gt;
&lt;li&gt;Connects the other end to the bridge&lt;/li&gt;
&lt;li&gt;Assigns IP addresses&lt;/li&gt;
&lt;li&gt;Configures routing tables&lt;/li&gt;
&lt;li&gt;Enables proxy ARP&lt;/li&gt;
&lt;li&gt;Sets up NAT (if public subnet)&lt;/li&gt;
&lt;li&gt;Configures DNS&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of this happens automatically! 🎉&lt;/p&gt;

&lt;h3&gt;
  
  
  Benefits of Automation
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Idempotent&lt;/strong&gt;: Safe to run multiple times - won't create duplicates&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;State management&lt;/strong&gt;: Tracks VPCs in JSON file (&lt;code&gt;.vpcctl/vpcs.json&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Logging&lt;/strong&gt;: All actions are logged to &lt;code&gt;.vpcctl/vpcctl.log&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error handling&lt;/strong&gt;: Validates inputs and handles errors gracefully&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consistent&lt;/strong&gt;: Same results every time&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fast&lt;/strong&gt;: Creates VPCs in seconds instead of minutes&lt;/li&gt;
&lt;/ul&gt;
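&lt;p&gt;The idempotency and state-tracking ideas can be sketched in a few lines of Python. This is not &lt;code&gt;vpcctl&lt;/code&gt;'s actual code, and the on-disk schema here is an assumption; it only illustrates the pattern of checking recorded state before acting:&lt;/p&gt;

```python
# Minimal sketch of idempotent, state-tracked VPC creation, in the spirit
# of vpcctl's .vpcctl/vpcs.json state file. Schema is illustrative only.
import json
from pathlib import Path

STATE_FILE = Path(".vpcctl/vpcs.json")

def load_state() -> dict:
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"vpcs": {}}

def save_state(state: dict) -> None:
    STATE_FILE.parent.mkdir(parents=True, exist_ok=True)
    STATE_FILE.write_text(json.dumps(state, indent=2))

def create_vpc(name: str, cidr: str) -> bool:
    """Record a VPC; return False if it already exists (idempotent)."""
    state = load_state()
    if name in state["vpcs"]:
        return False  # already recorded -- safe to call again, no duplicate
    state["vpcs"][name] = {"cidr": cidr, "subnets": {}}
    save_state(state)
    return True

print(create_vpc("myvpc", "10.0.0.0/16"))  # True the first time
print(create_vpc("myvpc", "10.0.0.0/16"))  # False on a repeat call
```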




&lt;h2&gt;
  
  
  Part 12: Cleanup and Best Practices
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Cleaning Up
&lt;/h3&gt;

&lt;p&gt;Always clean up your test resources:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Use the cleanup script&lt;/span&gt;
&lt;span class="nb"&gt;sudo&lt;/span&gt; ./cleanup.sh

&lt;span class="c"&gt;# Or manually delete VPCs&lt;/span&gt;
&lt;span class="nb"&gt;sudo&lt;/span&gt; ./vpcctl delete &lt;span class="nt"&gt;--name&lt;/span&gt; myvpc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Best Practices
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Use descriptive names&lt;/strong&gt;: Name your VPCs and subnets clearly&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Plan your IP ranges&lt;/strong&gt;: Avoid overlapping CIDR blocks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Test isolation&lt;/strong&gt;: Always verify that VPCs are isolated&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Document your setup&lt;/strong&gt;: Keep notes on your VPC configurations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Clean up regularly&lt;/strong&gt;: Remove unused VPCs to free resources&lt;/li&gt;
&lt;/ol&gt;
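&lt;p&gt;Best practice #2 is easy to automate. A small helper using Python's standard &lt;code&gt;ipaddress&lt;/code&gt; module (the helper name is mine, not part of &lt;code&gt;vpcctl&lt;/code&gt;) can flag overlapping CIDR blocks before you create anything:&lt;/p&gt;

```python
# Detect overlapping CIDR blocks with the standard-library ipaddress module.
import ipaddress

def overlapping_pairs(cidrs):
    """Return the pairs of CIDR strings that overlap each other."""
    nets = [ipaddress.ip_network(c) for c in cidrs]
    pairs = []
    for i in range(len(nets)):
        for j in range(i + 1, len(nets)):
            if nets[i].overlaps(nets[j]):
                pairs.append((cidrs[i], cidrs[j]))
    return pairs

# 10.0.1.0/24 sits inside 10.0.0.0/16, so that pair is flagged
print(overlapping_pairs(["10.0.0.0/16", "10.0.1.0/24", "192.168.0.0/24"]))
```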

&lt;h3&gt;
  
  
  Common Pitfalls
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Interface name length&lt;/strong&gt;: Linux limits interface names to 15 characters&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Subnet mismatches&lt;/strong&gt;: Gateway IP must be in the same subnet as the namespace&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Missing proxy ARP&lt;/strong&gt;: Required for inter-subnet routing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DNS configuration&lt;/strong&gt;: Don't forget to configure DNS for public subnets&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;NAT rules&lt;/strong&gt;: Only public subnets need NAT rules&lt;/li&gt;
&lt;/ol&gt;
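&lt;p&gt;The first two pitfalls can be caught programmatically before running any &lt;code&gt;ip&lt;/code&gt; commands. A sketch using the standard &lt;code&gt;ipaddress&lt;/code&gt; module (these helper names are illustrative, not part of &lt;code&gt;vpcctl&lt;/code&gt;):&lt;/p&gt;

```python
# Guard against pitfall 1 (interface name length) and pitfall 2
# (gateway outside the subnet) before creating anything.
import ipaddress

IFNAMSIZ_MAX = 15  # kernel's IFNAMSIZ is 16, including the trailing NUL byte

def valid_ifname(name: str) -> bool:
    """True when the interface name fits within the kernel's limit."""
    return len(name) in range(1, IFNAMSIZ_MAX + 1)

def gateway_in_subnet(gateway: str, cidr: str) -> bool:
    """True when the gateway address belongs to the subnet's CIDR block."""
    return ipaddress.ip_address(gateway) in ipaddress.ip_network(cidr)

print(valid_ifname("veth-myvpc-public"))   # 17 characters: too long
print(valid_ifname("veth-pub0"))
print(gateway_in_subnet("10.0.1.1", "10.0.1.0/24"))
print(gateway_in_subnet("10.0.2.1", "10.0.1.0/24"))
```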




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Congratulations! You've successfully built your own VPC from scratch! 🎉&lt;/p&gt;

&lt;h3&gt;
  
  
  What You've Accomplished
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Created isolated virtual networks using Linux namespaces&lt;/li&gt;
&lt;li&gt;Connected networks using Linux bridges and veth pairs&lt;/li&gt;
&lt;li&gt;Implemented routing between subnets&lt;/li&gt;
&lt;li&gt;Enabled internet access using NAT&lt;/li&gt;
&lt;li&gt;Enforced VPC isolation&lt;/li&gt;
&lt;li&gt;Created VPC peering connections&lt;/li&gt;
&lt;li&gt;Implemented firewall rules&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Next Steps
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Experiment&lt;/strong&gt;: Try creating more complex VPC topologies&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automate&lt;/strong&gt;: Use &lt;code&gt;vpcctl&lt;/code&gt; to manage your VPCs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deploy applications&lt;/strong&gt;: Run real applications in your VPCs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Learn more&lt;/strong&gt;: Explore advanced networking topics like BGP, VPNs, and load balancing&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Resources
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Repository&lt;/strong&gt;: Clone the project from GitHub to get &lt;code&gt;vpcctl&lt;/code&gt; and all examples&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;README.md&lt;/strong&gt;: Complete CLI documentation and usage examples&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Linux Networking Documentation&lt;/strong&gt;: &lt;a href="https://www.kernel.org/doc/Documentation/networking/" rel="noopener noreferrer"&gt;Official Linux networking docs&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Makefile&lt;/strong&gt;: Run &lt;code&gt;make help&lt;/code&gt; to see all available automation targets&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Final Thoughts
&lt;/h3&gt;

&lt;p&gt;Building a VPC from scratch teaches you the fundamentals of network virtualization. The concepts you've learned here apply to cloud providers, container networking (Docker, Kubernetes), and virtualized environments.&lt;/p&gt;

&lt;p&gt;Remember: Understanding the fundamentals makes you a better engineer. Keep learning, keep building, and keep experimenting! 🚀&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Happy VPC Building!&lt;/strong&gt; 🌐&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you found this guide helpful, please share it with others who might benefit from it!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>networking</category>
      <category>beginners</category>
      <category>tutorial</category>
      <category>linux</category>
    </item>
    <item>
      <title>Creating Your First Isolated Linux Container with SSH Access: A Step-by-Step Guide</title>
      <dc:creator>Mart Young</dc:creator>
      <pubDate>Thu, 27 Mar 2025 22:46:01 +0000</pubDate>
      <link>https://dev.to/mart_young_ce778e4c31eb33/creating-your-first-isolated-linux-container-with-ssh-access-a-step-by-step-guide-3j6h</link>
      <guid>https://dev.to/mart_young_ce778e4c31eb33/creating-your-first-isolated-linux-container-with-ssh-access-a-step-by-step-guide-3j6h</guid>
      <description>&lt;h3&gt;
  
  
  &lt;strong&gt;Introduction&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Have you ever wondered how tools like Docker create isolated environments? In this guide, we’ll build a simple container from scratch using Linux namespaces. You’ll create a fully isolated environment with its own network and filesystem, then access it remotely via SSH or &lt;code&gt;nsenter&lt;/code&gt;. No prior container experience needed!&lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;What You’ll Learn&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;How Linux namespaces work
&lt;/li&gt;
&lt;li&gt;Setting up a minimal container filesystem
&lt;/li&gt;
&lt;li&gt;Configuring isolated networking
&lt;/li&gt;
&lt;li&gt;Creating a custom cgroup&lt;/li&gt;
&lt;li&gt;Limiting CPU usage&lt;/li&gt;
&lt;li&gt;Starting a web server and hosting a simple webpage
&lt;/li&gt;
&lt;li&gt;Accessing the container via &lt;code&gt;nsenter&lt;/code&gt;, or optionally via SSH with Dropbear (a lightweight SSH server)&lt;/li&gt;
&lt;/ol&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;Prerequisites&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;A Linux system (Ubuntu 24.04 used here)
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;sudo&lt;/code&gt; access
&lt;/li&gt;
&lt;li&gt;Basic terminal familiarity
&lt;/li&gt;
&lt;li&gt;Internet connection
&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;Step 1: The Complete Script&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Here’s the full script we’ll be using. Don’t worry if it looks complex—we’ll break it down line by line!&lt;br&gt;&lt;br&gt;
Save this as &lt;code&gt;create-container.sh&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/usr/bin/env bash&lt;/span&gt;
&lt;span class="c"&gt;# setup_isolated_container_ssh.sh&lt;/span&gt;
&lt;span class="c"&gt;# This script sets up a fully isolated container environment with its own network namespace using a veth pair,&lt;/span&gt;
&lt;span class="c"&gt;# then chroots into a minimal filesystem and starts Dropbear SSH daemon and a BusyBox HTTP server.&lt;/span&gt;

&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-eo&lt;/span&gt; pipefail

&lt;span class="c"&gt;# CONFIGURATION VARIABLES&lt;/span&gt;
&lt;span class="nv"&gt;CONTAINER_ROOT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/opt/isolated_container"&lt;/span&gt;
&lt;span class="nv"&gt;HOST_VETH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"veth-host"&lt;/span&gt;
&lt;span class="nv"&gt;CONTAINER_VETH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"veth-container"&lt;/span&gt;
&lt;span class="nv"&gt;HOST_IP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"192.168.200.1/24"&lt;/span&gt;
&lt;span class="nv"&gt;CONTAINER_IP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"192.168.200.2/24"&lt;/span&gt;
&lt;span class="nv"&gt;GATEWAY_IP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"192.168.200.1"&lt;/span&gt;
&lt;span class="nv"&gt;SSH_PORT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;22
&lt;span class="nv"&gt;HTTP_PORT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;80

&lt;span class="c"&gt;# Clean up any existing container filesystem&lt;/span&gt;
&lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CONTAINER_ROOT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CONTAINER_ROOT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;/&lt;span class="o"&gt;{&lt;/span&gt;bin,etc/dropbear,proc,dev,www&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Create device nodes (if not already present)&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CONTAINER_ROOT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/dev/null"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nb"&gt;mknod&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt; 666 &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CONTAINER_ROOT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/dev/null"&lt;/span&gt; c 1 3
&lt;span class="k"&gt;fi
if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CONTAINER_ROOT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/dev/urandom"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nb"&gt;mknod&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt; 666 &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CONTAINER_ROOT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/dev/urandom"&lt;/span&gt; c 1 9
&lt;span class="k"&gt;fi&lt;/span&gt;

&lt;span class="c"&gt;# Set up BusyBox in the container filesystem&lt;/span&gt;
&lt;span class="nb"&gt;cp&lt;/span&gt; /bin/busybox &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CONTAINER_ROOT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/bin/"&lt;/span&gt;
&lt;span class="nb"&gt;chmod&lt;/span&gt; +x &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CONTAINER_ROOT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/bin/busybox"&lt;/span&gt;
&lt;span class="nb"&gt;cd&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CONTAINER_ROOT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/bin"&lt;/span&gt;
&lt;span class="nb"&gt;ln&lt;/span&gt; &lt;span class="nt"&gt;-sf&lt;/span&gt; busybox sh
&lt;span class="nb"&gt;ln&lt;/span&gt; &lt;span class="nt"&gt;-sf&lt;/span&gt; busybox &lt;span class="nb"&gt;ls
ln&lt;/span&gt; &lt;span class="nt"&gt;-sf&lt;/span&gt; busybox &lt;span class="nb"&gt;mkdir
ln&lt;/span&gt; &lt;span class="nt"&gt;-sf&lt;/span&gt; busybox &lt;span class="nb"&gt;cat
ln&lt;/span&gt; &lt;span class="nt"&gt;-sf&lt;/span&gt; busybox &lt;span class="nb"&gt;echo
ln&lt;/span&gt; &lt;span class="nt"&gt;-sf&lt;/span&gt; busybox httpd
&lt;span class="nb"&gt;cd&lt;/span&gt; -

&lt;span class="c"&gt;# Set up Dropbear in the container filesystem&lt;/span&gt;
&lt;span class="nb"&gt;cp&lt;/span&gt; /usr/sbin/dropbear &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CONTAINER_ROOT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/bin/dropbear"&lt;/span&gt;
&lt;span class="nb"&gt;chmod&lt;/span&gt; +x &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CONTAINER_ROOT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/bin/dropbear"&lt;/span&gt;
&lt;span class="c"&gt;# Copy required libraries for dropbear (using ldd)&lt;/span&gt;
ldd /usr/sbin/dropbear | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'/=&amp;gt;/ {print $3}'&lt;/span&gt; | &lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="nb"&gt;read&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; lib&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
    &lt;/span&gt;&lt;span class="nv"&gt;dest_dir&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CONTAINER_ROOT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;dirname&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$lib&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$dest_dir&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$lib&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$dest_dir&lt;/span&gt;&lt;span class="s2"&gt;/"&lt;/span&gt;
&lt;span class="k"&gt;done&lt;/span&gt;

&lt;span class="c"&gt;# Create minimal passwd and group files&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"root:x:0:0:root:/:/bin/sh"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CONTAINER_ROOT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/etc/passwd"&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"root:x:0:"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CONTAINER_ROOT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/etc/group"&lt;/span&gt;

&lt;span class="c"&gt;# Generate SSH host key if not present&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CONTAINER_ROOT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/etc/dropbear/dropbear_rsa_host_key"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;dropbearkey &lt;span class="nt"&gt;-t&lt;/span&gt; rsa &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CONTAINER_ROOT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/etc/dropbear/dropbear_rsa_host_key"&lt;/span&gt;
&lt;span class="k"&gt;fi&lt;/span&gt;

&lt;span class="c"&gt;# Set up web content (for testing; remove if not needed)&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Isolated Container Web Server Ready!"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CONTAINER_ROOT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/www/index.html"&lt;/span&gt;

&lt;span class="c"&gt;# Set up veth pair on the host&lt;/span&gt;
ip &lt;span class="nb"&gt;link &lt;/span&gt;add &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$HOST_VETH&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nb"&gt;type &lt;/span&gt;veth peer name &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$CONTAINER_VETH&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
ip addr add &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$HOST_IP&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; dev &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$HOST_VETH&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
ip &lt;span class="nb"&gt;link set&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$HOST_VETH&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; up

&lt;span class="c"&gt;# Launch an isolated container shell with unshare and capture its PID&lt;/span&gt;
&lt;span class="nv"&gt;CONTAINER_PID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;unshare &lt;span class="nt"&gt;--fork&lt;/span&gt; &lt;span class="nt"&gt;--pid&lt;/span&gt; &lt;span class="nt"&gt;--mount&lt;/span&gt; &lt;span class="nt"&gt;--uts&lt;/span&gt; &lt;span class="nt"&gt;--ipc&lt;/span&gt; &lt;span class="nt"&gt;--net&lt;/span&gt; &lt;span class="nt"&gt;--user&lt;/span&gt; &lt;span class="nt"&gt;--map-root-user&lt;/span&gt; bash &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'echo $$; exec sleep infinity'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Container PID: &lt;/span&gt;&lt;span class="nv"&gt;$CONTAINER_PID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="c"&gt;# Move the container side of the veth pair into the container's network namespace&lt;/span&gt;
ip &lt;span class="nb"&gt;link set&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$CONTAINER_VETH&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; netns &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$CONTAINER_PID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="c"&gt;# Configure networking inside the container using nsenter&lt;/span&gt;
nsenter &lt;span class="nt"&gt;--target&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$CONTAINER_PID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--net&lt;/span&gt; bash &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"
  ip addr add '&lt;/span&gt;&lt;span class="nv"&gt;$CONTAINER_IP&lt;/span&gt;&lt;span class="s2"&gt;' dev &lt;/span&gt;&lt;span class="nv"&gt;$CONTAINER_VETH&lt;/span&gt;&lt;span class="s2"&gt; &amp;amp;&amp;amp;
  ip link set &lt;/span&gt;&lt;span class="nv"&gt;$CONTAINER_VETH&lt;/span&gt;&lt;span class="s2"&gt; up &amp;amp;&amp;amp;
  ip link set lo up &amp;amp;&amp;amp;
  ip route add default via '&lt;/span&gt;&lt;span class="nv"&gt;$GATEWAY_IP&lt;/span&gt;&lt;span class="s2"&gt;'
"&lt;/span&gt;
&lt;span class="c"&gt;# Chroot into the container and start SSH and HTTP services&lt;/span&gt;
  ip &lt;span class="nb"&gt;link set &lt;/span&gt;lo up &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt;
  ip route add default via &lt;span class="s1"&gt;'$GATEWAY_IP'&lt;/span&gt;
&lt;span class="s2"&gt;"

# Mount proc in the container
mkdir -p "&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CONTAINER_ROOT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;/proc&lt;span class="s2"&gt;"
nsenter --target "&lt;/span&gt;&lt;span class="nv"&gt;$CONTAINER_PID&lt;/span&gt;&lt;span class="s2"&gt;" --mount bash -c "&lt;/span&gt;mount &lt;span class="nt"&gt;-t&lt;/span&gt; proc proc &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CONTAINER_ROOT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;/proc&lt;span class="s2"&gt;"


nsenter --target "&lt;/span&gt;&lt;span class="nv"&gt;$CONTAINER_PID&lt;/span&gt;&lt;span class="s2"&gt;" --mount chroot "&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CONTAINER_ROOT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;" /bin/sh -c "&lt;/span&gt;
nsenter &lt;span class="nt"&gt;--target&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$CONTAINER_PID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--mount&lt;/span&gt; &lt;span class="nb"&gt;chroot&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CONTAINER_ROOT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; /bin/sh &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"
  echo 'Inside container: starting HTTP server and Dropbear SSH daemon...';
  echo 'Inside container: starting HTTP server and Dropbear SSH daemon...';
  /bin/httpd -f -p &lt;/span&gt;&lt;span class="nv"&gt;$HTTP_PORT&lt;/span&gt;&lt;span class="s2"&gt; -h /www &amp;amp;
  exec /bin/dropbear -F -E -p &lt;/span&gt;&lt;span class="nv"&gt;$SSH_PORT&lt;/span&gt;&lt;span class="s2"&gt;
"&lt;/span&gt;

&lt;span class="c"&gt;# End of script&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
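&lt;p&gt;One subtle part of the script is the library-copying step, which parses &lt;code&gt;ldd&lt;/code&gt; output with &lt;code&gt;awk '/=&amp;gt;/ {print $3}'&lt;/code&gt;. The same extraction can be sketched in Python against a captured sample of &lt;code&gt;ldd&lt;/code&gt; output (the sample library paths below are illustrative):&lt;/p&gt;

```python
# Extract resolved shared-library paths from ldd output, equivalent to
# the script's awk '/=>/ {print $3}' step. The sample output is synthetic.
def shared_lib_paths(ldd_output: str) -> list:
    """Return the filesystem paths that ldd resolved each library to."""
    paths = []
    for line in ldd_output.splitlines():
        parts = line.split()
        if "=>" in parts:
            idx = parts.index("=>")
            # skip lines like "libfoo => not found"
            if len(parts) > idx + 1 and parts[idx + 1].startswith("/"):
                paths.append(parts[idx + 1])
    return paths

sample = (
    "\tlinux-vdso.so.1 (0x00007ffc8a9fe000)\n"
    "\tlibcrypt.so.1 => /lib/x86_64-linux-gnu/libcrypt.so.1 (0x00007f1a2b000000)\n"
    "\tlibc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f1a2ac00000)\n"
)
print(shared_lib_paths(sample))
```

&lt;p&gt;Each returned path is then copied into the container root, preserving its directory layout, exactly as the shell loop above does.&lt;/p&gt;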






&lt;h3&gt;
  
  
  &lt;strong&gt;Step 2: Understanding Key Components&lt;/strong&gt;
&lt;/h3&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;1. Container Filesystem&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;/bin&lt;/code&gt;&lt;/strong&gt;: Contains BusyBox (provides basic Linux commands)
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;/dev&lt;/code&gt;&lt;/strong&gt;: Device files like &lt;code&gt;null&lt;/code&gt; and &lt;code&gt;urandom&lt;/code&gt; (required for programs to work)
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;/etc/dropbear&lt;/code&gt;&lt;/strong&gt;: Stores SSH host keys
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;2. Network Setup&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Creates a virtual Ethernet pair (&lt;code&gt;veth-host&lt;/code&gt; on the host, &lt;code&gt;veth-container&lt;/code&gt; in the container)
&lt;/li&gt;
&lt;li&gt;Container gets IP &lt;code&gt;192.168.200.2&lt;/code&gt;, host uses &lt;code&gt;192.168.200.1&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Dropbear listens on the container’s SSH port (&lt;code&gt;22&lt;/code&gt;), reachable from the host at &lt;code&gt;192.168.200.2&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;Step 3: Running the Script&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Make it executable&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   &lt;span class="nb"&gt;chmod&lt;/span&gt; +x create-container.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Execute with sudo&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   &lt;span class="nb"&gt;sudo&lt;/span&gt; ./create-container.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Connect via NSENTER&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   - to get the PID run : ps aux | &lt;span class="nb"&gt;grep sleep  
   sudo &lt;/span&gt;nsenter &lt;span class="nt"&gt;--target&lt;/span&gt; &amp;lt;PID&amp;gt; &lt;span class="nt"&gt;--mount&lt;/span&gt; &lt;span class="nt"&gt;--uts&lt;/span&gt; &lt;span class="nt"&gt;--ipc&lt;/span&gt; &lt;span class="nt"&gt;--net&lt;/span&gt; &lt;span class="nt"&gt;--pid&lt;/span&gt; bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;OR&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Connect via SSH&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   ssh root@localhost &lt;span class="nt"&gt;-p&lt;/span&gt; 2222
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  &lt;strong&gt;Step 4: What to Expect&lt;/strong&gt;
&lt;/h3&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Connection&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;You’ll see a warning about the SSH key fingerprint (this is normal). Type &lt;code&gt;yes&lt;/code&gt; to continue.  &lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Inside the Container&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Try these commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;ls&lt;/span&gt; /      &lt;span class="c"&gt;# See container filesystem&lt;/span&gt;
ip addr   &lt;span class="c"&gt;# Show container's network&lt;/span&gt;
ps aux    &lt;span class="c"&gt;# List running processes (only container's)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  &lt;strong&gt;Step 5: Start a Web Server&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Create a file, e.g., &lt;code&gt;/opt/isolated_container/start_services.sh&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  #!/bin/sh
  echo "Starting HTTP server..."
  /bin/httpd -p 80 -h /www &amp;amp;
  echo "Starting Dropbear SSH daemon..."
  /bin/dropbear -F -E -p 22
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;p&gt;Make the script executable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  chmod +x /opt/isolated_container/start_services.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;p&gt;Then run it inside the chroot to start both services:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  chroot /opt/isolated_container /bin/sh -c "/start_services.sh"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
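A typo in start_services.sh only surfaces once the container boots, so it can be worth parse-checking the script first. This is an optional sketch, not part of the original walkthrough: sh -n reads the script and reports syntax errors without executing anything, so it is safe even though httpd and dropbear are not installed on the host.

```shell
# Parse-check only: sh -n never runs the commands in the script.
# The || branch also fires if the file does not exist yet.
sh -n /opt/isolated_container/start_services.sh || echo "check the script for typos"
```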






&lt;h3&gt;
  
  
  &lt;strong&gt;Troubleshooting&lt;/strong&gt;
&lt;/h3&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;SSH Connection Fails&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Verify port forwarding:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  iptables &lt;span class="nt"&gt;-t&lt;/span&gt; nat &lt;span class="nt"&gt;-L&lt;/span&gt; PREROUTING
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  &lt;strong&gt;Permission Denied&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Ensure you used &lt;code&gt;sudo&lt;/code&gt; when running the script
&lt;/li&gt;
&lt;li&gt;Recreate device files if missing:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  &lt;span class="nb"&gt;mknod&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt; 666 /opt/my_container/dev/null c 1 3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
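The `c 1 3` arguments tell mknod to create a character device with major number 1 and minor number 3, which is the kernel's identity for /dev/null. You can confirm those numbers against the host's own node (GNU stat assumed; %t and %T print a device node's major and minor numbers):

```shell
# For /dev/null this prints major=1 minor=3, matching `mknod ... c 1 3`.
# (The values are printed in hex, but 1 and 3 look the same either way.)
stat -c 'major=%t minor=%T' /dev/null
```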






&lt;h3&gt;
  
  
  &lt;strong&gt;How It Works: Beginner’s Glossary&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Namespace&lt;/strong&gt;: An isolated workspace for processes (like a private room)
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Veth Pair&lt;/strong&gt;: Virtual Ethernet cable connecting host and container
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;BusyBox&lt;/strong&gt;: Swiss Army knife of Linux commands in a single file
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dropbear&lt;/strong&gt;: Lightweight SSH server (uses fewer resources than OpenSSH)
&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;Next Steps&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Add Persistence&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
   Create files in &lt;code&gt;/opt/my_container/www&lt;/code&gt;.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Customize SSH&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
   Add your public key to &lt;code&gt;/opt/my_container/root/.ssh/authorized_keys&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Verify cgroups v2 is Active&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   mount | grep cgroup2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Create a Custom cgroup&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   mkdir /sys/fs/cgroup/mycontainer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Limit CPU Usage&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   echo "50000 100000" &amp;gt; /sys/fs/cgroup/mycontainer/cpu.max
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
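Those two numbers are the cgroup's quota and period in microseconds: the group may consume at most 50,000 µs of CPU time per 100,000 µs window, i.e. half of one CPU. The arithmetic, as a quick check:

```shell
# cpu.max is "quota period": quota µs of CPU time allowed per
# period µs of wall clock, so quota/period is the CPU fraction.
QUOTA=50000
PERIOD=100000
echo "CPU cap: $((QUOTA * 100 / PERIOD))%"   # CPU cap: 50%
```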



&lt;p&gt;&lt;strong&gt;Limit Memory Usage&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   echo $((512 * 1024 * 1024)) &amp;gt; /sys/fs/cgroup/mycontainer/memory.max
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
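memory.max expects a raw byte count, which is why the limit is written as shell arithmetic. As a sanity check, 512 MiB expands to:

```shell
# MiB to bytes: 512 * 1024 (KiB) * 1024 (bytes)
LIMIT=$((512 * 1024 * 1024))
echo "$LIMIT"   # 536870912
```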



&lt;p&gt;&lt;strong&gt;Explore More&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
   Try installing additional software in the container using &lt;code&gt;chroot&lt;/code&gt;.  &lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;You’ve just created a fully isolated Linux container from scratch! While this isn’t production-ready, it demonstrates core containerization concepts used by Docker and Kubernetes. Experiment with modifying the network setup or adding new services to deepen your understanding.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Happy container hacking!&lt;/strong&gt; 🐧🔒&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Hosting a Page with Nginx: A Newbie’s Journey</title>
      <dc:creator>Mart Young</dc:creator>
      <pubDate>Wed, 29 Jan 2025 18:34:31 +0000</pubDate>
      <link>https://dev.to/mart_young_ce778e4c31eb33/hosting-a-page-with-nginx-a-newbies-journey-39i6</link>
      <guid>https://dev.to/mart_young_ce778e4c31eb33/hosting-a-page-with-nginx-a-newbies-journey-39i6</guid>
      <description>&lt;p&gt;How I learned to launch a simple HTML site (and you can too!)&lt;/p&gt;

&lt;p&gt;As someone just dipping their toes into the worlds of &lt;a href="https://hng.tech/hire/devops-engineers" rel="noopener noreferrer"&gt;DevOps Engineers&lt;/a&gt;, &lt;a href="https://hng.tech/hire/kubernetes-specialists" rel="noopener noreferrer"&gt;Kubernetes Specialists&lt;/a&gt;, and web hosting, I wanted to conquer a classic "first project": setting up a web server. My goal? Run Nginx on Ubuntu, host a page with my name, and document every stumble and triumph. Spoiler: It worked! Here’s how.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Step 1: Preparing My Ubuntu Playground&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;I started by firing up a clean Ubuntu 22.04 environment. For simplicity, I used a virtual machine (an EC2 instance) on AWS (&lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html" rel="noopener noreferrer"&gt;docs here&lt;/a&gt;), but you could also:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Use any other cloud server (Google Cloud/Azure/DigitalOcean)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Or even Docker (which I’ll mention later!).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Pro Tip:&lt;/em&gt; If you’re using Docker, this one-liner saves time (see &lt;a href="https://docs.docker.com/get-started/" rel="noopener noreferrer"&gt;Docker’s getting-started guide&lt;/a&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; 80:80 &lt;span class="nt"&gt;--name&lt;/span&gt; my-nginx ubuntu:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Step 2: Installing Nginx – The “Aha!” Moment&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Updating packages felt like stretching before a workout:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then came the magic command, as recommended in &lt;a href="https://nginx.org/en/linux_packages.html" rel="noopener noreferrer"&gt;Nginx’s official installation guide&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;nginx &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Fun Fact:&lt;/strong&gt; Nginx (pronounced "Engine-X") powers over &lt;a href="https://w3techs.com/technologies/details/ws-nginx" rel="noopener noreferrer"&gt;33% of websites&lt;/a&gt;. Now I’m one of them! &lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Step 3: Bringing Nginx to Life&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Starting the service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl start nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Making sure it auto-restarts on reboots (learn more about &lt;code&gt;systemctl&lt;/code&gt; &lt;a href="https://www.freedesktop.org/software/systemd/man/systemctl.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;At this point, I nervously opened my browser to &lt;code&gt;http://&amp;lt;aws public ip&amp;gt;&lt;/code&gt;... and saw the default Nginx page! Progress!&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Step 4: Crafting My Digital Hello World&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Time to replace the generic page with my own. I navigated to the web root (Nginx’s default directory structure &lt;a href="https://nginx.org/en/docs/beginners_guide.html" rel="noopener noreferrer"&gt;documented here&lt;/a&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; /var/www/html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then I edited &lt;code&gt;index.html&lt;/code&gt; with Nano (Vim warriors, fight me later):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;nano index.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I wrote this barebones HTML (swap "[My Name]" with yours!):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="cp"&gt;&amp;lt;!DOCTYPE html&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;html&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;head&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;title&amp;gt;&lt;/span&gt;Welcome to My Corner of the Internet&lt;span class="nt"&gt;&amp;lt;/title&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;style&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;body&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nl"&gt;display&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;flex&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="nl"&gt;flex-direction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;column&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="nl"&gt;align-items&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;center&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="nl"&gt;justify-content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;center&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="nl"&gt;min-height&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;100vh&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c"&gt;/* Full viewport height */&lt;/span&gt;
            &lt;span class="nl"&gt;margin&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="nl"&gt;font-family&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Arial&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;sans-serif&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="nt"&gt;h1&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nl"&gt;color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;#2c3e50&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c"&gt;/* Modern dark-blue color */&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="nt"&gt;p&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nl"&gt;font-size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1.2em&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;/style&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/head&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;body&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;h1&amp;gt;&lt;/span&gt;Hey there! I'm [my name].&lt;span class="nt"&gt;&amp;lt;/h1&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;h2&amp;gt;&lt;/span&gt;Welcome to DevOps Stage 0 - [slack display name]&lt;span class="nt"&gt;&amp;lt;/h2&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/body&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/html&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Ctrl+O to save, Ctrl+X to exit. Simple as that!&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Step 5: The Grand Reveal&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;I refreshed &lt;code&gt;http://&amp;lt;AWS - public-ip&amp;gt;&lt;/code&gt; and… &lt;strong&gt;there it was!&lt;/strong&gt; My name in bold, staring back like a digital monument. For extra validation, on my terminal I ran:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://&amp;lt;AWS - public-ip&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For Docker:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://localhost
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The terminal echoed my HTML – pure satisfaction.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0avk1xufdln9a014c7f9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0avk1xufdln9a014c7f9.png" alt="webpage-screenshot" width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Pro Tip:&lt;/em&gt; When trying to access your webpage via the AWS public IP, remember to use &lt;em&gt;http&lt;/em&gt;, not &lt;em&gt;https&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Bumps Along the way&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security Group Fiasco:&lt;/strong&gt; My first attempt failed due to security group issues on AWS. I had to spend some time reading the &lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-groups.html" rel="noopener noreferrer"&gt;AWS docs&lt;/a&gt; and resolved it by adding an inbound rule to my security group.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Docker Dilemma:&lt;/strong&gt; When testing the Docker method, I forgot to mount the HTML file. Pro tip for Docker fans (&lt;a href="https://docs.docker.com/storage/volumes/" rel="noopener noreferrer"&gt;Docker volumes explained here&lt;/a&gt;):&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; 80:80 &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;/index.html:/usr/share/nginx/html/index.html nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
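One detail worth knowing about that fix: the host side of a -v bind mount must be an absolute path, which is exactly what $(pwd) supplies. A small sketch of that check (the HTML_PATH variable is just for illustration):

```shell
# Bind-mount sources must be absolute; $(pwd) expands to the current
# directory's absolute path, so the first case branch always matches here.
HTML_PATH="$(pwd)/index.html"
case "$HTML_PATH" in
  /*) echo "absolute: $HTML_PATH" ;;
  *)  echo "relative paths will not work with -v" ;;
esac
```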






&lt;h2&gt;
  
  
  &lt;strong&gt;Why This Matters&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This tiny project taught me:  &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Servers aren’t magic&lt;/strong&gt; – just software following instructions.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Documentation is survival gear.&lt;/strong&gt; I kept notes like this:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   # My Cheat Sheet
   - Web root: /var/www/html
   - Test command: curl localhost
   - Logs: /var/log/nginx/error.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Troubleshooting is 80% of the job.&lt;/strong&gt; (But that 20% success? Worth it.)&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;References &amp;amp; Further Reading&lt;/strong&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://nginx.org/en/docs/beginners_guide.html" rel="noopener noreferrer"&gt;Nginx Beginner’s Guide&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://ubuntu.com/server/docs" rel="noopener noreferrer"&gt;Ubuntu Server Documentation&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.docker.com/get-started/" rel="noopener noreferrer"&gt;Docker’s Official Tutorials&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://developer.mozilla.org/en-US/docs/Learn/%0AHTML/Introduction_to_HTML/Getting_started" rel="noopener noreferrer"&gt;Mozilla’s HTML Basics&lt;/a&gt; (for crafting your page)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html" rel="noopener noreferrer"&gt;Get started with Amazon EC2&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Final Thoughts&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In under an hour, I went from zero to hosting my own page. Whether you’re building a portfolio, a blog, or just experimenting, Nginx on Ubuntu is a sturdy foundation. Next step? Maybe add CSS, or try HTTPS with &lt;a href="https://letsencrypt.org/" rel="noopener noreferrer"&gt;Let’s Encrypt&lt;/a&gt;!  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Got questions or war stories?&lt;/strong&gt; Drop them in the comments (or tweet me)! Let’s geek out. &lt;/p&gt;




&lt;p&gt;&lt;em&gt;Enjoyed this walkthrough? Share it with a friend who’s starting out their web hosting or DevOps journey!&lt;/em&gt;  &lt;/p&gt;




</description>
      <category>devops</category>
      <category>aws</category>
    </item>
  </channel>
</rss>
