<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: PrimoCrypt</title>
    <description>The latest articles on DEV Community by PrimoCrypt (@primocrypt).</description>
    <link>https://dev.to/primocrypt</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2457147%2Fe7fd4931-8b8a-433c-887b-5111c44fde5a.jpeg</url>
      <title>DEV Community: PrimoCrypt</title>
      <link>https://dev.to/primocrypt</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/primocrypt"/>
    <language>en</language>
    <item>
      <title>Building an Automated Docker Deployment Script: A Complete Beginner's Guide</title>
      <dc:creator>PrimoCrypt</dc:creator>
      <pubDate>Tue, 09 Dec 2025 21:53:18 +0000</pubDate>
      <link>https://dev.to/primocrypt/building-an-automated-docker-deployment-script-a-complete-beginners-guide-2b0f</link>
      <guid>https://dev.to/primocrypt/building-an-automated-docker-deployment-script-a-complete-beginners-guide-2b0f</guid>
<description>&lt;h2&gt;Introduction&lt;/h2&gt;

&lt;p&gt;Have you ever wondered how professional developers deploy their applications to servers automatically? In this comprehensive guide, I'll walk you through creating a powerful Bash script that automates the entire deployment process for Docker-based applications. By the end of this article, you'll have a script that can:&lt;/p&gt;

&lt;p&gt;✅ Clone your Git repository&lt;br&gt;
✅ Set up a remote server environment&lt;br&gt;
✅ Build and deploy Docker containers&lt;br&gt;
✅ Configure Nginx as a reverse proxy&lt;br&gt;
✅ Validate your deployment&lt;br&gt;
✅ Handle cleanup and log management&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best of all&lt;/strong&gt;: You'll understand every step, even if you're just starting your DevOps journey!&lt;/p&gt;


&lt;h2&gt;What is Automated Deployment and Why Do We Need It?&lt;/h2&gt;

&lt;p&gt;In the early days of web development, deploying an application meant manually copying files to a server, installing dependencies, and configuring everything by hand. This process was:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Time-consuming&lt;/strong&gt;: Could take hours for a single deployment&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error-prone&lt;/strong&gt;: Easy to forget a step or misconfigure something&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Not repeatable&lt;/strong&gt;: Hard to deploy the same way twice&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Not scalable&lt;/strong&gt;: Imagine doing this for 10 different applications!&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Automated deployment solves all these problems&lt;/strong&gt; by using scripts to perform all deployment steps consistently and reliably.&lt;/p&gt;


&lt;h2&gt;Understanding the Deployment Workflow&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3vefmxj2fd8l3xf6iv36.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3vefmxj2fd8l3xf6iv36.png" alt="Deployment Workflow" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before we dive into the code, let's understand the big picture of what happens during deployment:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Your local machine&lt;/strong&gt; runs the deployment script&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The script clones&lt;/strong&gt; your application code from Git&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;It connects&lt;/strong&gt; to your remote server via SSH&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;It sets up&lt;/strong&gt; Docker and Nginx on the server&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;It builds&lt;/strong&gt; your Docker container and runs it&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;It configures&lt;/strong&gt; Nginx to route traffic to your app&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;It validates&lt;/strong&gt; everything is working correctly&lt;/li&gt;
&lt;/ol&gt;
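&lt;p&gt;The flow above can be sketched as a top-level Bash driver with one function per stage. This is an illustrative skeleton with made-up function names and &lt;code&gt;echo&lt;/code&gt; placeholders, not the script's actual code:&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Illustrative skeleton of the deployment flow: each stage is a function so it
# can be logged and re-run independently. Real bodies are replaced with echoes.
set -o errexit -o nounset -o pipefail

clone_repository()    { echo "stage: clone repository"; }
test_ssh()            { echo "stage: test SSH connectivity"; }
prepare_remote()      { echo "stage: install Docker and Nginx"; }
deploy_containers()   { echo "stage: build and run containers"; }
configure_nginx()     { echo "stage: configure reverse proxy"; }
validate_deployment() { echo "stage: validate deployment"; }

main() {
  clone_repository
  test_ssh
  prepare_remote
  deploy_containers
  configure_nginx
  validate_deployment
}

main "$@"
```

Structuring the script this way also makes it easy to re-run a single failed stage while debugging.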


&lt;h2&gt;Prerequisites&lt;/h2&gt;

&lt;p&gt;Before you start, make sure you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Basic understanding of&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Linux command line basics&lt;/li&gt;
&lt;li&gt;Git version control&lt;/li&gt;
&lt;li&gt;What Docker is (don't worry, we'll explain as we go!)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Technical requirements&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;A Linux/Mac machine (or WSL on Windows)&lt;/li&gt;
&lt;li&gt;A remote server (AWS EC2, DigitalOcean, etc.)&lt;/li&gt;
&lt;li&gt;SSH key access to your server&lt;/li&gt;
&lt;li&gt;A Git repository with your application code&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Tip:&lt;/strong&gt;&lt;br&gt;
If you're new to SSH keys, think of them as a secure password alternative that uses cryptographic keys instead of text passwords.&lt;/p&gt;
&lt;/blockquote&gt;
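&lt;p&gt;If you don't have a key pair yet, generating a dedicated deployment key looks like this (the output path and comment below are just examples):&lt;/p&gt;

```shell
# Generate a modern ed25519 key pair for non-interactive deployment use.
# -f sets the output path, -N "" means no passphrase, -C attaches a label.
KEYDIR="$(mktemp -d)"
ssh-keygen -t ed25519 -f "${KEYDIR}/deploy-key" -N "" -C "deploy@example" -q

# The public half goes into ~/.ssh/authorized_keys on the server; the private
# half stays on your machine and is what the script later passes to `ssh -i`.
ls "${KEYDIR}/deploy-key" "${KEYDIR}/deploy-key.pub"
```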


&lt;h2&gt;The 7 Stages of Our Deployment Script&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh9pc15nskzhjvyt9g3ey.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh9pc15nskzhjvyt9g3ey.png" alt="Deployment Stages" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our deployment script is organized into 7 distinct stages. Let's explore each one in detail!&lt;/p&gt;


&lt;h3&gt;Stage 0.5: Setup and Housekeeping&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What it does&lt;/strong&gt;: Sets up logging, error handling, and cleanup functionality.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key concepts&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/usr/bin/env bash&lt;/span&gt;

&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; errexit   &lt;span class="c"&gt;# Exit on any error&lt;/span&gt;
&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; nounset   &lt;span class="c"&gt;# Exit on undefined variable&lt;/span&gt;
&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; pipefail  &lt;span class="c"&gt;# Exit if any command in a pipe fails&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These three lines are &lt;strong&gt;crucial safety features&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;errexit&lt;/code&gt;: If any command fails, the script stops immediately&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;nounset&lt;/code&gt;: Catches typos in variable names&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;pipefail&lt;/code&gt;: Ensures errors in complex commands don't get hidden&lt;/li&gt;
&lt;/ul&gt;
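&lt;p&gt;A quick experiment shows why &lt;code&gt;pipefail&lt;/code&gt; matters: by default, a pipeline's exit status is that of its &lt;em&gt;last&lt;/em&gt; command only:&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Without pipefail the pipeline "succeeds" because `true` (the last command)
# exits 0, even though `false` failed earlier in the pipe:
if false | true; then
  echo "default: failure was hidden"
fi

# With pipefail the same pipeline now reports the failure:
set -o pipefail
if false | true; then
  echo "unreachable"
else
  echo "pipefail: failure was caught"
fi
```

This is exactly the kind of silent failure that would otherwise let a broken deployment step slip through.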

&lt;p&gt;&lt;strong&gt;Logging system&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;TIMESTAMP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; +%Y%m%d_%H%M%S&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nv"&gt;LOGDIR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"./logs"&lt;/span&gt;
&lt;span class="nv"&gt;LOGFILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;LOGDIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/deploy_&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TIMESTAMP&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;.log"&lt;/span&gt;

log&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="nb"&gt;printf&lt;/span&gt; &lt;span class="s2"&gt;"%s [%s] %s&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; &lt;span class="s1"&gt;'+%Y-%m-%d %H:%M:%S'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$2&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | &lt;span class="nb"&gt;tee&lt;/span&gt; &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;LOGFILE&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a timestamped log file for every deployment, so you can troubleshoot issues later.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cleanup mode&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;The script includes a special &lt;code&gt;--cleanup&lt;/code&gt; flag that removes all deployed containers, images, and configurations. This is useful for starting fresh or removing old deployments.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./deploy.sh &lt;span class="nt"&gt;--cleanup&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Caution:&lt;/strong&gt;&lt;br&gt;
Cleanup mode will remove ALL Docker containers and images on your server. Always backup important data first!&lt;/p&gt;
&lt;/blockquote&gt;
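&lt;p&gt;A &lt;code&gt;--cleanup&lt;/code&gt; entry point typically dispatches before any prompts run. The sketch below uses &lt;code&gt;echo&lt;/code&gt; placeholders for the destructive commands so it is safe to read and run; the real script's cleanup logic may differ:&lt;/p&gt;

```shell
#!/usr/bin/env bash
set -o errexit -o nounset -o pipefail

cleanup_remote() {
  # Sketch: stop/remove all containers, prune images, drop the Nginx site.
  # The real version would execute these over SSH on the server; here they
  # are echoed so the dispatch logic itself can be exercised safely.
  echo "would run: docker ps -aq | xargs -r docker rm -f"
  echo "would run: docker image prune -af"
  echo "would run: sudo rm -f /etc/nginx/sites-enabled/myapp"
}

# Dispatch first, so --cleanup never asks deployment questions.
if [ "${1:-}" = "--cleanup" ]; then
  cleanup_remote
  exit 0
fi

echo "normal deployment path"
```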




&lt;h3&gt;Stage 1: Collect Parameters and Basic Validation&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What it does&lt;/strong&gt;: Gathers all the information needed for deployment through interactive prompts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What you'll be asked&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Git Repository URL&lt;/strong&gt;: Where is your code?

&lt;ul&gt;
&lt;li&gt;Example: &lt;code&gt;https://github.com/yourusername/your-app.git&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Personal Access Token (PAT)&lt;/strong&gt;: For private repositories

&lt;ul&gt;
&lt;li&gt;Entered securely (won't show on screen)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Branch Name&lt;/strong&gt;: Which branch to deploy?

&lt;ul&gt;
&lt;li&gt;Default: &lt;code&gt;main&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SSH Details&lt;/strong&gt;: How to connect to your server

&lt;ul&gt;
&lt;li&gt;Username: &lt;code&gt;ubuntu&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Host: &lt;code&gt;54.123.45.67&lt;/code&gt; (your server IP)&lt;/li&gt;
&lt;li&gt;SSH Key Path: &lt;code&gt;~/.ssh/your-key.pem&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Application Port&lt;/strong&gt;: What port does your app use?

&lt;ul&gt;
&lt;li&gt;Example: &lt;code&gt;3000&lt;/code&gt; for Node.js, &lt;code&gt;8080&lt;/code&gt; for many others&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
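&lt;p&gt;Interactive prompts with defaults and hidden input can be written like this (variable names mirror the article, but treat the snippet as a sketch rather than the script's exact code):&lt;/p&gt;

```shell
#!/usr/bin/env bash
set -o errexit -o nounset -o pipefail

collect_params() {
  # -r keeps backslashes literal; -p prints the prompt on the same line.
  read -rp "Git repository URL: " GIT_REPO_URL

  # -s hides the input, so the token never echoes to the terminal.
  read -rsp "Personal Access Token (blank for public repos): " GIT_PAT
  echo  # move past the hidden-input line

  # ${VAR:-default} substitutes a default when the user just presses Enter.
  read -rp "Branch name [main]: " BRANCH
  BRANCH="${BRANCH:-main}"

  echo "deploying branch: ${BRANCH}"
}

# Only prompt when attached to a real terminal.
if [ -t 0 ]; then
  collect_params
fi
```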

&lt;p&gt;&lt;strong&gt;Example interaction&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Git repository URL: https://github.com/john/awesome-app.git
Branch name: main
Remote SSH username: ubuntu
Remote SSH host/IP: 54.123.45.67
Path to local SSH private key: ~/.ssh/deploy-key.pem
Application internal container port: 3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why validation matters&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;The script validates your inputs immediately:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;APP_PORT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt;
  &lt;span class="s1"&gt;''&lt;/span&gt;&lt;span class="p"&gt;|&lt;/span&gt;&lt;span class="k"&gt;*&lt;/span&gt;&lt;span class="o"&gt;[!&lt;/span&gt;0-9]&lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="p"&gt;)&lt;/span&gt; die 12 &lt;span class="s2"&gt;"Invalid port"&lt;/span&gt;&lt;span class="p"&gt;;;&lt;/span&gt;
&lt;span class="k"&gt;esac&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ensures the port is actually a number, preventing errors later.&lt;/p&gt;




&lt;h3&gt;Stage 2: Repository Clone and Validation&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What it does&lt;/strong&gt;: Downloads your code and verifies connectivity to the server.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The cloning process&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;WORKDIR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"./workspace_&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TIMESTAMP&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;WORKDIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

git clone &lt;span class="nt"&gt;-b&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;BRANCH&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;GIT_REPO_URL&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;WORKDIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/repo"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a timestamped workspace directory and clones your specific branch into it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Handling authentication&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;For private repositories with HTTPS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Inject PAT into the URL securely&lt;/span&gt;
&lt;span class="nv"&gt;AUTH_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;GIT_REPO_URL&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | &lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-E&lt;/span&gt; &lt;span class="s2"&gt;"s#https://#https://&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;GIT_PAT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;@#"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
git clone &lt;span class="nt"&gt;-b&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;BRANCH&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;AUTH_URL&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;WORKDIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/repo"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For SSH repositories:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;GIT_SSH_COMMAND&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"ssh -i &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;SSH_KEY_PATH&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; -o StrictHostKeyChecking=no"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  git clone &lt;span class="nt"&gt;-b&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;BRANCH&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;GIT_REPO_URL&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;WORKDIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/repo"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Docker setup detection&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;WORKDIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/repo/Dockerfile"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;log &lt;span class="s2"&gt;"INFO"&lt;/span&gt; &lt;span class="s2"&gt;"Found Dockerfile."&lt;/span&gt;
&lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;WORKDIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/repo/docker-compose.yml"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;log &lt;span class="s2"&gt;"INFO"&lt;/span&gt; &lt;span class="s2"&gt;"Found docker-compose.yml."&lt;/span&gt;
&lt;span class="k"&gt;else
  &lt;/span&gt;log &lt;span class="s2"&gt;"WARN"&lt;/span&gt; &lt;span class="s2"&gt;"No Docker setup found. Skipping container checks."&lt;/span&gt;
&lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The script intelligently detects whether you're using a simple &lt;code&gt;Dockerfile&lt;/code&gt; or a complex &lt;code&gt;docker-compose.yml&lt;/code&gt; setup.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SSH connectivity test&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;Before transferring files, the script tests the SSH connection:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;SSH_KEY_PATH&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;BatchMode&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;ConnectTimeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;REMOTE_USER&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;@&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;REMOTE_HOST&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"echo 'SSH_OK'"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /dev/null 2&amp;gt;&amp;amp;1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If this fails, you'll know immediately that there's an SSH problem, saving you time debugging later.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dry-run file transfer&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;rsync &lt;span class="nt"&gt;-avz&lt;/span&gt; &lt;span class="nt"&gt;--dry-run&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"ssh -i &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;SSH_KEY_PATH&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;WORKDIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/repo/"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;REMOTE_USER&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;@&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;REMOTE_HOST&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;:/tmp/deploy-test/"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This simulates the file transfer without actually copying anything, ensuring the process will work when we do it for real.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; &lt;code&gt;rsync&lt;/code&gt; is better than &lt;code&gt;scp&lt;/code&gt; for deployments because it only transfers changed files, making updates much faster.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;Stage 3: Prepare Remote Environment&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What it does&lt;/strong&gt;: Installs and configures all necessary software on your server.&lt;/p&gt;

&lt;p&gt;This is where the magic happens! The script automatically sets up:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Docker&lt;/strong&gt;: The containerization platform&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker Compose&lt;/strong&gt;: For managing multi-container applications&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Nginx&lt;/strong&gt;: The web server and reverse proxy&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;The installation process&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Update package lists&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update &lt;span class="nt"&gt;-y&lt;/span&gt;

&lt;span class="c"&gt;# Install Docker if not present&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nb"&gt;command&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; docker &amp;amp;&amp;gt;/dev/null&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Installing Docker..."&lt;/span&gt;
  &lt;span class="c"&gt;# Add Docker's official GPG key&lt;/span&gt;
  &lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /etc/apt/keyrings
  curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://download.docker.com/linux/ubuntu/gpg | &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nb"&gt;sudo &lt;/span&gt;gpg &lt;span class="nt"&gt;--dearmor&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; /etc/apt/keyrings/docker.gpg

  &lt;span class="c"&gt;# Set up Docker repository&lt;/span&gt;
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"deb [arch=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;dpkg &lt;span class="nt"&gt;--print-architecture&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; signed-by=/etc/apt/keyrings/docker.gpg] &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
    https://download.docker.com/linux/ubuntu &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;lsb_release &lt;span class="nt"&gt;-cs&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; stable"&lt;/span&gt; | &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/apt/sources.list.d/docker.list &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /dev/null

  &lt;span class="c"&gt;# Install Docker&lt;/span&gt;
  &lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update &lt;span class="nt"&gt;-y&lt;/span&gt;
  &lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; docker-ce docker-ce-cli containerd.io
&lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;User permissions&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Add current user to Docker group&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nb"&gt;groups&lt;/span&gt; &lt;span class="nv"&gt;$USER&lt;/span&gt; | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-q&lt;/span&gt; docker&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;usermod &lt;span class="nt"&gt;-aG&lt;/span&gt; docker &lt;span class="nv"&gt;$USER&lt;/span&gt;
&lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This lets you run Docker commands without &lt;code&gt;sudo&lt;/code&gt;. Note that the new group membership only takes effect after you log out and back in (or run &lt;code&gt;newgrp docker&lt;/code&gt;), and that membership in the &lt;code&gt;docker&lt;/code&gt; group is effectively root-equivalent, so grant it deliberately.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Service management&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Enable and start services&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;docker
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl start docker
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;nginx
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl start nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;systemctl enable&lt;/code&gt; ensures services start automatically when the server reboots.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Verification&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"--- Installed Versions ---"&lt;/span&gt;
docker &lt;span class="nt"&gt;--version&lt;/span&gt;
docker compose version
nginx &lt;span class="nt"&gt;-v&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This confirms everything installed correctly and logs the versions for troubleshooting.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt;&lt;br&gt;
This stage is &lt;strong&gt;idempotent&lt;/strong&gt;, meaning you can run it multiple times safely. If Docker is already installed, it won't reinstall it.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;Stage 4: Deploy Application on Remote&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What it does&lt;/strong&gt;: Transfers your code to the server, builds Docker images, and runs containers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;File synchronization&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;APP_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;basename&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;GIT_REPO_URL&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; .git&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;REMOTE_APP_PATH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/opt/&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;APP_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="c"&gt;# Prepare directory with correct permissions&lt;/span&gt;
ssh &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;SSH_KEY_PATH&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;REMOTE_USER&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;@&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;REMOTE_HOST&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="s2"&gt;"sudo mkdir -p '&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;REMOTE_APP_PATH&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;' &amp;amp;&amp;amp; &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
   sudo chown -R &lt;/span&gt;&lt;span class="se"&gt;\$&lt;/span&gt;&lt;span class="s2"&gt;(whoami):&lt;/span&gt;&lt;span class="se"&gt;\$&lt;/span&gt;&lt;span class="s2"&gt;(whoami) '&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;REMOTE_APP_PATH&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;'"&lt;/span&gt;

&lt;span class="c"&gt;# Transfer files&lt;/span&gt;
rsync &lt;span class="nt"&gt;-avz&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"ssh -i &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;SSH_KEY_PATH&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--delete&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;WORKDIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/repo/"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;REMOTE_USER&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;@&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;REMOTE_HOST&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;REMOTE_APP_PATH&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;--delete&lt;/code&gt; flag removes files on the server that don't exist in your repository, keeping everything in sync.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker Compose deployment&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; docker-compose.yml &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;docker compose down &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true&lt;/span&gt;  &lt;span class="c"&gt;# Stop old containers&lt;/span&gt;
  &lt;span class="nb"&gt;sudo &lt;/span&gt;docker compose build         &lt;span class="c"&gt;# Build new images&lt;/span&gt;
  &lt;span class="nb"&gt;sudo &lt;/span&gt;docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;         &lt;span class="c"&gt;# Start containers in background&lt;/span&gt;
&lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Single Dockerfile deployment&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; Dockerfile &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nv"&gt;APP_TAG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;basename&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

  &lt;span class="c"&gt;# Build image&lt;/span&gt;
  &lt;span class="nb"&gt;sudo &lt;/span&gt;docker build &lt;span class="nt"&gt;--build-arg&lt;/span&gt; &lt;span class="nv"&gt;PORT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'${APP_PORT}'&lt;/span&gt; &lt;span class="nt"&gt;-t&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;APP_TAG&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;:latest"&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt;

  &lt;span class="c"&gt;# Remove old container&lt;/span&gt;
  &lt;span class="nb"&gt;sudo &lt;/span&gt;docker stop &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;APP_TAG&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true
  sudo &lt;/span&gt;docker &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;APP_TAG&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true&lt;/span&gt;

  &lt;span class="c"&gt;# Run new container&lt;/span&gt;
  &lt;span class="nb"&gt;sudo &lt;/span&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s1"&gt;'${APP_PORT}'&lt;/span&gt;:&lt;span class="s1"&gt;'${APP_PORT}'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;PORT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'${APP_PORT}'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--name&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;APP_TAG&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;APP_TAG&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;:latest"&lt;/span&gt;
&lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Health check&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Waiting for app to start..."&lt;/span&gt;
&lt;span class="nb"&gt;sleep &lt;/span&gt;5

&lt;span class="k"&gt;if &lt;/span&gt;curl &lt;span class="nt"&gt;-fs&lt;/span&gt; &lt;span class="s2"&gt;"http://localhost:&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;APP_PORT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /dev/null 2&amp;gt;&amp;amp;1&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Application reachable on port &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;APP_PORT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;."&lt;/span&gt;
&lt;span class="k"&gt;else
  &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"WARNING: Application not responding yet."&lt;/span&gt;
&lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This gives your application a few seconds to start up, then tests if it's responding.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;[!TIP]&lt;br&gt;
The &lt;code&gt;-d&lt;/code&gt; flag in &lt;code&gt;docker run&lt;/code&gt; means "detached mode" - the container runs in the background, not blocking your terminal.&lt;/p&gt;
&lt;/blockquote&gt;
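&lt;p&gt;A fixed &lt;code&gt;sleep 5&lt;/code&gt; works, but slow-starting apps will fail the check anyway. A small retry helper is more robust; this is a sketch (the &lt;code&gt;retry&lt;/code&gt; name and the attempt/delay defaults are my own, not from the script):&lt;/p&gt;

```shell
#!/usr/bin/env bash
# retry CMD...: run CMD until it succeeds, up to a maximum number of attempts,
# sleeping a fixed delay between tries. Defaults overridable via environment.
retry() {
  local max_attempts="${RETRY_MAX:-5}" delay="${RETRY_DELAY:-2}" attempt=1
  until "$@"; do
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "Command failed after ${max_attempts} attempts: $*" >&2
      return 1
    fi
    attempt=$((attempt + 1))
    sleep "$delay"
  done
}

# Example: replace the fixed sleep with
#   retry curl -fs "http://localhost:${APP_PORT}" > /dev/null
```
&lt;p&gt;With the defaults this waits up to about ten seconds before giving up, instead of checking exactly once.&lt;/p&gt;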




&lt;h3&gt;
  
  
  Stage 5: Configure Nginx Reverse Proxy
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What it does&lt;/strong&gt;: Sets up Nginx to route external traffic (port 80) to your Docker container.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fysg1e4e5d55nq75mpdhy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fysg1e4e5d55nq75mpdhy.png" alt="Nginx Reverse Proxy" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why use a reverse proxy?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Without Nginx, users would need to access your app like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://your-server.com:3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With Nginx as a reverse proxy, they can use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://your-server.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Much cleaner! Plus, Nginx handles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SSL/TLS certificates (HTTPS)&lt;/li&gt;
&lt;li&gt;Load balancing&lt;/li&gt;
&lt;li&gt;Static file serving&lt;/li&gt;
&lt;li&gt;Rate limiting and basic DDoS mitigation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The Nginx configuration&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="k"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;server_name&lt;/span&gt; &lt;span class="s"&gt;_&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_pass&lt;/span&gt; &lt;span class="s"&gt;http://localhost:3000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;Host&lt;/span&gt; &lt;span class="nv"&gt;$host&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;X-Real-IP&lt;/span&gt; &lt;span class="nv"&gt;$remote_addr&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;X-Forwarded-For&lt;/span&gt; &lt;span class="nv"&gt;$proxy_add_x_forwarded_for&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;X-Forwarded-Proto&lt;/span&gt; &lt;span class="nv"&gt;$scheme&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Breaking it down&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;listen 80&lt;/code&gt;: Nginx listens on port 80 (HTTP)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;server_name _&lt;/code&gt;: A catch-all name that matches requests no other server block claims (use your real domain here in production)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;proxy_pass http://localhost:3000&lt;/code&gt;: Forwards requests to your app&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;proxy_set_header&lt;/code&gt; lines: Pass client information to your app&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Deployment process&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;NGINX_CONFIG_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;APP_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;.conf"&lt;/span&gt;
&lt;span class="nv"&gt;NGINX_PATH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/etc/nginx/sites-available/&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;NGINX_CONFIG_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nv"&gt;NGINX_LINK&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/etc/nginx/sites-enabled/&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;NGINX_CONFIG_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="c"&gt;# Backup old config&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$NGINX_PATH&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nb"&gt;sudo mv&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$NGINX_PATH&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;NGINX_PATH&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;.bak_&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; +%s&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="k"&gt;fi&lt;/span&gt;

&lt;span class="c"&gt;# Write new config&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;bash &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"cat &amp;gt; &lt;/span&gt;&lt;span class="nv"&gt;$NGINX_PATH&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;CONF&lt;/span&gt;&lt;span class="sh"&gt;'
# ... nginx config here ...
&lt;/span&gt;&lt;span class="no"&gt;CONF

&lt;/span&gt;&lt;span class="c"&gt;# Enable site by creating symlink&lt;/span&gt;
&lt;span class="nb"&gt;sudo ln&lt;/span&gt; &lt;span class="nt"&gt;-sf&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$NGINX_PATH&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$NGINX_LINK&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="c"&gt;# Test configuration and reload&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;nginx &lt;span class="nt"&gt;-t&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl reload nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;nginx -t&lt;/code&gt; command tests the configuration before applying it, preventing syntax errors from breaking your web server.&lt;/p&gt;
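&lt;p&gt;One subtlety: the excerpt above uses a quoted heredoc delimiter (&lt;code&gt;'CONF'&lt;/code&gt;), which disables variable expansion. If your config embeds &lt;code&gt;${APP_PORT}&lt;/code&gt;, use an unquoted delimiter and escape Nginx's own runtime variables instead. A minimal sketch (writing to a temp path for illustration; a real deployment would target &lt;code&gt;/etc/nginx/sites-available/&lt;/code&gt; via &lt;code&gt;sudo tee&lt;/code&gt;):&lt;/p&gt;

```shell
#!/usr/bin/env bash
# With an unquoted delimiter (CONF, not 'CONF'), ${APP_PORT} is expanded by the
# shell, while Nginx runtime variables like $host must be escaped as \$host.
APP_PORT=3000
NGINX_PATH="/tmp/demo-app.conf"   # example path for this sketch

tee "$NGINX_PATH" > /dev/null <<CONF
server {
    listen 80;
    server_name _;

    location / {
        proxy_pass http://localhost:${APP_PORT};
        proxy_set_header Host \$host;
    }
}
CONF
```
&lt;p&gt;After writing the file, &lt;code&gt;proxy_pass&lt;/code&gt; contains the real port number while &lt;code&gt;\$host&lt;/code&gt; survives as a literal &lt;code&gt;$host&lt;/code&gt; for Nginx to resolve at request time.&lt;/p&gt;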




&lt;h3&gt;
  
  
  Stage 6: Validate Deployment
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What it does&lt;/strong&gt;: Runs automated checks to ensure everything is working correctly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Service status checks&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Check if Docker is running&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl is-active &lt;span class="nt"&gt;--quiet&lt;/span&gt; docker &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Docker: Active"&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Docker: Inactive"&lt;/span&gt;

&lt;span class="c"&gt;# Check if Nginx is running&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl is-active &lt;span class="nt"&gt;--quiet&lt;/span&gt; nginx &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Nginx: Active"&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Nginx: Inactive"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Container health&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;docker ps &lt;span class="nt"&gt;--format&lt;/span&gt; &lt;span class="s2"&gt;"table {{.Names}}&lt;/span&gt;&lt;span class="se"&gt;\t&lt;/span&gt;&lt;span class="s2"&gt;{{.Status}}&lt;/span&gt;&lt;span class="se"&gt;\t&lt;/span&gt;&lt;span class="s2"&gt;{{.Ports}}"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This shows all running containers with their status and port mappings.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;End-to-end test&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;if &lt;/span&gt;curl &lt;span class="nt"&gt;-fs&lt;/span&gt; &lt;span class="s2"&gt;"http://localhost"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /dev/null 2&amp;gt;&amp;amp;1&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"SUCCESS: Application reachable via Nginx on port 80!"&lt;/span&gt;
&lt;span class="k"&gt;else
  &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"ERROR: Application not responding through Nginx!"&lt;/span&gt;
&lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This simulates a real user accessing your application through Nginx.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;[!NOTE]&lt;br&gt;
The &lt;code&gt;-f&lt;/code&gt; flag makes curl return a non-zero exit code on HTTP error responses (4xx/5xx) instead of printing the error page, and &lt;code&gt;-s&lt;/code&gt; suppresses progress output. Perfect for scripting!&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  Stage 7: Final Cleanup and Log Management
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What it does&lt;/strong&gt;: Performs maintenance tasks to keep your server clean and efficient.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Remove broken symlinks&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; /etc/nginx/sites-enabled &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
&lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;find /etc/nginx/sites-enabled &lt;span class="nt"&gt;-xtype&lt;/span&gt; l &lt;span class="nt"&gt;-delete&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true
&lt;/span&gt;&lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Clean up old Docker resources&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Remove stopped containers older than 7 days&lt;/span&gt;
docker ps &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="nt"&gt;--filter&lt;/span&gt; &lt;span class="s2"&gt;"status=exited"&lt;/span&gt; &lt;span class="nt"&gt;--format&lt;/span&gt; &lt;span class="s1"&gt;'{{.ID}}'&lt;/span&gt; | &lt;span class="se"&gt;\&lt;/span&gt;
  xargs &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="nt"&gt;-n1&lt;/span&gt; docker &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true&lt;/span&gt;

&lt;span class="c"&gt;# Remove unused images&lt;/span&gt;
docker image prune &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true&lt;/span&gt;

&lt;span class="c"&gt;# Remove unused networks&lt;/span&gt;
docker network prune &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This prevents your server from filling up with old Docker artifacts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Log rotation&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Compress current log&lt;/span&gt;
&lt;span class="nb"&gt;gzip&lt;/span&gt; &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;LOGFILE&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;LOGFILE&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;.gz"&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true&lt;/span&gt;

&lt;span class="c"&gt;# Delete logs older than 30 days&lt;/span&gt;
find &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;LOGDIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-type&lt;/span&gt; f &lt;span class="nt"&gt;-name&lt;/span&gt; &lt;span class="s2"&gt;"deploy_*.log.gz"&lt;/span&gt; &lt;span class="nt"&gt;-mtime&lt;/span&gt; +30 &lt;span class="nt"&gt;-exec&lt;/span&gt; &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="o"&gt;{}&lt;/span&gt; &lt;span class="se"&gt;\;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This keeps your deployment history while managing disk space.&lt;/p&gt;




&lt;h2&gt;
  
  
  Advanced Features
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Idempotency
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What is idempotency?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A script is idempotent if running it multiple times produces the same result as running it once. Our script is idempotent because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;rsync --delete&lt;/code&gt; ensures remote files match local files exactly&lt;/li&gt;
&lt;li&gt;Old Docker containers are removed before creating new ones&lt;/li&gt;
&lt;li&gt;Nginx symlinks use &lt;code&gt;-sf&lt;/code&gt; (force) to overwrite existing links&lt;/li&gt;
&lt;li&gt;Service installations check if already installed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why it matters&lt;/strong&gt;: You can safely re-run the deployment without worrying about duplicate resources or conflicts.&lt;/p&gt;
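&lt;p&gt;The pattern behind each of those bullets is "check, then act". A minimal sketch of two such guards (the helper names are my own; the &lt;code&gt;apt-get&lt;/code&gt; call assumes a Debian/Ubuntu server):&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Idempotent helpers: safe to run any number of times.

# Create a directory only if it doesn't exist yet (mkdir -p is itself idempotent).
ensure_dir() {
  mkdir -p "$1"
}

# Install a package only when its command is missing (Debian/Ubuntu assumption).
ensure_installed() {
  if ! command -v "$1" > /dev/null 2>&1; then
    sudo apt-get install -y "$1"
  fi
}

ensure_dir /tmp/demo-deploy
ensure_dir /tmp/demo-deploy   # second run changes nothing
```
&lt;p&gt;Running the script twice leaves the server in exactly the same state as running it once.&lt;/p&gt;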

&lt;h3&gt;
  
  
  Error Handling
&lt;/h3&gt;

&lt;p&gt;The script includes comprehensive error handling:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;die&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  log &lt;span class="s2"&gt;"ERROR"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$2&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
  &lt;span class="nb"&gt;exit&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;1&lt;/span&gt;&lt;span class="k"&gt;:-&lt;/span&gt;&lt;span class="nv"&gt;1&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Usage example:&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-z&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;GIT_REPO_URL&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;die 10 &lt;span class="s2"&gt;"Git repository URL is required"&lt;/span&gt;
&lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each error has a unique exit code for easier troubleshooting:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;10-19&lt;/code&gt;: Parameter validation errors&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;20-29&lt;/code&gt;: Repository and SSH errors&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;30-39&lt;/code&gt;: Remote setup errors&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;40-49&lt;/code&gt;: Deployment errors&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;50-59&lt;/code&gt;: Nginx configuration errors&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;60+&lt;/code&gt;: Validation errors&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Comprehensive Logging
&lt;/h3&gt;

&lt;p&gt;Every action is logged with timestamps:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;log &lt;span class="s2"&gt;"INFO"&lt;/span&gt; &lt;span class="s2"&gt;"Starting Stage 3: remote environment preparation..."&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Logs are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Written to timestamped files&lt;/li&gt;
&lt;li&gt;Compressed automatically&lt;/li&gt;
&lt;li&gt;Deleted after 30 days&lt;/li&gt;
&lt;li&gt;Displayed in real-time during deployment&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Common Issues and Troubleshooting
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Issue 1: SSH Connection Fails
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Symptoms&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ERROR: Cannot SSH into remote. Check SSH key, username, host IP, or firewall.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Solutions&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Verify SSH key permissions: &lt;code&gt;chmod 600 ~/.ssh/your-key.pem&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Test SSH manually: &lt;code&gt;ssh -i ~/.ssh/your-key.pem user@host&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Check server firewall allows port 22&lt;/li&gt;
&lt;li&gt;Verify the SSH key is added to server's &lt;code&gt;~/.ssh/authorized_keys&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Issue 2: Docker Build Fails
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Symptoms&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ERROR: The command '/bin/sh -c npm install' returned a non-zero code
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Solutions&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Check your &lt;code&gt;Dockerfile&lt;/code&gt; syntax&lt;/li&gt;
&lt;li&gt;Verify your app's dependencies are correct&lt;/li&gt;
&lt;li&gt;Look at the full error in the log file&lt;/li&gt;
&lt;li&gt;Try building locally first: &lt;code&gt;docker build -t test .&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Issue 3: Application Not Responding
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Symptoms&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;WARNING: Application not responding yet.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Solutions&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Check container logs: &lt;code&gt;docker logs &amp;lt;container-name&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Verify the port number is correct&lt;/li&gt;
&lt;li&gt;Ensure your app is binding to &lt;code&gt;0.0.0.0&lt;/code&gt;, not &lt;code&gt;localhost&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Check if the container is running: &lt;code&gt;docker ps&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Issue 4: Nginx Configuration Error
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Symptoms&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nginx: [emerg] unexpected "}" in /etc/nginx/sites-available/app.conf:10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Solutions&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Re-run &lt;code&gt;sudo nginx -t&lt;/code&gt; manually; it reports the exact file and line of the syntax error
&lt;/li&gt;
&lt;li&gt;Check for syntax errors in the Nginx config section&lt;/li&gt;
&lt;li&gt;Verify the &lt;code&gt;APP_PORT&lt;/code&gt; variable is correctly substituted&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Best Practices and Security Considerations
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. SSH Key Management
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Do&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use separate SSH keys for deployment (not your personal key)&lt;/li&gt;
&lt;li&gt;Restrict key permissions: &lt;code&gt;chmod 600 key.pem&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Use SSH key passphrases for extra security&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Don't&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Commit SSH keys to Git&lt;/li&gt;
&lt;li&gt;Share deployment keys between projects&lt;/li&gt;
&lt;li&gt;Use weak key types (use Ed25519 or RSA 4096-bit)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. Secrets Management
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Do&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use environment variables for sensitive data&lt;/li&gt;
&lt;li&gt;Consider using secret management tools (HashiCorp Vault, AWS Secrets Manager)&lt;/li&gt;
&lt;li&gt;Rotate credentials regularly&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Don't&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Hardcode passwords or API keys in your code&lt;/li&gt;
&lt;li&gt;Commit &lt;code&gt;.env&lt;/code&gt; files to Git&lt;/li&gt;
&lt;li&gt;Log sensitive information&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. Server Security
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Do&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Keep server software updated: &lt;code&gt;sudo apt update &amp;amp;&amp;amp; sudo apt upgrade&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Use firewall rules (UFW or security groups)&lt;/li&gt;
&lt;li&gt;Enable automatic security updates&lt;/li&gt;
&lt;li&gt;Implement SSL/TLS with Let's Encrypt&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Don't&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run containers as root unnecessarily&lt;/li&gt;
&lt;li&gt;Expose Docker daemon to the internet&lt;/li&gt;
&lt;li&gt;Use default passwords&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  4. Deployment Strategy
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Do&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Test deployments in a staging environment first&lt;/li&gt;
&lt;li&gt;Implement health checks&lt;/li&gt;
&lt;li&gt;Keep backups before deploying&lt;/li&gt;
&lt;li&gt;Use version tags for Docker images&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Don't&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deploy directly to production without testing&lt;/li&gt;
&lt;li&gt;Deploy during peak traffic hours&lt;/li&gt;
&lt;li&gt;Skip validation checks&lt;/li&gt;
&lt;/ul&gt;
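&lt;p&gt;For the version-tag point, one common convention is tagging images with the short Git commit hash, falling back to a timestamp when no repository is available (a sketch; &lt;code&gt;myapp&lt;/code&gt; is a placeholder image name):&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Tag images with an immutable identifier instead of only :latest,
# so you can always tell exactly which code a running container came from.
if TAG=$(git rev-parse --short HEAD 2>/dev/null); then
  :  # inside a Git repo: use the short commit hash
else
  TAG=$(date +%Y%m%d%H%M%S)  # fallback: build timestamp
fi

echo "Would build: docker build -t myapp:${TAG} -t myapp:latest ."
```
&lt;p&gt;Rolling back then becomes a matter of re-running the container with the previous tag rather than hunting for an old &lt;code&gt;:latest&lt;/code&gt;.&lt;/p&gt;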




&lt;h2&gt;
  
  
  Taking It Further: Next Steps
&lt;/h2&gt;

&lt;p&gt;Now that you have a working deployment script, consider these enhancements:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Add CI/CD Integration
&lt;/h3&gt;

&lt;p&gt;Integrate with GitHub Actions or GitLab CI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# .github/workflows/deploy.yml&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;main&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v3&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy to server&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;echo "${{ secrets.SSH_KEY }}" &amp;gt; key.pem&lt;/span&gt;
          &lt;span class="s"&gt;chmod 600 key.pem&lt;/span&gt;
          &lt;span class="s"&gt;./deploy.sh&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Implement Blue-Green Deployment
&lt;/h3&gt;

&lt;p&gt;Run two versions of your app and switch between them:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Start new version on different port&lt;/span&gt;
docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; 3001:3000 &lt;span class="nt"&gt;--name&lt;/span&gt; app-blue app:v2

&lt;span class="c"&gt;# After validation, update Nginx to point to new version&lt;/span&gt;
&lt;span class="c"&gt;# Then remove old version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. Add Monitoring
&lt;/h3&gt;

&lt;p&gt;Use tools like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Prometheus&lt;/strong&gt; for metrics&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Grafana&lt;/strong&gt; for visualization&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sentry&lt;/strong&gt; for error tracking&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Uptime Robot&lt;/strong&gt; for availability monitoring&lt;/li&gt;
&lt;/ul&gt;
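&lt;p&gt;To give a taste of what that setup looks like, here is a minimal Prometheus scrape config for the app (the job name and target port are assumptions):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;scrape_configs:
  - job_name: "app"
    static_configs:
      - targets: ["localhost:3000"] # the app's metrics endpoint
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;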

&lt;h3&gt;
  
  
  4. Database Management
&lt;/h3&gt;

&lt;p&gt;Add database backup and migration steps:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Backup before deployment&lt;/span&gt;
docker &lt;span class="nb"&gt;exec &lt;/span&gt;postgres pg_dump &lt;span class="nt"&gt;-U&lt;/span&gt; user dbname &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; backup.sql

&lt;span class="c"&gt;# Run migrations&lt;/span&gt;
docker &lt;span class="nb"&gt;exec &lt;/span&gt;app npm run migrate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  5. Multi-Environment Support
&lt;/h3&gt;

&lt;p&gt;Extend the script to handle dev/staging/production:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;read&lt;/span&gt; &lt;span class="nt"&gt;-rp&lt;/span&gt; &lt;span class="s2"&gt;"Environment (dev/staging/prod): "&lt;/span&gt; ENVIRONMENT
&lt;span class="c"&gt;# Use different configs based on environment&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
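&lt;p&gt;One way to branch on the answer is a small helper that maps each environment to its own config; a minimal sketch (the Compose file names are assumptions):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Map an environment name to its Docker Compose file
select_compose_file() {
  case "$1" in
    dev)     echo "docker-compose.dev.yml" ;;
    staging) echo "docker-compose.staging.yml" ;;
    prod)    echo "docker-compose.prod.yml" ;;
    *)       return 1 ;;   # unknown environment
  esac
}

COMPOSE_FILE=$(select_compose_file prod)
echo "$COMPOSE_FILE"   # docker-compose.prod.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can then deploy with &lt;code&gt;docker compose -f "$COMPOSE_FILE" up -d&lt;/code&gt;.&lt;/p&gt;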






&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Congratulations! 🎉 You now have a comprehensive understanding of automated Docker deployment using Bash scripts. Let's recap what we've covered:&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;The basics&lt;/strong&gt;: What deployment automation is and why it matters&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;The workflow&lt;/strong&gt;: How code gets from your computer to a live server&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;The 7 stages&lt;/strong&gt;: Every step of the deployment process in detail&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Advanced concepts&lt;/strong&gt;: Idempotency, error handling, and logging&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Troubleshooting&lt;/strong&gt;: Common issues and how to fix them&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Best practices&lt;/strong&gt;: Security and deployment strategies&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Next steps&lt;/strong&gt;: How to enhance your deployment pipeline&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Takeaways
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Automation saves time&lt;/strong&gt;: What used to take hours now takes minutes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consistency matters&lt;/strong&gt;: The same process every time reduces errors&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Logging is crucial&lt;/strong&gt;: Good logs make troubleshooting 10x easier&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security is paramount&lt;/strong&gt;: Never compromise on security practices&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Iteration is key&lt;/strong&gt;: Start simple, then add features as needed&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Your Deployment Journey
&lt;/h3&gt;

&lt;p&gt;This script is a foundation, not a final destination. As you grow in your DevOps journey:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Customize it for your specific needs&lt;/li&gt;
&lt;li&gt;Add features that solve your problems&lt;/li&gt;
&lt;li&gt;Share your improvements with the community&lt;/li&gt;
&lt;li&gt;Keep learning and experimenting&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Additional Resources
&lt;/h3&gt;

&lt;p&gt;Want to dive deeper? Check out these resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Docker Documentation&lt;/strong&gt;: &lt;a href="https://docs.docker.com" rel="noopener noreferrer"&gt;docs.docker.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Nginx Beginner's Guide&lt;/strong&gt;: &lt;a href="http://nginx.org/en/docs/beginners_guide.html" rel="noopener noreferrer"&gt;nginx.org/en/docs/beginners_guide.html&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bash Scripting Guide&lt;/strong&gt;: &lt;a href="https://tldp.org/LDP/abs/html/" rel="noopener noreferrer"&gt;tldp.org/LDP/abs/html/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DevOps Roadmap&lt;/strong&gt;: &lt;a href="https://roadmap.sh/devops" rel="noopener noreferrer"&gt;roadmap.sh/devops&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Get Started Today!
&lt;/h2&gt;

&lt;p&gt;Ready to deploy your first application automatically? Here's your action plan:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Clone the script&lt;/strong&gt; from the repository&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Set up a test server&lt;/strong&gt; (DigitalOcean has a $5/month droplet)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create a simple Dockerized app&lt;/strong&gt; (a "Hello World" Node.js app works great)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Run the deployment script&lt;/strong&gt; following this guide&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Celebrate&lt;/strong&gt; when you see your app live! 🚀&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Remember: Every expert was once a beginner. Don't be afraid to experiment, break things (in a test environment!), and learn from mistakes.&lt;/p&gt;




&lt;h2&gt;
  
  
  Questions or Feedback?
&lt;/h2&gt;

&lt;p&gt;I'd love to hear about your deployment journey! Did this guide help you? What challenges did you face? What would you like to see covered in a follow-up article?&lt;/p&gt;

&lt;p&gt;Drop your thoughts in the comments below, and happy deploying! 💻✨&lt;/p&gt;




&lt;h2&gt;
  
  
  About This Article
&lt;/h2&gt;

&lt;p&gt;This guide was created as part of the HNG Internship Stage 1 DevOps task. The script implements best practices for production-ready deployments while remaining accessible to beginners.&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Want to learn more about DevOps?&lt;/strong&gt; Check out the &lt;a href="https://hng.tech/internship" rel="noopener noreferrer"&gt;HNG Internship program&lt;/a&gt; for hands-on learning opportunities.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Tags: #DevOps #Docker #Nginx #BashScripting #Deployment #Automation #Linux #CloudComputing #WebDevelopment #Tutorial&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>docker</category>
      <category>bash</category>
      <category>nginx</category>
    </item>
    <item>
      <title>Building a Production-Ready CI/CD Pipeline: Automating Infrastructure with Terraform, GitHub Actions, and Ansible</title>
      <dc:creator>PrimoCrypt</dc:creator>
      <pubDate>Tue, 09 Dec 2025 21:15:31 +0000</pubDate>
      <link>https://dev.to/primocrypt/building-a-production-ready-cicd-pipeline-automating-infrastructure-with-terraform-github-33gg</link>
      <guid>https://dev.to/primocrypt/building-a-production-ready-cicd-pipeline-automating-infrastructure-with-terraform-github-33gg</guid>
      <description>&lt;p&gt;In the modern DevOps landscape, manual infrastructure management and application deployment are rapidly becoming obsolete. This comprehensive guide walks you through building a complete, production-ready CI/CD pipeline for a microservices application, covering infrastructure provisioning, automated deployments, drift detection, and continuous delivery—all using industry-standard DevOps tools and best practices.&lt;/p&gt;

&lt;h2&gt;
  
  
  [TABLE OF CONTENTS]
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Project Goals and Overview&lt;/li&gt;
&lt;li&gt;System Architecture and Design&lt;/li&gt;
&lt;li&gt;Infrastructure as Code with Terraform&lt;/li&gt;
&lt;li&gt;CI/CD Pipeline Implementation with GitHub Actions&lt;/li&gt;
&lt;li&gt;Configuration Management with Ansible&lt;/li&gt;
&lt;li&gt;Container Orchestration with Docker Compose&lt;/li&gt;
&lt;li&gt;Security Implementation and Best Practices&lt;/li&gt;
&lt;li&gt;Observability and Distributed Tracing&lt;/li&gt;
&lt;li&gt;Lessons Learned and Key Takeaways&lt;/li&gt;
&lt;li&gt;Challenges Encountered and Solutions&lt;/li&gt;
&lt;li&gt;Future Improvements and Roadmap&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  [PROJECT GOALS AND OVERVIEW]
&lt;/h2&gt;

&lt;p&gt;The primary objective of this project was to create a fully automated deployment pipeline for a multi-service TODO application with complete infrastructure automation. The solution needed to address several key requirements:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Core Requirements:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Complete automation from code commit to production deployment&lt;/li&gt;
&lt;li&gt;Infrastructure provisioning using declarative configuration&lt;/li&gt;
&lt;li&gt;Automated configuration management for consistent server setup&lt;/li&gt;
&lt;li&gt;Zero-downtime deployments with SSL/TLS termination&lt;/li&gt;
&lt;li&gt;Drift detection to maintain infrastructure consistency&lt;/li&gt;
&lt;li&gt;Distributed tracing for debugging microservices interactions&lt;/li&gt;
&lt;li&gt;Security-first approach with encrypted secrets and minimal attack surface&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Technology Stack Selected:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure as Code&lt;/strong&gt;: Terraform for AWS resource provisioning&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configuration Management&lt;/strong&gt;: Ansible for server configuration and application deployment&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CI/CD Orchestration&lt;/strong&gt;: GitHub Actions for workflow automation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Containerization&lt;/strong&gt;: Docker and Docker Compose for service isolation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reverse Proxy&lt;/strong&gt;: Traefik for routing, load balancing, and automatic SSL&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Observability&lt;/strong&gt;: Zipkin for distributed request tracing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Message Queue&lt;/strong&gt;: Redis for asynchronous log processing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The end goal was a system where infrastructure changes and application updates could be deployed with a single &lt;code&gt;git push&lt;/code&gt;, with built-in safety mechanisms including drift detection, email notifications, and manual approval gates for production environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  [SYSTEM ARCHITECTURE AND DESIGN]
&lt;/h2&gt;

&lt;p&gt;The application architecture follows a microservices pattern with seven distinct services, each serving a specific purpose. This polyglot architecture demonstrates real-world complexity where different services are written in different programming languages based on their specific requirements.&lt;/p&gt;

&lt;h3&gt;
  
  
  Architecture Diagram
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frg5y5fsq0q9n41mv3qb1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frg5y5fsq0q9n41mv3qb1.png" alt="Microservice Architecture" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure 1: Complete microservices architecture showing all 7 services, their technologies, and data flow between components&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Service Responsibilities
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Frontend Service (Vue.js)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Single-page application providing the complete user interface&lt;/li&gt;
&lt;li&gt;Communicates with backend APIs via RESTful endpoints&lt;/li&gt;
&lt;li&gt;Implements distributed tracing via Zipkin client&lt;/li&gt;
&lt;li&gt;Served as static assets with client-side routing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Auth API (Go)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Handles user authentication and authorization&lt;/li&gt;
&lt;li&gt;Generates and validates JWT tokens for session management&lt;/li&gt;
&lt;li&gt;Communicates with Users API to validate credentials&lt;/li&gt;
&lt;li&gt;Written in Go for performance and concurrency&lt;/li&gt;
&lt;li&gt;Port: 8081 (internal)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Todos API (Node.js)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provides full CRUD operations for user TODO items&lt;/li&gt;
&lt;li&gt;Publishes create/delete events to Redis message queue&lt;/li&gt;
&lt;li&gt;Validates JWT tokens for authenticated requests&lt;/li&gt;
&lt;li&gt;Asynchronous, event-driven architecture&lt;/li&gt;
&lt;li&gt;Port: 8082 (internal)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4. Users API (Spring Boot / Java)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Manages user profiles and account information&lt;/li&gt;
&lt;li&gt;Provides user lookup for authentication service&lt;/li&gt;
&lt;li&gt;Simplified implementation (read-only operations)&lt;/li&gt;
&lt;li&gt;Leverages Spring Boot ecosystem&lt;/li&gt;
&lt;li&gt;Port: 8083 (internal)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5. Log Message Processor (Python)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Consumes messages from Redis queue&lt;/li&gt;
&lt;li&gt;Processes TODO creation and deletion events&lt;/li&gt;
&lt;li&gt;Logs events to stdout for monitoring/aggregation&lt;/li&gt;
&lt;li&gt;Demonstrates asynchronous processing pattern&lt;/li&gt;
&lt;li&gt;Queue-based, no exposed ports&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;6. Redis&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In-memory data store used as message queue&lt;/li&gt;
&lt;li&gt;Pub/sub pattern for event broadcasting&lt;/li&gt;
&lt;li&gt;Minimal configuration, Alpine-based image&lt;/li&gt;
&lt;li&gt;Port: 6379 (internal only)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;7. Zipkin&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Distributed tracing system for microservices&lt;/li&gt;
&lt;li&gt;Collects timing data from all services&lt;/li&gt;
&lt;li&gt;Provides visualization of request flows&lt;/li&gt;
&lt;li&gt;Helps identify performance bottlenecks&lt;/li&gt;
&lt;li&gt;Port: 9411 (exposed via Traefik)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;8. Traefik&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Modern reverse proxy and load balancer&lt;/li&gt;
&lt;li&gt;Automatic service discovery via Docker labels&lt;/li&gt;
&lt;li&gt;Let's Encrypt integration for automatic SSL certificates&lt;/li&gt;
&lt;li&gt;Path-based and host-based routing&lt;/li&gt;
&lt;li&gt;HTTP to HTTPS automatic redirection&lt;/li&gt;
&lt;li&gt;Ports: 80 (HTTP), 443 (HTTPS), 8080 (Dashboard)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Network Architecture
&lt;/h3&gt;

&lt;p&gt;&lt;a href="/home/leo/Coding/Devops/HNG/DevOps-Stage-6/blog-images/docker_network_architecture_1765314361983.png" class="article-body-image-wrapper"&gt;&lt;img src="/home/leo/Coding/Devops/HNG/DevOps-Stage-6/blog-images/docker_network_architecture_1765314361983.png" alt="Docker Network Architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure 2: Docker networking showing isolated app-network with Traefik as the only external gateway&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;All services communicate via a dedicated Docker bridge network named &lt;code&gt;app-network&lt;/code&gt;. This provides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Network isolation from the host system&lt;/li&gt;
&lt;li&gt;Service-to-service communication using container names (DNS resolution)&lt;/li&gt;
&lt;li&gt;No exposed ports except through Traefik&lt;/li&gt;
&lt;li&gt;Encrypted traffic between external clients and Traefik&lt;/li&gt;
&lt;/ul&gt;
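&lt;p&gt;In Docker Compose terms, this amounts to declaring the bridge network once and attaching every service to it; a minimal sketch (the service shown is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;services:
  todos-api:
    build: ./todos-api
    networks:
      - app-network # reachable as http://todos-api:8082 from other containers

networks:
  app-network:
    driver: bridge
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;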

&lt;h2&gt;
  
  
  [INFRASTRUCTURE AS CODE WITH TERRAFORM]
&lt;/h2&gt;

&lt;p&gt;Terraform was chosen for infrastructure provisioning because it provides declarative configuration, state management, and a mature AWS provider with extensive resource coverage.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Resources Provisioned
&lt;/h3&gt;

&lt;p&gt;The Terraform configuration provisions the following AWS resources:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. EC2 Instance&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Instance Type: Configurable via variable (default: t2.medium recommended)&lt;/li&gt;
&lt;li&gt;AMI: Latest Ubuntu 22.04 LTS (Jammy Jellyfish)&lt;/li&gt;
&lt;li&gt;Automatically tagged for easy identification and billing&lt;/li&gt;
&lt;li&gt;Uses data source to always fetch the latest Ubuntu AMI
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Data source ensures we always use the latest Ubuntu AMI&lt;/span&gt;
&lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"aws_ami"&lt;/span&gt; &lt;span class="s2"&gt;"ubuntu"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;most_recent&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;owners&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"099720109477"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="c1"&gt;# Canonical's AWS account&lt;/span&gt;

  &lt;span class="nx"&gt;filter&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"name"&lt;/span&gt;
    &lt;span class="nx"&gt;values&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;filter&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"virtualization-type"&lt;/span&gt;
    &lt;span class="nx"&gt;values&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"hvm"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# EC2 Instance resource definition&lt;/span&gt;
&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_instance"&lt;/span&gt; &lt;span class="s2"&gt;"todo_app"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;ami&lt;/span&gt;                    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_ami&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ubuntu&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;instance_type&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;instance_type&lt;/span&gt;
  &lt;span class="nx"&gt;key_name&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_key_pair&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;deployer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;key_name&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_security_group_ids&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;aws_security_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;todo_app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;Name&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"todo-app-server-v2"&lt;/span&gt;
    &lt;span class="nx"&gt;Environment&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"production"&lt;/span&gt;
    &lt;span class="nx"&gt;Project&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"hngi13-stage6"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;2. Security Group&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ingress: SSH (22), HTTP (80), HTTPS (443)&lt;/li&gt;
&lt;li&gt;Egress: All traffic allowed (for package downloads, API calls, etc.)&lt;/li&gt;
&lt;li&gt;SSH access restricted to specific CIDR block for security&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;resource "aws_security_group" "todo_app" {
  name        = "todo-app-sg"
  description = "Security group for TODO application"

  # HTTP access for initial Let's Encrypt challenges
  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # HTTPS for production traffic
  ingress {
    description = "HTTPS"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # SSH restricted to specific CIDR for security
  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = [var.ssh_cidr] # Only allow from specific IP range
  }

  # Allow all outbound traffic
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "todo-app-sg"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;3. SSH Key Pair&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Public key uploaded to AWS for instance access&lt;/li&gt;
&lt;li&gt;Private key stored securely in GitHub Secrets&lt;/li&gt;
&lt;li&gt;Used by both Terraform and Ansible for authentication&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;resource "aws_key_pair" "deployer" {
  key_name   = var.key_name
  public_key = file(var.public_key_path)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;4. Remote State Configuration (S3 + DynamoDB)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;S3 bucket stores Terraform state file with encryption&lt;/li&gt;
&lt;li&gt;DynamoDB table provides state locking to prevent concurrent modifications&lt;/li&gt;
&lt;li&gt;Configured in separate backend.tf file&lt;/li&gt;
&lt;/ul&gt;
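&lt;p&gt;The backend configuration itself is only a few lines; a sketch of &lt;code&gt;backend.tf&lt;/code&gt; (the bucket, key, and table names here are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket" # placeholder name
    key            = "todo-app/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true              # server-side encryption for the state file
    dynamodb_table = "terraform-locks" # state locking table
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;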

&lt;h3&gt;
  
  
  Dynamic Ansible Inventory Generation
&lt;/h3&gt;

&lt;p&gt;One of the most elegant aspects of this setup is the automatic generation of Ansible inventory files. Since the EC2 instance's public IP address is only known after Terraform creates it, we need a mechanism to pass this information to Ansible.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;# Template file for inventory generation
resource "local_file" "ansible_inventory" {
  content = templatefile("${path.module}/inventory.tftpl", {
    host = aws_instance.todo_app.public_ip
    user = var.server_user
    key  = var.private_key_path
  })
  filename = "${path.module}/../ansible/inventory/hosts.yml"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;inventory.tftpl&lt;/code&gt; template file looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[web]
${host} ansible_user=${user} ansible_ssh_private_key_file=${key}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After Terraform applies, this becomes a fully functional Ansible inventory file with the actual IP address populated.&lt;/p&gt;
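&lt;p&gt;For example, with a freshly created instance the rendered file might look like this (the IP, user, and key path below are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[web]
203.0.113.10 ansible_user=ubuntu ansible_ssh_private_key_file=../terraform/deployer_key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;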

&lt;h3&gt;
  
  
  Integrated Terraform-Ansible Provisioning
&lt;/h3&gt;

&lt;p&gt;To create a truly seamless deployment experience, Terraform automatically triggers Ansible configuration after infrastructure creation. This is achieved using a &lt;code&gt;null_resource&lt;/code&gt; with a &lt;code&gt;local-exec&lt;/code&gt; provisioner:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;resource "null_resource" "ansible_provision" {
  # Trigger re-provisioning when instance or inventory changes
  triggers = {
    instance_id = aws_instance.todo_app.id
    inventory   = local_file.ansible_inventory.content
  }

  provisioner "local-exec" {
    command = &amp;lt;&amp;lt;-EOT
      echo "Waiting for SSH to be available..."

      # Wait up to 5 minutes for SSH to become available
      for i in {1..30}; do
        nc -z -w 5 ${aws_instance.todo_app.public_ip} 22 &amp;amp;&amp;amp; break
        echo "Waiting for port 22... (attempt $i/30)"
        sleep 10
      done

      echo "Running Ansible playbook..."
      # Disable host key checking for automated deployments
      export ANSIBLE_HOST_KEY_CHECKING=False
      export ANSIBLE_CONFIG=${path.module}/../ansible/ansible.cfg

      ansible-playbook \
        -i ${path.module}/../ansible/inventory/hosts.yml \
        ${path.module}/../ansible/playbook.yml \
        --extra-vars "domain_name=${var.domain_name} email=${var.email}"
    EOT
  }

  depends_on = [
    aws_instance.todo_app,
    local_file.ansible_inventory
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This approach provides several benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Infrastructure and configuration are provisioned in a single Terraform apply&lt;/li&gt;
&lt;li&gt;No manual intervention required between infrastructure and configuration steps&lt;/li&gt;
&lt;li&gt;SSH availability check prevents Ansible from failing on a booting instance&lt;/li&gt;
&lt;li&gt;Extra variables (domain, email) are passed from Terraform to Ansible seamlessly&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  [CI/CD PIPELINE IMPLEMENTATION WITH GITHUB ACTIONS]
&lt;/h2&gt;

&lt;p&gt;&lt;a href="/home/leo/Coding/Devops/HNG/DevOps-Stage-6/blog-images/cicd_pipeline_flow_1765314310386.png" class="article-body-image-wrapper"&gt;&lt;img src="/home/leo/Coding/Devops/HNG/DevOps-Stage-6/blog-images/cicd_pipeline_flow_1765314310386.png" alt="CI/CD Pipeline Flow"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure 3: Complete CI/CD pipeline showing infrastructure and application deployment workflows with drift detection and manual approval gates&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;GitHub Actions provides the orchestration layer for our CI/CD pipeline. Two separate workflows handle infrastructure changes and application deployments respectively.&lt;/p&gt;

&lt;h3&gt;
  
  
  Infrastructure Pipeline (infra.yml)
&lt;/h3&gt;

&lt;p&gt;This workflow implements a sophisticated drift detection and approval mechanism:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;&lt;/code&gt;`yaml&lt;br&gt;
name: Infrastructure Pipeline&lt;/p&gt;

&lt;p&gt;on:&lt;br&gt;
  push:&lt;br&gt;
    paths:&lt;br&gt;
      - "infra/terraform/&lt;strong&gt;"&lt;br&gt;
      - "infra/ansible/&lt;/strong&gt;"&lt;/p&gt;

&lt;p&gt;jobs:&lt;br&gt;
  terraform-plan:&lt;br&gt;
    runs-on: ubuntu-latest&lt;br&gt;
    outputs:&lt;br&gt;
      drift_detected: ${{ steps.plan.outputs.exitcode == 2 }}&lt;br&gt;
    env:&lt;br&gt;
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}&lt;br&gt;
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}&lt;br&gt;
      AWS_DEFAULT_REGION: "us-east-1"&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;steps:
  - uses: actions/checkout@v3

  - name: Setup Terraform
    uses: hashicorp/setup-terraform@v2
    with:
      terraform_wrapper: false # Allows capturing raw output

  - name: Terraform Init
    run: terraform init
    working-directory: infra/terraform

  - name: Create SSH Keys for Plan
    run: |
      echo "${{ secrets.SSH_PUBLIC_KEY }}" &amp;gt; infra/terraform/deployer_key.pub
      echo "${{ secrets.SSH_PRIVATE_KEY }}" &amp;gt; infra/terraform/deployer_key
      chmod 600 infra/terraform/deployer_key

  - name: Terraform Plan
    id: plan
    run: |
      exit_code=0
      terraform plan -detailed-exitcode -out=tfplan || exit_code=$?
      echo "exitcode=$exit_code" &amp;gt;&amp;gt; $GITHUB_OUTPUT

      if [ $exit_code -eq 2 ]; then
        echo "Infrastructure drift detected!"
      elif [ $exit_code -eq 1 ]; then
        echo "Terraform plan failed with errors"
        exit 1
      else
        echo "No infrastructure changes detected"
      fi
    working-directory: infra/terraform
    env:
      TF_VAR_public_key_path: "${{ github.workspace }}/infra/terraform/deployer_key.pub"
      TF_VAR_private_key_path: "${{ github.workspace }}/infra/terraform/deployer_key"
      TF_VAR_domain_name: ${{ secrets.DOMAIN_NAME }}
      TF_VAR_email: ${{ secrets.ACME_EMAIL }}

  - name: Upload Terraform Plan
    uses: actions/upload-artifact@v4
    with:
      name: tfplan
      path: infra/terraform/tfplan

  - name: Send Email on Drift
    if: steps.plan.outputs.exitcode == 2
    uses: dawidd6/action-send-mail@v3
    with:
      server_address: smtp.gmail.com
      server_port: 465
      username: ${{ secrets.MAIL_USERNAME }}
      password: ${{ secrets.MAIL_PASSWORD }}
      subject: "Infrastructure Drift Detected: Manual Review Required"
      html_body: |
        &amp;lt;h3&amp;gt;Terraform Drift Detected&amp;lt;/h3&amp;gt;
        &amp;lt;p&amp;gt;Infrastructure changes have been detected for &amp;lt;b&amp;gt;${{ github.repository }}&amp;lt;/b&amp;gt;.&amp;lt;/p&amp;gt;
        &amp;lt;p&amp;gt;Please review the Terraform plan and approve the deployment to apply changes.&amp;lt;/p&amp;gt;
        &amp;lt;p&amp;gt;
          &amp;lt;a href="${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}" 
             style="background-color: #2ea44f; color: white; padding: 10px 20px; 
                    text-decoration: none; border-radius: 5px;"&amp;gt;
            View Plan &amp;amp; Approve Deployment
          &amp;lt;/a&amp;gt;
        &amp;lt;/p&amp;gt;
      to: ${{ secrets.MAIL_TO }}
      from: GitHub Actions CI/CD
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;terraform-apply:&lt;br&gt;
    needs: terraform-plan&lt;br&gt;
    if: needs.terraform-plan.outputs.drift_detected == 'true'&lt;br&gt;
    runs-on: ubuntu-latest&lt;br&gt;
    environment: production # Requires manual approval&lt;br&gt;
    env:&lt;br&gt;
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}&lt;br&gt;
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}&lt;br&gt;
      AWS_DEFAULT_REGION: "us-east-1"&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;steps:
  - uses: actions/checkout@v3

  - name: Setup Terraform
    uses: hashicorp/setup-terraform@v2

  - name: Create SSH Keys
    run: |
      echo "${{ secrets.SSH_PUBLIC_KEY }}" &amp;gt; infra/terraform/deployer_key.pub
      echo "${{ secrets.SSH_PRIVATE_KEY }}" &amp;gt; infra/terraform/deployer_key
      chmod 600 infra/terraform/deployer_key

  - name: Verify SSH Key Format
    run: |
      chmod 600 infra/terraform/deployer_key
      ssh-keygen -l -f infra/terraform/deployer_key || \
        echo "::error::SSH Private Key is invalid! Check your GitHub Secret."

  - name: Terraform Init
    run: terraform init
    working-directory: infra/terraform

  - name: Download Terraform Plan
    uses: actions/download-artifact@v4
    with:
      name: tfplan
      path: infra/terraform

  - name: Terraform Apply
    run: terraform apply -auto-approve tfplan
    working-directory: infra/terraform
    env:
      TF_VAR_public_key_path: "${{ github.workspace }}/infra/terraform/deployer_key.pub"
      TF_VAR_private_key_path: "${{ github.workspace }}/infra/terraform/deployer_key"
      TF_VAR_domain_name: ${{ secrets.DOMAIN_NAME }}
      TF_VAR_email: ${{ secrets.ACME_EMAIL }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Pipeline Features Explained:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Trigger Conditions:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The workflow is triggered on &lt;code&gt;push&lt;/code&gt; events to the &lt;code&gt;infra/terraform/**&lt;/code&gt; or &lt;code&gt;infra/ansible/**&lt;/code&gt; paths. This ensures that any changes to infrastructure code or Ansible playbooks automatically initiate a plan.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Terraform Plan Job (&lt;code&gt;terraform-plan&lt;/code&gt;):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;runs-on: ubuntu-latest&lt;/code&gt;&lt;/strong&gt;: Executes on a fresh Ubuntu runner.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Credentials&lt;/strong&gt;: &lt;code&gt;AWS_ACCESS_KEY_ID&lt;/code&gt; and &lt;code&gt;AWS_SECRET_ACCESS_KEY&lt;/code&gt; are securely injected from GitHub Secrets, ensuring the workflow has permissions to interact with AWS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;actions/checkout@v3&lt;/code&gt;&lt;/strong&gt;: Checks out the repository content.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;hashicorp/setup-terraform@v2&lt;/code&gt;&lt;/strong&gt;: Installs the specified Terraform version. &lt;code&gt;terraform_wrapper: false&lt;/code&gt; is crucial here to allow capturing the raw exit code from &lt;code&gt;terraform plan&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;Terraform Init&lt;/code&gt;&lt;/strong&gt;: Initializes the Terraform working directory, downloading providers and setting up the S3 backend for state management.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;Create SSH Keys for Plan&lt;/code&gt;&lt;/strong&gt;: Dynamically creates &lt;code&gt;deployer_key.pub&lt;/code&gt; and &lt;code&gt;deployer_key&lt;/code&gt; files from GitHub Secrets. These are needed for Terraform to pass the public key to AWS and for the &lt;code&gt;null_resource&lt;/code&gt; to use the private key for Ansible. Proper &lt;code&gt;chmod 600&lt;/code&gt; is applied for security.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;Terraform Plan&lt;/code&gt; (&lt;code&gt;id: plan&lt;/code&gt;)&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Executes &lt;code&gt;terraform plan -detailed-exitcode -out=tfplan&lt;/code&gt;. The &lt;code&gt;-detailed-exitcode&lt;/code&gt; option is key for drift detection:&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;0&lt;/code&gt;: No changes, infrastructure matches state.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;1&lt;/code&gt;: Error occurred.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;2&lt;/code&gt;: Changes detected, infrastructure differs from state.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;exit_code&lt;/code&gt; is captured and set as an output (&lt;code&gt;drift_detected&lt;/code&gt;) for subsequent jobs.&lt;/li&gt;
&lt;li&gt;Terraform variables (&lt;code&gt;TF_VAR_public_key_path&lt;/code&gt;, &lt;code&gt;TF_VAR_private_key_path&lt;/code&gt;, &lt;code&gt;TF_VAR_domain_name&lt;/code&gt;, &lt;code&gt;TF_VAR_email&lt;/code&gt;) are passed from GitHub Actions environment variables, which in turn are populated from GitHub Secrets.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;&lt;code&gt;Upload Terraform Plan&lt;/code&gt;&lt;/strong&gt;: The generated &lt;code&gt;tfplan&lt;/code&gt; file (which contains the proposed changes) is uploaded as an artifact. This allows the &lt;code&gt;terraform-apply&lt;/code&gt; job to use the exact same plan, preventing "plan drift" between the plan and apply stages.&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;&lt;code&gt;Send Email on Drift&lt;/code&gt;&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;This step runs &lt;code&gt;if: steps.plan.outputs.exitcode == 2&lt;/code&gt;, meaning it only executes if drift is detected.&lt;/li&gt;
&lt;li&gt;It uses &lt;code&gt;dawidd6/action-send-mail@v3&lt;/code&gt; to send an email notification to a configured address (&lt;code&gt;MAIL_TO&lt;/code&gt; from secrets).&lt;/li&gt;
&lt;li&gt;The email includes a direct link to the GitHub Actions run, prompting a manual review.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
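&lt;p&gt;The exit-code handling described above can be sketched as a small shell helper (a minimal sketch; the function name and output format are illustrative, not part of the workflow itself):&lt;/p&gt;

```shell
# Map `terraform plan -detailed-exitcode` exit codes to the
# drift_detected output the workflow relies on.
# 0 = no changes, 2 = drift detected, anything else = error.
drift_from_exitcode() {
  case "$1" in
    0) echo "drift_detected=false" ;;
    2) echo "drift_detected=true" ;;
    *) echo "terraform plan failed (exit $1)" >&2; return 1 ;;
  esac
}
```

&lt;p&gt;In the workflow, this result would be appended to &lt;code&gt;$GITHUB_OUTPUT&lt;/code&gt; so the &lt;code&gt;terraform-apply&lt;/code&gt; job can read it.&lt;/p&gt;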

&lt;p&gt;&lt;strong&gt;3. Terraform Apply Job (&lt;code&gt;terraform-apply&lt;/code&gt;):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;needs: terraform-plan&lt;/code&gt;&lt;/strong&gt;: This job depends on the &lt;code&gt;terraform-plan&lt;/code&gt; job completing successfully.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;if: needs.terraform-plan.outputs.drift_detected == 'true'&lt;/code&gt;&lt;/strong&gt;: This job only runs if drift was detected in the planning phase. If there's no drift, there's nothing to apply.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;environment: production&lt;/code&gt;&lt;/strong&gt;: This is a critical security feature. GitHub Environments allow you to configure protection rules, such as requiring manual approval before a workflow can proceed to this step. This acts as a "human in the loop" for production infrastructure changes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Credentials&lt;/strong&gt;: Same as the plan job.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;actions/checkout@v3&lt;/code&gt;&lt;/strong&gt;: Checks out the repository.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;Setup Terraform&lt;/code&gt;&lt;/strong&gt;: Installs Terraform.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;Create SSH Keys&lt;/code&gt;&lt;/strong&gt;: Recreates the SSH key files, as runners are ephemeral.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;Verify SSH Key Format&lt;/code&gt;&lt;/strong&gt;: A defensive step to ensure the private key from secrets is valid before attempting to use it. This helps catch misconfigurations early.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;Terraform Init&lt;/code&gt;&lt;/strong&gt;: Initializes Terraform.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;Download Terraform Plan&lt;/code&gt;&lt;/strong&gt;: Downloads the &lt;code&gt;tfplan&lt;/code&gt; artifact generated by the &lt;code&gt;terraform-plan&lt;/code&gt; job. This ensures that the &lt;code&gt;apply&lt;/code&gt; operation is based on the exact plan that was reviewed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;Terraform Apply&lt;/code&gt;&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Executes &lt;code&gt;terraform apply -auto-approve tfplan&lt;/code&gt;. The &lt;code&gt;-auto-approve&lt;/code&gt; flag is used because the manual approval for the &lt;code&gt;production&lt;/code&gt; environment already serves as the explicit approval.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;tfplan&lt;/code&gt; file is passed directly, guaranteeing that only the planned changes are applied.&lt;/li&gt;
&lt;li&gt;Terraform variables are passed as in the plan step.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
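&lt;p&gt;The "Verify SSH Key Format" step can be wrapped as a reusable check (a minimal sketch; the function name is illustrative, and &lt;code&gt;ssh-keygen&lt;/code&gt; is assumed to be installed on the runner):&lt;/p&gt;

```shell
# Return success only if ssh-keygen can fingerprint the key file,
# i.e. the GitHub Secret was pasted correctly.
key_is_valid() {
  ssh-keygen -l -f "$1" >/dev/null 2>&1
}
```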

&lt;p&gt;This infrastructure pipeline provides a robust, secure, and auditable process for managing infrastructure changes, incorporating drift detection, email notifications, and manual approval for critical environments.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deployment Pipeline (deploy.yml)
&lt;/h3&gt;

&lt;p&gt;The application deployment pipeline handles code changes and responds to infrastructure updates:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Application Deployment

on:
  workflow_run:
    workflows: ["Infrastructure Pipeline"]
    types: [completed]
  push:
    paths:
      - "frontend/**"
      - "auth-api/**"
      - "todos-api/**"
      - "users-api/**"
      - "log-message-processor/**"
      - "docker-compose.yml"

jobs:
  deploy:
    runs-on: ubuntu-latest
    if: ${{ github.event.workflow_run.conclusion == 'success' || github.event_name == 'push' }}
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      AWS_DEFAULT_REGION: "us-east-1"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;steps:
  - uses: actions/checkout@v3

  - name: Get Server IP Dynamically
    id: get-ip
    run: |
      IP=$(aws ec2 describe-instances \
        --filters "Name=tag:Name,Values=todo-app-server-v2" \
                  "Name=instance-state-name,Values=running" \
        --query "Reservations[*].Instances[*].PublicIpAddress" \
        --output text)
      echo "SERVER_IP=$IP" &amp;gt;&amp;gt; $GITHUB_ENV
      echo "Deploying to instance at IP: $IP"

  - name: Deploy via Ansible
    uses: dawidd6/action-ansible-playbook@v2
    with:
      playbook: infra/ansible/playbook.yml
      directory: ./
      key: ${{ secrets.SSH_PRIVATE_KEY }}
      inventory: |
        [web]
        ${{ env.SERVER_IP }} ansible_user=ubuntu
      options: |
        --extra-vars "domain_name=${{ secrets.DOMAIN_NAME }} email=${{ secrets.ACME_EMAIL }}"
    env:
      ANSIBLE_CONFIG: infra/ansible/ansible.cfg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Key Aspects:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trigger Conditions:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;workflow_run&lt;/code&gt;&lt;/strong&gt;: This pipeline is triggered when the &lt;code&gt;Infrastructure Pipeline&lt;/code&gt; completes. This ensures that if infrastructure changes (e.g., a new EC2 instance is provisioned), the application deployment automatically follows. The &lt;code&gt;types: [completed]&lt;/code&gt; ensures it runs after the workflow finishes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;push&lt;/code&gt;&lt;/strong&gt;: It also triggers on direct pushes to specific application code directories (&lt;code&gt;frontend/**&lt;/code&gt;, &lt;code&gt;auth-api/**&lt;/code&gt;, etc.) or the &lt;code&gt;docker-compose.yml&lt;/code&gt; file. This allows for rapid iteration on application code without requiring an infrastructure change.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;if&lt;/code&gt; condition&lt;/strong&gt;: &lt;code&gt;if: ${{ github.event.workflow_run.conclusion == 'success' || github.event_name == 'push' }}&lt;/code&gt; ensures the deployment only proceeds if the infrastructure pipeline was successful (if triggered by &lt;code&gt;workflow_run&lt;/code&gt;) or if it's a direct code push.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Dynamic IP Resolution:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;Get Server IP Dynamically&lt;/code&gt;&lt;/strong&gt;: This step uses the AWS CLI to find the public IP address of the EC2 instance.

&lt;ul&gt;
&lt;li&gt;It filters instances by the &lt;code&gt;Name&lt;/code&gt; tag (&lt;code&gt;todo-app-server-v2&lt;/code&gt;) and ensures the instance is &lt;code&gt;running&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The JMESPath query (&lt;code&gt;--query "Reservations[*].Instances[*].PublicIpAddress"&lt;/code&gt;) extracts the public IP address from the response.&lt;/li&gt;
&lt;li&gt;The IP is then stored in a GitHub Actions environment variable (&lt;code&gt;SERVER_IP&lt;/code&gt;) for use in subsequent steps. This is crucial because the EC2 instance's IP might change if it's stopped and started, or if a new instance replaces an old one.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
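&lt;p&gt;One gap worth guarding in the step above: if no running instance matches the tag filter, the AWS CLI returns an empty string (or the literal &lt;code&gt;None&lt;/code&gt;), and the deploy would target a blank host. A minimal sketch of a guard (the function name is illustrative):&lt;/p&gt;

```shell
# Validate the AWS CLI query result before exporting SERVER_IP.
validate_ip() {
  case "$1" in
    ""|None) echo "no running instance matched the tag filter" >&2; return 1 ;;
    *) echo "$1" ;;
  esac
}
```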

&lt;p&gt;&lt;strong&gt;Inline Inventory:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;Deploy via Ansible&lt;/code&gt;&lt;/strong&gt;: This step uses the &lt;code&gt;dawidd6/action-ansible-playbook@v2&lt;/code&gt; action to run the Ansible playbook.

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;playbook: infra/ansible/playbook.yml&lt;/code&gt;&lt;/strong&gt;: Specifies the main Ansible playbook.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;key: ${{ secrets.SSH_PRIVATE_KEY }}&lt;/code&gt;&lt;/strong&gt;: The SSH private key is securely passed from GitHub Secrets, allowing Ansible to connect to the EC2 instance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;inventory: | ...&lt;/code&gt;&lt;/strong&gt;: Instead of a static inventory file, an inline inventory is generated using the dynamically fetched &lt;code&gt;SERVER_IP&lt;/code&gt;. This makes the deployment highly flexible and resilient to IP changes. The &lt;code&gt;ansible_user=ubuntu&lt;/code&gt; specifies the default user for SSH connection.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;options: | --extra-vars ...&lt;/code&gt;&lt;/strong&gt;: Additional variables like &lt;code&gt;domain_name&lt;/code&gt; and &lt;code&gt;email&lt;/code&gt; are passed to Ansible from GitHub Secrets, ensuring consistency across the pipeline.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;ANSIBLE_CONFIG: infra/ansible/ansible.cfg&lt;/code&gt;&lt;/strong&gt;: Points Ansible to a custom configuration file if needed.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
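&lt;p&gt;The inline inventory can be reproduced locally when debugging connection issues (a minimal sketch; the helper name is illustrative):&lt;/p&gt;

```shell
# Render the same [web] inventory the workflow feeds to Ansible.
make_inventory() {
  printf '[web]\n%s ansible_user=ubuntu\n' "$1"
}
```

&lt;p&gt;Saved to a file, the output can be passed to &lt;code&gt;ansible web -i inventory.ini -m ping&lt;/code&gt; for a quick reachability test before running the full playbook.&lt;/p&gt;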

&lt;p&gt;This application deployment pipeline is designed for efficiency and reliability, automatically reacting to both infrastructure and code changes, and dynamically adapting to the current state of the infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  [CONFIGURATION MANAGEMENT WITH ANSIBLE]
&lt;/h2&gt;

&lt;p&gt;Ansible handles all server configuration and application deployment tasks. The playbook is structured using roles for modularity and reusability.&lt;/p&gt;

&lt;h3&gt;
  
  
  Playbook Structure
&lt;/h3&gt;

&lt;p&gt;The main Ansible playbook (&lt;code&gt;infra/ansible/playbook.yml&lt;/code&gt;) orchestrates the execution of different roles:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- hosts: web
  become: yes # Run tasks with sudo privileges
  vars:
    project_root: /opt/todo-app # Base directory for the application
    repo_url: https://github.com/PrimoCrypt/DevOps-Stage-6.git # Application repository URL
    # jwt_secret, domain_name, and email are passed via --extra-vars from GitHub Actions
  roles:
    - dependencies # Installs Docker, Docker Compose, Git, configures firewall
    - deploy # Clones repo, creates .env, runs docker-compose
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This structure cleanly separates concerns: the &lt;code&gt;dependencies&lt;/code&gt; role sets up the server environment, and the &lt;code&gt;deploy&lt;/code&gt; role handles the application-specific deployment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Dependencies Role
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;dependencies&lt;/code&gt; role (&lt;code&gt;infra/ansible/roles/dependencies/tasks/main.yml&lt;/code&gt;) ensures all prerequisite software is installed and configured on the EC2 instance. This makes the instance ready to host Dockerized applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tasks performed:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;System package updates&lt;/strong&gt;: Ensures the system is up-to-date (&lt;code&gt;apt update &amp;amp;&amp;amp; apt upgrade&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker Engine installation&lt;/strong&gt;: Installs the latest stable version of Docker CE (Community Edition) by adding Docker's official GPG key and repository.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker Compose installation&lt;/strong&gt;: Downloads and installs the latest Docker Compose binary (v2.x) to &lt;code&gt;/usr/local/bin&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Git installation&lt;/strong&gt;: Installs Git for cloning the application repository.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;UFW firewall configuration&lt;/strong&gt;: Configures the Uncomplicated Firewall (UFW) to allow SSH (port 22), HTTP (port 80), and HTTPS (port 443) traffic, then enables the firewall.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker service enablement and startup&lt;/strong&gt;: Ensures the Docker daemon starts automatically on boot and is currently running.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;User permissions for Docker socket&lt;/strong&gt;: Adds the &lt;code&gt;ubuntu&lt;/code&gt; user to the &lt;code&gt;docker&lt;/code&gt; group, allowing them to run Docker commands without &lt;code&gt;sudo&lt;/code&gt;. This requires a reboot or re-login to take effect, which is handled implicitly by subsequent SSH connections.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Example task (abbreviated):&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Install Docker dependencies
  ansible.builtin.apt:
    name:
      - apt-transport-https
      - ca-certificates
      - curl
      - gnupg
      - lsb-release
    state: present
    update_cache: yes

- name: Add Docker GPG key
  ansible.builtin.apt_key:
    url: https://download.docker.com/linux/ubuntu/gpg
    state: present

- name: Add Docker APT repository
  ansible.builtin.apt_repository:
    repo: "deb [arch={{ 'amd64' if ansible_architecture == 'x86_64' else ansible_architecture }}] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
    state: present

- name: Install Docker Engine
  ansible.builtin.apt:
    name:
      - docker-ce
      - docker-ce-cli
      - containerd.io
      - docker-buildx-plugin # For buildx support
      - docker-compose-plugin # For docker compose v2
    state: present

- name: Install Docker Compose (legacy v1 if needed, but v2 is preferred via plugin)
  # This task is often not needed if docker-compose-plugin is installed.
  # For older systems or specific needs, you might still download the binary.
  ansible.builtin.get_url:
    url: https://github.com/docker/compose/releases/download/v2.20.0/docker-compose-linux-x86_64
    dest: /usr/local/bin/docker-compose
    mode: "0755"
  when: false # Disable this if using docker-compose-plugin

- name: Ensure Docker service is running and enabled
  ansible.builtin.systemd:
    name: docker
    state: started
    enabled: yes

- name: Add 'ubuntu' user to the 'docker' group
  ansible.builtin.user:
    name: ubuntu
    groups: docker
    append: yes

- name: Configure UFW to allow SSH, HTTP, HTTPS
  community.general.ufw:
    rule: allow
    port: "{{ item }}"
    proto: tcp
  loop:
    - "22"
    - "80"
    - "443"

- name: Enable UFW
  community.general.ufw:
    state: enabled
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
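&lt;p&gt;After the dependencies role runs, a quick smoke check on the host confirms everything landed on &lt;code&gt;PATH&lt;/code&gt; (a minimal sketch; the function name is illustrative):&lt;/p&gt;

```shell
# Report any of the required commands that are missing from PATH.
check_deps() {
  missing=""
  for cmd in "$@"; do
    command -v "$cmd" >/dev/null 2>&1 || missing="$missing $cmd"
  done
  [ -z "$missing" ] || { echo "missing:$missing" >&2; return 1; }
}
```

&lt;p&gt;For example, &lt;code&gt;check_deps docker git ufw&lt;/code&gt; on the provisioned instance should return success.&lt;/p&gt;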

&lt;h3&gt;
  
  
  Deploy Role
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;deploy&lt;/code&gt; role (&lt;code&gt;infra/ansible/roles/deploy/tasks/main.yml&lt;/code&gt;) handles the actual application deployment and lifecycle management.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Workflow:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Clone or update the application repository&lt;/strong&gt;: Uses &lt;code&gt;git&lt;/code&gt; module to clone the repository if it doesn't exist, or pull the latest changes if it does. This ensures the server always has the most recent application code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Generate &lt;code&gt;.env&lt;/code&gt; file&lt;/strong&gt;: Creates an &lt;code&gt;.env&lt;/code&gt; file in the application's root directory using Ansible's &lt;code&gt;template&lt;/code&gt; module. This file contains environment variables required by the Docker Compose services, such as &lt;code&gt;DOMAIN_NAME&lt;/code&gt;, &lt;code&gt;ACME_EMAIL&lt;/code&gt;, and &lt;code&gt;JWT_SECRET&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stop existing containers (if running)&lt;/strong&gt;: The &lt;code&gt;docker_compose&lt;/code&gt; module handles this implicitly when &lt;code&gt;state: present&lt;/code&gt; and &lt;code&gt;pull: yes&lt;/code&gt; are used, as it will recreate containers if images have changed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pull latest Docker images&lt;/strong&gt;: &lt;code&gt;pull: yes&lt;/code&gt; in &lt;code&gt;docker_compose&lt;/code&gt; ensures that the latest images for all services are downloaded from Docker Hub or a private registry.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Start containers with &lt;code&gt;docker-compose&lt;/code&gt;&lt;/strong&gt;: The &lt;code&gt;community.docker.docker_compose&lt;/code&gt; module orchestrates the startup of all services defined in &lt;code&gt;docker-compose.yml&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Verify service health&lt;/strong&gt;: While not explicitly shown in the playbook snippet, a production setup would include tasks to wait for services to become healthy (e.g., using &lt;code&gt;wait_for&lt;/code&gt; module or health checks).&lt;/li&gt;
&lt;/ol&gt;
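&lt;p&gt;The health-verification step mentioned above could look like the following retry loop (a minimal sketch; the probe command is injectable, so in practice you would pass something like &lt;code&gt;curl -fsS https://your-domain/&lt;/code&gt;):&lt;/p&gt;

```shell
# Retry a probe command until it succeeds or attempts run out.
# Args: probe-command, max-attempts, delay-seconds.
wait_for_healthy() {
  probe=$1; tries=$2; delay=$3
  i=0
  while [ "$i" -lt "$tries" ]; do
    if sh -c "$probe" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}
```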

&lt;p&gt;&lt;strong&gt;Environment file generation:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Ensure project root directory exists
  ansible.builtin.file:
    path: "{{ project_root }}"
    state: directory
    mode: "0755"

- name: Clone or update application repository
  ansible.builtin.git:
    repo: "{{ repo_url }}"
    dest: "{{ project_root }}"
    version: master # Or a specific branch/tag
    update: yes
    force: yes # Force update in case of local changes

- name: Create .env file from template
  ansible.builtin.template:
    src: env.j2 # Template file located in infra/ansible/roles/deploy/templates/
    dest: "{{ project_root }}/.env"
    mode: "0600" # Secure permissions for sensitive environment variables

- name: Start application services with Docker Compose
  community.docker.docker_compose:
    project_src: "{{ project_root }}"
    state: present # Ensures services are running
    pull: yes # Pulls latest images before starting
    build: yes # Builds images if necessary (e.g., local Dockerfiles)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;env.j2&lt;/code&gt; template (&lt;code&gt;infra/ansible/roles/deploy/templates/env.j2&lt;/code&gt;) injects runtime configuration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DOMAIN_NAME={{ domain_name }}
ACME_EMAIL={{ email }}
JWT_SECRET={{ jwt_secret }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;jwt_secret&lt;/code&gt; variable would typically be passed as an &lt;code&gt;--extra-var&lt;/code&gt; from GitHub Actions, similar to &lt;code&gt;domain_name&lt;/code&gt; and &lt;code&gt;email&lt;/code&gt;, ensuring it's never hardcoded in the repository.&lt;/p&gt;
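&lt;p&gt;The secret itself should be long and random. One way to generate a 256-bit value to store in the GitHub Secret (a sketch; &lt;code&gt;openssl rand -hex 32&lt;/code&gt; is an equivalent one-liner):&lt;/p&gt;

```shell
# Emit 32 random bytes as 64 hex characters, suitable for JWT_SECRET.
gen_secret() {
  head -c 32 /dev/urandom | od -An -tx1 | tr -d ' \n'
}
```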

&lt;h2&gt;
  
  
  [CONTAINER ORCHESTRATION WITH DOCKER COMPOSE]
&lt;/h2&gt;

&lt;p&gt;Docker Compose orchestrates all eight services (Traefik included) with a single configuration file (&lt;code&gt;docker-compose.yml&lt;/code&gt;). This file defines the services, their dependencies, network configuration, and how they interact with Traefik.&lt;/p&gt;

&lt;h3&gt;
  
  
  Complete Docker Compose Configuration
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;&lt;/code&gt;`yaml&lt;br&gt;
version: "3.8"&lt;/p&gt;

&lt;p&gt;services:&lt;br&gt;
  traefik:&lt;br&gt;
    image: traefik:v3.6&lt;br&gt;
    command:&lt;br&gt;
      - "--api.insecure=true" # Enable Traefik dashboard (for debugging, disable in prod)&lt;br&gt;
      - "--providers.docker=true" # Enable Docker provider&lt;br&gt;
      - "--providers.docker.exposedbydefault=false" # Only expose services with traefik.enable=true&lt;br&gt;
      - "--entrypoints.web.address=:80" # HTTP entrypoint&lt;br&gt;
      - "--entrypoints.websecure.address=:443" # HTTPS entrypoint&lt;br&gt;
      - "--certificatesresolvers.myresolver.acme.tlschallenge=true" # Use TLS challenge for Let's Encrypt&lt;br&gt;
      - "--certificatesresolvers.myresolver.acme.email=${ACME_EMAIL}" # Email for Let's Encrypt notifications&lt;br&gt;
      - "--certificatesresolvers.myresolver.acme.storage=/letsencrypt/acme.json" # Storage for certificates&lt;br&gt;
    ports:&lt;br&gt;
      - "80:80" # Expose HTTP&lt;br&gt;
      - "443:443" # Expose HTTPS&lt;br&gt;
      - "8080:8080" # Expose Traefik dashboard (for debugging, disable in prod)&lt;br&gt;
    volumes:&lt;br&gt;
      - "./letsencrypt:/letsencrypt" # Persistent storage for Let's Encrypt certificates&lt;br&gt;
      - "/var/run/docker.sock:/var/run/docker.sock:ro" # Mount Docker socket for service discovery&lt;br&gt;
    networks:&lt;br&gt;
      - app-network&lt;br&gt;
    restart: unless-stopped # Always restart unless explicitly stopped&lt;/p&gt;

&lt;p&gt;frontend:&lt;br&gt;
    build: ./frontend # Build from local Dockerfile&lt;br&gt;
    image: frontend:latest # Tag the built image&lt;br&gt;
    container_name: frontend&lt;br&gt;
    labels:&lt;br&gt;
      - "traefik.enable=true"&lt;br&gt;
      - "traefik.http.routers.frontend.rule=Host(&lt;code&gt;${DOMAIN_NAME}&lt;/code&gt;)" # Route based on domain name&lt;br&gt;
      - "traefik.http.routers.frontend.entrypoints=websecure" # Use HTTPS entrypoint&lt;br&gt;
      - "traefik.http.routers.frontend.tls.certresolver=myresolver" # Use Let's Encrypt resolver&lt;br&gt;
      - "traefik.http.routers.frontend-http.rule=Host(&lt;code&gt;${DOMAIN_NAME}&lt;/code&gt;)" # HTTP router for redirect&lt;br&gt;
      - "traefik.http.routers.frontend-http.entrypoints=web" # Use HTTP entrypoint&lt;br&gt;
      - "traefik.http.routers.frontend-http.middlewares=https-redirect" # Apply HTTPS redirect middleware&lt;br&gt;
      - "traefik.http.middlewares.https-redirect.redirectscheme.scheme=https" # Define HTTPS redirect&lt;br&gt;
    networks:&lt;br&gt;
      - app-network&lt;br&gt;
    restart: unless-stopped&lt;/p&gt;

&lt;p&gt;auth-api:&lt;br&gt;
    build: ./auth-api&lt;br&gt;
    image: auth-api:latest&lt;br&gt;
    container_name: auth-api&lt;br&gt;
    environment:&lt;br&gt;
      - USERS_API_ADDRESS=&lt;a href="http://users-api:8083" rel="noopener noreferrer"&gt;http://users-api:8083&lt;/a&gt; # Internal service communication&lt;br&gt;
      - JWT_SECRET=${JWT_SECRET} # Injected from .env&lt;br&gt;
      - AUTH_API_PORT=8081&lt;br&gt;
      - ZIPKIN_URL=&lt;a href="http://zipkin:9411/api/v2/spans" rel="noopener noreferrer"&gt;http://zipkin:9411/api/v2/spans&lt;/a&gt; # Zipkin endpoint&lt;br&gt;
    labels:&lt;br&gt;
      - "traefik.enable=true"&lt;br&gt;
      - "traefik.http.routers.auth-api.rule=Host(&lt;code&gt;${DOMAIN_NAME}&lt;/code&gt;) &amp;amp;&amp;amp; PathPrefix(&lt;code&gt;/api/auth&lt;/code&gt;)" # Route by path prefix&lt;br&gt;
      - "traefik.http.routers.auth-api.entrypoints=websecure"&lt;br&gt;
      - "traefik.http.routers.auth-api.tls.certresolver=myresolver"&lt;br&gt;
      - "traefik.http.middlewares.auth-strip.stripprefix.prefixes=/api/auth" # Strip path prefix before forwarding&lt;br&gt;
      - "traefik.http.routers.auth-api.middlewares=auth-strip"&lt;br&gt;
    networks:&lt;br&gt;
      - app-network&lt;br&gt;
    restart: unless-stopped&lt;/p&gt;

&lt;p&gt;todos-api:&lt;br&gt;
    build: ./todos-api&lt;br&gt;
    image: todos-api:latest&lt;br&gt;
    container_name: todos-api&lt;br&gt;
    environment:&lt;br&gt;
      - REDIS_HOST=redis&lt;br&gt;
      - REDIS_PORT=6379&lt;br&gt;
      - REDIS_CHANNEL=log_channel&lt;br&gt;
      - TODO_API_PORT=8082&lt;br&gt;
      - JWT_SECRET=${JWT_SECRET}&lt;br&gt;
      - ZIPKIN_URL=&lt;a href="http://zipkin:9411/api/v2/spans" rel="noopener noreferrer"&gt;http://zipkin:9411/api/v2/spans&lt;/a&gt;&lt;br&gt;
    labels:&lt;br&gt;
      - "traefik.enable=true"&lt;br&gt;
      - "traefik.http.routers.todos-api.rule=Host(&lt;code&gt;${DOMAIN_NAME}&lt;/code&gt;) &amp;amp;&amp;amp; PathPrefix(&lt;code&gt;/api/todos&lt;/code&gt;)"&lt;br&gt;
      - "traefik.http.routers.todos-api.entrypoints=websecure"&lt;br&gt;
      - "traefik.http.routers.todos-api.tls.certresolver=myresolver"&lt;br&gt;
      - "traefik.http.middlewares.todos-strip.stripprefix.prefixes=/api"&lt;br&gt;
      - "traefik.http.routers.todos-api.middlewares=todos-strip"&lt;br&gt;
    networks:&lt;br&gt;
      - app-network&lt;br&gt;
    depends_on:&lt;br&gt;
      - redis # Ensure Redis starts before Todos API&lt;br&gt;
    restart: unless-stopped&lt;/p&gt;

&lt;p&gt;users-api:&lt;br&gt;
    build: ./users-api&lt;br&gt;
    image: users-api:latest&lt;br&gt;
    container_name: users-api&lt;br&gt;
    environment:&lt;br&gt;
      - SERVER_PORT=8083&lt;br&gt;
      - JWT_SECRET=${JWT_SECRET}&lt;br&gt;
      - SPRING_ZIPKIN_BASE_URL=&lt;a href="http://zipkin:9411/" rel="noopener noreferrer"&gt;http://zipkin:9411/&lt;/a&gt; # Spring Boot specific Zipkin config&lt;br&gt;
    labels:&lt;br&gt;
      - "traefik.enable=true"&lt;br&gt;
      - "traefik.http.routers.users-api.rule=Host(&lt;code&gt;${DOMAIN_NAME}&lt;/code&gt;) &amp;amp;&amp;amp; PathPrefix(&lt;code&gt;/api/users&lt;/code&gt;)"&lt;br&gt;
      - "traefik.http.routers.users-api.entrypoints=websecure"&lt;br&gt;
      - "traefik.http.routers.users-api.tls.certresolver=myresolver"&lt;br&gt;
      - "traefik.http.middlewares.users-strip.stripprefix.prefixes=/api"&lt;br&gt;
      - "traefik.http.routers.users-api.middlewares=users-strip"&lt;br&gt;
    networks:&lt;br&gt;
      - app-network&lt;br&gt;
    restart: unless-stopped&lt;/p&gt;

&lt;p&gt;log-message-processor:&lt;br&gt;
    build: ./log-message-processor&lt;br&gt;
    image: log-message-processor:latest&lt;br&gt;
    container_name: log-message-processor&lt;br&gt;
    environment:&lt;br&gt;
      - REDIS_HOST=redis&lt;br&gt;
      - REDIS_PORT=6379&lt;br&gt;
      - REDIS_CHANNEL=log_channel&lt;br&gt;
      - ZIPKIN_URL=&lt;a href="http://zipkin:9411/api/v2/spans" rel="noopener noreferrer"&gt;http://zipkin:9411/api/v2/spans&lt;/a&gt;&lt;br&gt;
    networks:&lt;br&gt;
      - app-network&lt;br&gt;
    depends_on:&lt;br&gt;
      - redis&lt;br&gt;
    restart: unless-stopped&lt;/p&gt;

&lt;p&gt;redis:&lt;br&gt;
    image: redis:alpine # Lightweight Redis image&lt;br&gt;
    container_name: redis&lt;br&gt;
    networks:&lt;br&gt;
      - app-network&lt;br&gt;
    restart: unless-stopped&lt;/p&gt;

&lt;p&gt;zipkin:&lt;br&gt;
    image: openzipkin/zipkin # Official Zipkin image&lt;br&gt;
    container_name: zipkin&lt;br&gt;
    ports:&lt;br&gt;
      - "9411:9411" # Expose Zipkin UI internally (Traefik handles external access)&lt;br&gt;
    networks:&lt;br&gt;
      - app-network&lt;br&gt;
    labels:&lt;br&gt;
      - "traefik.enable=true"&lt;br&gt;
      - "traefik.http.routers.zipkin.rule=Host(&lt;code&gt;${DOMAIN_NAME}&lt;/code&gt;) &amp;amp;&amp;amp; PathPrefix(&lt;code&gt;/api/zipkin&lt;/code&gt;)"&lt;br&gt;
      - "traefik.http.routers.zipkin.entrypoints=websecure"&lt;br&gt;
      - "traefik.http.routers.zipkin.tls.certresolver=myresolver"&lt;br&gt;
      - "traefik.http.middlewares.zipkin-strip.stripprefix.prefixes=/api/zipkin"&lt;br&gt;
      - "traefik.http.routers.zipkin.middlewares=zipkin-strip"&lt;br&gt;
    restart: unless-stopped&lt;/p&gt;

&lt;p&gt;networks:&lt;br&gt;
  app-network:&lt;br&gt;
    driver: bridge # Custom bridge network for inter-service communication&lt;/p&gt;

&lt;h3&gt;
  
  
  Traefik Configuration Explained
&lt;/h3&gt;

&lt;p&gt;&lt;a href="/home/leo/Coding/Devops/HNG/DevOps-Stage-6/blog-images/traefik_routing_diagram_1765314390704.png" class="article-body-image-wrapper"&gt;&lt;img src="/home/leo/Coding/Devops/HNG/DevOps-Stage-6/blog-images/traefik_routing_diagram_1765314390704.png" alt="Traefik Routing Diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure 4: Traefik request routing showing path-based routing, SSL termination, and Let's Encrypt integration&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Traefik leverages Docker labels for dynamic service discovery and routing configuration. This eliminates the need for manual configuration file updates when services are added, removed, or updated, making it far more agile than traditional reverse proxies like Nginx.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Label breakdown for frontend service:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;labels:
  # 1. Enable Traefik for this container
  - "traefik.enable=true"

  # 2. HTTPS router configuration for the main domain
  - "traefik.http.routers.frontend.rule=Host(`example.com`)"        # Matches requests for the specified domain
  - "traefik.http.routers.frontend.entrypoints=websecure"           # Listens on the HTTPS entrypoint (port 443)
  - "traefik.http.routers.frontend.tls.certresolver=myresolver"     # Uses the Let's Encrypt certificate resolver

  # 3. HTTP router for automatic redirection to HTTPS
  - "traefik.http.routers.frontend-http.rule=Host(`example.com`)"   # Matches requests for the domain on HTTP
  - "traefik.http.routers.frontend-http.entrypoints=web"            # Listens on the HTTP entrypoint (port 80)
  - "traefik.http.routers.frontend-http.middlewares=https-redirect" # Applies the 'https-redirect' middleware

  # 4. Middleware definition for the HTTPS redirect
  - "traefik.http.middlewares.https-redirect.redirectscheme.scheme=https" # Redirects to HTTPS
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Benefits of this approach:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No configuration file reloads required&lt;/strong&gt;: Traefik automatically detects changes to Docker labels and updates its routing table in real-time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Services can be added/removed without Traefik downtime&lt;/strong&gt;: This enables true zero-downtime deployments and dynamic scaling.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SSL certificates automatically provisioned and renewed&lt;/strong&gt;: Let's Encrypt integration handles the entire lifecycle of SSL certificates.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Path-based routing allows multiple services on one domain&lt;/strong&gt;: Different microservices can be exposed under different URL paths on the same domain (e.g., &lt;code&gt;/api/auth&lt;/code&gt;, &lt;code&gt;/api/todos&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Middleware support for transformations&lt;/strong&gt;: Traefik middlewares can perform various functions like path stripping, authentication, rate limiting, and more, before forwarding requests to the backend service.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Service Routing Table
&lt;/h3&gt;

&lt;p&gt;This table summarizes how external requests are routed to internal services via Traefik:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;URL Path&lt;/th&gt;
&lt;th&gt;Service&lt;/th&gt;
&lt;th&gt;Backend Port&lt;/th&gt;
&lt;th&gt;Traefik Middleware&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;https://domain.com/&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;frontend&lt;/td&gt;
&lt;td&gt;80 (internal)&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;https://domain.com/api/auth/*&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;auth-api&lt;/td&gt;
&lt;td&gt;8081&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;auth-strip&lt;/code&gt; (strips &lt;code&gt;/api/auth&lt;/code&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;https://domain.com/api/todos/*&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;todos-api&lt;/td&gt;
&lt;td&gt;8082&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;todos-strip&lt;/code&gt; (strips &lt;code&gt;/api&lt;/code&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;https://domain.com/api/users/*&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;users-api&lt;/td&gt;
&lt;td&gt;8083&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;users-strip&lt;/code&gt; (strips &lt;code&gt;/api&lt;/code&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;https://domain.com/api/zipkin/*&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;zipkin&lt;/td&gt;
&lt;td&gt;9411&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;zipkin-strip&lt;/code&gt; (strips &lt;code&gt;/api/zipkin&lt;/code&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;http://domain.com/*&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;https-redirect&lt;/code&gt; (redirects to HTTPS)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
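&lt;p&gt;The effect of the strip middlewares in this table can be mimicked with plain Bash parameter expansion (an illustration only; Traefik performs this rewrite internally before forwarding the request):&lt;/p&gt;

```shell
# Mimic Traefik's stripprefix middleware with parameter expansion
# (illustrative paths; Traefik does this rewrite itself).
strip_prefix() {
  local path="$1" prefix="$2"
  echo "${path#"$prefix"}"   # drop the prefix before "forwarding"
}

strip_prefix "/api/auth/login" "/api/auth"   # auth-api sees /login
strip_prefix "/api/todos/123"  "/api"        # todos-api sees /todos/123
strip_prefix "/api/zipkin/api/v2/spans" "/api/zipkin"
```

&lt;p&gt;Running the function shows exactly which path each backend receives, which helps when debugging 404s caused by a wrong prefix.&lt;/p&gt;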

&lt;h2&gt;
  
  
  [SECURITY IMPLEMENTATION AND BEST PRACTICES]
&lt;/h2&gt;

&lt;p&gt;Security was a primary consideration throughout this project, implemented at multiple layers from infrastructure to application.&lt;/p&gt;

&lt;h3&gt;
  
  
  Secrets Management Strategy
&lt;/h3&gt;

&lt;p&gt;All sensitive information is stored and managed securely, never committed directly to the repository.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub Secrets Used:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;AWS_ACCESS_KEY_ID&lt;/code&gt;: AWS programmatic access key for GitHub Actions.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;AWS_SECRET_ACCESS_KEY&lt;/code&gt;: AWS secret key corresponding to the access key.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;SSH_PUBLIC_KEY&lt;/code&gt;: The public part of the SSH key pair used for EC2 instance creation.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;SSH_PRIVATE_KEY&lt;/code&gt;: The private part of the SSH key pair used by Terraform and Ansible for SSH connections.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;DOMAIN_NAME&lt;/code&gt;: The production domain name for the application (e.g., &lt;code&gt;yourdomain.com&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ACME_EMAIL&lt;/code&gt;: Email address for Let's Encrypt certificate notifications.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;MAIL_USERNAME&lt;/code&gt;: SMTP username for sending drift detection emails.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;MAIL_PASSWORD&lt;/code&gt;: SMTP password for sending drift detection emails.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;MAIL_TO&lt;/code&gt;: Recipient email address for drift detection alerts.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;JWT_SECRET&lt;/code&gt;: Secret key used for signing and verifying JSON Web Tokens across microservices.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Secret Rotation Strategy:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Dedicated Keys&lt;/strong&gt;: SSH keys and AWS IAM credentials are generated specifically for this CI/CD pipeline, limiting their scope.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Least Privilege&lt;/strong&gt;: The AWS IAM user associated with &lt;code&gt;AWS_ACCESS_KEY_ID&lt;/code&gt; has only the minimum permissions required to provision and manage the specified resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Regular Rotation&lt;/strong&gt;: A strategy for rotating all secrets (AWS keys, SSH keys, JWT secrets) every 90 days is recommended to minimize the impact of potential compromise.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Environment-Specific Secrets&lt;/strong&gt;: For multi-environment setups, separate secrets would be maintained for &lt;code&gt;dev&lt;/code&gt;, &lt;code&gt;staging&lt;/code&gt;, and &lt;code&gt;prod&lt;/code&gt; to further isolate environments.&lt;/li&gt;
&lt;/ul&gt;
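&lt;p&gt;As a sketch of what one rotation step might look like (the &lt;code&gt;gh&lt;/code&gt; upload is hypothetical for this pipeline and requires authentication, so it is shown commented out):&lt;/p&gt;

```shell
# Rotation sketch: mint a fresh 256-bit JWT secret (hex-encoded).
JWT_SECRET="$(openssl rand -hex 32)"
echo "Generated secret of length ${#JWT_SECRET}"

# With the GitHub CLI the new value could then be stored as a repo secret
# (requires authentication, so shown commented out):
# gh secret set JWT_SECRET --body "$JWT_SECRET"
```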

&lt;h3&gt;
  
  
  Network Security
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;AWS Security Group Rules:&lt;/strong&gt;&lt;br&gt;
The EC2 instance's security group is configured with strict inbound rules:&lt;/p&gt;

&lt;p&gt;Inbound Rules:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SSH (Port 22): Allowed only from a specific CIDR block (e.g., your office IP or VPN IP). This prevents unauthorized SSH access.&lt;/li&gt;
&lt;li&gt;HTTP (Port 80): Allowed from anywhere (0.0.0.0/0). This is necessary for Traefik to handle initial Let's Encrypt challenges and HTTP to HTTPS redirection.&lt;/li&gt;
&lt;li&gt;HTTPS (Port 443): Allowed from anywhere (0.0.0.0/0). This carries the main application traffic.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Outbound Rules:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All Traffic: Allowed to anywhere (0.0.0.0/0). This is necessary for the instance to download packages, pull Docker images, and make API calls to AWS services.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Docker Network Isolation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Dedicated Bridge Network&lt;/strong&gt;: All Docker services run within a custom bridge network (&lt;code&gt;app-network&lt;/code&gt;). This isolates them from the host's network and from other Docker networks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Internal Communication&lt;/strong&gt;: Services communicate with each other using their container names (e.g., &lt;code&gt;http://redis:6379&lt;/code&gt;), which are resolved by Docker's internal DNS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Minimal Port Exposure&lt;/strong&gt;: Only Traefik exposes ports (80, 443, 8080) to the host machine. All other services are only accessible internally within the &lt;code&gt;app-network&lt;/code&gt;, significantly reducing the attack surface.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Encrypted External Traffic&lt;/strong&gt;: All external traffic to the application is forced over HTTPS, encrypted by Traefik.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  SSL/TLS Implementation
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Let's Encrypt via Traefik:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Automatic Certificate Provisioning&lt;/strong&gt;: Traefik is configured to automatically obtain and renew SSL certificates from Let's Encrypt using the TLS-ALPN-01 challenge.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Persistent Storage&lt;/strong&gt;: Certificates are stored in a persistent volume (&lt;code&gt;./letsencrypt:/letsencrypt&lt;/code&gt;), ensuring they survive container restarts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automatic Renewal&lt;/strong&gt;: Traefik handles certificate renewal automatically 30 days before expiration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No Wildcard Certificates&lt;/strong&gt;: For enhanced security, specific certificates are obtained for the primary domain, rather than using wildcard certificates, which have a broader attack surface.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HTTP to HTTPS Redirection&lt;/strong&gt;: Traefik automatically redirects all HTTP traffic to HTTPS, ensuring all communication is encrypted.&lt;/li&gt;
&lt;/ul&gt;
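&lt;p&gt;Although Traefik renews certificates automatically, an out-of-band expiry check is a useful safety net. A minimal sketch, assuming GNU &lt;code&gt;date&lt;/code&gt; and input in the format produced by &lt;code&gt;openssl x509 -noout -enddate&lt;/code&gt;:&lt;/p&gt;

```shell
# Sketch: days until certificate expiry, given a "notAfter=" line as
# produced by `openssl x509 -noout -enddate`. Assumes GNU date.
days_until_expiry() {
  local not_after="${1#notAfter=}"   # strip the "notAfter=" key
  local expiry_ts now_ts
  expiry_ts="$(date -d "$not_after" +%s)"
  now_ts="$(date +%s)"
  echo $(( (expiry_ts - now_ts) / 86400 ))
}

# In production the input would come from, e.g.:
#   openssl s_client -connect "$DOMAIN_NAME:443" -servername "$DOMAIN_NAME" |
#     openssl x509 -noout -enddate
days_until_expiry "notAfter=Dec 31 23:59:59 2030 GMT"
```

&lt;p&gt;Wired into a cron job or a scheduled workflow, a result below ~25 days would indicate that Traefik's automatic renewal has silently failed.&lt;/p&gt;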

&lt;h3&gt;
  
  
  Application Security Measures
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;JWT Token Authentication:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- The `auth-api` generates and validates JSON Web Tokens (JWTs) for user sessions.
- Tokens have configurable expiration times.
- A shared `JWT_SECRET` (injected via `.env`) is used across services for token validation, ensuring only authorized services can verify tokens.
- All API endpoints requiring authentication enforce the presence and validity of JWTs in the `Authorization` header.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol start="2"&gt;
&lt;li&gt; &lt;strong&gt;Input Validation:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- Each API service is responsible for validating incoming request payloads to prevent common vulnerabilities like injection attacks (though no SQL DB is used here, the principle applies).
- Frontend input is also validated client-side and server-side.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol start="3"&gt;
&lt;li&gt; &lt;strong&gt;CORS Configuration:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- The frontend and backend APIs are served from the same domain (different paths), eliminating the need for complex Cross-Origin Resource Sharing (CORS) configurations and potential misconfigurations.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol start="4"&gt;
&lt;li&gt; &lt;strong&gt;Firewall Configuration (UFW):&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;The Uncomplicated Firewall (UFW) is configured on the EC2 instance to provide an additional layer of host-level network security:
&lt;pre&gt;&lt;code&gt;ufw default deny incoming   # Deny all incoming traffic by default
ufw default allow outgoing  # Allow all outgoing traffic
ufw allow 22/tcp            # Allow SSH
ufw allow 80/tcp            # Allow HTTP
ufw allow 443/tcp           # Allow HTTPS
ufw enable                  # Enable the firewall
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;This ensures that only explicitly allowed ports are open, even if security group rules were to be misconfigured.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
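&lt;p&gt;The shared-secret JWT scheme from point 1 can be illustrated with &lt;code&gt;openssl&lt;/code&gt;: any service holding the same secret derives the same HS256 signature, which is what makes cross-service validation possible. This is a minimal sketch with made-up claims, not the services' actual code:&lt;/p&gt;

```shell
# Minimal HS256 JWT construction with openssl (illustration only).
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }

secret="example-secret"   # stands in for the shared JWT_SECRET

header="$(printf '{"alg":"HS256","typ":"JWT"}' | b64url)"
payload="$(printf '{"sub":"user1"}' | b64url)"

# Any service holding the same secret derives the same signature,
# which is how cross-service token validation works.
sig="$(printf '%s.%s' "$header" "$payload" | openssl dgst -sha256 -hmac "$secret" -binary | b64url)"

token="$header.$payload.$sig"
echo "$token"
```

&lt;p&gt;A verifying service recomputes the signature over &lt;code&gt;header.payload&lt;/code&gt; with its own copy of the secret and rejects the token on any mismatch.&lt;/p&gt;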

&lt;h2&gt;
  
  
  [OBSERVABILITY AND DISTRIBUTED TRACING]
&lt;/h2&gt;

&lt;p&gt;Observability is crucial for understanding the behavior of microservices in production. Zipkin is integrated to provide distributed tracing, allowing us to visualize and analyze request flows across all services.&lt;/p&gt;

&lt;h3&gt;
  
  
  Zipkin Integration Example (Frontend)
&lt;/h3&gt;

&lt;p&gt;Each service is instrumented to send trace data to the Zipkin collector. Here's an example from the Vue.js frontend:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// frontend/src/zipkin.js
import { Tracer, ExplicitContext, BatchRecorder } from "zipkin";
import { HttpLogger } from "zipkin-transport-http";

const tracer = new Tracer({
  ctxImpl: new ExplicitContext(), // Manages the current span context
  recorder: new BatchRecorder({
    // Buffers spans and sends them in batches
    logger: new HttpLogger({
      // Sends spans over HTTP
      endpoint: `${process.env.VUE_APP_API_URL}/api/zipkin`, // Zipkin collector endpoint via Traefik
      jsonEncoder: JSON.stringify, // Encodes spans as JSON
    }),
  }),
  localServiceName: "frontend", // Name of this service in traces
  supportsJoin: false, // Frontend typically starts new traces
});

export default tracer;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Similar instrumentation is applied to the Go, Node.js, Java, and Python services, ensuring that every request's journey through the microservices architecture is captured.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Zipkin Tracks:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Request Duration&lt;/strong&gt;: Measures the time taken for each operation within a service and across services.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Service Dependencies&lt;/strong&gt;: Visualizes the call graph, showing which services call which others.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error Rates and Failure Points&lt;/strong&gt;: Helps identify where errors occur in a distributed transaction.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Latency Breakdown&lt;/strong&gt;: Pinpoints bottlenecks by showing the time spent in different components (e.g., network, database, internal processing).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Asynchronous Message Processing&lt;/strong&gt;: Traces can follow messages through queues (like Redis in this case) to track the full lifecycle of an event.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Use Cases:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Identifying Slow Endpoints&lt;/strong&gt;: Quickly pinpoint which API calls or internal service interactions are contributing to high latency.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Debugging Timeout Issues&lt;/strong&gt;: Understand where a request is getting stuck or timing out across multiple services.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Understanding Service Communication Patterns&lt;/strong&gt;: Gain insights into how services interact, which can be invaluable for refactoring or optimizing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Capacity Planning&lt;/strong&gt;: Analyze traffic patterns and service performance to inform scaling decisions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Root Cause Analysis for Production Incidents&lt;/strong&gt;: When an issue occurs, traces provide a detailed timeline of events, helping to quickly identify the root cause.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  [LESSONS LEARNED AND KEY TAKEAWAYS]
&lt;/h2&gt;

&lt;p&gt;The journey of building this CI/CD pipeline provided several invaluable lessons and reinforced core DevOps principles.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Automation ROI is Exponential
&lt;/h3&gt;

&lt;p&gt;The initial investment in setting up the pipeline was significant, approximately 40 hours of focused development and debugging. However, the return on investment (ROI) was almost immediate and continues to grow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Deployment Time Reduction&lt;/strong&gt;: Manual deployments, which previously took 2+ hours (including SSH, Git pulls, Docker builds, and manual checks), were reduced to less than 5 minutes for a full application update. Infrastructure provisioning went from hours to minutes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error Rate Reduction&lt;/strong&gt;: Manual errors, a common source of production issues, were virtually eliminated. The pipeline ensures consistent, repeatable deployments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Confidence Boost&lt;/strong&gt;: The ability to deploy changes rapidly and reliably instilled a high degree of confidence in the development team, encouraging more frequent, smaller releases.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;: The upfront time investment in automation pays dividends immediately. After just a few deployments, the time saved exceeded the initial development time, proving that automation is not a luxury but a necessity for efficient software delivery.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Drift Detection is Non-Negotiable
&lt;/h3&gt;

&lt;p&gt;During the development phase, it was tempting to make "quick fixes" directly in the AWS console for testing purposes. This inevitably led to discrepancies between the Terraform state and the actual infrastructure. The drift detection pipeline (using &lt;code&gt;terraform plan -detailed-exitcode&lt;/code&gt;) consistently caught these manual changes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lesson&lt;/strong&gt;: Enforce an "infrastructure as code or it doesn't exist" policy from day one. Any change to infrastructure must go through the code repository and the CI/CD pipeline. This prevents configuration drift, ensures auditability, and maintains a single source of truth for infrastructure.&lt;/p&gt;
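&lt;p&gt;&lt;code&gt;terraform plan -detailed-exitcode&lt;/code&gt; exits with 0 when the infrastructure matches the code, 1 on error, and 2 when changes (drift) are present. The pipeline's decision logic can be sketched as a small wrapper (the helper name is hypothetical):&lt;/p&gt;

```shell
# Interpret the exit code of `terraform plan -detailed-exitcode`
# (hypothetical helper; the real workflow step calls terraform directly).
interpret_plan_exit() {
  case "$1" in
    0) echo "no-drift" ;;        # infrastructure matches the code
    2) echo "drift-detected" ;;  # trigger the alert e-mail, fail the job
    *) echo "plan-error" ;;      # terraform itself failed
  esac
}

# In the workflow this would be driven by the real command:
#   terraform plan -detailed-exitcode; interpret_plan_exit "$?"
interpret_plan_exit 2
```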

&lt;h3&gt;
  
  
  3. Infrastructure as Code Provides Documentation
&lt;/h3&gt;

&lt;p&gt;The Terraform configuration files, along with the Ansible playbooks, serve as living, executable documentation of the entire infrastructure and its configuration.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Clarity&lt;/strong&gt;: The HCL files clearly define every AWS resource and its properties.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auditability&lt;/strong&gt;: Every change to the infrastructure is tracked in Git, complete with commit messages and pull request reviews.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Understanding&lt;/strong&gt;: Comments within the code explain &lt;em&gt;why&lt;/em&gt; certain decisions were made, not just &lt;em&gt;what&lt;/em&gt; was configured, which is invaluable for new team members or for revisiting the setup months later.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  4. Docker Compose Complexity Sweet Spot
&lt;/h3&gt;

&lt;p&gt;For projects with a moderate number of services (e.g., less than 20), Docker Compose provides the perfect balance between simplicity and functionality. It offers container orchestration capabilities without the steep learning curve and operational overhead of more complex systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Alternatives considered:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Kubernetes&lt;/strong&gt;: While powerful, Kubernetes would have been massive overkill for a single-server deployment. Its complexity (YAML sprawl, cluster management, networking) would have significantly slowed down development without providing proportional benefits for this scale.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker Swarm&lt;/strong&gt;: Considered, but its uncertain future and less vibrant ecosystem made it a less attractive choice.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Nomad&lt;/strong&gt;: A strong contender for lightweight orchestration, but with less ecosystem support and community resources compared to Docker Compose for this specific use case.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  5. Traefik is a Game-Changer
&lt;/h3&gt;

&lt;p&gt;Traefik proved to be an exceptionally powerful and developer-friendly reverse proxy. Its Docker-native approach, which uses container labels for dynamic configuration, eliminated the configuration management complexity often associated with Nginx or HAProxy.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Automatic SSL&lt;/strong&gt;: The seamless integration with Let's Encrypt for automatic SSL certificate provisioning and renewal was a major time-saver and security enhancer.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dynamic Routing&lt;/strong&gt;: The ability to add or remove services and have Traefik automatically update its routing rules without restarts was crucial for zero-downtime deployments.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  6. GitHub Actions for Team Workflows
&lt;/h3&gt;

&lt;p&gt;GitHub Actions, while perhaps not as feature-rich or flexible as some enterprise-grade CI/CD platforms (like GitLab CI or Jenkins), offers unparalleled integration with GitHub repositories.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Ease of Use&lt;/strong&gt;: Its YAML-based syntax is relatively easy to learn.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tight Integration&lt;/strong&gt;: Direct access to GitHub events, secrets, and environments simplifies pipeline development.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Community Actions&lt;/strong&gt;: A vast marketplace of pre-built actions accelerates workflow creation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For smaller teams or projects already hosted on GitHub, it provides a highly effective and convenient CI/CD solution without the need for managing a separate CI/CD server.&lt;/p&gt;

&lt;h2&gt;
  
  
  [CHALLENGES ENCOUNTERED AND SOLUTIONS]
&lt;/h2&gt;

&lt;p&gt;Building a robust CI/CD pipeline often involves overcoming several technical hurdles. Here are some key challenges faced during this project and their respective solutions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Challenge 1: SSH Key Management in CI/CD
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Problem:&lt;/strong&gt; GitHub Actions runners are ephemeral, meaning they are provisioned fresh for each job. For Terraform to provision an EC2 instance with an SSH public key, and for Ansible to connect to that instance using the corresponding private key, these keys needed to be available as files on the runner during workflow execution. Storing them directly in the repository is a security anti-pattern.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution Implemented:&lt;/strong&gt;&lt;br&gt;
The public and private SSH keys were stored as encrypted GitHub Secrets (&lt;code&gt;SSH_PUBLIC_KEY&lt;/code&gt; and &lt;code&gt;SSH_PRIVATE_KEY&lt;/code&gt;). During the GitHub Actions workflow, these secrets were dynamically written to temporary files on the runner's filesystem.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Create SSH Keys for Plan
  run: |
    echo "${{ secrets.SSH_PUBLIC_KEY }}" &amp;gt; infra/terraform/deployer_key.pub
    echo "${{ secrets.SSH_PRIVATE_KEY }}" &amp;gt; infra/terraform/deployer_key
    chmod 600 infra/terraform/deployer_key # Set secure permissions for the private key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Key Insight:&lt;/strong&gt; It's crucial to set appropriate file permissions (&lt;code&gt;chmod 600&lt;/code&gt;) for the private key to prevent unauthorized access and ensure SSH clients accept it. Additionally, a defensive step was added to verify the key format:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Verify SSH Key Format
  run: |
    chmod 600 infra/terraform/deployer_key
    ssh-keygen -l -f infra/terraform/deployer_key || \
      echo "::error::SSH Private Key is invalid! Check your GitHub Secret."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This check helps catch issues early if the secret was incorrectly pasted or corrupted.&lt;/p&gt;

&lt;h3&gt;
  
  
  Challenge 2: Terraform and Ansible Integration
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Problem:&lt;/strong&gt; Ansible needs the public IP address of the EC2 instance to connect and configure it. However, this IP address is only known &lt;em&gt;after&lt;/em&gt; Terraform has successfully created the instance. This presents a classic "chicken-and-egg" problem in automation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solutions Evaluated:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; ❌ &lt;strong&gt;Manual Intervention&lt;/strong&gt;: Run Terraform, then manually copy the IP to an Ansible inventory file, then run Ansible. (Completely defeats the purpose of automation).&lt;/li&gt;
&lt;li&gt; ❌ &lt;strong&gt;Terraform Provisioners with &lt;code&gt;remote-exec&lt;/code&gt;&lt;/strong&gt;: While Terraform has &lt;code&gt;remote-exec&lt;/code&gt; provisioners, they are generally considered brittle for complex configuration management. They lack the idempotency and rich module ecosystem of Ansible.&lt;/li&gt;
&lt;li&gt; ✅ &lt;strong&gt;Dynamic Ansible Inventory Generation&lt;/strong&gt;: Use Terraform's &lt;code&gt;local_file&lt;/code&gt; resource to dynamically generate the Ansible inventory file after the EC2 instance's IP is known.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Final Solution:&lt;/strong&gt;&lt;br&gt;
A &lt;code&gt;local_file&lt;/code&gt; resource in Terraform was used to create an &lt;code&gt;hosts.yml&lt;/code&gt; file in the Ansible inventory directory. This file uses a &lt;code&gt;templatefile&lt;/code&gt; function to inject the &lt;code&gt;aws_instance.todo_app.public_ip&lt;/code&gt; into the inventory.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "local_file" "ansible_inventory" {
  content = templatefile("${path.module}/inventory.tftpl", {
    host = aws_instance.todo_app.public_ip
    user = var.server_user
    key  = var.private_key_path
  })
  filename = "${path.module}/../ansible/inventory/hosts.yml"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then, a &lt;code&gt;null_resource&lt;/code&gt; with a &lt;code&gt;local-exec&lt;/code&gt; provisioner was used to trigger the Ansible playbook, referencing this dynamically generated inventory file. This pattern ensures that Ansible always targets the correct, newly provisioned instance.&lt;/p&gt;
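&lt;p&gt;The rendering step can be mimicked locally to preview what the generated inventory looks like (a sketch using &lt;code&gt;__HOST__&lt;/code&gt;-style placeholders and example values in place of the real &lt;code&gt;.tftpl&lt;/code&gt; variables):&lt;/p&gt;

```shell
# Preview the inventory that Terraform's templatefile() would render
# (illustrative; __HOST__/__USER__/__KEY__ stand in for template variables).
template='all:
  hosts:
    todo_app:
      ansible_host: __HOST__
      ansible_user: __USER__
      ansible_ssh_private_key_file: __KEY__'

render() {
  printf '%s\n' "$template" | sed -e "s|__HOST__|$1|" -e "s|__USER__|$2|" -e "s|__KEY__|$3|"
}

# Example with a documentation-range IP:
render "203.0.113.10" "ubuntu" "deployer_key"
```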

&lt;h3&gt;
  
  
  Challenge 3: Environment Variable Propagation
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Problem:&lt;/strong&gt; Several critical values (e.g., &lt;code&gt;DOMAIN_NAME&lt;/code&gt;, &lt;code&gt;ACME_EMAIL&lt;/code&gt;, &lt;code&gt;JWT_SECRET&lt;/code&gt;) were needed at different stages of the pipeline and by different tools (Terraform, Ansible, Docker Compose, application containers). Maintaining consistency and securely passing these values was a challenge.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt; A "single source of truth" approach was adopted, with GitHub Secrets serving as the central repository for all sensitive and configuration values. These values were then propagated down the pipeline:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure 5: Secrets and environment variables flow from GitHub Secrets through the entire deployment pipeline&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data Flow:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GitHub Secrets
  → GitHub Actions (environment variables)
    → Terraform (via TF_VAR_ prefix)
      → Ansible (via --extra-vars)
        → Docker Compose (via .env file generated by Ansible)
          → Application Containers (via Docker Compose environment variables)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This ensures that values are consistent, securely managed, and injected at the appropriate stage without being hardcoded.&lt;/p&gt;
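&lt;p&gt;The GitHub-Actions-to-Terraform hop works because Terraform automatically reads any environment variable whose name starts with &lt;code&gt;TF_VAR_&lt;/code&gt; as the corresponding input variable. A sketch of that mapping (the variable names here are examples):&lt;/p&gt;

```shell
# Map pipeline-level settings onto Terraform input variables.
# Terraform reads TF_VAR_domain_name as var.domain_name, and so on.
DOMAIN_NAME="example.com"      # in CI this comes from GitHub Secrets
ACME_EMAIL="ops@example.com"   # example value

export TF_VAR_domain_name="$DOMAIN_NAME"
export TF_VAR_acme_email="$ACME_EMAIL"

echo "$TF_VAR_domain_name"
```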

&lt;h3&gt;
  
  
  Challenge 4: Terraform State Locking
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Problem:&lt;/strong&gt; In a team environment, or even with multiple CI/CD jobs, concurrent &lt;code&gt;terraform apply&lt;/code&gt; operations on the same state file can lead to state corruption, data loss, or inconsistent infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt; Terraform's S3 backend was configured with DynamoDB for state locking.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"  # Dedicated S3 bucket for state files
    key            = "todo-app/terraform.tfstate" # Path to the state file within the bucket
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock"       # DynamoDB table for locking
    encrypt        = true                         # Encrypt state file at rest
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;When a &lt;code&gt;terraform apply&lt;/code&gt; is initiated, Terraform attempts to acquire a lock in the DynamoDB table. If successful, it proceeds; otherwise, it waits or fails, preventing concurrent modifications and ensuring state integrity.&lt;/p&gt;
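&lt;p&gt;The locking behaviour can be illustrated locally with &lt;code&gt;flock(1)&lt;/code&gt;. This is only an analogy for what the DynamoDB table provides, not part of the pipeline itself:&lt;/p&gt;

```shell
# Analogy for DynamoDB state locking: only one "apply" may hold the lock
# at a time; a concurrent attempt fails fast instead of racing on shared state.
lockfile="/tmp/tfstate.lock"

(
  flock -n 9 || { echo "state is locked by another run, aborting"; exit 1; }
  echo "lock acquired"
  # ... terraform apply would run here while the lock is held ...
) 9>"$lockfile"
```

Terraform does the equivalent automatically: it writes a lock item to the DynamoDB table before touching the state and deletes it when the operation finishes.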

&lt;h3&gt;
  
  
  Challenge 5: Docker Build Context in GitHub Actions
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Problem:&lt;/strong&gt; Building Docker images within GitHub Actions can be slow if the entire repository is sent as the build context, especially for large repositories with many unrelated files (e.g., &lt;code&gt;.git&lt;/code&gt; directories, &lt;code&gt;node_modules&lt;/code&gt;, documentation).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt; Two primary optimizations were applied:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;&lt;code&gt;.dockerignore&lt;/code&gt; files&lt;/strong&gt;: Each service's Dockerfile directory included a &lt;code&gt;.dockerignore&lt;/code&gt; file. This file specifies patterns for files and directories that should be excluded from the Docker build context.
&lt;pre&gt;&lt;code&gt;# Example .dockerignore
.git
node_modules
*.md
tests/
&lt;/code&gt;&lt;/pre&gt;
This significantly reduces the amount of data sent to the Docker daemon, speeding up the build process.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Multi-stage builds&lt;/strong&gt;: Dockerfiles were structured using multi-stage builds to separate build-time dependencies from runtime dependencies. This results in smaller, more secure final images.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These optimizations collectively reduced Docker build times from approximately 8 minutes to less than 2 minutes, accelerating the deployment pipeline.&lt;/p&gt;

&lt;h2&gt;
  
  
  Future Improvements and Roadmap
&lt;/h2&gt;

&lt;p&gt;While the current CI/CD pipeline is production-ready, there are always opportunities for enhancement and scaling. This roadmap outlines potential future improvements.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 1: Multi-Environment Support (Q1 2025)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Objective:&lt;/strong&gt; To provide isolated and consistent environments for development, staging, and production, enabling safer testing and deployment workflows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implementation Plan:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Terraform Workspaces or Separate State Files&lt;/strong&gt;: Utilize Terraform workspaces (&lt;code&gt;terraform workspace new dev&lt;/code&gt;) or maintain separate Terraform state files for each environment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Environment-Specific Variable Files&lt;/strong&gt;: Create &lt;code&gt;terraform.tfvars&lt;/code&gt; files (e.g., &lt;code&gt;dev.tfvars&lt;/code&gt;, &lt;code&gt;staging.tfvars&lt;/code&gt;, &lt;code&gt;prod.tfvars&lt;/code&gt;) to manage environment-specific configurations (e.g., instance types, domain names, resource tags).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Separate GitHub Actions Environments&lt;/strong&gt;: Configure distinct GitHub Environments (e.g., &lt;code&gt;dev&lt;/code&gt;, &lt;code&gt;staging&lt;/code&gt;, &lt;code&gt;production&lt;/code&gt;) with different protection rules (e.g., manual approval for &lt;code&gt;production&lt;/code&gt;, no approval for &lt;code&gt;dev&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Subdomain Routing&lt;/strong&gt;: Implement subdomain-based routing (e.g., &lt;code&gt;dev.example.com&lt;/code&gt;, &lt;code&gt;staging.example.com&lt;/code&gt;, &lt;code&gt;app.example.com&lt;/code&gt;) to access different environments.&lt;/li&gt;
&lt;/ul&gt;
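&lt;p&gt;A possible shape for the environment-selection step, sketched in shell. The branch-to-environment mapping and file names are assumptions, not the project's current workflow:&lt;/p&gt;

```shell
# Map the Git branch to an environment; in GitHub Actions the branch
# would come from "$GITHUB_REF_NAME" rather than being hardcoded.
branch="dev"

case "$branch" in
  main)    env_name="production" ;;
  staging) env_name="staging" ;;
  *)       env_name="dev" ;;
esac

tfvars_file="${env_name}.tfvars"

# The deploy job would then select the matching workspace and variable file:
echo "terraform workspace select ${env_name}"
echo "terraform apply -var-file=${tfvars_file}"
```

Keeping this mapping in one place means adding a new environment is a one-line change plus a new &lt;code&gt;.tfvars&lt;/code&gt; file.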

&lt;h3&gt;
  
  
  Phase 2: Auto-Scaling (Q2 2025)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Objective:&lt;/strong&gt; To automatically adjust compute capacity based on demand, ensuring application availability and cost efficiency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Components:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS Auto Scaling Groups (ASG)&lt;/strong&gt;: Replace the single EC2 instance with an ASG to manage a fleet of instances.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Application Load Balancer (ALB)&lt;/strong&gt;: Introduce an ALB in front of the ASG to distribute incoming traffic and replace the single-instance Traefik as the primary entry point. Traefik would then run on each instance behind the ALB.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CloudWatch Alarms&lt;/strong&gt;: Configure CloudWatch alarms to trigger scaling policies based on metrics like CPU utilization, request count per target, or custom metrics.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Shared Persistent Storage&lt;/strong&gt;: For Traefik certificates and other shared data, consider using Amazon EFS or an S3 bucket mounted via FUSE, ensuring state is synchronized across instances.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Phase 3: Database Persistence (Q2 2025)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Objective:&lt;/strong&gt; To move from in-memory data stores to durable, managed database services, ensuring data integrity and persistence.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Services to Add:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Amazon RDS (PostgreSQL)&lt;/strong&gt;: For relational data storage, replacing any in-memory databases.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Amazon ElastiCache (Redis)&lt;/strong&gt;: For distributed caching and message queuing, providing a managed, highly available Redis instance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database Migration Management&lt;/strong&gt;: Integrate tools like Flyway or Liquibase into the CI/CD pipeline to manage database schema changes automatically.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated Backups and Point-in-Time Recovery&lt;/strong&gt;: Configure RDS and ElastiCache for automated backups and enable point-in-time recovery.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Phase 4: Comprehensive Monitoring (Q3 2025)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Objective:&lt;/strong&gt; To implement a robust monitoring and alerting solution for proactive issue detection and performance analysis.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stack:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Prometheus&lt;/strong&gt;: For collecting time-series metrics from application services, Docker containers, and the host system.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Grafana&lt;/strong&gt;: For creating interactive dashboards to visualize metrics and gain insights into system health and performance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AlertManager&lt;/strong&gt;: For intelligent routing and deduplication of alerts generated by Prometheus.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CloudWatch Integration&lt;/strong&gt;: Integrate with AWS CloudWatch for monitoring AWS service health and infrastructure metrics.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key Metrics to Track:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Request latency (p50, p95, p99 percentiles)&lt;/li&gt;
&lt;li&gt;Error rates by service and endpoint&lt;/li&gt;
&lt;li&gt;Container resource utilization (CPU, memory, disk I/O)&lt;/li&gt;
&lt;li&gt;Network traffic and connection counts&lt;/li&gt;
&lt;li&gt;Application-specific business metrics&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Phase 5: Blue-Green Deployments (Q3 2025)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Objective:&lt;/strong&gt; To achieve zero-downtime deployments with instant rollback capabilities, minimizing user impact during updates.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implementation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Two Identical Environments&lt;/strong&gt;: Maintain two identical production environments (e.g., "Blue" and "Green").&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Traffic Switching&lt;/strong&gt;: Use the Application Load Balancer to switch traffic instantly from the old (Blue) environment to the new (Green) environment after successful deployment and health checks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated Health Checks&lt;/strong&gt;: Implement comprehensive health checks for the new environment before traffic is shifted.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;One-Click Rollback&lt;/strong&gt;: In case of issues in the Green environment, traffic can be instantly switched back to the stable Blue environment.&lt;/li&gt;
&lt;/ul&gt;
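&lt;p&gt;The health-check gate before the traffic switch can be sketched as follows. &lt;code&gt;health_check&lt;/code&gt; is a stub standing in for real probes (e.g. HTTP checks against the green target group); nothing here is an actual implementation:&lt;/p&gt;

```shell
# Stub health check; in practice this would probe the green environment's
# /health endpoint (for example with curl) and return its status.
health_check() {
  return 0
}

# Traffic is repointed only if the new (green) environment is healthy;
# otherwise the stable blue environment keeps serving.
if health_check; then
  active="green"
  echo "green is healthy, switching traffic"
else
  active="blue"
  echo "green failed health check, keeping blue"
fi
```

Rollback is the same switch in reverse: repoint the load balancer at blue, with no redeployment required.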

&lt;h3&gt;
  
  
  Phase 6: Security Hardening (Q4 2025)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Additional Measures:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS WAF (Web Application Firewall)&lt;/strong&gt;: Deploy WAF in front of the ALB to protect against common web exploits (e.g., SQL injection, cross-site scripting).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Amazon GuardDuty&lt;/strong&gt;: Enable GuardDuty for intelligent threat detection and continuous monitoring of AWS accounts for malicious activity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secrets Rotation Automation&lt;/strong&gt;: Implement automated rotation of all secrets (AWS credentials, SSH keys, database passwords) using AWS Secrets Manager or similar tools.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Encrypted Volume Storage&lt;/strong&gt;: Ensure all EBS volumes attached to EC2 instances are encrypted at rest.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Regular Penetration Testing&lt;/strong&gt;: Schedule periodic penetration tests and security audits to identify and remediate vulnerabilities.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion and Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Building this comprehensive CI/CD pipeline demonstrated that modern DevOps is not merely about adopting trendy tools—it's about creating reliable, repeatable, and auditable processes that empower development teams to move quickly while maintaining production stability.&lt;/p&gt;

&lt;p&gt;The combination of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Terraform&lt;/strong&gt; for declarative infrastructure provisioning,&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Actions&lt;/strong&gt; for robust CI/CD orchestration,&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ansible&lt;/strong&gt; for idempotent configuration management,&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker&lt;/strong&gt; for consistent containerization, and&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Traefik&lt;/strong&gt; for dynamic routing and automatic SSL&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Together, these create a powerful, production-ready deployment platform that can scale from prototype to production. The entire technology stack can be deployed with a single &lt;code&gt;git push&lt;/code&gt;, yet includes sophisticated safety mechanisms like drift detection, manual approval gates, and automated rollback capabilities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Success Factors:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Declarative Infrastructure:&lt;/strong&gt; Terraform's declarative approach makes infrastructure changes reviewable, testable, and version-controlled.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Immutable Deployments:&lt;/strong&gt; Containers ensure consistent behavior across environments, reducing "it works on my machine" issues.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Automated Testing:&lt;/strong&gt; CI/CD pipelines catch issues early, preventing them from reaching production.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Observability:&lt;/strong&gt; Distributed tracing provides critical visibility into complex microservices interactions, aiding in debugging and performance optimization.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Security by Default:&lt;/strong&gt; Encrypted secrets, least-privilege IAM roles, automated SSL, and robust firewall rules establish a strong security posture.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This architecture serves as a template for modern application deployment, demonstrating that enterprise-grade automation is accessible to small teams and individual developers. The patterns established here scale effectively from single-server deployments to multi-region, highly available infrastructures.&lt;/p&gt;

&lt;h2&gt;
  
  
  Additional Resources and References
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Project Repository:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/PrimoCrypt/DevOps-Stage-6" rel="noopener noreferrer"&gt;https://github.com/PrimoCrypt/DevOps-Stage-6&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Official Documentation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Terraform AWS Provider: &lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs" rel="noopener noreferrer"&gt;https://registry.terraform.io/providers/hashicorp/aws/latest/docs&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Ansible Best Practices: &lt;a href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_best_practices.html" rel="noopener noreferrer"&gt;https://docs.ansible.com/ansible/latest/user_guide/playbooks_best_practices.html&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Traefik v3 Documentation: &lt;a href="https://doc.traefik.io/traefik/" rel="noopener noreferrer"&gt;https://doc.traefik.io/traefik/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;GitHub Actions Workflows: &lt;a href="https://docs.github.com/en/actions" rel="noopener noreferrer"&gt;https://docs.github.com/en/actions&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Docker Compose Reference: &lt;a href="https://docs.docker.com/compose/compose-file/" rel="noopener noreferrer"&gt;https://docs.docker.com/compose/compose-file/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Zipkin Architecture: &lt;a href="https://zipkin.io/pages/architecture.html" rel="noopener noreferrer"&gt;https://zipkin.io/pages/architecture.html&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Recommended Learning Resources:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Terraform: Up &amp;amp; Running" by Yevgeniy Brikman&lt;/li&gt;
&lt;li&gt;"Ansible for DevOps" by Jeff Geerling&lt;/li&gt;
&lt;li&gt;"The DevOps Handbook" by Gene Kim&lt;/li&gt;
&lt;li&gt;HashiCorp Learn (free interactive tutorials)&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;Questions or feedback?&lt;/strong&gt; I'd love to discuss DevOps automation strategies, infrastructure as code patterns, or troubleshooting deployment pipelines. Drop your questions in the comments or reach out on Twitter/LinkedIn.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Found this helpful?&lt;/strong&gt; Consider starring the project repository and sharing this guide with your team!&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Tags:&lt;/strong&gt; #DevOps #Terraform #Ansible #CICD #AWS #Docker #InfrastructureAsCode #GitHubActions #Microservices #Traefik #Automation #CloudComputing #ContainerOrchestration #SRE&lt;/p&gt;

</description>
      <category>devops</category>
      <category>terraform</category>
      <category>ansible</category>
      <category>cicd</category>
    </item>
  </channel>
</rss>
