<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ebelechukwu Lucy Okafor</title>
    <description>The latest articles on DEV Community by Ebelechukwu Lucy Okafor (@ebelechukwu_lucyokafor).</description>
    <link>https://dev.to/ebelechukwu_lucyokafor</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3732404%2F67070452-697b-4769-8769-17717ec3b034.jpg</url>
      <title>DEV Community: Ebelechukwu Lucy Okafor</title>
      <link>https://dev.to/ebelechukwu_lucyokafor</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ebelechukwu_lucyokafor"/>
    <language>en</language>
    <item>
<title>How I Built a Two-Pipeline Enterprise CI/CD System on Azure DevOps: EpicBook Deployment</title>
      <dc:creator>Ebelechukwu Lucy Okafor</dc:creator>
      <pubDate>Tue, 21 Apr 2026 06:54:22 +0000</pubDate>
      <link>https://dev.to/ebelechukwu_lucyokafor/how-i-built-a-two-pipeline-enterprise-cicd-system-on-azure-devops-epicbook-deployment-2g8c</link>
      <guid>https://dev.to/ebelechukwu_lucyokafor/how-i-built-a-two-pipeline-enterprise-cicd-system-on-azure-devops-epicbook-deployment-2g8c</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This week, I completed the most challenging and rewarding assignment of my DevOps learning journey so far: deploying the EpicBook online bookstore application using a two-repository, two-pipeline enterprise architecture on Azure DevOps.&lt;br&gt;
No tutorials. No hand-holding. Just a goal, a set of tools, and a lot of errors to fix.&lt;br&gt;
Here is the full story of what I built, what broke, and what I learned.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Is the Two-Pipeline Model?&lt;/strong&gt;&lt;br&gt;
In enterprise DevOps, infrastructure code and application code live in separate repositories and are deployed by separate pipelines. This mirrors how real teams work: infrastructure engineers and developers operate independently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Repository 1:&lt;/strong&gt; infra-epicbook    → Terraform IaC&lt;br&gt;
&lt;strong&gt;Repository 2:&lt;/strong&gt; theepicbook       → Node.js app + Ansible playbooks&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pipeline 1:&lt;/strong&gt; Infra Pipeline      → provisions Azure resources&lt;br&gt;
&lt;strong&gt;Pipeline 2:&lt;/strong&gt; App Pipeline        → configures servers + deploys app&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pipeline 1&lt;/strong&gt; runs first and produces outputs (the VM IP and MySQL FQDN) that Pipeline 2 consumes. This handoff is automated through Azure DevOps pipeline artifacts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Full Stack&lt;/strong&gt;&lt;br&gt;
Cloud: Microsoft Azure — South Africa North&lt;br&gt;
IaC: Terraform v1.8.5 (azurerm ~3.100)&lt;br&gt;
Config: Ansible&lt;br&gt;
CI/CD: Azure DevOps — self-hosted agent (WSL)&lt;br&gt;
Authentication: Azure Service Principal (SPN)&lt;br&gt;
App: Node.js 18 + Express + Sequelize ORM&lt;br&gt;
Database: Azure MySQL Flexible Server 8.0&lt;br&gt;
Web server: Nginx — reverse proxy on port 80&lt;br&gt;
Process manager: PM2 — keeps Node.js alive&lt;br&gt;
OS: Ubuntu 22.04 LTS — Standard_D2s_v3&lt;br&gt;
Live URL: &lt;a href="http://20.164.211.94" rel="noopener noreferrer"&gt;http://20.164.211.94&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture Overview&lt;/strong&gt;&lt;br&gt;
Developer → git push to main&lt;br&gt;
         ↓&lt;br&gt;
    Azure DevOps&lt;br&gt;
    ┌─────────────────────────────────────────┐&lt;br&gt;
    │  Pipeline 1 (Terraform)                 │&lt;br&gt;
    │  terraform init → plan → apply          │&lt;br&gt;
    │  → outputs: VM IP + MySQL FQDN          │&lt;br&gt;
    │                                         │&lt;br&gt;
    │  Pipeline 2 (Ansible)                   │&lt;br&gt;
    │  install Node.js + PM2 + Nginx          │&lt;br&gt;
    │  deploy EpicBook → configure MySQL      │&lt;br&gt;
    └─────────────────────────────────────────┘&lt;br&gt;
         ↓&lt;br&gt;
    Azure Virtual Network (10.0.0.0/16)&lt;br&gt;
    ┌─────────────────────┬───────────────────┐&lt;br&gt;
    │  Public Subnet      │  Private Subnet   │&lt;br&gt;
    │  10.0.1.0/24        │  10.0.2.0/24      │&lt;br&gt;
    │                     │                   │&lt;br&gt;
    │  epicbook-prod-vm   │  MySQL Flexible   │&lt;br&gt;
    │  Ubuntu 22.04       │  Server 8.0       │&lt;br&gt;
    │  Nginx :80          │  port 3306        │&lt;br&gt;
    │  Node.js :8080      │  VNet only        │&lt;br&gt;
    │  PM2                │  SSL required     │&lt;br&gt;
    └─────────────────────┴───────────────────┘&lt;br&gt;
         ↓&lt;br&gt;
    Browser → &lt;a href="http://20.164.211.94" rel="noopener noreferrer"&gt;http://20.164.211.94&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Service Principal (SPN) Setup&lt;br&gt;
Terraform needs permission to create Azure resources. I created an App Registration in Azure AD and assigned it the Contributor role on my subscription.&lt;br&gt;
az ad sp create-for-rbac \&lt;br&gt;
  --name "epicbook-devops-spn" \&lt;br&gt;
  --role Contributor \&lt;br&gt;
  --scopes /subscriptions/$(az account show --query id -o tsv) \&lt;br&gt;
  --output json&lt;/p&gt;

&lt;p&gt;This outputs four credentials: Client ID, Client Secret, Tenant ID, and Subscription ID. These go into an Azure DevOps service connection named azure-devops-connection.&lt;br&gt;
Key lesson: The Client Secret is shown exactly once. Copy it immediately before closing the page.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Terraform Remote State Backend&lt;br&gt;
For pipeline-based Terraform, the state file must live in Azure Blob Storage so the agent can access it between runs.&lt;br&gt;
az storage account create \&lt;br&gt;
  --name epicbooklucytfstate01 \&lt;br&gt;
  --resource-group rg-epicbook \&lt;br&gt;
  --location southafricanorth \&lt;br&gt;
  --sku Standard_LRS&lt;/p&gt;

&lt;p&gt;az storage container create \&lt;br&gt;
  --name tfstate \&lt;br&gt;
  --account-name epicbooklucytfstate01 \&lt;br&gt;
  --auth-mode login&lt;/p&gt;

&lt;p&gt;The backend.tf configuration:&lt;br&gt;
terraform {&lt;br&gt;
  backend "azurerm" {&lt;br&gt;
    resource_group_name  = "rg-epicbook"&lt;br&gt;
    storage_account_name = "epicbooklucytfstate01"&lt;br&gt;
    container_name       = "tfstate"&lt;br&gt;
    key                  = "epicbook.terraform.tfstate"&lt;br&gt;
  }&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Critical lesson:&lt;/strong&gt; You cannot use variables in a Terraform backend block. Pass the storage account key as the ARM_ACCESS_KEY environment variable instead.&lt;/p&gt;
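&lt;p&gt;In the pipeline, that means the storage key rides along as an environment variable on the Terraform step. A minimal sketch (the step name is illustrative; the variable names match my pipeline):&lt;/p&gt;

```yaml
# The backend credential comes from the environment, never from a var. reference
- script: |
    export ARM_ACCESS_KEY="$STORAGE_KEY"
    terraform init
  displayName: 'Terraform Init'
  env:
    STORAGE_KEY: $(storage_account_key)  # pipeline variable holding the storage key
```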

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; Infrastructure Pipeline (Terraform)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Infra pipeline provisions 13 Azure resources in one run:&lt;/strong&gt;&lt;br&gt;
Resource Group&lt;br&gt;
Virtual Network (10.0.0.0/16)&lt;br&gt;
Public subnet (10.0.1.0/24) + NSG (ports 22 and 80)&lt;br&gt;
Private subnet (10.0.2.0/24) + NSG (port 3306 from VNet only)&lt;br&gt;
Static Public IP (Standard SKU)&lt;br&gt;
Network Interface&lt;br&gt;
Private DNS Zone (linked to VNet)&lt;br&gt;
Linux VM (Ubuntu 22.04, Standard_D2s_v3)&lt;br&gt;
MySQL Flexible Server 8.0&lt;br&gt;
MySQL database (bookstore)&lt;/p&gt;

&lt;p&gt;The pipeline YAML authenticates using AzureCLI@2 with addSpnToEnvironment: true, which injects the SPN credentials automatically:&lt;/p&gt;

&lt;p&gt;- task: AzureCLI@2&lt;br&gt;
  displayName: 'Terraform Apply'&lt;br&gt;
  inputs:&lt;br&gt;
    azureSubscription: 'azure-devops-connection'&lt;br&gt;
    addSpnToEnvironment: true&lt;br&gt;
    scriptType: 'bash'&lt;br&gt;
    scriptLocation: 'inlineScript'&lt;br&gt;
    inlineScript: |&lt;br&gt;
      export ARM_CLIENT_ID="$servicePrincipalId"&lt;br&gt;
      export ARM_CLIENT_SECRET="$servicePrincipalKey"&lt;br&gt;
      export ARM_TENANT_ID="$tenantId"&lt;br&gt;
      export ARM_SUBSCRIPTION_ID=$(az account show --query id -o tsv | tr -d '\r')&lt;br&gt;
      export ARM_ACCESS_KEY="$STORAGE_KEY"&lt;br&gt;
      terraform -chdir="$(Build.SourcesDirectory)" apply -auto-approve tfplan&lt;br&gt;
  env:&lt;br&gt;
    STORAGE_KEY: $(storage_account_key)&lt;/p&gt;

&lt;p&gt;After applying, the pipeline captures and publishes the outputs:&lt;br&gt;
public_ip = 20.164.211.94&lt;br&gt;
db_host   = epicbook-prod-mysql-nrbhh4.mysql.database.azure.com&lt;/p&gt;
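&lt;p&gt;One way to wire that handoff (a sketch; the step and artifact names here are illustrative, not my literal pipeline): write the outputs to files and publish them as a pipeline artifact that Pipeline 2 downloads.&lt;/p&gt;

```yaml
# Illustrative: publish Terraform outputs for the App pipeline to consume
- script: |
    terraform output -raw public_ip > $(Build.ArtifactStagingDirectory)/public_ip.txt
    terraform output -raw db_host   > $(Build.ArtifactStagingDirectory)/db_host.txt
  displayName: 'Capture Terraform outputs'

- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'infra-outputs'
```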

&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; Application Pipeline (Ansible)&lt;br&gt;
The App pipeline connects to the VM over SSH using a private key stored in Azure DevOps Secure Files, then runs an Ansible playbook that:&lt;br&gt;
Installs Node.js 18, PM2, and Nginx&lt;br&gt;
Clones the EpicBook repository&lt;br&gt;
Installs npm dependencies&lt;br&gt;
Creates .env and config/config.json with MySQL connection details&lt;br&gt;
Starts the app with PM2&lt;br&gt;
Configures Nginx as a reverse proxy to port 8080&lt;br&gt;
Verifies the app responds with HTTP 200&lt;/p&gt;
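&lt;p&gt;A condensed sketch of what such a playbook can look like (the task names, paths, and repository URL are illustrative assumptions, not my exact playbook):&lt;/p&gt;

```yaml
# Hypothetical outline of ansible/site.yml
- hosts: web
  become: true
  tasks:
    - name: Install Nginx
      apt:
        name: nginx
        state: present
        update_cache: yes

    - name: Clone the EpicBook repository
      git:
        repo: https://github.com/pravinmishraaws/theepicbook
        dest: /home/azureuser/theepicbook

    - name: Install npm dependencies
      npm:
        path: /home/azureuser/theepicbook

    - name: Start the app with PM2
      shell: pm2 start server.js --name epicbook
      args:
        chdir: /home/azureuser/theepicbook
```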

&lt;p&gt;- script: |&lt;br&gt;
    ansible-playbook \&lt;br&gt;
      -i inventory.ini \&lt;br&gt;
      ansible/site.yml \&lt;br&gt;
      --extra-vars "mysql_host=$(MYSQL_FQDN) db_name=bookstore db_user=epicbook_user" \&lt;br&gt;
      -v&lt;br&gt;
  displayName: 'Run Ansible playbook'&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Errors I Hit (And Fixed)&lt;/strong&gt;&lt;br&gt;
This deployment took two weeks. Here is what broke and how I fixed it.&lt;br&gt;
&lt;strong&gt;Error 1:&lt;/strong&gt; SPN Cannot Read Subscription (403 Forbidden)&lt;br&gt;
The SPN was missing the Reader role. Fixed by assigning both Contributor and Reader.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Error 2:&lt;/strong&gt; Variables Not Allowed in Backend Block&lt;br&gt;
Error: Variables not allowed&lt;br&gt;
on backend.tf line 7: access_key = var.storage_account_key&lt;/p&gt;

&lt;p&gt;Terraform evaluates the backend before loading variables. Removed the access_key line and passed it via ARM_ACCESS_KEY environment variable instead.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Error 3:&lt;/strong&gt; Carriage Return in Subscription ID (400 Bad Request)&lt;br&gt;
The Azure CLI on Windows/WSL appends \r to its output, which corrupted the subscription ID in the request URL:&lt;br&gt;
parse "&lt;a href="https://management.azure.com/subscriptions/a75e93ba%5Cr/providers" rel="noopener noreferrer"&gt;https://management.azure.com/subscriptions/a75e93ba\r/providers&lt;/a&gt;"&lt;/p&gt;

&lt;p&gt;Fixed by piping through tr -d '\r':&lt;br&gt;
export ARM_SUBSCRIPTION_ID=$(az account show --query id -o tsv | tr -d '\r')&lt;/p&gt;
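&lt;p&gt;The failure mode is easy to reproduce locally, without Azure at all:&lt;/p&gt;

```shell
# Simulate the \r that the Windows az CLI appends to its output
raw_id=$(printf 'a75e93ba\r')
clean_id=$(printf '%s' "$raw_id" | tr -d '\r')
echo "https://management.azure.com/subscriptions/${clean_id}/providers"
# prints https://management.azure.com/subscriptions/a75e93ba/providers
```

&lt;p&gt;Without the tr filter, the stray \r lands in the middle of the URL and Azure rejects the request.&lt;/p&gt;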

&lt;p&gt;&lt;strong&gt;Error 4:&lt;/strong&gt; VM Size Quota Exceeded&lt;br&gt;
Standard_B2ats_v2 has zero quota in South Africa North. Checked available sizes:&lt;br&gt;
az vm list-skus --location southafricanorth --size Standard_D2s --output table&lt;br&gt;
Switched to Standard_D2s_v3, which had no restrictions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Error 5:&lt;/strong&gt; App Connects to localhost Instead of MySQL&lt;br&gt;
SequelizeConnectionRefusedError: connect ECONNREFUSED 127.0.0.1:3306&lt;/p&gt;

&lt;p&gt;The EpicBook app reads its database config from config/config.json, not .env. The file had host: 127.0.0.1 hardcoded. Updated it with the actual MySQL FQDN and SSL settings.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Error 6:&lt;/strong&gt; 502 Bad Gateway&lt;br&gt;
The app listens on port 8080 by default (process.env.PORT || 8080), but Nginx was proxying to port 3000. Fixed by checking the actual port in server.js:&lt;br&gt;
grep -nE "port|PORT|listen" ~/theepicbook/server.js&lt;/p&gt;

&lt;p&gt;const PORT = process.env.PORT || 8080;&lt;/p&gt;

&lt;p&gt;Then updated both .env and Nginx config to use port 8080.&lt;/p&gt;
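&lt;p&gt;For reference, the relevant Nginx server block ends up looking roughly like this (the server_name and header lines are illustrative):&lt;/p&gt;

```nginx
server {
    listen 80;
    server_name _;

    location / {
        # Forward port 80 traffic to the Node.js app on 8080
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```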

&lt;p&gt;&lt;strong&gt;Key Lessons Learned&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Always check what port your app actually uses before configuring Nginx. Never assume; grep the source code.&lt;/li&gt;
&lt;li&gt;Terraform backend blocks are special. They are evaluated before any variables are loaded, so credentials must come from environment variables, not var.* references.&lt;/li&gt;
&lt;li&gt;WSL + Azure CLI adds \r to output. Always pipe through tr -d '\r' when az CLI output feeds URLs or environment variables in bash scripts.&lt;/li&gt;
&lt;li&gt;Azure VM size quotas vary by region. Have fallback sizes ready. Standard_D2s_v3 is reliably available across most regions.&lt;/li&gt;
&lt;li&gt;Check where your app actually reads config. The EpicBook app ignored .env for database settings and read from config/config.json instead. Always check the source code.&lt;/li&gt;
&lt;li&gt;Private MySQL = no public endpoint. The MySQL Flexible Server in the private subnet has no public IP; it is only reachable from within the VNet. This is production-grade security done right.&lt;/li&gt;
&lt;li&gt;PM2 is non-negotiable for Node.js in production. Without it, the app dies the moment your Ansible playbook finishes.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;What I Would Do Differently&lt;/strong&gt;&lt;br&gt;
Add an admin_ssh_key block to Terraform from the start instead of manually pushing keys via az vm run-command&lt;br&gt;
Use Terraform workspaces for environment separation (dev/staging/prod)&lt;br&gt;
Store db_password in Azure Key Vault and reference it from the variable group&lt;br&gt;
Add a health check endpoint to the app for proper pipeline verification&lt;br&gt;
Set up a PM2 startup script so the app survives VM reboots automatically&lt;/p&gt;
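&lt;p&gt;The admin_ssh_key improvement mentioned above would look roughly like this in the azurerm provider (a sketch; the other VM arguments are omitted and the key path is an assumption):&lt;/p&gt;

```hcl
resource "azurerm_linux_virtual_machine" "vm" {
  # ...name, size, image, and network settings omitted...
  admin_username = "azureuser"

  # Key is injected at provision time, so no az vm run-command is needed later
  admin_ssh_key {
    username   = "azureuser"
    public_key = file("~/.ssh/id_rsa.pub")
  }
}
```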

&lt;p&gt;&lt;strong&gt;The Result&lt;/strong&gt;&lt;br&gt;
After two weeks, two pipelines, thirteen Azure resources, and nine documented errors, EpicBook is live.&lt;br&gt;
The full architecture (Terraform provisioning infrastructure, Ansible configuring servers, Node.js serving the application, MySQL storing data, Nginx proxying requests, PM2 keeping everything running) is automated end-to-end by Azure DevOps on every commit to main.&lt;br&gt;
That is CI/CD done the enterprise way.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resources&lt;/strong&gt;&lt;br&gt;
Azure DevOps documentation: docs.microsoft.com/azure/devops&lt;br&gt;
Terraform azurerm provider: registry.terraform.io/providers/hashicorp/azurerm&lt;br&gt;
Ansible documentation: docs.ansible.com&lt;br&gt;
PM2 documentation: pm2.keymetrics.io&lt;br&gt;
EpicBook app: github.com/pravinmishraaws/theepicbook&lt;/p&gt;

</description>
      <category>azure</category>
      <category>cicd</category>
      <category>devops</category>
      <category>terraform</category>
    </item>
    <item>
      <title>From 500 Errors to a Fully Working Cloud App: My EpicBook Deployment on Azure</title>
      <dc:creator>Ebelechukwu Lucy Okafor</dc:creator>
      <pubDate>Wed, 08 Apr 2026 11:03:54 +0000</pubDate>
      <link>https://dev.to/ebelechukwu_lucyokafor/from-500-errors-to-a-fully-working-cloud-app-my-epicbook-deployment-on-azure-2mn3</link>
      <guid>https://dev.to/ebelechukwu_lucyokafor/from-500-errors-to-a-fully-working-cloud-app-my-epicbook-deployment-on-azure-2mn3</guid>
      <description>&lt;p&gt;This week, I moved beyond simple deployments and built a real-world, multi-tier application architecture using Terraform + Ansible + Nginx + Node.js + MySQL on Microsoft Azure.&lt;br&gt;
And honestly? It didn’t work at first.&lt;br&gt;
But that’s where the real learning happened.&lt;br&gt;
Let me walk you through the full journey, including the mistakes, fixes, and key DevOps lessons you can apply immediately.&lt;/p&gt;

&lt;p&gt;The Goal&lt;br&gt;
Deploy a production-style web application (EpicBook) with:&lt;br&gt;
🌐 Web Layer → Nginx (Reverse Proxy)&lt;br&gt;
⚙️ App Layer → Node.js (Express app)&lt;br&gt;
🗄️ Database Layer → MySQL (separate VM)&lt;br&gt;
🏗️ Infrastructure → Terraform&lt;br&gt;
🤖 Automation → Ansible&lt;/p&gt;

&lt;p&gt;Step 1: Provisioning Infrastructure with Terraform&lt;br&gt;
Instead of manually creating servers, I used Terraform to spin up:&lt;br&gt;
4 Virtual Machines (web1, web2, app, db)&lt;br&gt;
Virtual Network + Subnets&lt;br&gt;
Public IPs&lt;br&gt;
Network Security Groups&lt;br&gt;
terraform init&lt;br&gt;
terraform plan&lt;br&gt;
terraform apply&lt;/p&gt;

&lt;p&gt;Lesson: Infrastructure should be code. If you can’t recreate it in minutes, it’s not production-ready.&lt;/p&gt;

&lt;p&gt;Step 2: SSH Setup (First Real Problem)&lt;br&gt;
After provisioning, Ansible couldn’t connect:&lt;br&gt;
UNREACHABLE! SSH host key verification failed&lt;/p&gt;

&lt;p&gt;Why does this happen?&lt;br&gt;
Every time Terraform creates new VMs, they get new SSH fingerprints.&lt;br&gt;
Fix:&lt;br&gt;
ssh-keyscan -H &amp;lt;server-ip&amp;gt; &amp;gt;&amp;gt; ~/.ssh/known_hosts&lt;/p&gt;

&lt;p&gt;Lesson: Always run ssh-keyscan after redeploying infrastructure.&lt;/p&gt;

&lt;p&gt;Step 3: Running Ansible&lt;br&gt;
I executed my playbook:&lt;br&gt;
ansible-playbook -i inventory.ini site.yml&lt;/p&gt;

&lt;p&gt;✔ Everything passed&lt;br&gt;
✔ No errors&lt;br&gt;
But then I opened the browser…&lt;/p&gt;

&lt;p&gt;Step 4: The 500 Internal Server Error&lt;br&gt;
500 Internal Server Error&lt;br&gt;
nginx/1.18.0&lt;/p&gt;

&lt;p&gt;At this point, many people would think:&lt;br&gt;
“But Ansible said SUCCESS…”&lt;br&gt;
This is where DevOps thinking matters.&lt;/p&gt;

&lt;p&gt;Step 5: Debugging the Problem&lt;br&gt;
I inspected the application:&lt;br&gt;
cat /var/www/epicbook/server.js&lt;/p&gt;

&lt;p&gt;Discovery:&lt;br&gt;
It’s a Node.js app&lt;br&gt;
Runs on port 8080&lt;br&gt;
Not a static website&lt;br&gt;
The mistake:&lt;br&gt;
I configured Nginx like this:&lt;br&gt;
root /var/www/epicbook;&lt;/p&gt;

&lt;p&gt;That only works for HTML files, not backend apps.&lt;/p&gt;

&lt;p&gt;Step 6: Fixing the Architecture&lt;br&gt;
This was the turning point.&lt;br&gt;
✅ Fix 1: Nginx Reverse Proxy&lt;br&gt;
location / {&lt;br&gt;
    proxy_pass &lt;a href="http://localhost:8080" rel="noopener noreferrer"&gt;http://localhost:8080&lt;/a&gt;;&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;Now Nginx forwards traffic to Node.js.&lt;/p&gt;

&lt;p&gt;✅ Fix 2: Install Node.js&lt;/p&gt;

&lt;p&gt;- name: Install Node.js&lt;br&gt;
  apt:&lt;br&gt;
    name: nodejs&lt;br&gt;
    state: present&lt;/p&gt;

&lt;p&gt;✅ Fix 3: Install Dependencies&lt;/p&gt;

&lt;p&gt;- name: Install npm packages&lt;br&gt;
  npm:&lt;br&gt;
    path: /var/www/epicbook&lt;/p&gt;

&lt;p&gt;✅ Fix 4: Run App as a Service&lt;br&gt;
Created a systemd service:&lt;br&gt;
ExecStart=/usr/bin/node server.js&lt;/p&gt;
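&lt;p&gt;The whole unit file is only a few lines. A minimal sketch (the paths and service name are assumptions based on this setup):&lt;/p&gt;

```ini
# /etc/systemd/system/epicbook.service (illustrative)
[Unit]
Description=EpicBook Node.js app
After=network.target

[Service]
WorkingDirectory=/var/www/epicbook
ExecStart=/usr/bin/node server.js
Restart=always

[Install]
WantedBy=multi-user.target
```

&lt;p&gt;Enabling it with sudo systemctl enable --now epicbook also makes the app survive reboots.&lt;/p&gt;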

&lt;p&gt;✅ Fix 5: Database Setup&lt;br&gt;
Installed MySQL on the DB server&lt;br&gt;
Created:&lt;br&gt;
database: bookstore&lt;br&gt;
user: epicbook&lt;/p&gt;

&lt;p&gt;✅ Fix 6: Use Private Network (CRITICAL)&lt;br&gt;
Instead of:&lt;br&gt;
"host": "127.0.0.1"&lt;/p&gt;

&lt;p&gt;I used:&lt;br&gt;
"host": "10.0.1.7"&lt;/p&gt;

&lt;p&gt;Lesson: Databases should NEVER be exposed publicly.&lt;/p&gt;

&lt;p&gt;Other Errors I Encountered&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Git “Dubious Ownership”
fatal: detected dubious ownership&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;✔ Fix: &lt;br&gt;
Remove and re-clone the directory&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;SSH Timeout
Connection timed out&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;✔ Fix: &lt;br&gt;
Retry + verify connectivity&lt;/p&gt;

&lt;p&gt;Final Result&lt;br&gt;
After fixing everything:&lt;br&gt;
✅ App running on both web servers&lt;br&gt;
✅ Nginx routing traffic correctly&lt;br&gt;
✅ Node.js processing requests&lt;br&gt;
✅ MySQL storing data&lt;br&gt;
Open in browser:&lt;br&gt;
http://&amp;lt;web1-public-ip&amp;gt;&lt;br&gt;
http://&amp;lt;web2-public-ip&amp;gt;&lt;/p&gt;

&lt;p&gt;Idempotency Check&lt;br&gt;
I ran the playbook again:&lt;br&gt;
ansible-playbook -i inventory.ini site.yml&lt;/p&gt;

&lt;p&gt;✔ changed=0&lt;br&gt;
Lesson: Good automation should be safe to run multiple times.&lt;/p&gt;
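&lt;p&gt;That changed=0 result comes from using Ansible modules rather than raw shell commands. A contrast sketch:&lt;/p&gt;

```yaml
# Idempotent: the apt module reports "ok" when nodejs is already installed
- name: Install Node.js
  apt:
    name: nodejs
    state: present

# Not idempotent: a raw shell task reports "changed" on every single run
- name: Install Node.js (avoid this form)
  shell: apt-get install -y nodejs
```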

&lt;p&gt;What I Learned (The Real Value)&lt;br&gt;
This assignment wasn’t just about tools; it was about thinking like a DevOps engineer.&lt;br&gt;
Key Takeaways:&lt;br&gt;
✔ “Success” in automation doesn’t mean the app works&lt;br&gt;
✔ Always verify in the browser (real user test)&lt;br&gt;
✔ Understand your application (Node.js ≠ static site)&lt;br&gt;
✔ Debugging is more important than writing code&lt;br&gt;
✔ Architecture decisions matter more than commands&lt;/p&gt;

&lt;p&gt;Final Insight&lt;br&gt;
A deployment is only successful when a real user can access the application — not when the playbook says “OK”.&lt;/p&gt;

&lt;p&gt;If You're Learning DevOps…&lt;br&gt;
Start building projects like this:&lt;br&gt;
Multi-tier apps&lt;br&gt;
Real debugging scenarios&lt;br&gt;
Infrastructure + application together&lt;br&gt;
That’s how you move from:&lt;br&gt;
👉 “I know commands”&lt;br&gt;
to&lt;br&gt;
👉 “I can solve real-world problems”&lt;/p&gt;

&lt;p&gt;Let’s Connect&lt;br&gt;
If you’re also learning DevOps or working on similar projects, I’d love to connect and learn together.&lt;/p&gt;

&lt;p&gt;#DevOps #Terraform #Ansible #Azure #NodeJS #CloudComputing #LearningInPublic&lt;/p&gt;

</description>
      <category>azure</category>
      <category>devops</category>
      <category>terraform</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>How I Deployed a Three-Tier Book Review App on AWS Using Terraform Modules and Agentic AI</title>
      <dc:creator>Ebelechukwu Lucy Okafor</dc:creator>
      <pubDate>Fri, 27 Mar 2026 07:36:59 +0000</pubDate>
      <link>https://dev.to/ebelechukwu_lucyokafor/how-i-deployed-a-three-tier-book-review-app-on-aws-using-terraform-modules-and-agentic-ai-453o</link>
      <guid>https://dev.to/ebelechukwu_lucyokafor/how-i-deployed-a-three-tier-book-review-app-on-aws-using-terraform-modules-and-agentic-ai-453o</guid>
      <description>&lt;p&gt;A complete walkthrough from VPC design to a live Next.js + Node.js + MySQL application running on EC2 and RDS, with every error documented and fixed.&lt;/p&gt;

&lt;p&gt;When I started this assignment, I had already deployed applications on both Azure and AWS in previous weeks. But this was different. This time, there was no step-by-step guide. The assignment said: "Design and execute this project like a professional."&lt;/p&gt;

&lt;p&gt;So that is exactly what I did. I used Terraform modules to organise my infrastructure, Claude Code as my Agentic AI copilot to generate templates and fix errors, and deployed a real full-stack Book Review application: a Next.js frontend, a Node.js backend, and a MySQL database on Amazon RDS.&lt;br&gt;
"Agentic DevOps is not about replacing engineers with AI. It is about changing what engineers spend their time on. The agent executes. The engineer decides."&lt;br&gt;
This post documents everything: the architecture decisions, every command, every error I hit, and every fix that worked. If you follow this guide, you can deploy the same application with zero surprises.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What We Are Building&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A three-tier web application following production architecture patterns:&lt;/p&gt;

&lt;p&gt;Internet Users&lt;br&gt;
  │&lt;br&gt;
  ▼ port 80&lt;br&gt;
EC2 Ubuntu 22.04 ← Public Subnet 10.0.1.0/24&lt;br&gt;
  ├── Nginx (port 80) → proxies to Next.js (port 3000)&lt;br&gt;
  ├── Next.js 15 Frontend ← PM2 managed, port 3000&lt;br&gt;
  └── Node.js Backend ← PM2 managed, port 3001&lt;br&gt;
  │&lt;br&gt;
  ▼ port 3306 (MySQL)&lt;br&gt;
RDS MySQL 8.0 ← Private Subnets — no internet access&lt;br&gt;
  ├── bookreview-db (primary)&lt;br&gt;
  └── Security: only EC2 can connect on port 3306&lt;/p&gt;

&lt;p&gt;Terraform Modules:&lt;br&gt;
networking/ → VPC + 3 subnets + IGW + NAT&lt;br&gt;
security/ → EC2 SG + RDS SG&lt;br&gt;
compute/ → EC2 t2.micro + Elastic IP&lt;br&gt;
database/ → RDS MySQL db.t3.micro&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Install these on your local machine before starting:&lt;/p&gt;

&lt;p&gt;# Check all tools are ready&lt;br&gt;
terraform -v              # Need &amp;gt;= 1.5&lt;br&gt;
aws sts get-caller-identity  # Your AWS account&lt;br&gt;
git --version             # Any recent version&lt;/p&gt;

&lt;p&gt;Terraform &amp;gt;= 1.5 — terraform.io/downloads&lt;br&gt;
AWS CLI configured with aws configure&lt;br&gt;
AWS account with EC2, RDS, and VPC permissions&lt;br&gt;
Git and VS Code&lt;br&gt;
SSH client (built into Mac/Linux; Git Bash on Windows)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Terraform Modules?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Concept: Terraform Modules&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Modules are reusable, self-contained packages of Terraform code. Instead of one giant main.tf with 300+ lines, modules separate concerns. Each module has its own inputs (variables.tf), resources (main.tf), and outputs (outputs.tf). The root main.tf calls all modules and wires their outputs together.&lt;br&gt;
For this deployment, I created four modules:&lt;br&gt;
modules/networking/&lt;br&gt;
VPC (10.0.0.0/16), public subnet, 2 private subnets, Internet Gateway, NAT Gateway, route tables&lt;br&gt;
variables.tf, main.tf, outputs.tf&lt;br&gt;
modules/security/&lt;br&gt;
EC2 security group (SSH + HTTP + port 3000) and RDS security group (MySQL from EC2 SG only)&lt;br&gt;
variables.tf, main.tf, outputs.tf&lt;br&gt;
modules/compute/&lt;br&gt;
EC2 t2.micro Ubuntu 22.04, Elastic IP, key pair attachment, user_data for setup&lt;br&gt;
variables.tf, main.tf, outputs.tf&lt;br&gt;
modules/database/&lt;br&gt;
RDS MySQL 8.0 db.t3.micro in private subnets, DB subnet group, password config&lt;br&gt;
variables.tf, main.tf, outputs.tf&lt;br&gt;
The key insight is how modules talk to each other through outputs. The networking module outputs vpc_id. The security module takes that as input. The compute module takes the security group ID from security. No module needs to know the internal details of another, only their outputs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Deployment, Phase by Phase&lt;/strong&gt;&lt;/p&gt;
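&lt;p&gt;Concretely, that wiring is just an output on one side and an input variable on the other. A simplified sketch (the resource name aws_vpc.main is an assumption):&lt;/p&gt;

```hcl
# modules/networking/outputs.tf: expose the VPC ID to other modules
output "vpc_id" {
  value = aws_vpc.main.id
}

# root main.tf: the security module consumes it as an ordinary input variable
module "security" {
  source = "./modules/security"
  vpc_id = module.networking.vpc_id
}
```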

&lt;p&gt;&lt;strong&gt;PHASE 01: Project Setup and SSH Key Generation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create the project folder, module structure, and SSH key pair. The key pair must exist before running terraform validate. Terraform reads the public key file during validation.&lt;/p&gt;

&lt;p&gt;# Create project and module folders&lt;br&gt;
mkdir terraform-book-review-CLI&lt;br&gt;
cd terraform-book-review-CLI&lt;br&gt;
mkdir -p modules/networking modules/security&lt;br&gt;
mkdir -p modules/compute modules/database user_data&lt;/p&gt;

&lt;p&gt;# Generate SSH key pair (MUST do this before terraform validate)&lt;br&gt;
ssh-keygen -t rsa -b 4096 -f bookreview-key -N ""&lt;br&gt;
# Creates: bookreview-key (private) + bookreview-key.pub (public)&lt;/p&gt;

&lt;p&gt;# Protect sensitive files from git&lt;br&gt;
echo "bookreview-key" &amp;gt;&amp;gt; .gitignore&lt;br&gt;
echo "*.tfstate" &amp;gt;&amp;gt; .gitignore&lt;br&gt;
echo ".terraform/" &amp;gt;&amp;gt; .gitignore&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PHASE 02: Write All Terraform Module Files&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is where I used Claude Code as an Agentic AI copilot. I used /init to create CLAUDE.md, then asked Claude Code to generate the Terraform templates. When validation errors appeared, I pasted them into Claude Code, and it read all module files simultaneously, diagnosed the root cause, and applied fixes automatically. What would take 30 minutes manually took 7 minutes with Agentic AI.&lt;br&gt;
Here is the root main.tf that calls all modules. Notice how module outputs become inputs to the next module:&lt;br&gt;
terraform {&lt;br&gt;
  required_providers {&lt;br&gt;
    aws = { source = "hashicorp/aws", version = "~&amp;gt; 5.0" }&lt;br&gt;
  }&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;provider "aws" { region = var.aws_region }&lt;/p&gt;

&lt;p&gt;# Upload SSH public key to AWS&lt;br&gt;
resource "aws_key_pair" "main" {&lt;br&gt;
  key_name   = "bookreview-key"&lt;br&gt;
  public_key = file("${path.module}/bookreview-key.pub")&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;# Step 1: Create network foundation&lt;br&gt;
module "networking" {&lt;br&gt;
  source   = "./modules/networking"&lt;br&gt;
  vpc_cidr = var.vpc_cidr&lt;br&gt;
  azs      = var.azs&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;# Step 2: Create security groups (needs VPC ID from networking)&lt;br&gt;
module "security" {&lt;br&gt;
  source = "./modules/security"&lt;br&gt;
  vpc_id = module.networking.vpc_id  # ← output from networking&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;# Step 3: Create EC2 (needs subnet + SG from above modules)&lt;br&gt;
module "compute" {&lt;br&gt;
  source    = "./modules/compute"&lt;br&gt;
  subnet_id = module.networking.public_subnet_id&lt;br&gt;
  sg_id     = module.security.ec2_sg&lt;br&gt;
  key_name  = aws_key_pair.main.key_name&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;# Step 4: Create RDS (needs private subnets + RDS SG)&lt;br&gt;
module "database" {&lt;br&gt;
  source             = "./modules/database"&lt;br&gt;
  private_subnet_ids = [module.networking.private_subnet_id_1,&lt;br&gt;
                        module.networking.private_subnet_id_2]&lt;br&gt;
  rds_sg_id          = module.security.rds_sg&lt;br&gt;
  db_password        = var.db_password&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PHASE 03: Terraform Init, Validate, Plan, and Apply&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With all module files written, the standard Terraform workflow runs. RDS takes 5–8 minutes to provision. Note your EC2 public IP and RDS endpoint from the outputs; you need them for all SSH and database commands.&lt;br&gt;
terraform init      # Downloads AWS provider ~5.0&lt;br&gt;
terraform validate  # Checks all module references&lt;br&gt;
terraform plan      # Preview — confirm no destroy actions&lt;br&gt;
terraform apply     # Type 'yes' — wait ~10 minutes&lt;/p&gt;

&lt;p&gt;# After apply completes, note these outputs:&lt;br&gt;
ec2_public_ip   = "54.227.244.250"&lt;br&gt;
rds_endpoint    = "bookreview-db.cixq88qmex1t.us-east-1.rds.amazonaws.com:3306"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PHASE 04: SSH Into EC2 and Install Node.js 18&lt;/strong&gt;&lt;br&gt;
Ubuntu 22.04 ships with Node.js v12. The app requires v18. This is the most common trap when deploying Next.js on a fresh Ubuntu server: you must install Node.js 18 via the NodeSource repository, not the default apt package.&lt;/p&gt;

&lt;p&gt;# From local terminal&lt;br&gt;
chmod 400 bookreview-key&lt;br&gt;
ssh -i bookreview-key &lt;a href="mailto:ubuntu@54.227.244.250"&gt;ubuntu@54.227.244.250&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;# Inside EC2 — remove old Node conflict and install v18&lt;br&gt;
sudo apt remove -y libnode-dev&lt;br&gt;
sudo apt autoremove -y &amp;amp;&amp;amp; sudo apt clean&lt;br&gt;
curl -fsSL &lt;a href="https://deb.nodesource.com/setup_18.x" rel="noopener noreferrer"&gt;https://deb.nodesource.com/setup_18.x&lt;/a&gt; | sudo -E bash -&lt;br&gt;
sudo apt install -y nodejs nginx git&lt;/p&gt;

&lt;p&gt;# Verify&lt;br&gt;
node -v   # v18.20.8 ✅&lt;br&gt;
npm -v    # 10.8.2 ✅&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PHASE 05: Clone App and Configure Backend&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The repository is a monorepo with backend/ (Node.js + Express + Sequelize) and frontend/ (Next.js 15). The backend .env file points to localhost by default. Update it to point to the RDS endpoint. Sequelize auto-creates all database tables on first startup.&lt;br&gt;
git clone &lt;a href="https://github.com/pravinmishraaws/book-review-app" rel="noopener noreferrer"&gt;https://github.com/pravinmishraaws/book-review-app&lt;/a&gt; ~/book-review-app&lt;br&gt;
cd ~/book-review-app/backend&lt;/p&gt;

&lt;p&gt;# Update .env with RDS credentials (use nano — NOT a cat heredoc)&lt;br&gt;
nano .env&lt;/p&gt;

&lt;p&gt;# Replace the content with:&lt;br&gt;
DB_HOST=bookreview-db.cixq88qmex1t.us-east-1.rds.amazonaws.com&lt;br&gt;
DB_NAME=bookreviewdb&lt;br&gt;
DB_USER=admin&lt;br&gt;
DB_PASS=BookReview123&lt;br&gt;
DB_DIALECT=mysql&lt;br&gt;
PORT=3001&lt;br&gt;
JWT_SECRET=mysecretkey&lt;/p&gt;

&lt;p&gt;# Ctrl+X then Y then Enter to save&lt;/p&gt;

&lt;p&gt;# Install PM2 globally and start backend&lt;br&gt;
sudo npm install -g pm2&lt;br&gt;
pm2 start src/server.js --name backend&lt;br&gt;
pm2 save&lt;/p&gt;

&lt;h1&gt;
  
  
  Watch for this in logs:
&lt;/h1&gt;

&lt;p&gt;Database 'bookreviewdb' connected successfully with SSL!&lt;br&gt;
Database schema updated successfully!&lt;br&gt;
Server running on port 3001&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Concept: Sequelize Auto-Sync&lt;/strong&gt;&lt;br&gt;
Sequelize calls &lt;code&gt;sync()&lt;/code&gt; on startup, which creates all database tables automatically if they don't exist. You don't need to run SQL migration files manually. When you see "Database schema updated successfully", all tables are ready.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PHASE 06: Build and Deploy Frontend + Configure Nginx&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Next.js 15 requires a production build before starting. The build compiles all pages, optimises JavaScript, and pre-renders static pages. Nginx then acts as a reverse proxy; users hit port 80, and Nginx forwards to port 3000.&lt;br&gt;
cd ~/book-review-app/frontend&lt;/p&gt;

&lt;h1&gt;
  
  
  Tell frontend where the backend API is
&lt;/h1&gt;

&lt;p&gt;cat &amp;gt; .env.local &amp;lt;&amp;lt; 'EOF'&lt;br&gt;
NEXT_PUBLIC_API_URL=&lt;a href="http://54.227.244.250:3001" rel="noopener noreferrer"&gt;http://54.227.244.250:3001&lt;/a&gt;&lt;br&gt;
EOF&lt;/p&gt;

&lt;p&gt;npm install           # Installs 327 packages&lt;br&gt;
npm run build         # Compiles production bundle&lt;/p&gt;

&lt;h1&gt;
  
  
  Expected: ✓ Compiled successfully — 7 pages
&lt;/h1&gt;

&lt;p&gt;pm2 start npm --name frontend -- start&lt;br&gt;
pm2 save&lt;br&gt;
pm2 list              # backend + frontend both online ✅&lt;/p&gt;

&lt;h1&gt;
  
  
  Configure Nginx reverse proxy
&lt;/h1&gt;

&lt;p&gt;sudo tee /etc/nginx/sites-available/bookreview &amp;lt;&amp;lt; 'EOF'&lt;br&gt;
server {&lt;br&gt;
    listen 80;&lt;br&gt;
    server_name _;&lt;br&gt;
    location /api {&lt;br&gt;
        proxy_pass &lt;a href="http://localhost:3001" rel="noopener noreferrer"&gt;http://localhost:3001&lt;/a&gt;;&lt;br&gt;
        proxy_set_header Host $host;&lt;br&gt;
    }&lt;br&gt;
    location / {&lt;br&gt;
        proxy_pass &lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt;;&lt;br&gt;
        proxy_http_version 1.1;&lt;br&gt;
        proxy_set_header Upgrade $http_upgrade;&lt;br&gt;
        proxy_set_header Connection 'upgrade';&lt;br&gt;
        proxy_set_header Host $host;&lt;br&gt;
    }&lt;br&gt;
}&lt;br&gt;
EOF&lt;/p&gt;

&lt;p&gt;sudo ln -sf /etc/nginx/sites-available/bookreview /etc/nginx/sites-enabled/default&lt;br&gt;
sudo nginx -t &amp;amp;&amp;amp; sudo systemctl restart nginx&lt;/p&gt;

&lt;h1&gt;
  
  
  Make PM2 survive reboots
&lt;/h1&gt;

&lt;p&gt;pm2 startup    # Run the command it outputs&lt;br&gt;
pm2 save&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PHASE 07: Verify — App Is Live!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Open your browser and navigate to your EC2 public IP. The Book Review App homepage loads. Login and register pages work. The backend is connected to RDS MySQL. The full stack is live.&lt;br&gt;
&lt;a href="http://54.227.244.250:3000" rel="noopener noreferrer"&gt;http://54.227.244.250:3000&lt;/a&gt;          → Book Review App homepage ✅&lt;br&gt;
&lt;a href="http://54.227.244.250:3000/login" rel="noopener noreferrer"&gt;http://54.227.244.250:3000/login&lt;/a&gt;    → Login page with form ✅&lt;br&gt;
&lt;a href="http://54.227.244.250:3000/register" rel="noopener noreferrer"&gt;http://54.227.244.250:3000/register&lt;/a&gt; → Register page ✅&lt;br&gt;
&lt;a href="http://54.227.244.250" rel="noopener noreferrer"&gt;http://54.227.244.250&lt;/a&gt;               → Same app via Nginx port 80 ✅&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Every Error I Hit and How I Fixed It&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;These are all the errors I encountered during the deployment. If you follow this guide, you may well hit the same ones; the fixes below resolved every one of them.&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;#&lt;/th&gt;&lt;th&gt;Error&lt;/th&gt;&lt;th&gt;Root Cause&lt;/th&gt;&lt;th&gt;Fix&lt;/th&gt;&lt;th&gt;Status&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;1&lt;/td&gt;&lt;td&gt;Duplicate resource &lt;code&gt;aws_db_subnet_group&lt;/code&gt; in &lt;code&gt;variables.tf&lt;/code&gt;&lt;/td&gt;&lt;td&gt;Resource block accidentally placed in &lt;code&gt;variables.tf&lt;/code&gt; instead of &lt;code&gt;main.tf&lt;/code&gt;&lt;/td&gt;&lt;td&gt;Removed resource block from &lt;code&gt;variables.tf&lt;/code&gt; — kept only variable declarations&lt;/td&gt;&lt;td&gt;FIXED&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;2&lt;/td&gt;&lt;td&gt;Reference to undeclared resource &lt;code&gt;aws_subnet.private_1&lt;/code&gt;&lt;/td&gt;&lt;td&gt;Subnet resource named &lt;code&gt;private&lt;/code&gt; but &lt;code&gt;outputs.tf&lt;/code&gt; referenced it as &lt;code&gt;private_1&lt;/code&gt;&lt;/td&gt;&lt;td&gt;Renamed resource to &lt;code&gt;private_1&lt;/code&gt; and added &lt;code&gt;private_2&lt;/code&gt; in second AZ&lt;/td&gt;&lt;td&gt;FIXED&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;3&lt;/td&gt;&lt;td&gt;Unsupported attribute &lt;code&gt;vpc_id&lt;/code&gt; on &lt;code&gt;module.networking&lt;/code&gt;&lt;/td&gt;&lt;td&gt;&lt;code&gt;networking/outputs.tf&lt;/code&gt; did not export &lt;code&gt;vpc_id&lt;/code&gt; — module created VPC but never exported its ID&lt;/td&gt;&lt;td&gt;Added &lt;code&gt;output "vpc_id" { value = aws_vpc.main.id }&lt;/code&gt; to &lt;code&gt;networking/outputs.tf&lt;/code&gt;&lt;/td&gt;&lt;td&gt;FIXED&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;4&lt;/td&gt;&lt;td&gt;CIDR block 10.0.2.0/24 conflicts with existing subnet&lt;/td&gt;&lt;td&gt;Partial apply had already created a subnet — re-apply tried to create same CIDR twice&lt;/td&gt;&lt;td&gt;Changed &lt;code&gt;private_1&lt;/code&gt; subnet CIDR to 10.0.4.0/24 to avoid conflict&lt;/td&gt;&lt;td&gt;FIXED&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;5&lt;/td&gt;&lt;td&gt;&lt;code&gt;npm ERR! Missing script: "start"&lt;/code&gt;&lt;/td&gt;&lt;td&gt;&lt;code&gt;backend/package.json&lt;/code&gt; had no start script — &lt;code&gt;npm start&lt;/code&gt; had nothing to run&lt;/td&gt;&lt;td&gt;Added &lt;code&gt;"start": "node src/server.js"&lt;/code&gt; to &lt;code&gt;scripts&lt;/code&gt; in &lt;code&gt;package.json&lt;/code&gt;&lt;/td&gt;&lt;td&gt;FIXED&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;6&lt;/td&gt;&lt;td&gt;dpkg error overwriting &lt;code&gt;/usr/include/node/common.gypi&lt;/code&gt;&lt;/td&gt;&lt;td&gt;Old &lt;code&gt;libnode-dev&lt;/code&gt; package conflicted with Node.js 18 installer&lt;/td&gt;&lt;td&gt;&lt;code&gt;sudo apt remove -y libnode-dev&lt;/code&gt;, then re-run NodeSource installer&lt;/td&gt;&lt;td&gt;FIXED&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;7&lt;/td&gt;&lt;td&gt;Shell stuck at &lt;code&gt;&amp;gt;&lt;/code&gt; prompt when using cat heredoc&lt;/td&gt;&lt;td&gt;Leading spaces before EOF marker prevented heredoc from closing&lt;/td&gt;&lt;td&gt;Use nano instead of cat heredoc for all .env file creation&lt;/td&gt;&lt;td&gt;FIXED&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;&lt;strong&gt;Key Concepts Explained&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Three-Tier Architecture&lt;/strong&gt;&lt;br&gt;
The web tier (EC2 with Nginx + Next.js) is publicly accessible. The app tier (Node.js backend) runs on the same EC2 but is only reachable through the web tier. The database tier (RDS) is in private subnets with no internet route; only the EC2 security group can connect on port 3306. Each tier has a different trust level and a different security boundary.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security Group Chaining&lt;/strong&gt;&lt;br&gt;
The RDS security group allows MySQL (port 3306) only from the EC2 security group ID, not from any IP address. This means even with valid RDS credentials, you cannot connect from outside the VPC. Only the EC2 instance can reach the database. This is least-privilege networking applied at the AWS infrastructure level.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agentic AI in DevOps — Claude Code&lt;/strong&gt;&lt;br&gt;
I used Claude Code throughout this deployment. For Terraform errors, I would type the error message, and Claude Code would read all module files simultaneously, trace the dependency chain, find the root cause, and apply the fix. This is the future of DevOps: AI agents that can hold the entire codebase in context and diagnose cross-file issues instantly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PM2 — Production Process Manager&lt;/strong&gt;&lt;br&gt;
If you start a Node.js app with &lt;code&gt;node server.js&lt;/code&gt; and close your SSH session, the process dies. PM2 runs apps as background daemons. &lt;code&gt;pm2 startup&lt;/code&gt; generates a system service that starts PM2 on boot. &lt;code&gt;pm2 save&lt;/code&gt; persists the process list. &lt;code&gt;pm2 list&lt;/code&gt; shows CPU, memory, and uptime for all managed processes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I Learned — 4 Lessons That Changed How I Think&lt;/strong&gt;&lt;br&gt;
Modules are professional infrastructure. Flat Terraform files work for small projects; modules scale. The separation of networking, security, and compute into independent modules means any engineer on the team can open one folder and understand exactly one thing. That clarity is worth the extra files.&lt;/p&gt;
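&lt;p&gt;The security-group chaining described above has a direct Terraform expression: the RDS ingress rule references the EC2 security group's ID instead of a CIDR block. A minimal sketch — the resource names here are illustrative, not the actual module code from this project:&lt;/p&gt;

```hcl
# RDS ingress: allow MySQL only from the EC2 instance's security group.
# Referencing a source security group ID (rather than an IP range) is
# what creates the chain of trust between tiers.
resource "aws_security_group_rule" "rds_from_ec2" {
  type                     = "ingress"
  from_port                = 3306
  to_port                  = 3306
  protocol                 = "tcp"
  security_group_id        = aws_security_group.rds.id
  source_security_group_id = aws_security_group.ec2.id
}
```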

&lt;p&gt;Agentic AI changes the debugging loop entirely. With Claude Code reading all files simultaneously, what took 30 minutes of manual cross-referencing took 7 minutes. The AI does not replace the engineer; it removes the tedious parts so the engineer can focus on architecture decisions.&lt;/p&gt;

&lt;p&gt;Read the files before trying to fix things. Almost every error I hit was answered by reading the relevant config file. The .env pointed to localhost. The package.json had no start script. The outputs.tf was missing vpc_id. The answer was always in the files read first.&lt;/p&gt;

&lt;p&gt;Infrastructure and deployment are separate concerns. Terraform provisions where the app runs. npm install, npm run build, and pm2 deploy the app itself. Nginx makes it production-ready. Understanding which layer handles which concern prevents debugging in the wrong place.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final Thoughts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This was the most complex assignment in the DevOps micro internship — and the most rewarding. I designed a real three-tier architecture, wrote production-grade Terraform modules, used Agentic AI as a genuine copilot, and got a full-stack application live on AWS.&lt;br&gt;
The Book Review App is live. The infrastructure is reproducible. Every error is documented. That is what production DevOps looks like.&lt;br&gt;
If you have questions, drop them in the comments. If this helped you, share it with someone learning DevOps. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;#terraform #aws #devops #nextjs #nodejs #mysql #claudecode #agenticai #cloudengineering #womenintech #learninginpublic&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Mastering Agentic AI DevOps with Claude Code: My First Full Pipeline Deployment</title>
      <dc:creator>Ebelechukwu Lucy Okafor</dc:creator>
      <pubDate>Tue, 17 Mar 2026 10:45:20 +0000</pubDate>
      <link>https://dev.to/ebelechukwu_lucyokafor/mastering-agentic-ai-devops-with-claude-code-my-first-full-pipeline-deployment-3o6l</link>
      <guid>https://dev.to/ebelechukwu_lucyokafor/mastering-agentic-ai-devops-with-claude-code-my-first-full-pipeline-deployment-3o6l</guid>
      <description>&lt;p&gt;I recently completed the Ultimate Agentic AI DevOps Assignment using Claude Code, and I’m excited to share what I built, what I learned, and how agentic AI is changing the way we think about DevOps automation.&lt;br&gt;
This post walks through each task from setting up Skills to deploying a live site on AWS and includes lessons, screenshots, and a LinkedIn post to prove it all works!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Task 1 — Set Up the Skill Files Correctly&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;What I did:&lt;/strong&gt;&lt;br&gt;
Created the &lt;code&gt;.claude/skills/&lt;/code&gt; folder structure with four subfolders: &lt;code&gt;scaffold-terraform&lt;/code&gt;, &lt;code&gt;tf-plan&lt;/code&gt;, &lt;code&gt;tf-apply&lt;/code&gt;, and &lt;code&gt;deploy&lt;/code&gt;.&lt;br&gt;
Downloaded and renamed the required files:&lt;br&gt;
scaffold-terraform/SKILL.md&lt;br&gt;
scaffold-terraform/template-spec.md&lt;br&gt;
tf-plan/SKILL.md&lt;br&gt;
tf-apply/SKILL.md&lt;br&gt;
deploy/SKILL.md&lt;br&gt;
Verified everything in VS Code to ensure proper placement and naming.&lt;br&gt;
&lt;strong&gt;What I learned:&lt;/strong&gt;&lt;br&gt;
Claude Skills require precise folder and file naming to function correctly.&lt;br&gt;
The &lt;code&gt;name:&lt;/code&gt; field in each SKILL.md must match the skill's folder name — otherwise, the skill won't load.&lt;br&gt;
&lt;strong&gt;Issue faced:&lt;/strong&gt;&lt;br&gt;
Initially placed template-spec.md outside its folder. Claude failed to scaffold.&lt;br&gt;
Fixed it by moving it into &lt;code&gt;scaffold-terraform/&lt;/code&gt; and re-running the skill.&lt;/p&gt;
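&lt;p&gt;Because the layout is so naming-sensitive, it is worth scripting both the creation and a verification pass. This sketch recreates the structure in a scratch directory with empty files standing in for the real downloads; the check at the end would have caught the misplaced &lt;code&gt;template-spec.md&lt;/code&gt; immediately.&lt;/p&gt;

```shell
# Recreate the .claude/skills/ layout in a scratch directory, then
# verify every required file landed where Claude Code expects it.
cd "$(mktemp -d)"
mkdir -p .claude/skills/scaffold-terraform \
         .claude/skills/tf-plan \
         .claude/skills/tf-apply \
         .claude/skills/deploy

# Empty stand-ins for the downloaded files.
touch .claude/skills/scaffold-terraform/SKILL.md \
      .claude/skills/scaffold-terraform/template-spec.md \
      .claude/skills/tf-plan/SKILL.md \
      .claude/skills/tf-apply/SKILL.md \
      .claude/skills/deploy/SKILL.md

# A misplaced template-spec.md was the bug -- this loop catches it.
for f in scaffold-terraform/SKILL.md scaffold-terraform/template-spec.md \
         tf-plan/SKILL.md tf-apply/SKILL.md deploy/SKILL.md; do
  test -f ".claude/skills/$f" || { echo "missing $f"; exit 1; }
done
echo "skills layout OK"
```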

&lt;p&gt;&lt;strong&gt;Task 2 — Walk Through and Explain the Four Skills&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;/scaffold-terraform&lt;/code&gt;&lt;br&gt;
What it does: Generates Terraform files based on a template spec.&lt;br&gt;
Tools used: Claude Code + Write access.&lt;br&gt;
Why Write access: It needs to create actual .tf files in the project.&lt;br&gt;
&lt;code&gt;/tf-plan&lt;/code&gt;&lt;br&gt;
What it does: Runs &lt;code&gt;terraform plan&lt;/code&gt; to preview infrastructure changes.&lt;br&gt;
Tools used: Terminal access only.&lt;br&gt;
Why read-only: It doesn't modify anything — it just analyzes.&lt;br&gt;
&lt;code&gt;/tf-apply&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;What it does: Applies the Terraform plan to provision resources.&lt;br&gt;
Tools used: Terminal + Write access.&lt;br&gt;
Why Write access: It creates infrastructure on AWS.&lt;br&gt;
&lt;code&gt;/deploy&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;What it does: Syncs site files to S3 and triggers CloudFront invalidation.&lt;/p&gt;

&lt;p&gt;Tools used: AWS CLI + Write access.&lt;br&gt;
Why Write access: It modifies cloud resources and pushes files.&lt;br&gt;
&lt;strong&gt;Key takeaway:&lt;/strong&gt;&lt;br&gt;
“Needs conversation context? Use a Skill. Self-contained job? Use a Subagent.”&lt;br&gt;
DevOps example (Skill): &lt;code&gt;/tf-plan&lt;/code&gt; needs context from previous steps.&lt;br&gt;
DevOps example (Subagent): security-auditor runs independently on Terraform files.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Task 3 — Run the Full Pipeline: Scaffold → Plan → Apply → Deploy&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;What I did:&lt;/strong&gt;&lt;br&gt;
Ran &lt;code&gt;/scaffold-terraform&lt;/code&gt; to generate Terraform files.&lt;br&gt;
Manually ran &lt;code&gt;terraform init&lt;/code&gt; in the terminal.&lt;br&gt;
Executed &lt;code&gt;/tf-plan&lt;/code&gt; to preview changes.&lt;br&gt;
Ran &lt;code&gt;/tf-apply&lt;/code&gt; to provision AWS resources.&lt;br&gt;
Used &lt;code&gt;/deploy&lt;/code&gt; to sync site files to S3 and invalidate the CloudFront cache.&lt;br&gt;
&lt;strong&gt;What I learned:&lt;/strong&gt;&lt;br&gt;
Claude Code can orchestrate a full deployment pipeline with minimal manual steps.&lt;br&gt;
Each skill builds on the previous one — like a relay race.&lt;br&gt;
&lt;strong&gt;Issue faced:&lt;/strong&gt;&lt;br&gt;
CloudFront invalidation failed due to missing permissions.&lt;br&gt;
Solved by updating the IAM role with the &lt;code&gt;cloudfront:CreateInvalidation&lt;/code&gt; permission.&lt;br&gt;
&lt;strong&gt;Success:&lt;/strong&gt;&lt;br&gt;
My site is live at: &lt;a href="https://d123abcxyz.cloudfront.net" rel="noopener noreferrer"&gt;https://d123abcxyz.cloudfront.net&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Task 4 — LinkedIn Post (MANDATORY)&lt;/strong&gt;&lt;/p&gt;
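&lt;p&gt;For reference, the two AWS calls behind the &lt;code&gt;/deploy&lt;/code&gt; step have this shape. The bucket name and distribution ID below are placeholders, and the commands are wrapped in a dry-run guard so you can inspect what would run before letting it touch real resources.&lt;/p&gt;

```shell
# Shape of the deploy step: sync static files to S3, then invalidate
# the CloudFront cache. Requires cloudfront:CreateInvalidation (the
# exact permission whose absence broke my first run).
DRY_RUN=1    # set to 0 to execute for real

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# Placeholders: substitute your bucket and distribution ID.
run aws s3 sync ./site s3://my-static-site-bucket --delete
run aws cloudfront create-invalidation \
    --distribution-id E123EXAMPLE \
    --paths "/*"
```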

&lt;p&gt;&lt;strong&gt;What I built:&lt;/strong&gt;&lt;br&gt;
A live static site hosted on AWS using Terraform and Claude Code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Skills used:&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;/scaffold-terraform&lt;/code&gt;, &lt;code&gt;/tf-plan&lt;/code&gt;, &lt;code&gt;/tf-apply&lt;/code&gt;, &lt;code&gt;/deploy&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I learned:&lt;/strong&gt;&lt;br&gt;
Agentic AI can automate DevOps pipelines with precision and context-awareness.&lt;br&gt;
Screenshots included:&lt;br&gt;
Claude Code skill execution&lt;br&gt;
Live site in browser&lt;br&gt;
Terraform files in VS Code&lt;/p&gt;

&lt;p&gt;🔗 LinkedIn Post URL: &lt;a href="https://www.linkedin.com/posts/ebelechukwu-okafor_agenticai-mcp-devops-activity-7439480896090537984-PpIl?utm_source=social_share_send&amp;amp;utm_medium=member_desktop_web&amp;amp;rcm=ACoAABglo4IBjif-4VKcp2fiPwev8ImRV7LTaN0" rel="noopener noreferrer"&gt;https://www.linkedin.com/posts/ebelechukwu-okafor_agenticai-mcp-devops-activity-7439480896090537984-PpIl?utm_source=social_share_send&amp;amp;utm_medium=member_desktop_web&amp;amp;rcm=ACoAABglo4IBjif-4VKcp2fiPwev8ImRV7LTaN0&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;✅ Final Thoughts&lt;br&gt;
This assignment taught me how to:&lt;br&gt;
Structure and run Claude Skills correctly&lt;br&gt;
Understand tool permissions and context boundaries&lt;br&gt;
Deploy a full infrastructure pipeline using AI agents&lt;br&gt;
Agentic DevOps is real — and it’s powerful. &lt;/p&gt;

&lt;p&gt;I’m excited to keep building with Claude Code and explore more advanced workflows.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>aws</category>
      <category>devops</category>
      <category>terraform</category>
    </item>
    <item>
      <title>I Deployed a Production-Style Three-Tier Application on Azure (Next.js + Node.js + MySQL)</title>
      <dc:creator>Ebelechukwu Lucy Okafor</dc:creator>
      <pubDate>Sun, 08 Mar 2026 06:54:15 +0000</pubDate>
      <link>https://dev.to/ebelechukwu_lucyokafor/i-deployed-a-production-style-three-tier-application-on-azure-nextjs-nodejs-mysql-6a5</link>
      <guid>https://dev.to/ebelechukwu_lucyokafor/i-deployed-a-production-style-three-tier-application-on-azure-nextjs-nodejs-mysql-6a5</guid>
      <description>&lt;p&gt;A few days ago, I challenged myself to deploy a real-world cloud architecture instead of just running applications locally.&lt;br&gt;
*&lt;strong&gt;&lt;em&gt;The goal was simple:&lt;br&gt;
Deploy a Book Review Web Application on the cloud using a secure three-tier architecture.&lt;br&gt;
But like most cloud projects… it quickly became much more than that.&lt;br&gt;
*&lt;/em&gt;&lt;/strong&gt;I had to deal with:&lt;br&gt;
networking&lt;br&gt;
database authentication issues&lt;br&gt;
SSL enforcement&lt;br&gt;
load balancing&lt;br&gt;
debugging Node.js deployment problems&lt;br&gt;
By the end of the project, I had built a fully working cloud system on Microsoft Azure.&lt;br&gt;
This post walks through what I built, the problems I faced, and how I solved them.&lt;br&gt;
*&lt;strong&gt;&lt;em&gt;What I Built&lt;br&gt;
The application is a Book Review platform built with:&lt;br&gt;
Frontend&lt;br&gt;
Next.js&lt;br&gt;
Backend&lt;br&gt;
Node.js + Express&lt;br&gt;
Database&lt;br&gt;
MySQL&lt;br&gt;
*&lt;/em&gt;&lt;/strong&gt;Instead of deploying everything on one server, I used a ****Three-Tier Architecture.&lt;br&gt;
Internet&lt;br&gt;
   │&lt;br&gt;
Public Load Balancer&lt;br&gt;
   │&lt;br&gt;
Web Tier (Next.js + Nginx)&lt;br&gt;
   │&lt;br&gt;
Internal Load Balancer&lt;br&gt;
   │&lt;br&gt;
App Tier (Node.js API)&lt;br&gt;
   │&lt;br&gt;
Database Tier (Azure MySQL)&lt;/p&gt;

&lt;p&gt;Each tier runs independently, which is how most production systems are designed.&lt;/p&gt;

&lt;p&gt;☁️ &lt;strong&gt;Azure Services Used&lt;/strong&gt;&lt;br&gt;
Here are the core services I used:&lt;br&gt;
Azure Virtual Network&lt;br&gt;
Azure Virtual Machines&lt;br&gt;
Azure Network Security Groups&lt;br&gt;
Azure Load Balancer&lt;br&gt;
Azure Database for MySQL Flexible Server&lt;br&gt;
This setup allowed me to design secure network segmentation between application layers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1 — Creating the Network Architecture&lt;/strong&gt;&lt;br&gt;
First, I created a Virtual Network with the CIDR block:&lt;br&gt;
10.0.0.0/16&lt;/p&gt;

&lt;p&gt;Then I divided it into six subnets.&lt;br&gt;
&lt;strong&gt;Web Tier (Public)&lt;/strong&gt;&lt;br&gt;
10.0.1.0/24&lt;br&gt;
10.0.2.0/24&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;App Tier (Private)&lt;/strong&gt;&lt;br&gt;
10.0.3.0/24&lt;br&gt;
10.0.4.0/24&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Database Tier (Private)&lt;/strong&gt;&lt;br&gt;
10.0.5.0/24&lt;br&gt;
10.0.6.0/24&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why?&lt;/strong&gt;&lt;br&gt;
Because databases should never be exposed to the public internet.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2 — Securing the Network&lt;/strong&gt;&lt;br&gt;
I configured Network Security Groups.&lt;br&gt;
Rules were strict:&lt;br&gt;
&lt;strong&gt;Web Tier&lt;/strong&gt;&lt;br&gt;
Allow HTTP (80)&lt;br&gt;
Allow HTTPS (443)&lt;br&gt;
Allow SSH (22)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;App Tier&lt;/strong&gt;&lt;br&gt;
Allow port 3001 from Web Tier only&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Database Tier&lt;/strong&gt;&lt;br&gt;
Allow port 3306 from App Tier only&lt;/p&gt;

&lt;p&gt;This ensured that:&lt;br&gt;
Users can only access the frontend&lt;br&gt;
backend servers remain private&lt;br&gt;
The database stays fully protected&lt;br&gt;
&lt;strong&gt;Step 3 — Deploying the Backend&lt;/strong&gt;&lt;br&gt;
The backend runs a Node.js API server.&lt;br&gt;
I installed dependencies:&lt;br&gt;
sudo apt update&lt;br&gt;
sudo apt install nodejs npm git&lt;/p&gt;

&lt;p&gt;Then I installed PM2 to manage the application.&lt;br&gt;
sudo npm install -g pm2&lt;/p&gt;

&lt;p&gt;Start the backend:&lt;br&gt;
pm2 start server.js --name backend-api&lt;/p&gt;

&lt;p&gt;Save the process:&lt;br&gt;
pm2 save&lt;/p&gt;

&lt;p&gt;Enable startup on reboot:&lt;br&gt;
pm2 startup&lt;/p&gt;

&lt;p&gt;Now the backend runs automatically even if the server restarts.&lt;br&gt;
&lt;strong&gt;Step 4 — Setting Up the Database&lt;/strong&gt;&lt;br&gt;
For the database layer, I used:&lt;br&gt;
Azure Database for MySQL Flexible Server&lt;br&gt;
&lt;strong&gt;Important configurations:&lt;/strong&gt;&lt;br&gt;
Private network access&lt;br&gt;
SSL required&lt;br&gt;
High availability (Zone redundant)&lt;br&gt;
Read replica&lt;br&gt;
This ensures the database is:&lt;br&gt;
secure&lt;br&gt;
scalable&lt;br&gt;
fault tolerant&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Testing Database Connectivity&lt;/strong&gt;&lt;br&gt;
From the App VM, I installed the MySQL client.&lt;br&gt;
sudo apt install mysql-client&lt;/p&gt;

&lt;p&gt;Then connected using SSL.&lt;br&gt;
mysql -h book-review-mysql.mysql.database.azure.com \&lt;br&gt;
-u username \&lt;br&gt;
-p \&lt;br&gt;
--ssl-mode=REQUIRED&lt;/p&gt;

&lt;p&gt;If everything works you should see:&lt;br&gt;
mysql&amp;gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Problems I Faced (And How I Solved Them)&lt;/strong&gt;&lt;br&gt;
❌ Problem 1 — MySQL Login Failed&lt;br&gt;
Error:&lt;br&gt;
ERROR 1045 (28000): Access denied for user&lt;/p&gt;

&lt;p&gt;Cause:&lt;br&gt;
Azure requires the username format:&lt;br&gt;
username@servername&lt;/p&gt;

&lt;p&gt;Solution:&lt;br&gt;
Ebelechukwu@book-review-mysql&lt;/p&gt;

&lt;p&gt;❌ &lt;strong&gt;Problem 2 — SSL Connection Required&lt;/strong&gt;&lt;br&gt;
Azure MySQL enforces SSL.&lt;br&gt;
Solution:&lt;br&gt;
&lt;code&gt;--ssl-mode=REQUIRED&lt;/code&gt;&lt;br&gt;
❌ &lt;strong&gt;Problem 3 — Node.js Would Not Start&lt;/strong&gt;&lt;br&gt;
Error:&lt;br&gt;
npm error: Missing script: start&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt;&lt;br&gt;
Instead of npm start, I used PM2.&lt;/p&gt;
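&lt;p&gt;PM2 sidesteps the symptom, but the root cause of &lt;code&gt;Missing script: start&lt;/code&gt; is a missing entry in &lt;code&gt;package.json&lt;/code&gt;, and the permanent fix is one line. A minimal sketch of the corrected &lt;code&gt;scripts&lt;/code&gt; block, written into a scratch directory; the &lt;code&gt;src/server.js&lt;/code&gt; entry point is an assumption — adjust it to your app's actual main file.&lt;/p&gt;

```shell
# Demonstrate the permanent fix for "Missing script: start": a
# package.json whose scripts block defines what `npm start` should run.
cd "$(mktemp -d)"
printf '%s\n' \
  '{' \
  '  "name": "backend",' \
  '  "version": "1.0.0",' \
  '  "scripts": {' \
  '    "start": "node src/server.js"' \
  '  }' \
  '}' > package.json

# With this entry present, both `npm start` and `pm2 start npm -- start`
# know what to launch.
grep '"start"' package.json
```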

&lt;p&gt;&lt;strong&gt;Final Result&lt;/strong&gt;&lt;br&gt;
After everything was configured:&lt;br&gt;
✔ Backend connected to MySQL&lt;br&gt;
✔ Database schema created automatically&lt;br&gt;
✔ Sample books and reviews inserted&lt;br&gt;
✔ API running on port 3001&lt;br&gt;
&lt;strong&gt;Testing:&lt;/strong&gt;&lt;br&gt;
curl &lt;a href="http://localhost:3001" rel="noopener noreferrer"&gt;http://localhost:3001&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Response:&lt;/strong&gt;&lt;br&gt;
Book Review API is running&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Success!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;What This Project Taught Me&lt;br&gt;
This project helped me understand:&lt;br&gt;
real cloud networking&lt;br&gt;
secure infrastructure design&lt;br&gt;
debugging distributed systems&lt;br&gt;
production style deployments&lt;br&gt;
Most importantly…&lt;br&gt;
Cloud engineering is not just about launching servers.&lt;br&gt;
It’s about designing systems that are secure, scalable, and resilient.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I Would Improve Next&lt;/strong&gt;&lt;br&gt;
Next improvements I plan to add:&lt;br&gt;
Terraform infrastructure&lt;br&gt;
CI/CD pipelines&lt;br&gt;
containerization with Docker&lt;br&gt;
Kubernetes deployment&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final Thoughts&lt;/strong&gt;&lt;br&gt;
If you're learning Cloud Engineering or DevOps, I highly recommend building projects like this.&lt;br&gt;
Nothing teaches cloud architecture faster than breaking things and fixing them.&lt;br&gt;
If you’ve built something similar, I’d love to hear about it.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Tags: #azure #cloud #devops #nodejs #mysql #cloudarchitecture&lt;/em&gt;&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>azure</category>
      <category>nextjs</category>
      <category>node</category>
    </item>
    <item>
      <title>Deploy Book Review App (Three-Tier Architecture) on AWS</title>
      <dc:creator>Ebelechukwu Lucy Okafor</dc:creator>
      <pubDate>Fri, 27 Feb 2026 21:12:43 +0000</pubDate>
      <link>https://dev.to/ebelechukwu_lucyokafor/deploy-book-review-app-three-tier-architecture-on-aws-6ha</link>
      <guid>https://dev.to/ebelechukwu_lucyokafor/deploy-book-review-app-three-tier-architecture-on-aws-6ha</guid>
      <description>&lt;p&gt;&lt;strong&gt;&lt;em&gt;Deploying a full-stack app in the cloud sounds glamorous—until you’re debugging a &lt;code&gt;403 Forbidden&lt;/code&gt; error at 2 a.m. Here’s how I survived my DevOps capstone project—and what you can learn from it.&lt;/em&gt;&lt;br&gt;
**&lt;/strong&gt; What Is This Project?&lt;br&gt;
I recently completed &lt;strong&gt;Assignment 4&lt;/strong&gt; of the &lt;em&gt;DevOps Zero to Hero&lt;/em&gt; course by Pravin Mishra, a hands-on challenge to deploy the &lt;strong&gt;Book Review App&lt;/strong&gt; on AWS using &lt;strong&gt;real-world, production-grade architecture&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The app is simple in concept:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Browse books&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Read reviews&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Log in and write your own&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But under the hood? It’s a &lt;strong&gt;three-tier masterpiece&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Frontend&lt;/strong&gt;: Next.js (React with server-side rendering)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backend&lt;/strong&gt;: Node.js + Express (REST API)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database&lt;/strong&gt;: MySQL (via Amazon RDS)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And the goal wasn’t just to “make it work.” It was to build it &lt;strong&gt;the way real companies do&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ High availability&lt;/li&gt;
&lt;li&gt;✅ Network isolation&lt;/li&gt;
&lt;li&gt;✅ Auto-scaling&lt;/li&gt;
&lt;li&gt;✅ Secure traffic flow&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No pressure, right?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Architecture: Like Building a Castle&lt;/strong&gt;&lt;br&gt;
Here’s what I built:&lt;br&gt;
Internet&lt;br&gt;
↓&lt;br&gt;
Public Application Load Balancer (ALB)&lt;br&gt;
↓&lt;br&gt;
Web Tier (Next.js on EC2 in public subnets)&lt;br&gt;
↓&lt;br&gt;
Internal ALB (private, not internet-facing)&lt;br&gt;
↓&lt;br&gt;
App Tier (Node.js on EC2 in private subnets)&lt;br&gt;
↓&lt;br&gt;
Database Tier (MySQL RDS Multi-AZ + Read Replica in private subnets)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Security Principles:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;No public access to backend or database&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Traffic flows only one way&lt;/strong&gt;: Internet → Web → App → DB&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security Groups act like bouncers&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;ALB can talk to Web EC2
&lt;/li&gt;
&lt;li&gt;Web EC2 can talk to Internal ALB
&lt;/li&gt;
&lt;li&gt;Internal ALB can talk to App EC2
&lt;/li&gt;
&lt;li&gt;App EC2 can talk to RDS
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Nothing else gets in.&lt;/strong&gt;
This isn’t just “best practice”—it’s how you prevent breaches.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step-by-Step: How I Did It&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  1. &lt;strong&gt;Lay the Foundation: VPC &amp;amp; Subnets&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;I created a custom VPC (&lt;code&gt;10.0.0.0/16&lt;/code&gt;) with &lt;strong&gt;6 subnets across 2 Availability Zones&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;2 public&lt;/strong&gt; (for Web EC2 + Public ALB)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;2 private&lt;/strong&gt; (for App EC2 + Internal ALB)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;2 private&lt;/strong&gt; (for RDS)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then I added:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An &lt;strong&gt;Internet Gateway&lt;/strong&gt; (for public access)&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;NAT Gateway&lt;/strong&gt; (so private instances can download packages)&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Pro tip: Always enable &lt;strong&gt;DNS hostnames&lt;/strong&gt; in your VPC—Next.js and Node.js need it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol start="2"&gt;
&lt;li&gt;&lt;strong&gt;Lock It Down: Security Groups&lt;/strong&gt;&lt;br&gt;
I created 5 security groups that &lt;strong&gt;only trust each other&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;public-alb-sg&lt;/code&gt; → allows HTTP from the world&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;web-ec2-sg&lt;/code&gt; → allows HTTP &lt;strong&gt;only from &lt;code&gt;public-alb-sg&lt;/code&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;internal-alb-sg&lt;/code&gt; → allows HTTP &lt;strong&gt;only from &lt;code&gt;web-ec2-sg&lt;/code&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;app-ec2-sg&lt;/code&gt; → allows port 3001 &lt;strong&gt;only from &lt;code&gt;internal-alb-sg&lt;/code&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;db-sg&lt;/code&gt; → allows MySQL &lt;strong&gt;only from &lt;code&gt;app-ec2-sg&lt;/code&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This “chain of trust” is the backbone of secure cloud architecture.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;&lt;strong&gt;Deploy the Database: RDS Done Right&lt;/strong&gt;&lt;br&gt;
I launched an &lt;strong&gt;RDS MySQL instance&lt;/strong&gt; with:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Multi-AZ&lt;/strong&gt; (automatic failover if one zone dies)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Read Replica&lt;/strong&gt; (for scaling reads later)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No public access&lt;/strong&gt; (critical!)&lt;/li&gt;
&lt;li&gt;Initial database name: &lt;code&gt;bookreview&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Mistake I made:&lt;/strong&gt; I forgot to actually create the &lt;code&gt;bookreview&lt;/code&gt; database during setup. The app crashed with &lt;code&gt;Unknown database 'bookreview'&lt;/code&gt;.&lt;br&gt;
&lt;strong&gt;Fix:&lt;/strong&gt; Connect via the &lt;code&gt;mysql&lt;/code&gt; CLI and run &lt;code&gt;CREATE DATABASE bookreview;&lt;/code&gt;.&lt;/p&gt;
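&lt;p&gt;A slightly more robust variant uses &lt;code&gt;IF NOT EXISTS&lt;/code&gt;, which makes the statement safe to re-run from a provisioning script or EC2 user-data. The helper below only builds and prints the command; the endpoint and user shown are placeholders for your RDS values.&lt;/p&gt;

```shell
# Build the idempotent database-creation command. IF NOT EXISTS means
# re-running during repeated provisioning attempts is harmless.
create_db_cmd() {
  echo "mysql -h $1 -u $2 -p -e 'CREATE DATABASE IF NOT EXISTS bookreview;'"
}

# Placeholders: substitute your actual RDS endpoint and admin user,
# then run the printed command on the app-tier instance.
create_db_cmd your-rds-endpoint.us-east-1.rds.amazonaws.com admin
```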

&lt;ol start="4"&gt;
&lt;li&gt;&lt;strong&gt;Deploy the Backend (Node.js)&lt;/strong&gt;&lt;br&gt;
On a private EC2 instance:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/pravinmishraaws/book-review-app.git
&lt;span class="nb"&gt;cd &lt;/span&gt;backend
npm &lt;span class="nb"&gt;install

&lt;/span&gt;Then I created &lt;span class="sb"&gt;`&lt;/span&gt;.env&lt;span class="sb"&gt;`&lt;/span&gt;:
&lt;span class="nb"&gt;env
&lt;/span&gt;&lt;span class="nv"&gt;DB_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;your-rds-endpoint.us-east-1.rds.amazonaws.com
&lt;span class="nv"&gt;DB_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;bookreview
&lt;span class="nv"&gt;DB_USER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;admin
&lt;span class="nv"&gt;DB_PASS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;your-password
&lt;span class="nv"&gt;PORT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3001
&lt;span class="nv"&gt;ALLOWED_ORIGINS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://your-public-alb-dns

&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;  Critical: &lt;span class="sb"&gt;`&lt;/span&gt;ALLOWED_ORIGINS&lt;span class="sb"&gt;`&lt;/span&gt; must match your &lt;span class="k"&gt;**&lt;/span&gt;Public ALB DNS exactly&lt;span class="k"&gt;**&lt;/span&gt;—no typos, no truncation.

I used &lt;span class="sb"&gt;`&lt;/span&gt;pm2&lt;span class="sb"&gt;`&lt;/span&gt; to keep the app running:
bash
&lt;span class="nb"&gt;sudo &lt;/span&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; pm2
pm2 start src/server.js &lt;span class="nt"&gt;--name&lt;/span&gt; &lt;span class="s2"&gt;"backend"&lt;/span&gt;
pm2 startup  &lt;span class="c"&gt;# auto-start on reboot&lt;/span&gt;

5. &lt;span class="k"&gt;****&lt;/span&gt;Deploy the Frontend &lt;span class="o"&gt;(&lt;/span&gt;Next.js&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="k"&gt;**&lt;/span&gt;
On a public EC2 instance:
bash
&lt;span class="nb"&gt;cd &lt;/span&gt;frontend
npm &lt;span class="nb"&gt;install
&lt;/span&gt;npm run build

Then I created &lt;span class="sb"&gt;`&lt;/span&gt;.env&lt;span class="sb"&gt;`&lt;/span&gt;:
&lt;span class="nb"&gt;env
&lt;/span&gt;&lt;span class="nv"&gt;NEXT_PUBLIC_API_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/api

This tells the frontend to send API requests to &lt;span class="sb"&gt;`&lt;/span&gt;/api&lt;span class="sb"&gt;`&lt;/span&gt;, which Nginx will proxy to the backend.

I ran it with:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pm2 start npm --name "frontend" -- run start
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;ol start="6"&gt;
&lt;li&gt;&lt;strong&gt;Configure Nginx: The Glue That Holds It Together&lt;/strong&gt;&lt;br&gt;
My &lt;code&gt;/etc/nginx/sites-available/default&lt;/code&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="k"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;server_name&lt;/span&gt; &lt;span class="s"&gt;_&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_pass&lt;/span&gt; &lt;span class="s"&gt;http://localhost:3000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  &lt;span class="c1"&gt;# Next.js&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;Host&lt;/span&gt; &lt;span class="nv"&gt;$host&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/api/&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kn"&gt;rewrite&lt;/span&gt; &lt;span class="s"&gt;^/api/(.*)&lt;/span&gt; &lt;span class="n"&gt;/&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt; &lt;span class="s"&gt;break&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_pass&lt;/span&gt; &lt;span class="s"&gt;http://internal-alb-dns.us-east-1.elb.amazonaws.com&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;Host&lt;/span&gt; &lt;span class="nv"&gt;$host&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;The magic:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;User visits &lt;code&gt;http://public-alb&lt;/code&gt; → sees Next.js
&lt;/li&gt;
&lt;li&gt;When they click “Login,” the frontend calls &lt;code&gt;/api/login&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Nginx forwards to &lt;strong&gt;Internal ALB&lt;/strong&gt; → &lt;strong&gt;App EC2&lt;/strong&gt; → &lt;strong&gt;RDS&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;7. Add Load Balancers &amp;amp; Auto Scaling&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Public ALB&lt;/strong&gt;: Routes traffic to Web EC2 instances&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Internal ALB&lt;/strong&gt;: Routes &lt;code&gt;/api&lt;/code&gt; to App EC2 instances&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Auto Scaling Groups&lt;/strong&gt;: 2+ instances per tier, across 2 AZs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If one instance dies? AWS replaces it automatically.&lt;/p&gt;
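&lt;p&gt;As a rough sketch, an Auto Scaling Group of this shape can be created from the CLI. The group name, launch template, subnets, and target group ARN below are placeholders for illustration, not the values from my setup:&lt;/p&gt;

```shell
# Hypothetical names, subnets, and ARN -- substitute your own resources.
# Two subnets in different AZs give the 2-AZ spread described above;
# --health-check-type ELB makes the ASG replace instances the ALB marks unhealthy.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name web-asg \
  --launch-template LaunchTemplateName=web-tier-lt,Version='$Latest' \
  --min-size 2 --max-size 4 \
  --vpc-zone-identifier "subnet-aaa111,subnet-bbb222" \
  --target-group-arns "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web-tg/abc123" \
  --health-check-type ELB \
  --health-check-grace-period 120
```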

&lt;p&gt;&lt;strong&gt;The Struggle Is Real: Debugging Nightmares&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Problem 1: &lt;code&gt;403 Forbidden&lt;/code&gt; on ALB Health Checks&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Symptom&lt;/strong&gt;: ALB shows “Unhealthy” targets.&lt;br&gt;
&lt;strong&gt;Root cause&lt;/strong&gt;:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;My &lt;code&gt;.env&lt;/code&gt; had &lt;code&gt;ALLOWED_ORIGINS=http://alb...amazonaws&amp;gt;&lt;/code&gt; (truncated!)
&lt;/li&gt;
&lt;li&gt;Backend blocked all requests due to CORS mismatch.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Fix&lt;/strong&gt;:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Corrected &lt;code&gt;.env&lt;/code&gt; to full DNS: &lt;code&gt;...amazonaws.com&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Added a &lt;code&gt;/health&lt;/code&gt; endpoint for ALB health checks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Problem 2: “Unknown database ‘bookreview’”&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Symptom&lt;/strong&gt;: Backend crashes on startup.&lt;br&gt;
&lt;strong&gt;Cause&lt;/strong&gt;: RDS doesn’t auto-create databases unless you specify it.&lt;br&gt;
&lt;strong&gt;Fix&lt;/strong&gt;:  &lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
sql
mysql -h &amp;lt;RDS&amp;gt; -u admin -p
CREATE DATABASE bookreview;

****Problem 3: Nginx Serving Blank Page
****Symptom**: `curl localhost` returns 403.  
****Cause**: I was trying to serve static files from `/var/www/html`—but Next.js needs to run as a server.  
**Fix**: Switched to **reverse proxy** mode (`proxy_pass http://localhost:3000`).

****Lessons Learned (The Hard Way)

1. **Infrastructure is code—and configuration is state.**  
   One missing character breaks everything.
2. **Test each layer independently.**  
   - Can you `curl localhost:3000`?  
   - Can you `curl localhost:3001/api/books` from the backend?  
   - Does `mysql -h ...` connect?
3. **ALB health checks are strict.**  
   Use a dedicated `/health` route that always returns `200`.

4. **Security Groups are your first line of defense.**  
   If traffic isn’t flowing, check SG rules before touching code.

5. **Real DevOps isn’t about perfection—it’s about persistence.**  
   I failed 20 times. On the 21st, it worked.

****Final Result



1. 
A fully functional Book Review App  

2. 
 Survives instance termination  

3. 
 Scales across Availability Zones  

4. 
Secure, isolated, and production-ready  

And most importantly: **I understand how it all fits together.**

****Want to Try It Yourself?

**GitHub Repo**:  
[https://github.com/pravinmishraaws/book-review-app](https://github.com/pravinmishraaws/book-review-app)

****What You’ll Need**:
- AWS account (Free Tier eligible)
- Basic Linux &amp;amp; Git knowledge
- Patience (and maybe coffee)
Follow the READMEs in `/frontend` and `/backend`, and use this blog as your troubleshooting guide.
****Final Thought
Deploying this app didn’t just teach me AWS—it taught me **how to think like an engineer**.  
Not “Does it work?” but **“Why does it work—and what happens when it doesn’t?”**
That’s the real DevOps mindset.
Now go build something awesome. 
*Like this post? Give it a share, share it with a fellow learner, or comment below with your own deployment war stories!*  
#DevOps #AWS #CloudEngineering #FullStack #NextJS #NodeJS #LearningInPublic #RealWorldDev

![ ](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/brlx9i6cs2k0prpa2b5c.png)****
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>architecture</category>
      <category>aws</category>
      <category>devops</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Executing a Full Scrum Sprint with Jira: From Backlog Refinement to Live Deployment</title>
      <dc:creator>Ebelechukwu Lucy Okafor</dc:creator>
      <pubDate>Sat, 07 Feb 2026 05:52:48 +0000</pubDate>
      <link>https://dev.to/ebelechukwu_lucyokafor/executing-a-full-scrum-sprint-with-jira-from-backlog-refinement-to-live-deployment-880</link>
      <guid>https://dev.to/ebelechukwu_lucyokafor/executing-a-full-scrum-sprint-with-jira-from-backlog-refinement-to-live-deployment-880</guid>
      <description>&lt;p&gt;This week, I executed a complete Scrum Sprint lifecycle using Jira Cloud (team-managed project) — covering backlog refinement, sprint planning, delivery, deployment, reporting, and retrospective.&lt;br&gt;
The focus was not on theory, but on end-to-end execution: planning work, shipping a production change, and providing verifiable proof.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Project Context&lt;/strong&gt;&lt;br&gt;
Project: Gotto Job (UI-only enhancements)&lt;br&gt;
Methodology: Scrum (Team-managed, Solo execution)&lt;br&gt;
Core Tools: Jira Cloud, Git, GitHub, EC2 (Ubuntu), Nginx&lt;br&gt;
Scope: Small, high-impact UI improvements delivered incrementally&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scrum Roles &amp;amp; Ownership (Solo Execution)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;To mirror real delivery accountability, I assumed all Scrum roles:&lt;/strong&gt;&lt;br&gt;
Product Owner: Prioritized UI changes with the highest user-perceived value&lt;br&gt;
Scrum Master: Enforced Scrum cadence (refinement → planning → sprint → retro)&lt;br&gt;
Dev Lead: Implemented UI changes with clear acceptance criteria&lt;br&gt;
DevOps Lead: Deployed changes to a live environment and validated delivery&lt;br&gt;
Outcome: Clear separation of concerns, even in solo mode.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Backlog Refinement &amp;amp; Estimation&lt;/strong&gt;&lt;br&gt;
I created a structured product backlog under a single Epic:&lt;br&gt;
Epic: Improve Gotto Job UI discoverability &amp;amp; trust&lt;br&gt;
6+ user stories&lt;br&gt;
Clear acceptance criteria&lt;br&gt;
Fibonacci estimation (1/2/3)&lt;br&gt;
Ranked by business value&lt;br&gt;
This ensured the sprint scope was predictable and executable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sprint Planning (Sprint 1)&lt;/strong&gt;&lt;br&gt;
Defined a clear Sprint Goal&lt;br&gt;
Selected 3–4 small, deliverable stories&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Broke stories into actionable subtasks:&lt;/strong&gt;&lt;br&gt;
Build&lt;br&gt;
Verify&lt;br&gt;
Deploy&lt;br&gt;
Screenshot (proof)&lt;br&gt;
&lt;strong&gt;Focus:&lt;/strong&gt; Deliver visible value, not partial work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Delivery &amp;amp; Deployment (DevOps Execution)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One story from Sprint 1 was fully implemented and shipped:&lt;br&gt;
Code changes committed with meaningful Git messages&lt;br&gt;
Repository deployed on EC2 using Nginx static hosting&lt;br&gt;
Live URL verified&lt;br&gt;
Jira issues transitioned across the workflow to Done&lt;br&gt;
Deployment proof attached&lt;/p&gt;
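&lt;p&gt;For reference, static hosting on Nginx needs only a minimal server block; the site root below is an assumption for illustration, not my actual configuration:&lt;/p&gt;

```nginx
server {
    listen 80;
    server_name _;

    # Serve the built UI straight from disk (path is illustrative)
    root /var/www/gotto-job;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
```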

&lt;p&gt;&lt;strong&gt;This followed a real DevOps loop:&lt;/strong&gt;&lt;br&gt;
Plan → Build → Ship → Verify → Document&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reporting &amp;amp; Transparency&lt;/strong&gt;&lt;br&gt;
Enabled and reviewed the Burndown Chart&lt;br&gt;
Used Jira reports to track sprint health and scope&lt;br&gt;
Ensured delivery visibility at all stages&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sprint Retrospective&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A structured retro was documented, covering:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;What went well&lt;br&gt;
What to improve&lt;br&gt;
Scrum pillar observed: Transparency&lt;br&gt;
Scrum value demonstrated: Commitment&lt;br&gt;
This reinforced continuous improvement, not just delivery.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why This Matters (DevOps Perspective)&lt;/strong&gt;&lt;br&gt;
This sprint demonstrated practical capability in:&lt;br&gt;
Agile execution with Jira (not checkbox usage)&lt;br&gt;
Backlog refinement and estimation discipline&lt;br&gt;
Shipping small, safe increments&lt;br&gt;
Infrastructure + deployment ownership&lt;br&gt;
Evidence-based delivery (live URL + Jira + GitHub)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Links&lt;/strong&gt;&lt;br&gt;
Live Deployment: &lt;br&gt;
GitHub Repository: &lt;a href="https://github.com/Lucycloud2024/GOTTO-JOB-DEMO" rel="noopener noreferrer"&gt;https://github.com/Lucycloud2024/GOTTO-JOB-DEMO&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final Note&lt;/strong&gt;&lt;br&gt;
This assignment strengthened my ability to operate in a real delivery environment, where planning, execution, deployment, and visibility are equally important.&lt;br&gt;
I’m now comfortable contributing to teams that value small batches, fast feedback, and measurable delivery.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>management</category>
      <category>softwaredevelopment</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Creating and Managing Repositories on GitHub (With Git CLI Commands)</title>
      <dc:creator>Ebelechukwu Lucy Okafor</dc:creator>
      <pubDate>Fri, 30 Jan 2026 06:43:13 +0000</pubDate>
      <link>https://dev.to/ebelechukwu_lucyokafor/creating-and-managing-repositories-on-github-with-git-cli-commands-11c0</link>
      <guid>https://dev.to/ebelechukwu_lucyokafor/creating-and-managing-repositories-on-github-with-git-cli-commands-11c0</guid>
      <description>&lt;p&gt;If you have ever worked with code or followed a software project online, chances are you have come across GitHub. GitHub is a popular place for people to store code and work together on projects. The main thing that GitHub uses to organize all of this is something called a repository, or GitHub repository, which people often call a GitHub repo. A GitHub repository is a part of GitHub, and people use GitHub repositories to store their code and track changes to their code.&lt;/p&gt;

&lt;p&gt;Understanding how repositories work, and how to manage them from both the GitHub interface and the command line, is essential for developers, DevOps engineers, and anyone just starting out with version control.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Is a GitHub Repository?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A GitHub repository is a home for your project on the internet. It holds all of your project files: the code you write, your project’s settings, and a README file that tells people what the project is about. What makes a repository powerful is that it tracks every change you make to your files using Git, a version control system.&lt;/p&gt;

&lt;p&gt;With Git, you can see what was changed, who made the change, and when it happened. If something does not go as planned, you can easily roll back to a previous version of your project.&lt;/p&gt;

&lt;p&gt;Repositories can be public, open for everyone to see, or private, which makes GitHub a good fit for both open-source projects and the work companies do privately.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating a Repository on GitHub (Web Method)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To create a repository on GitHub:&lt;/p&gt;

&lt;p&gt;Log in to your GitHub account.&lt;/p&gt;

&lt;p&gt;Click the “+” icon at the top-right and select “New repository.”&lt;/p&gt;

&lt;p&gt;Enter a repository name.&lt;/p&gt;

&lt;p&gt;(Optional) Add a short description.&lt;/p&gt;

&lt;p&gt;Choose Public or Private.&lt;/p&gt;

&lt;p&gt;Check “Add a README file” (recommended).&lt;/p&gt;

&lt;p&gt;Click “Create repository.”&lt;/p&gt;

&lt;p&gt;When you create a repository, GitHub gives you a repository URL. You use this URL to connect your local system to GitHub.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating a Repository Using Git CLI&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Many developers prefer the command line because it is faster and more flexible, and it gives them full control over each step.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Initialize Git in your project folder&lt;/p&gt;

&lt;p&gt;git init&lt;/p&gt;

&lt;p&gt;This command turns your folder into a Git repository.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Stage your files&lt;/p&gt;

&lt;p&gt;git add .&lt;/p&gt;

&lt;p&gt;This stages every file in the directory, preparing them so Git can track them and include them in the next commit.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; Commit your changes&lt;/p&gt;

&lt;p&gt;git commit -m "Initial commit"&lt;/p&gt;

&lt;p&gt;This records the staged changes. A commit saves a snapshot of your project at that moment, so you can look back later and see exactly how the project stood at each point in time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; Connect your project to GitHub&lt;/p&gt;

&lt;p&gt;To connect your local repository to the one on GitHub, add it as a remote named origin:&lt;/p&gt;

&lt;p&gt;git remote add origin &lt;a href="https://github.com/username/repository-name.git" rel="noopener noreferrer"&gt;https://github.com/username/repository-name.git&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5:&lt;/strong&gt; Push the code to GitHub&lt;/p&gt;

&lt;p&gt;git branch -M main&lt;/p&gt;

&lt;p&gt;git push -u origin main&lt;/p&gt;

&lt;p&gt;Your local project is now uploaded to GitHub.&lt;/p&gt;
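&lt;p&gt;Putting the steps above together, here is the whole sequence as one runnable sketch. To keep it self-contained it uses a local bare repository in place of the GitHub URL, and the identity values are placeholders:&lt;/p&gt;

```shell
set -e
tmp=$(mktemp -d)

# A local bare repo stands in for GitHub here; in real use you would
# point origin at https://github.com/username/repository-name.git
git init --bare "$tmp/remote.git"

mkdir "$tmp/project" && cd "$tmp/project"
git init
git config user.email "you@example.com"   # placeholder identity
git config user.name "Your Name"

echo "# My project" > README.md
git add .                                 # stage everything
git commit -m "Initial commit"            # first snapshot

git remote add origin "$tmp/remote.git"   # connect to the "remote"
git branch -M main
git push -u origin main                   # upload and set upstream
```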

&lt;p&gt;&lt;strong&gt;How GitHub Repositories Work with Git&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you clone a repository, Git copies the whole project, including its history, onto your computer:&lt;/p&gt;

&lt;p&gt;git clone &lt;a href="https://github.com/username/repository-name.git" rel="noopener noreferrer"&gt;https://github.com/username/repository-name.git&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can then make changes locally and save them as commits. To see what has changed at any point, run:&lt;/p&gt;

&lt;p&gt;git status&lt;/p&gt;

&lt;p&gt;Use git add filename to stage a specific file, which prepares it to be included in the next commit:&lt;/p&gt;

&lt;p&gt;git add filename&lt;/p&gt;

&lt;p&gt;Then record the staged change with a descriptive message:&lt;/p&gt;

&lt;p&gt;git commit -m "Updated feature"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Finally, push your changes back to GitHub:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;git push&lt;/p&gt;

&lt;p&gt;This workflow lets many people work on the same project at the same time without overwriting each other’s work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Managing Files and Branches&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Branches let you work on new features without touching the main code, so the main branch keeps working properly while development continues in parallel.&lt;/p&gt;

&lt;p&gt;Create a new branch&lt;/p&gt;

&lt;p&gt;git branch feature-login&lt;/p&gt;

&lt;p&gt;Switch to the branch&lt;/p&gt;

&lt;p&gt;git checkout feature-login&lt;/p&gt;

&lt;p&gt;(or)&lt;/p&gt;

&lt;p&gt;git switch feature-login&lt;/p&gt;

&lt;p&gt;After testing, merge it back into the main branch:&lt;/p&gt;

&lt;p&gt;git checkout main&lt;/p&gt;

&lt;p&gt;To combine the changes from the feature-login branch into main, run:&lt;/p&gt;

&lt;p&gt;git merge feature-login&lt;/p&gt;

&lt;p&gt;Branches keep projects organized and stable: work in progress stays isolated until it has been tested and merged.&lt;/p&gt;
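&lt;p&gt;The branch-and-merge flow above can be exercised end to end in a scratch repository; the identity values below are placeholders:&lt;/p&gt;

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email "you@example.com"   # placeholder identity
git config user.name "Your Name"

echo "base app" > app.txt
git add . && git commit -qm "Initial commit"
git branch -M main

git branch feature-login                  # create the feature branch
git switch feature-login                  # move onto it
echo "login form" >> app.txt
git commit -qam "Add login feature"

git switch main                           # back to the stable branch
git merge feature-login                   # bring the feature in
```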

&lt;p&gt;&lt;strong&gt;Collaboration Using Pull Requests&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After pushing your branch to GitHub:&lt;/p&gt;

&lt;p&gt;git push origin feature-login&lt;/p&gt;

&lt;p&gt;You can then open a Pull Request on GitHub. Teammates can review your code, suggest changes, and approve it before it is merged, which is how most teams and open-source projects keep code quality high.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Useful Repository Management Tools&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;GitHub also provides tools that make managing projects easier, such as issue trackers, project boards, and automation features.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Issues&lt;/strong&gt;: Track bugs, tasks, and ideas&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;README&lt;/strong&gt;: Explains what your project is about, what it does, and how it works&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Access control&lt;/strong&gt;: Lets you manage who can view and edit your code&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub Actions&lt;/strong&gt;: Automates testing, builds, and deployments, so I do not have to run them manually every time&lt;/p&gt;

&lt;p&gt;These tools become invaluable as your repository grows, keeping even large projects running smoothly.&lt;/p&gt;

&lt;p&gt;Creating and managing repositories on GitHub is a core skill in modern software development. Repositories help you store code, track changes, and collaborate efficiently. By learning both the GitHub interface and Git CLI commands, you gain full control over your projects. Whether you’re working alone or in a team, GitHub repositories form the foundation of effective version control and collaboration.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>From Local Git Commits to a Live EC2 Website: Real DevOps Deployment</title>
      <dc:creator>Ebelechukwu Lucy Okafor</dc:creator>
      <pubDate>Mon, 26 Jan 2026 07:32:00 +0000</pubDate>
      <link>https://dev.to/ebelechukwu_lucyokafor/from-local-git-commits-to-a-live-ec2-website-real-devops-deployment-19h5</link>
      <guid>https://dev.to/ebelechukwu_lucyokafor/from-local-git-commits-to-a-live-ec2-website-real-devops-deployment-19h5</guid>
      <description>&lt;p&gt;DevOps stopped being abstract for me the moment my website went live on the internet.&lt;/p&gt;

&lt;p&gt;As part of a hands-on DevOps assignment, I deployed a static portfolio website to an AWS EC2 Ubuntu server using Nginx, following production-style workflows instead of shortcuts. This wasn’t just about “making it work”; it was about doing it the right way.&lt;/p&gt;

&lt;p&gt;In this post, I’ll walk through what I built, what broke, and what I learned while moving from local Git commits to a publicly accessible server.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Laying the Foundation with Git&lt;/strong&gt;&lt;br&gt;
I started by creating a project called CodeTrack and initializing it as a Git repository.&lt;br&gt;
Before writing any code, I focused on getting the basics right:&lt;/p&gt;

&lt;p&gt;Initializing the repo correctly (git init)&lt;br&gt;
Verifying repository state with git status&lt;br&gt;
Understanding the role of the hidden .git directory&lt;br&gt;
Configuring Git identity (local vs global)&lt;br&gt;
This step taught me something important early on:&lt;br&gt;
If your Git foundation is wrong, everything built on top of it becomes messy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Clean Commits, Not Just “Any Commit”&lt;/strong&gt;&lt;br&gt;
Instead of committing everything at once, I followed a real-world commit strategy:&lt;/p&gt;

&lt;p&gt;First commit: UI scaffold (index.html, style.css)&lt;br&gt;
Second commit: Small controlled content change (name, group, page text)&lt;br&gt;
This mirrors how professional teams work. Small, focused commits make it easier to:&lt;/p&gt;

&lt;p&gt;Review changes&lt;br&gt;
Debug issues&lt;br&gt;
Roll back safely if something goes wrong&lt;br&gt;
Git stopped feeling like a requirement and started feeling like a safety net.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Deploying to AWS EC2 with Nginx&lt;/strong&gt;&lt;br&gt;
Once the project was ready locally, it was time to deploy.&lt;/p&gt;

&lt;p&gt;I connected to an Ubuntu EC2 instance using SSH key authentication and set up Nginx as the web server. The deployment flow looked like this:&lt;/p&gt;

&lt;p&gt;Secure SSH access using a .pem key&lt;br&gt;
Install and start Nginx&lt;br&gt;
Copy project files from the local machine to EC2 using scp&lt;br&gt;
Move files into Nginx’s web root&lt;br&gt;
Verify deployment using:&lt;br&gt;
curl -I &lt;a href="http://localhost" rel="noopener noreferrer"&gt;http://localhost&lt;/a&gt;&lt;br&gt;
Browser access via the EC2 public IP&lt;br&gt;
Seeing my site load from a public IP was a big moment; that’s when it felt real.&lt;/p&gt;
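&lt;p&gt;In command form, the flow looked roughly like this; the key path, username, and IP are placeholders rather than my real values:&lt;/p&gt;

```shell
# Prepare a staging directory on the server (placeholders throughout)
ssh -i ~/keys/codetrack.pem ubuntu@203.0.113.10 'mkdir -p /tmp/site'

# Copy the site files from the local machine to EC2
scp -i ~/keys/codetrack.pem index.html style.css ubuntu@203.0.113.10:/tmp/site/

# Install Nginx, publish the files to its web root, and verify locally
ssh -i ~/keys/codetrack.pem ubuntu@203.0.113.10 \
  'sudo apt-get update && sudo apt-get install -y nginx &&
   sudo cp -r /tmp/site/. /var/www/html/ &&
   curl -I http://localhost'
```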

&lt;p&gt;&lt;strong&gt;Real DevOps Moment: Breaking and Fixing Things Safely&lt;/strong&gt;&lt;br&gt;
One part of the assignment required simulating a production failure.&lt;br&gt;
I intentionally introduced a small syntax error into the Nginx configuration file and then:&lt;/p&gt;

&lt;p&gt;Verified the failure using nginx -t&lt;br&gt;
Diagnosed the issue&lt;br&gt;
Fixed the configuration&lt;br&gt;
Restarted Nginx safely&lt;br&gt;
Confirmed recovery using curl&lt;br&gt;
This exercise taught me a core DevOps lesson:&lt;/p&gt;

&lt;p&gt;Breaking production is easy. Recovering safely is the real skill.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Challenges I Faced (And What They Taught Me)&lt;/strong&gt;&lt;br&gt;
I ran into multiple issues along the way, including:&lt;/p&gt;

&lt;p&gt;Permission denied (publickey) errors during deployment&lt;br&gt;
Incorrect SSH key paths&lt;br&gt;
Missing file locations on the EC2 server&lt;br&gt;
Each problem forced me to slow down, read errors properly, and understand why something failed instead of copying commands blindly. That process, not the deployment itself, was where the real learning happened.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key DevOps Lessons I’m Taking Forward&lt;/strong&gt;&lt;br&gt;
Always verify before and after changes&lt;br&gt;
Git discipline matters more than speed&lt;br&gt;
SSH security is non-negotiable&lt;br&gt;
Nginx validation (nginx -t) prevents outages&lt;br&gt;
DevOps is about responsibility, not just automation&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final Outcome&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;✔ Website deployed and publicly accessible&lt;/p&gt;

&lt;p&gt;✔ Nginx running and validated&lt;/p&gt;

&lt;p&gt;✔ Clean Git commit history&lt;/p&gt;

&lt;p&gt;✔ Real troubleshooting experience&lt;/p&gt;

&lt;p&gt;This project strengthened my confidence in Linux, Git, AWS, and DevOps fundamentals, and it’s only the beginning.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Live website&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;http://&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8iip4lb9524ipvs0itza.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8iip4lb9524ipvs0itza.jpeg" alt=" " width="800" height="428"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Closing Thought&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;DevOps used to feel intimidating.&lt;br&gt;
Now, it feels practical, logical, and powerful.&lt;/p&gt;

&lt;p&gt;I’m excited to keep building one verified step at a time.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>git</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
