<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Daniel Inyang </title>
    <description>The latest articles on DEV Community by Daniel Inyang  (@danyang007).</description>
    <link>https://dev.to/danyang007</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3630575%2F1e2d7b87-ff14-4ead-a4c0-17e656c2730d.png</url>
      <title>DEV Community: Daniel Inyang </title>
      <link>https://dev.to/danyang007</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/danyang007"/>
    <language>en</language>
    <item>
      <title>🚀 End-to-End Deployment of “The EpicBook!” Application with Terraform, AWS, and Nginx</title>
      <dc:creator>Daniel Inyang </dc:creator>
      <pubDate>Fri, 10 Apr 2026 14:20:42 +0000</pubDate>
      <link>https://dev.to/danyang007/end-to-end-deployment-of-the-epicbook-application-with-terraform-aws-and-nginx-5875</link>
      <guid>https://dev.to/danyang007/end-to-end-deployment-of-the-epicbook-application-with-terraform-aws-and-nginx-5875</guid>
      <description>&lt;p&gt;Building and deploying real-world applications goes far beyond just writing code—it involves provisioning infrastructure, configuring services, and solving unexpected issues along the way. In this project, I implemented a complete end-to-end deployment of The EpicBook! application using Terraform, AWS EC2, and Nginx, while handling real-world challenges across the stack.&lt;/p&gt;

&lt;p&gt;🧩 Project Overview&lt;/p&gt;

&lt;p&gt;The goal was to deploy a full-stack application in a production-style environment, leveraging Infrastructure as Code (IaC) to ensure consistency, scalability, and repeatability.&lt;/p&gt;

&lt;p&gt;This project covered:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Infrastructure provisioning&lt;/li&gt;
&lt;li&gt;Backend and database setup&lt;/li&gt;
&lt;li&gt;Reverse proxy configuration&lt;/li&gt;
&lt;li&gt;End-to-end testing and troubleshooting&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;⚙️ 1. Infrastructure as Code with Terraform&lt;/p&gt;

&lt;p&gt;I used Terraform to provision the core AWS infrastructure required to run the application. This included:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EC2 instance deployment&lt;/li&gt;
&lt;li&gt;Networking configuration&lt;/li&gt;
&lt;li&gt;Public IP allocation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By defining everything in code, I eliminated manual setup steps and ensured the environment could be recreated reliably at any time.&lt;/p&gt;
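
&lt;p&gt;As a rough sketch of what such a configuration can look like (the resource names, region, and AMI ID below are illustrative placeholders, not the project's actual code):&lt;/p&gt;

```hcl
# Hypothetical sketch: a minimal EC2 instance with HTTP access and a public IP
provider "aws" {
  region = "us-east-1"
}

resource "aws_security_group" "web" {
  name = "epicbook-web-sg"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "app" {
  ami                         = "ami-xxxxxxxx" # placeholder: an Ubuntu AMI for your region
  instance_type               = "t2.micro"
  associate_public_ip_address = true
  vpc_security_group_ids      = [aws_security_group.web.id]

  tags = {
    Name = "epicbook-server"
  }
}
```

&lt;p&gt;Running terraform init, terraform plan, and terraform apply against a file like this reproduces the environment from scratch.&lt;/p&gt;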

&lt;p&gt;🧱 2. System Setup &amp;amp; Dependencies&lt;/p&gt;

&lt;p&gt;Once the infrastructure was ready, the next step was preparing the server environment:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Installed Git for version control&lt;/li&gt;
&lt;li&gt;Installed Node.js using NVM for flexibility&lt;/li&gt;
&lt;li&gt;Installed npm for package management&lt;/li&gt;
&lt;li&gt;Set up MySQL Server 5.7&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Careful validation was done to ensure all services were installed correctly and running as expected.&lt;/p&gt;

&lt;p&gt;📦 3. Application Deployment&lt;/p&gt;

&lt;p&gt;With the system ready, I deployed the application:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cloned The EpicBook! repository from GitHub&lt;/li&gt;
&lt;li&gt;Installed dependencies using npm install&lt;/li&gt;
&lt;li&gt;Started the Node.js backend server&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This brought the application logic to life on the server.&lt;/p&gt;

&lt;p&gt;🗄️ 4. Database Configuration&lt;/p&gt;

&lt;p&gt;The database layer was configured to support the application:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Created a MySQL database&lt;/li&gt;
&lt;li&gt;Executed schema and seed scripts&lt;/li&gt;
&lt;li&gt;Updated the application’s database connection settings&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This ensured proper communication between the backend and the database.&lt;/p&gt;
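
&lt;p&gt;In outline, the database setup ran inside the mysql client and looked like this (the database and script names here are illustrative; the real ones come from the EpicBook repository):&lt;/p&gt;

```sql
-- Create the database and load the schema and seed data
CREATE DATABASE epicbook;
USE epicbook;
SOURCE schema.sql;   -- table definitions
SOURCE seed.sql;     -- sample data
```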

&lt;p&gt;🌐 5. Reverse Proxy Setup with Nginx&lt;/p&gt;

&lt;p&gt;To make the application accessible externally and production-ready:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Installed and configured Nginx&lt;/li&gt;
&lt;li&gt;Set up a reverse proxy to route traffic to the Node.js app (port 8080)&lt;/li&gt;
&lt;li&gt;Enabled access via the EC2 public IP&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Nginx acted as the gateway, efficiently handling incoming requests and forwarding them to the application.&lt;/p&gt;
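
&lt;p&gt;A minimal reverse-proxy server block for this kind of setup might look like the following (assuming the Node.js app listens on port 8080 as described above; the file path is illustrative and varies by distribution):&lt;/p&gt;

```nginx
# e.g. /etc/nginx/sites-available/epicbook (illustrative path)
server {
    listen 80;
    server_name _;

    location / {
        proxy_pass http://127.0.0.1:8080;   # forward requests to the Node.js app
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

&lt;p&gt;After enabling the site and reloading Nginx (nginx -t, then systemctl reload nginx), requests to the EC2 public IP on port 80 are forwarded to the app.&lt;/p&gt;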

&lt;p&gt;🧪 6. End-to-End Validation&lt;/p&gt;

&lt;p&gt;After deployment, I performed full testing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Accessed the application via a web browser&lt;/li&gt;
&lt;li&gt;Verified frontend, backend, and database integration&lt;/li&gt;
&lt;li&gt;Ensured all components communicated seamlessly&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🛠️ 7. Troubleshooting in the Real World&lt;/p&gt;

&lt;p&gt;This project wasn’t just about deploying—it was about solving real problems.&lt;/p&gt;

&lt;p&gt;Some of the issues I encountered and resolved included:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;MySQL temporary password and authentication errors&lt;/li&gt;
&lt;li&gt;Node.js and npm installation challenges using NVM&lt;/li&gt;
&lt;li&gt;Database connection failures (ECONNREFUSED, access denied)&lt;/li&gt;
&lt;li&gt;Nginx errors such as 502 Bad Gateway and port conflicts&lt;/li&gt;
&lt;li&gt;Permission and dependency-related issues&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each issue required investigation, debugging, and iterative fixes—exactly what happens in real production environments.&lt;/p&gt;

&lt;p&gt;💡 Key Takeaways&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Terraform enables consistent and repeatable infrastructure deployment&lt;/li&gt;
&lt;li&gt;Nginx is essential for production-grade traffic routing and reverse proxying&lt;/li&gt;
&lt;li&gt;Full-stack debugging is a critical DevOps skill&lt;/li&gt;
&lt;li&gt;Real learning happens when systems break—and you fix them&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🔥 Final Thoughts&lt;/p&gt;

&lt;p&gt;From provisioning infrastructure to configuring services and resolving live issues, this project reflects what practical DevOps engineering truly looks like.&lt;/p&gt;

&lt;p&gt;It reinforced the importance of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automation&lt;/li&gt;
&lt;li&gt;System design&lt;/li&gt;
&lt;li&gt;Troubleshooting under real conditions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And most importantly, it showed that growth comes from hands-on experience and persistence.&lt;/p&gt;

&lt;p&gt;📌 Resources&lt;br&gt;
GitHub Repository: &lt;a href="https://lnkd.in/e_yBnf-8" rel="noopener noreferrer"&gt;https://lnkd.in/e_yBnf-8&lt;/a&gt;&lt;br&gt;
Application URL: &lt;a href="http://32.193.246.22" rel="noopener noreferrer"&gt;http://32.193.246.22&lt;/a&gt; (instance terminated after validation)&lt;/p&gt;

&lt;p&gt;Real progress in tech comes from building, breaking, and fixing.&lt;br&gt;
On to the next challenge 🚀&lt;/p&gt;

&lt;p&gt;Till next time, always stay positive 👍&lt;/p&gt;

&lt;p&gt;🙌 Acknowledgment&lt;/p&gt;

&lt;p&gt;Shout out to Pravin Mishra. Lead Co-Mentor: Praveen Pandey&lt;br&gt;
🤝 Co-Mentors: Egwu Oko, Tanisha Borana, Ranbir Kaur&lt;/p&gt;

&lt;p&gt;P.S. This post is part of the DevOps Micro Internship (DMI) Cohort-2 by Pravin Mishra. You can start your DevOps journey by joining this&lt;br&gt;
 Discord community ( &lt;a href="https://lnkd.in/e4wTfknn" rel="noopener noreferrer"&gt;https://lnkd.in/e4wTfknn&lt;/a&gt; ).&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
      <category>nginx</category>
      <category>devops</category>
    </item>
    <item>
      <title>Getting Started with Model Context Protocol (MCP): Automating Terraform Security with Claude Code</title>
      <dc:creator>Daniel Inyang </dc:creator>
      <pubDate>Fri, 27 Mar 2026 12:09:59 +0000</pubDate>
      <link>https://dev.to/danyang007/getting-started-with-model-context-protocol-mcp-automating-terraform-security-with-claude-code-5cmh</link>
      <guid>https://dev.to/danyang007/getting-started-with-model-context-protocol-mcp-automating-terraform-security-with-claude-code-5cmh</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn7od3u8r76xnayn74kwx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn7od3u8r76xnayn74kwx.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;br&gt;
Artificial Intelligence is becoming increasingly useful in DevOps, but most AI tools still rely only on their training data. This means they can suggest solutions, but they often can’t interact with your real infrastructure or validate whether their suggestions actually work.&lt;/p&gt;

&lt;p&gt;In this beginner-friendly guide, I’ll walk through my hands-on experience using &lt;strong&gt;Model Context Protocol (MCP)&lt;/strong&gt; with &lt;strong&gt;Claude Code&lt;/strong&gt; to audit, fix, and verify Terraform security issues automatically. By the end, you’ll understand what MCP is, why it matters, and how it enables AI to safely work with real tools like Terraform and AWS.&lt;/p&gt;




&lt;h2&gt;
  
  
  What is Model Context Protocol (MCP)?
&lt;/h2&gt;

&lt;p&gt;Model Context Protocol (MCP) is a framework that allows AI models to connect to external tools and data sources in a structured and secure way. Instead of relying only on static knowledge from training, MCP lets AI access:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Local development tools (like Terraform or Git)&lt;/li&gt;
&lt;li&gt;Cloud APIs (such as AWS)&lt;/li&gt;
&lt;li&gt;Project files and configurations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This means the AI can not only &lt;strong&gt;suggest&lt;/strong&gt; changes but also &lt;strong&gt;validate and execute&lt;/strong&gt; them using real software.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why MCP is Important for DevOps
&lt;/h2&gt;

&lt;p&gt;In traditional AI-assisted development, you might ask a model how to fix a Terraform issue, and it would generate code based on patterns it learned during training. However, it cannot confirm whether:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the syntax is correct,&lt;/li&gt;
&lt;li&gt;the provider version supports the configuration,&lt;/li&gt;
&lt;li&gt;or the change actually resolves the problem.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With MCP, the AI can:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Analyze your real Terraform files.&lt;/li&gt;
&lt;li&gt;Modify them.&lt;/li&gt;
&lt;li&gt;Run Terraform commands to validate the changes.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This creates a much safer and more reliable automation loop for infrastructure work.&lt;/p&gt;




&lt;h2&gt;
  
  
  My Setup: Connecting Claude Code to Terraform and AWS
&lt;/h2&gt;

&lt;p&gt;To enable this workflow, I configured two MCP servers in my project:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Terraform MCP server&lt;/strong&gt; running in Docker&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS API MCP server&lt;/strong&gt; using Python’s &lt;code&gt;uvx&lt;/code&gt; runtime&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These were defined in a &lt;code&gt;.mcp.json&lt;/code&gt; file at the root of my project. This file tells Claude Code which tools it is allowed to start and how to communicate with them.&lt;/p&gt;
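
&lt;p&gt;For illustration, a .mcp.json with two stdio servers follows this shape (the exact Docker image and uvx package names below are assumptions, not necessarily the ones I used):&lt;/p&gt;

```json
{
  "mcpServers": {
    "terraform": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "hashicorp/terraform-mcp-server"]
    },
    "aws-api": {
      "command": "uvx",
      "args": ["awslabs.aws-api-mcp-server@latest"]
    }
  }
}
```

&lt;p&gt;Each entry tells Claude Code how to launch the server process; no credentials belong in this file.&lt;/p&gt;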

&lt;p&gt;Sensitive information such as AWS credentials was stored separately in &lt;code&gt;.claude/settings.local.json&lt;/code&gt; to keep secrets out of version control.&lt;/p&gt;

&lt;p&gt;Once everything was configured, I ran the &lt;code&gt;/mcp&lt;/code&gt; command in Claude Code and confirmed both servers were successfully connected.&lt;/p&gt;




&lt;h2&gt;
  
  
  Running a Real DevOps Workflow: Audit → Fix → Verify
&lt;/h2&gt;

&lt;p&gt;To test the setup, I ran a simple but realistic workflow on my Terraform project.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Auditing Terraform for Security Issues
&lt;/h3&gt;

&lt;p&gt;I asked Claude Code:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Audit my Terraform files for security issues&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The security-auditor agent scanned my Terraform configuration and flagged a problem:&lt;br&gt;
&lt;strong&gt;My S3 bucket did not have server-side encryption enabled.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is a common security issue because unencrypted storage can expose sensitive data if accessed improperly.&lt;/p&gt;




&lt;h3&gt;
  
  
  Step 2: Automatically Fixing the Issue
&lt;/h3&gt;

&lt;p&gt;Next, I asked Claude:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Add S3 server-side encryption to my Terraform code using AES256&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The tf-writer agent updated my Terraform configuration by adding a server-side encryption block to the S3 bucket resource. Behind the scenes, it used the Terraform MCP server to ensure the syntax and resource configuration were valid for my environment.&lt;/p&gt;
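
&lt;p&gt;With recent AWS provider versions (v4 and later), that fix is typically expressed as a separate resource along these lines (the bucket name is a placeholder):&lt;/p&gt;

```hcl
# Enable AES256 server-side encryption on an existing bucket
resource "aws_s3_bucket_server_side_encryption_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}
```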

&lt;p&gt;This step was important because it ensured the AI-generated code was not just theoretically correct but actually compatible with the Terraform version and provider I was using.&lt;/p&gt;




&lt;h3&gt;
  
  
  Step 3: Verifying the Fix
&lt;/h3&gt;

&lt;p&gt;Finally, I ran the audit again:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Audit my Terraform files for security issues&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This time, the encryption issue was no longer reported. The system confirmed that the configuration was now compliant with recommended security practices.&lt;/p&gt;

&lt;p&gt;This completed the full &lt;strong&gt;audit → fix → verify&lt;/strong&gt; loop without me manually editing a single Terraform file.&lt;/p&gt;




&lt;h2&gt;
  
  
  Understanding the Role of AI Agents in This Workflow
&lt;/h2&gt;

&lt;p&gt;One of the most interesting lessons from this exercise was how different AI agents used MCP differently.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Security Auditor Did Not Need MCP
&lt;/h3&gt;

&lt;p&gt;The security-auditor agent only needed to read Terraform files and apply known security best practices. Since Terraform is a text-based configuration language, the AI could analyze it without running any external tools.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Terraform Writer Did Need MCP
&lt;/h3&gt;

&lt;p&gt;The tf-writer agent, on the other hand, needed MCP because it had to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;generate new Terraform code,&lt;/li&gt;
&lt;li&gt;validate it using the real Terraform binary,&lt;/li&gt;
&lt;li&gt;and ensure the configuration would not break the infrastructure deployment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This clearly showed that &lt;strong&gt;reasoning tasks&lt;/strong&gt; can often be done without tool access, while &lt;strong&gt;execution and validation tasks&lt;/strong&gt; require MCP.&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Takeaways for Beginners
&lt;/h2&gt;

&lt;p&gt;If you’re new to MCP and AI-assisted DevOps, here are the main lessons from this experience:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;MCP allows AI to work with live tools instead of relying only on training data.&lt;/li&gt;
&lt;li&gt;This makes AI-generated changes more reliable and safer to apply in real environments.&lt;/li&gt;
&lt;li&gt;Separating configuration (&lt;code&gt;.mcp.json&lt;/code&gt;) from secrets (&lt;code&gt;settings.local.json&lt;/code&gt;) is critical for security.&lt;/li&gt;
&lt;li&gt;Not every AI task needs tool access — but any task that modifies or validates infrastructure usually does.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Why This Matters for the Future of DevOps
&lt;/h2&gt;

&lt;p&gt;As infrastructure becomes more complex, the ability to automate not just code generation but also validation and compliance checks will be a major advantage. MCP represents a step toward AI systems that can act as true assistants in DevOps pipelines, capable of performing real tasks while still operating within controlled and secure boundaries.&lt;/p&gt;

&lt;p&gt;For beginners, learning MCP now provides a strong foundation for understanding how AI will integrate into future cloud and platform engineering workflows.&lt;/p&gt;




&lt;p&gt;If you’re exploring Terraform, cloud security, or AI-assisted development, experimenting with MCP is a great way to see how these technologies can work together in practical, real-world scenarios.&lt;/p&gt;

&lt;p&gt;Till next time, always stay positive 👍&lt;/p&gt;

&lt;p&gt;Shout out to Pravin Mishra. Lead Co-Mentor: Praveen Pandey&lt;br&gt;
🤝 Co-Mentors: Egwu Oko, Tanisha Borana, Ranbir Kaur&lt;/p&gt;

&lt;p&gt;P.S. This post is part of the DevOps Micro Internship (DMI) Cohort-2 by Pravin Mishra. You can start your DevOps journey by joining this&lt;br&gt;
Discord community ( &lt;a href="https://lnkd.in/e4wTfknn" rel="noopener noreferrer"&gt;https://lnkd.in/e4wTfknn&lt;/a&gt; ).&lt;/p&gt;

</description>
      <category>devops</category>
      <category>automation</category>
      <category>claudecode</category>
      <category>terraform</category>
    </item>
    <item>
      <title>3-Tier Architecture Deployment on Azure</title>
      <dc:creator>Daniel Inyang </dc:creator>
      <pubDate>Fri, 06 Mar 2026 19:25:33 +0000</pubDate>
      <link>https://dev.to/danyang007/3-tier-architecture-deployment-on-azure-1317</link>
      <guid>https://dev.to/danyang007/3-tier-architecture-deployment-on-azure-1317</guid>
      <description>&lt;p&gt;A complete step-by-step guide to deploying a production-grade Next.js + Node.js + MySQL stack on Azure&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture Overview&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;→ Internet → Azure Application Gateway (Web Tier)&lt;br&gt;
→ Web VM (Next.js + Nginx) — Public Subnets&lt;br&gt;
→ Internal Load Balancer (App Tier)&lt;br&gt;
→ App VM (Node.js backend) — Private Subnets&lt;br&gt;
→ Azure Database for MySQL (HA + Read Replica) — Private Subnets&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuowfkuvqlfqryym6hp2k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuowfkuvqlfqryym6hp2k.png" alt=" " width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 1: The Foundation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Networking — Virtual Network, 6 Subnets, NAT Gateway, Route Tables&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Create a Resource Group&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;All Azure resources must live inside a Resource Group. This is your project container.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to the Azure Portal (portal.azure.com) &amp;gt; search Resource Groups &amp;gt; Create.&lt;/li&gt;
&lt;li&gt;Subscription: Select your subscription.&lt;/li&gt;
&lt;li&gt;Resource group name: capstone-rg&lt;/li&gt;
&lt;li&gt;Region: East US (or your preferred region — choose ONE and use it consistently).&lt;/li&gt;
&lt;li&gt;Click Review + Create &amp;gt; Create.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Create the Virtual Network (VNet)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the Azure equivalent of a VPC with a 10.0.0.0/16 address space.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Search Virtual Networks &amp;gt; Create.&lt;/li&gt;
&lt;li&gt;Resource group: capstone-rg&lt;/li&gt;
&lt;li&gt;Name: capstone-vnet&lt;/li&gt;
&lt;li&gt;Region: East US&lt;/li&gt;
&lt;li&gt;Click Next: IP Addresses.&lt;/li&gt;
&lt;li&gt;Set IPv4 address space: 10.0.0.0/16&lt;/li&gt;
&lt;li&gt;Delete any default subnet that appears. We will add ours manually.&lt;/li&gt;
&lt;li&gt;Click Review + Create &amp;gt; Create.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Create the 6 Subnets&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Navigate to capstone-vnet &amp;gt; Subnets. Click + Subnet for each one below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5tvvt571d9351tdgg4gy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5tvvt571d9351tdgg4gy.png" alt=" " width="800" height="372"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;⚠️ Critical: The db-private-a and db-private-b subnets must be delegated to Microsoft.DBforMySQL/flexibleServers. When creating these subnets, under Subnet Delegation select Microsoft.DBforMySQL/flexibleServers. Do NOT attach an NSG to delegated subnets — Azure will block it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Create the NAT Gateway&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This allows your private App and DB servers to download packages without direct internet exposure.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Search NAT Gateway &amp;gt; Create.&lt;/li&gt;
&lt;li&gt;Resource group: capstone-rg | Name: capstone-nat | Region: East US.&lt;/li&gt;
&lt;li&gt;Under Outbound IP, click Create a new public IP address. Name: capstone-nat-ip.&lt;/li&gt;
&lt;li&gt;Click Next: Subnet. Associate the NAT gateway with: web-public-a and web-public-b.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;💡 Note: Azure NAT Gateways associate at the subnet level, not a standalone ‘attachment’. By associating it with the public subnets, resources in private subnets that route through them will use this NAT.&lt;/p&gt;

&lt;p&gt;Finally, click Review + Create &amp;gt; Create.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Configure Route Tables (UDRs)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Azure automatically handles basic routing, but we need custom User-Defined Routes for private subnets to reach the NAT.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Public Route Table&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Search Route Tables &amp;gt; Create.&lt;/li&gt;
&lt;li&gt;Name: capstone-public-rt | Resource group: capstone-rg | Region: East US.&lt;/li&gt;
&lt;li&gt;After creation, go to the resource &amp;gt; Settings &amp;gt; Subnets &amp;gt; Associate. Add web-public-a and web-public-b.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;💡 Note: Azure VNets have built-in internet routing for public subnets. No explicit 0.0.0.0/0 → Internet route is needed unless you have overriding rules.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Private Route Table&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create another Route Table: capstone-private-rt.&lt;/li&gt;
&lt;li&gt;Go to Settings &amp;gt; Routes &amp;gt; Add. Add a route: Name: default-to-nat | Address prefix: 0.0.0.0/0 | Next hop type: Internet. (Azure NAT Gateway intercepts this and applies SNAT automatically).&lt;/li&gt;
&lt;li&gt;Go to Settings &amp;gt; Subnets &amp;gt; Associate. Add: app-private-a, app-private-b.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;💡 Note: Do NOT associate the db subnets with this route table — they are delegated to MySQL and Azure manages their routing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 2: The Firewalls&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Network Security Groups (NSGs) — Azure’s equivalent to AWS Security Groups&lt;br&gt;
In Azure, NSGs can be attached to subnets or individual VM NICs. We follow the same strict chain as the AWS architecture:&lt;/p&gt;

&lt;p&gt;• Internet → App Gateway NSG&lt;br&gt;
• App Gateway NSG → Web VM NSG&lt;br&gt;
• Web VM NSG → Internal LB NSG&lt;br&gt;
• Internal LB NSG → App VM NSG&lt;br&gt;
• App VM NSG → DB NSG&lt;/p&gt;

&lt;p&gt;⚠️ Critical: Azure NSGs use Priority numbers (100–4096). Lower number = higher priority. Always assign priorities with gaps (e.g. 100, 200, 300) to leave room for future rules.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Public App Gateway NSG&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Search Network Security Groups &amp;gt; Create.&lt;/li&gt;
&lt;li&gt;Name: capstone-appgw-nsg | Resource group: capstone-rg.&lt;/li&gt;
&lt;li&gt;After creation, go to Inbound security rules &amp;gt; Add:
• Name: Allow-HTTP-Internet | Priority: 100 | Source: Any | Dest. Port: 80 | Protocol: TCP | Action: Allow
• Name: Allow-HTTPS-Internet | Priority: 110 | Source: Any | Dest. Port: 443 | Protocol: TCP | Action: Allow
• Name: Allow-AppGW-Infra | Priority: 120 | Source: GatewayManager | Dest. Port: 65200–65535 | Protocol: TCP | Action: Allow&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;⚠️ Critical: The GatewayManager rule on ports 65200–65535 is MANDATORY for Azure Application Gateway. Without it, the App Gateway health probes will fail and it will not provision correctly.&lt;/p&gt;

&lt;p&gt;Attach this NSG to the web-public-a and web-public-b subnets.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Web VM NSG&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create NSG: capstone-web-vm-nsg&lt;/li&gt;
&lt;li&gt;Inbound rules:
• Name: Allow-HTTP-FromAppGW | Priority: 100 | Source: (App Gateway subnet CIDR 10.0.1.0/24 &amp;amp; 10.0.2.0/24) | Dest. Port: 80 | Action: Allow
• Name: Allow-SSH-MyIP | Priority: 200 | Source: Your IP address | Dest. Port: 22 | Protocol: TCP | Action: Allow&lt;/li&gt;
&lt;li&gt;Attach to web-public-a and web-public-b subnets.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Internal Load Balancer NSG&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create NSG: capstone-internal-lb-nsg&lt;/li&gt;
&lt;li&gt;Inbound rules:
• Name: Allow-HTTP-FromWebVMs | Priority: 100 | Source: 10.0.1.0/24, 10.0.2.0/24 | Dest. Port: 80 | Action: Allow
• Name: Allow-AzureLoadBalancer | Priority: 200 | Source: AzureLoadBalancer (service tag) | Dest. Port: * | Action: Allow&lt;/li&gt;
&lt;li&gt;Attach to app-private-a and app-private-b subnets.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Step 4: App VM NSG&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create NSG: capstone-app-vm-nsg&lt;/li&gt;
&lt;li&gt;Inbound rules:
• Name: Allow-3001-FromInternalLB | Priority: 100 | Source: 10.0.3.0/24, 10.0.4.0/24 | Dest. Port: 3001 | Action: Allow
• Name: Allow-SSH-FromWebVM | Priority: 200 | Source: 10.0.1.0/24, 10.0.2.0/24 | Dest. Port: 22 | Protocol: TCP | Action: Allow
• Name: Allow-AzureLoadBalancer | Priority: 300 | Source: AzureLoadBalancer | Dest. Port: * | Action: Allow&lt;/li&gt;
&lt;li&gt;Attach to app-private-a and app-private-b subnets.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Database NSG&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create NSG: capstone-db-nsg&lt;/li&gt;
&lt;li&gt;Inbound rules:
• Name: Allow-MySQL-FromAppVM | Priority: 100 | Source: 10.0.3.0/24, 10.0.4.0/24 | Dest. Port: 3306 | Protocol: TCP | Action: Allow&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;💡 Note: For delegated MySQL subnets, Azure may manage some NSG rules automatically. If you encounter issues, verify the NSG is not conflicting with delegation requirements.&lt;/p&gt;

&lt;p&gt;Attach to the db-private-a and db-private-b subnets.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 3: The Data Layer&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Azure Database for MySQL — Flexible Server (HA + Read Replica)&lt;br&gt;
Azure Database for MySQL Flexible Server is the direct replacement for AWS RDS MySQL. It supports zone-redundant high availability (equivalent to Multi-AZ) and read replicas.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Create the Primary MySQL Flexible Server (HA)&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Search Azure Database for MySQL flexible servers &amp;gt; Create.&lt;/li&gt;
&lt;li&gt;Resource group: capstone-rg&lt;/li&gt;
&lt;li&gt;Server name: capstone-db (This becomes part of your hostname: capstone-db.mysql.database.azure.com)&lt;/li&gt;
&lt;li&gt;Region: East US&lt;/li&gt;
&lt;li&gt;MySQL version: 8.0&lt;/li&gt;
&lt;li&gt;Compute + storage: Click Configure server. Select the Burstable tier &amp;gt; B2ms (2 vCores, 8 GB RAM), a low-cost burstable tier comparable to AWS's burstable db.t3 classes.&lt;/li&gt;
&lt;li&gt;Under High Availability: Check Enable High Availability. Mode: Zone-redundant (equivalent to RDS Multi-AZ — deploys a standby in a separate Availability Zone).&lt;/li&gt;
&lt;li&gt;Standby availability zone: 2 (Primary will be in Zone 1).&lt;/li&gt;
&lt;li&gt;Admin username: admin (if Azure rejects reserved names such as admin or root here, choose another name like dbadmin and use it consistently in the backend .env)&lt;/li&gt;
&lt;li&gt;Password: Password123! (or note whatever you use)&lt;/li&gt;
&lt;li&gt;Click Next: Networking to open the networking configuration.&lt;/li&gt;
&lt;li&gt;Connectivity method: Private access (VNet Integration).&lt;/li&gt;
&lt;li&gt;Virtual network: capstone-vnet.&lt;/li&gt;
&lt;li&gt;Subnet: Select db-private-a (must be the delegated subnet).&lt;/li&gt;
&lt;li&gt;Private DNS zone: Create new. Name: capstone-db.private.mysql.database.azure.com.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;⚠️ Critical: Private DNS zone is critical. Without it, your App VMs cannot resolve the MySQL hostname. Azure creates this automatically if you select ‘Create new’.&lt;/p&gt;

&lt;p&gt;Click Review + Create &amp;gt; Create. Provisioning takes 5–10 minutes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Create the Initial Database&lt;/strong&gt;&lt;br&gt;
Unlike AWS RDS which lets you set an initial DB name during creation, in Azure you must create the database after the server is provisioned.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Once the server is Available, go to the resource &amp;gt; Databases (left menu) &amp;gt; Add.&lt;/li&gt;
&lt;li&gt;Database name: bookreview&lt;/li&gt;
&lt;li&gt;Click Save.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Create the Read Replica&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to your capstone-db server &amp;gt; Settings &amp;gt; Replication &amp;gt; Add replica.&lt;/li&gt;
&lt;li&gt;Server name: capstone-db-replica&lt;/li&gt;
&lt;li&gt;Location: East US (same region for low latency).&lt;/li&gt;
&lt;li&gt;Click OK. Wait 5–10 minutes for it to become available.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;💡 Note: Azure MySQL Flexible Server read replicas are asynchronous, same as AWS RDS read replicas. They are read-only and can be used for reporting/analytics queries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 4: The Business Layer&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Internal Load Balancer + Node.js Backend VM&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Step 1: Create the App VM (Backend)&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Search Virtual Machines &amp;gt; Create &amp;gt; Azure virtual machine.&lt;/li&gt;
&lt;li&gt;Resource group: capstone-rg&lt;/li&gt;
&lt;li&gt;Virtual machine name: capstone-app-vm&lt;/li&gt;
&lt;li&gt;Region: East US | Availability zone: Zone 1&lt;/li&gt;
&lt;li&gt;Image: Ubuntu Server 22.04 LTS&lt;/li&gt;
&lt;li&gt;Size: Click See all sizes. Choose Standard_B1ms (1 vCPU, 2GB) — equivalent to t2.micro.&lt;/li&gt;
&lt;li&gt;Authentication type: SSH public key.&lt;/li&gt;
&lt;li&gt;Username: azureuser&lt;/li&gt;
&lt;li&gt;SSH public key source: Generate new key pair. Name: capstone-key. Download the .pem file when prompted.&lt;/li&gt;
&lt;li&gt;Click Next: Disks &amp;gt; Next: Networking to open the networking settings.&lt;/li&gt;
&lt;li&gt;Virtual network: capstone-vnet&lt;/li&gt;
&lt;li&gt;Subnet: app-private-a&lt;/li&gt;
&lt;li&gt;Public IP: None (CRITICAL — must be private only!)&lt;/li&gt;
&lt;li&gt;NIC network security group: None (our NSG is attached to the subnet already).&lt;/li&gt;
&lt;li&gt;Click Review + Create &amp;gt; Create.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Create the Web VM (Jump Host + Frontend)&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create another VM: capstone-web-vm&lt;/li&gt;
&lt;li&gt;Region: East US | Availability zone: Zone 1&lt;/li&gt;
&lt;li&gt;Image: Ubuntu Server 22.04 LTS | Size: Standard_B1ms&lt;/li&gt;
&lt;li&gt;SSH key: Use existing key (upload your capstone-key.pub).&lt;/li&gt;
&lt;li&gt;Networking: Virtual network: capstone-vnet | Subnet: web-public-a&lt;/li&gt;
&lt;li&gt;Public IP: Create new. Name: capstone-web-pip | SKU: Standard | Assignment: Static.&lt;/li&gt;
&lt;li&gt;NIC network security group: None.&lt;/li&gt;
&lt;li&gt;Click Review + Create &amp;gt; Create.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Connect and Configure the App VM (SSH Jump)&lt;/strong&gt;&lt;br&gt;
Because the App VM has no public IP, you must SSH into the Web VM first, then jump to the App VM. From your local machine, copy the key to the Web VM (replace the placeholder with your Web VM's public IP):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;scp -i capstone-key.pem capstone-key.pem azureuser@&amp;lt;web-vm-public-ip&amp;gt;:/home/azureuser/
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;NOTE: “azureuser” is my VM’s username, so replace it with your VM’s username.&lt;/p&gt;

&lt;p&gt;SSH into the Web VM (replace WEB_VM_PUBLIC_IP with your Web VM’s public IP):&lt;br&gt;
ssh -i capstone-key.pem azureuser@WEB_VM_PUBLIC_IP&lt;br&gt;
From inside the Web VM, jump to the App VM (its private IP is shown on the App VM’s Overview page):&lt;br&gt;
chmod 400 capstone-key.pem&lt;br&gt;
ssh -i capstone-key.pem azureuser@APP_VM_PRIVATE_IP&lt;/p&gt;
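&lt;p&gt;Optionally, this two-hop SSH can be captured once in ~/.ssh/config using OpenSSH’s ProxyJump, after which a plain “ssh capstone-app” does the jump automatically. The host aliases and placeholder IPs below are illustrative, not part of the lab:&lt;/p&gt;

```
# ~/.ssh/config (illustrative; replace the placeholder IPs with your own)
Host capstone-web
    HostName WEB_VM_PUBLIC_IP
    User azureuser
    IdentityFile ~/capstone-key.pem

Host capstone-app
    HostName APP_VM_PRIVATE_IP
    User azureuser
    IdentityFile ~/capstone-key.pem
    ProxyJump capstone-web
```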

&lt;p&gt;&lt;strong&gt;Step 4: Configure the Node.js Backend&lt;/strong&gt;&lt;br&gt;
Run the following commands inside the App VM:&lt;br&gt;
&lt;strong&gt;Install Packages&lt;/strong&gt;&lt;br&gt;
sudo apt update &amp;amp;&amp;amp; sudo apt install nodejs npm mysql-client -y&lt;br&gt;
Clone and Configure the App&lt;br&gt;
git clone &lt;a href="https://github.com/pravinmishraaws/book-review-app.git" rel="noopener noreferrer"&gt;https://github.com/pravinmishraaws/book-review-app.git&lt;/a&gt;&lt;br&gt;
cd book-review-app/backend&lt;br&gt;
npm install&lt;br&gt;
sudo npm install -g pm2&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Edit the Environment File&lt;/strong&gt;&lt;br&gt;
nano .env&lt;br&gt;
Update these values:&lt;br&gt;
• DB_HOST=capstone-db.mysql.database.azure.com&lt;br&gt;
• DB_USER=admin&lt;br&gt;
• DB_PASSWORD=Password123!&lt;br&gt;
• DB_NAME=bookreview&lt;br&gt;
• PORT=3001&lt;/p&gt;
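&lt;p&gt;A mistyped key in .env only fails at runtime, so a quick sanity check helps before starting the server. This is a throwaway sketch (not part of the app); it assumes exactly the five keys listed above and demonstrates itself on a sample file:&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Verify that a .env file contains every key the backend expects.
required="DB_HOST DB_USER DB_PASSWORD DB_NAME PORT"

check_env() {                    # $1 = path to the .env file
  for key in $required; do
    if ! grep -q "^${key}=" "$1"; then
      echo "missing: $key"
      return 1
    fi
  done
  echo "all keys present"
}

# Demo on a sample file (placeholder values from this guide, not real secrets)
printf '%s\n' \
  'DB_HOST=capstone-db.mysql.database.azure.com' \
  'DB_USER=admin' \
  'DB_PASSWORD=Password123!' \
  'DB_NAME=bookreview' \
  'PORT=3001' > /tmp/sample.env

check_env /tmp/sample.env        # prints: all keys present
```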

&lt;p&gt;💡 Note: The Azure MySQL hostname follows the format your-server-name.mysql.database.azure.com. Find it in the Azure Portal under your MySQL Flexible Server &amp;gt; Overview &amp;gt; Server name.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Seed and Start the Backend&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;node src/server.js&lt;/p&gt;

&lt;p&gt;You should see ‘Server running on port 3001’ and ‘Database connected’. Press Ctrl+C to stop.&lt;br&gt;
pm2 start src/server.js --name backend&lt;br&gt;
pm2 save&lt;br&gt;
pm2 startup&lt;br&gt;
Copy and run the generated sudo command that pm2 startup outputs.&lt;br&gt;
pm2 status&lt;/p&gt;
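&lt;p&gt;As an alternative to the one-off pm2 commands above, pm2 also accepts an ecosystem file, which keeps the process definition in version control. A minimal sketch (the cwd path assumes the clone location used in this guide):&lt;/p&gt;

```javascript
// ecosystem.config.js  (start it with: pm2 start ecosystem.config.js)
module.exports = {
  apps: [
    {
      name: "backend",
      script: "src/server.js",
      cwd: "/home/azureuser/book-review-app/backend", // assumed clone path
      env: { PORT: 3001 },
    },
  ],
};
```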

&lt;p&gt;&lt;strong&gt;Step 5: Create the Internal Load Balancer&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Create the Backend Pool&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Search Load Balancers &amp;gt; Create.&lt;/li&gt;
&lt;li&gt;Name: capstone-internal-lb | SKU: Standard | Type: Internal | Tier: Regional.&lt;/li&gt;
&lt;li&gt;Virtual network: capstone-vnet | Subnet: app-private-a.&lt;/li&gt;
&lt;li&gt;IP address assignment: Dynamic.&lt;/li&gt;
&lt;li&gt;Click Next: Backend Pools &amp;gt; Add a backend pool.&lt;/li&gt;
&lt;li&gt;Name: capstone-app-pool. Add capstone-app-vm by NIC.&lt;/li&gt;
&lt;li&gt;Click Next: Inbound Rules &amp;gt; Add a load balancing rule.&lt;/li&gt;
&lt;li&gt;Name: app-lb-rule | Frontend IP: (auto-assigned private IP) | Protocol: TCP | Port: 80 | Backend port: 3001.&lt;/li&gt;
&lt;li&gt;Health probe: Create new. Name: app-health-probe | Protocol: HTTP | Port: 3001 | Path: /&lt;/li&gt;
&lt;li&gt;Click Review + Create &amp;gt; Create.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;💡 Note the Internal LB’s private IP address on its Overview page. It is the equivalent of an internal ALB DNS name in AWS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 5: The Presentation Layer&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Web VM (Next.js + Nginx) + Azure Application Gateway&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Configure the Web VM (Frontend)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;SSH back into your Web VM. Run these commands:&lt;br&gt;
sudo apt update &amp;amp;&amp;amp; sudo apt install nodejs npm nginx -y&lt;br&gt;
sudo npm install -g pm2&lt;br&gt;
git clone &lt;a href="https://github.com/pravinmishraaws/book-review-app.git" rel="noopener noreferrer"&gt;https://github.com/pravinmishraaws/book-review-app.git&lt;/a&gt;&lt;br&gt;
cd book-review-app/frontend&lt;br&gt;
npm install&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Create the Environment File&lt;/strong&gt;&lt;br&gt;
nano .env&lt;br&gt;
Add this line:&lt;br&gt;
NEXT_PUBLIC_API_URL=/api&lt;br&gt;
Save and exit (Ctrl+O, Enter, Ctrl+X).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Build and Start Next.js&lt;/strong&gt;&lt;br&gt;
npm run build&lt;br&gt;
pm2 start npm --name frontend -- start&lt;br&gt;
pm2 save &amp;amp;&amp;amp; pm2 startup&lt;br&gt;
Run the generated sudo command from pm2 startup.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Configure Nginx&lt;/strong&gt;&lt;br&gt;
sudo nano /etc/nginx/sites-available/default&lt;/p&gt;

&lt;p&gt;Delete everything and paste this configuration (replace the proxy_pass address with your Internal LB private IP):&lt;/p&gt;

&lt;p&gt;server {&lt;br&gt;
listen 80;&lt;br&gt;
server_name _;&lt;br&gt;
location / {&lt;br&gt;
proxy_pass &lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt;;&lt;br&gt;
proxy_http_version 1.1;&lt;br&gt;
proxy_set_header Upgrade $http_upgrade;&lt;br&gt;
proxy_set_header Connection 'upgrade';&lt;br&gt;
proxy_set_header Host $host;&lt;br&gt;
proxy_cache_bypass $http_upgrade;&lt;br&gt;
}&lt;br&gt;
location /api/ {&lt;br&gt;
rewrite ^/api/(.*) /$1 break;&lt;br&gt;
proxy_pass http://INTERNAL_LB_PRIVATE_IP;&lt;br&gt;
proxy_http_version 1.1;&lt;br&gt;
proxy_set_header Host $host;&lt;br&gt;
}&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;Save and exit (Ctrl+O, Enter, Ctrl+X)&lt;/p&gt;

&lt;p&gt;Then,&lt;br&gt;
sudo nginx -t&lt;br&gt;
sudo systemctl restart nginx&lt;/p&gt;
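&lt;p&gt;The rewrite line in the /api/ location is the part that trips people up: it strips the /api prefix before proxying. This bash sketch (illustrative only, not part of the deployment) mimics what that rule does to a request path:&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Mimic nginx's  rewrite ^/api/(.*) /$1 break;  on a request path.
strip_api_prefix() {
  local path="$1"
  if [[ "$path" =~ ^/api/(.*)$ ]]; then
    echo "/${BASH_REMATCH[1]}"   # /api/books -> /books
  else
    echo "$path"                 # non-/api paths pass through unchanged
  fi
}

strip_api_prefix /api/books      # prints: /books
strip_api_prefix /healthz        # prints: /healthz
```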

&lt;p&gt;&lt;strong&gt;Step 5: Create the Azure Application Gateway (Public ALB)&lt;/strong&gt;&lt;br&gt;
Azure Application Gateway is the equivalent of AWS’s Public Application Load Balancer. It operates at Layer 7.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Search Application Gateway &amp;gt; Create.&lt;/li&gt;
&lt;li&gt;Name: capstone-appgw | Region: East US | Tier: Standard V2.&lt;/li&gt;
&lt;li&gt;Autoscaling: Minimum 1, Maximum 2 (cost saving).&lt;/li&gt;
&lt;li&gt;Virtual network: capstone-vnet | Subnet: web-public-a.&lt;/li&gt;
&lt;li&gt;Click Next: Frontends.&lt;/li&gt;
&lt;li&gt;Frontend IP type: Public. Create new public IP: capstone-appgw-ip.&lt;/li&gt;
&lt;li&gt;Click Next: Backends. Add a backend pool.&lt;/li&gt;
&lt;li&gt;Name: capstone-web-pool. Add capstone-web-vm (by NIC or IP).&lt;/li&gt;
&lt;li&gt;Click Next: Configuration. Add a routing rule.&lt;/li&gt;
&lt;li&gt;Rule name: http-rule | Priority: 100.&lt;/li&gt;
&lt;li&gt;Listener: Name: http-listener | Frontend IP: Public | Protocol: HTTP | Port: 80.&lt;/li&gt;
&lt;li&gt;Backend targets: Target type: Backend pool | Backend pool: capstone-web-pool.&lt;/li&gt;
&lt;li&gt;HTTP settings: Create new. Name: web-http-settings | Protocol: HTTP | Port: 80.&lt;/li&gt;
&lt;li&gt;Click Review + Create &amp;gt; Create. This takes 5–10 minutes.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;💡 Note: After creation, get the Application Gateway’s public IP from the Frontend IP configurations. This is what your users will browse to.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;POSSIBLE PROBLEM YOU MIGHT ENCOUNTER IN STEP 5&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;At Step 5 I kept getting the error “Subnet must only have application gateway”.&lt;br&gt;
This is a very common Azure Application Gateway gotcha. The cause is exact and the fix is straightforward; here’s what’s happening and how to fix it:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Problem&lt;/strong&gt;&lt;br&gt;
Azure Application Gateway requires a dedicated subnet — you cannot deploy any other resource in the Application Gateway subnet.&lt;/p&gt;

&lt;p&gt;You are trying to deploy the Application Gateway into web-public-a, which already has your Web VM sitting in it. Azure is rejecting this because the subnet is not empty.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Fix&lt;/strong&gt;: Create a dedicated subnet for the Application Gateway&lt;br&gt;
Go to capstone-vnet &amp;gt; Subnets &amp;gt; + Subnet and add a new one:&lt;/p&gt;

&lt;p&gt;Name: appgw-subnet&lt;br&gt;
CIDR: 10.0.7.0/24&lt;br&gt;
NSG: capstone-appgw-nsg&lt;/p&gt;

&lt;p&gt;Then go back to your Application Gateway creation and select appgw-subnet instead of web-public-a.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6: Update CORS in the Backend&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;SSH back to the App VM (via Web VM jump), then update CORS:&lt;br&gt;
cd ~/book-review-app/backend&lt;br&gt;
nano .env&lt;br&gt;
Add or update (using your Application Gateway’s public IP):&lt;br&gt;
ALLOWED_ORIGINS=http://APP_GATEWAY_PUBLIC_IP&lt;br&gt;
pm2 restart backend&lt;/p&gt;
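&lt;p&gt;ALLOWED_ORIGINS is usually read by the backend as a comma-separated list of origins (check your app’s CORS setup to confirm). This sketch mimics that membership check, purely to show how multiple origins would be listed:&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Illustrative only: how a comma-separated ALLOWED_ORIGINS value is
# typically interpreted (the real parsing happens in the Node.js backend).
ALLOWED_ORIGINS="http://APP_GATEWAY_PUBLIC_IP,http://localhost:3000"

origin_allowed() {               # $1 = the Origin header to check
  local IFS=','
  for o in $ALLOWED_ORIGINS; do
    if [ "$o" = "$1" ]; then
      echo allowed
      return 0
    fi
  done
  echo blocked
  return 1
}

origin_allowed "http://evil.example" || true   # prints: blocked
origin_allowed "http://localhost:3000"         # prints: allowed
```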

&lt;p&gt;Open your browser and navigate to http://APP_GATEWAY_PUBLIC_IP (the Application Gateway’s public IP).&lt;br&gt;
The Book Review App should load successfully!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;QUICK TEST:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Restart one of the DBs and check whether your application is still reachable.&lt;br&gt;
Restart the VM as well to confirm that pm2 auto-starts the application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;OPTIONAL&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Phase 6: High Availability with VM Scale Sets&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Equivalent to AWS Auto Scaling Groups (ASGs) with Launch Templates&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Create Golden VM Images&lt;/strong&gt;&lt;br&gt;
Before scaling, capture the perfectly configured VMs as reusable images.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to capstone-app-vm &amp;gt; Click Capture (from the top menu).&lt;/li&gt;
&lt;li&gt;Share image to Azure Compute Gallery: No (use Managed Image for simplicity).&lt;/li&gt;
&lt;li&gt;Image name: capstone-app-golden-image.&lt;/li&gt;
&lt;li&gt;Check: Automatically delete this virtual machine after creating the image.&lt;/li&gt;
&lt;li&gt;Click Review + Create &amp;gt; Create. Wait for the image to be created.&lt;/li&gt;
&lt;li&gt;Repeat for capstone-web-vm. Image name: capstone-web-golden-image.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;⚠️ Critical: Capturing an image deallocates and deletes the source VM. Make sure your backend is fully configured and tested before capturing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Create the App Tier VM Scale Set (VMSS)&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Search Virtual machine scale sets &amp;gt; Create.&lt;/li&gt;
&lt;li&gt;Resource group: capstone-rg | Name: capstone-app-vmss.&lt;/li&gt;
&lt;li&gt;Region: East US | Availability zones: 1, 2.&lt;/li&gt;
&lt;li&gt;Image: Click See all images &amp;gt; My images tab &amp;gt; Select capstone-app-golden-image.&lt;/li&gt;
&lt;li&gt;Size: Standard_B1ms.&lt;/li&gt;
&lt;li&gt;Authentication: SSH key &amp;gt; username: ubuntu &amp;gt; Use existing key.&lt;/li&gt;
&lt;li&gt;Scaling: Scaling mode: Autoscaling. Instances: Min 2, Max 4, Default 2.&lt;/li&gt;
&lt;li&gt;Click Next: Networking.&lt;/li&gt;
&lt;li&gt;Virtual network: capstone-vnet | NIC subnet: app-private-a.&lt;/li&gt;
&lt;li&gt;Load balancing: Select Load balancer &amp;gt; capstone-internal-lb &amp;gt; Backend pool: capstone-app-pool.&lt;/li&gt;
&lt;li&gt;Public inbound ports: None.&lt;/li&gt;
&lt;li&gt;Click Review + Create &amp;gt; Create.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Create the Web Tier VM Scale Set (VMSS)&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create another scale set: capstone-web-vmss.&lt;/li&gt;
&lt;li&gt;Image: capstone-web-golden-image.&lt;/li&gt;
&lt;li&gt;Availability zones: 1, 2.&lt;/li&gt;
&lt;li&gt;Scaling: Min 2, Max 4, Default 2.&lt;/li&gt;
&lt;li&gt;Networking: VNet: capstone-vnet | Subnet: web-public-a.&lt;/li&gt;
&lt;li&gt;Public IP per instance: Enabled (so each VM gets a public IP for the App Gateway health checks).&lt;/li&gt;
&lt;li&gt;Load balancing: Application Gateway &amp;gt; capstone-appgw &amp;gt; Backend pool: capstone-web-pool.&lt;/li&gt;
&lt;li&gt;Click Review + Create &amp;gt; Create.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Chaos Test — Prove High Availability&lt;/strong&gt;&lt;br&gt;
Your architecture is now highly available. Prove it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to Virtual Machines. Manually stop or delete one of the VMSS instances.&lt;/li&gt;
&lt;li&gt;The VMSS will detect the missing instance and auto-launch a replacement using the golden image.&lt;/li&gt;
&lt;li&gt;pm2 will auto-start the application on the new VM.&lt;/li&gt;
&lt;li&gt;Wait 3–5 minutes, then refresh the Application Gateway URL. The app will still be running.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Phase 7: Cleanup — Destroy All Resources&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Always delete resources after your lab to avoid unnecessary charges. Follow this order:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Delete Scale Sets &amp;amp; VMs&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to Virtual machine scale sets &amp;gt; Delete capstone-app-vmss and capstone-web-vmss.&lt;/li&gt;
&lt;li&gt;Go to Virtual machines &amp;gt; Delete any remaining VMs.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Delete Load Balancers &amp;amp; Application Gateway&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Search Application Gateway &amp;gt; Delete capstone-appgw.&lt;/li&gt;
&lt;li&gt;Search Load Balancers &amp;gt; Delete capstone-internal-lb.&lt;/li&gt;
&lt;li&gt;Go to Public IP addresses &amp;gt; Delete capstone-appgw-ip and capstone-web-pip and capstone-nat-ip.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Delete MySQL Flexible Server&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Search Azure Database for MySQL flexible servers.&lt;/li&gt;
&lt;li&gt;Delete capstone-db-replica first.&lt;/li&gt;
&lt;li&gt;Delete capstone-db (primary). Confirm deletion — no final snapshot needed for lab cleanup.&lt;/li&gt;
&lt;li&gt;Delete the Private DNS Zone: capstone-db.private.mysql.database.azure.com.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Delete NAT Gateway&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Search NAT Gateways &amp;gt; Delete capstone-nat.&lt;/li&gt;
&lt;li&gt;Delete the capstone-nat-ip public IP address.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Delete the Resource Group (Nuclear Option)&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to Resource Groups &amp;gt; capstone-rg.&lt;/li&gt;
&lt;li&gt;Click Delete resource group.&lt;/li&gt;
&lt;li&gt;Type the resource group name to confirm &amp;gt; Delete.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;⚠️ Critical: Deleting the resource group deletes ALL resources inside it in one sweep — VNet, subnets, NSGs, route tables, VMs, disks, and everything else. This is the fastest cleanup method.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6: Delete VM Images &amp;amp; Key&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Search Images &amp;gt; Delete capstone-app-golden-image and capstone-web-golden-image.&lt;/li&gt;
&lt;li&gt;If you stored the SSH key in Azure Key Vault or SSH Keys resource, delete it.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;And that’s a wrap&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Thank you for the opportunity to share my little knowledge. I hope this was informative.&lt;/p&gt;

&lt;p&gt;Comments are welcome.&lt;/p&gt;

&lt;p&gt;Till next time, always stay positive 👍&lt;/p&gt;

&lt;p&gt;Shout out to Pravin Mishra, Lead Co-Mentor: Praveen Pandey&lt;br&gt;
🤝 Co-Mentors: Egwu Oko, Tanisha Borana, Ranbir Kaur&lt;/p&gt;

&lt;p&gt;P.S. This post is part of the DevOps Micro Internship (DMI) Cohort-2 by Pravin Mishra. You can start your DevOps journey by joining this&lt;br&gt;
Discord community ( &lt;a href="https://lnkd.in/e4wTfknn" rel="noopener noreferrer"&gt;https://lnkd.in/e4wTfknn&lt;/a&gt; ).&lt;/p&gt;

&lt;p&gt;#DMI #Azure #CloudComputing #DevOps #CloudEngineering #Architecture #NextJS #NodeJS #MySQL #LearningInPublic&lt;/p&gt;

</description>
      <category>azure</category>
      <category>cloud</category>
      <category>devops</category>
      <category>cloudengineering</category>
    </item>
    <item>
      <title>From Local VM to AWS: My Journey Mastering Git and Nginx</title>
      <dc:creator>Daniel Inyang </dc:creator>
      <pubDate>Thu, 29 Jan 2026 16:21:16 +0000</pubDate>
      <link>https://dev.to/danyang007/from-local-vm-to-aws-my-journey-mastering-git-and-nginx-132e</link>
      <guid>https://dev.to/danyang007/from-local-vm-to-aws-my-journey-mastering-git-and-nginx-132e</guid>
      <description>&lt;p&gt;In the world of DevOps, understanding the flow of code—from a developer’s keyboard to a live server—is the most critical skill you can have. This week, I took a hands-on approach to mastering this "code-to-cloud" journey.&lt;/p&gt;

&lt;p&gt;Here is a breakdown of how I used Git, GitHub, and AWS to build a reliable development workflow.&lt;/p&gt;




&lt;h2&gt;
  
  
  1. Why Git is the Heart of Development
&lt;/h2&gt;

&lt;p&gt;Before diving into the cloud, I started with the basics of &lt;strong&gt;Version Control&lt;/strong&gt;. Git is more than just a "save" button; it’s a time machine for your code.&lt;/p&gt;

&lt;p&gt;By practicing locally on a Virtual Machine (VM), I mastered the core essentials:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Initialization:&lt;/strong&gt; Transforming a regular folder into a tracked repository.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Staging Area:&lt;/strong&gt; Learning to "bundle" specific changes before committing them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Logging:&lt;/strong&gt; Using &lt;code&gt;git log&lt;/code&gt; to audit history and see exactly who changed what and when.&lt;/li&gt;
&lt;/ul&gt;
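&lt;p&gt;Those essentials map to just a handful of commands. A minimal sketch in a scratch repository (file name and commit message are arbitrary):&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Initialization, staging, committing, and logging in one short session.
repo=$(mktemp -d)
cd "$repo"
git init -q                              # transform the folder into a repo
git config user.email dev@example.com    # identity, local to this repo
git config user.name "Dev"

echo "hello" > index.html
git add index.html                       # stage ("bundle") the change
git commit -q -m "Initial commit"        # record it in history
git log --oneline                        # audit who changed what, and when
```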




&lt;h2&gt;
  
  
  2. Safe Experimentation via Branching
&lt;/h2&gt;

&lt;p&gt;One of my biggest "aha!" moments was working with &lt;strong&gt;branches&lt;/strong&gt;. In a professional setting, you never want to experiment directly on the &lt;code&gt;main&lt;/code&gt; (production) branch.&lt;/p&gt;

&lt;p&gt;I practiced creating secondary branches to test new features. Once I was satisfied, I merged them back into the &lt;code&gt;main&lt;/code&gt; branch. This process taught me:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How to keep the "production" code stable.&lt;/li&gt;
&lt;li&gt;How to resolve &lt;strong&gt;merge conflicts&lt;/strong&gt; when two branches have different ideas about the same line of code.&lt;/li&gt;
&lt;/ul&gt;
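&lt;p&gt;The branch-and-merge flow above can be reproduced in a scratch repository in under a minute (a sketch; branch and file names are arbitrary):&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Create a feature branch, commit on it, then merge it back into main.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name "Dev"
echo "v1" > app.txt
git add app.txt
git commit -q -m "Initial commit"
git branch -M main               # make sure the stable branch is called main

git checkout -qb feature-x       # experiment safely off main
echo "v2" >> app.txt
git commit -qam "Add feature x"

git checkout -q main             # main is still at v1 here
git merge -q feature-x           # bring the feature in once satisfied
cat app.txt                      # now contains both lines
```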




&lt;h2&gt;
  
  
  3. Taking it to the Cloud: AWS EC2 &amp;amp; Nginx
&lt;/h2&gt;

&lt;p&gt;Once the code was versioned and ready, it was time to move it to the real world. I migrated my repository from my local VM to an &lt;strong&gt;AWS EC2 instance&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;To make the website accessible to the public, I set up an &lt;strong&gt;Nginx web server&lt;/strong&gt;. This involved:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Installing Nginx on the Linux instance.&lt;/li&gt;
&lt;li&gt;Moving my website files into the &lt;code&gt;/var/www/html&lt;/code&gt; directory.&lt;/li&gt;
&lt;li&gt;Configuring permissions so the server could "serve" the content to visitors.&lt;/li&gt;
&lt;/ol&gt;
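&lt;p&gt;For reference, the behaviour described in steps 1–3 boils down to a server block like this (essentially what Ubuntu’s default Nginx site already ships; shown as a sketch, not something you must add):&lt;/p&gt;

```nginx
# Minimal Nginx server block serving static files from /var/www/html
server {
    listen 80 default_server;
    root /var/www/html;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
```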




&lt;h2&gt;
  
  
  The Verdict: Why This Matters
&lt;/h2&gt;

&lt;p&gt;This exercise wasn't just about learning commands like &lt;code&gt;git commit&lt;/code&gt; or &lt;code&gt;sudo apt install nginx&lt;/code&gt;. It was about understanding the &lt;strong&gt;DevOps lifecycle&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;By using Git, I ensured my work was safe and collaborative. By using AWS and Nginx, I learned how to make that work available to the world. Whether you are working alone or in a team of hundreds, these tools provide the structure needed to build and deploy software reliably.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What's next?&lt;/strong&gt; I’m looking forward to automating this entire process so that every time I "push" my code, the server updates itself automatically!&lt;/p&gt;

&lt;p&gt;Till next time, always stay positive 👍&lt;/p&gt;

&lt;p&gt;P.S. This post is part of the DevOps Micro Internship (DMI) Cohort-2 by Pravin Mishra. You can start your DevOps journey by joining this&lt;br&gt;
Discord community ( &lt;a href="https://lnkd.in/e4wTfknn" rel="noopener noreferrer"&gt;https://lnkd.in/e4wTfknn&lt;/a&gt; ).&lt;/p&gt;

</description>
      <category>git</category>
      <category>github</category>
      <category>aws</category>
      <category>nginx</category>
    </item>
    <item>
      <title>Host a simple Website using Nginx</title>
      <dc:creator>Daniel Inyang </dc:creator>
      <pubDate>Fri, 23 Jan 2026 09:16:54 +0000</pubDate>
      <link>https://dev.to/danyang007/host-a-simple-website-using-nginx-1a8a</link>
      <guid>https://dev.to/danyang007/host-a-simple-website-using-nginx-1a8a</guid>
      <description>&lt;p&gt;Hi Guys,&lt;/p&gt;

&lt;p&gt;I worked on a simple website, hosted it on EC2, and served it using Nginx.&lt;br&gt;
The website wasn’t designed or developed by me; I just used it to practice and update my skills in Linux and Nginx.&lt;/p&gt;

&lt;p&gt;Here are the steps I followed:&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Create and launch an EC2 Instance&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Used Ubuntu OS&lt;/p&gt;

&lt;p&gt;Make sure you allow ports 22 (SSH) and 80 (HTTP)&lt;/p&gt;

&lt;p&gt;✅&lt;em&gt;Install Nginx&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;sudo apt install nginx -y&lt;/p&gt;

&lt;p&gt;nginx -v&lt;/p&gt;

&lt;p&gt;✅&lt;strong&gt;Downloaded the websites zipped file from a github repo&lt;/strong&gt; (&lt;a href="https://github.com/pravinmishraaws/Pravin-Mishra-Portfolio-Template" rel="noopener noreferrer"&gt;https://github.com/pravinmishraaws/Pravin-Mishra-Portfolio-Template&lt;/a&gt;) &lt;/p&gt;

&lt;p&gt;wget &lt;a href="https://github.com/pravinmishraaws/Pravin-Mishra-Portfolio-Template/archive/refs/heads/main.zip" rel="noopener noreferrer"&gt;https://github.com/pravinmishraaws/Pravin-Mishra-Portfolio-Template/archive/refs/heads/main.zip&lt;/a&gt; (GitHub’s zip-archive URL for the repo, assuming its default branch is main)&lt;/p&gt;

&lt;p&gt;Then I unzipped the downloaded zip file&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;First install unzip (the unzip command comes from the unzip package, not zip):&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;sudo apt install unzip -y&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Now unzip the file:&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;unzip file.zip&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3wra28554lds9tb1ph2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3wra28554lds9tb1ph2.png" alt=" " width="800" height="345"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;✅&lt;strong&gt;Deploy Website Files to Nginx Web Directory&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Copied all unzipped files to the Nginx web directory at /var/www/html&lt;/p&gt;
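&lt;p&gt;The copy step as commands, sketched against scratch directories so it is safe to run anywhere (on the VM the destination is /var/www/html and the copy needs sudo; the folder names are assumptions):&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Deploying static files = copying them into the web root.
# Scratch directories stand in for the template folder and /var/www/html.
src=$(mktemp -d)
dst=$(mktemp -d)
echo "hello portfolio" > "$src/index.html"

cp -r "$src"/. "$dst"/      # on the VM: sudo cp -r template/. /var/www/html/
ls "$dst"                   # prints: index.html
```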

&lt;p&gt;Then I made some modifications to the index.html file&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Follf2ww4gxue3t3kqlb1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Follf2ww4gxue3t3kqlb1.png" alt=" " width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;✅&lt;strong&gt;Verify the Deployment&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ensure Nginx is correctly serving the website:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;curl your-public-ip&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;✅&lt;strong&gt;Access the Website using the server’s Public IP&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;&lt;a href="http://your-public-ip" rel="noopener noreferrer"&gt;http://your-public-ip&lt;/a&gt;&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;✅Now the Website is deployed on an Ubuntu VM with Nginx, accessible from a public IP.&lt;/p&gt;

&lt;p&gt;Live Link (Available for 12 hours): &lt;a href="http://samplewebsite.cloudworks.com.ng/" rel="noopener noreferrer"&gt;http://samplewebsite.cloudworks.com.ng/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;or &lt;a href="http://52.201.61.175/" rel="noopener noreferrer"&gt;http://52.201.61.175/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx7p30wymr4kr1cxnjw7j.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx7p30wymr4kr1cxnjw7j.jpeg" alt=" " width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl8huoeel8soqsiqcql1a.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl8huoeel8soqsiqcql1a.jpeg" alt=" " width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I hope this was insightful&lt;/p&gt;

&lt;p&gt;Till next time, always stay positive 👍&lt;/p&gt;

</description>
      <category>nginx</category>
      <category>cloud</category>
      <category>github</category>
    </item>
    <item>
      <title>Objective Truths I have Discovered</title>
      <dc:creator>Daniel Inyang </dc:creator>
      <pubDate>Thu, 15 Jan 2026 13:42:14 +0000</pubDate>
      <link>https://dev.to/danyang007/objective-truths-i-have-discovered-25n4</link>
      <guid>https://dev.to/danyang007/objective-truths-i-have-discovered-25n4</guid>
      <description>&lt;p&gt;Hello Guys, so this is my little writeup about some objective truths I have come to know. Fisrt of all, what is an objective truth?&lt;/p&gt;

&lt;p&gt;Objective truths are facts or realities that are true regardless of personal feelings, opinions, beliefs, or perspectives. They can be verified independently and remain the same for everyone.&lt;/p&gt;

&lt;p&gt;So here are three of mine that I have experimented with, and their results.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Truth #1:&lt;/strong&gt; Consistency Beats Talent and Intensity&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Evidence from my life&lt;/strong&gt;: I used to be that person who was never consistent when learning things, and it showed as gaps in my knowledge. Then I started learning something every day, no matter how little, and that automatically improved my knowledge retention and sped up my skill growth.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Truth #2:&lt;/strong&gt; Sleep is Non-Negotiable for Health&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Evidence from my life:&lt;/strong&gt; Since I started having a healthy sleep habit, I noticed that I rarely fall sick or feel tired when I start focusing on work or learning or upskilling. Prioritizing sleep improves productivity, mood, and longevity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Truth #3:&lt;/strong&gt; Having a healthy goal/target to achieve helps with motivation&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Evidence from my life:&lt;/strong&gt; I used to just read or learn stuff without even knowing why; I simply wanted to learn. At one point it got tiring and boring. Then I decided to set a target or goal to achieve, and that gave me the motivation to keep going until I hit the stated target.&lt;/p&gt;

&lt;p&gt;I just thought I should share the outcomes of these truths as it affected me.&lt;/p&gt;

&lt;p&gt;Till next time, stay positive .&lt;/p&gt;

</description>
    </item>
    <item>
      <title>My Version 2.0 self (Journalistic Writeup)</title>
      <dc:creator>Daniel Inyang </dc:creator>
      <pubDate>Tue, 13 Jan 2026 17:28:57 +0000</pubDate>
      <link>https://dev.to/danyang007/my-version-20-self-journalistic-writeup-1akh</link>
      <guid>https://dev.to/danyang007/my-version-20-self-journalistic-writeup-1akh</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpneuw9pximhc1idocpoq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpneuw9pximhc1idocpoq.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;This is a writeup about one of the brilliant and successful minds in the DevOps landscape, Mr Daniel Inyang.&lt;/p&gt;

&lt;p&gt;Daniel is a skilled DevOps Engineer with a strong reputation for building reliable infrastructure and automating complex systems. His transition into DevOps was marked by deliberate hands-on experimentation and consistent delivery of real-world solutions rather than theory alone.&lt;/p&gt;

&lt;p&gt;By the late 2020s, Daniel had built and shipped multiple end-to-end DevOps projects that demonstrated deep technical competence. His portfolio included CI/CD pipelines using GitHub Actions and Jenkins, containerized applications deployed with Docker and Kubernetes, and cloud infrastructure provisioned using Terraform on Microsoft Azure. He designed secure Azure environments featuring Virtual Networks, Network Security Groups, VPN Gateways, and VNet Peering, all documented with clear architecture diagrams and deployment steps. These projects were publicly available on his GitHub, where regular commits, issue tracking, and versioned releases showed professional engineering practices.&lt;/p&gt;

&lt;p&gt;In 2027, he had earned industry-recognized certifications that validated his skills, including Linux administration and cloud fundamentals certifications aligned with his day-to-day responsibilities. Daniel published technical blogs that broke down DevOps concepts such as CI/CD workflows, DNS resolution, Linux permissions, Git version control, and infrastructure as code. His writing focused on practical explanations backed by labs and screenshots, making his content especially valuable to beginners. Several of his blog posts were shared across DevOps learning communities and study groups.&lt;/p&gt;

&lt;p&gt;In his professional role, Daniel worked as a DevOps Engineer supporting production systems. He automated deployment pipelines, reduced manual configuration errors, and improved system reliability through monitoring and logging solutions. He collaborated closely with developers, QA engineers, and network teams to resolve incidents and optimize release processes. His contributions directly reduced deployment time and improved system uptime.&lt;/p&gt;

&lt;p&gt;Leadership showed through action. Daniel mentored junior engineers, reviewed pull requests, and created internal documentation that simplified onboarding. Outside work, he contributed to the DevOps community by sharing reusable scripts, lab guides, and troubleshooting walkthroughs, particularly supporting learners from emerging tech markets.&lt;/p&gt;

&lt;p&gt;By this stage, Daniel had contributed immensely in the DevOps space defined by evidence: shipped systems, verified skills, documented learning, and meaningful impact. His journey reflected disciplined growth, curiosity, and a career built on genuine interest rather than short-lived trends.&lt;/p&gt;

&lt;p&gt;This is what I aspire to become in the near future through consistency, dedication and focus.&lt;/p&gt;

&lt;p&gt;My LinkedIn Profile: &lt;a href="https://www.linkedin.com/in/daniel-inyang/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/daniel-inyang/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;“This is part of DevOps Micro-Internship (DMI) by Pravin Mishra.”&lt;/p&gt;

&lt;p&gt;Connect with him on LinkedIn: &lt;a href="https://www.linkedin.com/in/pravin-mishra-aws-trainer/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/pravin-mishra-aws-trainer/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cloud</category>
      <category>motivation</category>
    </item>
    <item>
      <title>My First DevOps Project</title>
      <dc:creator>Daniel Inyang </dc:creator>
      <pubDate>Wed, 26 Nov 2025 15:30:38 +0000</pubDate>
      <link>https://dev.to/danyang007/my-first-devops-project-3f8p</link>
      <guid>https://dev.to/danyang007/my-first-devops-project-3f8p</guid>
      <description>&lt;p&gt;Hi Guys, welcome to my DevOps project writeup. I was tasked with deploying a personal portfolio website and hosting it on Killercoda using Nginx. To accomplish this, I used &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Start Bootstrap for the template&lt;/li&gt;
&lt;li&gt;VS Code for editing files&lt;/li&gt;
&lt;li&gt;Nginx as my webserver of choice&lt;/li&gt;
&lt;li&gt;The Killercoda Ubuntu playground&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is a walkthrough on how I accomplished it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Getting a Template&lt;/strong&gt;&lt;br&gt;
I went to &lt;a href="https://startbootstrap.com/" rel="noopener noreferrer"&gt;https://startbootstrap.com/&lt;/a&gt; and selected my preferred template to download.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fomkzsqcsbn6qtacjd1wf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fomkzsqcsbn6qtacjd1wf.png" alt="Startboostrap" width="800" height="322"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Accessing the files via VS Code&lt;/strong&gt;&lt;br&gt;
I extracted the downloaded template and opened the folder in VS Code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjs39j0csdzlyp0sk4fd5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjs39j0csdzlyp0sk4fd5.png" alt="VS Code" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Changes I made include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;My name&lt;/li&gt;
&lt;li&gt;A brief bio&lt;/li&gt;
&lt;li&gt;DevOps skills I would like to learn, and so on&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Viewing the site via "Open with Live Server" displayed:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4cgaa6bkebxwi6mbfz8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4cgaa6bkebxwi6mbfz8.png" alt="Localhost" width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice the URL: &lt;a href="http://127.0.0.1:5500/index.html" rel="noopener noreferrer"&gt;http://127.0.0.1:5500/index.html&lt;/a&gt;&lt;br&gt;
or &lt;a href="http://localhost:5500/index.html" rel="noopener noreferrer"&gt;http://localhost:5500/index.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Moving Files to GitHub&lt;/strong&gt;&lt;br&gt;
I went to &lt;a href="https://github.com" rel="noopener noreferrer"&gt;https://github.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I created a "tsacademystaticwebsite" repository on GitHub.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On my local system, I launched a VS Code terminal window and navigated to the directory where my template files are located&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then I ran the following git commands:&lt;/p&gt;

&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;git init&lt;br&gt;
git config --global user.name "Daniel Inyang"&lt;br&gt;
git config --global user.email "iny**********@gmail.com" &lt;br&gt;
git add .&lt;br&gt;
git commit -m "Tracking all files"&lt;br&gt;
git remote add origin &lt;a href="https://github.com/danielinyang/tsacademystaticwebsite.git" rel="noopener noreferrer"&gt;https://github.com/danielinyang/tsacademystaticwebsite.git&lt;/a&gt;&lt;br&gt;
git push -u origin main&lt;/p&gt;
&lt;/blockquote&gt;


&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F78uf5ym71fc0ccrulk1f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F78uf5ym71fc0ccrulk1f.png" alt=" " width="800" height="409"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Accessing Killercoda Ubuntu Playground&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I went to &lt;a href="https://killercoda.com/playgrounds/scenario/ubuntu" rel="noopener noreferrer"&gt;https://killercoda.com/playgrounds/scenario/ubuntu&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This gave me access to a cloud Ubuntu environment where I could deploy and host my project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8nqbuhg310esomr4okix.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8nqbuhg310esomr4okix.png" alt=" " width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Install NGINX Webserver&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;On the Ubuntu terminal prompt, I ran the command to install nginx:&lt;/p&gt;

&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;sudo apt install nginx -y&lt;/p&gt;
&lt;/blockquote&gt;


&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa6yy5u8y7df5bna817yj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa6yy5u8y7df5bna817yj.png" alt=" " width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check that the nginx service is running:&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;sudo systemctl status nginx&lt;/p&gt;
&lt;/blockquote&gt;


&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2fo9uc2lk9qoldbg8c38.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2fo9uc2lk9qoldbg8c38.png" alt=" " width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I also opened web traffic on port 80 to confirm that Nginx was running.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxz9tuv80b76q9gaz8jgx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxz9tuv80b76q9gaz8jgx.png" alt=" " width="800" height="302"&gt;&lt;/a&gt;&lt;/p&gt;
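&lt;p&gt;A couple of quick terminal checks can confirm the same thing (a sketch; the exact output depends on your environment):&lt;/p&gt;

```shell
# Confirm nginx is active and answering on port 80.
sudo systemctl is-active nginx                              # prints "active" when running
curl -s -o /dev/null -w "%{http_code}\n" http://localhost/  # 200 means a page is being served
```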

&lt;p&gt;&lt;strong&gt;Step 6: Cloning the Github Repo&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Now I will create a folder on the Ubuntu server called "First_Project"&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;sudo mkdir First_Project&lt;br&gt;
cd First_Project&lt;/p&gt;
&lt;/blockquote&gt;


&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Now I initialize the folder with git and then clone the GitHub repo created earlier (tsacademystaticwebsite)&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;git init&lt;br&gt;
git clone &lt;a href="https://github.com/danielinyang/tsacademystaticwebsite.git" rel="noopener noreferrer"&gt;https://github.com/danielinyang/tsacademystaticwebsite.git&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;


&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F16dg7w29kvogxpjb3ztv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F16dg7w29kvogxpjb3ztv.png" alt=" " width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 7: Navigating to the Webserver Directory&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;By default, nginx serves files from /var/www/html.&lt;/p&gt;

&lt;p&gt;I checked the contents of that folder:&lt;/p&gt;

&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;sudo ls /var/www/html&lt;/p&gt;
&lt;/blockquote&gt;


&lt;/blockquote&gt;

&lt;p&gt;I found the file "index.nginx-debian.html", which serves the default nginx welcome screen.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcc9y9w2vfvbngo016vtd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcc9y9w2vfvbngo016vtd.png" alt=" " width="800" height="87"&gt;&lt;/a&gt;&lt;/p&gt;
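&lt;p&gt;That default location comes from the root directive in nginx's stock site configuration; a trimmed sketch of the Debian/Ubuntu default server block looks roughly like this:&lt;/p&gt;

```nginx
# /etc/nginx/sites-available/default (trimmed)
server {
    listen 80 default_server;                  # accept plain HTTP on port 80
    root /var/www/html;                        # directory nginx serves files from
    index index.html index.nginx-debian.html;  # files tried for a directory request
    location / {
        try_files $uri $uri/ =404;             # serve the file, a directory index, or 404
    }
}
```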

&lt;p&gt;&lt;strong&gt;Step 8: Delete the contents of the Webserver Directory&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I have to delete that file "index.nginx-debian.html"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To delete it:&lt;/p&gt;

&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;cd /var/www/html&lt;br&gt;
rm index.nginx-debian.html&lt;/p&gt;
&lt;/blockquote&gt;


&lt;/blockquote&gt;

&lt;p&gt;Now when you check the contents of /var/www/html, the folder will be empty.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo6d1mcgrg7qw107vwcy7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo6d1mcgrg7qw107vwcy7.png" alt=" " width="800" height="101"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Refreshing the page now shows an error message instead of the nginx welcome screen.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8my9vjg5aww3jtx0spwc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8my9vjg5aww3jtx0spwc.png" alt=" " width="800" height="111"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 9: Move files to the NGINX Webserver Directory&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I have to move the files I cloned from GitHub (inside the First_Project folder) to the webserver directory (/var/www/html).&lt;/p&gt;

&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;cd First_Project&lt;br&gt;
cd tsacademystaticwebsite&lt;br&gt;
mv * /var/www/html&lt;/p&gt;
&lt;/blockquote&gt;


&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ezp3pl33qun4jv890c1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ezp3pl33qun4jv890c1.png" alt=" " width="800" height="219"&gt;&lt;/a&gt;&lt;/p&gt;
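&lt;p&gt;Steps 8 and 9 can also be done in one pass; a sketch, where SRC assumes the clone location from Step 6 (adjust the path if yours differs):&lt;/p&gt;

```shell
# Replace the web root contents with the cloned site in one pass.
# SRC assumes the clone from Step 6 -- adjust if your path differs.
SRC=~/First_Project/tsacademystaticwebsite
sudo rm -rf /var/www/html/*          # remove the default welcome page
sudo cp -r "$SRC"/. /var/www/html/   # copy the site files into the web root
```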

&lt;p&gt;&lt;strong&gt;Step 10: My Website is Live&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I used the site URL provided by Killercoda to view the site:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://2760473c40b0-10-244-11-177-80.spch.r.killercoda.com/" rel="noopener noreferrer"&gt;https://2760473c40b0-10-244-11-177-80.spch.r.killercoda.com/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now if we refresh the nginx site again, we see this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F29qzbya6tv507qig1ckn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F29qzbya6tv507qig1ckn.png" alt=" " width="800" height="409"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;!!! SUCCESS !!!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTICE: The site at my localhost URL is the same as the one at the Killercoda URL&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Unfortunately, Killercoda playground access only lasts 60 minutes, so by the time you check out the site, it will most probably already be down.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This was the first project I successfully deployed as an aspiring DevOps Engineer. I hope it was helpful and simple enough to understand and follow.&lt;/p&gt;

&lt;p&gt;I am still learning, so please feel free to give feedback and advice.&lt;/p&gt;

&lt;p&gt;Kindly leave a comment or a question if you have any, and I will be glad to respond.&lt;/p&gt;

&lt;p&gt;Till next time, stay safe.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cloud</category>
      <category>linux</category>
    </item>
  </channel>
</rss>
