
Odoworitse Afari

# Building a Three-Tier Architecture on Azure

## Introduction

Some assignments teach you tools. This one taught me how to think.

As part of DMI Cohort 2, I was tasked with deploying the Book Review App — a Next.js frontend, Node.js backend, and MySQL database — in a fully production-style three-tier architecture on Azure. No step-by-step guide. Just the requirements, Azure documentation, and whatever problem-solving I could bring.

This post covers what I built, the real challenges I hit, and the mental shift this assignment forced.


## What Is a Three-Tier Architecture?

A three-tier architecture separates an application into three distinct layers:

| Tier | What it does |
| --- | --- |
| Web Tier | Serves the frontend. Handles HTTP requests from users. |
| App Tier | Business logic. Processes requests, talks to the database. |
| Database Tier | Where the data lives. Never exposed to the internet. |

Each tier lives in its own subnet with its own security rules:

- Web tier → can talk to App tier
- App tier → can talk to Database tier
- Nothing else

This isolation means a breach in one layer doesn't automatically compromise the others.


## Step 1: Designing the VNet and Subnets

I created a custom VNet with the CIDR block 10.0.0.0/20. The assignment called for 6 subnets across 2 Availability Zones, but my Azure free subscription had quota limits — so I provisioned 4 subnets and documented the constraint honestly.

| Subnet | CIDR | Tier |
| --- | --- | --- |
| web-public | 10.0.2.0/24 | Web (public) |
| app-vm-1 | 10.0.6.0/24 | App (private) |
| db-vm-1 | 10.0.10.0/24 | Database (private) |
| db-vm-2 | 10.0.12.0/24 | Database (private) |

The key principle: only the web subnet is public. Everything else is private and unreachable from the internet directly.
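As a rough sketch, the layout above maps to a handful of Azure CLI calls. The resource group (`dmi-rg`), VNet name, and region here are placeholders I've invented for illustration, not values from the actual deployment; the CIDRs match the table:

```shell
# Hypothetical resource group and names; CIDRs match the subnet table above.
az group create --name dmi-rg --location eastus

# VNet with the public web subnet
az network vnet create \
  --resource-group dmi-rg \
  --name dmi-vnet \
  --address-prefixes 10.0.0.0/20 \
  --subnet-name web-public \
  --subnet-prefixes 10.0.2.0/24

# Private subnets for the app and database tiers
az network vnet subnet create --resource-group dmi-rg --vnet-name dmi-vnet \
  --name app-vm-1 --address-prefixes 10.0.6.0/24
az network vnet subnet create --resource-group dmi-rg --vnet-name dmi-vnet \
  --name db-vm-1 --address-prefixes 10.0.10.0/24
az network vnet subnet create --resource-group dmi-rg --vnet-name dmi-vnet \
  --name db-vm-2 --address-prefixes 10.0.12.0/24
```

Note that "private" at this stage is just a label — a subnet only becomes unreachable once the NSGs are in place and the VMs in it have no public IPs.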


## Step 2: NSGs and Load Balancers

I configured chained NSGs to enforce strict traffic rules across tiers:

- Web Tier NSG: Allow inbound HTTP on port 80
- App Tier NSG: Allow inbound port 3001 — only from the Web Tier subnet (10.0.2.0/24)
- DB Tier NSG: Allow inbound port 3306 — only from the App Tier subnet (10.0.6.0/24)

This is a zero-trust internal network. Each layer only trusts traffic from the layer directly above it.
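As a hedged sketch, the two internal rules look like this in the CLI, assuming the NSGs (`app-nsg`, `db-nsg` — names I've made up) already exist and are attached to their subnets:

```shell
# Hypothetical NSG names; the source prefixes are the subnets from Step 1.
# App tier: accept 3001 only from the web subnet
az network nsg rule create --resource-group dmi-rg --nsg-name app-nsg \
  --name allow-web-to-app --priority 100 \
  --direction Inbound --access Allow --protocol Tcp \
  --source-address-prefixes 10.0.2.0/24 \
  --destination-port-ranges 3001

# DB tier: accept 3306 only from the app subnet
az network nsg rule create --resource-group dmi-rg --nsg-name db-nsg \
  --name allow-app-to-db --priority 100 \
  --direction Inbound --access Allow --protocol Tcp \
  --source-address-prefixes 10.0.6.0/24 \
  --destination-port-ranges 3306
```

Because NSGs deny unmatched inbound traffic from outside the VNet by default, scoping the `--source-address-prefixes` to a single subnet is what makes each layer trust only the one above it.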

I then deployed two load balancers:

- Public Load Balancer — Faces the internet, routes traffic to Web Tier VMs
- Internal Load Balancer — Lives inside the VNet, routes traffic from Web Tier to App Tier (frontend IP: 10.0.6.100)

The internal load balancer is the piece most people overlook. It means the App Tier never needs a public IP. The traffic flow looks like this:

```
Internet → Public LB → Web VM → Internal LB → App VM → Database
```

At no point does the backend touch the public internet.
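A sketch of the internal load balancer with illustrative names (only the static frontend IP, 10.0.6.100, comes from the actual setup):

```shell
# Hypothetical names; the private frontend IP sits in the app subnet.
az network lb create --resource-group dmi-rg --name app-internal-lb \
  --sku Standard \
  --vnet-name dmi-vnet --subnet app-vm-1 \
  --frontend-ip-name app-frontend \
  --private-ip-address 10.0.6.100 \
  --backend-pool-name app-pool

# Forward 3001 from the internal frontend to the app backend pool
az network lb rule create --resource-group dmi-rg --lb-name app-internal-lb \
  --name app-rule --protocol Tcp \
  --frontend-port 3001 --backend-port 3001 \
  --frontend-ip-name app-frontend --backend-pool-name app-pool
```

Giving the LB a private IP inside the app subnet is exactly what keeps the App Tier off the public internet: the web VMs call 10.0.6.100, and nothing outside the VNet can route to it.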


## Step 3: VM Deployment and the Quota Problem

This is where things got interesting.

I deployed Ubuntu VMs for both tiers:

- Web Tier: Next.js behind Nginx on port 80, in the public subnet
- App Tier: Node.js on port 3001, in private subnets with no public internet access

Midway through, I hit a 0 vCPU quota limit on my Azure free subscription. The portal blocked me from provisioning the next VM.

Fix: Pivot to Standard_B1s instances, which consumed fewer quota units and unblocked me.

Then I hit the next problem. The App Tier VM sits in a private subnet — which is correct by design, but means you can't SSH into it directly from your local machine. The managed Azure Bastion service failed to connect in my environment.

Fix: Build a manual jump-box. I used the Web Tier VM as a stepping stone:

```shell
# Step 1 — From local machine into the Web VM (public)
ssh -i web-vm-key.pem azureuser@<web-vm-public-ip>

# Step 2 — From Web VM into the App VM (private IP only)
ssh -i mvn.pem azureuser@10.0.6.4
```

This is a classic production pattern. You never expose backend servers directly — you always go through a bastion or jump-box. I just had to build mine manually instead of using the managed service.
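As an aside, if the App VM's key lives on your local machine rather than on the jump host, OpenSSH's `-J` (ProxyJump) collapses the two hops into one command (assuming the Web VM's key is loaded in your ssh-agent for the first hop):

```shell
# One command, two hops: the Web VM acts as the jump host.
# -i applies to the final destination; the jump hop authenticates via ssh-agent.
ssh -J azureuser@<web-vm-public-ip> -i mvn.pem azureuser@10.0.6.4
```

This also avoids copying private keys onto the jump host, which is generally considered better hygiene than the manual two-step.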


## Step 4: Database — Private, Redundant, and Ready

I provisioned an Azure Database for MySQL Flexible Server entirely within the private database subnets with the following configuration:

- Multi-AZ enabled for high availability
- Read replica configured for read scaling
- VNet integration — no public endpoint, zero internet exposure

The database is invisible to the outside world. It lives in the designated database subnets (10.0.10.0/24 and 10.0.12.0/24), and the only machines that can reach it are the App Tier VMs in 10.0.6.0/24.
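A sketch of the equivalent CLI calls, with placeholder names. Two caveats worth hedging: VNet integration needs a subnet delegated to `Microsoft.DBforMySQL/flexibleServers` (the CLI delegates an empty subnet automatically), and zone-redundant HA requires a General Purpose or higher tier, not Burstable:

```shell
# Hypothetical names; the subnet must be empty so the CLI can delegate it.
az mysql flexible-server create \
  --resource-group dmi-rg \
  --name dmi-bookdb \
  --vnet dmi-vnet --subnet db-private \
  --high-availability ZoneRedundant \
  --tier GeneralPurpose --sku-name Standard_D2ds_v4

# Read replica for read scaling (support varies by tier and region)
az mysql flexible-server replica create \
  --resource-group dmi-rg \
  --replica-name dmi-bookdb-replica \
  --source-server dmi-bookdb
```

With VNet integration the server never receives a public endpoint at all; name resolution happens through a private DNS zone the CLI links to the VNet.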


## The Moment It Worked

After the quota limits, the Bastion failure, and the jump-box setup — I opened a browser and navigated to the Web Tier's public IP: 20.227.32.150.

The Book Review App loaded. The Pragmatic Programmer. Clean Code. JavaScript: The Good Parts. All pulling live data from a database sitting in a private subnet that the internet cannot touch.

That moment made all the friction worth it.


## What I Learned

Troubleshooting is the job.
The 0 vCPU quota limit and the Bastion failure weren't obstacles to the assignment — they were the assignment. Real infrastructure work is 40% planning and 60% adapting to what actually happens.

Constraints force better decisions.
Not having 6 subnets didn't make my architecture wrong. It made me document the tradeoff and move forward. That's a professional skill.

Private networking is a mindset, not a feature.
Once you understand why each tier is isolated, the subnets and NSGs stop feeling like configuration work. They become decisions about trust — and trust is what security is built on.


## What's Next

The AWS portion of this project is coming up next.

After that, I'll be moving into Agentic AI for DevOps — exploring how AI agents are starting to reshape how infrastructure is managed, automated, and reasoned about. It's the next frontier for this field, and I want to understand it deeply before it becomes the standard.

Follow along: linkedin.com/in/odoworitse-afari


This post is part of my DMI Cohort 2 learning journey with Pravin Mishra — CloudAdvisory.
