Aditya Bhatt

My VPC Journey: Hosting a Website in a Multi-AZ Production Environment


When you’re building something for production, “just launch an EC2” is never the full story. Networking, routing, gateways, and security controls define whether your app is resilient and secure or completely exposed.

I recently set up a multi-AZ AWS VPC environment to host a website, and this process turned into a real-world lesson in cloud networking. Below is a breakdown of the entire journey from subnets and gateways to bastion access and load balancers.


1. Subnets and Internet Gateway

I provisioned a VPC in us-west-1 with CIDR 10.0.0.0/16. Inside it, I created four subnets across two Availability Zones for fault tolerance:

  • 10.0.1.0/24 → Public subnet in AZ1 (us-west-1a)
  • 10.0.2.0/24 → Public subnet in AZ2 (us-west-1c)
  • 10.0.3.0/24 → Private subnet in AZ1
  • 10.0.4.0/24 → Private subnet in AZ2

By default, subnets are private (no route to the internet). To make the public subnets truly public:

  1. Attached an Internet Gateway (IGW) to the VPC
  2. Created a public route table:
Destination   Target
0.0.0.0/0     igw-xxxxxx
  3. Associated this route table with the public subnets
  4. Enabled auto-assign public IP in subnet settings

Tech Note: Without auto-assign public IPs (or an Elastic IP attached later), instances in a “public subnet” have no public address, so they can’t reach the internet through the IGW.

Key Points

  • Subnets are private until routed to an IGW
  • Public subnet = IGW route + auto public IPs
  • Allocate Elastic IPs where an address must stay stable across stop/start cycles
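The steps above can be sketched with the AWS CLI. This is a minimal sketch, assuming the VPC and subnets already exist; all IDs (vpc-xxxxxx, igw-xxxxxx, subnet IDs) are placeholders:

```shell
# Create an Internet Gateway and attach it to the VPC (IDs are placeholders)
aws ec2 create-internet-gateway --query 'InternetGateway.InternetGatewayId' --output text
aws ec2 attach-internet-gateway --internet-gateway-id igw-xxxxxx --vpc-id vpc-xxxxxx

# Public route table: default route (0.0.0.0/0) points at the IGW
aws ec2 create-route-table --vpc-id vpc-xxxxxx
aws ec2 create-route --route-table-id rtb-xxxxxx \
  --destination-cidr-block 0.0.0.0/0 --gateway-id igw-xxxxxx

# Associate it with each public subnet and enable auto-assign public IPs
aws ec2 associate-route-table --route-table-id rtb-xxxxxx --subnet-id subnet-aaaaaa
aws ec2 modify-subnet-attribute --subnet-id subnet-aaaaaa --map-public-ip-on-launch
```

Repeat the last two commands for the second public subnet (10.0.2.0/24 in us-west-1c).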

2. NAT Gateway – Controlled Outbound Access

My private subnets (10.0.3.0/24, 10.0.4.0/24) needed outbound internet (for updates, patching, yum installs) without being directly reachable. Enter the NAT Gateway.

Steps I followed:

  • Created the NAT Gateway in a public subnet (important: the NAT Gateway itself needs a route to the IGW)
  • Allocated an Elastic IP and bound it to NAT
  • Updated private route table:
Destination   Target
0.0.0.0/0     nat-xxxxxx
  • Associated this private route table with the private subnets

Tech Note: NAT is stateful. It tracks connections opened by private instances and allows return traffic, but drops unsolicited inbound traffic.

Key Points

  • NAT must live in a public subnet
  • A public NAT Gateway requires an Elastic IP
  • Route private subnets through NAT, not IGW
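The same setup sketched with the AWS CLI (a sketch only; allocation, subnet, NAT, and route table IDs are placeholders):

```shell
# Allocate an Elastic IP and create the NAT Gateway in a PUBLIC subnet
aws ec2 allocate-address --domain vpc --query 'AllocationId' --output text
aws ec2 create-nat-gateway --subnet-id subnet-aaaaaa --allocation-id eipalloc-xxxxxx

# Private route table: default route via the NAT Gateway, not the IGW
aws ec2 create-route --route-table-id rtb-yyyyyy \
  --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-xxxxxx

# Associate the private route table with each private subnet
aws ec2 associate-route-table --route-table-id rtb-yyyyyy --subnet-id subnet-cccccc
```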

3. Bastion Host – The Secure Jump Point

You don’t expose private EC2s directly for SSH. Instead, you deploy a bastion host in the public subnet to act as a controlled entry point.

What I did:

  • Launched a CIS-hardened Amazon Linux AMI
  • Configured Security Group (SG):
    • Inbound tcp/22 allowed only from my corporate/public IP
    • Outbound tcp/22 allowed to the private instances’ SG
  • Used ProxyJump for SSH chaining:
ssh -i bastion.pem -J ec2-user@bastion-public-ip ec2-user@private-instance
  • Used scp with ProxyJump to move SSH keys/files:
scp -i bastion.pem -o ProxyJump=ec2-user@bastion-public-ip mykey.pem ec2-user@private-instance:/home/ec2-user/
  • Secured the keys with chmod 400
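The ProxyJump flags above can also be captured once in ~/.ssh/config, so a plain `ssh app-server` does the two-hop connection. Hostnames, IPs, and key paths below are placeholders:

```
# ~/.ssh/config -- hostnames, IPs, and key paths are placeholders
Host bastion
    HostName <bastion-public-ip>
    User ec2-user
    IdentityFile ~/.ssh/bastion.pem

Host app-server
    HostName 10.0.3.5
    User ec2-user
    IdentityFile ~/.ssh/mykey.pem
    ProxyJump bastion
```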

Security Practices

  • Limit bastion access to known IPs only
  • Use hardened AMIs (e.g. CIS benchmarks) to shrink the attack surface
  • Enable monitoring (CloudTrail, GuardDuty) for SSH attempts

4. Private Instances with Load Balancer

My app servers ran inside the private subnets. They needed to serve traffic without being directly exposed. The solution: Application Load Balancer (ALB).

Configuration:

  • Deployed ALB in both public subnets (multi-AZ)
  • Listener: HTTP port 80
  • Target Group: private EC2s on port 8080
  • SGs:
    • ALB SG allowed inbound port 80 from 0.0.0.0/0
    • Instance SG allowed inbound port 8080 only from ALB SG

Health Checks

Path: /health
Protocol: HTTP
Port: 8080
Healthy threshold: 3
Unhealthy threshold: 2

Tech Note: Health checks mark failing targets unhealthy so the ALB stops routing to them, and traffic automatically shifts to the remaining healthy backends.

Key Points

  • ALB must live in public subnets
  • Private EC2s never need direct public IPs
  • SG rules enforce controlled traffic paths
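Sketched with the AWS CLI, the target group, health check, ALB, and listener above look roughly like this (names, IDs, and ARNs are placeholders):

```shell
# Target group for the private instances, with the health check settings from above
aws elbv2 create-target-group \
  --name app-tg --protocol HTTP --port 8080 --vpc-id vpc-xxxxxx \
  --health-check-protocol HTTP --health-check-path /health --health-check-port 8080 \
  --healthy-threshold-count 3 --unhealthy-threshold-count 2

# Internet-facing ALB spanning both public subnets (multi-AZ)
aws elbv2 create-load-balancer --name app-alb \
  --subnets subnet-aaaaaa subnet-bbbbbb --security-groups sg-alb-xxxxxx

# Listener: HTTP :80 forwards to the target group
aws elbv2 create-listener --load-balancer-arn <alb-arn> \
  --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn=<tg-arn>
```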

5. DNS and Internal Resolution

Networking works better with names than raw IPs. In VPC settings, I:

  • Enabled DNS hostnames and DNS resolution
  • Allowed private instances to resolve each other via names like ip-10-0-3-5.us-west-1.compute.internal
  • Mapped a custom domain via Route 53 → ALB DNS name

Key Points

  • Always enable DNS hostnames in VPC
  • Route 53 + ALB = production-ready domain routing
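The Route 53 → ALB mapping is an alias A record. A sketch of the change batch (the hosted zone ID, domain, ALB zone ID, and ALB DNS name are placeholders):

```shell
# Alias A record pointing the custom domain at the ALB's DNS name
aws route53 change-resource-record-sets --hosted-zone-id <your-zone-id> --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "www.example.com",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "<alb-hosted-zone-id>",
        "DNSName": "<alb-dns-name>",
        "EvaluateTargetHealth": true
      }
    }
  }]
}'
```

Alias records resolve directly to the ALB’s current IPs, so no CNAME at the zone apex is needed.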

6. Default VPC

Do not delete the Default VPC. AWS services sometimes depend on it in the background.

Key Points

  • Leave the Default VPC alone — it’s a safety net

Production-Grade Practices Recap

  • Multi-AZ subnets for high availability
  • Separate route tables for public and private zones
  • NAT with Elastic IP for outbound-only private access
  • Bastion as a hardened jump server with strict SG rules
  • ALB in public subnets exposing private app servers
  • Route 53 for DNS + ALB hostname mapping
  • Security Groups with least privilege design


Final Thoughts

This wasn’t about launching a single EC2 instance. It was about designing a network fabric:

  • IGW handled ingress
  • NAT secured egress
  • Bastion managed SSH
  • ALB distributed web traffic
  • SGs enforced boundaries

The result: a resilient, secure, production-grade VPC environment where every resource plays a defined role.

Think of it like plumbing. Subnets are pipes, the IGW is the tap, NAT is the filter, the bastion is the access gate, and the ALB is the pressure valve. Get the flow wrong and nothing works; get it right and traffic reaches your users seamlessly.


Pro Checklist

  • Public subnets → IGW + auto-assign public IP
  • Private subnets → NAT for outbound
  • Bastion with SSH restricted to known IPs
  • ALB for exposing private instances
  • Default VPC untouched
