Hritik Raj
Building a Global HTTP Load Balancer with Managed Instance Groups

πŸš€ Scaling Applications the Right Way: My First GCP Load Balancer Setup

Hey Engineers πŸ‘‹

Today, I stepped into something that every production system relies on: a Global HTTP Load Balancer on Google Cloud Platform.

Instead of running a single VM and hoping it survives traffic spikes, I built a proper architecture with:

  • Managed Instance Group
  • Health Checks
  • Backend Service
  • Global HTTP Load Balancer

Let’s break it down.


🎯 Objective

  • Create an Instance Template with Nginx installed
  • Launch 2 VM instances using a Managed Instance Group
  • Configure HTTP Health Checks
  • Create a Global HTTP Load Balancer
  • Verify traffic distribution across VMs

πŸ—οΈ Architecture Overview

[Architecture diagram: clients → Global HTTP Load Balancer → Managed Instance Group (2 Nginx VMs)]

This setup ensures:

  • High availability
  • Traffic distribution
  • Automatic health monitoring

πŸ› οΈ Phase A: Create Instance Template

I first created an Instance Template to ensure uniform VM configuration.

Machine Type: e2-micro

Region: us-central1

Firewall: Allow HTTP traffic

Startup Script:

#!/bin/bash
# Install Nginx and serve a page identifying this VM by hostname.
apt-get update -y
apt-get install -y nginx
echo "Hello from $(hostname)" > /var/www/html/index.html
systemctl restart nginx

Each VM now serves its hostname via Nginx.
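I built the template in the console, but for reference, here is a roughly equivalent gcloud sketch. The template name `nginx-template` and the `startup.sh` filename are my placeholders, not part of the original setup:

```shell
# Create an instance template matching the settings above.
# Assumes the startup script above was saved locally as startup.sh.
gcloud compute instance-templates create nginx-template \
  --machine-type=e2-micro \
  --tags=http-server \
  --metadata-from-file=startup-script=startup.sh
```

The `http-server` network tag is what the "Allow HTTP traffic" checkbox applies under the hood, matching the default HTTP-allow firewall rule.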


πŸ› οΈ Phase B: Create Managed Instance Group

Using the template, I created a Managed Instance Group (MIG) with:

  • 2 instances
  • Same zone
  • Auto-healing enabled

Why a MIG?

Because it:

  • Ensures identical VM deployment from the template
  • Supports autoscaling and auto-healing
  • Works seamlessly with the Load Balancer
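The same MIG can be sketched from the CLI. Names and the zone choice (`us-central1-a`, within the template's region) are illustrative:

```shell
# Create a zonal MIG of 2 instances from the template.
gcloud compute instance-groups managed create nginx-mig \
  --template=nginx-template \
  --size=2 \
  --zone=us-central1-a
```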

πŸ› οΈ Phase C: Configure Health Check

I created an HTTP health check with:

  • Protocol: HTTP
  • Port: 80
  • Path: /

This ensures the Load Balancer only routes traffic to healthy instances.

Without a health check, the backend service has no way to know which instances can serve traffic, and requests will not be routed correctly.
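A hedged CLI sketch of the same health check, plus wiring it to the MIG for auto-healing (resource names continue the placeholders from the earlier sketches):

```shell
# HTTP health check probing / on port 80.
gcloud compute health-checks create http nginx-hc \
  --port=80 \
  --request-path=/

# Attach it to the MIG for auto-healing; --initial-delay gives
# new VMs time to boot before they are probed.
gcloud compute instance-groups managed update nginx-mig \
  --zone=us-central1-a \
  --health-check=nginx-hc \
  --initial-delay=300
```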


πŸ› οΈ Phase D: Create Global HTTP Load Balancer

Under:

Network Services β†’ Load Balancing β†’ Create Load Balancer

Configuration:

Backend

  • Backend Type: Instance Group
  • Attached: Managed Instance Group
  • Attached: Health Check

Frontend

  • Protocol: HTTP
  • IP: Ephemeral public IP

Deployment took around 3–5 minutes (important: GCP takes time to propagate globally).
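The console wizard creates several resources behind the scenes: a backend service, a URL map, a target HTTP proxy, and a global forwarding rule. A hedged CLI equivalent (all names are my placeholders) looks roughly like this:

```shell
# Global backend service wired to the health check.
gcloud compute backend-services create nginx-backend \
  --protocol=HTTP \
  --port-name=http \
  --health-checks=nginx-hc \
  --global

# Map the MIG's port 80 to the named port "http".
gcloud compute instance-groups managed set-named-ports nginx-mig \
  --named-ports=http:80 \
  --zone=us-central1-a

# Attach the MIG as a backend.
gcloud compute backend-services add-backend nginx-backend \
  --instance-group=nginx-mig \
  --instance-group-zone=us-central1-a \
  --global

# URL map -> HTTP proxy -> global forwarding rule (the frontend).
gcloud compute url-maps create nginx-lb \
  --default-service=nginx-backend
gcloud compute target-http-proxies create nginx-proxy \
  --url-map=nginx-lb
gcloud compute forwarding-rules create nginx-http-rule \
  --global \
  --target-http-proxy=nginx-proxy \
  --ports=80
```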


βœ… Testing Traffic Distribution

After deployment, I accessed the Load Balancer IP in the browser.

Refreshing multiple times showed:

Hello from instance-1
Hello from instance-2
Hello from instance-1

To test via terminal:

while true; do curl http://LOAD_BALANCER_IP; sleep 1; done

This confirmed that requests were being distributed across both instances.
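Instead of eyeballing the refreshes, you can tally how many responses each backend served. This small shell helper is my own addition; `LOAD_BALANCER_IP` in the usage comment is the same placeholder as in the curl loop above:

```shell
# tally_backends: count how many piped-in responses came from each backend,
# sorted with the busiest backend first.
tally_backends() {
  sort | uniq -c | sort -rn
}

# Example usage against the load balancer (LOAD_BALANCER_IP is a placeholder):
#   for i in $(seq 1 20); do curl -s http://LOAD_BALANCER_IP; done | tally_backends
```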


🧠 Key Concepts Learned

  • Managed Instance Groups simplify scaling.
  • Health checks are mandatory for traffic routing.
  • Firewall rules directly impact load balancer behaviour.
  • Global HTTP Load Balancer configuration takes propagation time.

πŸš€ What’s Next

To take this further, I plan to:

  • Enable auto scaling
  • Add HTTPS with SSL certificates
  • Attach a custom domain
  • Deploy a real application instead of static Nginx

πŸ“ Final Thoughts

This was my first hands-on experience building a scalable web architecture in GCP.

Instead of just reading about load balancing, I implemented it, debugged it, and understood how production-grade systems maintain availability.

That’s the difference between theory and engineering.


πŸ”— Let’s Connect

If you're also building in the cloud, let's learn together 🚀
