AWS VPC Peering Using Terraform: A Complete Multi-Region Hands-On Guide

AWS networking sounds simple on paper.
“Just connect two VPCs.”
Cool… until you do it once and realize nothing talks to anything 😵‍💫

VPC Peering is one of those AWS concepts that feels easy but quickly turns into a rabbit hole of CIDR blocks, route tables, security groups, and a lot of “why is ping not working?” moments. I’ve been there - and that confusion is exactly why I built this project.

In this blog, I will walk you through AWS VPC Peering using Terraform, not as a toy example with two VPCs in one region, but as a real multi-region setup that actually behaves like something you’d see in production. The goal wasn’t just to “make it work,” but to deeply understand how traffic flows between VPCs and what AWS expects you to configure for things to function correctly.

This Blog Covers:

By the end of this post, you will learn:

  • What AWS VPC Peering really does (and what it doesn’t)
  • How to design a multi-region VPC architecture
  • How Terraform handles multi-provider and cross-region resources
  • Why route tables are more important than the peering connection itself
  • How to test and verify VPC peering like an engineer, not a guesser

What we’re building

We will build a full-mesh VPC peering architecture across three AWS regions:

  • One VPC in us-east-1
  • One VPC in us-west-2
  • One VPC in eu-west-2

All VPCs will be connected using VPC peering, with:

  • Non-overlapping CIDR blocks
  • Proper route table configuration
  • Security groups allowing cross-VPC traffic
  • EC2 instances deployed to prove connectivity

Terraform will manage everything - from VPC creation to peering, routing, security, and testing outputs.

Prerequisites

Before continuing, you should be comfortable with:

  • Basic AWS concepts (VPC, EC2, Security Groups)
  • Terraform fundamentals (init, plan, apply)
  • Reading infrastructure code

Let’s break VPC Peering properly — and then rebuild it the right way 🚀

What is AWS VPC Peering?

AWS VPC Peering allows two VPCs to communicate privately using AWS’s internal network. Once peered, resources like EC2 instances can talk to each other using private IPs, as if they were in the same network.

No public internet.
No NAT.
No VPN tunnels.

Sounds simple, right?
This is where people get baited.

What VPC Peering Actually Does (and Doesn’t)

VPC Peering only creates a connection between two VPCs.
It does NOT:

  • Automatically route traffic ❌
  • Bypass security groups ❌
  • Enable transitive routing ❌

Think of it like this:

VPC Peering opens the door, but route tables decide whether traffic walks through it.

The Golden Rules of VPC Peering (Non-Negotiable)

  1. CIDR Blocks Must NOT Overlap

Each VPC must have a unique CIDR range.

Why?
Because AWS needs to know where to send packets.
If two VPCs claim the same IP range, AWS basically goes:

“Yeah… no.” ❌

  2. Routing Is Mandatory (Both Sides)

This is the #1 reason peering “doesn’t work”.

For traffic to flow:

  • VPC A must have a route to VPC B
  • VPC B must have a route to VPC A

No route = no traffic. Period.

  3. Security Groups Still Apply

Even with peering:

  • Security groups still block or allow traffic
  • ICMP (ping), SSH, HTTP - all must be explicitly allowed

Peering ≠ permission.

  4. VPC Peering Is NOT Transitive

This one hurts the most.

If:

  • VPC A is peered with VPC B
  • VPC B is peered with VPC C

🚨 VPC A CANNOT talk to VPC C through VPC B

No shortcuts.
No hub routing.
Each VPC pair needs direct peering.

This is why, in this project, we use a full-mesh peering setup.

Common Use Cases for VPC Peering

VPC Peering is great when:

  • You need simple, low-latency VPC-to-VPC communication
  • You have a small number of VPCs
  • You want minimal operational overhead

Typical scenarios:

  • Dev ↔ Prod VPC communication
  • Shared services VPC (auth, logging, monitoring)
  • Cross-region application access

VPC Peering is not about connecting VPCs.
It’s about teaching AWS how traffic should move.
Peering + Routes + Security Groups
All three must agree — or nothing works.

Architecture Overview

We are building a multi-region, full-mesh VPC peering architecture consisting of:

  • Primary VPC in us-east-1
  • Secondary VPC in us-west-2
  • Tertiary VPC in eu-west-2

Each VPC:

  • Has its own CIDR block
  • Contains a public subnet
  • Hosts an EC2 instance
  • Is peered with the other two VPCs

This creates direct connectivity between every VPC pair, without relying on transitive routing (because AWS doesn’t allow that).

Why Multi-Region?

Multi-region architectures are common in real-world systems for:

  • Reduced latency
  • Fault tolerance
  • Geographic availability
  • Disaster recovery scenarios

By spreading VPCs across North America (East + West) and Europe, this project simulates how globally distributed applications communicate internally.

This is also where many people get confused - because cross-region VPC peering behaves slightly differently than same-region peering and forces you to think carefully about routing.

CIDR Block Allocation

Each VPC is assigned a non-overlapping /16 CIDR block:

Primary VPC:    10.0.0.0/16
Secondary VPC:  10.1.0.0/16
Tertiary VPC:   10.2.0.0/16


Why /16?

  • Provides 65,536 IP addresses per VPC
  • Leaves room for future subnet expansion

Each VPC contains a /24 public subnet, for example:

Primary Subnet: 10.0.1.0/24

Key rule: CIDR blocks must not overlap — VPC peering simply won’t work otherwise.
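In Terraform, these ranges are easiest to manage when they are defined once and referenced everywhere. A minimal sketch of what that could look like (variable names are illustrative; the secondary and tertiary subnet values are assumptions following the same pattern):

# variables.tf - CIDR allocation in one place (names are illustrative)
variable "vpc_cidrs" {
  description = "Non-overlapping CIDR block per VPC"
  type        = map(string)
  default = {
    primary   = "10.0.0.0/16"
    secondary = "10.1.0.0/16"
    tertiary  = "10.2.0.0/16"
  }
}

variable "subnet_cidrs" {
  description = "Public subnet CIDR per VPC"
  type        = map(string)
  default = {
    primary   = "10.0.1.0/24"
    secondary = "10.1.1.0/24"   # assumed, follows the primary pattern
    tertiary  = "10.2.1.0/24"   # assumed, follows the primary pattern
  }
}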

Core Networking Components

Each VPC includes:

  • Internet Gateway – for outbound internet access
  • Route Table – controls where traffic goes
  • Security Groups – control who is allowed to talk
  • EC2 Instance – used to validate connectivity

The EC2 instances are not “the application” here - they are network probes.
If ping and HTTP work across VPCs, the architecture is correct.

Traffic Flow (What Actually Happens)

When an instance in the Primary VPC talks to an instance in the Secondary VPC:

  1. Packet leaves Primary instance
  2. Route table matches destination CIDR (10.1.0.0/16)
  3. Traffic is sent via the VPC peering connection
  4. Secondary route table allows return traffic
  5. Security groups permit ICMP/TCP
  6. Response flows back

Miss any one of these steps - and traffic dies silently.

Why Terraform for This Setup?

Managing one VPC from the console is doable.
Managing three VPCs across three regions with peering, routing, and security rules?

Yeah… no thanks 😵

Terraform (by HashiCorp) gives us:

  • Infrastructure as Code (IaC)
  • Version control for networking
  • Repeatable multi-region deployments
  • One command to destroy everything (underrated superpower)
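Cross-region deployments hinge on provider aliases: one AWS provider block per region, each with its own alias. A minimal sketch of providers.tf (the regions match this project; the alias names are illustrative):

# providers.tf - one aliased AWS provider per region (alias names are illustrative)
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  alias  = "primary"
  region = "us-east-1"
}

provider "aws" {
  alias  = "secondary"
  region = "us-west-2"
}

provider "aws" {
  alias  = "tertiary"
  region = "eu-west-2"
}

Every regional resource then picks its region explicitly via provider = aws.primary (or secondary/tertiary), which is what makes cross-region peering expressible in one configuration.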

VPCs & Subnets

Each region gets:

  • One VPC
  • One public subnet

The VPC configuration is intentionally consistent:

  • DNS support enabled
  • DNS hostnames enabled
  • Clear tagging for identification

Why consistency matters:

  • Easier debugging
  • Easier scaling
  • Easier explanation in interviews 😉

Subnets are:

  • /24 size
  • Placed in the first available AZ per region
  • Public, with auto-assigned public IPs

This keeps networking simple and observable, which is perfect for learning and testing.
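Here is a hedged sketch of one region's VPC and subnet (the primary region; resource names and tags are illustrative, and the other two regions follow the same pattern with their own CIDRs and provider aliases):

# Primary VPC and public subnet (us-east-1) - names are illustrative
resource "aws_vpc" "primary" {
  provider             = aws.primary
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = { Name = "primary-vpc" }
}

data "aws_availability_zones" "primary" {
  provider = aws.primary
  state    = "available"
}

resource "aws_subnet" "primary_public" {
  provider                = aws.primary
  vpc_id                  = aws_vpc.primary.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = data.aws_availability_zones.primary.names[0]  # first available AZ
  map_public_ip_on_launch = true                                          # public subnet

  tags = { Name = "primary-public-subnet" }
}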

Internet Gateways & Route Tables (Basic Connectivity)

Before VPCs can talk to each other, they need to:

  • Talk to the internet (for SSH, updates, testing)

Each VPC gets:

  • One Internet Gateway
  • One route table
  • A default route (0.0.0.0/0 -> IGW)

Important note:

Internet access is not related to VPC peering — but without it, testing becomes painful.

This separation helps reinforce:

  • Internet routing ≠ peering routing
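For reference, a minimal sketch of the internet-facing routing for one VPC (illustrative names, building on the primary VPC sketch above; the default route is kept as a separate aws_route so peering routes can be added the same way later):

# Internet access for the primary VPC - a separate concern from peering routes
resource "aws_internet_gateway" "primary" {
  provider = aws.primary
  vpc_id   = aws_vpc.primary.id
}

resource "aws_route_table" "primary" {
  provider = aws.primary
  vpc_id   = aws_vpc.primary.id

  tags = { Name = "primary-rt" }
}

resource "aws_route" "primary_internet" {
  provider               = aws.primary
  route_table_id         = aws_route_table.primary.id
  destination_cidr_block = "0.0.0.0/0"                       # default route
  gateway_id             = aws_internet_gateway.primary.id   # out via the IGW
}

resource "aws_route_table_association" "primary_public" {
  provider       = aws.primary
  subnet_id      = aws_subnet.primary_public.id
  route_table_id = aws_route_table.primary.id
}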

VPC Peering Connections (The Handshake)

Now comes the most misunderstood part.

Creating a peering connection does only one thing:
👉 It establishes a relationship between two VPCs.

That’s it.

In this project, we create three peering connections:

  • Primary ↔ Secondary
  • Primary ↔ Tertiary
  • Secondary ↔ Tertiary

Because this is cross-region peering, each connection has:

  • A requester (one region)
  • An accepter (another region)

Terraform models this explicitly using:

  • aws_vpc_peering_connection
  • aws_vpc_peering_connection_accepter

This makes the peering lifecycle visible and deterministic.

💡 Key insight:
If you forget the accepter resource, the connection will sit in pending-acceptance forever.
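A hedged sketch of one cross-region pair (Primary to Secondary; resource names are illustrative, and a matching aws_vpc.secondary is assumed to exist):

# Requester lives in us-east-1, accepter in us-west-2
resource "aws_vpc_peering_connection" "primary_secondary" {
  provider    = aws.primary
  vpc_id      = aws_vpc.primary.id
  peer_vpc_id = aws_vpc.secondary.id
  peer_region = "us-west-2"   # cross-region: acceptance can't happen on the requester side
  auto_accept = false

  tags = { Name = "primary-to-secondary" }
}

resource "aws_vpc_peering_connection_accepter" "primary_secondary" {
  provider                  = aws.secondary
  vpc_peering_connection_id = aws_vpc_peering_connection.primary_secondary.id
  auto_accept               = true   # without this, the connection stays pending-acceptance

  tags = { Name = "primary-to-secondary-accepter" }
}

The same pattern repeats for Primary↔Tertiary and Secondary↔Tertiary, giving the full mesh.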

Route Tables for Peering

VPC Peering without routes does nothing.

For every peering connection:

  • Each VPC needs a route
  • Destination = remote VPC CIDR
  • Target = peering connection ID

That means:

  • 1 peering = 2 routes
  • 3 peerings = 6 routes (two per connection)

Terraform forces you to be explicit, which is good.
It prevents “it works sometimes” networking.

We also use depends_on to ensure:

  • Routes are created only after peering is active

This avoids race conditions and weird apply-time errors.

💡 Real-world lesson:
If ping doesn’t work, check route tables before blaming security groups.
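To make the two-routes-per-peering rule concrete, here is a sketch of one pair (it assumes the route tables, peering connection, and accepter from the earlier sketches, plus a matching aws_route_table.secondary):

# Two routes per peering: one on each side, pointing at the remote CIDR
resource "aws_route" "primary_to_secondary" {
  provider                  = aws.primary
  route_table_id            = aws_route_table.primary.id
  destination_cidr_block    = "10.1.0.0/16"   # secondary VPC CIDR
  vpc_peering_connection_id = aws_vpc_peering_connection.primary_secondary.id

  depends_on = [aws_vpc_peering_connection_accepter.primary_secondary]   # wait until peering is active
}

resource "aws_route" "secondary_to_primary" {
  provider                  = aws.secondary
  route_table_id            = aws_route_table.secondary.id
  destination_cidr_block    = "10.0.0.0/16"   # primary VPC CIDR
  vpc_peering_connection_id = aws_vpc_peering_connection.primary_secondary.id

  depends_on = [aws_vpc_peering_connection_accepter.primary_secondary]
}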

Security Groups (Permission Still Matters)

Even with:

  • Active peering
  • Correct routes

Traffic will still fail if security groups block it.

In this project:

  • SSH is allowed (demo only, don’t @ me)
  • ICMP (ping) is explicitly allowed between VPC CIDRs
  • TCP traffic is allowed for HTTP testing

Why this matters:

  • VPC peering does not bypass security
  • Security groups are stateful, but still strict

This step completes the networking triangle:

Peering + Routes + Security Groups

Miss one → nothing works.
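A sketch of what the primary instance's security group could look like (names are illustrative; the peer CIDRs are the two remote VPC ranges):

# Security group for the primary instance - peer CIDRs must be allowed explicitly
resource "aws_security_group" "primary" {
  provider = aws.primary
  name     = "primary-peering-sg"
  vpc_id   = aws_vpc.primary.id

  ingress {
    description = "ICMP (ping) from peered VPCs"
    from_port   = -1
    to_port     = -1
    protocol    = "icmp"
    cidr_blocks = ["10.1.0.0/16", "10.2.0.0/16"]
  }

  ingress {
    description = "HTTP from peered VPCs"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["10.1.0.0/16", "10.2.0.0/16"]
  }

  ingress {
    description = "SSH (demo only)"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}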

EC2 Instances

The EC2 instances are not the “app.”
They are validation tools.

Each instance:

  • Runs in a different VPC
  • Uses user data to install Apache
  • Serves a page showing its VPC and private IP

If:

  • Ping works ✅
  • curl over private IP works ✅

Then the infrastructure is correct.

No guessing. No vibes-based networking.
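A minimal sketch of one probe instance (the AMI lookup, instance type, and user data are assumptions, not the project's exact values):

# Network probe in the primary VPC (AMI filter and names are illustrative)
data "aws_ami" "amazon_linux_primary" {
  provider    = aws.primary
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["al2023-ami-*-x86_64"]
  }
}

resource "aws_instance" "primary" {
  provider               = aws.primary
  ami                    = data.aws_ami.amazon_linux_primary.id
  instance_type          = "t3.micro"
  subnet_id              = aws_subnet.primary_public.id
  vpc_security_group_ids = [aws_security_group.primary.id]

  user_data = <<-EOF
    #!/bin/bash
    dnf install -y httpd
    echo "Primary VPC - $(hostname -I)" > /var/www/html/index.html
    systemctl enable --now httpd
  EOF

  tags = { Name = "primary-probe" }
}

Once applied, SSH to one instance and ping / curl the private IPs of the other two. If a page from a remote VPC loads over its private IP, all three layers (peering, routes, security groups) agree.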

Project Structure (Quick Recap)

The Terraform project is structured to stay readable even as it scales:

terraform/
├── providers.tf
├── variables.tf
├── data.tf
├── main.tf
├── locals.tf
├── outputs.tf
├── backend.tf
└── terraform.tfvars

This separation helps with:

  • Easier debugging
  • Cleaner diffs
  • Sanity preservation at 2 AM 😂

Initializing Terraform

From the project directory, start with:

terraform init

What this does:

  • Downloads required providers
  • Initializes the backend
  • Prepares Terraform to manage state

If this step fails, stop here.
Most issues later on are just delayed init problems in disguise.

Validating the Configuration

Before touching AWS resources:

terraform validate
terraform fmt

Why this matters:

  • Catches syntax errors early
  • Ensures consistent formatting
  • Saves you from dumb mistakes (we all make them)

Terraform being quiet here is a good sign.

Planning the Deployment

Next:

terraform plan

This is Terraform saying:
“Here’s exactly what I’m about to do. You sure about this?”

You should see:

  • ~35 resources to be created
  • VPCs across 3 regions
  • Peering connections
  • Route tables and security groups
  • EC2 instances

Read this output carefully.
If something looks off here, it will be wrong after apply.

Applying the Infrastructure

When everything looks good:

terraform apply

Type yes and let Terraform cook 🔥

What happens next:

  • VPCs are created
  • Subnets, IGWs, and route tables come online
  • Peering requests are sent and accepted
  • Routes are added
  • EC2 instances launch and bootstrap

Deployment usually takes 5–10 minutes, depending on region speed.

Terraform State & Cleanup

Terraform tracks everything using a state file.
This project supports remote state (S3), which is important for:

  • Team collaboration
  • State safety
  • Locking

And when you’re done experimenting:

terraform destroy

One command.
Everything gone.
No leftover AWS bill jumpscare 💸

Key Takeaways

By building a multi-region, full-mesh VPC peering setup with Terraform, a few things became very clear:

  • VPC peering is simple in concept, but strict in execution
  • Peering alone does nothing - routes and security rules do the real work
  • Most networking bugs aren’t mysterious… they’re just misconfigurations
  • Terraform doesn’t hide complexity - it forces clarity


Happy Terraform Deploying!!
