Gergo Vadasz

Posted on • Originally published at gergovadasz.hu

No More Middlemen: Native AWS to Google Cloud Connectivity Explained

Until now, connecting AWS and Google Cloud meant stitching together VPNs over the public internet, colocating in the same facility with two separate cross-connects, or paying a third-party network provider to bridge the gap. These approaches work - plenty of companies rely on them daily - but they all come with operational overhead and added complexity that make multi-cloud connectivity harder to set up and maintain than it needs to be.

That changed on April 14, 2026, when AWS launched AWS Interconnect - multicloud with Google Cloud as the first partner. For the first time, you can provision a dedicated, private connection between AWS and GCP directly from the console - no middleman, no colo, no VPN tunnels. I got hands-on with it the same week it went GA, and in this post I'll walk you through exactly how to set it up, step by step.

 

What is AWS Interconnect - multicloud?

AWS Interconnect - multicloud is a new service under the AWS Direct Connect family that lets you create private, high-bandwidth connections directly to other cloud providers. At launch, Google Cloud is the only supported partner, with Microsoft Azure and Oracle Cloud Infrastructure coming later in 2026.

The key things to know:

  • It lives under Direct Connect in the AWS console
  • Connections are region-to-region (e.g., AWS eu-central-1 to GCP europe-west3)
  • It uses a Direct Connect gateway on the AWS side and Partner Cross-Cloud Interconnect on the GCP side
  • Bandwidth ranges from 1 Gbps to 100 Gbps, with granular sizing - unlike traditional Cross-Cloud Interconnect which only offers 10G/100G increments
  • Redundancy is built into the underlying resources - no need to manually configure redundant connections like with traditional Cross-Cloud Interconnect
  • The connection can be initiated from either the AWS or the GCP side
  • Each GCP project is limited to one transport resource per region
  • Pricing is based on bandwidth and geographic scope
  • Free tier: One free local 500 Mbps interconnect per region, starting May 2026

 

Architecture Overview

Before diving into the setup, here's what the end-to-end architecture looks like:

AWS to Google Cloud Interconnect architecture

On the AWS side, traffic flows from your EC2 instances through a VPC Attachment into a Transit Gateway, then via a Direct Connect Attachment to a Direct Connect Gateway. The Direct Connect Gateway connects to GCP through the Partner Cross-Cloud Interconnect - this is the actual cross-cloud link that AWS provisions behind the scenes.

On the Google Cloud side, the interconnect attaches to your VPC via GCP VPC Network Peering - to be clear, this is not a peering between the AWS VPC and the GCP VPC. It's a standard GCP VPC Network Peering used to connect the interconnect's managed network to your own GCP VPC, giving your Compute Engine instances direct reachability to AWS resources.

VPC Network Peering is not the only option on the GCP side - you can also use Network Connectivity Center (NCC) to connect the Partner Cross-Cloud Interconnect to your Google Cloud environment. NCC is the better choice if you need to connect multiple VPCs or integrate this into a broader hub-and-spoke topology on the GCP side. In this walkthrough, I'm using VPC Network Peering for simplicity.

The key takeaway from this diagram is that both sides use familiar networking primitives - there's no new proprietary overlay. If you've worked with AWS Transit Gateway or GCP Partner Interconnect before, the building blocks will feel familiar. What makes this work is that AWS and Google maintain pre-established network links between their regions. This service automates the provisioning of a dedicated connection over that shared infrastructure - no manual cross-connects or third-party involvement needed.

 

Prerequisites

Before you begin, you'll need:

  • An AWS account with access to Direct Connect in a supported region
  • A Google Cloud project with the Network Connectivity API enabled
  • Appropriate IAM permissions on both sides
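On the GCP side, the API can be enabled from the CLI as well - a one-liner, with the project ID below as a placeholder:

```shell
# Enable the Network Connectivity API in your GCP project
# (replace my-project with your actual project ID).
gcloud services enable networkconnectivity.googleapis.com --project=my-project
```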

 

Step 1: Create the Multicloud Interconnect in AWS

Navigate to Direct Connect > AWS Interconnect - multicloud in the AWS console. You'll see the interconnect dashboard - click Create Multicloud Interconnect.

AWS Interconnect multicloud dashboard

Select a provider

The first step is selecting your cloud provider. Currently, only Google Cloud is available.

Select Google Cloud as the provider

Select regions

Choose the AWS region and the corresponding Google Cloud region for your interconnect. Your region choices determine the physical connection path, which affects latency and performance. In my setup, I'm using eu-central-1 (Frankfurt) on the AWS side and europe-west3 (Frankfurt) on the GCP side - keeping both in the same metro for the lowest latency.

Select AWS and GCP regions

Configure options

Configure the interconnect details: give it a description, select a Direct Connect gateway (or create one), choose your bandwidth, and add any tags. On the right side, you'll see the option to create a new Direct Connect gateway if you don't have one yet.

Configure interconnect options and Direct Connect gateway

Once submitted, AWS provisions the interconnect and generates an activation key. This key ties the AWS side to the GCP side - copy it; you'll need it in the next step.

Activation key generated by AWS

 

Step 2: Create the Transport in Google Cloud

Now switch to the Google Cloud Console. Navigate to Network Connectivity > Partner Cross-Cloud Interconnect and click Create Transport.

GCP Partner Cross-Cloud Interconnect dashboard

Connection start point

Paste the activation key from AWS. Google Cloud will automatically detect the remote cloud provider and region. A transport profile is pre-provisioned for you.

Paste activation key and transport profile

Transport profile

The transport profile confirms the connection details: the remote cloud service provider (Amazon Web Services), region, description, and bandwidth.

Transport profile details

Basic configuration

Configure the transport name, bandwidth, IP stack type (IPv4 single stack), and transport connectivity settings.

Basic configuration

Connection

Select the appropriate VPC on the GCP side, and specify which IP ranges GCP should advertise towards AWS.

Connection configuration

Verify the transport via CLI

You can also verify and manage the transport using gcloud. Note that at the time of writing, the transport commands are only available in the beta track:

gcloud beta network-connectivity transports list

NAME: gcp-to-aws
REGION: europe-west3
REMOTE_PROFILE: aws-eu-central-1
BANDWIDTH: BPS_1G
STATE: ACTIVE

For more details, use describe with the --region flag:

gcloud beta network-connectivity transports describe gcp-to-aws --region europe-west3
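Since the peering step needs the transport's peeringNetwork URI, you can pull just that field with a --format filter - standard gcloud output formatting, assuming the field name matches the describe output:

```shell
# Extract only the managed peering network URI from the transport,
# for use as --peer-network when creating the VPC peering.
gcloud beta network-connectivity transports describe gcp-to-aws \
    --region europe-west3 \
    --format="value(peeringNetwork)"
```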

Set up VPC Network Peering via CLI

Once the transport is active, create the VPC Network Peering between your VPC and the transport's managed network. You can find the peering network URI in the transport's peeringNetwork field from the describe output above.

gcloud compute networks peerings create "gcp-to-aws" \
    --network="gcp-vpc" \
    --peer-network="projects/n088e7d12bbcf2d64p-tp/global/networks/transport-5c75b1ed8bc1eeec-vpc" \
    --stack-type=IPV4_ONLY \
    --import-custom-routes \
    --export-custom-routes

Make sure to enable --import-custom-routes and --export-custom-routes so that routes are exchanged between your VPC and the interconnect.
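To confirm the peering came up and route exchange is configured, list the peerings on your VPC:

```shell
# Verify the peering is ACTIVE and custom route import/export is enabled.
gcloud compute networks peerings list --network=gcp-vpc
```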

 

Step 3: Wait for the Connection to Come Up

Since we initiated the connection from the AWS side and pasted the activation key into GCP, there's no additional key exchange needed. The GCP side confirms there is no pairing key to share back - the activation key from AWS was sufficient.

GCP transport - no pairing key needed

Back in the AWS console, the interconnect status will transition from Pending to Available automatically once both sides have completed their configuration. No manual acceptance is required when the connection is initiated from AWS.

AWS interconnect status changed to available

 

Step 4: AWS Networking Setup

With the interconnect link established, you now need to wire up the AWS networking side to make traffic flow.

Create a Transit Gateway

Create a Transit Gateway to act as the central hub for routing between your VPCs and the interconnect.

Create Transit Gateway
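If you prefer the CLI, the Transit Gateway can be created and attached with standard EC2 commands - a sketch with placeholder IDs:

```shell
# Create the Transit Gateway that will act as the routing hub.
aws ec2 create-transit-gateway \
    --description "Hub for GCP interconnect" \
    --region eu-central-1

# Attach your workload VPC to it (all IDs below are placeholders).
aws ec2 create-transit-gateway-vpc-attachment \
    --transit-gateway-id tgw-0123456789abcdef0 \
    --vpc-id vpc-0123456789abcdef0 \
    --subnet-ids subnet-0123456789abcdef0 \
    --region eu-central-1
```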

Direct Connect Gateway

The Direct Connect gateway bridges the interconnect and your AWS networking. Link it to the Transit Gateway so traffic can flow between your VPCs and GCP.

Direct Connect gateway configuration
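The same association can be scripted with the Direct Connect CLI - a sketch with placeholder IDs, folding the allowed-prefix configuration from the next step into the same call:

```shell
# Associate the Direct Connect gateway with the Transit Gateway,
# advertising the AWS-side CIDR towards GCP at the same time.
aws directconnect create-direct-connect-gateway-association \
    --direct-connect-gateway-id 11111111-2222-3333-4444-555555555555 \
    --gateway-id tgw-0123456789abcdef0 \
    --add-allowed-prefixes-to-direct-connect-gateway cidr=192.168.0.0/24
```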

Configure Allowed Prefixes

Finally, specify the allowed prefixes - these are the AWS-side CIDR ranges that will be advertised towards GCP over the interconnect.

Associate gateway with allowed prefixes
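One thing worth sanity-checking before advertising prefixes on either side: the AWS and GCP address ranges must not overlap, or routing breaks. A small bash helper to catch this early - the first pair of CIDRs are the ones from this walkthrough, the second pair is a deliberately colliding example:

```shell
#!/usr/bin/env bash
# Check whether two CIDR blocks overlap. Pure bash, no external tools.

cidr_to_range() {          # prints "start end" as 32-bit integers
  local ip=${1%/*} len=${1#*/} a b c d
  IFS=. read -r a b c d <<< "$ip"
  local start=$(( ((a<<24) | (b<<16) | (c<<8) | d) & (0xFFFFFFFF << (32-len)) ))
  echo "$start $(( start + (1 << (32-len)) - 1 ))"
}

overlaps() {               # prints OVERLAP or OK for two CIDRs
  local s1 e1 s2 e2
  read -r s1 e1 <<< "$(cidr_to_range "$1")"
  read -r s2 e2 <<< "$(cidr_to_range "$2")"
  if (( s1 <= e2 && s2 <= e1 )); then echo OVERLAP; else echo OK; fi
}

overlaps 192.168.0.0/24 10.0.0.0/24   # OK - safe to advertise both
overlaps 10.0.0.0/16 10.0.128.0/20    # OVERLAP - would break routing
```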

 

Step 5: Verify the Connection

AWS side: Transit Gateway route table

Once everything is associated, check the Transit Gateway route table. You should see routes learned from the GCP side via the interconnect.

Transit Gateway route table with learned routes
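The same check works from the CLI by filtering the route table for propagated routes - the route table ID below is a placeholder:

```shell
# List routes the Transit Gateway learned (propagated) via the interconnect.
aws ec2 search-transit-gateway-routes \
    --transit-gateway-route-table-id tgw-rtb-0123456789abcdef0 \
    --filters "Name=type,Values=propagated" \
    --region eu-central-1
```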

GCP side: Transport details

Back in Google Cloud, the transport details should show an Active status, confirming the connection is up and running.

GCP transport details showing active status

GCP side: VPC routes

Finally, check your GCP VPC routes. You should see the AWS prefixes appearing in the route table, learned through the cross-cloud interconnect.

GCP VPC routes showing AWS prefixes
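You can also inspect the imported routes from the CLI, using the peering and VPC names from this walkthrough:

```shell
# List routes imported from the peered transport network -
# these should include the AWS-side prefixes.
gcloud compute networks peerings list-routes gcp-to-aws \
    --network=gcp-vpc \
    --region=europe-west3 \
    --direction=INCOMING
```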

Ping test

With routes in place on both sides, let's verify end-to-end connectivity. From the AWS EC2 instance (192.168.0.10), pinging the GCP VM (10.0.0.2):

ubuntu@ip-192-168-0-10:~$ ping 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=61 time=4.16 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=61 time=1.60 ms
64 bytes from 10.0.0.2: icmp_seq=3 ttl=61 time=1.62 ms
--- 10.0.0.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.598/2.456/4.155/1.201 ms

And from the GCP VM (10.0.0.2), pinging back to the AWS instance:

gergo@gcp-test-vm:~$ ping 192.168.0.10
PING 192.168.0.10 (192.168.0.10) 56(84) bytes of data.
64 bytes from 192.168.0.10: icmp_seq=1 ttl=61 time=2.42 ms
64 bytes from 192.168.0.10: icmp_seq=2 ttl=61 time=1.33 ms
64 bytes from 192.168.0.10: icmp_seq=3 ttl=61 time=1.44 ms
64 bytes from 192.168.0.10: icmp_seq=4 ttl=61 time=1.36 ms
--- 192.168.0.10 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms
rtt min/avg/max/mdev = 1.334/1.636/2.419/0.453 ms

Sub-2ms latency between Frankfurt regions - that's what you'd expect from a dedicated interconnect within the same metro.

 

First Impressions

After setting this up, a few things stand out:

What I liked:

  • The activation key / pairing key exchange is straightforward - it's similar to how Partner Interconnect works in GCP today
  • End-to-end setup took under an hour, which is remarkable compared to traditional cross-connect provisioning
  • The integration with existing AWS networking primitives (Direct Connect gateway, Transit Gateway) means you can plug this into an existing hub-and-spoke architecture without redesigning anything

What to watch out for:

  • The Direct Connect gateway and Transit Gateway association is an extra step that could trip up users who are new to AWS networking. Alternatively, a virtual private gateway can be associated with the Direct Connect gateway instead of a Transit Gateway.
  • Terraform support is not yet available, though it's being tracked - for now, it's console/CLI only

What I'm curious about:

  • How the free 500 Mbps tier (coming May 2026) will work in practice
  • Performance characteristics compared to VPN-over-internet approaches
  • How Azure and Oracle Cloud integrations will look when they launch later this year - cross-cloud connectivity is already possible between Azure and Oracle Cloud, so it will be interesting to see how the AWS Interconnect approach compares

 

Conclusion

AWS Interconnect - multicloud is a significant step forward for multi-cloud networking. It removes the biggest friction point - the physical connectivity - and turns what used to be a weeks-long procurement process into something you can set up in an afternoon. If you're running workloads across AWS and Google Cloud, this is worth evaluating immediately, especially with the free 500 Mbps tier on the horizon.

 
