
Gergo Vadasz

Posted on • Originally published at gergovadasz.hu

Connecting Your Hybrid Cloud with GCP Connectivity Center and Router Appliance - with Terraform

In hybrid cloud environments, connecting on-premises networks to Google Cloud in a scalable and manageable way is a common challenge. Google Cloud's Network Connectivity Center (NCC) implements a hub-and-spoke model for streamlined network management.

In this post, I'll walk through deploying two VPCs: one serving as the hub side with a Router Appliance VM, and another with a standard VM instance for testing connectivity.

 

What is GCP Network Connectivity Center?

Network Connectivity Center (NCC) is Google Cloud's centralized network connectivity management solution. It enables a hub-and-spoke topology, simplifying connections across VPCs, VPNs, interconnects, and SD-WAN gateways in hybrid and multi-cloud settings.

Key benefits:

  • Reduced complexity in peering and route configuration
  • Enhanced visibility into hybrid network topology
  • Scalable network expansion through spoke attachment

 

Architecture

GCP NCC Architecture

The environment consists of:

  • A Connectivity Center hub
  • Internal VPC spoke with test VM
  • Router appliance VPC
  • Router appliance VM attached as spoke
  • Cloud Router for BGP route exchange

 

Step-by-Step Setup

1. Create the Internal VPC

Create an internal VPC with a 10.0.0.0/24 subnet and deploy a Linux VM in it.

Internal VPC

Internal VM
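Since the whole environment is provisioned with Terraform, step 1 can be sketched roughly as follows. Resource names, region, machine type, and image are illustrative assumptions, not the exact values from my code:

```hcl
# Internal VPC with a 10.0.0.0/24 subnet and a test VM (names assumed)
resource "google_compute_network" "internal" {
  name                    = "internal-vpc"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "internal" {
  name          = "internal-subnet"
  network       = google_compute_network.internal.id
  ip_cidr_range = "10.0.0.0/24"
  region        = "europe-west1"
}

resource "google_compute_instance" "internal_vm" {
  name         = "internal-vm"
  machine_type = "e2-small"
  zone         = "europe-west1-b"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-12"
    }
  }

  network_interface {
    subnetwork = google_compute_subnetwork.internal.id
  }
}
```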

 

2. Create the Router Appliance VPC

Create a router appliance VPC with a 10.1.0.0/24 subnet and deploy a Linux VM that will serve as the router appliance.

Router Appliance VPC

Router Appliance VM
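A rough Terraform sketch of step 2, again with assumed names and sizes. The one detail that matters for a router appliance is `can_ip_forward = true`, since without it GCP drops packets the VM forwards on behalf of other sources:

```hcl
# Router appliance VPC with a 10.1.0.0/24 subnet and the NVA VM
resource "google_compute_network" "nva" {
  name                    = "nva-vpc"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "nva" {
  name          = "nva-subnet"
  network       = google_compute_network.nva.id
  ip_cidr_range = "10.1.0.0/24"
  region        = "europe-west1"
}

resource "google_compute_instance" "nva" {
  name           = "nva-instance"
  machine_type   = "e2-small"
  zone           = "europe-west1-b"
  can_ip_forward = true # required so the VM may forward traffic it doesn't own

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-12"
    }
  }

  network_interface {
    subnetwork = google_compute_subnetwork.nva.id
  }
}
```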

 

3. Create the Cloud Router

Create a GCP Cloud Router in the router appliance subnet with AS number 64512.

Cloud Router
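In Terraform, the Cloud Router for step 3 could look like this (name and region are assumptions; the ASN matches the 64512 used in this demo):

```hcl
# Cloud Router in the router appliance VPC, local AS 64512
resource "google_compute_router" "ncc" {
  name    = "ncc-router"
  network = google_compute_network.nva.id
  region  = "europe-west1"

  bgp {
    asn = 64512
  }
}
```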

 

4. Create the NCC Hub and Spokes

Create an NCC Hub, then attach two spokes: one VPC spoke (internal VPC) and one Router Appliance spoke.

NCC Hub

NCC Spokes

Configure the Router Appliance spoke with BGP using AS number 65001.

BGP Configuration
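A hedged Terraform sketch of step 4: the hub, the two spokes, and one of the two BGP sessions from the Cloud Router to the appliance. Names and the appliance IP (10.1.0.2) are assumptions; the ASNs and the Cloud Router interface IP match the demo:

```hcl
resource "google_network_connectivity_hub" "hub" {
  name = "ncc-hub"
}

# VPC spoke for the internal VPC
resource "google_network_connectivity_spoke" "vpc_spoke" {
  name     = "internal-vpc-spoke"
  location = "global"
  hub      = google_network_connectivity_hub.hub.id

  linked_vpc_network {
    uri = google_compute_network.internal.self_link
  }
}

# Router Appliance spoke pointing at the NVA instance
resource "google_network_connectivity_spoke" "nva_spoke" {
  name     = "nva-spoke"
  location = "europe-west1"
  hub      = google_network_connectivity_hub.hub.id

  linked_router_appliance_instances {
    instances {
      virtual_machine = google_compute_instance.nva.self_link
      ip_address      = "10.1.0.2" # assumed appliance IP
    }
    site_to_site_data_transfer = true
  }
}

# One Cloud Router interface + BGP peer towards the appliance (AS 65001);
# a second interface/peer pair (10.1.0.5) would be defined the same way
resource "google_compute_router_interface" "peer_if" {
  name               = "nva-if-0"
  router             = google_compute_router.ncc.name
  region             = "europe-west1"
  subnetwork         = google_compute_subnetwork.nva.self_link
  private_ip_address = "10.1.0.4"
}

resource "google_compute_router_peer" "nva_peer" {
  name                      = "nva-peer-0"
  router                    = google_compute_router.ncc.name
  region                    = "europe-west1"
  interface                 = google_compute_router_interface.peer_if.name
  peer_ip_address           = "10.1.0.2"
  peer_asn                  = 65001
  router_appliance_instance = google_compute_instance.nva.self_link
}
```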

 

5. Configure the Router Appliance VM

SSH into the Router Appliance VM. Since this demo shows a hybrid connection, we need to simulate routing towards a remote location. In real-life scenarios, this could be a remote data center, a branch office, an SD-WAN solution, or even another cloud environment.

Create a loopback interface with a 192.168.0.0/24 IP to simulate the remote network:

# Create loopback IP address (requires root)
sudo ip addr add 192.168.0.1/24 dev lo

# Add entry to the route table
sudo ip route add 192.168.0.0/24 dev lo

Verify the configuration:

gergo@nva-instance:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet 192.168.0.1/24 scope global lo

 

6. Install FRR and Configure BGP

# Install FRR
sudo apt update && sudo apt install -y frr

# Enable BGP in FRR
sudo sed -i 's/bgpd=no/bgpd=yes/' /etc/frr/daemons

# Restart FRR for the change to take effect
sudo systemctl restart frr

# Configure BGP
sudo vtysh -c 'conf t' \
-c 'route-map ACCEPT-ALL permit 10' \
-c 'exit' \
-c 'router bgp 65001' \
-c 'neighbor 10.1.0.4 remote-as 64512' \
-c 'neighbor 10.1.0.4 description "GCP Peer 1"' \
-c 'neighbor 10.1.0.4 ebgp-multihop' \
-c 'neighbor 10.1.0.4 disable-connected-check' \
-c 'neighbor 10.1.0.5 remote-as 64512' \
-c 'neighbor 10.1.0.5 description "GCP Peer 2"' \
-c 'neighbor 10.1.0.5 ebgp-multihop' \
-c 'neighbor 10.1.0.5 disable-connected-check' \
-c 'address-family ipv4 unicast' \
-c 'network 192.168.0.0/24' \
-c 'neighbor 10.1.0.4 soft-reconfiguration inbound' \
-c 'neighbor 10.1.0.4 route-map ACCEPT-ALL in' \
-c 'neighbor 10.1.0.4 route-map ACCEPT-ALL out' \
-c 'neighbor 10.1.0.5 soft-reconfiguration inbound' \
-c 'neighbor 10.1.0.5 route-map ACCEPT-ALL in' \
-c 'neighbor 10.1.0.5 route-map ACCEPT-ALL out' \
-c 'end' \
-c 'write'
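In the Terraform version of this demo, the appliance configuration above doesn't have to be typed in by hand: it can be baked into the instance at boot. A minimal sketch, extending the `nva-instance` resource from step 2 (names, zone, and image are assumptions):

```hcl
# Same nva-instance resource as in step 2, now with a startup script
# that applies the loopback and FRR/BGP configuration on first boot
resource "google_compute_instance" "nva" {
  name           = "nva-instance"
  machine_type   = "e2-small"
  zone           = "europe-west1-b"
  can_ip_forward = true

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-12"
    }
  }

  network_interface {
    subnetwork = google_compute_subnetwork.nva.id
  }

  metadata_startup_script = <<-EOT
    #!/bin/bash
    set -e
    # Simulated remote network on the loopback
    ip addr add 192.168.0.1/24 dev lo || true
    # Install FRR and enable bgpd
    apt-get update -y && apt-get install -y frr
    sed -i 's/bgpd=no/bgpd=yes/' /etc/frr/daemons
    systemctl restart frr
    # BGP towards both Cloud Router interfaces; the ACCEPT-ALL
    # route-maps are needed because FRR refuses eBGP routes
    # without an explicit policy by default
    vtysh -c 'conf t' \
      -c 'route-map ACCEPT-ALL permit 10' -c 'exit' \
      -c 'router bgp 65001' \
      -c 'neighbor 10.1.0.4 remote-as 64512' \
      -c 'neighbor 10.1.0.4 ebgp-multihop' \
      -c 'neighbor 10.1.0.4 disable-connected-check' \
      -c 'neighbor 10.1.0.5 remote-as 64512' \
      -c 'neighbor 10.1.0.5 ebgp-multihop' \
      -c 'neighbor 10.1.0.5 disable-connected-check' \
      -c 'address-family ipv4 unicast' \
      -c 'network 192.168.0.0/24' \
      -c 'neighbor 10.1.0.4 route-map ACCEPT-ALL in' \
      -c 'neighbor 10.1.0.4 route-map ACCEPT-ALL out' \
      -c 'neighbor 10.1.0.5 route-map ACCEPT-ALL in' \
      -c 'neighbor 10.1.0.5 route-map ACCEPT-ALL out' \
      -c 'end' -c 'write'
  EOT
}
```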

 

Verifying Route Exchange

After configuration, the BGP sessions with both Cloud Router interfaces should be established:

nva-instance# show ip bgp summary

IPv4 Unicast Summary:
BGP router identifier 192.168.0.1, local AS number 65001 VRF default vrf-id 0
BGP table version 2
RIB entries 3, using 384 bytes of memory
Peers 2, using 47 KiB of memory

Neighbor        V         AS   MsgRcvd   MsgSent   TblVer  InQ OutQ  Up/Down State/PfxRcd   PfxSnt Desc
10.1.0.4        4      64512       101       104        2    0    0 00:32:29            1        2 GCP Peer 1
10.1.0.5        4      64512       101       104        2    0    0 00:32:29            1        2 GCP Peer 2

The BGP table shows the internal VPC subnet learned from Cloud Router, and the 192.168.0.0/24 loopback advertised locally:

nva-instance# show ip bgp
BGP table version is 2, local router ID is 192.168.0.1, vrf id 0

     Network          Next Hop            Metric LocPrf Weight Path
 *>  10.1.0.0/24      10.1.0.1               100             0 64512 ?
 *=                   10.1.0.1               100             0 64512 ?
 *>  192.168.0.0/24   0.0.0.0                  0         32768 i

 

Route Tables in Google Cloud Console

The Router Appliance VPC routing table shows the NCC Hub advertising the internal VPC, and the Router Appliance advertising 192.168.0.0/24:

Router Appliance VPC routes

The Internal VPC routing table shows 192.168.0.0/24 being advertised with next hop as NCC Hub:

Internal VPC routes

 

Connectivity Test

With firewall rules permitting ICMP traffic between the networks, the connectivity test from the internal VM succeeds:

gergo@internal-vm:~$ ping 192.168.0.1
PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data.
64 bytes from 192.168.0.1: icmp_seq=1 ttl=64 time=0.823 ms
64 bytes from 192.168.0.1: icmp_seq=2 ttl=64 time=0.336 ms
64 bytes from 192.168.0.1: icmp_seq=3 ttl=64 time=0.268 ms
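The firewall rules mentioned above could be expressed in Terraform roughly like this. The rule name and source ranges are assumptions; a matching rule would be needed in the router appliance VPC as well:

```hcl
# Allow ICMP (and SSH for the demo) into the internal VPC from the
# other VPC and the simulated remote network
resource "google_compute_firewall" "internal_allow" {
  name    = "internal-allow-icmp-ssh"
  network = google_compute_network.internal.id

  allow {
    protocol = "icmp"
  }

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }

  source_ranges = ["10.1.0.0/24", "192.168.0.0/24"]
}
```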

 

Conclusion

NCC is a powerful tool that can simplify networking setup and operation in Google Cloud. From here, you could expand the demo with additional VPCs, more Cloud Routers, or failover scenario testing.

The complete Terraform code provisions everything automatically — NCC hub, spokes, Router Appliance, Cloud Router, VMs, and even the BGP routing. No manual steps required in the Google Cloud Console.

Check out the full Terraform code and guide at gergovadasz.hu.


Originally published on gergovadasz.hu. I write hands-on cloud networking guides with production-ready Terraform code for AWS, Azure, and GCP. Subscribe for more.
