DEV Community

Gergo Vadasz

Posted on • Originally published at gergovadasz.hu

Hybrid DNS with GCP Network Connectivity Center and Enterprise IPAM

I recently worked through a hybrid DNS design for a Google Cloud environment with some interesting constraints that I think are worth writing up.

The setup involved implementing a company-wide on-premises DNS system built on enterprise IPAM platforms (Infoblox, EfficientIP, or BlueCat) with two critical requirements:

  1. Security policies prohibit DNS queries originating from Google's public IP ranges
  2. The IPAM must remain the authoritative source for all DNS records, including GCP-hosted zones

The solution involved deploying virtual machines within GCP to bridge these constraints.

 

How DNS Works in Google Cloud

By default, Compute Engine instances send DNS queries to the VPC-internal resolver at 169.254.169.254 (the metadata server address); Cloud DNS answers them based on the VPC network's configuration.

Cloud DNS Zone Types

  • Private zones: Cloud DNS hosts records directly and is authoritative
  • Forwarding zones: Cloud DNS forwards queries to target name servers; with private routing, source IPs originate from 35.199.192.0/19
  • Peering zones: Cloud DNS delegates resolution to another VPC network's DNS context via metadata-plane operations (no actual DNS packets exchanged between VPCs)

The 35.199.192.0/19 Source IP Challenge

Cloud DNS forwarding zones support standard and private routing modes:

  • Standard forwarding: Source IP depends on target (public IPs use Google ranges; RFC 1918 addresses use 35.199.192.0/19)
  • Private forwarding: Forces all queries through the VPC network using 35.199.192.0/19, regardless of target IP type

Critical characteristics of this range:

Google Cloud automatically installs a non-removable route for 35.199.192.0/19 in every VPC network. This route is not visible in route tables and cannot be modified or exchanged between VPCs.

Enterprise firewalls typically block this range: although Cloud DNS uses it for private forwarding, it is Google-owned public IP space, so perimeter policies treat queries from it as coming from the internet.

 

Why This Solution Is Necessary

Problem 1: On-premises firewalls reject Cloud DNS forwarding queries arriving from 35.199.192.0/19

Problem 2: Organizations require IPAM platforms to remain the single authoritative source for all DNS records across hybrid environments

The Solution: Deploy an IPAM grid member (simulated as a BIND VM) within GCP that:

  • Is authoritative for GCP zones (e.g., gcp.example.com)
  • Forwards on-premises zone queries to on-prem DNS servers using its private IP
  • Receives all GCP workload DNS queries via Cloud DNS forwarding
  • Receives GCP zone queries from on-premises via conditional forwarding
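On a plain BIND 9 VM, those four roles map to a handful of zone stanzas. A minimal sketch follows — file paths, the ACL, and the listen address are assumptions, and the real IPAM platforms (Infoblox, EfficientIP, BlueCat) express the same concepts through their grid configuration rather than a named.conf:

```
// named.conf sketch for the DNS VM at 10.0.1.10 (BIND 9 assumed)
options {
    listen-on { 10.0.1.10; };
    recursion yes;                      // needed to answer forwarded queries
    allow-query { 10.0.0.0/8; 192.168.1.0/24; };
    query-source address 10.0.1.10;     // outbound queries use the private IP
};

// Authoritative for the GCP zone -- the IPAM owns these records
zone "gcp.example.com" {
    type master;
    file "/etc/bind/db.gcp.example.com";
};

// On-prem names are forwarded to the data-center DNS server
zone "on-prem.example.com" {
    type forward;
    forward only;
    forwarders { 192.168.1.10; };
};
```

The `query-source` line is what keeps on-premises firewalls happy: every query the VM sends upstream carries its private RFC 1918 address.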

 

Network Topology

GCP Cloud DNS Topology

NCC Hub-and-Spoke Setup

The design uses three VPC networks connected through Google Cloud's Network Connectivity Center:

Infra VPC Spoke:

  • Hosts the DNS VM (IPAM grid member)
  • Hosts the SD-WAN VM
  • Central VPC where DNS forwarding zones reside

App VPC Spoke:

  • Hosts application workloads
  • VMs query DNS through Cloud DNS, which peers to Infra VPC

SD-WAN Router Appliance Spoke:

  • The SD-WAN VM (physically in Infra VPC) registered as a router appliance in NCC
  • Runs FRR (FRRouting)
  • Peers BGP with NCC Cloud Router (ASN 64515 ↔ 64514)
  • Advertises on-premises subnet 192.168.1.0/24

The SD-WAN VM includes:

  • NIC0 in Infra VPC (for BGP peering with NCC)
  • NIC1 in On-Prem VPC (simulating WAN link to data center)
  • IP forwarding enabled for inter-network routing

NCC route exchange ensures all VPCs learn about each other's subnets, enabling the App VPC to reach on-premises resources via the SD-WAN VM.
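The FRR side of that BGP session is small. A sketch of frr.conf, assuming the appliance speaks from ASN 64515 to a Cloud Router at 64514 and that the Cloud Router's two BGP interfaces sit at 10.0.1.5 and 10.0.1.6 — all addresses here are illustrative:

```
! /etc/frr/frr.conf sketch for the SD-WAN VM (enable bgpd in
! /etc/frr/daemons and set net.ipv4.ip_forward=1 on the VM first)
router bgp 64515
 bgp router-id 10.0.1.20
 ! NCC Cloud Router interfaces (assumed addresses); Google provisions two for HA
 neighbor 10.0.1.5 remote-as 64514
 neighbor 10.0.1.6 remote-as 64514
 !
 address-family ipv4 unicast
  network 192.168.1.0/24          ! advertise the simulated on-prem subnet
  neighbor 10.0.1.5 activate
  neighbor 10.0.1.6 activate
 exit-address-family
```

Once the sessions establish, NCC redistributes 192.168.1.0/24 to every spoke, which is what lets the DNS VM reach the on-prem DNS server at 192.168.1.10.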

 

Cloud DNS Peering and Forwarding Configuration

The configuration uses four Cloud DNS managed zones; there are no private authoritative zones, so Cloud DNS never owns any records itself:

Peering Zones (in App VPC):

Zone                 DNS Name             From     To
gcp-dns-peering      gcp.example.com      App VPC  Infra VPC
onprem-dns-peering   on-prem.example.com  App VPC  Infra VPC

Forwarding Zones (in Infra VPC):

Zone                   DNS Name             Target
gcp-dns-forwarding     gcp.example.com      DNS VM (10.0.1.10)
onprem-dns-forwarding  on-prem.example.com  DNS VM (10.0.1.10)

Both forwarding zones use forwarding_path = "private" to ensure VPC routing rather than internet routing.
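In Terraform these zones look roughly like the sketch below — the network references and resource names are assumptions; one peering zone and one forwarding zone are shown, with the on-prem.example.com pair following the same pattern:

```hcl
# Peering zone in the App VPC: delegate gcp.example.com to the Infra VPC's
# DNS context (metadata-plane only, no packets cross VPCs)
resource "google_dns_managed_zone" "gcp_dns_peering" {
  name       = "gcp-dns-peering"
  dns_name   = "gcp.example.com."
  visibility = "private"

  private_visibility_config {
    networks {
      network_url = google_compute_network.app.id
    }
  }

  peering_config {
    target_network {
      network_url = google_compute_network.infra.id
    }
  }
}

# Forwarding zone in the Infra VPC: hand gcp.example.com to the DNS VM,
# with private routing so the query stays on the VPC network
resource "google_dns_managed_zone" "gcp_dns_forwarding" {
  name       = "gcp-dns-forwarding"
  dns_name   = "gcp.example.com."
  visibility = "private"

  private_visibility_config {
    networks {
      network_url = google_compute_network.infra.id
    }
  }

  forwarding_config {
    target_name_servers {
      ipv4_address    = "10.0.1.10"
      forwarding_path = "private"
    }
  }
}
```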

 

Why Peering Instead of Cross-VPC Forwarding?

Cloud DNS forwarding cannot work across VPCs because of the 35.199.192.0/19 return path behavior.

When Cloud DNS uses a forwarding zone with private routing, queries originate from 35.199.192.0/19. Every VPC contains a special, non-removable route for this range pointing to that VPC's own Cloud DNS context. This route is not exchanged between VPCs through any mechanism (NCC, VPC peering, etc.).

If the App VPC forwards a query to the DNS VM in the Infra VPC:

  1. The query arrives with a source IP from 35.199.192.0/19
  2. The DNS VM responds to that source IP
  3. The Infra VPC's special route for 35.199.192.0/19 points to its own Cloud DNS, not back to the App VPC
  4. The response is silently dropped

DNS peering solves this entirely by evaluating queries in the target VPC's DNS context without sending packets between VPCs, eliminating cross-VPC return path issues.


 

Traffic Flows

Flow 1: GCP Workload Resolves a GCP Hostname

Query: app-server.gcp.example.com from App VM (10.0.2.10)

App VM (10.0.2.10)
  ↓ dig app-server.gcp.example.com
Cloud DNS (169.254.169.254) -- App VPC context
  ↓ Peering zone: gcp.example.com → Infra VPC (metadata-plane)
Cloud DNS -- Infra VPC context
  ↓ Forwarding zone: gcp.example.com → 10.0.1.10, Source: 35.199.192.0/19
DNS VM / IPAM (10.0.1.10)
  ↓ Authoritative zone: gcp.example.com
  ↓ app-server = 10.0.2.10
Response: 10.0.2.10
  ↓ Returns to 35.199.192.0/19 (Infra VPC route, correct context)
Cloud DNS returns answer to App VM

The query remains within GCP, with Cloud DNS peering then forwarding to the DNS VM.

 

Flow 2: GCP Workload Resolves an On-Premises Hostname

Query: app1.on-prem.example.com from App VM (10.0.2.10)

App VM (10.0.2.10)
  ↓ dig app1.on-prem.example.com
Cloud DNS (169.254.169.254) -- App VPC context
  ↓ Peering zone: on-prem.example.com → Infra VPC
Cloud DNS -- Infra VPC context
  ↓ Forwarding zone: on-prem.example.com → 10.0.1.10, Source: 35.199.192.0/19
DNS VM / IPAM (10.0.1.10)
  ↓ Forward zone: on-prem.example.com → 192.168.1.10
  ↓ Source IP: 10.0.1.10 (private IP, on-prem firewall allows)
  ↓ Path: via SD-WAN VM (NCC router appliance, BGP-learned route)
On-Prem DNS (192.168.1.10)
  ↓ Authoritative zone: on-prem.example.com
  ↓ app1 = 192.168.1.50
Response travels back the same path

The on-premises DNS server sees the query from 10.0.1.10, a private RFC1918 address, never from 35.199.192.0/19.

 

Flow 3: On-Premises Resolves a GCP Hostname

Query: app-server.gcp.example.com from on-premises client

On-Prem Client
  ↓ dig app-server.gcp.example.com
On-Prem DNS (192.168.1.10)
  ↓ Conditional forwarder: gcp.example.com → 10.0.1.10
  ↓ Path: via SD-WAN VM (static route in on-prem VPC)
DNS VM / IPAM (10.0.1.10)
  ↓ Authoritative zone: gcp.example.com
  ↓ app-server = 10.0.2.10
Response: 10.0.2.10
  ↓ Returns to on-prem DNS via SD-WAN VM
On-prem client receives answer

The DNS VM resolves this locally as it is authoritative for gcp.example.com.
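The conditional forwarder on the on-prem server is the mirror image of the GCP-side configuration. Sketched in BIND syntax — the IPAM platforms expose the same thing as a forward zone or DNS forwarding rule in their UIs:

```
// On-prem DNS server (192.168.1.10): send only gcp.example.com to the
// DNS VM in the Infra VPC, reachable over the SD-WAN path
zone "gcp.example.com" {
    type forward;
    forward only;
    forwarders { 10.0.1.10; };
};
```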

 

Traffic Flow Summary

From     To                Zone                 Path
App VM   GCP hostname      gcp.example.com      Cloud DNS peer → forward → DNS VM (authoritative)
App VM   On-prem hostname  on-prem.example.com  Cloud DNS peer → forward → DNS VM → on-prem DNS
On-prem  GCP hostname      gcp.example.com      On-prem DNS → DNS VM (authoritative)
On-prem  On-prem hostname  on-prem.example.com  On-prem DNS (authoritative, local)

 

Key Design Decisions

  1. No Cloud DNS private zones: The IPAM remains the single authoritative source; Cloud DNS only peers and forwards

  2. DNS peering is mandatory for cross-VPC resolution: The 35.199.192.0/19 return path constraint makes forwarding zones across VPCs non-functional

  3. DNS VM uses private IP for on-premises queries: Queries originate from 10.0.1.10 rather than 35.199.192.0/19, allowing enterprise firewalls to accept them into trusted zones

  4. NCC provides hybrid routing: The SD-WAN VM advertises on-premises routes via BGP to NCC, which propagates them to all spoke VPCs

 

Try It Yourself

The complete Terraform code for this setup is available on my blog — it provisions the entire environment including the NCC hub, DNS VM with BIND, SD-WAN VM with FRR, and all Cloud DNS zones. A single terraform apply gets you a working lab.

Check it out at gergovadasz.hu.


Originally published on gergovadasz.hu. I write hands-on cloud networking guides with production-ready Terraform code for AWS, Azure, and GCP. Subscribe for more.
