In this tutorial we build a complete AKS networking lab to demonstrate:
- AKS Overlay networking
- AKS Underlay (Azure CNI) networking
- Cross-region VNet peering
- Pod-to-pod communication across clusters
- Access to Azure SQL via Private Endpoint
- Service exposure via NodePort
```
Region 1 (Southeast Asia)              Region 2 (East Asia)
VNet: 10.0.0.0/16                      VNet: 10.1.0.0/16
--------------------------------       --------------------------------
AKS Overlay Cluster                    AKS Underlay Cluster
  Nodes: 10.0.x.x                        Nodes: 10.1.x.x
  Pods : 192.168.x.x                     Pods : 10.1.x.x

Private Endpoint -> Azure SQL (10.0.4.4)
Private DNS Zone: privatelink.database.windows.net

                VNet Peering
       10.0.0.0/16 <----> 10.1.0.0/16
```
## 1. Create the Overlay AKS Cluster
Overlay networking allows pods to use non-VNet IP ranges, which prevents subnet exhaustion.
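The capacity difference is easy to quantify with a back-of-the-envelope sketch in plain shell (subnet sizes taken from this lab; Azure reserves 5 addresses per subnet):

```shell
# Usable node IPs in the 10.0.1.0/24 subnet (Azure reserves 5 addresses per subnet)
subnet_bits=24
echo $(( (1 << (32 - subnet_bits)) - 5 ))   # 251

# Pod IPs available from the 192.168.0.0/16 overlay CIDR, none taken from the VNet
pod_bits=16
echo $(( 1 << (32 - pod_bits) ))            # 65536
```

With overlay networking, only nodes draw from the 251 usable subnet addresses; pods come from the separate 65,536-address overlay range.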
### Create the resource group

```bash
RG_NETWORK=rg-aks-network
LOCATION=southeastasia

az group create \
  --name $RG_NETWORK \
  --location $LOCATION
```
### Create the VNet

```bash
VNET_NAME=vnet-aks-overlay

az network vnet create \
  --resource-group $RG_NETWORK \
  --name $VNET_NAME \
  --address-prefix 10.0.0.0/16 \
  --subnet-name aks-subnet \
  --subnet-prefix 10.0.1.0/24
```
### Create the overlay AKS cluster

```bash
AKS_OVERLAY=aks-overlay

SUBNET_ID=$(az network vnet subnet show \
  --resource-group $RG_NETWORK \
  --vnet-name $VNET_NAME \
  --name aks-subnet \
  --query id -o tsv)

az aks create \
  --resource-group $RG_NETWORK \
  --name $AKS_OVERLAY \
  --location $LOCATION \
  --node-count 2 \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --pod-cidr 192.168.0.0/16 \
  --vnet-subnet-id $SUBNET_ID \
  --generate-ssh-keys
```
### Connect kubectl

```bash
az aks get-credentials \
  --resource-group $RG_NETWORK \
  --name $AKS_OVERLAY \
  --context aks-overlay
```
### Verify overlay pod networking

Deploy a test pod:

```bash
kubectl run overlay-test \
  --image=busybox \
  --restart=Never \
  -- sleep 3600
```

Check the pod IP:

```bash
kubectl get pods -o wide
```

Example output:

```
overlay-test   192.168.0.4
```

Overlay pods use 192.168.x.x addresses, not VNet IPs.
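To double-check which range an address belongs to, a small pure-shell helper can test CIDR membership (the helper names are illustrative, not part of the lab):

```shell
# Convert a dotted-quad IP to a 32-bit integer
ip_to_int() { local IFS=.; set -- $1; echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 )); }

# Succeed if the IP falls inside the CIDR
in_cidr() {
  local ip base bits mask
  ip=$(ip_to_int "$1")
  base=$(ip_to_int "${2%/*}"); bits=${2#*/}
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq $(( base & mask )) ]
}

in_cidr 192.168.0.4 192.168.0.0/16 && echo "in the pod CIDR"     # in the pod CIDR
in_cidr 192.168.0.4 10.0.0.0/16    || echo "not a VNet IP"       # not a VNet IP
```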
## 2. Create Azure SQL with Private Endpoint
### Create the SQL server

```bash
SQL_SERVER=aks-lab-sql-31445

az sql server create \
  --name $SQL_SERVER \
  --resource-group $RG_NETWORK \
  --location $LOCATION \
  --admin-user sqladmin \
  --admin-password <password>
```
### Create the private endpoint

```bash
SQL_ID=$(az sql server show \
  --name $SQL_SERVER \
  --resource-group $RG_NETWORK \
  --query id -o tsv)

az network private-endpoint create \
  --resource-group $RG_NETWORK \
  --name sql-pe \
  --vnet-name $VNET_NAME \
  --subnet aks-subnet \
  --private-connection-resource-id $SQL_ID \
  --group-id sqlServer \
  --connection-name sql-pe-connection
```
### Create the Private DNS zone

```bash
az network private-dns zone create \
  --resource-group $RG_NETWORK \
  --name privatelink.database.windows.net
```
### Link the overlay VNet

```bash
az network private-dns link vnet create \
  --resource-group $RG_NETWORK \
  --zone-name privatelink.database.windows.net \
  --name link-overlay-vnet \
  --virtual-network $VNET_NAME \
  --registration-enabled false
```

Note that the private endpoint's A record must also be registered in this zone (for example via a private-endpoint DNS zone group) before name resolution returns the private IP.
### Test SQL access from the overlay pod

```bash
kubectl exec -it overlay-test -- sh
nslookup <sql-server>.database.windows.net
```

Expected: a private address in the VNet range:

```
10.0.x.x
```
## 3. Create the Underlay AKS Cluster in Another Region

In an underlay (classic Azure CNI) cluster, pods receive real IP addresses directly from the VNet subnet.
### Create the resource group

```bash
RG_UNDERLAY=rg-aks-underlay
UNDERLAY_LOCATION=eastasia

az group create \
  --name $RG_UNDERLAY \
  --location $UNDERLAY_LOCATION
```
### Create the VNet

```bash
VNET_UNDERLAY=vnet-aks-underlay

az network vnet create \
  --resource-group $RG_UNDERLAY \
  --name $VNET_UNDERLAY \
  --location $UNDERLAY_LOCATION \
  --address-prefix 10.1.0.0/16 \
  --subnet-name aks-subnet \
  --subnet-prefix 10.1.1.0/24
```
### Create the underlay AKS cluster

```bash
SUBNET_UNDERLAY=$(az network vnet subnet show \
  --resource-group $RG_UNDERLAY \
  --vnet-name $VNET_UNDERLAY \
  --name aks-subnet \
  --query id -o tsv)

az aks create \
  --resource-group $RG_UNDERLAY \
  --name aks-underlay \
  --location $UNDERLAY_LOCATION \
  --node-count 2 \
  --network-plugin azure \
  --vnet-subnet-id $SUBNET_UNDERLAY \
  --service-cidr 172.17.0.0/16 \
  --dns-service-ip 172.17.0.10 \
  --generate-ssh-keys
```
### Connect kubectl

```bash
az aks get-credentials \
  --resource-group $RG_UNDERLAY \
  --name aks-underlay \
  --context aks-underlay
```
### Verify the pod IP

```bash
kubectl --context aks-underlay run underlay-test \
  --image=busybox \
  --restart=Never \
  -- sleep 3600

kubectl --context aks-underlay get pods -o wide
```

Example:

```
10.1.x.x
```

Pods now consume VNet IPs.
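This is where subnet exhaustion pressure comes from: classic Azure CNI pre-allocates one VNet IP per potential pod on every node. A quick estimate in plain shell (assuming the Azure CNI default `--max-pods` of 30):

```shell
nodes=2
max_pods=30                                    # Azure CNI default per-node pod limit
usable=$(( (1 << (32 - 24)) - 5 ))             # usable IPs in the 10.1.1.0/24 subnet
needed=$(( nodes + nodes * max_pods ))         # node NICs + pre-allocated pod IPs
echo "$needed of $usable subnet IPs consumed"  # 62 of 251 subnet IPs consumed
```

Just two small nodes already reserve a quarter of the subnet; scaling the node pool multiplies this.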
## 4. Peer the VNets

The overlay and underlay clusters live in different VNets, so the VNets must be peered in both directions before their pods can communicate.
### Overlay → Underlay

```bash
az network vnet peering create \
  --resource-group $RG_NETWORK \
  --name overlay-to-underlay \
  --vnet-name $VNET_NAME \
  --remote-vnet /subscriptions/<sub>/resourceGroups/$RG_UNDERLAY/providers/Microsoft.Network/virtualNetworks/$VNET_UNDERLAY \
  --allow-vnet-access
```
### Underlay → Overlay

```bash
az network vnet peering create \
  --resource-group $RG_UNDERLAY \
  --name underlay-to-overlay \
  --vnet-name $VNET_UNDERLAY \
  --remote-vnet /subscriptions/<sub>/resourceGroups/$RG_NETWORK/providers/Microsoft.Network/virtualNetworks/$VNET_NAME \
  --allow-vnet-access
```
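Peering works here because the two address spaces (10.0.0.0/16 and 10.1.0.0/16) do not overlap; overlapping VNets cannot be peered. A quick pure-shell sanity check (an illustrative helper, not an Azure command):

```shell
# Convert a dotted-quad IP to a 32-bit integer
ip_to_int() { local IFS=.; set -- $1; echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 )); }

# Two CIDRs overlap iff they agree under the shorter prefix's mask
overlaps() {
  local a=$(ip_to_int "${1%/*}") ab=${1#*/}
  local b=$(ip_to_int "${2%/*}") bb=${2#*/}
  local bits=$(( ab < bb ? ab : bb ))
  local mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( a & mask )) -eq $(( b & mask )) ]
}

overlaps 10.0.0.0/16 10.1.0.0/16 && echo "overlap" || echo "safe to peer"   # safe to peer
```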
## 5. Link Private DNS to the Underlay VNet

Without this step, the underlay cluster resolves the SQL server's name to its public endpoint instead of the private one.
```bash
az network private-dns link vnet create \
  --resource-group $RG_NETWORK \
  --zone-name privatelink.database.windows.net \
  --name link-underlay-vnet \
  --virtual-network /subscriptions/<sub>/resourceGroups/$RG_UNDERLAY/providers/Microsoft.Network/virtualNetworks/$VNET_UNDERLAY \
  --registration-enabled false
```
## 6. Communication Testing Scenarios
### Scenario 1 — Overlay Pod → Underlay Pod

Get the underlay pod IP:

```bash
kubectl --context aks-underlay get pods -o wide
```

Example:

```
10.1.1.18
```

From the overlay pod:

```bash
kubectl --context aks-overlay exec -it overlay-test -- sh
ping 10.1.1.18
```

A successful ping proves cross-region VNet routing works.
### Scenario 2 — Underlay Pod → Azure SQL Private Endpoint

Enter the underlay pod:

```bash
kubectl --context aks-underlay exec -it underlay-test -- sh
nslookup <sql-server>.database.windows.net
```

Result:

```
10.0.4.4
```

Test the SQL port:

```bash
nc -zv <sql-server>.database.windows.net 1433
```

Expected:

```
Connection succeeded
```
### Scenario 3 — Overlay Pod → Underlay Service

Deploy nginx in the underlay cluster:

```bash
kubectl --context aks-underlay create deployment nginx --image=nginx
```

Expose it via NodePort:

```bash
kubectl --context aks-underlay expose deployment nginx \
  --type NodePort \
  --port 80 \
  --name nginx-nodeport
```

Check the assigned NodePort:

```bash
kubectl --context aks-underlay get svc nginx-nodeport
```

Example:

```
80:31904/TCP
```

Get a node IP:

```bash
kubectl --context aks-underlay get nodes -o wide
```

Example:

```
10.1.1.33
```

Test from the overlay pod:

```bash
kubectl --context aks-overlay exec -it overlay-test -- sh
wget -qO- http://10.1.1.33:31904
```

Result:

```
Welcome to nginx!
```
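For reference, the `kubectl expose` command above is roughly equivalent to this Service manifest (a sketch: the `app: nginx` selector comes from the label that `kubectl create deployment` applies, and the nodePort is auto-assigned from the Kubernetes default 30000-32767 range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    app: nginx        # label applied by `kubectl create deployment nginx`
  ports:
    - port: 80        # Service port inside the cluster
      targetPort: 80  # container port on the nginx pods
```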
## Key Networking Differences
| Feature | Overlay | Underlay |
|---|---|---|
| Pod IP | Overlay CIDR | VNet subnet |
| VNet IP usage | No | Yes |
| Subnet exhaustion risk | Low | High |
| Direct VNet reachability | No | Yes |
## Final Result
This lab demonstrates:
- Overlay networking
- Underlay networking
- Cross-region VNet peering
- AKS pod communication
- Azure SQL Private Endpoint connectivity
- Kubernetes service exposure
