iapilgrim
Phase 2: Deploying a Production-Ready AKS Cluster in East US 2 (Azure CNI + Managed Identity + Monitoring)

In Phase 1, we prepared:

  • Resource Group
  • Virtual Network + Subnet
  • Managed Identity
  • Provider registrations

Now we deploy a production-grade AKS cluster in eastus2 using best practices.

This is not a demo cluster.
This is how platform engineers deploy AKS in real environments.


🎯 Architecture Goal

We will create:

  • AKS attached to existing VNet
  • Azure CNI networking (not kubenet)
  • Managed Identity (no service principal)
  • OIDC issuer enabled
  • Workload Identity enabled
  • Azure Monitor integration enabled
  • Separate system/user node pools (optional)

Integrated services:

  • Kubernetes
  • Azure Virtual Network
  • Azure Monitor
  • Log Analytics

🛠 Step 1 — Get Required Resource IDs

AKS must be attached to the existing subnet created in Phase 1.

Get Subnet ID

```shell
SUBNET_ID=$(az network vnet subnet show \
  --resource-group aks-east2-rg \
  --vnet-name aks-vnet \
  --name aks-subnet \
  --query id -o tsv)
```

Get Managed Identity ID

```shell
MI_ID=$(az identity show \
  --resource-group aks-east2-rg \
  --name aks-mi \
  --query id -o tsv)

MI_PRINCIPAL_ID=$(az identity show \
  --resource-group aks-east2-rg \
  --name aks-mi \
  --query principalId -o tsv)
```

πŸ” Step 2 β€” Grant Network Permissions

AKS must manage IP allocations inside the subnet.

Assign Network Contributor role:

```shell
az role assignment create \
  --assignee $MI_PRINCIPAL_ID \
  --role "Network Contributor" \
  --scope $SUBNET_ID
```

Without this role assignment, Azure CNI cannot allocate pod IP addresses in the subnet, and cluster creation or scaling will fail.

This is one of the most common production misconfigurations.
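Before moving on, it is worth confirming the assignment actually landed (role propagation can take a minute or two). A quick check at the subnet scope:

```shell
# List role assignments held by the cluster identity at the subnet scope;
# a "Network Contributor" row should appear before you create the cluster.
az role assignment list \
  --assignee $MI_PRINCIPAL_ID \
  --scope $SUBNET_ID \
  -o table
```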


πŸ— Step 3 β€” Create Production AKS Cluster

Now we deploy.

```shell
az aks create \
  --resource-group aks-east2-rg \
  --name aks-prod-east2 \
  --location eastus2 \
  --node-count 2 \
  --node-vm-size Standard_DS3_v2 \
  --network-plugin azure \
  --vnet-subnet-id $SUBNET_ID \
  --assign-identity $MI_ID \
  --enable-managed-identity \
  --enable-oidc-issuer \
  --enable-workload-identity \
  --enable-addons monitoring \
  --generate-ssh-keys
```

πŸ” What Each Flag Does (Production Perspective)

| Flag | Why It Matters |
| --- | --- |
| `--network-plugin azure` | Enables Azure CNI (real VNet IPs for pods) |
| `--vnet-subnet-id` | Attaches the cluster to the enterprise network |
| `--assign-identity` | Uses the user-assigned managed identity |
| `--enable-oidc-issuer` | Required for workload identity |
| `--enable-workload-identity` | Modern cloud auth model |
| `--enable-addons monitoring` | Enables logs + metrics |
| `--node-vm-size Standard_DS3_v2` | Balanced CPU/memory for workloads |

This is enterprise-ready configuration.
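With OIDC and workload identity enabled, pods can later exchange a projected service account token for Azure AD tokens. As a sketch of what consuming this looks like (the `default` namespace, the `workload-sa` name, and the client ID placeholder are illustrative, not part of this deployment):

```yaml
# Hypothetical ServiceAccount wired to a user-assigned identity.
# Replace <IDENTITY_CLIENT_ID> with the clientId of the target identity.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: workload-sa
  namespace: default
  annotations:
    azure.workload.identity/client-id: "<IDENTITY_CLIENT_ID>"
---
# Pods opt in via this label; the workload identity webhook then
# injects the federated token volume and environment variables.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
  namespace: default
  labels:
    azure.workload.identity/use: "true"
spec:
  serviceAccountName: workload-sa
  containers:
    - name: app
      image: mcr.microsoft.com/azure-cli
      command: ["sleep", "infinity"]
```

A federated credential must also be created on the identity (`az identity federated-credential create`), binding the cluster's OIDC issuer URL to `system:serviceaccount:default:workload-sa`; a later phase could cover this end to end.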


🚨 Common Error (If Monitoring Fails)

If you see:

```
MissingSubscriptionRegistration:
Microsoft.OperationalInsights
```

Register provider:

```shell
az provider register --namespace Microsoft.OperationalInsights
```

This is required because monitoring depends on Log Analytics.


🔑 Step 4 — Get kubeconfig

Once deployment completes:

```shell
az aks get-credentials \
  --resource-group aks-east2-rg \
  --name aks-prod-east2
```

Verify:

```shell
kubectl get nodes -o wide
```

You should see:

  • 2 nodes
  • Private IPs from 10.240.0.0/16
  • VM size Standard_DS3_v2

🔎 Step 5 — Validate Cluster Features

Check OIDC Issuer

```shell
az aks show \
  --resource-group aks-east2-rg \
  --name aks-prod-east2 \
  --query "oidcIssuerProfile.issuerUrl"
```

If it returns a URL, OIDC is enabled.


Check Monitoring Addon

```shell
az aks show \
  --resource-group aks-east2-rg \
  --name aks-prod-east2 \
  --query addonProfiles
```

You should see an `omsagent` entry with `"enabled": true`.

This means:

  • Log Analytics workspace created
  • Cluster connected to Azure Monitor
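Once the agent is reporting, container logs land in the workspace and can be queried with KQL from the portal's Logs blade. A minimal example (assuming the `ContainerLogV2` table used by current agents; older agents write to `ContainerLog` with a different schema):

```kusto
// Recent logs from kube-system pods, newest first
ContainerLogV2
| where TimeGenerated > ago(1h)
| where PodNamespace == "kube-system"
| project TimeGenerated, PodName, ContainerName, LogMessage
| take 20
```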

πŸ— Optional β€” Add Dedicated User Node Pool

Production clusters separate system and user workloads.

```shell
az aks nodepool add \
  --resource-group aks-east2-rg \
  --cluster-name aks-prod-east2 \
  --name userpool \
  --node-count 2 \
  --node-vm-size Standard_DS3_v2 \
  --mode User
```

Now check:

```shell
kubectl get nodes
```

You will see:

  • System pool (critical components)
  • User pool (applications)

This improves stability and scaling flexibility.
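To actually steer workloads onto the new pool, schedule against the built-in AKS node pool label (the pool name `userpool` comes from the command above; the deployment name and image are illustrative):

```yaml
# Sketch: pin an app to the user pool via the agentpool node label.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      nodeSelector:
        kubernetes.azure.com/agentpool: userpool
      containers:
        - name: web
          image: nginx:1.25
```

To keep applications off the system pool entirely, the system pool can additionally be tainted with `CriticalAddonsOnly=true:NoSchedule`.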


🧠 What You Learned in Phase 2

You now understand:

  • Azure CNI networking
  • Role assignment for managed identity
  • OIDC for Kubernetes
  • Workload Identity configuration
  • Monitoring integration
  • Node pool separation strategy
  • How AKS integrates with Azure networking and logging services

Most AKS tutorials stop at:

```shell
az aks create --node-count 2
```

You built a production baseline instead.


πŸ— Resulting Architecture

```
eastus2
│
├── Resource Group: aks-east2-rg
│   ├── AKS Control Plane (managed by Azure)
│   ├── System Node Pool
│   ├── User Node Pool (optional)
│   ├── Managed Identity
│   ├── VNet + Subnet
│   └── Log Analytics Workspace
```

🎯 What's Next?

Phase 3 could cover:

  • Private AKS cluster (no public API endpoint)
  • Ingress controller setup (NGINX or Application Gateway)
  • Horizontal Pod Autoscaler + Cluster Autoscaler
  • Azure Workload Identity with real example
  • Hub-spoke production architecture
