In Phase 1, we prepared:
- Resource Group
- Virtual Network + Subnet
- Managed Identity
- Provider registrations
Now we deploy a production-grade AKS cluster in eastus2 using best practices.
This is not a demo cluster.
This is how platform engineers deploy AKS in real environments.
## 🎯 Architecture Goal
We will create:
- AKS attached to existing VNet
- Azure CNI networking (not kubenet)
- Managed Identity (no service principal)
- OIDC issuer enabled
- Workload Identity enabled
- Azure Monitor integration enabled
- Separate system/user node pools (optional)
Integrated services:
- Kubernetes
- Azure Virtual Network
- Azure Monitor
- Log Analytics
## 🔧 Step 1 – Get Required Resource IDs

AKS must be attached to an existing subnet.

### Get Subnet ID

```bash
SUBNET_ID=$(az network vnet subnet show \
  --resource-group aks-east2-rg \
  --vnet-name aks-vnet \
  --name aks-subnet \
  --query id -o tsv)
```

### Get Managed Identity IDs

```bash
MI_ID=$(az identity show \
  --resource-group aks-east2-rg \
  --name aks-mi \
  --query id -o tsv)

MI_PRINCIPAL_ID=$(az identity show \
  --resource-group aks-east2-rg \
  --name aks-mi \
  --query principalId -o tsv)
```
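Before moving on, it's worth failing fast if any of these queries returned nothing: a wrong name or group yields an empty string, which only surfaces later as a confusing `--vnet-subnet-id` error. A minimal guard, where `check_nonempty` is a hypothetical helper and the path below is a placeholder standing in for the real `$SUBNET_ID`:

```bash
# check_nonempty: fail fast if a required variable came back empty.
check_nonempty() {
  if [ -n "$2" ]; then
    echo "$1 OK"
  else
    echo "$1 is EMPTY -- check the resource name and group" >&2
    return 1
  fi
}

# Example with a placeholder value; in a real run, pass "$SUBNET_ID",
# "$MI_ID", and "$MI_PRINCIPAL_ID" from the queries above.
check_nonempty SUBNET_ID "/subscriptions/<sub-id>/.../aks-subnet"
```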
## 🔐 Step 2 – Grant Network Permissions

AKS must manage IP allocations inside the subnet.

Assign the Network Contributor role:

```bash
az role assignment create \
  --assignee $MI_PRINCIPAL_ID \
  --role "Network Contributor" \
  --scope $SUBNET_ID
```
Without this, Azure CNI will fail.
This is one of the most common production misconfigurations.
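A related pitfall: a freshly created managed identity can take a moment to propagate in Entra ID, so this role assignment sometimes fails right after Phase 1 with a principal-not-found error. A small retry wrapper helps; `retry` is a hypothetical helper, demonstrated here with a stand-in `flaky` command rather than the real `az role assignment create` call:

```bash
# retry: run a command up to N times before giving up (hypothetical helper).
retry() {
  local attempts="$1"; shift
  local i
  for i in $(seq 1 "$attempts"); do
    if "$@"; then
      return 0
    fi
    echo "attempt $i failed; retrying..." >&2
    sleep 1   # use a longer delay (e.g. 15s) for real propagation waits
  done
  return 1
}

# Demo with a stand-in command that fails twice, then succeeds; in practice
# you would wrap the `az role assignment create` call above instead.
n=0
flaky() { n=$((n+1)); [ "$n" -ge 3 ]; }
retry 5 flaky && echo "succeeded after $n attempts"
```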
## 🚀 Step 3 – Create the Production AKS Cluster

Now we deploy:

```bash
az aks create \
  --resource-group aks-east2-rg \
  --name aks-prod-east2 \
  --location eastus2 \
  --node-count 2 \
  --node-vm-size Standard_DS3_v2 \
  --network-plugin azure \
  --vnet-subnet-id $SUBNET_ID \
  --assign-identity $MI_ID \
  --enable-managed-identity \
  --enable-oidc-issuer \
  --enable-workload-identity \
  --enable-addons monitoring \
  --generate-ssh-keys
```
## 📘 What Each Flag Does (Production Perspective)

| Flag | Why It Matters |
|---|---|
| `--network-plugin azure` | Enables Azure CNI (real VNet IPs for pods) |
| `--vnet-subnet-id` | Attaches the cluster to the enterprise network |
| `--assign-identity` | Uses the managed identity from Phase 1 |
| `--enable-oidc-issuer` | Required for workload identity |
| `--enable-workload-identity` | Modern cloud auth model |
| `--enable-addons monitoring` | Enables logs + metrics |
| `--node-vm-size Standard_DS3_v2` | Balanced CPU/memory for workloads |

This is an enterprise-ready configuration.
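One practical consequence of Azure CNI worth sanity-checking before deploying: every pod consumes a real subnet IP. A quick back-of-envelope calculation (a sketch; 30 is the documented Azure CNI default for `--max-pods`):

```bash
# Rough Azure CNI capacity math: each node consumes one IP for itself plus
# one IP per pod, up to its --max-pods limit (default 30 with Azure CNI).
NODES=2
MAX_PODS=30
IPS_NEEDED=$(( NODES * (MAX_PODS + 1) ))
echo "IPs consumed at full pod density: $IPS_NEEDED"
```

With 10.240.0.0/16 (65,536 addresses, minus the five Azure reserves per subnet) there is ample headroom, but smaller subnets fill up fast when node pools scale out.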
## 🚨 Common Error (If Monitoring Fails)

If you see:

```
MissingSubscriptionRegistration: Microsoft.OperationalInsights
```

register the provider:

```bash
az provider register --namespace Microsoft.OperationalInsights
```

This is required because the monitoring addon depends on Log Analytics.
## 🔑 Step 4 – Get kubeconfig

Once deployment completes:

```bash
az aks get-credentials \
  --resource-group aks-east2-rg \
  --name aks-prod-east2
```

Verify:

```bash
kubectl get nodes -o wide
```

You should see:
- 2 nodes
- Private IPs from 10.240.0.0/16
- VM size Standard_DS3_v2
## ✅ Step 5 – Validate Cluster Features

### Check the OIDC Issuer

```bash
az aks show \
  --resource-group aks-east2-rg \
  --name aks-prod-east2 \
  --query "oidcIssuerProfile.issuerUrl"
```

If it returns a URL, OIDC is enabled.
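Once the issuer URL is live, workloads can exchange Kubernetes service-account tokens for Entra ID tokens. Here is a minimal sketch of the Kubernetes side, assuming a federated credential has already been created (via `az identity federated-credential create`) binding the issuer URL to the `default`/`app-sa` ServiceAccount; `app-sa`, the pod name, and `<CLIENT_ID>` (the managed identity's `clientId`, from `az identity show --query clientId`) are placeholders:

```yaml
# ServiceAccount annotated with the managed identity's client ID, plus a pod
# opted in to workload identity via the azure.workload.identity/use label.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa
  namespace: default
  annotations:
    azure.workload.identity/client-id: "<CLIENT_ID>"
---
apiVersion: v1
kind: Pod
metadata:
  name: app
  namespace: default
  labels:
    azure.workload.identity/use: "true"
spec:
  serviceAccountName: app-sa
  containers:
    - name: app
      image: mcr.microsoft.com/azure-cli
      command: ["sleep", "infinity"]
```

The `azure.workload.identity/use: "true"` label is what triggers the federated token to be projected into the pod.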
### Check the Monitoring Addon

```bash
az aks show \
  --resource-group aks-east2-rg \
  --name aks-prod-east2 \
  --query addonProfiles
```

You should see monitoring enabled.

This means:
- Log Analytics workspace created
- Cluster connected to Azure Monitor
## ➕ Optional – Add a Dedicated User Node Pool

Production clusters separate system and user workloads.

```bash
az aks nodepool add \
  --resource-group aks-east2-rg \
  --cluster-name aks-prod-east2 \
  --name userpool \
  --node-count 2 \
  --node-vm-size Standard_DS3_v2 \
  --mode User
```

Now check:

```bash
kubectl get nodes
```

You will see:
- System pool (critical components)
- User pool (applications)

This improves stability and scaling flexibility.
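Pool separation only pays off if applications are actually steered to the user pool. One common pattern (a sketch; `web` and the image tag are placeholder choices) is a nodeSelector on the AKS-provided `kubernetes.azure.com/agentpool` node label:

```yaml
# Pin an application Deployment to the user pool created above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      nodeSelector:
        kubernetes.azure.com/agentpool: userpool
      containers:
        - name: web
          image: nginx:1.27
```

For stricter isolation, the system pool can additionally be tainted so only critical addons tolerate it, e.g. `az aks nodepool update --node-taints CriticalAddonsOnly=true:NoSchedule` (the system pool name varies per cluster).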
## 🧠 What You Learned in Phase 2

You now understand:
- Azure CNI networking
- Role assignment for managed identities
- OIDC for Kubernetes
- Workload Identity configuration
- Monitoring integration
- Node pool separation strategy
- How AKS integrates with Azure networking and logging services

Most AKS tutorials stop at:

```bash
az aks create --node-count 2
```

You built a production baseline instead.
## 📦 Resulting Architecture

```
eastus2
│
└── Resource Group: aks-east2-rg
    ├── AKS Control Plane (managed by Azure)
    ├── System Node Pool
    ├── User Node Pool (optional)
    ├── Managed Identity
    ├── VNet + Subnet
    └── Log Analytics Workspace
```
## 🎯 What's Next?

Phase 3 could cover:
- Private AKS cluster (no public API endpoint)
- Ingress controller setup (NGINX or Application Gateway)
- Horizontal Pod Autoscaler + Cluster Autoscaler
- Azure Workload Identity with a real example
- Hub-spoke production architecture