Kubernetes has become the go-to platform for deploying and managing containerized applications at scale. In this guide, we’ll walk through how to provision an AKS cluster and deploy a sample NGINX application — all using Terraform.
By the end, you’ll have a fully running NGINX service exposed on a public IP via an Azure Load Balancer, deployed automatically from Infrastructure as Code.
Why Use Terraform with AKS?
Terraform lets you manage both your infrastructure and your application workloads as code — a concept known as Infrastructure as Code (IaC).
With IaC, you describe everything your environment needs (from clusters to pods) in declarative configuration files instead of setting them up manually. This makes your deployments consistent, repeatable, and version-controlled — just like your application source code.
In practice, Terraform can handle both:
Provisioning the AKS cluster (the infrastructure)
Deploying the Kubernetes resources (the workloads)
That means no need to manually run az aks create or kubectl apply. Terraform can build the cluster, connect to it, and deploy your app — all in one automated workflow.
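Below is a minimal sketch of that wiring (an illustration, not one of the files you will create in the steps that follow). It assumes the AKS cluster resource is named azurerm_kubernetes_cluster.aks_cluster, as it will be in Step 1, and that the hashicorp/kubernetes provider is declared in required_providers alongside azurerm:

# Sketch: point the Kubernetes provider at the AKS cluster created in the
# same configuration, using the credentials Azure generates for it.
provider "kubernetes" {
  host                   = azurerm_kubernetes_cluster.aks_cluster.kube_config[0].host
  client_certificate     = base64decode(azurerm_kubernetes_cluster.aks_cluster.kube_config[0].client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.aks_cluster.kube_config[0].client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks_cluster.kube_config[0].cluster_ca_certificate)
}

With the provider configured this way, Deployments, Services, and other Kubernetes objects can be declared as ordinary Terraform resources; a full sketch of the NGINX workload in this style appears at the end of Step 4.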
Prerequisites
Before you start, ensure you have:
An Azure account
Terraform installed (v1.5+)
kubectl installed and configured
Azure CLI installed and authenticated
Step 1: Create the Terraform Configuration
Create a new directory for your project and a file named main.tf:
mkdir terraform-aks-nginx
cd terraform-aks-nginx
touch main.tf
Add the following Terraform code to main.tf to define your AKS cluster and supporting resources:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~>3.0"
    }
  }
}

provider "azurerm" {
  features {}
}
resource "azurerm_resource_group" "aks_rg" {
name = "aks-nginx-rg"
location = "East US"
}
resource "azurerm_kubernetes_cluster" "aks_cluster" {
name = "nginx-aks-cluster"
location = azurerm_resource_group.aks_rg.location
resource_group_name = azurerm_resource_group.aks_rg.name
dns_prefix = "nginxaksdemo"
default_node_pool {
name = "default"
node_count = 1
vm_size = "Standard_B2s"
}
identity {
type = "SystemAssigned"
}
}
output "kube_config" {
value = azurerm_kubernetes_cluster.aks_cluster.kube_config_raw
sensitive = true
}
Explanation of each component in main.tf:

| Component / Block | Type | Description / Purpose | Tags / Notes |
|---|---|---|---|
| terraform | Block | Defines global Terraform settings and required providers. | N/A |
| required_providers | Sub-block | Specifies the providers Terraform uses (here, AzureRM). | azurerm comes from the HashiCorp registry, version ~>3.0. |
| provider "azurerm" | Provider | Connects Terraform to Microsoft Azure resources. | features {} enables provider functionality (mandatory). |
| azurerm_resource_group "aks_rg" | Resource | Creates an Azure resource group to organize related resources. | Tags added: environment, project, managed_by. |
| name | Attribute | Name of the resource group (aks-nginx-rg). | Descriptive and project-specific. |
| location | Attribute | Azure region where resources are deployed (East US). | Matches the cluster location. |
| azurerm_kubernetes_cluster "aks_cluster" | Resource | Deploys an Azure Kubernetes Service (AKS) cluster. | Tags match the resource group for consistency. |
| name | Attribute | Name of the AKS cluster (nginx-aks-cluster). | Helps identify the cluster. |
| location | Attribute | Inherits the location from the resource group. | Ensures both resources are co-located. |
| resource_group_name | Attribute | Links the AKS cluster to the resource group created above. | References azurerm_resource_group.aks_rg.name. |
| dns_prefix | Attribute | Prefix for the cluster's public DNS name. | Used by the Azure-managed API server endpoint. |
| default_node_pool | Block | Defines the node configuration (VM size, count, etc.). | Node pool name: default. |
| node_count | Attribute | Number of worker nodes to start with (here, 1). | Can be scaled later. |
| vm_size | Attribute | VM type used for each node (Standard_B2s). | Small, cost-effective VM for demos. |
| identity | Block | Assigns a managed identity for Azure authentication. | Type SystemAssigned lets Azure manage the identity. |
| tags | Attribute block | Adds metadata to resources for organization and management. | environment, project, managed_by. |
| output "kube_config" | Output block | Exposes the AKS kubeconfig for kubectl access. | Marked sensitive = true to hide it from logs. |
| sensitive | Attribute | Keeps sensitive data from appearing in plain-text output. | Ensures secrets are hidden. |
| tags.environment | Tag | Describes the environment (e.g., demo, prod, dev). | Helps separate environments. |
| tags.project | Tag | Indicates which project the resource belongs to (nginx-aks). | Aids billing and tracking. |
| tags.managed_by | Tag | Specifies the management source (terraform). | Helps identify IaC-managed resources. |
Step 2: Initialize and Apply Terraform
Run the following commands to create your AKS cluster:
terraform init
terraform apply -auto-approve
This process will:
Create a resource group
Deploy an AKS cluster
Output the Kubernetes configuration
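As an optional convenience, you could also export the two names the next step needs and read them back later with terraform output. These outputs are not part of the main.tf above, and the output names here are just examples:

# Optional outputs: expose the names used by az aks get-credentials in Step 3.
output "resource_group_name" {
  value = azurerm_resource_group.aks_rg.name
}

output "aks_cluster_name" {
  value = azurerm_kubernetes_cluster.aks_cluster.name
}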
Step 3: Connect to Your AKS Cluster
After deployment, connect kubectl to your new cluster:
az aks get-credentials --resource-group aks-nginx-rg --name nginx-aks-cluster
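If you would rather not leave Terraform at all, one possible alternative (an illustration, not part of this guide's setup) is to write the kubeconfig to a local file with the hashicorp/local provider's local_sensitive_file resource, assuming that provider (version ~> 2.2 or later) is declared in required_providers:

# Sketch: persist the generated kubeconfig locally so kubectl can use it.
# The filename and resource label are illustrative.
resource "local_sensitive_file" "kubeconfig" {
  content         = azurerm_kubernetes_cluster.aks_cluster.kube_config_raw
  filename        = "${path.module}/kubeconfig"
  file_permission = "0600"
}

You can then point kubectl at the file by setting the KUBECONFIG environment variable to its path.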
Step 4: Deploy NGINX
Create a file named nginx.yaml with the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer
Then apply it:
kubectl apply -f nginx.yaml
This will:
Deploy two NGINX pods
Expose them through a LoadBalancer Service, which automatically provisions an Azure Load Balancer
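If you prefer to keep the workload in Terraform too, as discussed at the start of this guide, the manifest above could instead be expressed with the hashicorp/kubernetes provider. The sketch below mirrors nginx.yaml; it assumes the kubernetes provider is configured against the cluster (for example, as sketched in the introduction), and the Terraform resource labels are illustrative. It is an alternative, not a required part of this walkthrough:

# Sketch: the same NGINX Deployment and Service as nginx.yaml, declared as
# Terraform resources instead of YAML.
resource "kubernetes_deployment_v1" "nginx" {
  metadata {
    name = "nginx-deployment"
  }

  spec {
    replicas = 2

    selector {
      match_labels = {
        app = "nginx"
      }
    }

    template {
      metadata {
        labels = {
          app = "nginx"
        }
      }

      spec {
        container {
          name  = "nginx"
          image = "nginx:latest"

          port {
            container_port = 80
          }
        }
      }
    }
  }
}

resource "kubernetes_service_v1" "nginx" {
  metadata {
    name = "nginx-service"
  }

  spec {
    selector = {
      app = "nginx"
    }

    port {
      protocol    = "TCP"
      port        = 80
      target_port = 80
    }

    type = "LoadBalancer"
  }
}

With this approach, a single terraform apply both provisions the cluster and rolls out the workload, and kubectl apply becomes unnecessary.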
Step 5: Verify Deployment
Check the running pods:
kubectl get pods
And get the external IP address:
kubectl get service nginx-service
Once the EXTERNAL-IP column for nginx-service changes from pending to a public IP address, open that address in a browser and you should see the default NGINX welcome page.
Do You Need a Load Balancer?
A Service of type LoadBalancer is not strictly required, but it is highly recommended if you want public access.
If you only need internal or private traffic, you can use:
ClusterIP – for internal cluster communication only (a Terraform sketch of this option follows the list)
NodePort – for debugging or development access on node IPs
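For reference, here is a sketch of the ClusterIP option in Terraform form (the resource label nginx_internal is illustrative, and it assumes the kubernetes provider is configured as sketched earlier). In YAML, it is the same Service manifest as above with type: ClusterIP:

# Sketch: an internal-only Service; no Azure Load Balancer or public IP is created.
resource "kubernetes_service_v1" "nginx_internal" {
  metadata {
    name = "nginx-internal"
  }

  spec {
    selector = {
      app = "nginx"
    }

    port {
      protocol    = "TCP"
      port        = 80
      target_port = 80
    }

    type = "ClusterIP"
  }
}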
Step 6: Clean Up
To remove all resources and avoid extra costs:
terraform destroy -auto-approve
Once complete, the resource group and the AKS cluster inside it are removed, along with the automatically created node resource group (prefixed MC_) that holds the nodes and the load balancer.
Summary
You’ve successfully deployed a scalable NGINX application on Azure Kubernetes Service (AKS) using Terraform!
This setup demonstrates how infrastructure as code (IaC) simplifies provisioning and how AKS handles orchestration and scaling out of the box.