lotanna obianefo

Getting Started with Azure Kubernetes Service (AKS).

As organizations increasingly adopt microservices and cloud-native architectures, container orchestration platforms have become essential. Azure Kubernetes Service (AKS) is a fully managed Kubernetes offering from Microsoft that simplifies deploying, managing, and scaling containerized applications in the cloud.

AKS reduces the operational overhead of managing Kubernetes by handling critical control plane components while providing deep integration with Azure services.

Workspace Initialization

Set up a local repository using Git and link it to a remote repository on GitHub to systematically track, version, and manage changes to your project, enabling progress monitoring, collaboration, and recovery of previous states if needed.

1. Initialize Git Repository

Create a new project directory, initialize it as a repository using Git, and perform an initial commit with a README.md file to establish a baseline.

This is essential in cloud engineering because version control enables tracking of code and infrastructure changes, supports collaboration, and allows safe rollback to previous states.

It aligns with the Operational Excellence pillar of the AKS Well-Architected Framework, ensuring effective change management, traceability, and reliable deployment practices.

      # Creates a new directory (folder).
      mkdir my-cloud-project 

      # Changes the current directory.
      cd my-cloud-project 

      # Initializes a new Git repository in the current directory.
      git init 

      # Writes a heading into a new README.md file (">" redirects output to a file).
      echo "# Cloud project Repository" > README.md

      # Stages the specified file(s) for the next commit.
      git add README.md 

      # Creates a new commit with all staged changes and the message after -m.
      git commit -m "Initial commit" 
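The introduction above also mentions linking the local repository to a remote on GitHub, which the commands so far do not show. A minimal sketch, run from inside the project directory created above; the repository URL is a placeholder, so substitute your own:

```shell
# Links the local repository to GitHub under the remote name "origin".
# The URL is a placeholder; replace it with your own repository's URL.
git remote add origin "https://github.com/<your-username>/my-cloud-project.git"

# Lists configured remotes to confirm the link.
git remote -v
```

Once the repository exists on GitHub, git push -u origin main uploads the initial commit and sets the upstream branch for future pushes.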

Setup & Prerequisites

Prepare your local development environment and install the Kubernetes command-line interface, kubectl, which is required to interact with and manage Kubernetes clusters by deploying applications, inspecting resources, and executing administrative commands.

1. Login and Set Variables

Authenticate with Microsoft Azure using the command-line interface and define reusable variables for key resources (such as cluster name, region, and resource group) to standardize configuration.

This approach reduces manual input errors, ensures consistency during resource provisioning, and improves repeatability across deployments.

It aligns with the Operational Excellence pillar of the AKS Well-Architected Framework, as using variables and CLI-based automation promotes efficient, reliable, and well-governed cloud operations.

      # Signs you in to your Azure account so later commands run against your subscription.
      az login

      # Resource group name, referenced later as $RG.
      RG="aks-project-rg"

      # AKS cluster name, referenced later as $CLUSTER_NAME.
      CLUSTER_NAME="skill-aks-cluster"

      # Azure region for all resources, referenced later as $LOCATION.
      LOCATION="australiacentral"
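As a quick sanity check before provisioning anything, you can echo the variables back to confirm the shell session has them set (a minimal sketch using the values defined above):

```shell
# Shell variables as defined above.
RG="aks-project-rg"
CLUSTER_NAME="skill-aks-cluster"
LOCATION="australiacentral"

# Print each value to confirm the shell session has them set
# before any resources are created.
echo "Resource group: $RG"
echo "Cluster name:   $CLUSTER_NAME"
echo "Location:       $LOCATION"
```

This matters because a variable that is unset (for example, after opening a new terminal) silently expands to an empty string, which can cause commands to fail with confusing errors or target the wrong resource.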

2. Install Kubectl

Install the Kubernetes command-line tool, kubectl, which serves as the primary interface for communicating with the Kubernetes API server to deploy applications, manage cluster resources, and perform administrative tasks.

Properly initializing this essential tooling ensures consistent and reliable interaction with the cluster, aligning with the Operational Excellence pillar of the AKS Well-Architected Framework by promoting standardized, efficient, and error-resistant operations.

      # Installs the kubectl client via the Azure CLI helper.
      az aks install-cli

      # Prints the kubectl client version to confirm the installation.
      kubectl version --client

3. Create Resource Group

Create a resource group in Microsoft Azure to act as a logical container for all cluster-related resources. This is required because Azure mandates that resources are organized within resource groups for consistent management, access control, and cost tracking.

It supports the Reliability pillar of the Azure Well-Architected Framework by enabling structured lifecycle management, including deployment, updates, and deletion of resources as a single unit.

      # Creates a resource group named "$RG" in "$LOCATION" — a logical
      # container for all the Azure resources in this project.
      az group create --name $RG --location $LOCATION

Provision the AKS Cluster

Create a managed Kubernetes cluster using Azure Kubernetes Service (AKS) and configure a single system node pool, which hosts core Kubernetes components and system workloads required for cluster operation. This setup provides a stable foundation for running containerized applications while benefiting from automated cluster management, scaling, and maintenance handled by Azure.

1. Create the Cluster

Provision a managed Kubernetes cluster using Azure Kubernetes Service (AKS), which automatically creates the Kubernetes control plane and a worker node for running workloads.

This is important because AKS handles the complexity of managing control plane components such as the API server, scheduler, and etcd, allowing you to focus on deploying and scaling applications rather than maintaining infrastructure.

It also aligns with the Performance Efficiency pillar of the Azure Well-Architected Framework by offloading operational overhead to the cloud provider, enabling more efficient use of resources and faster deployment cycles.

      # Creates the AKS cluster: Azure provisions the managed control plane
      # and a single worker node of the specified VM size, generating SSH
      # keys for node access if none exist.
      az aks create \
        --resource-group $RG \
        --name $CLUSTER_NAME \
        --node-count 1 \
        --generate-ssh-keys \
        --node-vm-size Standard_D2_v3

Connect to the Cluster

Configure your local kubectl environment to securely communicate with the Kubernetes cluster by downloading and storing the cluster credentials, API endpoint information, and authentication certificates in the local kubeconfig file. This setup enables authenticated and encrypted interaction with the cluster for managing workloads and resources.

1. Download Kubeconfig

Download the Kubernetes cluster credentials from Azure Kubernetes Service (AKS) and merge them into the local ~/.kube/config file, which stores cluster connection details and authentication settings for kubectl.

This configuration is required because kubectl uses the cluster API endpoint, certificates, and access credentials to securely authenticate and communicate with the Kubernetes API server.

This process also supports the Security pillar of the Azure Well-Architected Framework by enabling encrypted authentication and controlled access through certificates and Role-Based Access Control (RBAC).

      # Merges the cluster's API endpoint and credentials into ~/.kube/config.
      az aks get-credentials --resource-group $RG --name $CLUSTER_NAME
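After merging the credentials, you can confirm which cluster kubectl will talk to. A short sketch; AKS normally names the new context after the cluster, so you would expect to see skill-aks-cluster here:

```shell
# Shows the context kubectl will use for subsequent commands.
kubectl config current-context

# Lists all contexts in the kubeconfig; the active one is marked with "*".
kubectl config get-contexts
```

This check is worth the few seconds it takes: if you manage more than one cluster, an unexpected active context means every following kubectl command would hit the wrong environment.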

2. Verify Connection

Use kubectl to list the worker nodes in the Kubernetes cluster and verify the status of the cluster API endpoint.

This validation step confirms successful end-to-end connectivity between your local environment and the cluster, ensuring that the control plane and worker nodes are operational and ready to receive workloads.

This aligns with the Reliability pillar of the Azure Well-Architected Framework by verifying the operational readiness and availability of the Kubernetes environment before deploying applications.

      # Lists the worker nodes and their readiness status.
      kubectl get nodes

      # Shows the addresses of the control plane and core cluster services.
      kubectl cluster-info

Deploy Your First App

Deploy a simple NGINX application to the Kubernetes cluster using a Kubernetes Deployment resource, which defines the desired application state and automatically manages pod creation, scaling, and recovery to ensure the application remains available and operational.

1. Create the Deployment

Create a Kubernetes Deployment that defines the desired state of running two replicas of the NGINX container within the cluster.

This is important because Kubernetes Deployments continuously monitor the application state and automatically replace failed pods to maintain the specified number of running replicas.

This self-healing capability improves application availability and supports the Reliability pillar of the Azure Well-Architected Framework by ensuring automated recovery and consistent workload operation.

      # Creates a Deployment named nginx-app running two replicas of the nginx image.
      kubectl create deployment nginx-app --image=nginx --replicas=2

2. View Resources

Use kubectl to inspect the Deployment and its pods, confirming that the rollout has progressed and the application has reached its desired state.

This is important because it provides real-time visibility into whether new pods are being created successfully and whether the application has reached its desired operational state without errors.

Monitoring deployment rollouts supports the Operational Excellence pillar of the Azure Well-Architected Framework by improving observability, enabling faster troubleshooting, and ensuring reliable workload management.

      # Lists Deployments and shows how many replicas are ready.
      kubectl get deployments

      # Lists the pods created by the Deployment and their current status.
      kubectl get pods
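The rollout progress described above can also be followed directly rather than by repeatedly listing pods. A short sketch using the nginx-app Deployment created earlier; the command blocks until the rollout finishes or fails:

```shell
# Waits until all replicas of the Deployment are updated and available,
# printing progress as pods come up and exiting non-zero if the rollout stalls.
kubectl rollout status deployment/nginx-app
```

Because it exits with a failure code when a rollout does not converge, this command is also a convenient gate in CI/CD pipelines.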

Expose to the Internet

Create a Kubernetes Service of type LoadBalancer to expose your application to external users by automatically assigning a public IP address and routing incoming internet traffic to the application pods running inside the cluster. This allows the application to be securely accessed from outside the Kubernetes environment.

1. Create LoadBalancer Service

Create a Kubernetes Service of type LoadBalancer in Azure Kubernetes Service (AKS) to automatically provision an Azure Load Balancer and route external traffic to the application pods running in the cluster.

This is necessary because Kubernetes pods are accessible only within the cluster by default, and a LoadBalancer service provides a stable public IP address and entry point for external client access.

This approach supports the Performance Efficiency pillar of the Azure Well-Architected Framework by leveraging cloud-native networking and scalable traffic distribution to efficiently handle inbound application requests.

      # Exposes the Deployment through an Azure Load Balancer on port 80.
      kubectl expose deployment nginx-app --type=LoadBalancer --port=80

2. Retrieve Public IP

Use kubectl to continuously monitor the Kubernetes Service until the EXTERNAL-IP field changes from <pending> to an assigned public IP address.

This process is necessary because the cloud provider requires time to provision and configure the external load balancer that exposes the application to the internet. Once assigned, the public IP address serves as the endpoint used to access the application from a web browser.

This supports the Operational Excellence pillar of the Azure Well-Architected Framework by enabling real-time visibility and dynamic tracking of infrastructure resource provisioning.

      # Watches the service until Azure assigns an external IP (Ctrl+C to stop).
      kubectl get service nginx-app --watch
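For scripting, the external IP can also be read non-interactively once it has been assigned. A sketch; the jsonpath expression assumes the load balancer reports a single ingress IP, which is the usual case for this setup:

```shell
# Extracts just the external IP from the service's load balancer status.
EXTERNAL_IP=$(kubectl get service nginx-app \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Prints the full endpoint URL for convenience.
echo "Application endpoint: http://$EXTERNAL_IP"
```

If the command prints an empty value, the load balancer is still provisioning; re-run it after the --watch output above shows an assigned address.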

3. Test Access

Verify that the web server hosted in Azure Kubernetes Service (AKS) is accessible from the internet by testing connectivity through the application’s public endpoint.

This end-to-end validation confirms that the networking components (the Kubernetes Service, the load balancer, and the application pods) are correctly configured and functioning as expected.

Performing this final connectivity check supports the Reliability pillar of the Azure Well-Architected Framework by ensuring the application is available and reachable.

        # Replace <YOUR_EXTERNAL_IP> with the address reported by the previous command.
        # A successful request returns the default NGINX welcome page.
        curl http://<YOUR_EXTERNAL_IP>