WHAT IS AZURE KUBERNETES SERVICE?
Azure Kubernetes Service (AKS) is a fully managed Kubernetes service from Microsoft Azure that simplifies deploying, scaling, and managing containerized applications. It removes the operational burden of maintaining the Kubernetes control plane (upgrades, patching, and scaling) while offering high availability and enterprise-grade security through integration with Microsoft Entra ID (formerly Azure Active Directory).
Key Aspects of AKS:
Reduced Management Overhead: Azure manages the control plane components (API server, etcd) at no extra cost on the Free tier, meaning you pay only for the worker nodes.
Automatic Scaling: AKS uses features like the Cluster Autoscaler and Horizontal Pod Autoscaler (HPA) to adjust node and pod counts based on demand.
Security and Compliance: Integration with Azure Policy and Microsoft Entra ID enables robust, enterprise-level security and access control.
Development Tools: Streamlines CI/CD workflows with support for tools like GitHub Actions and Azure DevOps.
Use Cases: Highly effective for deploying microservices, web apps, IoT scenarios, and migrating legacy apps to a modern cloud-native environment.
Hybrid Flexibility: Through Azure Arc, AKS can run on-premises, on Azure Stack HCI, or across other public clouds.
AKS enables developers to focus on building apps rather than managing complex infrastructure, providing a reliable platform for production-grade workloads.
Steps to Initialize an AKS Cluster
**A. Setup & Prerequisites**
Prepare your environment and ensure the Azure CLI (az) is installed; you can use the VS Code integrated terminal or any other terminal of your choice.
**1**
📘Log In and Set Variables
Instruction Summary
Authenticates with Azure and sets up reusable variables for the cluster.
🎯Why It's Needed
Variables prevent errors and make resource creation consistent.
🏗️Pillar Connection
Operational Excellence — Using variables and CLI automation.
a. $ az login
💡 Signs you in to your Azure account so you can run commands against your subscription.
b. $ RG="aks-lab-rg"
💡 Sets the shell variable RG so later commands can reference it with $RG.
c. $ CLUSTER_NAME="skill-aks-cluster"
💡 Sets the shell variable CLUSTER_NAME so later commands can reference it with $CLUSTER_NAME.
d. $ LOCATION="eastus"
💡 Sets the shell variable LOCATION so later commands can reference it with $LOCATION.
Note: the commands above use Linux (bash) syntax. If you are on a Windows PC, adapt them for Command Prompt (e.g., set RG=aks-lab-rg) or PowerShell (e.g., $RG = "aks-lab-rg").
**2**
📘Install kubectl
Instruction Summary
Installs the Kubernetes command-line tool (kubectl).
🎯Why It's Needed
Kubectl is the primary tool used to communicate with the Kubernetes API server.
🏗️Pillar Connection
Operational Excellence — Proper tooling initialization.
a. az aks install-cli
💡 Installs kubectl via the Azure CLI — see "az aks install-cli --help" for details.
b. kubectl version --client
💡 Prints the installed kubectl client version, confirming the installation succeeded.
**3**
📘Create Resource Group
Instruction Summary
Creates a container for all cluster resources.
🎯Why It's Needed
All Azure resources must live in a resource group for logical grouping and billing.
🏗️Pillar Connection
Reliability — Resource grouping for lifecycle management.
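No command is shown for this step in the original write-up; assuming the $RG and $LOCATION variables set in section A, the resource group would typically be created with:

```shell
# Create the resource group in the chosen region
# (uses the $RG and $LOCATION variables defined in section A)
az group create --name $RG --location $LOCATION
```

💡 Creates a logical container that holds the cluster and every related resource, which also makes cleanup a single command later.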
**B. Provision the AKS Cluster**
Create a managed Kubernetes cluster with one system node pool.
**1**
📘Create the Cluster
Instruction Summary
Triggers the creation of a managed Kubernetes control plane and one worker node.
🎯Why It's Needed
AKS manages the complex Kubernetes control plane for you, allowing you to focus on running applications.
🏗️Pillar Connection
Performance Efficiency — Offloading management overhead to the cloud provider.
(a) az aks create \
--resource-group $RG \
--name $CLUSTER_NAME \
--node-count 1 \
--generate-ssh-keys \
--node-vm-size "Standard_B2s"
💡 Creates the managed control plane and a single worker node; "Standard_B2s" is a low-cost VM size suitable for a lab.
**C. Connect to the Cluster**
Configure your local kubectl to securely talk to the new cluster.
Implementation Guide
1.
📘Download Kubeconfig
Instruction Summary
Downloads the cluster credentials and merges them into your local ~/.kube/config file.
🎯Why It's Needed?
Kubectl needs these certificates and endpoint details to authenticate with the cluster API.
🏗️Pillar Connection
Security — Encrypted authentication via RBAC and certificates.
(a) az aks get-credentials --resource-group $RG --name $CLUSTER_NAME
💡 Downloads Kubernetes credentials so kubectl can connect to your AKS cluster.
2.
📘Verify Connection
Instruction Summary
Lists the worker nodes in the cluster and shows the API endpoint status.
🎯Why It's Needed
Confirms that you have successful end-to-end connectivity to the cluster.
🏗️Pillar Connection
Reliability — Verification of operational readiness.
(a) kubectl get nodes
💡 Lists all nodes (servers) in the Kubernetes cluster.
(b) kubectl cluster-info
💡 Displays the addresses of the Kubernetes control plane and core cluster services, confirming the API endpoint is reachable.
**D. Deploy Your First App**
Deploy a simple Nginx application using a Kubernetes Deployment.
_Implementation Guide_
**1**
📘Create the Deployment
Instruction Summary
Creates a desired state for 2 replicas of the Nginx container.
🎯Why It's Needed
Deployments ensure that if a pod fails, another is started automatically to maintain availability.
🏗️Pillar Connection
Reliability — Self-healing through automated pod replacement.
(a) kubectl create deployment nginx-app --image=nginx --replicas=2
💡 Creates a Deployment named nginx-app that runs 2 replicas of the official nginx image.
**What is a Deployment?**
In Kubernetes, a Deployment is an API object that declares a desired state (container image, replica count, update strategy) for a set of pods; its controller continuously reconciles the cluster toward that state, replacing failed pods and rolling out new versions. More broadly, deployment in DevOps is the automated process of releasing code changes, features, or updates from development to production using CI/CD pipelines, minimizing downtime through strategies like blue/green or canary deployments.
**Key Components of DevOps Deployment**
Automation: Manual, error-prone steps are replaced by automated scripts and pipelines, often using tools like Jenkins, GitHub Actions, or Azure Pipelines.
Infrastructure as Code (IaC): Environments are defined in code, ensuring the production environment matches staging/testing environments.
Containerization & Orchestration: Technologies like Docker and Kubernetes are used to package code and manage its deployment across servers.
Speed & Frequency: Instead of infrequent large releases, DevOps emphasizes frequent, small, and continuous deployments.
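The imperative command above also has a declarative equivalent, which is easier to version-control. A minimal sketch (applied inline via a heredoc; in practice this would usually live in a file such as a hypothetical deployment.yaml) looks like:

```shell
# Declarative equivalent of:
#   kubectl create deployment nginx-app --image=nginx --replicas=2
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
      - name: nginx
        image: nginx
EOF
```

Storing manifests like this in Git is what makes the IaC practice described above reproducible across environments.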
**2**
📘View Resources
Instruction Summary
Shows the status of your app's rollout.
🎯Why It's Needed
Monitoring the progress of your application deployment.
🏗️Pillar Connection
Operational Excellence — Real-time visibility into workload status
(a) $ kubectl get deployments
💡 Lists deployments which manage the desired state of your pods (replicas, image version, etc.).
(b) $ kubectl get pods
💡 Lists all pods (running containers) in the current namespace with their status.
**What is a Pod?**
A pod is the smallest, most basic deployable unit in Kubernetes, representing a single instance of a running process in a cluster. It acts as a wrapper around one or more containers (like Docker), sharing the same network IP, storage volumes, and resources. Pods are ephemeral and designed to work together to run applications.
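To see the difference between a bare pod and one managed by a Deployment, you can create a standalone pod (the name nginx-solo here is an arbitrary choice for illustration):

```shell
# Create a single, unmanaged pod
kubectl run nginx-solo --image=nginx

# Deleting it leaves nothing behind: no controller recreates it,
# whereas deleting a Deployment-managed pod triggers an automatic replacement
kubectl delete pod nginx-solo
```

This is why real workloads are almost always run through a Deployment rather than as bare pods.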
**E. Expose to the Internet**
Create a Service of type LoadBalancer to give your app a public IP.
Implementation Guide
**1**
📘Create LoadBalancer Service
Instruction Summary
Tells AKS to provision an Azure Load Balancer and point it to your pods.
🎯Why It's Needed
Pods are internal only by default. A LoadBalancer service provides a stable public entry point.
🏗️Pillar Connection
Performance Efficiency — Leveraging cloud-native networking for ingress traffic.
$ kubectl expose deployment nginx-app --type=LoadBalancer --port=80
💡 Creates a Service that exposes a deployment to network traffic (ClusterIP, NodePort, or LoadBalancer).
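As with the Deployment, the expose command has a declarative equivalent; a minimal sketch (the selector and port values mirror the command above) is:

```shell
# Declarative equivalent of:
#   kubectl expose deployment nginx-app --type=LoadBalancer --port=80
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx-app
spec:
  type: LoadBalancer
  selector:
    app: nginx-app
  ports:
  - port: 80
    targetPort: 80
EOF
```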
**2**
📘Retrieve the Public IP
Instruction Summary
Watches the service until the 'EXTERNAL-IP' changes from <pending> to a real address.
🎯Why It's Needed
You need this IP to access the application from your browser.
🏗️Pillar Connection
Operational Excellence — Dynamic resource tracking.
$ kubectl get service nginx-app --watch
💡 Continuously watches the service; press Ctrl+C once the EXTERNAL-IP appears.
**3**
📘Test Access
Instruction Summary
Verifies the web server is reachable from the internet.
🎯Why It's Needed
End-to-end validation of the networking path.
🏗️Pillar Connection
Reliability — Final connectivity check.
Test Commands
$ curl http://<EXTERNAL-IP>
💡 Transfers data from or to a server — commonly used to test APIs or download files.
The app was confirmed working when tested in the browser.
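Instead of copying the IP by hand, it can be captured with a jsonpath query; this sketch assumes the Azure load balancer has already assigned an IPv4 address:

```shell
# Capture the service's external IP once it has been provisioned
EXTERNAL_IP=$(kubectl get service nginx-app \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Fetch the Nginx welcome page to verify end-to-end connectivity
curl "http://$EXTERNAL_IP"
```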
**F. Scale and Cleanup**
Scale your application and then delete all resources to avoid costs.
Implementation Guide
**1**
📘Scale the Application
Instruction Summary
Instantly increases the number of running instances to 5.
🎯Why It's Needed
Kubernetes allows for near-instant scaling to handle traffic spikes.
🏗️Pillar Connection
Performance Efficiency — Horizontal pod autoscaling capabilities.
$ kubectl scale deployment nginx-app --replicas=5
💡 Changes the number of pod replicas in a deployment (scale up or down).
$ kubectl get pods
💡 Lists all pods (running containers) in the current namespace with their status.
**What is Scaling a Deployment?**
Scaling a deployment means increasing the capacity of a system, either by adding more instances (horizontal scaling) or by increasing resource power (vertical scaling), to handle higher loads and traffic. It keeps the application reliable and performant as usage grows, and in Kubernetes it typically involves updating the replica count.
**Key Aspects of Scaling a Deployment**
Horizontal Scaling (Scaling Out): Adding more replicas or instances of a service to share the workload, often automated using tools like Kubernetes.
Vertical Scaling (Scaling Up): Upgrading existing servers with more CPU, memory, or storage to enhance their performance.
Automation: Using tools (such as Kubernetes with kubectl scale) allows developers to automatically scale the number of pods based on traffic.
Process Management: Ensuring that as more infrastructure is added, the application remains stable and available.
**Why Scaling Matters in Development**
Handling Increased Load: As apps grow in popularity, they need to handle more user traffic without crashing.
High Availability: Proper scaling ensures that if one server fails, others are available to manage the demand.
Performance Stability: It ensures the application remains fast and responsive regardless of user volume.
**Scale Deployment Techniques**
Kubernetes Scaling: Modifying the number of replicas in a deployment using commands like kubectl scale deployment/example-app --replicas=4.
Auto-scaling: Implementing systems (like KEDA) that automatically scale based on triggers, such as queue depth or CPU load.
Containerization: Using Docker and orchestration platforms (e.g., AWS Elastic Beanstalk) to manage the deployment and scaling of applications efficiently.
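In this lab the replica count is changed manually, but the Horizontal Pod Autoscaler mentioned earlier can adjust it automatically. A sketch follows; the CPU target and bounds are example values, and HPA only acts if the deployment's containers have CPU requests set:

```shell
# Scale nginx-app between 2 and 5 replicas, targeting 50% average CPU
kubectl autoscale deployment nginx-app --cpu-percent=50 --min=2 --max=5

# Inspect the autoscaler's current state and targets
kubectl get hpa nginx-app
```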
**2**
📘Delete Resource Group
Instruction Summary
Destroys all resources, including the cluster, networking, and disks.
🎯Why It's Needed
Managed Kubernetes clusters incur daily costs. Cleaning up prevents unwanted charges.
🏗️Pillar Connection
Cost Optimization — Proactive resource de-provisioning.
$ az group delete --name $RG --yes --no-wait
💡 Deletes the resource group and all resources inside it — use this to clean up after the lab.