Introduction
Kubernetes is powerful, but getting a cluster up and running can feel daunting without the right tools. Azure Kubernetes Service (AKS) removes much of that complexity by providing a managed Kubernetes environment, tightly integrated with Azure’s ecosystem. With AKS, you can focus on building and deploying containerized applications instead of worrying about cluster maintenance, upgrades, or scaling.
This article, Getting Started with Azure Kubernetes Service (AKS) – Workspace Initialization, will guide you through the essential setup process. We’ll cover how to initialize your workspace, configure the Azure CLI, and prepare your environment for cluster creation. By the end, you’ll have a working AKS workspace ready to host workloads, complete with the foundational configurations that make future deployments smoother.
Whether you’re experimenting with Kubernetes for the first time or standardizing your cloud-native workflows, this initialization step is the cornerstone of a successful AKS journey.
Initialize Git Repository
mkdir my-cloud-lab
cd my-cloud-lab
git init
echo "# Cloud Lab Repository" > README.md
git add README.md
git commit -m "Initial commit"
1. mkdir my-cloud-lab
Significance: Creates a new directory called my-cloud-lab.
Why Important: This gives you a dedicated workspace to keep all project files organized, avoiding clutter and mixing with unrelated files.
2. cd my-cloud-lab
Significance: Moves into the newly created directory.
Why Important: Ensures that all subsequent commands (like initializing Git or adding files) are executed inside the correct project folder.
3. git init
Significance: Initializes a new Git repository in the current directory.
Why Important: This step transforms the folder into a version-controlled workspace, allowing you to track changes, collaborate, and roll back if needed.
4. echo "# Cloud Lab Repository" > README.md
Significance: Creates a README file with a simple project description.
Why Important: The README is often the first file others see in a repository. It documents the purpose of the project and sets the stage for collaboration.
5. git add README.md
Significance: Stages the README file for commit.
Why Important: Staging tells Git which changes you want to include in the next snapshot. It’s a deliberate step that prevents accidental commits of unwanted files.
6. git commit -m "Initial commit"
Significance: Saves the staged changes into the repository history with a descriptive message.
Why Important: The initial commit marks the official starting point of your project. It creates a baseline that future changes can be compared against.
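Putting the six steps together, here is a minimal end-to-end sketch. It runs in a throwaway directory, and the inline -c identity settings are only there so the commit succeeds even in a fresh environment where Git has no configured user:

```shell
# Work in a scratch directory so the sketch is safe to re-run.
cd "$(mktemp -d)"
mkdir my-cloud-lab
cd my-cloud-lab
git init
echo "# Cloud Lab Repository" > README.md
git add README.md
# Inline -c flags supply a Git identity in case none is configured yet.
git -c user.email="you@example.com" -c user.name="Your Name" \
    commit -m "Initial commit"
git log --oneline
```

The final git log --oneline should show a single "Initial commit" entry, confirming the baseline exists.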
Setup & Prerequisites
Implementation Guide
Step 1: Login and Set Variables
The first step is to authenticate with Azure using az login, which connects your local environment to your Azure subscription. Once authenticated, you define reusable shell variables such as the resource group name, cluster name, and location. These variables act as placeholders that simplify commands, reduce typing errors, and ensure consistency across all subsequent operations. By standardizing values at the start, you establish a reliable foundation for automation and repeatable deployments.
az login
RG="aks-lab-rg"
CLUSTER_NAME="skill-aks-cluster"
LOCATION="westus3"
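These variables only live in the current shell session. One lightweight way to make them repeatable across sessions is to keep them in a small file you source before working (a sketch; the env.sh filename is my own choice, not part of the article):

```shell
# Save the lab settings once, then source the file in any new session.
cat > env.sh <<'EOF'
RG="aks-lab-rg"
CLUSTER_NAME="skill-aks-cluster"
LOCATION="westus3"
EOF

# Load the settings into the current shell.
. ./env.sh
echo "Resource group: $RG in $LOCATION"
```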
Step 2: Install Kubectl
Next, you install kubectl, the Kubernetes command‑line tool, using az aks install-cli. Kubectl is essential because it provides the interface to communicate directly with the Kubernetes API server. With it, you can deploy applications, inspect cluster resources, and troubleshoot workloads. Verifying the installation with kubectl version --client ensures your environment is properly equipped to manage Kubernetes clusters. This step is critical because without kubectl, you cannot interact with or control your AKS cluster.
az aks install-cli
kubectl version --client
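Before moving on, it can help to check explicitly that kubectl landed on your PATH, since az aks install-cli sometimes installs to a directory that is not yet on it. A defensive sketch (the messages are my own):

```shell
# Verify kubectl is reachable before trying to manage the cluster.
if command -v kubectl >/dev/null 2>&1; then
  echo "kubectl is installed"
  kubectl version --client
else
  echo "kubectl not found on PATH; run 'az aks install-cli' first"
fi
```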
Step 3: Create Resource Group
After preparing your environment, you create a resource group with az group create --name $RG --location $LOCATION. A resource group is a logical container that holds all related Azure resources for your AKS cluster, including networking, storage, and compute. Organizing resources into a group makes them easier to manage, monitor, and delete as a unit. It also simplifies billing and lifecycle management, ensuring that your cluster and its dependencies remain structured and reliable throughout their use.
az group create --name $RG --location $LOCATION
Step 4: Create the Cluster
Running the az aks create command provisions a fully managed Kubernetes cluster in Azure, automatically setting up the control plane and worker nodes so you don’t have to manage them manually. By specifying the resource group with --resource-group $RG, you ensure the cluster is organized within the logical container you created earlier. The --name $CLUSTER_NAME flag assigns a unique identifier to your cluster, making it easy to reference in future commands. Setting --node-count 1 creates a single worker node to start with, which is sufficient for testing and learning environments. The --generate-ssh-keys option securely generates SSH keys for node access, removing the need for manual key management. Finally, --node-vm-size Standard_B2s defines the size of the virtual machine used for the node, balancing cost and performance for lightweight workloads.
Together, these parameters simplify cluster creation by offloading the complexity of Kubernetes infrastructure management to Azure, allowing you to focus on deploying and scaling applications rather than maintaining the control plane.
Note that the --generate-ssh-keys flag will create SSH keys automatically if none exist, so this step is optional. If you want to generate (or regenerate) a key pair yourself first, run this in PowerShell:
ssh-keygen -t rsa -b 2048
az aks create `
--resource-group $RG `
--name $CLUSTER_NAME `
--node-count 1 `
--generate-ssh-keys `
--node-vm-size Standard_B2s
Resource group and cluster created in the Azure Portal

Step 5: Download Kubeconfig
The az aks get-credentials command securely downloads the cluster’s authentication details and merges them into your local kubeconfig file, allowing kubectl to communicate with the AKS cluster. By pulling down these certificates and endpoint information, you enable encrypted access to the Kubernetes API server, ensuring that all interactions are authenticated and authorized. This step is critical because without kubeconfig, kubectl has no way of knowing how to connect to your cluster or prove your identity.
az aks get-credentials --resource-group $RG --name $CLUSTER_NAME
Step 6: Verify Connection
Once credentials are in place, you confirm connectivity by running kubectl get nodes, which lists the worker nodes provisioned in your cluster. This verifies that the cluster is active and responding to API requests. Following that, kubectl cluster-info displays the cluster’s control plane and service endpoints, ensuring that the Kubernetes API is reachable and operational. Together, these checks validate that your environment is correctly configured and ready to host workloads, providing confidence in the reliability of your AKS setup.
kubectl get nodes
kubectl cluster-info
Step 7: Create the Deployment
By running kubectl create deployment nginx-app --image=nginx --replicas=2, you instruct Kubernetes to maintain two replicas of the Nginx container, ensuring that if one pod fails another is automatically started to keep the application available. This step defines the desired state of your application and leverages Kubernetes’ self‑healing capabilities to provide reliability and resilience without manual intervention.
kubectl create deployment nginx-app --image=nginx --replicas=2
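The imperative command above has a declarative equivalent that you can commit to the Git repository created earlier. A sketch of the manifest kubectl create deployment produces for this command (field values inferred from the flags; apply it with kubectl apply -f nginx-deployment.yaml):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
spec:
  replicas: 2                 # matches --replicas=2
  selector:
    matchLabels:
      app: nginx-app          # label kubectl assigns from the deployment name
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
      - name: nginx
        image: nginx          # matches --image=nginx
```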
Step 8: View Resources
After creating the deployment, you use kubectl get deployments to check the rollout status and confirm that Kubernetes is managing the desired number of replicas. Following that, kubectl get pods lists the individual pods running in the cluster, showing their names, statuses, and readiness. These commands give you real‑time visibility into how your application is being deployed and maintained, allowing you to monitor progress and quickly identify any issues. Together, they provide operational excellence by ensuring that your workloads are running as expected.
kubectl get deployments
kubectl get pods
Step 9: Create LoadBalancer Service
By running kubectl expose deployment nginx-app --type=LoadBalancer --port=80, you instruct AKS to provision an Azure Load Balancer and connect it to your Nginx pods. This step is essential because pods are only accessible inside the cluster by default, and the LoadBalancer service provides a stable public IP address that external clients can use to reach your application. Leveraging Azure’s cloud‑native networking ensures efficient ingress traffic handling without manual configuration.
kubectl expose deployment nginx-app --type=LoadBalancer --port=80
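As with the Deployment, the expose command has a declarative counterpart. A sketch of the equivalent Service manifest (the app: nginx-app selector is assumed to match the label kubectl create deployment assigns from the deployment name):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-app
spec:
  type: LoadBalancer          # matches --type=LoadBalancer
  selector:
    app: nginx-app            # routes traffic to the Deployment's pods
  ports:
  - port: 80                  # matches --port=80
    targetPort: 80
```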
Output shown in the VS Code terminal
Step 10: Retrieve Public IP
After creating the service, you use kubectl get service nginx-app --watch to monitor the service until the EXTERNAL-IP field changes from <pending> to a real IP address. This dynamic allocation process is important because it confirms that Azure has successfully provisioned the load balancer and assigned a public endpoint. Having this IP address is critical, as it is the entry point you’ll use to access your application from outside the cluster.
kubectl get service nginx-app --watch
Step 11: Test Access
Finally, you validate connectivity by running curl http://<YOUR_EXTERNAL_IP> or by opening the IP address in a browser. This confirms that the Nginx web server is reachable from the internet, completing the end‑to‑end deployment process. Testing access ensures reliability by verifying that traffic flows correctly from the public internet through the Azure Load Balancer into your Kubernetes pods, proving that your application is live and accessible.
curl http://<YOUR_EXTERNAL_IP>
Conclusion: From Setup to Internet Exposure
By following this guided journey, you’ve gone from initializing your Azure environment to deploying and exposing a live application on AKS. Starting with authentication and variable setup, you established a consistent foundation for automation. Installing kubectl and creating a resource group prepared your workspace for Kubernetes operations. Provisioning the cluster offloaded the complexity of managing control planes, while downloading kubeconfig and verifying connectivity ensured secure and reliable access. Deploying the Nginx application demonstrated Kubernetes’ self‑healing and scaling capabilities, and finally, exposing it with a LoadBalancer service showed how AKS integrates seamlessly with Azure networking to deliver applications to the internet.
This end‑to‑end process highlights the core benefits of AKS: operational excellence through automation, reliability via self‑healing workloads, performance efficiency by leveraging managed infrastructure, and security through encrypted authentication. With these fundamentals in place, you now have a working Kubernetes environment ready to host real workloads, experiment with scaling, and explore advanced features like monitoring, ingress controllers, and CI/CD pipelines.







