Daniel Favour

Deploy your Application to Azure Kubernetes Service

Cloud computing has become a critical component of application development and deployment in today's fast-paced digital world. Azure, a major cloud platform, provides a comprehensive suite of tools and services to enable easy deployment, management, and scaling of applications. However, deploying applications to the cloud can be a complex and overwhelming task without appropriate guidance and the right set of tools and services.

This article guides you through deploying your application to Azure, from containerization to monitoring and scaling.

Prerequisites

Before we get started, ensure that you have the following in place:

  • An Azure account with an active subscription
  • The Azure CLI installed
  • Terraform installed
  • Docker installed
  • Node.js and npm installed
  • A GitHub account
  • Basic familiarity with Kubernetes and Docker

Application Overview

The application used in this project is built with Node.js and Express.js. It is a feedback app: users submit feedback through its interface, and the application stores each submission in a designated feedback directory, keeping user input organized and retained.

GitHub URL

https://github.com/FavourDaniel/SCA-Project

To clone this repository:

git clone https://github.com/FavourDaniel/SCA-Project
cd SCA-Project

Architecture Diagram

Below is a high-level overview of what we will be building. There will be some other components in place, such as Virtual Networks and Load Balancers, but this architecture diagram should give you a clear picture of what the infrastructure will look like.

Architecture Diagram

Let's get started.

Testing the application locally

npm install express
node server.js

Open your browser and navigate to localhost:80 to view the running application.
Application
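You can also confirm from a second terminal that the server responds; this only checks for an HTTP response and assumes nothing about the app's routes:

curl -I http://localhost:80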

Login to your Azure Account

To set up your infrastructure on the Azure cloud platform, a connection between your terminal and your Azure account needs to be established. To accomplish this using the Azure CLI, run the following command:

az login

You will be redirected to sign in to your account. After successful authentication, your terminal should display an output similar to the one below:

[
  {
    "cloudName": "AzureCloud",
    "homeTenantId": "56d87bs9-fr5n-8zzq-qq01-419l20234j0f",
    "id": "5h12gg64-60d2-1b4h-6j7c-c810419k33v2",
    "isDefault": true,
    "managedByTenants": [],
    "name": "Visual Studio Enterprise Subscription",
    "state": "Enabled",
    "tenantId": "56d87bs9-fr5n-8zzq-qq01-419l20234j0f",
    "user": {
      "name": "username@gmail.com",
      "type": "user"
    }
  }
]

Copy the id value; it is your subscription ID. Next, run the following commands:

subscription="<subscriptionId>" # add subscription here
az account set -s $subscription

Replace <subscriptionId> with the id value you copied, and the connection will be established.
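To double-check that the correct subscription is now active, you can run:

az account show -o table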

Setting up the Infrastructure

The infrastructure will be set up on Azure using Terraform. From the repository you cloned earlier, change into the terraform-infrastructure directory and run the following commands:

cd terraform-infrastructure
terraform init
terraform fmt
terraform validate
terraform plan
terraform apply

terraform

I suggest running these commands one at a time: init initializes the working directory and downloads the providers, fmt formats the configuration files, validate checks the configuration for errors, plan previews the changes, and apply creates the infrastructure specified in the main.tf file.

NB: You might encounter errors if any of the default resource names in the vars.tf file are already in use on Azure by someone else (these are the default values assigned to each resource variable). In that case, assign a different name to the resource that failed to create.
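As an illustration, you can override a variable from the command line instead of editing the file. The variable name below is hypothetical; use the one actually declared in vars.tf:

# "acr_name" is a made-up variable name; check vars.tf for the real one
terraform apply -var="acr_name=myuniqueacr12345"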

To confirm that your resources have been created successfully, you can check your Azure portal and you should see the following resources created:

  • Resource Group

Resource Group

  • Azure Container Registry

ACR

  • Azure Kubernetes Service

AKS

  • Storage Account

Storage Account

These resources were specified in the Terraform configuration file.

NB: Because Azure Kubernetes Service is a managed service, it automatically creates other resources alongside the cluster, including Virtual Networks, a new Resource Group, a Load Balancer, a Network Security Group, a Storage Class, a Route Table, etc. You do not need to create these yourself; that is part of what makes it a managed service.

Build and Push the Docker Image

In the root of the project directory, there is a Dockerfile which serves as the template for building the Docker image. You will not build this image manually; a job for it has already been set up in the GitHub Actions workflow, which can also be found in the root directory.

GitHub Actions is a CI/CD tool that automates software development workflows, including building, testing, and deploying code, directly from your GitHub repository.

To build the Docker image, create your own repository on GitHub and push the project there. Pushing will trigger the workflow, but because it has not been fully configured yet, the job will fail.

The image will be built but will not be pushed. To understand why, let's walk through the workflow. The first step is:

- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v2

This first step uses the docker/setup-buildx-action@v2 action to create and boot a builder using the docker-container driver.

After this succeeds, the workflow proceeds to log in to the Azure Container Registry that was created earlier:

- name: Login to Azure Container Registry
  uses: Azure/docker-login@v1
  with:
    login-server: ${{ secrets.ACR_REGISTRY_NAME }}.azurecr.io
    username: ${{ secrets.ACR_USERNAME }}
    password: ${{ secrets.ACR_PASSWORD }}

Here, Azure/docker-login@v1 is an action that logs in to an Azure Container Registry. Because the login server, username, and password of your Container Registry have not been provided yet, this step is expected to fail.

The same applies to the next step:

- name: Build Docker image
  uses: docker/build-push-action@v4
  with:
    context: .
    file: ./Dockerfile
    push: true
    tags: ${{ secrets.ACR_REGISTRY_NAME }}.azurecr.io/myrepository:${{ github.sha }}

Here, docker/build-push-action@v4 is an action that builds and pushes Docker images using Docker Buildx. Even if it manages to build the image, it will fail to push it because it lacks the credentials that grant access to our Azure Container Registry.
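If you would like to confirm that the image builds before fixing the workflow, the equivalent manual steps look roughly like this, assuming <registryName> is the registry name defined in vars.tf:

# log in to ACR using your existing az login session
az acr login --name <registryName>
# build the image and tag it with the registry's login server
docker build -t <registryName>.azurecr.io/myrepository:test .
# push the tagged image to the registry
docker push <registryName>.azurecr.io/myrepository:test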

Grant the Workflow access

To grant the workflow access to your Container Registry, head over to your Azure portal. Under Container Registries, select your Container registry and from the left panel, select Access keys and enable Admin user.

Access Keys

This will generate passwords for you which you can always regenerate later for security reasons.

Copy the Login Server, Username, and one of the passwords.

Head back to your GitHub repository and select Settings.

Settings

Scroll down and select Actions under Secrets and variables.

Actions

Click on the New repository secret button.

Repository secret

Name the first secret ACR_REGISTRY_NAME. Because the workflow appends .azurecr.io to this value, paste only the registry name, which is the part of the Login Server before .azurecr.io, into the secret box and click on Add secret.

Do the same for ACR_USERNAME and ACR_PASSWORD, replacing them with the username and password you copied. Afterwards, you should have this:

Image secrets
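If you prefer working from the terminal, the same secrets can be created with the GitHub CLI, assuming gh is installed and authenticated against your repository:

gh secret set ACR_REGISTRY_NAME --body "<registryName>"
gh secret set ACR_USERNAME --body "<username>"
gh secret set ACR_PASSWORD --body "<password>"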

Now that the workflow has access to your Container Registry, re-run the workflow and it should pass.

rebuilt pipeline

Check your container registry under Repositories and you should see your image there.

Repository

Install the Kubernetes CLI

To manage your AKS cluster using the Kubernetes command-line interface (CLI), you will need to install the kubectl tool on your local machine.
You can do this by running the following command:

az aks install-cli

This command will install the kubectl tool on your machine, allowing you to interact with your AKS cluster using the Kubernetes API. Once the installation is complete, you can verify that kubectl is installed by running the kubectl version command.
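For example, to print just the client version:

kubectl version --client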

Connect to the AKS cluster using kubectl

To connect to your AKS cluster using kubectl, you need to configure your local kubectl client with the credentials for your cluster. You can do this by running the following command:

az aks get-credentials --resource-group DemoRG --name myAKSCluster

The Resource Group name DemoRG and cluster name myAKSCluster are defined in the vars.tf file.

The above command retrieves the Kubernetes configuration from the AKS cluster and merges it with your local Kubernetes configuration file. This makes it possible for you to use kubectl to interact with the AKS cluster.

After running the command, you should see output similar to the following:

Merged "myAKSCluster" as current context in /home/daniel/.kube/config

This indicates that the Kubernetes context named myAKSCluster has been merged into the kubeconfig file located at /home/daniel/.kube/config.

This kubeconfig file is used to store cluster access information, such as cluster, namespace, and user details, for the kubectl command-line tool.

In this case, the myAKSCluster context has been set as the current context, which means that subsequent kubectl commands will use the configuration from this context to communicate with the Kubernetes cluster.
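You can confirm which context is active, and list all available contexts, with:

kubectl config current-context
kubectl config get-contexts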

To confirm that your kubectl client is properly configured to connect to your AKS cluster, you can run the following command:

kubectl get nodes

This command will return a list of the nodes in your AKS cluster. In the Terraform configuration, only one (1) node was specified, so you should see a single node.
The output should look similar to the below:

NAME                                  STATUS   ROLES   AGE     VERSION
aks-aksnodepool-16067207-vmss000000   Ready    agent   2d13h   v1.25.6

Deploy your Application to the Cluster

To deploy our application to the AKS cluster, we need to create a deployment, a service, a persistent volume claim, and a storage class.

  • Deployment: Manages a set of replicas of your application's containers and handles scaling, rolling updates, and rollbacks.
  • Service: Provides a stable IP address and DNS name for your application within the cluster.
  • Persistent Volume Claim: Requests a specific amount of storage for your application's data.
  • Storage Class: Defines the type of storage that will be provisioned for the claim. In this case, we will be using Azure File Storage as our storage class.

To create these resources, we first need to write their manifest files. We will create the Storage Class first, because the Persistent Volume Claim relies on it for dynamic provisioning.

Storage Class

To create a manifest for the storage class, run the below command:

touch storageclass.yml

Then, paste the below configuration into it and save:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azure-file-storage
provisioner: kubernetes.io/azure-file
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=1000
  - gid=1000
  - mfsymlinks
  - nobrl
  - cache=none
parameters:
  skuName: Standard_LRS
  location: eastus

Here we defined Azure File Storage as our storage provisioner.

To create the storage class, run the following command:

kubectl apply -f storageclass.yml

To see the deployed storage class, run:

kubectl get storageclass

This will show you a list of all the available storage classes, including the one you just created.

You should see an output similar to the following:

NAME                    PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
azure-file-storage      kubernetes.io/azure-file   Delete          Immediate              false                  2d12h
azurefile               file.csi.azure.com         Delete          Immediate              true                   2d14h
azurefile-csi           file.csi.azure.com         Delete          Immediate              true                   2d14h
azurefile-csi-premium   file.csi.azure.com         Delete          Immediate              true                   2d14h
azurefile-premium       file.csi.azure.com         Delete          Immediate              true                   2d14h
default (default)       disk.csi.azure.com         Delete          WaitForFirstConsumer   true                   2d14h
managed                 disk.csi.azure.com         Delete          WaitForFirstConsumer   true                   2d14h
managed-csi             disk.csi.azure.com         Delete          WaitForFirstConsumer   true                   2d14h
managed-csi-premium     disk.csi.azure.com         Delete          WaitForFirstConsumer   true                   2d14h
managed-premium         disk.csi.azure.com         Delete          WaitForFirstConsumer   true                   2d14h

From the above output, we can see storage classes backed by both Azure Files and Azure Disks.

When the Azure Kubernetes Service cluster was created, it created the Disk-based storage classes by default. We are creating this File storage class because we want to use File storage rather than Disk storage. If you decide to use Disk storage, creating this storage class is not necessary.

Persistent Volume Claim

The Persistent Volume Claim is the next resource we will create, because when we create our deployment, it will look for the persistent volume claim we specified and fail if it cannot locate it.

To create the manifest for the PVC, run the below command:

touch pvc.yml

Paste the following configuration into it and save.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scademo-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: azure-file-storage

See Persistent Volume Claim for manifest breakdown.

To create the PVC, run the below:

kubectl apply -f pvc.yml

To check the created PVC, run the below command:

kubectl get pvc

It should return the below output:

NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS         AGE
scademo-pvc   Bound    pvc-20a3d37b-f734-4f53-ba96-d94e63463623   20Gi       RWO            azure-file-storage   2d12h

We can see that the PVC has been bound to the azure-file-storage storage class we created.
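If the claim ever remains in a Pending state instead, describing it surfaces the provisioning events and any errors:

kubectl describe pvc scademo-pvc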

If you check your Storage Account in your Azure portal, you should see the file share that was created.

pvc

Deployment

Now that the storage class and PVC have been created, we can create the deployment.

To create a manifest for the deployment, run the below command:

touch deployment.yml

Paste the below configuration into the file and save.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: scademo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: scademo
  template:
    metadata:
      labels:
        app: scademo
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
      - name: scademo
        image: scacontainerregistry.azurecr.io/scademo:v1
        resources:
          requests:
            cpu: 100m 
            memory: 128Mi
          limits:
            cpu: 100m
            memory: 256Mi
        ports:
        - containerPort: 80 
        env:
        - name: TEST
          value: "scademo"
        volumeMounts:
        - mountPath: /tmp/data
          name: scademo-pvc
      volumes:
      - name: scademo-pvc
        persistentVolumeClaim:
          claimName: scademo-pvc

Refer to Deployments and YAML to see a breakdown of the file configuration. Note that the image field must point at the image in your own registry, so update the registry name, repository, and tag to match what your workflow pushed.

To create this deployment, run:

kubectl apply -f deployment.yml

To check your deployment, run:

kubectl get deployments

It should return the below:

NAME      READY   UP-TO-DATE   AVAILABLE   AGE
scademo   1/1     1            1           44h

NB: Deployments automatically create and manage pods for you, eliminating the need for manual pod creation. Whenever you create a Deployment, it creates a ReplicaSet, which in turn creates the pods.

To see the deployed pod, run:

kubectl get pods

It should return the below output:

NAME                       READY   STATUS    RESTARTS   AGE
scademo-7885bbb755-q464b   1/1     Running   0          44h

To see the Replica Set, run:

kubectl get rs

It should return the below output:

NAME                 DESIRED   CURRENT   READY   AGE
scademo-7885bbb755   1         1         1       44h

Service

Now we need to expose this deployment so that we can access it from outside the cluster. To do this, we will create a service.

To create a manifest for the service, run the below:

touch svc.yml

Paste the below configuration into it and save.

apiVersion: v1
kind: Service
metadata:
  name: scademo
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: scademo   

In the above configuration, we are utilizing a LoadBalancer to access the deployment. A LoadBalancer is a type of service that creates an external load balancer in a cloud environment and assigns a static external IP address to the service. It is commonly used to distribute incoming traffic across worker nodes and enables access to a service from outside the cluster.

To create the service, run the below command:

kubectl apply -f svc.yml

To see the created service, execute the following command:

kubectl get svc

It will return an output similar to the below:

NAME         TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)        AGE
kubernetes   ClusterIP      10.0.0.1      <none>         443/TCP        2d14h
scademo      LoadBalancer   10.0.44.214   40.88.195.57   80:32591/TCP   44h

The EXTERNAL-IP will show <pending> when the service is first created; after a few seconds, an external IP address is allocated.
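Rather than re-running the command, you can watch the service until the IP appears:

kubectl get svc scademo --watch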

Copy and paste the external IP into your web browser and you should see your application running:

Running application

Setup Monitoring for your Cluster

To effectively monitor resources in your Kubernetes cluster, it's recommended to set up Container Insights in Azure, which provides comprehensive monitoring of the entire cluster. Container Insights can be integrated with a Log Analytics Workspace to enable additional visibility and monitoring capabilities for containerized applications running in a Kubernetes environment.

Log Analytics Workspace

This is a centralized storage and management location for log data collected by Azure Monitor, including data from your AKS clusters. It allows you to query, analyze, and visualize the collected data to obtain insights and troubleshoot issues.

To enable monitoring for your AKS cluster, you need to specify a Log Analytics Workspace where the collected monitoring data will be stored.

If you do not have an existing Workspace, a default one can be created for the resource group automatically, with a name in the format DefaultWorkspace-<GUID>-<Region>.

To create this default workspace, execute the following command:

az aks enable-addons -a monitoring -n myAKSCluster -g DemoRG

A monitoring addon is a component that enables you to monitor the performance and health of your AKS cluster using Azure Monitor for containers. When you enable it, a containerized version of the Log Analytics agent is deployed on each node in your cluster, allowing Azure Monitor for containers to collect performance and health data.

This Log Analytics agent is a component running on each node of your Kubernetes cluster, responsible for collecting logs and metrics from the node and the applications running on it. The agent is deployed as a DaemonSet in the cluster, ensuring an instance of the agent runs on every node.

NB: The command requires the registration of the Microsoft.OperationsManagement resource provider with your subscription.

Microsoft.OperationsManagement refers to Microsoft Operations Management Suite (OMS), a cloud-based service that provides monitoring and management capabilities across Azure.

If this service is not registered with your subscription, you will receive the following error output:

(AddContainerInsightsSolutionError) Code="MissingSubscriptionRegistration" Message="The subscription is not registered to use namespace 'Microsoft.OperationsManagement'. See https://aka.ms/rps-not-found for how to register subscriptions." Details=[{"code":"MissingSubscriptionRegistration","message":"The subscription is not registered to use namespace 'Microsoft.OperationsManagement'. See https://aka.ms/rps-not-found for how to register subscriptions.","target":"Microsoft.OperationsManagement"}]
Code: AddContainerInsightsSolutionError
Message: Code="MissingSubscriptionRegistration" Message="The subscription is not registered to use namespace 'Microsoft.OperationsManagement'. See https://aka.ms/rps-not-found for how to register subscriptions." Details=[{"code":"MissingSubscriptionRegistration","message":"The subscription is not registered to use namespace 'Microsoft.OperationsManagement'. See https://aka.ms/rps-not-found for how to register subscriptions.","target":"Microsoft.OperationsManagement"}]

To check the registration status, run the following commands:

az provider show -n Microsoft.OperationsManagement -o table
az provider show -n Microsoft.OperationalInsights -o table

If they are not registered, register them by executing the following commands:

az provider register --namespace Microsoft.OperationsManagement
az provider register --namespace Microsoft.OperationalInsights
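Registration can take a few minutes. You can poll until the state reads Registered:

az provider show -n Microsoft.OperationsManagement --query registrationState -o tsv
az provider show -n Microsoft.OperationalInsights --query registrationState -o tsv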

You can check your Azure portal under Log Analytics Workspace to see the created Workspace.

LAW

Verify Agent Deployment

To verify that the monitoring agent was successfully deployed into the cluster, run the following command:

kubectl get ds ama-logs --namespace=kube-system

The above command retrieves information about a DaemonSet named ama-logs in the kube-system namespace.

ama stands for Azure Monitor Agent.

The command should return the below output:

NAME       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
ama-logs   1         1         1       1            1           <none>          35h

We have only one (1) agent running because we have a single node. If we had two nodes, we would have two (2) agents.

Head back to your Azure portal. Under Azure Monitor, select View under Container Insights, or select Containers.

Container Insight

Under Monitored Clusters you should see your Cluster being monitored.
Monitored Cluster

Select your Cluster and you should see its Insights. You can switch the view to show insights for nodes, controllers, and containers, and also get a status report.

Cluster

You can also choose to view the metrics, as well as other settings and configurations.

Metrics.

Scaling the deployment

To scale our Kubernetes deployment and ensure it can handle increased traffic, we need to edit our deployment configuration.

Initially, we specified our deployment to have only one replica, which is why it created a single pod. To scale the deployment, we can edit the deployment.yml file and change the replica value from 1 to 4 (or any desired number) to create four replicas of the application.

In the spec section, change the number of replicas to four (4):

spec:
  replicas: 4
  selector:
    matchLabels:
      app: scademo

After making the change, save the file and apply the updated configuration by running the following command:

kubectl apply -f deployment.yml
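Alternatively, you can scale imperatively and watch the rollout complete. Keep in mind that changes made this way are not reflected in your deployment.yml:

# scale the deployment without editing the manifest
kubectl scale deployment scademo --replicas=4
# watch the rollout until all replicas are available
kubectl rollout status deployment/scademo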

After applying the changes to the deployment file, give it some time to take effect. This will update the deployment with the new configuration and create the additional replicas. To confirm that the replicas have been created, we can run the following command to get the status of the deployment:

kubectl get deployment

The output should show that the scademo deployment now has 4 replicas, as indicated by the READY column.

NAME      READY   UP-TO-DATE   AVAILABLE   AGE
scademo   4/4     4            4           13h

Previously, there was only one replica of the deployment, so the output showed 1/1. After increasing the replica count to 4, the output now shows 4/4, indicating that there are four replicas of the deployment running and all of them are available.

You can also verify that there are 4 running pods by running the command:

kubectl get pods

It should return the below output:

NAME                       READY   STATUS    RESTARTS   AGE
scademo-7885bbb755-644r9   1/1     Running   0          13h
scademo-7885bbb755-q464b   1/1     Running   0          35h
scademo-7885bbb755-gjnjp   1/1     Running   0          13h
scademo-7885bbb755-kvgg5   1/1     Running   0          13h

Additionally, since deployments are used to manage replica sets, you can check the replica sets by running the command:

kubectl get replicaset

It should return the below output:

NAME                 DESIRED   CURRENT   READY   AGE
scademo-7885bbb755   4         4         4       35h

The output shows that the scademo-7885bbb755 replica set has 4 replicas that are all ready and available.

That is how you scale your application in Kubernetes.

Conclusion

In this article, we explored the process of deploying an application to Azure and the various tools and services that can make the process seamless and efficient. By leveraging the right knowledge and resources, you can unlock the full potential of the Azure platform and take advantage of its vast array of features and capabilities.
