Deploying a Microservices Application with Azure DevOps, Kubernetes, and CI/CD

Hello! This is a step-by-step guide to deploying a microservices-based application locally using Docker Compose, followed by setting up a CI/CD pipeline with Azure DevOps, Azure Container Registry (ACR), Azure Kubernetes Service (AKS), and ArgoCD for continuous deployment.

The Docker Example Voting App is a microservices-based application written in Python, Node.js, and .NET, with the following components:

1. Voting Frontend (vote, Python/Flask): Users cast votes via a simple web interface.
2. In-Memory Store (redis, Redis): Temporarily queues votes for fast access.
3. Worker (worker, .NET): Consumes votes from Redis and updates the final count in the database.
4. Results Frontend (result, Node.js/Express): Displays real-time voting results.
5. Database (db, PostgreSQL): Stores the final vote count.

How It Works:

The user votes on the frontend, the vote is processed and stored in Redis, then the worker updates the final count in PostgreSQL, and the results are shown on a results page. All components run in separate Docker containers for scalability and isolation.

Step 1: Clone and Deploy the Application Locally Using Docker Compose

To test the application locally, create a virtual machine (VM) and deploy the application using Docker Compose.
a. Create and Access an Azure Linux Ubuntu VM

Provision an Azure Linux Ubuntu VM. Refer to Azure's official documentation for guidance if needed.
Once the VM is created, navigate to the folder containing your downloaded SSH key and connect via SSH using the command:

ssh -i <your-key-name> azureuser@<your-ip-address>

Note: Ensure port 22 is open during VM creation (this is typically enabled by default for Linux VMs).
b. Update the VM
Run the following command to update the package list:
sudo apt-get update

c. Install Docker and Docker Compose
To run the application locally, install Docker and Docker Compose so you can use docker-compose up -d to manage the multi-container application.

1. Install Required Packages

```bash
sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
```

2. Add Docker's Official GPG Key

```bash
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
```

3. Set Up the Stable Repository

```bash
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
```

4. Install Docker Engine

```bash
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io -y
```

5. Install Docker Compose

```bash
sudo curl -L "https://github.com/docker/compose/releases/download/$(curl -s https://api.github.com/repos/docker/compose/releases/latest | grep -oP '"tag_name": "\K(.*)(?=")')/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
```

6. Manage Docker as a Non-Root User

```bash
sudo usermod -aG docker $USER
newgrp docker
```

7. Verify Versions

```bash
sudo docker --version
docker-compose --version
```

8. Verify the Docker Command Without Sudo

```bash
docker ps
```

Note: The docker ps output should be empty if no containers are running.
d. Clone the Repository

Clone or fork the application repository from GitHub. Navigate to the repository directory and run:

```bash
docker-compose up -d
```

Verify that all containers are running:

```bash
docker ps
```

The output will show the application container mapped to port 5000. Access the application in two ways:

Run curl http://localhost:5000 to view the app in the terminal.
In a browser, enter http://<your-vm-public-ip>:5000.
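
To see votes actually flowing through the stack, you can peek into Redis and Postgres from the host. This is a rough sketch, assuming the default service names (redis, db) and credentials (postgres/postgres) from the example app's docker-compose.yml:

```bash
# Cast a vote via the web UI or curl first, then inspect the Redis queue
# (it may already be empty if the worker has consumed the votes)
docker-compose exec redis redis-cli LRANGE votes 0 -1

# After the worker has processed the queue, check the tallied results in Postgres
docker-compose exec db psql -U postgres -d postgres \
  -c "SELECT vote, COUNT(id) AS count FROM votes GROUP BY vote;"
```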

Step 2: Create an Azure DevOps Project and Import the Repository

a. Sign In to Azure DevOps
Access Azure DevOps and sign in. If you're a first-time user, follow the prompts to create an organization.
b. Create a Project

Create a new project named VotingApp.
Set the visibility to Private and click Create Project.

c. Import the Git Repository

In the left menu, navigate to Repos > Files.
Ensure the repository type is set to Git.
Enter the Git repository URL and click Import.

Upon successful import, the entire repository will be visible in Azure DevOps.
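
If you'd rather script this than click through the portal, the azure-devops CLI extension can create the project as well. A small sketch, with the organization URL as a placeholder:

```bash
# One-time: add the Azure DevOps extension and sign in (you'll be prompted for a PAT)
az extension add --name azure-devops
az devops login --organization https://dev.azure.com/<your-organization>

# Create the private VotingApp project
az devops project create \
  --name VotingApp \
  --organization https://dev.azure.com/<your-organization> \
  --visibility private
```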

Step 3: Create an Azure Container Registry (ACR)

The Docker images built during the CI process will be stored in Azure Container Registry (ACR).
a. Create an ACR

In the Azure Portal, search for Container Registry and click Create.
Select or create a resource group.
Provide a name for the ACR and choose either the Basic or Standard pricing plan.
Click Review & Create, then Create.

b. Access the ACR
After creation, navigate to the resource and note the ACR server name for use in Step 5.
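
If you prefer the Azure CLI over the portal, the same registry can be created and inspected from the terminal. A minimal sketch, with the resource group and registry name used elsewhere in this guide standing in for your own:

```bash
# Create the resource group if it does not already exist
az group create --name gabRG --location eastus

# Create the registry (Basic or Standard, as in the portal flow)
az acr create \
  --resource-group gabRG \
  --name gabvotingappacr \
  --sku Basic

# Print the ACR login server name needed in Step 5
az acr show --name gabvotingappacr --query loginServer -o tsv
```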

Step 4: Set Up a Self-Hosted Agent for the Pipeline
To optimize resources, reuse the VM created in Step 1 for the self-hosted agent. Ensure Docker is installed if using a new VM.
a. Configure the Agent Pool

In Azure DevOps, go to Project Settings > Agent Pools > Add Pool.
Select New, choose Self-hosted, name the pool (e.g., VotingApp-Agent), and grant access to all pipelines.

b. Set Up the Agent

In the agent pool, click New Agent and select Linux.
Run the following commands on the VM:

```bash
# Create a directory and navigate to it
mkdir myagent && cd myagent

# Install wget
sudo apt install wget -y

# Download the agent (replace with the URL from Azure DevOps)
wget https://vstsagentpackage.azureedge.net/agent/3.243.0/vsts-agent-linux-x64-3.243.0.tar.gz

# Extract the agent files
tar zxvf vsts-agent-linux-x64-3.243.0.tar.gz

# Configure the agent
./config.sh

# Start the agent
./run.sh
```

During ./config.sh, provide:
The Azure DevOps URL (e.g., https://dev.azure.com/<your-organization>).
A Personal Access Token (PAT) created in Azure DevOps.
The agent pool name created earlier.
Accept the defaults for the remaining prompts.
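
Running ./run.sh keeps the agent in the foreground and stops when you close the SSH session. If you want the agent to keep running and survive reboots, the agent package includes a systemd helper script; a minimal sketch, run from the same myagent directory after ./config.sh has completed:

```bash
# Install and start the agent as a systemd service
sudo ./svc.sh install
sudo ./svc.sh start
sudo ./svc.sh status   # confirm the service is active
```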

Step 5: Create CI Pipeline Scripts for the Microservices

Set up CI pipelines for the vote, result, and worker microservices, each with build and push stages.
a. Configure the Pipeline

In Azure DevOps, go to Pipelines > New Pipeline.
Select the Docker template for pushing images to ACR.
Choose your Azure subscription and the ACR created earlier.
Specify the image name and ensure the correct repository file is selected.

b. Pipeline Script for the vote Microservice
```yaml
# Docker
# Build and push an image to Azure Container Registry
# https://docs.microsoft.com/azure/devops/pipelines/languages/docker

trigger:
  paths:
    include:
      - vote/*

resources:
- repo: self

variables:
  dockerRegistryServiceConnection: '37868c72-32ef-488d-a490-1415f4b73792'
  imageRepository: 'votingapp'
  containerRegistry: 'gabvotingappacr.azurecr.io'
  dockerfilePath: '$(Build.SourcesDirectory)/vote/Dockerfile'
  tag: '$(Build.BuildId)'

pool:
  name: 'VotingApp-Agent'

stages:
- stage: Build
  displayName: Build the Voting App
  jobs:
  - job: Build
    displayName: Build
    steps:
    - task: Docker@2
      inputs:
        containerRegistry: 'gabVotingAppACR'
        repository: 'votingapp/vote'
        command: 'build'
        Dockerfile: 'vote/Dockerfile'

- stage: Push
  displayName: Push the Voting App
  jobs:
  - job: Push
    displayName: Pushing the Voting App
    steps:
    - task: Docker@2
      inputs:
        containerRegistry: 'gabVotingAppACR'
        repository: 'votingapp/vote'
        command: 'push'
```

Note: Update the pool.name to match your self-hosted agent name.
c. Run the Pipeline
Execute the pipeline to build and push the vote microservice image to ACR.
d. Pipeline Scripts for result and worker Microservices
Repeat Step 5 for the result and worker microservices, updating the service name (vote to result or worker) in the pipeline scripts. Full scripts are provided at the end of this guide.
e. Verify the Push
In the Azure Portal, navigate to Container Registry > Repositories to confirm that all three images (vote, result, worker) are pushed.

Stage Two: Continuous Delivery

Step 1: Create an Azure Managed Kubernetes Cluster (AKS)

In the Azure Portal, search for Azure Kubernetes Service (AKS) and click Create.
Select your subscription and resource group.
Choose Dev/Test preset, name the cluster (e.g., VotingApp-k8s), and select a region.
Set Availability Zones to Zone 1 and keep other settings as default.
In Node Pools, select the agentpool, set the scale method to Automatic, and configure min/max node counts to 1 and 2.
Enable Public IP per node and click Update.
Click Review & Create to deploy the cluster.
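
If you prefer the Azure CLI over the portal, a roughly equivalent cluster can be created like this. This is a sketch using the resource group and cluster name referenced later in this guide; adjust the region, VM size, and scaling limits to match your needs:

```bash
# Create an AKS cluster with autoscaling (1-2 nodes) and a public IP per node
az aks create \
  --resource-group gabRG \
  --name votingApp-k8s \
  --node-count 1 \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 2 \
  --enable-node-public-ip \
  --generate-ssh-keys
```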

Step 2: Install Azure CLI and Configure AKS
Create a new Azure VM to serve as a workstation for managing AKS and ArgoCD. Run the following commands:
```bash
# Update package list
sudo apt-get update

# Install Azure CLI
echo "Installing Azure CLI..."
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

# Log in to Azure
az login --use-device-code
```

Follow the prompted URL and code to authenticate. Verify the installation:

```bash
az --version
```

Install kubectl and configure AKS credentials:

```bash
# Install kubectl
sudo az aks install-cli

# Get AKS credentials
RESOURCE_GROUP="gabRG"
AKS_NAME="votingApp-k8s"
az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_NAME --overwrite-existing

# Verify the connection
kubectl get nodes
```

The output should show a single-node cluster with a Ready status.
Step 3: Install ArgoCD
Use the following script to install ArgoCD on the AKS cluster:
```bash
#!/bin/bash

# Install Argo CD
echo "Installing Argo CD..."
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Wait for Argo CD components
echo "Waiting for Argo CD components to be ready..."
kubectl wait --for=condition=Ready pods --all -n argocd --timeout=600s

# Retrieve initial admin password
echo "Retrieving the Argo CD initial admin password..."
ARGOCD_INITIAL_PASSWORD=$(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d)
echo "Argo CD initial admin password: $ARGOCD_INITIAL_PASSWORD"

# Expose Argo CD server via NodePort
echo "Exposing Argo CD server via NodePort..."
kubectl -n argocd patch svc argocd-server -p '{"spec": {"type": "NodePort"}}'

# Retrieve Argo CD server URL
ARGOCD_SERVER=$(kubectl -n argocd get svc argocd-server -o jsonpath='{.spec.clusterIP}')
ARGOCD_PORT=$(kubectl -n argocd get svc argocd-server -o jsonpath='{.spec.ports[0].nodePort}')
echo "You can access the Argo CD server at http://$ARGOCD_SERVER:$ARGOCD_PORT"

# Install Argo CD CLI (optional)
echo "Installing Argo CD CLI..."
sudo curl -sSL -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
sudo chmod +x /usr/local/bin/argocd

echo "Logging into Argo CD CLI..."
argocd login $ARGOCD_SERVER:$ARGOCD_PORT --username admin --password $ARGOCD_INITIAL_PASSWORD --insecure

echo "Argo CD installation and setup complete!"
```
Save the script as install-argo-cd.sh, make it executable (chmod +x install-argo-cd.sh), and run it (./install-argo-cd.sh).
a. Configure Port Rule for ArgoCD

Expose port 31436 for the ArgoCD NodePort service (the NodePort assigned in your cluster may differ; use the port printed by the install script).
In the Azure Portal, search for Virtual Machine Scale Set (VMSS), navigate to Networking > Create Port Rule, and add an Inbound Rule for port 31436.
Access ArgoCD at http://<node-public-ip>:31436.
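
If you'd rather script the rule than click through the VMSS blade, the nodes' network security group lives in the AKS node resource group. A minimal sketch; the NSG name will differ in your cluster, and the rule name and priority here are arbitrary choices:

```bash
# Find the node resource group and list its NSGs
NODE_RG=$(az aks show --resource-group gabRG --name votingApp-k8s --query nodeResourceGroup -o tsv)
az network nsg list --resource-group $NODE_RG --query "[].name" -o tsv

# Open the ArgoCD NodePort (31436 here; use the port printed by the install script)
az network nsg rule create \
  --resource-group $NODE_RG \
  --nsg-name <nsg-name> \
  --name AllowArgoCDNodePort \
  --priority 1001 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 31436
```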

b. Log In to ArgoCD

Retrieve the ArgoCD admin password:

```bash
kubectl get secret -n argocd
kubectl edit secret argocd-initial-admin-secret -n argocd
```

Copy the base64-encoded password from the secret and decode it:

```bash
echo <encoded-password> | base64 --decode
```

Log in to ArgoCD with:
Username: admin
Password: the decoded password

Step 4: Configure ArgoCD

Connect ArgoCD to the Azure repository containing Kubernetes manifest files to monitor and deploy changes to AKS.
a. Connect to Azure Repository

Copy the HTTPS URL of your Azure repository.
In Azure DevOps, create a Personal Access Token (PAT) under User Settings > Personal Access Tokens with read or full access.
In ArgoCD, go to Settings > Connect Repo > VIA HTTPS, paste the repository URL, and add the PAT.
Test the connection by clicking CONNECT.

b. Connect to AKS

In ArgoCD, create a New Application.
Set:
Application Name: Choose a name.
Project: default.
SYNC POLICY: Automatic.
Repository URL: Select the Azure repository URL.
Path: k8s-specifications.
Namespace: default.

Create the application and wait for the manifest files to deploy.

Verify pod status in ArgoCD or via the terminal:
kubectl get pods
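
If you installed the argocd CLI with the script above, the same repository connection and application can also be created from the terminal. A rough equivalent sketch, with the organization, PAT, and application name as placeholder assumptions:

```bash
# Register the Azure Repos URL with a PAT (any non-empty username works with a PAT)
argocd repo add https://dev.azure.com/<your-org>/votingApp/_git/votingApp \
  --username anything --password <your-pat>

# Create the application pointing at the k8s-specifications folder,
# auto-synced into the default namespace of the current cluster
argocd app create votingapp \
  --repo https://dev.azure.com/<your-org>/votingApp/_git/votingApp \
  --path k8s-specifications \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace default \
  --sync-policy automated
```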

Step 5: Automate Kubernetes Manifest Updates

To integrate the CI and CD stages, use a Bash script to update Kubernetes manifests in the Azure repository when new images are pushed to ACR.
a. Add the Update Script
Place the following script in the vote service folder:
```bash
#!/bin/bash

set -x

# Set the repository URL (embed a PAT so the agent can push without prompting)
REPO_URL="https://<your-pat>@dev.azure.com/GabrielOkom/votingApp/_git/votingApp"

# Clone the git repository into the /tmp directory
git clone "$REPO_URL" /tmp/temp_repo

# Navigate into the cloned repository directory
cd /tmp/temp_repo

# Update the Kubernetes manifest file
# $1 = service name, $2 = image repository, $3 = image tag
# Replace <acr-login-server> with your registry, e.g. gabvotingappacr.azurecr.io
sed -i "s|image:.*|image: <acr-login-server>/$2:$3|g" k8s-specifications/$1-deployment.yaml

# Add the modified files
git add .

# Commit the changes
git commit -m "Update Kubernetes manifest"

# Push the changes back to the repository
git push

# Cleanup
rm -rf /tmp/temp_repo
```

This script updates the image field in the Kubernetes manifest (vote-deployment.yaml) with the new image tag from ACR.
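
For a quick sanity check, you can run the script by hand from the repository root before wiring it into the pipeline. The arguments mirror what the pipeline passes (service name, image repository, tag); the tag 18 below is just an example value:

```bash
# Make the script executable and update vote-deployment.yaml to tag 18
chmod +x vote/updateK8sManifests.sh
./vote/updateK8sManifests.sh vote votingapp 18
```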
b. Update the vote Pipeline
Add a new stage to the vote pipeline:
```yaml
# Docker
# Build and push an image to Azure Container Registry
# https://docs.microsoft.com/azure/devops/pipelines/languages/docker

trigger:
  paths:
    include:
      - vote/*

resources:
- repo: self

variables:
  dockerRegistryServiceConnection: 'a777b3f1-28d4-40f3-bbdf-0904d5c89545'
  imageRepository: 'votingapp'
  containerRegistry: 'gabvotingappacr.azurecr.io'
  dockerfilePath: '$(Build.SourcesDirectory)/vote/Dockerfile'
  tag: '$(Build.BuildId)'

pool:
  name: 'VotingApp-Agent'

stages:
- stage: Build
  displayName: Build the Voting App
  jobs:
  - job: Build
    displayName: Build
    steps:
    - task: Docker@2
      inputs:
        containerRegistry: 'gabVotingAppACR'
        repository: 'votingapp/vote'
        command: 'build'
        Dockerfile: 'vote/Dockerfile'

- stage: Push
  displayName: Push the Voting App
  jobs:
  - job: Push
    displayName: Push
    steps:
    - task: Docker@2
      inputs:
        containerRegistry: 'gabVotingAppACR'
        repository: 'votingapp/vote'
        command: 'push'

- stage: Update_bash_script
  displayName: Update Bash Script
  jobs:
  - job: Updating_repo_with_bash
    displayName: Updating repo using bash script
    steps:
    - task: ShellScript@2
      inputs:
        scriptPath: 'vote/updateK8sManifests.sh'
        args: 'vote $(imageRepository) $(tag)'
```

c. Optional: Adjust ArgoCD Sync Interval
If updates are delayed, edit the ArgoCD ConfigMap:

```bash
kubectl edit cm argocd-cm -n argocd
```

Add:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  timeout.reconciliation: 10s
```

Note: For production, set timeout.reconciliation to at least 180s to avoid overloading services.
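
In recent Argo CD releases the reconciliation timeout is read at startup, so the application controller typically needs a restart to pick up the change; something like:

```bash
# Restart the application controller so it reloads timeout.reconciliation
kubectl -n argocd rollout restart statefulset argocd-application-controller
```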
Step 6: Resolve ImagePullBackOff Error
If Kubernetes fails to pull images from ACR, configure an imagePullSecret.

In the Azure Portal, go to Container Registry > Settings > Access Keys, enable Admin User, and copy the username and password.
Create a secret (with admin access enabled, the username is the registry name and the password is the admin password you copied; a service principal ID and secret also work here):

```bash
kubectl create secret docker-registry <secret-name> \
  --namespace <namespace> \
  --docker-server=<container-registry-name>.azurecr.io \
  --docker-username=<acr-username> \
  --docker-password=<acr-password>
```
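
As an alternative to managing a pull secret, you can let AKS pull from ACR directly by attaching the registry to the cluster, which grants the kubelet identity pull access. A sketch using the resource names assumed earlier in this guide:

```bash
# Attach the ACR to the AKS cluster so nodes can pull images without an imagePullSecret
az aks update \
  --resource-group gabRG \
  --name votingApp-k8s \
  --attach-acr gabvotingappacr
```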

Edit vote-deployment.yaml in the Azure repository under k8s-specifications to include:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: vote
  name: vote
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vote
  template:
    metadata:
      labels:
        app: vote
    spec:
      containers:
      - image: gabvotingappacr.azurecr.io/votingapp/vote:18
        name: vote
        ports:
        - containerPort: 80
          name: vote
      imagePullSecrets:
      - name: <secret-name>
```

Commit the changes and verify pod status:

```bash
kubectl get pods
kubectl get svc
```

Access the application at http://<node-public-ip>:31000 (ensure port 31000 is open in the VMSS inbound rules).

Step 7: Verify the CI/CD Process

In the Azure repository, edit the app.py file in the vote directory to update the voting options (e.g., change Rain and Snow to Summer and Winter):

```python
from flask import Flask, render_template, request, make_response, g
from redis import Redis
import os
import socket
import random
import json
import logging

option_a = os.getenv('OPTION_A', "Summer")
option_b = os.getenv('OPTION_B', "Winter")
hostname = socket.gethostname()
app = Flask(__name__)
gunicorn_error_logger = logging.getLogger('gunicorn.error')
app.logger.handlers.extend(gunicorn_error_logger.handlers)
app.logger.setLevel(logging.INFO)

def get_redis():
    if not hasattr(g, 'redis'):
        g.redis = Redis(host="redis", db=0, socket_timeout=5)
    return g.redis

@app.route("/", methods=['POST', 'GET'])
def hello():
    voter_id = request.cookies.get('voter_id')
    if not voter_id:
        voter_id = hex(random.getrandbits(64))[2:-1]
    vote = None
    if request.method == 'POST':
        redis = get_redis()
        vote = request.form['vote']
        app.logger.info('Received vote for %s', vote)
        data = json.dumps({'voter_id': voter_id, 'vote': vote})
        redis.rpush('votes', data)
    resp = make_response(render_template(
        'index.html',
        option_a=option_a,
        option_b=option_b,
        hostname=hostname,
        vote=vote,
    ))
    resp.set_cookie('voter_id', voter_id)
    return resp

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80, debug=True, threaded=True)
```

Commit the changes to trigger the pipeline.
Verify the application at http://<node-public-ip>:31000 to confirm the updated options are reflected.

Step 8: Complete the CI/CD for worker and result Microservices

Add the updateK8sManifests.sh stage to the worker and result pipelines, similar to the vote pipeline. Update result-deployment.yaml and worker-deployment.yaml to include imagePullSecrets.
Worker Pipeline Script
```yaml
trigger:
  paths:
    include:
      - worker/*

resources:
- repo: self

variables:
  dockerRegistryServiceConnection: 'ad3eca0a-4219-4a32-9df0-29fd9ba340b8'
  imageRepository: 'votingapp'
  containerRegistry: 'gabvotingappacr.azurecr.io'
  dockerfilePath: '$(Build.SourcesDirectory)/worker/Dockerfile'
  tag: '$(Build.BuildId)'

pool:
  name: 'voting-agent-app'

stages:
- stage: Build
  displayName: Build the Voting App
  jobs:
  - job: Build
    displayName: Build
    steps:
    - task: Docker@2
      inputs:
        containerRegistry: '$(dockerRegistryServiceConnection)'
        repository: '$(imageRepository)'
        command: 'build'
        Dockerfile: 'worker/Dockerfile'
        tags: '$(tag)'

- stage: Push
  displayName: Push the Voting App
  jobs:
  - job: Push
    displayName: Push
    steps:
    - task: Docker@2
      inputs:
        containerRegistry: '$(dockerRegistryServiceConnection)'
        repository: '$(imageRepository)'
        command: 'push'
        tags: '$(tag)'

- stage: Update_bash_script
  displayName: Update Bash Script
  jobs:
  - job: Updating_repo_with_bash
    displayName: Updating repo using bash script
    steps:
    - script: |
        dos2unix scripts/updateK8sManifests.sh
        bash scripts/updateK8sManifests.sh "worker" "$(imageRepository)" "$(tag)"
      displayName: Run UpdateK8sManifests Script
```

Result Pipeline Script
```yaml
trigger:
  paths:
    include:
      - result/*

resources:
- repo: self

variables:
  dockerRegistryServiceConnection: 'ad3eca0a-4219-4a32-9df0-29fd9ba340b8'
  imageRepository: 'votingapp'
  containerRegistry: 'gabvotingappacr.azurecr.io'
  dockerfilePath: '$(Build.SourcesDirectory)/result/Dockerfile'
  tag: '$(Build.BuildId)'

pool:
  name: 'voting-agent-app'

stages:
- stage: Build
  displayName: Build the Voting App
  jobs:
  - job: Build
    displayName: Build
    steps:
    - task: Docker@2
      inputs:
        containerRegistry: '$(dockerRegistryServiceConnection)'
        repository: '$(imageRepository)'
        command: 'build'
        Dockerfile: 'result/Dockerfile'
        tags: '$(tag)'

- stage: Push
  displayName: Push the Voting App
  jobs:
  - job: Push
    displayName: Push
    steps:
    - task: Docker@2
      inputs:
        containerRegistry: '$(dockerRegistryServiceConnection)'
        repository: '$(imageRepository)'
        command: 'push'
        tags: '$(tag)'

- stage: Update_bash_script
  displayName: Update Bash Script
  jobs:
  - job: Updating_repo_with_bash
    displayName: Updating repo using bash script
    steps:
    - script: |
        dos2unix scripts/updateK8sManifests.sh
        bash scripts/updateK8sManifests.sh "result" "$(imageRepository)" "$(tag)"
      displayName: Run UpdateK8sManifests Script
```

Step 9: Verify Vote Counts

Check vote counts in Redis or the database:

```bash
kubectl exec -it <redis-pod-name> -- redis-cli
```

This completes the setup of a fully functional CI/CD pipeline for the microservices application.
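
Inside redis-cli, the pending votes sit in the votes list, and the tallied results live in the votes table in PostgreSQL. A rough sketch of both checks, assuming the default credentials (postgres/postgres) and schema from the example app's manifests:

```bash
# Pending (unprocessed) votes queued by the vote frontend
kubectl exec -it <redis-pod-name> -- redis-cli LRANGE votes 0 -1

# Tallied votes written by the worker
kubectl exec -it <db-pod-name> -- psql -U postgres -d postgres \
  -c "SELECT vote, COUNT(id) AS count FROM votes GROUP BY vote;"
```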
