<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Lohith S</title>
    <description>The latest articles on DEV Community by Lohith S (@lohith_s_09e8e9d6b5dfc5c7).</description>
    <link>https://dev.to/lohith_s_09e8e9d6b5dfc5c7</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3417025%2F3742d090-3901-47a0-8f22-9fb2ac045254.jpg</url>
      <title>DEV Community: Lohith S</title>
      <link>https://dev.to/lohith_s_09e8e9d6b5dfc5c7</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/lohith_s_09e8e9d6b5dfc5c7"/>
    <language>en</language>
    <item>
      <title>Deploying a Microservices Application with Azure DevOps, Kubernetes and CI/CD</title>
      <dc:creator>Lohith S</dc:creator>
      <pubDate>Wed, 06 Aug 2025 17:21:34 +0000</pubDate>
      <link>https://dev.to/lohith_s_09e8e9d6b5dfc5c7/deploying-a-microservices-application-with-azure-and-kubernetes-38d7</link>
      <guid>https://dev.to/lohith_s_09e8e9d6b5dfc5c7/deploying-a-microservices-application-with-azure-and-kubernetes-38d7</guid>
      <description>&lt;p&gt;Hellooo, this is a step-by-step approach to deploying a microservices-based application locally using Docker Compose, followed by setting up a CI/CD pipeline with Azure DevOps, Azure Container Registry (ACR), Azure Kubernetes Service (AKS), and ArgoCD for continuous deployment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ow30patg3jsur8fq6u7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ow30patg3jsur8fq6u7.png" alt=" " width="760" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Docker Example Voting App is a microservices-based application, typically written in Python and Node.js, with the following components:&lt;/p&gt;

&lt;p&gt;1&amp;gt; Voting Frontend (Python/Flask): Users cast votes via a simple web interface.&lt;br&gt;
2&amp;gt; Vote Processor Backend (Node.js/Express): Receives and processes the votes.&lt;br&gt;
3&amp;gt; Redis Database (Redis): Temporarily stores votes for fast access.&lt;br&gt;
4&amp;gt; Worker (Python/Flask): Processes votes from Redis and updates the final count.&lt;br&gt;
5&amp;gt; Results Frontend (Python/Flask): Displays real-time voting results.&lt;br&gt;
PostgreSQL Database (PostgreSQL): Stores the final vote count.&lt;/p&gt;
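&lt;p&gt;The flow of a single vote can be sketched as a toy shell pipeline. This is an analogy only: a queue file stands in for Redis, a results file stands in for PostgreSQL, and the vote values are made up.&lt;/p&gt;

```shell
# Toy model of the voting data flow; a queue file plays the role of Redis
# and a results file plays the role of PostgreSQL.
queue=$(mktemp); results=$(mktemp)

# Vote frontend: each vote is appended to the queue
for vote in cats dogs cats; do echo "$vote" >> "$queue"; done

# Worker: drain the queue and update the final tally
sort "$queue" | uniq -c | awk '{print $2"="$1}' > "$results"

# Results frontend: display the current counts
cat "$results"
rm -f "$queue" "$results"
```

&lt;p&gt;Running it prints cats=2 and dogs=1, mirroring how the worker turns queued votes into a persistent tally.&lt;/p&gt;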

&lt;p&gt;How It Works:&lt;/p&gt;

&lt;p&gt;The user votes on the frontend, the vote is processed and stored in Redis, then the worker updates the final count in PostgreSQL, and the results are shown on a results page. All components run in separate Docker containers for scalability and isolation.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 1: Clone and Deploy the Application Locally Using Docker Compose
&lt;/h2&gt;

&lt;p&gt;To test the application locally, create a virtual machine (VM) and deploy the application using Docker Compose.&lt;br&gt;
a. Create and Access an Azure Linux Ubuntu VM&lt;/p&gt;

&lt;p&gt;Provision an Azure Linux Ubuntu VM. Refer to Azure's official documentation for guidance if needed.&lt;br&gt;
Once the VM is created, navigate to the VM's folder and connect via SSH using the command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ssh -i &amp;lt;your-key-name&amp;gt; azureuser@&amp;lt;your-ip-address&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Note: Ensure port 22 is open during VM creation (this is typically enabled by default for Linux VMs).&lt;br&gt;
b. Update the VM&lt;br&gt;
Run the following command to update the package list:&lt;br&gt;
&lt;code&gt;sudo apt-get update&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;c. Install Docker and Docker Compose&lt;br&gt;
To run the application locally, install Docker and Docker Compose to enable the docker-compose up -d command for managing multi-container applications, such as microservices.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install Required Packages&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;sudo apt-get install \&lt;br&gt;
  ca-certificates \&lt;br&gt;
  curl \&lt;br&gt;
  gnupg \&lt;br&gt;
  lsb-release&lt;/code&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add Docker's Official GPG Key
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Set Up the Stable Repository
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list &amp;gt; /dev/null
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Install Docker Engine
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Install Docker Compose
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo curl -L "https://github.com/docker/compose/releases/download/$(curl -s https://api.github.com/repos/docker/compose/releases/latest | grep -oP '"tag_name": "\K(.*)(?=")')/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Manage Docker as a Non-Root User
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo usermod -aG docker $USER
newgrp docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Verify Versions&lt;br&gt;
sudo docker --version&lt;br&gt;
docker-compose --version&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Verify Docker Command Without Sudo&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Note: The docker ps output should be empty if no containers are running.&lt;br&gt;
d. Clone the Repository&lt;/p&gt;

&lt;p&gt;Clone or fork the application repository from GitHub. Navigate to the repository directory and run:&lt;br&gt;
&lt;code&gt;docker-compose up -d&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Verify that all containers are running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
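&lt;p&gt;A quick way to confirm that all five services came up is to compare the running container names against the expected list. The names below are hypothetical sample output; on the VM you would populate the variable with the real output of docker ps --format '{{.Names}}'.&lt;/p&gt;

```shell
# Check that every expected service has a running container.
# The container names below are hypothetical sample data; on a real host use:
#   running=$(docker ps --format '{{.Names}}')
running='vote
result
worker
redis
db'

for svc in vote result worker redis db; do
  if echo "$running" | grep -q "$svc"; then
    echo "$svc: running"
  else
    echo "$svc: MISSING"
  fi
done
```

&lt;p&gt;Any line reporting MISSING points at the service whose logs to inspect with docker-compose logs.&lt;/p&gt;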



&lt;p&gt;The output will show the application container mapped to port 5000. Access the application in two ways:&lt;/p&gt;

&lt;p&gt;Run curl &lt;a href="http://localhost:5000" rel="noopener noreferrer"&gt;http://localhost:5000&lt;/a&gt; to view the app in the terminal.&lt;br&gt;
In a browser, enter http://&amp;lt;VM-public-IP&amp;gt;:5000.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhjkp1xgyonewk27jkmyw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhjkp1xgyonewk27jkmyw.png" alt=" " width="753" height="419"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 2: Create an Azure DevOps Project and Import the Repository
&lt;/h2&gt;

&lt;p&gt;a. Sign In to Azure DevOps&lt;br&gt;
Access Azure DevOps and sign in. If you're a first-time user, follow the prompts to create an organization.&lt;br&gt;
b. Create a Project&lt;/p&gt;

&lt;p&gt;Create a new project named VotingApp.&lt;br&gt;
Set the visibility to Private and click Create Project.&lt;/p&gt;

&lt;p&gt;c. Import the Git Repository&lt;/p&gt;

&lt;p&gt;In the left menu, navigate to Repos &amp;gt; Files.&lt;br&gt;
Ensure the repository type is set to Git.&lt;br&gt;
Enter the Git repository URL and click Import.&lt;/p&gt;

&lt;p&gt;Upon successful import, the entire repository will be visible in Azure DevOps.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 3: Create an Azure Container Registry (ACR)
&lt;/h2&gt;

&lt;p&gt;The Docker images built during the CI process will be stored in Azure Container Registry (ACR).&lt;br&gt;
a. Create an ACR&lt;/p&gt;

&lt;p&gt;In the Azure Portal, search for Container Registry and click Create.&lt;br&gt;
Select or create a resource group.&lt;br&gt;
Provide a name for the ACR and choose either the Basic or Standard pricing plan.&lt;br&gt;
Click Review &amp;amp; Create, then Create.&lt;/p&gt;

&lt;p&gt;b. Access the ACR&lt;br&gt;
After creation, navigate to the resource and note the ACR server name for use in Step 5.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Set Up a Self-Hosted Agent for the Pipeline
&lt;/h2&gt;

&lt;p&gt;To optimize resources, reuse the VM created in Step 1 for the self-hosted agent. Ensure Docker is installed if using a new VM.&lt;br&gt;
a. Configure the Agent Pool&lt;/p&gt;

&lt;p&gt;In Azure DevOps, go to Project Settings &amp;gt; Agent Pools &amp;gt; Add Pool.&lt;br&gt;
Select New, choose Self-hosted, name the pool (e.g., VotingApp-Agent), and grant access to all pipelines.&lt;/p&gt;

&lt;p&gt;b. Set Up the Agent&lt;/p&gt;

&lt;p&gt;In the agent pool, click New Agent and select Linux.&lt;br&gt;
Run the following commands on the VM:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a directory and navigate to it
mkdir myagent &amp;amp;&amp;amp; cd myagent

# Install wget
sudo apt install wget -y

# Download the agent (replace with the URL from Azure DevOps)
wget https://vstsagentpackage.azureedge.net/agent/3.243.0/vsts-agent-linux-x64-3.243.0.tar.gz

# Extract the agent files
tar zxvf vsts-agent-linux-x64-3.243.0.tar.gz

# Configure the agent
./config.sh

# Start the agent
./run.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;During ./config.sh, provide:&lt;br&gt;
The Azure DevOps URL (e.g., &lt;a href="https://dev.azure.com/" rel="noopener noreferrer"&gt;https://dev.azure.com/&lt;/a&gt;).&lt;br&gt;
A Personal Access Token (PAT) created in Azure DevOps.&lt;br&gt;
The agent pool name created earlier.&lt;br&gt;
Accept defaults for remaining prompts.&lt;/p&gt;
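&lt;p&gt;As an alternative to answering the prompts interactively, the agent's config.sh accepts unattended flags. This is a sketch only; the organization URL, PAT, and pool name are placeholders to substitute with your own values.&lt;/p&gt;

```shell
# Non-interactive agent configuration; YOUR_ORG and YOUR_PAT are placeholders.
./config.sh --unattended \
  --url https://dev.azure.com/YOUR_ORG \
  --auth pat \
  --token YOUR_PAT \
  --pool VotingApp-Agent \
  --agent "$(hostname)" \
  --acceptTeeEula

# Optionally run the agent as a systemd service instead of ./run.sh
sudo ./svc.sh install
sudo ./svc.sh start
```

&lt;p&gt;Running the agent as a service keeps it available after SSH sessions end and across VM reboots.&lt;/p&gt;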
&lt;h2&gt;
  
  
  Step 5: Create CI Pipeline Scripts for Microservices
&lt;/h2&gt;

&lt;p&gt;Set up CI pipelines for the vote, result, and worker microservices, each with build and push stages.&lt;br&gt;
a. Configure the Pipeline&lt;/p&gt;

&lt;p&gt;In Azure DevOps, go to Repos &amp;gt; Pipelines &amp;gt; New Pipeline.&lt;br&gt;
Select the Docker template for pushing images to ACR.&lt;br&gt;
Choose your Azure subscription and the ACR created earlier.&lt;br&gt;
Specify the image name and ensure the correct repository file is selected.&lt;/p&gt;

&lt;p&gt;b. Pipeline Script for the vote Microservice&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Docker
# Build and push an image to Azure Container Registry
# https://docs.microsoft.com/azure/devops/pipelines/languages/docker

trigger:
  paths:
    include:
      - vote/*

resources:
- repo: self

variables:
  dockerRegistryServiceConnection: '37868c72-32ef-488d-a490-1415f4b73792'
  imageRepository: 'votingapp'
  containerRegistry: 'gabvotingappacr.azurecr.io'
  dockerfilePath: '$(Build.SourcesDirectory)/vote/Dockerfile'
  tag: '$(Build.BuildId)'

pool:
  name: 'VotingApp-Agent'

stages:
- stage: Build
  displayName: Build the Voting App
  jobs:
  - job: Build
    displayName: Build
    steps:
    - task: Docker@2
      inputs:
        containerRegistry: 'gabVotingAppACR'
        repository: 'votingapp/vote'
        command: 'build'
        Dockerfile: 'vote/Dockerfile'

- stage: Push
  displayName: Push the Voting App
  jobs:
  - job: Push
    displayName: Pushing the Voting App
    steps:
    - task: Docker@2
      inputs:
        containerRegistry: 'gabVotingAppACR'
        repository: 'votingapp/vote'
        command: 'push'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Note: Update the pool.name to match your self-hosted agent name.&lt;br&gt;
c. Run the Pipeline&lt;br&gt;
Execute the pipeline to build and push the vote microservice image to ACR.&lt;br&gt;
d. Pipeline Scripts for result and worker Microservices&lt;br&gt;
Repeat Step 5 for the result and worker microservices, updating the service name (vote to result or worker) in the pipeline scripts. Full scripts are provided at the end of this guide.&lt;br&gt;
e. Verify the Push&lt;br&gt;
In the Azure Portal, navigate to Container Registry &amp;gt; Repositories to confirm that all three images (vote, result, worker) are pushed.&lt;/p&gt;
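&lt;p&gt;The same verification can be scripted. The repository names below are assumed sample output of az acr repository list --name gabvotingappacr --output tsv, matching the repositories used in the pipelines above.&lt;/p&gt;

```shell
# Assumed sample output of: az acr repository list --name gabvotingappacr --output tsv
repos='votingapp/vote
votingapp/result
votingapp/worker'

missing=0
for r in votingapp/vote votingapp/result votingapp/worker; do
  echo "$repos" | grep -qx "$r" || { echo "missing: $r"; missing=1; }
done
if [ "$missing" -eq 0 ]; then echo "all three images pushed"; fi
```

&lt;p&gt;A non-empty "missing" line means the corresponding pipeline's push stage did not complete.&lt;/p&gt;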
&lt;h2&gt;
  
  
  Stage Two: Continuous Delivery
&lt;/h2&gt;
&lt;h2&gt;
  
  
  Step 1: Create an Azure Managed Kubernetes Cluster (AKS)
&lt;/h2&gt;

&lt;p&gt;In the Azure Portal, search for Azure Kubernetes Service (AKS) and click Create.&lt;br&gt;
Select your subscription and resource group.&lt;br&gt;
Choose Dev/Test preset, name the cluster (e.g., VotingApp-k8s), and select a region.&lt;br&gt;
Set Availability Zones to Zone 1 and keep other settings as default.&lt;br&gt;
In Node Pools, select the agentpool, set the scale method to Automatic, and configure min/max node counts to 1 and 2.&lt;br&gt;
Enable Public IP per node and click Update.&lt;br&gt;
Click Review &amp;amp; Create to deploy the cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Install Azure CLI and Configure AKS
&lt;/h2&gt;

&lt;p&gt;Create a new Azure VM to serve as a workstation for managing AKS and ArgoCD. Run the following commands:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Update package list
sudo apt-get update

# Install Azure CLI
echo "Installing Azure CLI..."
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

# Log in to Azure
az login --use-device-code
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Follow the prompted URL and code to authenticate. Verify the installation:&lt;br&gt;
&lt;code&gt;az --version&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Install kubectl and configure AKS credentials:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Install kubectl
sudo az aks install-cli

# Get AKS credentials
RESOURCE_GROUP="gabRG"
AKS_NAME="votingApp-k8s"
az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_NAME --overwrite-existing

# Verify connection
kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The output should show a single-node cluster with a Ready status.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Install ArgoCD
&lt;/h2&gt;

&lt;p&gt;Use the following script to install ArgoCD on the AKS cluster:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# Install Argo CD
echo "Installing Argo CD..."
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Wait for Argo CD components
echo "Waiting for Argo CD components to be ready..."
kubectl wait --for=condition=Ready pods --all -n argocd --timeout=600s

# Retrieve initial admin password
echo "Retrieving the Argo CD initial admin password..."
ARGOCD_INITIAL_PASSWORD=$(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d)
echo "Argo CD initial admin password: $ARGOCD_INITIAL_PASSWORD"

# Expose Argo CD server via NodePort
echo "Exposing Argo CD server via NodePort..."
kubectl -n argocd patch svc argocd-server -p '{"spec": {"type": "NodePort"}}'

# Retrieve Argo CD server URL
ARGOCD_SERVER=$(kubectl -n argocd get svc argocd-server -o jsonpath='{.spec.clusterIP}')
ARGOCD_PORT=$(kubectl -n argocd get svc argocd-server -o jsonpath='{.spec.ports[0].nodePort}')
echo "You can access the Argo CD server at http://$ARGOCD_SERVER:$ARGOCD_PORT"

# Install Argo CD CLI (Optional)
echo "Installing Argo CD CLI..."
sudo curl -sSL -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
sudo chmod +x /usr/local/bin/argocd

echo "Logging into Argo CD CLI..."
argocd login $ARGOCD_SERVER:$ARGOCD_PORT --username admin --password $ARGOCD_INITIAL_PASSWORD --insecure

echo "Argo CD installation and setup complete!"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Save the script as install-argo-cd.sh, make it executable (chmod +x install-argo-cd.sh), and run it (./install-argo-cd.sh).&lt;br&gt;
a. Configure Port Rule for ArgoCD&lt;/p&gt;

&lt;p&gt;Expose port 31436 for the ArgoCD NodePort service.&lt;br&gt;
In the Azure Portal, search for Virtual Machine Scale Set (VMSS), navigate to Networking &amp;gt; Create Port Rule, and add an Inbound Rule for port 31436.&lt;br&gt;
Access ArgoCD at http://&amp;lt;node-public-ip&amp;gt;:31436.&lt;/p&gt;

&lt;p&gt;b. Log In to ArgoCD&lt;/p&gt;

&lt;p&gt;Retrieve the ArgoCD admin password:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get secret -n argocd
kubectl edit secret argocd-initial-admin-secret -n argocd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Decode the password:&lt;br&gt;
&lt;code&gt;echo &amp;lt;encoded-password&amp;gt; | base64 --decode&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Log in to ArgoCD with:&lt;br&gt;
Username: admin&lt;br&gt;
Password: the decoded password&lt;/p&gt;
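&lt;p&gt;The decode step works like this; a self-contained example with a made-up secret value (not a real ArgoCD password):&lt;/p&gt;

```shell
# Kubernetes secrets store values base64-encoded; decoding reverses it.
# "bXktc2VjcmV0LXB3" is a made-up sample value, not a real ArgoCD password.
encoded='bXktc2VjcmV0LXB3'
decoded=$(echo "$encoded" | base64 --decode)
echo "$decoded"   # prints: my-secret-pw
```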
&lt;h2&gt;
  
  
  Step 4: Configure ArgoCD
&lt;/h2&gt;

&lt;p&gt;Connect ArgoCD to the Azure repository containing Kubernetes manifest files to monitor and deploy changes to AKS.&lt;br&gt;
a. Connect to Azure Repository&lt;/p&gt;

&lt;p&gt;Copy the HTTPS URL of your Azure repository.&lt;br&gt;
In Azure DevOps, create a Personal Access Token (PAT) under User Settings &amp;gt; Personal Access Tokens with read or full access.&lt;br&gt;
In ArgoCD, go to Settings &amp;gt; Connect Repo &amp;gt; VIA HTTPS, paste the repository URL, and add the PAT.&lt;br&gt;
Test the connection by clicking CONNECT.&lt;/p&gt;

&lt;p&gt;b. Connect to AKS&lt;/p&gt;

&lt;p&gt;In ArgoCD, create a New Application.&lt;br&gt;
Set:&lt;br&gt;
Application Name: Choose a name.&lt;br&gt;
Project: default.&lt;br&gt;
SYNC POLICY: Automatic.&lt;br&gt;
Repository URL: Select the Azure repository URL.&lt;br&gt;
Path: k8s-specifications.&lt;br&gt;
Namespace: default.&lt;/p&gt;

&lt;p&gt;Create the application and wait for the manifest files to deploy.&lt;/p&gt;

&lt;p&gt;Verify pod status in ArgoCD or via the terminal:&lt;br&gt;
&lt;code&gt;kubectl get pods&lt;/code&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 5: Automate Kubernetes Manifest Updates
&lt;/h2&gt;

&lt;p&gt;To integrate the CI and CD stages, use a Bash script to update Kubernetes manifests in the Azure repository when new images are pushed to ACR.&lt;br&gt;
a. Add the Update Script&lt;br&gt;
Place the following script in the vote service folder:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

set -x

# Set the repository URL
REPO_URL="https://&amp;lt;your-PAT&amp;gt;@dev.azure.com/GabrielOkom/votingApp/_git/votingApp"

# Clone the git repository into the /tmp directory
git clone "$REPO_URL" /tmp/temp_repo

# Navigate into the cloned repository directory
cd /tmp/temp_repo

# Update the Kubernetes manifest file
sed -i "s|image:.*|image: &amp;lt;acr-server&amp;gt;/$2:$3|g" k8s-specifications/$1-deployment.yaml

# Add the modified files
git add .

# Commit the changes
git commit -m "Update Kubernetes manifest"

# Push the changes back to the repository
git push

# Cleanup
rm -rf /tmp/temp_repo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
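&lt;p&gt;The sed substitution at the heart of the script can be tried in isolation on a throwaway manifest. The registry name below is illustrative, and the repository/tag arguments are written out literally instead of using $2/$3:&lt;/p&gt;

```shell
# Build a minimal sample manifest; the image line is what the script rewrites.
printf 'spec:\n  containers:\n  - image: myacr.azurecr.io/votingapp/vote:17\n' > /tmp/vote-deployment.yaml

# Same substitution the update script performs, with sample values
# standing in for the script arguments ($2=votingapp/vote, $3=18).
sed -i "s|image:.*|image: myacr.azurecr.io/votingapp/vote:18|g" /tmp/vote-deployment.yaml

grep image /tmp/vote-deployment.yaml   # prints:   - image: myacr.azurecr.io/votingapp/vote:18
rm -f /tmp/vote-deployment.yaml
```

&lt;p&gt;Because the pattern matches from image: to the end of the line, the YAML indentation before it is preserved.&lt;/p&gt;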

&lt;p&gt;This script updates the image field in the Kubernetes manifest (vote-deployment.yaml) with the new image tag from ACR.&lt;br&gt;
b. Update the vote Pipeline&lt;br&gt;
Add a new stage to the vote pipeline:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Docker
# Build and push an image to Azure Container Registry
# https://docs.microsoft.com/azure/devops/pipelines/languages/docker

trigger:
  paths:
    include:
      - vote/*

resources:
- repo: self

variables:
  dockerRegistryServiceConnection: 'a777b3f1-28d4-40f3-bbdf-0904d5c89545'
  imageRepository: 'votingapp'
  containerRegistry: 'gabvotingappacr.azurecr.io'
  dockerfilePath: '$(Build.SourcesDirectory)/vote/Dockerfile'
  tag: '$(Build.BuildId)'

pool:
  name: 'VotingApp-Agent'

stages:
- stage: Build
  displayName: Build the Voting App
  jobs:
  - job: Build
    displayName: Build
    steps:
    - task: Docker@2
      inputs:
        containerRegistry: 'gabVotingAppACR'
        repository: 'votingapp/vote'
        command: 'build'
        Dockerfile: 'vote/Dockerfile'

- stage: Push
  displayName: Push the Voting App
  jobs:
  - job: Push
    displayName: Push
    steps:
    - task: Docker@2
      inputs:
        containerRegistry: 'gabVotingAppACR'
        repository: 'votingapp/vote'
        command: 'push'

- stage: Update_bash_script
  displayName: Update Bash Script
  jobs:
  - job: Updating_repo_with_bash
    displayName: Updating repo using bash script
    steps:
    - task: ShellScript@2
      inputs:
        scriptPath: 'vote/updateK8sManifests.sh'
        args: 'vote $(imageRepository) $(tag)'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;c. Optional: Adjust ArgoCD Sync Interval&lt;br&gt;
If updates are delayed, edit the ArgoCD ConfigMap:&lt;br&gt;
&lt;code&gt;kubectl edit cm argocd-cm -n argocd&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Add:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  timeout.reconciliation: 10s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Note: For production, set timeout.reconciliation to at least 180s to avoid overloading services.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6: Resolve ImagePullBackOff Error
&lt;/h2&gt;

&lt;p&gt;If Kubernetes fails to pull images from ACR, configure an imagePullSecret.&lt;/p&gt;

&lt;p&gt;In the Azure Portal, go to Container Registry &amp;gt; Settings &amp;gt; Access Keys, enable Admin User, and copy the password.&lt;br&gt;
Create a secret:&lt;br&gt;
&lt;code&gt;kubectl create secret docker-registry &amp;lt;secret-name&amp;gt; \&lt;br&gt;
    --namespace &amp;lt;namespace&amp;gt; \&lt;br&gt;
    --docker-server=&amp;lt;container-registry-name&amp;gt;.azurecr.io \&lt;br&gt;
    --docker-username=&amp;lt;service-principal-ID&amp;gt; \&lt;br&gt;
    --docker-password=&amp;lt;service-principal-password&amp;gt;&lt;/code&gt;&lt;/p&gt;
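&lt;p&gt;Under the hood, that secret is just a .dockerconfigjson document whose auth field is base64("username:password"). This sketch builds the same payload in plain shell; the registry name and credentials are placeholders, not real values:&lt;/p&gt;

```shell
# Placeholder credentials; kubectl stores JSON of this shape
# (base64-encoded) in the secret's .dockerconfigjson field.
user='acr-user'
pass='acr-pass'
auth=$(printf '%s:%s' "$user" "$pass" | base64)
printf '{"auths":{"myacr.azurecr.io":{"auth":"%s"}}}\n' "$auth"
```

&lt;p&gt;This is why enabling the ACR admin user (or a service principal) is required: the kubelet replays exactly these credentials when pulling images.&lt;/p&gt;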

&lt;p&gt;Edit vote-deployment.yaml in the Azure repository under k8s-specifications to include:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: vote
  name: vote
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vote
  template:
    metadata:
      labels:
        app: vote
    spec:
      containers:
      - image: gabvotingappacr.azurecr.io/votingapp/vote:18
        name: vote
        ports:
        - containerPort: 80
          name: vote
      imagePullSecrets:
      - name: &amp;lt;secret-name&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Commit the changes and verify pod status:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods
kubectl get svc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Access the application at http://&amp;lt;node-public-ip&amp;gt;:31000 (ensure port 31000 is open in VMSS inbound rules).&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 7: Verify the CI/CD Process
&lt;/h2&gt;

&lt;p&gt;In the Azure repository, edit the app.py file in the vote directory to update the voting options (e.g., change Rain and Snow to Summer and Winter):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from flask import Flask, render_template, request, make_response, g
from redis import Redis
import os
import socket
import random
import json
import logging

option_a = os.getenv('OPTION_A', "Summer")
option_b = os.getenv('OPTION_B', "Winter")
hostname = socket.gethostname()
app = Flask(__name__)
gunicorn_error_logger = logging.getLogger('gunicorn.error')
app.logger.handlers.extend(gunicorn_error_logger.handlers)
app.logger.setLevel(logging.INFO)

def get_redis():
    if not hasattr(g, 'redis'):
        g.redis = Redis(host="redis", db=0, socket_timeout=5)
    return g.redis

@app.route("/", methods=['POST', 'GET'])
def hello():
    voter_id = request.cookies.get('voter_id')
    if not voter_id:
        voter_id = hex(random.getrandbits(64))[2:-1]
    vote = None
    if request.method == 'POST':
        redis = get_redis()
        vote = request.form['vote']
        app.logger.info('Received vote for %s', vote)
        data = json.dumps({'voter_id': voter_id, 'vote': vote})
        redis.rpush('votes', data)
    resp = make_response(render_template(
        'index.html',
        option_a=option_a,
        option_b=option_b,
        hostname=hostname,
        vote=vote,
    ))
    resp.set_cookie('voter_id', voter_id)
    return resp

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80, debug=True, threaded=True)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Commit the changes to trigger the pipeline.&lt;br&gt;
Verify the application at http://&amp;lt;node-public-ip&amp;gt;:31000 to confirm the updated options are reflected.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 8: Complete the CI/CD for worker and result Microservices
&lt;/h2&gt;

&lt;p&gt;Add the updateK8sManifests.sh stage to the worker and result pipelines, similar to the vote pipeline. Update result-deployment.yaml and worker-deployment.yaml to include imagePullSecrets.&lt;/p&gt;

&lt;p&gt;Worker Pipeline Script&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;trigger:
  paths:
    include:
      - worker/*

resources:
- repo: self

variables:
  dockerRegistryServiceConnection: 'ad3eca0a-4219-4a32-9df0-29fd9ba340b8'
  imageRepository: 'votingapp'
  containerRegistry: 'gabvotingappacr.azurecr.io'
  dockerfilePath: '$(Build.SourcesDirectory)/worker/Dockerfile'
  tag: '$(Build.BuildId)'

pool:
  name: 'voting-agent-app'

stages:
- stage: Build
  displayName: Build the Voting App
  jobs:
  - job: Build
    displayName: Build
    steps:
    - task: Docker@2
      inputs:
        containerRegistry: '$(dockerRegistryServiceConnection)'
        repository: '$(imageRepository)'
        command: 'build'
        Dockerfile: 'worker/Dockerfile'
        tags: '$(tag)'

- stage: Push
  displayName: Push the Voting App
  jobs:
  - job: Push
    displayName: Push
    steps:
    - task: Docker@2
      inputs:
        containerRegistry: '$(dockerRegistryServiceConnection)'
        repository: '$(imageRepository)'
        command: 'push'
        tags: '$(tag)'

- stage: Update_bash_script
  displayName: Update Bash Script
  jobs:
  - job: Updating_repo_with_bash
    displayName: Updating repo using bash script
    steps:
    - script: |
        dos2unix scripts/updateK8sManifests.sh
        bash scripts/updateK8sManifests.sh "worker" "$(imageRepository)" "$(tag)"
      displayName: Run UpdateK8sManifests Script
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Result Pipeline Script:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;trigger:&lt;br&gt;
  paths:&lt;br&gt;
    include:&lt;br&gt;
      - result/*&lt;br&gt;
&lt;br&gt;
resources:&lt;br&gt;
- repo: self&lt;br&gt;
&lt;br&gt;
variables:&lt;br&gt;
  dockerRegistryServiceConnection: 'ad3eca0a-4219-4a32-9df0-29fd9ba340b8'&lt;br&gt;
  imageRepository: 'votingapp'&lt;br&gt;
  containerRegistry: 'gabvotingappacr.azurecr.io'&lt;br&gt;
  dockerfilePath: '$(Build.SourcesDirectory)/result/Dockerfile'&lt;br&gt;
  tag: '$(Build.BuildId)'&lt;br&gt;
&lt;br&gt;
pool:&lt;br&gt;
  name: 'voting-agent-app'&lt;br&gt;
&lt;br&gt;
stages:&lt;br&gt;
- stage: Build&lt;br&gt;
  displayName: Build the Voting App&lt;br&gt;
  jobs:&lt;br&gt;
  - job: Build&lt;br&gt;
    displayName: Build&lt;br&gt;
    steps:&lt;br&gt;
    - task: Docker@2&lt;br&gt;
      inputs:&lt;br&gt;
        containerRegistry: '$(dockerRegistryServiceConnection)'&lt;br&gt;
        repository: '$(imageRepository)'&lt;br&gt;
        command: 'build'&lt;br&gt;
        Dockerfile: 'result/Dockerfile'&lt;br&gt;
        tags: '$(tag)'&lt;br&gt;
&lt;br&gt;
- stage: Push&lt;br&gt;
  displayName: Push the Voting App&lt;br&gt;
  jobs:&lt;br&gt;
  - job: Push&lt;br&gt;
    displayName: Push&lt;br&gt;
    steps:&lt;br&gt;
    - task: Docker@2&lt;br&gt;
      inputs:&lt;br&gt;
        containerRegistry: '$(dockerRegistryServiceConnection)'&lt;br&gt;
        repository: '$(imageRepository)'&lt;br&gt;
        command: 'push'&lt;br&gt;
        tags: '$(tag)'&lt;br&gt;
&lt;br&gt;
- stage: Update_bash_script&lt;br&gt;
  displayName: Update Bash Script&lt;br&gt;
  jobs:&lt;br&gt;
  - job: Updating_repo_with_bash&lt;br&gt;
    displayName: Updating repo using bash script&lt;br&gt;
    steps:&lt;br&gt;
    - script: |&lt;br&gt;
        dos2unix scripts/updateK8sManifests.sh&lt;br&gt;
        bash scripts/updateK8sManifests.sh "result" "$(imageRepository)" "$(tag)"&lt;br&gt;
      displayName: Run UpdateK8sManifests Script&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 9: Verify Vote Counts
&lt;/h2&gt;

&lt;p&gt;Check vote counts in Redis or the database:&lt;br&gt;
&lt;code&gt;kubectl exec -it REDIS_POD_NAME -- redis-cli&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This completes the setup of a fully functional CI/CD pipeline for the microservices application.&lt;/p&gt;
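&lt;p&gt;The pipeline calls scripts/updateK8sManifests.sh but the script body is not shown above. As a rough sketch (the manifest path, registry host, and git steps here are assumptions, not the repo's actual script), it only needs to rewrite the image tag in the service's deployment manifest and push the change back so ArgoCD can sync it:&lt;/p&gt;

```shell
#!/bin/bash
# Hypothetical sketch of scripts/updateK8sManifests.sh -- paths and registry host are assumed.
# Usage: update_manifest SERVICE IMAGE_REPOSITORY TAG
set -e

update_manifest() {
  local service="$1" repo="$2" tag="$3"
  local manifest="k8s-specifications/${service}-deployment.yaml"
  # Point the Deployment at the image that was just pushed to ACR
  sed -i "s|image:.*|image: gabvotingappacr.azurecr.io/${repo}:${tag}|" "$manifest"
  echo "updated ${manifest} to ${repo}:${tag}"
  # In the real pipeline the change would then be committed and pushed,
  # e.g.: git add "$manifest"; git commit -m "update ${service} to ${tag}"; git push
}
```

&lt;p&gt;ArgoCD then notices the manifest change in Git and rolls the new tag out to AKS.&lt;/p&gt;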

</description>
    </item>
    <item>
      <title>OpenTelemetry E-commerce Website: DevOps Deployment with Terraform, CI/CD, Kubernetes, and AWS</title>
      <dc:creator>Lohith S</dc:creator>
      <pubDate>Wed, 06 Aug 2025 12:30:25 +0000</pubDate>
      <link>https://dev.to/lohith_s_09e8e9d6b5dfc5c7/opentelemetry-ecommerce-website-devops-deployment-with-terraform-cicd-kubernetes-and-aws-5bho</link>
      <guid>https://dev.to/lohith_s_09e8e9d6b5dfc5c7/opentelemetry-ecommerce-website-devops-deployment-with-terraform-cicd-kubernetes-and-aws-5bho</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fitz7v9zw5kmxii9d4n17.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fitz7v9zw5kmxii9d4n17.png" alt=" " width="800" height="698"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Table of Contents&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Project Overview&lt;/li&gt;
&lt;li&gt;Prerequisites&lt;/li&gt;
&lt;li&gt;Phase 1: Local Development Setup&lt;/li&gt;
&lt;li&gt;Phase 2: AWS Account Setup&lt;/li&gt;
&lt;li&gt;Phase 3: Infrastructure as Code with Terraform&lt;/li&gt;
&lt;li&gt;Phase 4: Container Orchestration with Kubernetes&lt;/li&gt;
&lt;li&gt;Phase 5: Domain Setup with Route53&lt;/li&gt;
&lt;li&gt;Phase 6: Monitoring and Observability&lt;/li&gt;
&lt;li&gt;Troubleshooting&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Project Overview
&lt;/h2&gt;

&lt;p&gt;This project demonstrates a complete DevOps implementation using the OpenTelemetry Astronomy Shop, a microservice-based e-commerce application. The project showcases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multi-language microservices architecture (Go, Java, Python, C#, TypeScript, Ruby, PHP, Rust, Elixir)&lt;/li&gt;
&lt;li&gt;Complete DevOps pipeline from local development to production deployment&lt;/li&gt;
&lt;li&gt;Infrastructure as Code using Terraform&lt;/li&gt;
&lt;li&gt;Container orchestration with Kubernetes on AWS EKS&lt;/li&gt;
&lt;li&gt;Observability with OpenTelemetry, Jaeger, and Grafana&lt;/li&gt;
&lt;li&gt;Cloud-native deployment on AWS&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Architecture Overview
&lt;/h2&gt;

&lt;p&gt;The application consists of 14+ microservices:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Frontend: TypeScript/Next.js web interface&lt;/li&gt;
&lt;li&gt;Product Catalog: Go service managing product information&lt;/li&gt;
&lt;li&gt;Cart Service: C# service handling shopping cart operations&lt;/li&gt;
&lt;li&gt;Payment Service: Node.js service processing payments&lt;/li&gt;
&lt;li&gt;Checkout Service: Go service managing the checkout process&lt;/li&gt;
&lt;li&gt;Ad Service: Java service for advertising&lt;/li&gt;
&lt;li&gt;Recommendation Service: Python service providing product recommendations&lt;/li&gt;
&lt;li&gt;Shipping Service: Rust service calculating shipping costs&lt;/li&gt;
&lt;li&gt;Email Service: Ruby service handling email notifications&lt;/li&gt;
&lt;li&gt;Currency Service: C++ service for currency conversion&lt;/li&gt;
&lt;li&gt;Quote Service: PHP service generating quotes&lt;/li&gt;
&lt;li&gt;Load Generator: Python service simulating user traffic&lt;/li&gt;
&lt;li&gt;Feature Flag Service: Providing feature flag functionality&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before starting, ensure you have the following tools installed:&lt;br&gt;
Required Software&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Git: Version control system&lt;/li&gt;
&lt;li&gt;Docker: Container runtime (version 20.10+)&lt;/li&gt;
&lt;li&gt;Docker Compose: Multi-container orchestration (v2.0.0+)&lt;/li&gt;
&lt;li&gt;AWS CLI: AWS command-line interface&lt;/li&gt;
&lt;li&gt;kubectl: Kubernetes command-line tool&lt;/li&gt;
&lt;li&gt;Terraform: Infrastructure as Code tool (version 1.0+)&lt;/li&gt;
&lt;li&gt;Text Editor: VS Code with recommended extensions: Terraform, YAML, Docker, Kubernetes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;System Requirements&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;RAM: Minimum 8GB, recommended 16GB&lt;/li&gt;
&lt;li&gt;Storage: At least 20GB free space&lt;/li&gt;
&lt;li&gt;Operating System: Windows 10/11, macOS, or Linux&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Phase 1: Local Development Setup
&lt;/h2&gt;

&lt;p&gt;Step 1.1: Clone the Project Repository&lt;br&gt;
Clone the OpenTelemetry demo repository to your local machine:&lt;br&gt;
&lt;code&gt;git clone https://github.com/open-telemetry/opentelemetry-demo.git&lt;br&gt;
cd opentelemetry-demo&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Step 1.2: Understand the Project Structure&lt;br&gt;
The project follows this structure:&lt;br&gt;
opentelemetry-demo/&lt;br&gt;
├── src/&lt;br&gt;
│   ├── accounting/          # .NET accounting service&lt;br&gt;
│   ├── ad/                 # Java ad service&lt;br&gt;
│   ├── cart/               # C# cart service&lt;br&gt;
│   ├── checkout/           # Go checkout service&lt;br&gt;
│   ├── currency/           # C++ currency service&lt;br&gt;
│   ├── email/              # Ruby email service&lt;br&gt;
│   ├── frontend/           # TypeScript frontend&lt;br&gt;
│   ├── payment/            # Node.js payment service&lt;br&gt;
│   ├── product-catalog/    # Go product catalog&lt;br&gt;
│   └── recommendation/     # Python recommendation service&lt;br&gt;
.&lt;br&gt;
.&lt;br&gt;
.&lt;/p&gt;

&lt;p&gt;├── kubernetes/             # Kubernetes manifests&lt;br&gt;
├── docker-compose.yml      # Local development setup&lt;br&gt;
└── README.md&lt;/p&gt;

&lt;p&gt;Step 1.3: Run the Application Locally&lt;br&gt;
Start the application stack using Docker Compose:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;# Start all services with pre-built images&lt;br&gt;
docker compose up --no-build&lt;br&gt;
&lt;br&gt;
# Alternative: build from source&lt;br&gt;
docker compose up&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Note: Use --no-build to pull pre-built images instead of building from source.&lt;br&gt;
Step 1.4: Verify Local Deployment&lt;br&gt;
Access these endpoints once containers are running:&lt;/p&gt;

&lt;p&gt;Web Store: &lt;a href="http://localhost:8080" rel="noopener noreferrer"&gt;http://localhost:8080&lt;/a&gt;&lt;br&gt;
Grafana: &lt;a href="http://localhost:8080/grafana" rel="noopener noreferrer"&gt;http://localhost:8080/grafana&lt;/a&gt;&lt;br&gt;
Feature Flags UI: &lt;a href="http://localhost:8080/feature" rel="noopener noreferrer"&gt;http://localhost:8080/feature&lt;/a&gt;&lt;br&gt;
Load Generator: &lt;a href="http://localhost:8080/loadgen" rel="noopener noreferrer"&gt;http://localhost:8080/loadgen&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 1.5: Understanding Docker Compose Architecture&lt;br&gt;
The Docker Compose setup includes:&lt;br&gt;
&lt;code&gt;services:&lt;br&gt;
  accounting:&lt;br&gt;
    image: ${IMAGE_NAME}:${DEMO_VERSION}-accounting&lt;br&gt;
    environment:&lt;br&gt;
      - KAFKA_ADDR&lt;br&gt;
      - OTEL_EXPORTER_OTLP_ENDPOINT&lt;br&gt;
    depends_on:&lt;br&gt;
      - otel-collector&lt;br&gt;
      - kafka&lt;br&gt;
  frontend:&lt;br&gt;
    image: ${IMAGE_NAME}:${DEMO_VERSION}-frontend&lt;br&gt;
    ports:&lt;br&gt;
      - "8080:8080"&lt;br&gt;
    environment:&lt;br&gt;
      - FRONTEND_ADDR&lt;br&gt;
      - AD_SERVICE_ADDR&lt;br&gt;
      - CART_SERVICE_ADDR&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
Key components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Services: Each microservice runs in its own container&lt;/li&gt;
&lt;li&gt;Networks: Services communicate through Docker networks&lt;/li&gt;
&lt;li&gt;Volumes: Persistent data storage for databases&lt;/li&gt;
&lt;li&gt;Environment Variables: Configuration management&lt;/li&gt;
&lt;/ul&gt;
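&lt;p&gt;The compose snippet above omits the networks and volumes that the list mentions; a minimal illustration of those two pieces (service, network, and volume names here are illustrative, not taken from the demo's actual compose file) looks like:&lt;/p&gt;

```yaml
# Illustrative docker-compose fragment -- names are assumed, not the demo's real ones
services:
  cart-store:
    image: redis:7-alpine
    networks:
      - opentelemetry-demo     # services on the same network reach each other by name
    volumes:
      - cart-data:/data        # named volume persists cart state across restarts

networks:
  opentelemetry-demo:

volumes:
  cart-data:
```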

&lt;h2&gt;
  
  
  Phase 2: AWS Account Setup
&lt;/h2&gt;

&lt;p&gt;Step 2.1: Create AWS Account&lt;br&gt;
Step 2.2: Create IAM User&lt;br&gt;
For security, create an IAM user instead of using the root account:&lt;/p&gt;

&lt;p&gt;Navigate to the IAM service in the AWS Console&lt;br&gt;
Create a user with Username: devops-user&lt;br&gt;
Access Type: Programmatic access + AWS Management Console access&lt;/p&gt;

&lt;p&gt;Attach policies:&lt;br&gt;
AdministratorAccess (for this demo project)&lt;br&gt;
In production, apply the principle of least privilege&lt;/p&gt;

&lt;p&gt;Download credentials:&lt;br&gt;
Access Key ID&lt;br&gt;
Secret Access Key&lt;br&gt;
Save the CSV file securely&lt;/p&gt;

&lt;p&gt;Step 2.3: Configure AWS CLI&lt;br&gt;
Install and configure AWS CLI:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;# Install AWS CLI&lt;br&gt;
# On Windows: download the installer from the AWS website&lt;br&gt;
# On macOS: brew install awscli&lt;br&gt;
# On Linux: sudo apt install awscli&lt;br&gt;
&lt;br&gt;
# Configure AWS CLI&lt;br&gt;
aws configure&lt;br&gt;
&lt;br&gt;
# Enter credentials when prompted:&lt;br&gt;
# AWS Access Key ID: [Your Access Key]&lt;br&gt;
# AWS Secret Access Key: [Your Secret Key]&lt;br&gt;
# Default region name: us-west-2&lt;br&gt;
# Default output format: json&lt;br&gt;
&lt;br&gt;
# Verify configuration&lt;br&gt;
aws sts get-caller-identity&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Expected output:&lt;br&gt;
&lt;code&gt;{&lt;br&gt;
  "UserId": "AIDACKCEVSQ6C2EXAMPLE",&lt;br&gt;
  "Account": "123456789012",&lt;br&gt;
  "Arn": "arn:aws:iam::123456789012:user/devops-user"&lt;br&gt;
}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Step 2.4: Create EC2 Key Pair&lt;br&gt;
Create a key pair for EC2 access:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;# Create key pair&lt;br&gt;
aws ec2 create-key-pair --key-name devops-key --query 'KeyMaterial' --output text &amp;gt; devops-key.pem&lt;br&gt;
&lt;br&gt;
# Set permissions (Linux/macOS)&lt;br&gt;
chmod 400 devops-key.pem&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Phase 3: Infrastructure as Code with Terraform
&lt;/h2&gt;

&lt;p&gt;Step 3.1: Understanding Terraform Structure&lt;br&gt;
The project includes a Terraform setup in the eks-install directory:&lt;br&gt;
eks-install/&lt;br&gt;
├── main.tf                 # Main configuration&lt;br&gt;
├── variables.tf            # Input variables&lt;br&gt;
├── outputs.tf              # Output values&lt;br&gt;
├── backend/                # Remote state setup&lt;br&gt;
│   ├── main.tf&lt;br&gt;
│   └── outputs.tf&lt;br&gt;
└── modules/&lt;br&gt;
    ├── vpc/                # VPC module&lt;br&gt;
    │   ├── main.tf&lt;br&gt;
    │   ├── variables.tf&lt;br&gt;
    │   └── outputs.tf&lt;br&gt;
    └── eks/                # EKS module&lt;br&gt;
        ├── main.tf&lt;br&gt;
        ├── variables.tf&lt;br&gt;
        └── outputs.tf&lt;/p&gt;

&lt;p&gt;Step 3.2: Set Up Terraform Backend&lt;br&gt;
Create the S3 bucket and DynamoDB table for remote state:&lt;br&gt;
&lt;code&gt;cd eks-install/backend&lt;br&gt;
terraform init&lt;br&gt;
terraform plan&lt;br&gt;
terraform apply&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu5h97gd6r8hzgyosmxqf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu5h97gd6r8hzgyosmxqf.png" alt=" " width="800" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 3.3: Configure Main Terraform&lt;br&gt;
Update the variables.tf file:&lt;br&gt;
variable "region" {&lt;br&gt;
  description = "AWS region"&lt;br&gt;
  type        = string&lt;br&gt;
  default     = "us-west-2"&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;variable "cluster_name" {&lt;br&gt;
  description = "EKS cluster name"&lt;br&gt;
  type        = string&lt;br&gt;
  default     = "opentelemetry-demo-cluster"&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;variable "vpc_cidr" {&lt;br&gt;
  description = "CIDR block for VPC"&lt;br&gt;
  type        = string&lt;br&gt;
  default     = "10.0.0.0/16"&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;variable "availability_zones" {&lt;br&gt;
  description = "Availability zones"&lt;br&gt;
  type        = list(string)&lt;br&gt;
  default     = ["us-west-2a", "us-west-2b", "us-west-2c"]&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;variable "private_subnet_cidrs" {&lt;br&gt;
  description = "CIDR blocks for private subnets"&lt;br&gt;
  type        = list(string)&lt;br&gt;
  default     = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;variable "public_subnet_cidrs" {&lt;br&gt;
  description = "CIDR blocks for public subnets"&lt;br&gt;
  type        = list(string)&lt;br&gt;
  default     = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;Step 3.4: Deploy EKS Infrastructure&lt;br&gt;
&lt;code&gt;cd ..&lt;br&gt;
terraform init&lt;br&gt;
terraform validate&lt;br&gt;
terraform plan -out=tfplan&lt;br&gt;
terraform apply tfplan&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
This creates:&lt;/p&gt;

&lt;p&gt;VPC with public and private subnets&lt;br&gt;
Internet Gateway and NAT Gateways&lt;br&gt;
EKS Cluster with managed node groups&lt;br&gt;
Security Groups and IAM roles&lt;br&gt;
Route tables and associations&lt;/p&gt;

&lt;p&gt;Expected Duration: 15-20 minutes&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpwjkfrknksj4160u2mvw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpwjkfrknksj4160u2mvw.png" alt=" " width="800" height="409"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 3.5: Verify EKS Cluster&lt;br&gt;
aws eks --region us-west-2 update-kubeconfig --name opentelemetry-demo-cluster&lt;br&gt;
kubectl get nodes&lt;br&gt;
kubectl cluster-info&lt;/p&gt;

&lt;p&gt;Expected output:&lt;br&gt;
&lt;code&gt;NAME                                       STATUS   ROLES    AGE   VERSION&lt;br&gt;
ip-10-0-1-123.us-west-2.compute.internal   Ready    &amp;lt;none&amp;gt;   5m    v1.28.0&lt;br&gt;
ip-10-0-2-456.us-west-2.compute.internal   Ready    &amp;lt;none&amp;gt;   5m    v1.28.0&lt;br&gt;
ip-10-0-3-789.us-west-2.compute.internal   Ready    &amp;lt;none&amp;gt;   5m    v1.28.0&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Phase 4: Container Orchestration with Kubernetes
&lt;/h2&gt;

&lt;p&gt;Step 4.1: Understanding Kubernetes Deployment&lt;br&gt;
Example frontend deployment manifest:&lt;br&gt;
apiVersion: apps/v1&lt;br&gt;
kind: Deployment&lt;br&gt;
metadata:&lt;br&gt;
  name: opentelemetry-demo-frontend&lt;br&gt;
spec:&lt;br&gt;
  replicas: 1&lt;br&gt;
  selector:&lt;br&gt;
    matchLabels:&lt;br&gt;
      app.kubernetes.io/name: opentelemetry-demo-frontend&lt;br&gt;
  template:&lt;br&gt;
    metadata:&lt;br&gt;
      labels:&lt;br&gt;
        app.kubernetes.io/name: opentelemetry-demo-frontend&lt;br&gt;
    spec:&lt;br&gt;
      serviceAccountName: opentelemetry-demo&lt;br&gt;
      containers:&lt;br&gt;
      - name: frontend&lt;br&gt;
        image: ghcr.io/open-telemetry/demo:latest-frontend&lt;br&gt;
        ports:&lt;br&gt;
        - containerPort: 8080&lt;br&gt;
        env:&lt;br&gt;
        - name: FRONTEND_ADDR&lt;br&gt;
          value: ":8080"&lt;br&gt;
        - name: AD_SERVICE_ADDR&lt;br&gt;
          value: "opentelemetry-demo-adservice:8080"&lt;/p&gt;
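&lt;p&gt;The manifest above sets no resource requests or health checks; on a shared EKS cluster you would typically add something like the following to the container spec (the values here are illustrative starting points, not tuned numbers):&lt;/p&gt;

```yaml
# Illustrative additions to the frontend container spec -- values are starting points
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            memory: 256Mi
        readinessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 15
```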

&lt;p&gt;Step 4.2: Deploy Application to Kubernetes&lt;br&gt;
kubectl create namespace opentelemetry-demo&lt;br&gt;
kubectl apply -f kubernetes/serviceaccount.yaml -n opentelemetry-demo&lt;br&gt;
kubectl apply -f kubernetes/complete-deploy.yaml -n opentelemetry-demo&lt;br&gt;
kubectl get deployments -n opentelemetry-demo&lt;br&gt;
kubectl get pods -n opentelemetry-demo&lt;br&gt;
kubectl get services -n opentelemetry-demo&lt;/p&gt;

&lt;p&gt;Step 4.3: Set Up Ingress Controller&lt;br&gt;
Install AWS Load Balancer Controller:&lt;br&gt;
kubectl apply -k "github.com/aws/eks-charts/stable/aws-load-balancer-controller//crds?ref=master"&lt;/p&gt;

&lt;p&gt;aws iam create-policy \&lt;br&gt;
    --policy-name AWSLoadBalancerControllerIAMPolicy \&lt;br&gt;
    --policy-document file://iam_policy.json&lt;/p&gt;

&lt;p&gt;eksctl create iamserviceaccount \&lt;br&gt;
    --cluster=opentelemetry-demo-cluster \&lt;br&gt;
    --namespace=kube-system \&lt;br&gt;
    --name=aws-load-balancer-controller \&lt;br&gt;
    --role-name=AmazonEKSLoadBalancerControllerRole \&lt;br&gt;
    --attach-policy-arn=arn:aws:iam::YOUR-ACCOUNT-ID:policy/AWSLoadBalancerControllerIAMPolicy \&lt;br&gt;
    --approve&lt;/p&gt;
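&lt;p&gt;With the CRDs and the IAM service account in place, the controller itself is installed via its Helm chart (this assumes Helm is available locally and the kubeconfig points at the cluster):&lt;/p&gt;

```shell
# Install the AWS Load Balancer Controller into kube-system,
# reusing the service account created by eksctl above
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
    -n kube-system \
    --set clusterName=opentelemetry-demo-cluster \
    --set serviceAccount.create=false \
    --set serviceAccount.name=aws-load-balancer-controller
```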

&lt;p&gt;Step 4.4: Configure Application Ingress&lt;br&gt;
Create an ingress resource:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1&lt;br&gt;
kind: Ingress&lt;br&gt;
metadata:&lt;br&gt;
  name: opentelemetry-demo-ingress&lt;br&gt;
  namespace: opentelemetry-demo&lt;br&gt;
  annotations:&lt;br&gt;
    alb.ingress.kubernetes.io/scheme: internet-facing&lt;br&gt;
    alb.ingress.kubernetes.io/target-type: ip&lt;br&gt;
spec:&lt;br&gt;
  ingressClassName: alb&lt;br&gt;
  rules:&lt;br&gt;
  - host: your-domain.com&lt;br&gt;
    http:&lt;br&gt;
      paths:&lt;br&gt;
      - path: /&lt;br&gt;
        pathType: Prefix&lt;br&gt;
        backend:&lt;br&gt;
          service:&lt;br&gt;
            name: opentelemetry-demo-frontendproxy&lt;br&gt;
            port:&lt;br&gt;
              number: 8080&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Apply the ingress:&lt;br&gt;
&lt;code&gt;kubectl apply -f ingress.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Step 4.5: Verify Deployment&lt;br&gt;
kubectl get all -n opentelemetry-demo&lt;br&gt;
kubectl get ingress -n opentelemetry-demo&lt;br&gt;
kubectl logs -l app.kubernetes.io/name=opentelemetry-demo-frontend -n opentelemetry-demo&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy6wgu40ebcg58qh6m0wb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy6wgu40ebcg58qh6m0wb.png" alt=" " width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Phase 5: Domain Setup with Route53
&lt;/h2&gt;

&lt;p&gt;Step 5.1: Register Domain (Optional)&lt;br&gt;
Options:&lt;/p&gt;

&lt;p&gt;Register through AWS Route 53&lt;br&gt;
Use an existing domain&lt;br&gt;
Use the Load Balancer DNS name for testing&lt;/p&gt;

&lt;p&gt;Step 5.2: Create Hosted Zone&lt;br&gt;
aws route53 create-hosted-zone \&lt;br&gt;
    --name your-domain.com \&lt;br&gt;
    --caller-reference $(date +%s)&lt;/p&gt;

&lt;p&gt;Step 5.3: Update Name Servers&lt;/p&gt;

&lt;p&gt;Get name servers from the hosted zone&lt;br&gt;
Update your domain registrar to use Route53 name servers&lt;/p&gt;

&lt;p&gt;Step 5.4: Create DNS Records&lt;br&gt;
kubectl get ingress opentelemetry-demo-ingress -n opentelemetry-demo -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'&lt;/p&gt;

&lt;p&gt;aws route53 change-resource-record-sets \&lt;br&gt;
    --hosted-zone-id ZXXXXXXXXXXXXX \&lt;br&gt;
    --change-batch file://dns-record.json&lt;/p&gt;

&lt;p&gt;Example dns-record.json:&lt;br&gt;
{&lt;br&gt;
  "Changes": [&lt;br&gt;
    {&lt;br&gt;
      "Action": "CREATE",&lt;br&gt;
      "ResourceRecordSet": {&lt;br&gt;
        "Name": "opentelemetry-demo.your-domain.com",&lt;br&gt;
        "Type": "CNAME",&lt;br&gt;
        "TTL": 300,&lt;br&gt;
        "ResourceRecords": [&lt;br&gt;
          {&lt;br&gt;
            "Value": "k8s-opentel-opentel-xxxxxxxxx-yyyyyyyyyy.us-west-2.elb.amazonaws.com"&lt;br&gt;
          }&lt;br&gt;
        ]&lt;br&gt;
      }&lt;br&gt;
    }&lt;br&gt;
  ]&lt;br&gt;
}&lt;/p&gt;
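&lt;p&gt;Rather than hand-editing dns-record.json on every deploy, the ALB hostname from the earlier kubectl command can be substituted into a template. A small sketch (the template filename and placeholder token are assumptions, not part of the project):&lt;/p&gt;

```shell
#!/bin/bash
# Fill the change-batch template with the live ALB hostname (token and filenames assumed)
set -e

render_change_batch() {
  local alb_hostname="$1"
  # dns-record.template.json contains the literal token ALB_HOSTNAME where the value goes
  sed "s|ALB_HOSTNAME|${alb_hostname}|" dns-record.template.json | tee dns-record.json
}

# In practice the hostname comes from:
#   kubectl get ingress opentelemetry-demo-ingress -n opentelemetry-demo \
#     -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```

&lt;p&gt;The rendered dns-record.json is then passed to the aws route53 change-resource-record-sets command shown above.&lt;/p&gt;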

&lt;h2&gt;
  
  
  Troubleshooting
&lt;/h2&gt;

&lt;p&gt;Common Issues and Solutions&lt;br&gt;
Issue 1: EKS Nodes Not Ready&lt;br&gt;
kubectl describe node NODE_NAME&lt;br&gt;
Common causes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Insufficient IAM permissions&lt;/li&gt;
&lt;li&gt;Security group issues&lt;/li&gt;
&lt;li&gt;Subnet configuration problems&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Issue 2: Pods in Pending State&lt;br&gt;
kubectl describe pod POD_NAME -n opentelemetry-demo&lt;br&gt;
Common causes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Insufficient resources&lt;/li&gt;
&lt;li&gt;Image pull errors&lt;/li&gt;
&lt;li&gt;Volume mounting issues&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Issue 3: Service Not Accessible&lt;br&gt;
kubectl get svc,endpoints -n opentelemetry-demo&lt;br&gt;
kubectl describe ingress opentelemetry-demo-ingress -n opentelemetry-demo&lt;/p&gt;

&lt;p&gt;Issue 4: Terraform State Issues&lt;br&gt;
terraform refresh&lt;br&gt;
terraform import aws_instance.example i-1234567890abcdef0&lt;/p&gt;

&lt;p&gt;Debugging Commands&lt;br&gt;
kubectl get all --all-namespaces&lt;br&gt;
kubectl get events --sort-by=.metadata.creationTimestamp&lt;br&gt;
kubectl logs -f POD_NAME -n opentelemetry-demo&lt;br&gt;
kubectl exec -it POD_NAME -n opentelemetry-demo -- /bin/bash&lt;br&gt;
kubectl top nodes&lt;br&gt;
kubectl top pods -n opentelemetry-demo&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgq34yi3lbcyji5cf4qr7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgq34yi3lbcyji5cf4qr7.png" alt=" " width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This project demonstrates a complete DevOps implementation using modern cloud-native technologies. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deployed a complex microservices application locally&lt;/li&gt;
&lt;li&gt;Set up AWS infrastructure using Terraform&lt;/li&gt;
&lt;li&gt;Orchestrated containers with Kubernetes on EKS&lt;/li&gt;
&lt;li&gt;Implemented monitoring and observability&lt;/li&gt;
&lt;li&gt;Configured DNS and load balancing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Next Steps&lt;/p&gt;

&lt;p&gt;Enhance security with network policies and RBAC&lt;br&gt;
Implement autoscaling with HPA and cluster autoscaling&lt;br&gt;
Optimize costs with resource requests/limits and spot instances&lt;br&gt;
Set up disaster recovery procedures&lt;/p&gt;

&lt;p&gt;This documentation serves as a complete guide for building and deploying the OpenTelemetry Astronomy Shop using modern DevOps practices. For questions or improvements, refer to the official OpenTelemetry documentation and AWS best practices guides.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
