🎯 This project contains the OpenTelemetry Astronomy Shop, a microservice-based distributed system intended to illustrate the implementation of OpenTelemetry in a near real-world environment.
Welcome to the OpenTelemetry Astronomy Shop project demonstration.
📌 The goals here are:
Realistic Distributed System Example: This repository demonstrates a real-world, microservice-based distributed application using the OpenTelemetry Astronomy Shop, designed to showcase end-to-end observability in modern cloud-native systems.
Multi-Microservice, Multi-Language Architecture: The application consists of multiple independent microservices implemented in different programming languages, reflecting how real organizations build and maintain heterogeneous technology stacks.
Hands-On Observability Experience: By implementing this project, you gain practical, hands-on experience with distributed tracing, context propagation, and observability tools as used in real production environments.
Industry-Aligned Learning Approach: This project mirrors how organizations design, instrument, and monitor microservice architectures, helping learners understand real-world implementation patterns and best practices.
📌 Tools used to implement the application:
- Containerized the microservices using Docker
- AWS services used: EC2 instances, IAM roles & policies, EKS cluster, Route 53, VPC, subnets
- Kubernetes for managing the containerized microservices
- AWS infrastructure created using Terraform modules
- CI implemented using GitHub Actions and CD using Argo CD
- Ingress and an Ingress Controller used to expose the application to the outside world
- Helm charts used to download and install software and dependencies
📌 What You Will Learn:
- Cloud Infrastructure Setup: Learn how to configure and deploy a cloud environment for DevOps implementation.
- Understanding the Project & SDLC: Gain in-depth knowledge of software development lifecycles in microservices-based architectures.
- Containerization with Docker: Learn how to package and manage applications efficiently using Docker.
- Docker Compose Setup: Manage multi-container applications with Docker Compose.
- Kubernetes for Orchestration: Deploy and manage containers at scale using Kubernetes.
- Infrastructure as Code (IaC) with Terraform: Automate and manage cloud infrastructure effortlessly.
📌 Application architecture: https://opentelemetry.io/docs/demo/architecture/
📌 Source code of the application: https://github.com/open-telemetry/opentelemetry-demo
📌 Overview of the microservices used in the project: https://opentelemetry.io/docs/demo/services/
Running the Application in a Local Environment:
✅ Create an EC2 instance:
- Type: t2.large
- Storage: 30 GB
✅ Resize the file system (if you run out of space while building the application):
- Increase the instance volume to 30 GB in the AWS console
- Inspect the block devices and grow the partition in the CLI:
$lsblk
$sudo apt install cloud-guest-utils
$sudo growpart /dev/xvda 1
$sudo resize2fs /dev/xvda1
If you face any "permission denied" issue while executing docker commands, add the user to the docker group:
$sudo usermod -aG docker ubuntu
✅ Running the application in the local environment using Docker Compose:
Why Docker Compose?
- I'm deploying an e-commerce application that contains multiple microservices, so containerizing them and running them as one stack with Docker Compose is efficient.
- It runs multiple containers at once and establishes the dependencies between the containers.
You will find the docker-compose.yaml file in my GitHub repository:
Link: https://github.com/Nandan3/End-to-End-DevOps-Projects/blob/main/docker-compose.yaml
Run the command below:
$docker compose up -d
It will pull the images from the Docker registry and run the microservices.
✅ Edit the security group inbound rules to access the application from a web browser:
Access the application using: http://<Public-IP>:8080
Public-IP: the EC2 public IP
Containerizing the Microservices Using Dockerfiles:
📌 This is an e-commerce application that contains multiple microservices, each developed in a different programming language. For example:
- Product Catalog service: Go
- Ad service: Java
- Recommendation service: Python
So I took these three microservices, each in a different programming language, containerized them using Dockerfiles, and built the images.
📌 Product Catalog service (Go):
- $sudo apt install golang-go
- Files: go.mod & go.sum take care of the dependencies
Build the service locally using the steps below:
a. $export PRODUCT_CATALOG_PORT=8088
b. $go build -o product-catalog .
c. $./product-catalog
Output:
INFO[0000] Loaded 10 products
INFO[0000] Product Catalog gRPC server started on port: 8088
✅ Using a Dockerfile:
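The actual Dockerfile lives in the upstream demo repo; a minimal multi-stage sketch for a Go service like this one could look as follows (the Go version, base images, and paths are assumptions, not the project's real file):

```dockerfile
# Stage 1: compile the Go binary (Go version is an assumption)
FROM golang:1.22 AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o product-catalog .

# Stage 2: copy only the static binary into a small runtime image
FROM alpine:3.19
WORKDIR /app
COPY --from=builder /app/product-catalog .
ENV PRODUCT_CATALOG_PORT=8088
EXPOSE 8088
ENTRYPOINT ["./product-catalog"]
```

The multi-stage build keeps the final image small: the Go toolchain stays in the builder stage and only the compiled binary ships.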
Run the commands below to build the image and run the container:
a. $docker build -t nandandocker07/product-catalog:v1 .
-t >> tag, in the form (docker-hub-user)/repo-name:version
. >> build context (the directory containing the Dockerfile)
b. $docker run nandandocker07/product-catalog:v1
📌 Ad service (Java):
- Built using Gradle or Maven
- File: gradlew (the Gradle wrapper for Linux & Mac) or gradlew.bat (for Windows) takes care of the dependencies; pom.xml takes care of the dependencies in the case of Maven
Build the service locally using the steps below:
a. Install Java:
$sudo apt install openjdk-21-jre-headless
b. Make the gradlew file executable:
$chmod +x gradlew
c. Run the command below:
$./gradlew installDist
This will:
✔ Start the Gradle daemon, which is like a build server
✔ Install the dependencies
✔ Compile the code
✔ Build the application and store it in a particular directory
✅ Using a Dockerfile:
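Again, the real Dockerfile ships with the demo; a hedged sketch of a Gradle-based build for the Ad service might look like this (the base images, install directory name, and port are assumptions):

```dockerfile
# Stage 1: build the distribution with the Gradle wrapper
FROM eclipse-temurin:21 AS builder
WORKDIR /app
COPY . .
RUN chmod +x gradlew && ./gradlew installDist

# Stage 2: run the installed distribution on a JRE-only image
FROM eclipse-temurin:21-jre
WORKDIR /app
# "ad-service" as the installDist output directory is an assumption
COPY --from=builder /app/build/install/ad-service ./
ENV AD_SERVICE_PORT=9099
EXPOSE 9099
ENTRYPOINT ["./bin/ad-service"]
```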
Run the commands below to build the image and run the container:
a. $docker build -t nandandocker07/ad-service:v1 .
b. $docker run nandandocker07/ad-service:v1
📌 Recommendation service (Python):
- File: requirements.txt takes care of the dependencies
✅ Using a Dockerfile:
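For the Python service, a representative Dockerfile sketch could look like this (the entrypoint module name and port are assumptions):

```dockerfile
FROM python:3.12-slim
WORKDIR /app
# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
ENV RECOMMENDATION_SERVICE_PORT=9001
EXPOSE 9001
# The server module name is hypothetical
ENTRYPOINT ["python", "recommendation_server.py"]
```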
Run the commands below to build the image and run the container:
a. $docker build -t nandandocker07/recommandation-service:v1 .
b. $docker run nandandocker07/recommandation-service:v1
📌 Push the Docker images into Docker Hub (a registry, or container artifactory):
- Registries include: Docker Hub, quay.io, ECR, ACR, GHCR
- First log in to the registry where you want to push the image. Eg:
$docker login docker.io
$docker login quay.io
$docker login <ECR-registry-URI> #AWS
- Tag and push the image using the commands below:
$docker tag nandandocker07/product-catalog:v1 docker.io/nandandocker07/product-catalog:v1
$docker push docker.io/nandandocker07/product-catalog:v1
docker.io >> Docker Hub registry
nandandocker07 >> registry user name
product-catalog >> repo name
:v1 >> tag of the image
📌 Preparation of the docker-compose file to run the entire environment:
Main components of docker-compose.yml:
✔ services object >> defines how to pull and run the microservices
✔ networks object >> creates a network for all the microservices
✔ volumes object >> defines persistent storage for the containers
You will find the docker-compose.yaml file in my GitHub repository:
Link: https://github.com/Nandan3/End-to-End-DevOps-Projects/blob/main/docker-compose.yaml
Run the application using Docker Compose:
$docker compose up -d
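As a rough illustration of those three objects (the service names, images, and ports below are illustrative, not the contents of the real file):

```yaml
services:
  product-catalog:
    image: nandandocker07/product-catalog:v1   # illustrative image name
    environment:
      - PRODUCT_CATALOG_PORT=8088
    networks:
      - otel-demo
  frontend-proxy:
    image: example/frontend-proxy:v1           # illustrative image name
    ports:
      - "8080:8080"                            # host:container
    depends_on:
      - product-catalog
    networks:
      - otel-demo
  db:
    image: postgres:16
    environment:
      - POSTGRES_PASSWORD=example
    volumes:
      - db-data:/var/lib/postgresql/data       # named volume for persistence
    networks:
      - otel-demo

networks:
  otel-demo:

volumes:
  db-data:
```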
Access the application using: http://<Public-IP>:8080
Public-IP: the EC2 public IP
Deploy the Application into AWS EKS Cluster:
📌 Why AWS EKS?
- Upgrades become easy
- Managed control plane
- Scaling becomes easy
- Integrated K8s UI
- Cost efficient
📌 Creating the following infrastructure using Terraform:
- VPC, Public subnet and Private subnet
- Internet gateway, NAT gateway, Route tables
- EKS cluster
📌 Create an S3 bucket and a DynamoDB table for state file management and locking:
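With those in place, the Terraform backend block would point at them, roughly like this (the bucket and table names here are assumptions):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-eks-tf-state"    # assumed bucket name
    key            = "eks/terraform.tfstate"
    region         = "us-west-1"
    dynamodb_table = "terraform-lock"     # assumed table name for state locking
    encrypt        = true
  }
}
```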
📌 Create the VPC & EKS cluster using Terraform:
VPC & Cluster creation code: https://github.com/Nandan3/End-to-End-DevOps-Projects/tree/main/EKS_install
✅ Cluster information:
- cluster_endpoint = "https://D9F38304FBE9AFF51F006DE59DEC4993.gr7.us-west-1.eks.amazonaws.com"
- cluster_name = "my-eks-cluster"
- vpc_id = "vpc-09bcf1df61d87cf8b"
Run the following commands to create the cluster:
$terraform init
$terraform plan
$terraform apply --auto-approve
📌 How to connect to one or many Kubernetes clusters from the command line?
- Cluster info is stored in kube-config files:
$kubectl config view
$kubectl config current-context #a context represents the cluster
$kubectl config use-context <context-name> #switch between clusters
$cat ~/.aws/credentials >> path where the AWS credentials are stored
$aws eks update-kubeconfig --region us-west-1 --name my-eks-cluster
Output:
Added new context arn:aws:eks:us-west-1:454842419673:cluster/my-eks-cluster to /home/ubuntu/.kube/config
Context >> complete K8s-related information (creates the kube-config file)
$kubectl config view
$kubectl config current-context
$kubectl get nodes
📌 Deploying the Project on K8s:
- EKS cluster creation is completed.
- Check in the CLI: $kubectl config current-context
📌 Deploy the manifests (this also creates the required service account):
$kubectl apply -f complete-deploy.yaml
$kubectl get pods
📌 How is one microservice connected to another in this project?
- By using environment variables:
✔ The service name or URL is injected via a ConfigMap
✔ The application reads it at runtime
Eg: to connect the shipping service to the quote service, the environment variables of the shipping pod carry the service name of the quote service.
In the shipping service:
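As a sketch of what that looks like in a Deployment manifest (the variable name, image, and port here are assumptions, not the project's actual values):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shipping-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: shipping-service
  template:
    metadata:
      labels:
        app: shipping-service
    spec:
      containers:
        - name: shipping-service
          image: nandandocker07/shipping-service:v1   # hypothetical image
          env:
            - name: QUOTE_SERVICE_ADDR                # variable name is an assumption
              value: "http://quote-service:8090"      # quote service's Service DNS name
```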
✅ Other ways:
- By using service discovery mechanism using services
- DNS-based service discovery (implicit: Kubernetes automatically creates DNS entries for Services)
- Headless Services (direct Pod-to-Pod) [when to use: stateful apps (Kafka, Cassandra), or when a client needs individual Pod identities]
✅ Access the project from outside the VPC/cluster:
- Change the service type to LoadBalancer to access it via the internet:
$kubectl edit svc opentelemetry-demo-frontendproxy
Set: type: LoadBalancer
Configure the Ingress and Ingress controller to expose the application into internet:
📌 Why Ingress and an Ingress Controller:
- Configuration of the LB is not declarative. Eg: if I want to change from HTTP to HTTPS, I have to do it in the AWS UI, not in the svc.yaml manifest.
- Not cost effective. Eg: if 10 microservices require external access, AWS will create 10 LBs.
- The CCM is tied only to the AWS (or other cloud provider) LB; what about F5, NGINX, Envoy, Traefik?
- The LoadBalancer type will not work in minikube, kind, or k3d.
📌 Ingress:
- It helps you to define routing rules for incoming traffic
- Declarative (yaml) - LB can be updated and modified
- Cost effective: using path-based & host-based routing, we can route requests to hundreds of target groups
- Not dependent on the CCM: works via a reverse proxy or other mechanisms
📌 Creation of the Ingress & Ingress Controller:
- Create an Ingress resource for the front-end service >> install the ALB Ingress Controller (AWS), which reads the Ingress resource >> the ALB Ingress Controller then creates the LB for us.
📌 Steps to install the ALB Ingress Controller on the EKS cluster:
- The ALB Ingress Controller is a pod within the EKS cluster >> it creates an Elastic LB >> but only if the controller's service account is bound to the required IAM roles and policies >> the binding happens through an "IAM OIDC connector or provider"
- Install eksctl in CLI
- Make sure you are in the correct cluster
- Follow the steps to set up OIDC provider and ALB controller: https://github.com/Nandan3/End-to-End-DevOps-Projects/blob/main/kubernetes_files/kubernetes-ingress-controller
oidc_id: D9F38304FBE9AFF51F006DE59DEC4993
Check whether an IAM OIDC provider is configured already (adding the OIDC provider to my cluster):
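The standard way to do this check, following the AWS documentation, is roughly (using the region and cluster name from above):

```shell
# Extract the OIDC issuer ID from the cluster
oidc_id=$(aws eks describe-cluster --name my-eks-cluster --region us-west-1 \
  --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)
# Check whether an IAM OIDC provider already exists for that ID
aws iam list-open-id-connect-providers | grep $oidc_id | cut -d "/" -f 4
# If nothing is printed, associate one
eksctl utils associate-iam-oidc-provider --cluster my-eks-cluster --approve
```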
✅ Create a service account & an IAM role & policy and attach them to the SA:
Download the iam_policy.json for the ELB controller from AWS:
$curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.11.0/docs/install/iam_policy.json
- Create an IAM policy from it
- Create an IAM role and attach it to the service account created previously.
📌 Use a Helm chart to install the ALB controller:
Add the Helm repo for EKS:
$helm repo add eks https://aws.github.io/eks-charts
Install the ALB controller using Helm & verify that the deployment is running:
$helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=my-eks-cluster --set serviceAccount.create=false --set serviceAccount.name=aws-load-balancer-controller --set region=us-west-1 --set vpcId=vpc-09bcf1df61d87cf8b
📌 Set up the Ingress resources and access the Project:
- Using Ingress, access the project as an external user
- Change the type of the LB in the frontendproxy service yaml:
$kubectl edit svc opentelemetry-demo-frontendproxy
✅ Create the Ingress resource using a yaml manifest:
- Link of yaml file: https://github.com/Nandan3/End-to-End-DevOps-Projects/blob/main/kubernetes_files/ingress.yaml
- Run: $kubectl apply -f ingress.yaml
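The real manifest is at the link above; a representative ALB Ingress sketch (the host name and port are assumptions) looks like:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontendproxy-ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - host: example.com          # assumed host name
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: opentelemetry-demo-frontendproxy
                port:
                  number: 8080   # assumed service port
```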
Accessing the application via the IP address or DNS name will not work until you add it to the local DNS (hosts) file.
✅ Steps to add the IP & DNS name into the local hosts file:
- On Linux: add it in /etc/hosts
- On Windows: open Notepad as administrator >> File -> Open -> select the C:\windows\system32\drivers\etc\hosts file and add the entry there
Set up the CI/CD Pipeline Using GitHub Actions and the GitOps Approach (Argo CD):
📌 Built the Continuous Integration pipeline using GitHub Actions:
Write a ci.yaml file under the .github/workflows/ directory
Link of the file: https://github.com/Nandan3/End-to-End-DevOps-Projects/tree/main/.github/workflow
Create a new git branch in the CLI:
$git checkout -b cicheck
Switched to a new branch 'cicheck'
Modify the main.go file in the product-catalog directory to trigger the pipeline
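The actual workflow is in the repo link above; a hedged sketch of such a ci.yaml (the job names, action versions, and triggers are assumptions) could be:

```yaml
name: ci
on:
  pull_request:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: '1.22'
      - name: Build product-catalog
        run: |
          cd product-catalog
          go build -o product-catalog .
      - name: Build Docker image
        uses: docker/build-push-action@v6
        with:
          context: ./product-catalog
          push: false          # a real pipeline would log in and push
          tags: nandandocker07/product-catalog:${{ github.sha }}
```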
📌 Built the Continuous Deployment pipeline using the GitOps approach:
✅ Why Argo CD is preferred over other tools (like Ansible, shell scripting, or Python with Helm):
1. Constant monitoring and automatic deployment to the K8s cluster
2. Reconciliation of state: if any change is made directly in the K8s cluster, Argo CD removes it and restores the previous version (the version control system is the source of truth)