Nandan K
DevOps Project: Containerize and Run an E-commerce Application in an AWS EKS Cluster using Terraform with Best Practices

🎯 This project contains the OpenTelemetry Astronomy Shop, a microservice-based distributed system intended to illustrate the implementation of OpenTelemetry in a near real-world environment.

Welcome to the OpenTelemetry Astronomy Shop Project Demonstration:

πŸ“Œ Here the goals are:

  1. Realistic Distributed System Example: This repository demonstrates a real-world, microservice-based distributed application using the OpenTelemetry Astronomy Shop, designed to showcase end-to-end observability in modern cloud-native systems.

  2. Multi-Microservice, Multi-Language Architecture: The application consists of multiple independent microservices implemented in different programming languages, reflecting how real organizations build and maintain heterogeneous technology stacks.

  3. Hands-On Observability Experience: By implementing this project, you gain practical, hands-on experience with distributed tracing, context propagation, and observability tools as used in real production environments.

  4. Industry-Aligned Learning Approach: This project mirrors how organizations design, instrument, and monitor microservice architectures, helping learners understand real-world implementation patterns and best practices.

πŸ“Œ Tools used to implement the application:

  1. Containerized the micro-services using "Docker"
  2. AWS services used: EC2 instances, IAM roles & policies, EKS cluster, Route53, VPC, Subnets
  3. Kubernetes for managing the containerized micro-services
  4. AWS infrastructure creation using Terraform modules
  5. Implemented CI using GitHub Actions and CD using ArgoCD
  6. Used Ingress and an Ingress Controller to expose the application to the external world
  7. Helm charts for downloading and installing software and dependencies

πŸ“Œ What You Will Learn:

  1. Cloud Infrastructure Setup – Learn how to configure and deploy a cloud environment for DevOps implementation.
  2. Understanding the Project & SDLC – Gain in-depth knowledge of software development lifecycles in microservices-based architectures.
  3. Containerization with Docker – Learn how to package and manage applications efficiently using Docker.
  4. Docker Compose Setup – Manage multi-container applications with Docker Compose.
  5. Kubernetes for Orchestration – Deploy and manage containers at scale using Kubernetes.
  6. Infrastructure as Code (IaC) with Terraform – Automate and manage cloud infrastructure effortlessly.

πŸ“Œ Refer to the application architecture: https://opentelemetry.io/docs/demo/architecture/

πŸ“Œ Refer to the source code of the application: https://github.com/open-telemetry/opentelemetry-demo

πŸ“Œ The project architecture diagram and an overview of the microservices are available at the links below.

Architecture
https://opentelemetry.io/docs/demo/architecture/

Overview of microservices used in the project
https://opentelemetry.io/docs/demo/services/

Running the Application in a Local Environment:

βœ… Create an EC2 instance:
type: t2.large
storage: 30 GB

βœ… Resize the file system (if running out of space while building the application):

  1. Increase the instance volume to 30 GB

  2. Resize the file system from the CLI:

$lsblk
$sudo apt install cloud-guest-utils
$sudo growpart /dev/xvda 1
$sudo resize2fs /dev/xvda1

βœ… If you face a "permission denied" issue while executing docker commands, add the user to the docker group:
$sudo usermod -aG docker ubuntu

βœ… Running the application in the local environment using Docker Compose:

Why Docker Compose?

  1. This e-commerce application contains multiple micro-services, so containerizing them and running them as one unit with Docker Compose is efficient.
  2. It runs multiple containers at once and establishes the dependencies between them.
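A minimal sketch of what such a compose file looks like (the service names, images, and ports here are illustrative assumptions; the real file is in the repository linked below):

```yaml
# Illustrative docker-compose sketch (not the project's exact file)
services:
  frontend:
    image: nandandocker07/frontend:v1   # assumed image name
    ports:
      - "8080:8080"                     # expose the UI on port 8080
    depends_on:
      - product-catalog                 # start order dependency
  product-catalog:
    image: nandandocker07/product-catalog:v1
    environment:
      - PRODUCT_CATALOG_PORT=8088
networks:
  default:
    driver: bridge                      # single shared network for all services
```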

βœ… You will find the docker-compose.yaml file in my GitHub repository:
Link: https://github.com/Nandan3/End-to-End-DevOps-Projects/blob/main/docker-compose.yaml

βœ… Run the below command:
$docker compose up -d

It will pull the images from the Docker registry and run the micro-services.

βœ… Edit the security group's inbound rules to access the application from a web browser.

βœ… Access the application using: http://<Public-IP>:8080
Public-IP: EC2 Public IP

Containerize the Micro-services using Dockerfiles:

πŸ“Œ This is an e-commerce application that contains multiple micro-services, each developed in a different programming language. For example:

  1. Product-catalog service: Go
  2. Ad service: Java
  3. Recommendation service: Python

So I took 3 micro-services written in different programming languages, containerized them using Dockerfiles, and built the images.

πŸ“Œ Product-catalog service: (Go programming language)

  1. $sudo apt install golang-go
  2. Files: go.mod & go.sum take care of the dependencies

Build the service locally using the steps below:
a. $export PRODUCT_CATALOG_PORT=8088
b. $go build -o product-catalog .
c. $./product-catalog

O/P:
INFO[0000] Loaded 10 products
INFO[0000] Product Catalog gRPC server started on port: 8088

βœ… Using Dockerfile:
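A minimal multi-stage Dockerfile sketch for the Go service (base images, file layout, and the port are assumptions, not the exact file from the repo):

```dockerfile
# Build stage: compile the Go binary (assumed layout and versions)
FROM golang:1.22 AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o product-catalog .

# Run stage: small final image containing only the binary
FROM alpine:3.19
WORKDIR /app
COPY --from=builder /app/product-catalog .
ENV PRODUCT_CATALOG_PORT=8088
EXPOSE 8088
ENTRYPOINT ["./product-catalog"]
```

The multi-stage build keeps the final image small: the Go toolchain stays in the builder stage and only the compiled binary is shipped.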

βœ… Run the below commands to build the image and to run the container:
a. $docker build -t nandandocker07/product-catalog:v1 .

-t >> tag: (docker-hub-user)/repo-name:version
. >> location of the Dockerfile

b. $docker run nandandocker07/product-catalog:v1

πŸ“Œ Ad-service: (Java programming language)

  1. Build using gradle or maven
  2. File: gradlew (Gradle wrapper for Linux & Mac) or gradlew.bat (for Windows) takes care of dependencies. File: pom.xml (takes care of dependencies in case of Maven)

Build the service locally using the steps below:
a. Install Java:
$sudo apt install openjdk-21-jre-headless
b. Give execute permission to the gradlew file:
$chmod +x gradlew
c. Run the below command:
$./gradlew installDist

βœ“ Starts the Gradle daemon, which is like a server
βœ“ Installs the dependencies
βœ“ Performs code compilation
βœ“ Builds the application and stores it in a particular directory

βœ… Using Dockerfile:
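A multi-stage Dockerfile sketch for the Java service (base images, install path, and port are assumptions, not the repo's exact file):

```dockerfile
# Build stage: use the Gradle wrapper to build the distribution
FROM eclipse-temurin:21-jdk AS builder
WORKDIR /app
COPY . .
RUN chmod +x gradlew && ./gradlew installDist

# Run stage: JRE-only image running the installed distribution
# (the install directory and launcher name are assumptions)
FROM eclipse-temurin:21-jre
WORKDIR /app
COPY --from=builder /app/build/install/ad-service ./
EXPOSE 9555
ENTRYPOINT ["./bin/ad-service"]
```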

βœ… Run the below commands to build the image and to run the container:
a. $docker build -t nandandocker07/ad-service:v1 .

b. $docker run nandandocker07/ad-service:v1

πŸ“Œ Recommendation service: (Python programming language)

  1. File: requirements.txt takes care of the dependencies

βœ… Using Dockerfile:
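A Dockerfile sketch for the Python service (base image, entrypoint filename, and port are assumptions):

```dockerfile
# Install dependencies first so this layer is cached between builds
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
ENV RECOMMENDATION_PORT=9001
EXPOSE 9001
CMD ["python", "recommendation_server.py"]
```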

βœ… Run the below commands to build the image and to run the container:
a. $docker build -t nandandocker07/recommandation-service:v1 .

b. $docker run nandandocker07/recommandation-service:v1

πŸ“Œ Push the docker images into Docker Hub (a registry or container artifactory):

  1. Registries: Docker Hub, quay.io, ECR, ACR, GHCR
  2. First log in to the registry where you want to push the image. Eg:
     $docker login docker.io
     $docker login quay.io
     $docker login <ARN>   #AWS
  3. Tag and push the image using the below commands:
     $docker tag nandandocker07/product-catalog:v1 docker.io/nandandocker07/product-catalog:v1
     $docker push docker.io/nandandocker07/product-catalog:v1

docker.io >> Docker Hub registry
nandandocker07 >> registry user name
product-catalog >> repo name
:v1 >> tag of the image

πŸ“Œ Preparation of the docker-compose file to run the entire environment:

  1. Main components of docker-compose.yml:
    βœ“ Services object >> defines how to pull and run the micro-services
    βœ“ Networks object >> creates a network for all micro-services
    βœ“ Volumes object

  2. You will find the docker-compose.yaml file in my GitHub repository
    Link: https://github.com/Nandan3/End-to-End-DevOps-Projects/blob/main/docker-compose.yaml

  3. Run the application using docker compose commands:
    $docker compose up -d

βœ… Access the application using: http://<Public-IP>:8080
Public-IP: EC2 Public IP

Deploy the Application into AWS EKS Cluster:

πŸ“Œ Why AWS EKS?

  1. Upgrades become easy
  2. Managed control plane
  3. Scaling becomes easy
  4. Integrated K8s UI
  5. Cost efficient

πŸ“Œ Creating the following infrastructure using Terraform:

  1. VPC, Public subnet and Private subnet
  2. Internet gateway, NAT gateway, Route tables
  3. EKS cluster

πŸ“Œ Create S3 bucket and DynamoDB for state file management and locking:
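A sketch of such a backend block (the bucket and table names are assumptions; the S3 bucket and DynamoDB table must exist before `terraform init`):

```hcl
# Remote state: S3 stores the state file, DynamoDB provides state locking
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"   # assumed bucket name
    key            = "eks/terraform.tfstate"
    region         = "us-west-1"
    dynamodb_table = "terraform-state-lock"        # assumed table name
    encrypt        = true
  }
}
```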

πŸ“Œ Create VPC & EKS cluster using Terraform:
VPC & Cluster creation code: https://github.com/Nandan3/End-to-End-DevOps-Projects/tree/main/EKS_install
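At a high level, the infrastructure can be sketched with the community terraform-aws-modules (the module versions, CIDRs, and AZs here are assumptions; the actual code is in the EKS_install directory linked above):

```hcl
# Illustrative sketch, not the repository's exact code
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name            = "eks-vpc"
  cidr            = "10.0.0.0/16"
  azs             = ["us-west-1a", "us-west-1b"]
  public_subnets  = ["10.0.1.0/24", "10.0.2.0/24"]
  private_subnets = ["10.0.3.0/24", "10.0.4.0/24"]

  enable_nat_gateway = true   # NAT gateway so private subnets reach the internet
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  cluster_name    = "my-eks-cluster"
  cluster_version = "1.30"                    # assumed K8s version
  vpc_id          = module.vpc.vpc_id
  subnet_ids      = module.vpc.private_subnets
}
```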

βœ… Cluster information:

  1. cluster_endpoint = "https://D9F38304FBE9AFF51F006DE59DEC4993.gr7.us-west-1.eks.amazonaws.com"
  2. cluster_name = "my-eks-cluster"
  3. vpc_id = "vpc-09bcf1df61d87cf8b"

βœ… Run the following command to create the cluster:
$terraform init
$terraform plan
$terraform apply --auto-approve

πŸ“Œ How to connect to one or many Kubernetes clusters from the command line?

  1. Cluster info is stored in kube-config files:
     $kubectl config view
     $kubectl config current-context   #context represents the cluster
     $kubectl config use-context <context-name>   #switch between clusters
  2. $cat ~/.aws/credentials >> path where AWS credentials are stored
  3. $aws eks update-kubeconfig --region us-west-1 --name my-eks-cluster

o/p:
Added new context arn:aws:eks:us-west-1:454842419673:cluster/my-eks-cluster to /home/ubuntu/.kube/config
Context >> complete K8s-related information (stored in the kube-config file)

$kubectl config view
$kubectl config current-context
$kubectl get nodes

πŸ“Œ Deploying the Project on K8s:

  1. EKS cluster creation is completed.
  2. Check in the CLI: $kubectl config current-context

πŸ“Œ Create a service account:

$kubectl apply -f complete-deploy.yaml
$kubectl get pods

πŸ“Œ How is one micro-service connected to another in this project?

  1. By using environment variables:
    βœ“ The service name or URL is injected via a ConfigMap
    βœ“ The application reads it at runtime
    Eg: To connect the shipping service to the quote service, the environment variables of the shipping pod hold the service name of the quote service.

In shipping service:
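A sketch of how the quote-service address could be injected into the shipping pod (the ConfigMap name, key, image, and port are assumptions, not the project's exact manifests):

```yaml
# ConfigMap holding the Kubernetes DNS name of the quote Service
apiVersion: v1
kind: ConfigMap
metadata:
  name: shipping-config
data:
  QUOTE_SERVICE_ADDR: "quote:8090"
---
# Shipping Deployment reads the address from the ConfigMap at runtime
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shipping
spec:
  selector:
    matchLabels: { app: shipping }
  template:
    metadata:
      labels: { app: shipping }
    spec:
      containers:
        - name: shipping
          image: nandandocker07/shipping:v1   # illustrative image name
          env:
            - name: QUOTE_SERVICE_ADDR
              valueFrom:
                configMapKeyRef:
                  name: shipping-config
                  key: QUOTE_SERVICE_ADDR
```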

βœ… Other ways:

  1. By using service discovery mechanism using services
  2. DNS-Based Service Discovery (Implicit - Kubernetes automatically creates DNS entries for Services.)
  3. Headless Services (Direct Pod-to-Pod) [When to use - Stateful apps (Kafka, Cassandra), Client needs individual Pod identities]

βœ… Access the project from outside the VPC/Cluster:

  1. Change the service type to LoadBalancer to access it via the internet:
     $kubectl edit svc opentelemetry-demo-frontendproxy
     Set: type: LoadBalancer
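After the edit, the Service looks roughly like this (the selector labels and ports are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: opentelemetry-demo-frontendproxy
spec:
  type: LoadBalancer   # changed from ClusterIP so AWS provisions an external LB
  selector:
    app: frontendproxy
  ports:
    - port: 8080
      targetPort: 8080
```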

Configure the Ingress and Ingress Controller to expose the application to the internet:

πŸ“Œ Why Ingress and Ingress Controller:

  1. Configuration of the LB is not declarative. Eg: if I want to change from http to https, I have to do it in the AWS UI, not in the svc.yaml manifest.
  2. Not cost effective. Eg: if 10 micro-services require external access, AWS will create 10 LBs.
  3. CCM is tied only to AWS (or whichever cloud provider's) LB; what about F5, NGINX, Envoy, Traefik?
  4. The LoadBalancer type will not work in minikube, kind, k3d.

πŸ“Œ Ingress:

  1. It helps you define routing rules for incoming traffic
  2. Declarative (yaml) - the LB can be updated and modified
  3. Cost effective - using path-based & host-based routing, we can route requests to 100s of target groups
  4. Not dependent on CCM - works with a reverse proxy or other mechanisms

πŸ“Œ Creation of Ingress & Ingress Controller:

  1. Create an Ingress resource for the front-end service >> install the ALB Ingress controller (AWS), which reads the Ingress resource >> accordingly, the ALB Ingress controller creates the LB for us.

πŸ“Œ Steps to install the ALB ingress controller on the EKS cluster:

  1. The ALB ingress controller is a pod within the EKS cluster >> it creates an Elastic LB >> but only if the service account of the ALB controller binds to the IAM roles/policies >> it binds through the "IAM OIDC connector or provider"
  2. Install eksctl in the CLI
  3. Make sure you are in the correct cluster
  4. Follow the steps to set up the OIDC provider and ALB controller: https://github.com/Nandan3/End-to-End-DevOps-Projects/blob/main/kubernetes_files/kubernetes-ingress-controller

oidc_id: D9F38304FBE9AFF51F006DE59DEC4993

Check if there is an IAM OIDC provider configured already [adding the OIDC provider to my cluster]

βœ… Create a service account & IAM role & policy and attach them to the SA:

  1. Download the IAM policy.json from AWS for the ELB controller:
    $curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.11.0/docs/install/iam_policy.json

  2. Create an IAM policy

  3. Create an IAM role and attach it to the service account created previously.

πŸ“Œ Use a Helm chart to install the ALB controller:

  1. Add helm repo for eks
    $helm repo add eks https://aws.github.io/eks-charts

  2. Install ALB controllers using helm & verify that the deployments are running.

$helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=my-eks-cluster --set serviceAccount.create=false --set serviceAccount.name=aws-load-balancer-controller --set region=us-west-1 --set vpcId=vpc-09bcf1df61d87cf8b

πŸ“Œ Set up Ingress resources and access the Project:

  1. Using Ingress, access the project as an external user
  2. Change the type of the LB in the frontendproxy service yaml:
     $kubectl edit svc opentelemetry-demo-frontendproxy

βœ… Create ingress resource using yaml manifest:

  1. Link of yaml file: https://github.com/Nandan3/End-to-End-DevOps-Projects/blob/main/kubernetes_files/ingress.yaml
  2. Run: $kubectl apply -f ingress.yaml
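A minimal ALB Ingress sketch (the host, ingress class, and backend port are assumptions; the real file is linked above):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontendproxy
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing   # public-facing ALB
    alb.ingress.kubernetes.io/target-type: ip           # route directly to pod IPs
spec:
  ingressClassName: alb
  rules:
    - host: example.com          # assumed hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: opentelemetry-demo-frontendproxy
                port:
                  number: 8080
```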

Accessing the application using the DNS name will not work until you add it to the local DNS (hosts) file.

βœ… Steps to add the IP & DNS name into the local DNS file:

  1. In Linux: add the entry in /etc/hosts
  2. In Windows: open Notepad as administrator >> File -> Open -> select the C:\windows\system32\drivers\etc\hosts file and add the entry there

Set up the CI/CD pipeline using GitHub Actions and a GitOps approach (ArgoCD):

πŸ“Œ Build the Continuous Integration pipeline using GitHub Actions:

  1. Write a ci.yaml file under the .github/workflows/ directory
    Link of the file: https://github.com/Nandan3/End-to-End-DevOps-Projects/tree/main/.github/workflow

  2. Create a new git branch in the CLI:
    $git checkout -b cicheck
    Switched to a new branch 'cicheck'
    Modify the main.go file in the product-catalog directory
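A sketch of what such a CI workflow could look like for the Go service (job names, paths, versions, and secret names are assumptions, not the repo's exact file):

```yaml
# Illustrative GitHub Actions workflow: build, test, then push an image
name: ci
on:
  push:
    branches: [main]
  pull_request:

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-go@v5
        with:
          go-version: '1.22'

      - name: Build and test
        run: |
          cd product-catalog
          go build ./...
          go test ./...

      - name: Build and push image
        run: |
          echo "${{ secrets.DOCKER_PASSWORD }}" | docker login -u "${{ secrets.DOCKER_USERNAME }}" --password-stdin
          docker build -t nandandocker07/product-catalog:${{ github.run_id }} product-catalog
          docker push nandandocker07/product-catalog:${{ github.run_id }}
```

Tagging the image with `github.run_id` gives every pipeline run a unique, traceable tag, which the CD step (ArgoCD) can then pick up from the updated manifests.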

πŸ“Œ Build the Continuous Deployment pipeline using a GitOps approach:

βœ… Why ArgoCD is preferred over other tools (like Ansible, shell scripting, or Python using Helm):

  1. Constant monitoring and automatic deployment to the K8s cluster
  2. Reconciliation of state: if any changes are made directly in the K8s cluster, ArgoCD removes them and restores the previous version (the version control system is the source of truth)
