Lohith S

OpenTelemetry E-commerce Website: DevOps Deployment with Terraform, CI/CD, Kubernetes, and AWS

Table of Contents

  1. Project Overview
  2. Prerequisites
  3. Phase 1: Local Development Setup
  4. Phase 2: AWS Account Setup
  5. Phase 3: Infrastructure as Code with Terraform
  6. Phase 4: Container Orchestration with Kubernetes
  7. Phase 5: Domain Setup with Route53
  8. Phase 6: Monitoring and Observability
  9. Troubleshooting

Project Overview

This project demonstrates a complete DevOps implementation using the OpenTelemetry Astronomy Shop, a microservice-based e-commerce application. The project showcases:

  • Multi-language microservices architecture (Go, Java, Python, C#, C++, TypeScript, Ruby, PHP, Rust, Elixir)
  • Complete DevOps pipeline from local development to production deployment
  • Infrastructure as Code using Terraform
  • Container orchestration with Kubernetes on AWS EKS
  • Observability with OpenTelemetry, Jaeger, and Grafana
  • Cloud-native deployment on AWS

Architecture Overview

The application consists of 14+ microservices:

  • Frontend: TypeScript/Next.js web interface
  • Product Catalog: Go service managing product information
  • Cart Service: C# service handling shopping cart operations
  • Payment Service: Node.js service processing payments
  • Checkout Service: Go service managing the checkout process
  • Ad Service: Java service for advertising
  • Recommendation Service: Python service providing product recommendations
  • Shipping Service: Rust service calculating shipping costs
  • Email Service: Ruby service handling email notifications
  • Currency Service: C++ service for currency conversion
  • Quote Service: PHP service generating quotes
  • Load Generator: Python service simulating user traffic
  • Feature Flag Service: Providing feature flag functionality

Prerequisites

Before starting, ensure you have the following tools installed:
Required Software

  • Git: Version control system
  • Docker: Container runtime (version 20.10+)
  • Docker Compose: Multi-container orchestration (v2.0.0+)
  • AWS CLI: AWS command-line interface
  • kubectl: Kubernetes command-line tool
  • Terraform: Infrastructure as Code tool (version 1.0+)
  • Text Editor: VS Code with recommended extensions (Terraform, YAML, Docker, Kubernetes)

System Requirements

  • RAM: Minimum 8GB, recommended 16GB
  • Storage: At least 20GB free space
  • Operating System: Windows 10/11, macOS, or Linux
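
Before moving on, it is worth confirming that the tools are installed and meet the minimum versions. A quick check, assuming everything is already on your PATH:

git --version
docker --version            # should report 20.10 or newer
docker compose version      # should report v2.0.0 or newer
aws --version
kubectl version --client
terraform version           # should report 1.0 or newer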

Phase 1: Local Development Setup

Step 1.1: Clone the Project Repository
Clone the OpenTelemetry demo repository to your local machine:
git clone https://github.com/open-telemetry/opentelemetry-demo.git
cd opentelemetry-demo

Step 1.2: Understand the Project Structure
The project follows this structure:
opentelemetry-demo/
├── src/
│   ├── accounting/          # .NET accounting service
│   ├── ad/                  # Java ad service
│   ├── cart/                # C# cart service
│   ├── checkout/            # Go checkout service
│   ├── currency/            # C++ currency service
│   ├── email/               # Ruby email service
│   ├── frontend/            # TypeScript frontend
│   ├── payment/             # Node.js payment service
│   ├── product-catalog/     # Go product catalog
│   ├── recommendation/      # Python recommendation service
│   └── ...                  # additional services
├── kubernetes/              # Kubernetes manifests
├── docker-compose.yml       # Local development setup
└── README.md

Step 1.3: Run the Application Locally
Start the application stack using Docker Compose:
# Start all services with pre-built images
docker compose up --no-build

# Alternative: build images from source
docker compose up

Note: Use --no-build to pull pre-built images instead of building from source.
Step 1.4: Verify Local Deployment
Access these endpoints once containers are running:

Web Store: http://localhost:8080
Grafana: http://localhost:8080/grafana
Feature Flags UI: http://localhost:8080/feature
Load Generator: http://localhost:8080/loadgen
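
If you prefer the terminal to the browser, a minimal smoke test could look like this (ports and paths assume the default docker-compose.yml):

docker compose ps            # every service should show a running/healthy state
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080          # expect 200
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/grafana  # expect 200 or a redirect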

Step 1.5: Understanding Docker Compose Architecture
The Docker Compose setup includes:
services:
  accounting:
    image: ${IMAGE_NAME}:${DEMO_VERSION}-accounting
    environment:
      - KAFKA_ADDR
      - OTEL_EXPORTER_OTLP_ENDPOINT
    depends_on:
      - otel-collector
      - kafka
  frontend:
    image: ${IMAGE_NAME}:${DEMO_VERSION}-frontend
    ports:
      - "8080:8080"
    environment:
      - FRONTEND_ADDR
      - AD_SERVICE_ADDR
      - CART_SERVICE_ADDR

Key components:

  • Services: Each microservice runs in its own container
  • Networks: Services communicate through Docker networks
  • Volumes: Persistent data storage for databases
  • Environment Variables: Configuration management
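
You can inspect these pieces directly with the Docker CLI. The exact network and volume names depend on the Compose project name derived from the directory, so treat the output as illustrative:

docker compose ps        # one container per microservice
docker compose config    # the fully resolved configuration, environment variables included
docker network ls        # networks used for service-to-service communication
docker volume ls         # volumes backing stateful services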

Phase 2: AWS Account Setup

Step 2.1: Create AWS Account
If you do not already have an AWS account, sign up at https://aws.amazon.com before continuing.

Step 2.2: Create IAM User
For security, create an IAM user instead of using the root account:

  1. Navigate to the IAM service in the AWS Console
  2. Create a user:
     • Username: devops-user
     • Access type: Programmatic access + AWS Management Console access
  3. Attach policies:
     • AdministratorAccess (for this demo project)
     • In production, use the principle of least privilege
  4. Download the credentials:
     • Access Key ID
     • Secret Access Key
     • Save the CSV file securely
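
If you prefer the terminal over the console, the same user can be created with the AWS CLI. This is a sketch that assumes you already have credentials with IAM permissions configured; it grants AdministratorAccess just like the console steps above:

aws iam create-user --user-name devops-user
aws iam attach-user-policy \
  --user-name devops-user \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
aws iam create-access-key --user-name devops-user   # note the AccessKeyId and SecretAccessKey in the output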

Step 2.3: Configure AWS CLI
Install and configure AWS CLI:

# Install AWS CLI
# On Windows: download the installer from the AWS website
# On macOS:
brew install awscli
# On Linux:
sudo apt install awscli

# Configure AWS CLI
aws configure

Enter credentials when prompted:

AWS Access Key ID: [Your Access Key]
AWS Secret Access Key: [Your Secret Key]
Default region name: us-west-2
Default output format: json

# Verify configuration
aws sts get-caller-identity

Expected output:

{
  "UserId": "AIDACKCEVSQ6C2EXAMPLE",
  "Account": "123456789012",
  "Arn": "arn:aws:iam::123456789012:user/devops-user"
}

Step 2.4: Create EC2 Key Pair
Create a key pair for EC2 access:

# Create key pair
aws ec2 create-key-pair --key-name devops-key --query 'KeyMaterial' --output text > devops-key.pem

# Set permissions (Linux/macOS)
chmod 400 devops-key.pem

Phase 3: Infrastructure as Code with Terraform

Step 3.1: Understanding Terraform Structure
The project includes a Terraform setup in the eks-install directory:
eks-install/
├── main.tf              # Main configuration
├── variables.tf         # Input variables
├── outputs.tf           # Output values
├── backend/             # Remote state setup
│   ├── main.tf
│   └── outputs.tf
└── modules/
    ├── vpc/             # VPC module
    │   ├── main.tf
    │   ├── variables.tf
    │   └── outputs.tf
    └── eks/             # EKS module
        ├── main.tf
        ├── variables.tf
        └── outputs.tf

Step 3.2: Set Up Terraform Backend
Create the S3 bucket and DynamoDB table for remote state:
cd eks-install/backend
terraform init
terraform plan
terraform apply
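
After the apply completes, you can confirm the remote-state resources exist. The bucket and table names below are placeholders; use whatever names the backend configuration in eks-install/backend defines:

aws s3 ls | grep YOUR-STATE-BUCKET-NAME          # the state bucket should be listed
aws dynamodb list-tables --region us-west-2      # should include the state-locking table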

Step 3.3: Configure Main Terraform
Update the variables.tf file:
variable "region" {
description = "AWS region"
type = string
default = "us-west-2"
}

variable "cluster_name" {
description = "EKS cluster name"
type = string
default = "opentelemetry-demo-cluster"
}

variable "vpc_cidr" {
description = "CIDR block for VPC"
type = string
default = "10.0.0.0/16"
}

variable "availability_zones" {
description = "Availability zones"
type = list(string)
default = ["us-west-2a", "us-west-2b", "us-west-2c"]
}

variable "private_subnet_cidrs" {
description = "CIDR blocks for private subnets"
type = list(string)
default = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
}

variable "public_subnet_cidrs" {
description = "CIDR blocks for public subnets"
type = list(string)
default = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
}
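
The defaults above can be overridden without editing the file, either through a terraform.tfvars file or on the command line. For example (the values here are purely illustrative):

terraform plan \
  -var 'region=us-east-1' \
  -var 'cluster_name=my-otel-demo' \
  -out=tfplan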

Step 3.4: Deploy EKS Infrastructure
cd ..
terraform init
terraform validate
terraform plan -out=tfplan
terraform apply tfplan

This creates:

  • VPC with public and private subnets
  • Internet Gateway and NAT Gateways
  • EKS cluster with managed node groups
  • Security groups and IAM roles
  • Route tables and associations

Expected Duration: 15-20 minutes

Step 3.5: Verify EKS Cluster
aws eks --region us-west-2 update-kubeconfig --name opentelemetry-demo-cluster
kubectl get nodes
kubectl cluster-info

Expected output:
NAME                                       STATUS   ROLES    AGE   VERSION
ip-10-0-1-123.us-west-2.compute.internal   Ready    <none>   5m    v1.28.0
ip-10-0-2-456.us-west-2.compute.internal   Ready    <none>   5m    v1.28.0
ip-10-0-3-789.us-west-2.compute.internal   Ready    <none>   5m    v1.28.0
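
You can also confirm the cluster from the AWS side. These are standard EKS CLI calls; the cluster name matches the Terraform variable defined earlier:

aws eks describe-cluster \
  --name opentelemetry-demo-cluster \
  --region us-west-2 \
  --query 'cluster.status'          # expect "ACTIVE"

aws eks list-nodegroups \
  --cluster-name opentelemetry-demo-cluster \
  --region us-west-2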

Phase 4: Container Orchestration with Kubernetes

Step 4.1: Understanding Kubernetes Deployment
Example frontend deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: opentelemetry-demo-frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: opentelemetry-demo-frontend
  template:
    metadata:
      labels:
        app.kubernetes.io/name: opentelemetry-demo-frontend
    spec:
      serviceAccountName: opentelemetry-demo
      containers:
        - name: frontend
          image: ghcr.io/open-telemetry/demo:latest-frontend
          ports:
            - containerPort: 8080
          env:
            - name: FRONTEND_ADDR
              value: ":8080"
            - name: AD_SERVICE_ADDR
              value: "opentelemetry-demo-adservice:8080"

Step 4.2: Deploy Application to Kubernetes
kubectl create namespace opentelemetry-demo
kubectl apply -f kubernetes/serviceaccount.yaml -n opentelemetry-demo
kubectl apply -f kubernetes/complete-deploy.yaml -n opentelemetry-demo
kubectl get deployments -n opentelemetry-demo
kubectl get pods -n opentelemetry-demo
kubectl get services -n opentelemetry-demo
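
Pods can take a few minutes to pull images and start. Instead of repeatedly running kubectl get pods, you can wait on the rollout explicitly; the deployment name below matches the manifest shown in Step 4.1:

kubectl rollout status deployment/opentelemetry-demo-frontend -n opentelemetry-demo

# Or wait until every pod in the namespace is Ready (times out after 5 minutes)
kubectl wait --for=condition=Ready pods --all -n opentelemetry-demo --timeout=300s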

Step 4.3: Set Up Ingress Controller
Install AWS Load Balancer Controller:
kubectl apply -k "github.com/aws/eks-charts/stable/aws-load-balancer-controller//crds?ref=master"

# iam_policy.json is the controller's IAM policy document; it can be downloaded
# from the kubernetes-sigs/aws-load-balancer-controller repository (docs/install/iam_policy.json)
aws iam create-policy \
--policy-name AWSLoadBalancerControllerIAMPolicy \
--policy-document file://iam_policy.json

eksctl create iamserviceaccount \
--cluster=opentelemetry-demo-cluster \
--namespace=kube-system \
--name=aws-load-balancer-controller \
--role-name=AmazonEKSLoadBalancerControllerRole \
--attach-policy-arn=arn:aws:iam::YOUR-ACCOUNT-ID:policy/AWSLoadBalancerControllerIAMPolicy \
--approve
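
The commands above install the CRDs, create the IAM policy, and set up the service account, but the controller itself still has to be deployed. It is typically installed with Helm; a sketch assuming Helm 3 is available and reusing the service account created by eksctl:

helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=opentelemetry-demo-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller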

Step 4.4: Configure Application Ingress
Create an ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: opentelemetry-demo-ingress
  namespace: opentelemetry-demo
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - host: your-domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: opentelemetry-demo-frontendproxy
                port:
                  number: 8080

Apply the ingress:
kubectl apply -f ingress.yaml

Step 4.5: Verify Deployment
kubectl get all -n opentelemetry-demo
kubectl get ingress -n opentelemetry-demo
kubectl logs -l app.kubernetes.io/name=opentelemetry-demo-frontend -n opentelemetry-demo

Phase 5: Domain Setup with Route53

Step 5.1: Register Domain (Optional)
Options:

  • Register a domain through AWS Route 53
  • Use an existing domain
  • Use the load balancer DNS name for testing

Step 5.2: Create Hosted Zone
aws route53 create-hosted-zone \
--name your-domain.com \
--caller-reference $(date +%s)

Step 5.3: Update Name Servers

  1. Get the name servers from the hosted zone
  2. Update your domain registrar to use the Route53 name servers
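
The name servers can be read directly from the hosted zone; the zone ID is the one returned by the create-hosted-zone call in Step 5.2:

aws route53 get-hosted-zone \
  --id ZXXXXXXXXXXXXX \
  --query 'DelegationSet.NameServers'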

Step 5.4: Create DNS Records
# Get the load balancer hostname created for the ingress
kubectl get ingress opentelemetry-demo-ingress -n opentelemetry-demo -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'

# Create a CNAME record pointing your domain at the load balancer
aws route53 change-resource-record-sets \
--hosted-zone-id ZXXXXXXXXXXXXX \
--change-batch file://dns-record.json

Example dns-record.json:
{
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "opentelemetry-demo.your-domain.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [
          {
            "Value": "k8s-opentel-opentel-xxxxxxxxx-yyyyyyyyyy.us-west-2.elb.amazonaws.com"
          }
        ]
      }
    }
  ]
}
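
Once the change has propagated (the change status moves from PENDING to INSYNC), verify that the record resolves. The change ID placeholder below comes from the change-resource-record-sets output, and dig is assumed to be installed:

aws route53 get-change --id CXXXXXXXXXXXX --query 'ChangeInfo.Status'
dig +short opentelemetry-demo.your-domain.com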

Troubleshooting

Common Issues and Solutions
Issue 1: EKS Nodes Not Ready
kubectl describe node NODE_NAME
Common causes:

  1. Insufficient IAM permissions
  2. Security group issues
  3. Subnet configuration problems

Issue 2: Pods in Pending State
kubectl describe pod POD_NAME -n opentelemetry-demo
Common causes:

  1. Insufficient resources
  2. Image pull errors
  3. Volume mounting issues

Issue 3: Service Not Accessible
kubectl get svc,endpoints -n opentelemetry-demo
kubectl describe ingress opentelemetry-demo-ingress -n opentelemetry-demo

Issue 4: Terraform State Issues
terraform refresh
terraform import aws_instance.example i-1234567890abcdef0
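
Before refreshing or importing, it often helps to inspect what Terraform already tracks. These commands only read the state; the resource address in the second line is a placeholder to replace with an entry from the list:

terraform state list               # every resource tracked in the state file
terraform state show ADDRESS       # replace ADDRESS with an entry from 'terraform state list'
terraform plan                     # reports drift between the state and real infrastructure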

Debugging Commands
kubectl get all --all-namespaces
kubectl get events --sort-by=.metadata.creationTimestamp
kubectl logs -f POD_NAME -n opentelemetry-demo
kubectl exec -it POD_NAME -n opentelemetry-demo -- /bin/bash
kubectl top nodes
kubectl top pods -n opentelemetry-demo

Conclusion

This project demonstrates a complete DevOps implementation using modern cloud-native technologies. Over the course of this guide, we:

  • Deployed a complex microservices application locally
  • Set up AWS infrastructure using Terraform
  • Orchestrated containers with Kubernetes on EKS
  • Implemented monitoring and observability
  • Configured DNS and load balancing

Next Steps

  • Enhance security with network policies and RBAC
  • Implement autoscaling with HPA and cluster autoscaling
  • Optimize costs with resource requests/limits and spot instances
  • Set up disaster recovery procedures

This documentation serves as a complete guide for building and deploying the OpenTelemetry Astronomy Shop using modern DevOps practices. For questions or improvements, refer to the official OpenTelemetry documentation and AWS best practices guides.
