What We're Building
This project demonstrates a complete Kubernetes setup that includes:
- Multi-node EKS cluster with high availability across multiple environments
- Environment isolation using separate namespaces (dev, staging, prod)
- Persistent storage with dynamic volume provisioning using AWS EBS
- MongoDB replica set for database high availability
- Zero-downtime deployments using rolling update strategies
- Comprehensive monitoring with Prometheus and Grafana
- Load balancing and ingress management with NGINX
Architecture Overview
Our setup follows a 3-tier architecture pattern on EKS, providing separation of concerns between the presentation layer (frontend), business logic layer (backend), and data layer (MongoDB). Each environment operates in its own namespace, ensuring complete isolation while sharing the underlying cluster resources efficiently.
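As a concrete example of that isolation, the per-environment namespaces applied later (namespace.yml) can be as simple as the following sketch; the label shown is an illustrative assumption, not something this project requires:
apiVersion: v1
kind: Namespace
metadata:
  name: dev              # repeated for staging and prod
  labels:
    environment: dev     # illustrative label for filtering and policies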
Prerequisites and Initial Setup
Before diving into the cluster creation, you'll need an AWS account and basic familiarity with Kubernetes concepts. Don't worry if you're new to EKS – this guide is designed to be beginner-friendly while covering advanced concepts.
Setting Up the Bastion Host
The bastion host serves as our management workstation for the EKS cluster (the cluster's actual control plane is managed by AWS). It's a separate EC2 instance that won't be part of the cluster itself but will have all the necessary tools to interact with it.
Why use a bastion host?
- Centralized management point for cluster operations
- Secure access to private cluster resources
- Consistent environment for all cluster administrators
Launch a t2.micro EC2 instance (Ubuntu is assumed here, since the commands use apt) and update it:
sudo apt update
Installing Essential Tools
AWS CLI Configuration:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
sudo apt install unzip
unzip awscliv2.zip
sudo ./aws/install
After installation, configure AWS CLI with your IAM user credentials. Make sure your IAM user has appropriate permissions for EKS operations.
kubectl Installation:
The kubectl client should be within one minor version of your cluster (1.30 in this guide). The URL below pins an older build; swap the version segment for one matching your cluster (the EKS documentation lists the current download URLs):
curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin
kubectl version --client
eksctl Installation:
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
eksctl version
Helm Installation:
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
Creating the EKS Cluster
Cluster Creation Strategy
We're using a phased approach to cluster creation, which provides better control and follows AWS best practices:
1. Create the control plane first
2. Associate IAM OIDC provider for service accounts
3. Create worker node groups
4. Configure persistent storage
Step 1: Create the EKS Control Plane
eksctl create cluster --name=my-cluster \
--region=ap-south-1 \
--version=1.30 \
--without-nodegroup
This creates the managed control plane without worker nodes, giving us flexibility in node group configuration.
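If you prefer a declarative workflow, eksctl also accepts a config file. A rough equivalent of the flags above might look like this (a sketch for illustration, not a file from this project's repo):
# cluster.yaml -- rough declarative equivalent of the command above
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster
  region: ap-south-1
  version: "1.30"
# no nodeGroups section here, mirroring --without-nodegroup
You would then run eksctl create cluster -f cluster.yaml instead of passing flags.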
Step 2: Associate IAM OIDC Provider
eksctl utils associate-iam-oidc-provider \
--region ap-south-1 \
--cluster my-cluster \
--approve
This enables IAM roles for service accounts (IRSA), a crucial security feature that allows pods to assume IAM roles.
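With the OIDC provider associated, a pod assumes an IAM role through an annotated service account. A minimal sketch of the pattern (the names and role ARN are placeholders, not part of this project):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-service-account          # placeholder name for illustration
  namespace: dev
  annotations:
    # IRSA: pods using this service account can assume the referenced IAM role
    eks.amazonaws.com/role-arn: arn:aws:iam::<account-id>:role/<role-name>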
Step 3: Create Node Group
eksctl create nodegroup --cluster=my-cluster \
--region=ap-south-1 \
--name=my-cluster \
--node-type=t2.medium \
--nodes=2 \
--nodes-min=2 \
--nodes-max=2 \
--node-volume-size=25 \
--ssh-access \
--ssh-public-key=ec2_keypair
Step 4: Update kubectl Context
aws eks update-kubeconfig --region ap-south-1 --name my-cluster
Implementing Persistent Storage
One of the critical aspects of running stateful applications like databases in Kubernetes is persistent storage. We're using AWS EBS CSI Driver for dynamic volume provisioning.
Installing AWS EBS CSI Driver
Add Helm Repository:
helm repo add aws-ebs-csi-driver https://kubernetes-sigs.github.io/aws-ebs-csi-driver
helm repo update
Configure IAM Permissions:
First, identify your node group's IAM role:
aws eks describe-nodegroup \
--cluster-name my-cluster \
--nodegroup-name my-cluster \
--query "nodegroup.nodeRole" --output text
Then attach the required policy:
aws iam attach-role-policy \
--role-name <your-node-role-name> \
--policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy
Install the Driver:
helm install aws-ebs-csi-driver aws-ebs-csi-driver/aws-ebs-csi-driver \
--namespace kube-system
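With the driver installed, dynamic provisioning works through a StorageClass that points at the EBS CSI provisioner. The storageclass.yml applied later likely resembles this sketch (the name and parameters are assumptions):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc                             # assumed name
provisioner: ebs.csi.aws.com               # the EBS CSI driver's provisioner
volumeBindingMode: WaitForFirstConsumer    # delay binding until a pod is scheduled
reclaimPolicy: Delete
parameters:
  type: gp3                                # EBS volume type
PersistentVolumeClaims that reference this StorageClass get an EBS volume created on demand in the same availability zone as the consuming pod.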
Application Deployment Strategies
This project supports two deployment approaches, each serving different use cases:
1. Single Environment Deployment (Without Helm)
Perfect for development or testing scenarios where you need a single environment:
kubectl apply -f namespace.yml
kubectl apply -f storageclass.yml
kubectl apply -f secret-mongo.yml
kubectl apply -f configmap.yml
kubectl apply -f headless-svc.yml
kubectl apply -f mongo-service.yml
kubectl apply -f mongo-deployment.yml
kubectl apply -f backend-statefullset.yml
kubectl apply -f backend-service.yml
kubectl apply -f frontend-deployment.yml
kubectl apply -f frontend-service.yml
kubectl apply -f ingress.yml
kubectl apply -f auto-scaler.yml
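The auto-scaler.yml in that list is a HorizontalPodAutoscaler; a rough sketch of what such a manifest looks like (target names and thresholds are assumptions):
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: backend-hpa                  # assumed name
  namespace: dev
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment                 # or StatefulSet, depending on the backend workload
    name: backend
  minReplicas: 2
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70     # scale out when average CPU exceeds 70%
Note that resource-based autoscaling also requires the Kubernetes metrics server to be running in the cluster.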
2. Multi-Environment Deployment (With Helm)
Ideal for production scenarios where you need separate dev, staging, and production environments:
# Development Environment
helm install three-tier-dev . -f values-dev.yml -n dev --create-namespace
# Staging Environment
helm upgrade --install three-tier-stage ./ \
-f values-stage.yml \
-n staging \
--create-namespace
# Production Environment
helm upgrade --install three-tier-prod ./ \
-f values-prod.yml \
-n prod \
--create-namespace
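The per-environment values files are what differentiate the releases. Without reproducing the chart here, a values-dev.yml typically overrides things like replica counts, image tags, and hostnames; all keys below are hypothetical, not taken from this chart:
# values-dev.yml -- hypothetical structure for illustration
frontend:
  replicas: 1
  image:
    tag: dev
backend:
  replicas: 1
mongodb:
  storageSize: 5Gi
ingress:
  host: dev.your-app-domain.com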
Database Configuration: MongoDB Replica Set
Running MongoDB as a replica set in Kubernetes provides high availability and data redundancy. Here's how to initialize it:
Access MongoDB Primary:
kubectl exec -it mongo-0 -n dev -- mongosh
Initialize Replica Set:
rs.initiate({
_id: "rs0",
members: [
{ _id: 0, host: "mongo-0.mongo-headless.dev.svc.cluster.local:27017" },
{ _id: 1, host: "mongo-1.mongo-headless.dev.svc.cluster.local:27017" }
]
})
rs.status()
use admin
db.createUser({
user: "admin",
pwd: "mypassword",
roles: [ { role: "root", db: "admin" }]
})
This configuration ensures that your MongoDB deployment can handle node failures while maintaining data consistency.
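The stable per-pod DNS names used above (mongo-0.mongo-headless.dev.svc.cluster.local) come from the headless service in headless-svc.yml. A minimal sketch, with the names inferred from those hostnames and the selector label assumed:
apiVersion: v1
kind: Service
metadata:
  name: mongo-headless
  namespace: dev
spec:
  clusterIP: None          # headless: gives each StatefulSet pod its own DNS record
  selector:
    app: mongo             # assumed label on the MongoDB pods
  ports:
    - port: 27017
      targetPort: 27017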
Load Balancing and Ingress
Install NGINX Ingress Controller:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.1/deploy/static/provider/aws/deploy.yaml
Get the Load Balancer Address:
kubectl get svc -n ingress-nginx
On AWS the controller's LoadBalancer service exposes a DNS hostname rather than an IP; resolve it to get an address you can use locally:
nslookup <load-balancer-hostname>
Update your /etc/hosts file to point your domain to that address, then access your application at http://your-app-domain.com.
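The ingress.yml applied earlier is what ties that hostname to the frontend service. A sketch of what an NGINX ingress for this setup can look like (the hostnames and service names are assumptions):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress                  # assumed name
  namespace: dev
spec:
  ingressClassName: nginx            # route through the NGINX ingress controller
  rules:
    - host: your-app-domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-service   # assumed service name
                port:
                  number: 80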
Comprehensive Monitoring Setup
Monitoring is crucial for maintaining healthy production systems. We're using the popular kube-prometheus-stack which includes Prometheus, Grafana, and AlertManager.
Install Monitoring Stack:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
kubectl create ns monitoring
helm install monitoring prometheus-community/kube-prometheus-stack -n monitoring
Access Monitoring UIs:
Prometheus (for metrics and alerts):
kubectl port-forward service/prometheus-operated -n monitoring 9090:9090 --address 0.0.0.0
Grafana (for visualization; the default login for kube-prometheus-stack is admin / prom-operator):
kubectl port-forward service/monitoring-grafana -n monitoring 8080:80 --address 0.0.0.0
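The stack discovers scrape targets through ServiceMonitor objects, so the application tiers can be monitored too if they expose metrics. A hypothetical example (the label selector and port name are assumptions, and the backend Service would need a matching metrics port):
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: backend-metrics              # hypothetical
  namespace: monitoring
  labels:
    release: monitoring              # kube-prometheus-stack selects monitors by its release label
spec:
  namespaceSelector:
    matchNames: ["dev"]
  selector:
    matchLabels:
      app: backend                   # assumed label on the backend Service
  endpoints:
    - port: metrics                  # assumed named port exposing /metrics
      interval: 30s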
Key Benefits and Best Practices Implemented
High Availability:
- Multi-node cluster across availability zones
- MongoDB replica set for database redundancy
- Rolling update deployment strategy for zero-downtime updates (see the sketch at the end of this section)
Security:
- Namespace isolation between environments
- Secure secret management
Scalability:
- Horizontal Pod Autoscaler configuration
- Dynamic volume provisioning
- Resource quotas and limits
Observability:
- Comprehensive metrics collection with Prometheus
- Rich visualization with Grafana
- Alert management for proactive monitoring
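The zero-downtime rolling updates mentioned above come from the workload's update strategy. A sketch of the relevant Deployment fragment (the surge and unavailability values are assumptions):
# Fragment of a Deployment spec -- rolling update settings (values are assumptions)
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1            # at most one extra pod during the rollout
      maxUnavailable: 0      # never drop below the desired replica count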
Cleanup and Cost Management
When you're done experimenting, clean up resources to avoid unnecessary charges:
eksctl delete cluster --name=my-cluster --region=ap-south-1
This command deletes the entire cluster and its associated resources. Dynamically provisioned EBS volumes can outlive the cluster depending on their reclaim policy, so check the EC2 console for leftover volumes as well.
Conclusion
This project demonstrates a production-ready approach to running containerized applications on AWS EKS. By implementing proper environment isolation, persistent storage, monitoring, and high availability patterns, you're building a foundation that can scale with your organization's needs.
The combination of Kubernetes' orchestration capabilities with AWS's managed services provides a powerful platform for modern application deployment. Whether you're running a small development environment or a large-scale production system, these patterns and practices will serve you well.
Next Steps
To further enhance this setup, consider:
- Implementing GitOps with ArgoCD or Flux
- Adding service mesh capabilities with Istio
- Implementing backup strategies for persistent data
- Setting up CI/CD pipelines for automated deployments
- Adding security scanning and compliance tools
Remember, the cloud-native journey is iterative. Start with these solid foundations and gradually add complexity as your team's expertise and requirements grow.