DEV Community

Uendi Hoxha


Container Orchestration with Kubernetes on AWS EKS

As we transition to microservices architectures, container orchestration becomes essential for managing complex application environments. Kubernetes is the leading open-source platform for automating the deployment, scaling, and operation of containerized applications. Amazon Elastic Kubernetes Service (EKS) simplifies Kubernetes by providing a managed control plane that automates much of the setup and maintenance. In this article, we'll walk through the technical details of setting up, managing, and scaling Kubernetes applications on AWS EKS.

Setting Up Kubernetes on AWS EKS
Let’s walk through the steps of setting up a Kubernetes cluster on EKS.

1. Install AWS CLI and eksctl
First, ensure that you have the necessary tools installed:

  • AWS CLI: To interact with AWS services.
  • eksctl: A command-line tool for creating and managing EKS clusters.
Install AWS CLI (macOS installer shown; see the AWS docs for Linux and Windows)
$ curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
$ sudo installer -pkg AWSCLIV2.pkg -target /
Install eksctl
$ curl --silent --location "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
$ sudo mv /tmp/eksctl /usr/local/bin

2. Create an EKS Cluster
To create a Kubernetes cluster, use eksctl. This command will create a control plane and worker nodes (EC2 instances) for your cluster.

# Create an EKS Cluster
$ eksctl create cluster \
  --name my-eks-cluster \
  --version 1.25 \
  --region us-east-2 \
  --nodegroup-name my-nodes \
  --node-type t3.medium \
  --nodes 3 \
  --nodes-min 1 \
  --nodes-max 4 \
  --managed

This command will create a managed Kubernetes cluster with 3 EC2 nodes of type t3.medium in a managed node group bounded between 1 and 4 nodes. Note that the min/max flags only set the node group's limits; automatic node scaling in response to load additionally requires the Cluster Autoscaler (or Karpenter) to be installed in the cluster.
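As an alternative to the long flag list, eksctl also accepts a declarative config file. A sketch equivalent to the command above (field names follow eksctl's ClusterConfig schema) might look like:

```yaml
# cluster.yaml -- apply with: eksctl create cluster -f cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: my-eks-cluster
  region: us-east-2
  version: "1.25"

managedNodeGroups:
  - name: my-nodes
    instanceType: t3.medium
    desiredCapacity: 3
    minSize: 1
    maxSize: 4
```

Keeping the cluster definition in a file makes it easy to version-control and reproduce the environment.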

3. Configure kubectl to Access Your EKS Cluster
After the cluster is created, you’ll need to configure kubectl (Kubernetes CLI) to interact with it.

# Update kubeconfig with EKS cluster details
$ aws eks --region us-east-2 update-kubeconfig --name my-eks-cluster

Deploying Applications on EKS
Now that your Kubernetes cluster is running, let’s deploy a simple containerized application.

1. Create a Deployment
A Deployment is a Kubernetes resource that manages a set of identical pods. Here, we’ll deploy a simple Nginx web server.

# nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

Apply the deployment:
$ kubectl apply -f nginx-deployment.yaml

This will create 2 replicas of the Nginx server.

2. Expose the Deployment with a Service
To make the Nginx application accessible from outside the cluster, you need to create a Service.

# nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer

Apply the service:
$ kubectl apply -f nginx-service.yaml

This will create an AWS Elastic Load Balancer that routes traffic to your Nginx pods. You can find the load balancer's address in the EXTERNAL-IP column of the service output (for an ELB this is a DNS hostname rather than an IP address):
$ kubectl get services

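By default, a Service of type LoadBalancer on EKS provisions a Classic Load Balancer. If you would rather get a Network Load Balancer, a commonly used annotation (assuming the in-tree AWS cloud provider or the AWS Load Balancer Controller is handling the Service) is:

```yaml
# nginx-service.yaml (metadata excerpt) -- request an NLB instead of a Classic ELB
metadata:
  name: nginx-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
```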

Managing Scaling with EKS
Kubernetes on EKS can scale workloads horizontally based on CPU and memory utilization via the Horizontal Pod Autoscaler (HPA). Let's configure an HPA for the Nginx deployment.

1. Enable Metrics Server
First, ensure that the Metrics Server is installed. This is the cluster add-on that supplies the resource metrics the HPA relies on.

$ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

2. Create Horizontal Pod Autoscaler
Next, create an HPA for the Nginx deployment:

$ kubectl autoscale deployment nginx-deployment --cpu-percent=50 --min=2 --max=10

This command will autoscale the Nginx deployment, targeting an average CPU utilization of 50% of each pod's CPU request, and Kubernetes will automatically scale the deployment between 2 and 10 replicas based on load.
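One caveat: the HPA computes utilization as a percentage of each container's CPU request, so the deployment must declare one, or the autoscaler will report the metric as unknown and never scale. A sketch of the container spec from nginx-deployment.yaml with a request added (the 100m/250m figures are arbitrary examples):

```yaml
# nginx-deployment.yaml (pod template excerpt) -- CPU request required for the HPA
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 100m   # the HPA's 50% target corresponds to ~50m average usage per pod
          limits:
            cpu: 250m
```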


Securing AWS EKS with IAM and RBAC
1. IAM Roles for Service Accounts (IRSA)
Amazon EKS integrates tightly with AWS IAM to control access to resources. With IAM Roles for Service Accounts (IRSA), you can give specific permissions to pods by associating IAM roles with Kubernetes service accounts.

Here’s how you would set up IRSA for an application that needs to access S3:
Step 1: Create an IAM role with the required S3 permissions.
Step 2: Annotate the Kubernetes service account with the IAM role.

The single eksctl command below performs both steps. It requires the cluster to have an IAM OIDC provider associated, which you can set up with eksctl utils associate-iam-oidc-provider if you haven't already.

$ eksctl create iamserviceaccount \
  --name my-app-service-account \
  --namespace default \
  --cluster my-eks-cluster \
  --attach-policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess \
  --approve
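To actually pick up the role, a pod just needs to run under that service account. A minimal sketch (the pod name and image here are illustrative) might look like:

```yaml
# s3-reader-pod.yaml -- pod that inherits the IAM role via IRSA
apiVersion: v1
kind: Pod
metadata:
  name: s3-reader
  namespace: default
spec:
  serviceAccountName: my-app-service-account
  containers:
  - name: aws-cli
    image: amazon/aws-cli:latest
    command: ["aws", "s3", "ls"]
```

The EKS pod identity webhook injects the role's credentials into the container automatically, so no static AWS keys are needed.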
