How to deploy and manage your workload on EKS in a one-stop shop

Amazon Web Services (AWS) has rolled out EKS Auto Mode for its Elastic Kubernetes Service (EKS), built on the open source project Karpenter. Auto Mode extends management beyond the Kubernetes cluster itself, provides just-in-time autoscaling of the cluster based on your deployed workloads, and takes a large burden off administrators and DevOps personnel while letting developers focus on the application. I recently explored this service and was impressed by the functionality and features that let you manage your containerized workloads on this platform in a one-stop shop. This article walks you through setting up the cluster and deploying a sample application onto it.

Create the EKS Auto Mode cluster

First and foremost, you need an active AWS account with admin rights, with which you can create a user with the relevant IAM permissions and security credentials (i.e., an Access Key ID and Secret Access Key).

Second, choose a method to create the EKS Auto Mode cluster (the AWS CLI, eksctl, Terraform, etc.). I chose the Terraform EKS module (https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest) and leveraged GitHub with Terraform Cloud (https://app.terraform.io) to deploy it in a continuous manner, which greatly simplifies provisioning all of the required resources (EKS cluster, IAM roles, IAM policies, service accounts, etc.) and reduces manual errors. The Terraform Cloud UI looks like the picture below.

Image description
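
If you prefer to run Terraform locally instead of through Terraform Cloud, the standard workflow looks roughly like the sketch below (the repository URL and folder layout are placeholders; check the EKS module documentation for the exact inputs that enable Auto Mode):

# Clone your infrastructure repository and move into the environment folder
git clone https://github.com/<your-org>/<your-eks-repo>.git
cd <your-eks-repo>

# Initialise the providers and the terraform-aws-modules/eks module
terraform init

# Review the resources that will be created (cluster, IAM roles, policies, etc.)
terraform plan -out=tfplan

# Create the EKS Auto Mode cluster and its supporting resources
terraform apply tfplan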

Third, if everything is OK, the EKS Auto Mode cluster should be created successfully and displayed in your AWS Management Console as shown below.

Image description
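
You can also confirm the state of the cluster from the command line (assuming the AWS CLI is configured with the right credentials and region):

# The status should report ACTIVE once provisioning has finished
aws eks describe-cluster \
  --name "<your cluster name>" \
  --query 'cluster.status'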

Next, you need to update the kubeconfig on your laptop with the command below so that you can get access to your cluster.

aws eks update-kubeconfig --name "<your cluster name>" --region "<your region>"

after which you can list the built-in node pools of the cluster with the command below:

kubectl get nodepools
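
Note that with Auto Mode, worker nodes are provisioned just in time, so the node list may be empty right after the cluster is created; nodes show up once workloads that need capacity are scheduled:

# Nodes are created on demand by Auto Mode; watch them appear as pods get scheduled
kubectl get nodes --watch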

Deploy your sample workload onto the EKS cluster

After creating the cluster, you can deploy a sample workload onto it to see the effect. I chose to deploy the 2048 game, which gives a direct visual result.

Create a file named 01-namespace.yaml:

apiVersion: v1
kind: Namespace
metadata:
  name: game-2048

Apply the namespace configuration:
kubectl apply -f 01-namespace.yaml

Create a file named 02-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: game-2048
  name: deployment-2048
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: app-2048
  replicas: 5
  template:
    metadata:
      labels:
        app.kubernetes.io/name: app-2048
    spec:
      containers:
        - image: public.ecr.aws/l6m2t8p7/docker-2048:latest
          imagePullPolicy: Always
          name: app-2048
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: "0.5"

Apply the deployment:
kubectl apply -f 02-deployment.yaml

Create a file named 03-service.yaml:

apiVersion: v1
kind: Service
metadata:
  namespace: game-2048
  name: service-2048
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: NodePort
  selector:
    app.kubernetes.io/name: app-2048

Apply the service:
kubectl apply -f 03-service.yaml

Create a file named 04-ingressclass.yaml:

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    app.kubernetes.io/name: LoadBalancerController
  name: alb
spec:
  controller: eks.amazonaws.com/alb

Note:
If the cluster wasn't created by eksctl, you need to manually add the tag below to the public subnets of the VPC; otherwise the load balancer creation will fail.
kubernetes.io/role/elb: 1
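
For example, you can apply the tag with the AWS CLI (the subnet IDs below are placeholders for your own public subnets):

# Tag the public subnets so the load balancer controller can discover them
aws ec2 create-tags \
  --resources subnet-0123456789abcdef0 subnet-0fedcba9876543210 \
  --tags Key=kubernetes.io/role/elb,Value=1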

Then create the Ingress resource. Create a file named 05-ingress.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: game-2048
  name: ingress-2048
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-2048
                port:
                  number: 80

Apply the ingress configurations:

kubectl apply -f 04-ingressclass.yaml
kubectl apply -f 05-ingress.yaml

Verify the Deployment

kubectl get pods -n game-2048
kubectl get svc -n game-2048
kubectl get ingress -n game-2048

The ADDRESS field in the ingress output will show your ALB endpoint. Wait 2-3 minutes for the ALB to provision and register all targets.
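
Once the address appears, you can read the ALB hostname straight from the Ingress status and probe it from the command line (a quick sketch using the resource names from above):

# Read the ALB hostname from the Ingress status
ALB=$(kubectl get ingress ingress-2048 -n game-2048 \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

# Expect an HTTP 200 response once the targets are registered
curl -I "http://${ALB}"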

If everything works fine, you should be able to access the deployed app as shown below.

Image description

Monitoring your workload

To monitor your workload with Amazon CloudWatch, you need to install the CloudWatch Observability add-on in the cluster.

First, go to the "Observability" tab of the EKS cluster in the AWS Management Console.

Image description

Second, click the "Manage CloudWatch Observability add-ons" button and follow the instructions to install the add-on.
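
Alternatively, the add-on can be installed from the command line; a sketch with the AWS CLI, assuming the IAM permissions the add-on needs (for example, CloudWatchAgentServerPolicy on the role the agent uses) are already in place:

# Install the CloudWatch Observability add-on
aws eks create-addon \
  --cluster-name "<your cluster name>" \
  --addon-name amazon-cloudwatch-observability

# Check that the add-on reaches the ACTIVE status
aws eks describe-addon \
  --cluster-name "<your cluster name>" \
  --addon-name amazon-cloudwatch-observability \
  --query 'addon.status'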

If everything is OK, you should see the add-on under the "Add-ons" tab as shown below.

Image description

Now you can go to Container Insights in CloudWatch to monitor your workload in a visual manner.

Image description
