Kubernetes has become the de facto standard for container orchestration, and many organizations today run their workloads on managed Kubernetes platforms. One of the most popular managed Kubernetes services is Amazon Elastic Kubernetes Service (EKS).
In this hands-on project, I built a complete end-to-end Kubernetes deployment on AWS EKS by deploying the classic 2048 game application.
The goal of this project was simple:
- Containerize an application
- Deploy it on a Kubernetes cluster
- Expose it to the internet
- Understand how Kubernetes workloads run in a real cloud environment
This project helped me understand how containerized applications move from a simple Docker image to a live application running on a Kubernetes cluster in AWS.
If you'd like to explore the full project and code, you can check it out here:
👉 GitHub Repository
https://shorturl.at/LxtaW
Project Architecture Overview
The workflow of this project follows a typical Kubernetes deployment lifecycle:
- Containerize the application using Docker
- Create an Amazon EKS cluster
- Configure IAM roles and worker nodes
- Deploy the application using Kubernetes manifests
- Expose the application using a LoadBalancer service
- Access the application via the internet
By the end of this process, the 2048 game becomes accessible through an AWS LoadBalancer created automatically by Kubernetes.
Prerequisites
Before starting the project, a few essential tools are required.
kubectl
kubectl is the command-line tool used to interact with Kubernetes clusters. It allows you to deploy applications, inspect resources, and manage cluster operations.
eksctl
eksctl simplifies the process of creating and managing Amazon EKS clusters. Instead of manually configuring dozens of AWS resources, eksctl automates most of the work.
AWS CLI
The AWS CLI allows us to interact with AWS services directly from the terminal. In this project, it is used to authenticate with the EKS cluster and update the kubeconfig file.
Once these tools are installed and configured, we can start building the Kubernetes environment.
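A quick way to confirm all three tools are installed and on your PATH (exact version output will vary):

```shell
# Confirm each CLI is installed and reachable
kubectl version --client
eksctl version
aws --version
```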
Step 1 — Creating an Amazon EKS Cluster
The first step is to create a Kubernetes cluster on AWS using Amazon EKS.
An EKS cluster consists of two main components:
- Control Plane (managed by AWS)
- Worker Nodes (EC2 instances where pods run)
While creating the cluster, a few configurations are required:
- Select the default VPC
- Choose 2–3 subnets
- Configure security groups
- Enable public cluster endpoint access
The creation process usually takes around 10–12 minutes.
Once the cluster status becomes Active, we can move to the next step.
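For reference, the same cluster could also be created from the terminal with eksctl instead of the AWS console. The cluster name and region below are placeholders matching the commands used later in this post:

```shell
# Sketch: create only the control plane; worker nodes are added in Step 3
# "my-cluster" and "us-east-1" are placeholder values
eksctl create cluster \
  --name my-cluster \
  --region us-east-1 \
  --without-nodegroup
```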
Step 2 — Creating IAM Roles
AWS services rely heavily on IAM roles and permissions.
Two roles were created in this project:
EKS Cluster Role
This role allows the Kubernetes control plane to interact with other AWS services.
Policy attached:
AmazonEKSClusterPolicy
Node Group Role
Worker nodes also need permissions to communicate with AWS services.
Policies attached:
AmazonEKSWorkerNodePolicy
AmazonEC2ContainerRegistryReadOnly
AmazonEKS_CNI_Policy
These permissions allow nodes to:
- Pull container images
- Communicate with the cluster
- Manage networking through the CNI plugin
Step 3 — Adding Worker Nodes
Once the cluster is created, we need worker nodes where Kubernetes pods will run.
These nodes are added through Node Groups.
Configuration used:
- AMI: Amazon Linux 2
- Desired nodes: 1
- Security group ports: 22, 80, 8080
- SSH access enabled
After a few minutes, the node group becomes active and ready to run workloads.
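The equivalent node group can also be created with eksctl. The node group name and instance type below are assumptions (the console setup did not specify an instance type); the node count mirrors the configuration above:

```shell
# Sketch: add a managed node group to the existing cluster
eksctl create nodegroup \
  --cluster my-cluster \
  --region us-east-1 \
  --name game-nodes \
  --node-type t3.medium \
  --nodes 1 \
  --ssh-access
```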
Step 4 — Authenticating with the Cluster
Next, we configure local access to the EKS cluster.
Using AWS CLI, we update the kubeconfig file.
aws eks update-kubeconfig --region us-east-1 --name my-cluster
This command stores the cluster credentials locally so that kubectl can communicate with the Kubernetes API server.
To confirm the connection:
kubectl get nodes
If the nodes appear, the cluster is successfully configured.
Step 5 — Deploying the Application Pod
Now comes the interesting part — deploying the 2048 game application.
A Kubernetes Pod definition was created.
apiVersion: v1
kind: Pod
metadata:
  name: 2048-pod
  labels:
    app: 2048-ws
spec:
  containers:
    - name: 2048-container
      image: blackicebird/2048
      ports:
        - containerPort: 80
This configuration defines:
- The pod name
- Application label
- Docker image
- Container port
Apply the configuration using:
kubectl apply -f 2048-pod.yaml
Verify the pod status:
kubectl get pods
Once the pod is in the Running state, the application is successfully deployed inside the Kubernetes cluster.
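Before exposing the application publicly, an optional sanity check is to port-forward the pod and test it locally:

```shell
# Forward local port 8080 to the pod's port 80,
# then open http://localhost:8080 in a browser
kubectl port-forward pod/2048-pod 8080:80
```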
Step 6 — Exposing the Application
Although the pod is running, it is not yet accessible from outside the cluster.
To solve this, we create a Kubernetes Service.
apiVersion: v1
kind: Service
metadata:
  name: mygame-svc
spec:
  selector:
    app: 2048-ws
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer
This Service performs two important functions:
- Routes traffic to the application pod
- Triggers AWS to provision an Elastic Load Balancer
Deploy the service using:
kubectl apply -f mygame-svc.yaml
Check the service details:
kubectl describe svc mygame-svc
Kubernetes will automatically provision a public LoadBalancer.
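To print just the public hostname once AWS finishes provisioning the load balancer (this can take a minute or two):

```shell
# Print only the LoadBalancer's public DNS name
kubectl get svc mygame-svc \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```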
Step 7 — Accessing the Application
After the LoadBalancer is created, AWS generates a public DNS endpoint.
This DNS can be accessed from a browser.
Once opened, the 2048 game interface appears, and the application becomes publicly accessible.
At this point, the Kubernetes deployment is fully functional.
Scaling the Application
One of the biggest advantages of Kubernetes is horizontal scaling.
If traffic increases, additional replicas can be created. One caveat: kubectl scale targets Deployments (or other controllers), not standalone pods like the one created above, so the application would first need to run under a Deployment.
Example:
kubectl scale deployment <deployment-name> --replicas=3
The Service then distributes incoming traffic across the replicas.
This ensures high availability and improved performance.
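Since the manifest in Step 5 creates a standalone pod, here is a minimal Deployment sketch that reuses the same image and label, which kubectl scale (and rolling updates) can then target. The name game-2048 is a placeholder:

```yaml
# Sketch: a Deployment equivalent of the pod above
apiVersion: apps/v1
kind: Deployment
metadata:
  name: game-2048
spec:
  replicas: 3
  selector:
    matchLabels:
      app: 2048-ws          # matches the existing Service selector
  template:
    metadata:
      labels:
        app: 2048-ws
    spec:
      containers:
        - name: 2048-container
          image: blackicebird/2048
          ports:
            - containerPort: 80
```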
What I Learned from This Project
Working on this project helped me understand several important DevOps concepts:
Kubernetes Workloads
How pods run containerized applications inside a cluster.
Managed Kubernetes
How Amazon EKS simplifies cluster management by handling the control plane.
Networking in Kubernetes
How services and load balancers expose applications externally.
Cloud Infrastructure
How AWS integrates networking, compute, and container orchestration together.
Possible Improvements
Although this project covers the fundamentals, there are many ways to enhance it.
Some improvements could include:
- Using Deployments instead of standalone pods
- Implementing Ingress controllers
- Adding CI/CD pipelines
- Monitoring with Prometheus and Grafana
- Infrastructure automation using Terraform
These additions would make the project closer to a production-grade Kubernetes deployment.
Final Thoughts
Kubernetes can seem overwhelming at first, but projects like this make it much easier to understand how everything fits together.
By deploying a simple application like the 2048 game, we can clearly see how:
- containers run inside pods
- pods run on worker nodes
- services expose applications
- load balancers provide external access
If you are learning DevOps, Kubernetes, or Cloud Engineering, building projects like this is one of the best ways to gain practical experience.
Project Repository
If you want to explore the code, YAML manifests, and setup steps, check out the complete project here:
👉 https://shorturl.at/LxtaW