Efficiently Set Up Infrastructure and Deploy to Kubernetes Using AWS EKS and Terraform
Introduction
Hey developers!
Welcome to the world of cloud computing and automation. In this blog, we're going to walk through an exciting real-world project: deploying a three-tier Todo List application on Amazon EKS (Elastic Kubernetes Service) using Terraform.
This project is perfect if you're looking to get hands-on experience with:
- Provisioning infrastructure using Terraform
- Working with Docker to containerize services
- Deploying applications on AWS using EKS, ECR, IAM, and more
We'll break it down step by step: from writing Terraform code to spinning up your Kubernetes cluster, containerizing the frontend, backend, and MongoDB services, and deploying everything seamlessly.
Whether you're new to DevOps or brushing up on your cloud skills, this guide will help you understand how everything connects in a modern microservices-based deployment.
So without further ado, let's get started and bring our infrastructure to life!
YouTube Demo
Prerequisites: What You'll Need Before We Start
Before we dive into the fun part (building and deploying), let's quickly make sure your system is ready for action. Here's what you'll need:
- An AWS Account: If you don't already have one, head over to aws.amazon.com and sign up. We'll be using AWS services like EKS (Elastic Kubernetes Service), ECR (Elastic Container Registry), and IAM (Identity and Access Management), so having an account is essential.
- Docker installed: We'll use Docker to containerize the three components of our app: the frontend, backend, and MongoDB database. You can download Docker Desktop from the official Docker website and install it like any other app.
- Terraform installed: Terraform will be our tool of choice for provisioning the infrastructure on AWS. You can download Terraform from terraform.io. Just install it; no need to configure anything yet.
That's it! Once you have these basics set up, you're good to go. Let's start building!
Step 1: Set Up AWS CLI and IAM User
Before Terraform can talk to AWS and spin up resources, we need to set up the AWS CLI and create an IAM user with the right permissions. Let's walk through it step by step.
Create an IAM User
- Log in to your AWS account as the root user (the one you used to sign up).
- In the AWS Management Console, go to IAM > Users and click on "Create User".
- Give the user a name (something like three-tier-user works great) and click Next.
- On the Set Permissions page, attach the policy named AdministratorAccess.
Important: We're giving full admin access here just to avoid permission issues during learning and experimentation. Never use this approach in production; always follow the Principle of Least Privilege!
- Click Review and then Create User. You're done with the IAM part!
Install AWS CLI (Ubuntu/Linux)
If you're using Ubuntu (amd64), you can install the AWS CLI by running these commands in your terminal:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
If you're using a different operating system (like macOS or Windows), just head over to the official install guide here:
AWS CLI Installation Guide
Generate Access Keys & Configure AWS CLI
- Go back to the IAM dashboard and click on your new user (three-tier-user).
- Under the Security Credentials tab, click on Create Access Key.
- Choose Command Line Interface (CLI) as the use case, agree to the terms, and proceed.
- Once the keys are generated, copy the Access Key ID and Secret Access Key (you'll need them right away!).
Now, go to your terminal and configure the AWS CLI:
aws configure
It will prompt you to enter:
- Access Key ID
- Secret Access Key
- Default region name: you can use us-east-1 for this demo
- Default output format: enter json
That's it! Your AWS CLI is now set up and ready to communicate with your AWS account.
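If you want a quick sanity check that the credentials work (an optional step, not part of the original walkthrough), ask AWS who you are:
# Optional: verify the CLI is talking to AWS with the new user's credentials
aws sts get-caller-identity
The output should include the ARN of your IAM user (for example, an ARN ending in user/three-tier-user).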
Step 2: Install Terraform and Set Up Remote Backend
Now that our AWS CLI is ready and configured, let's install Terraform, our Infrastructure as Code (IaC) tool of choice for this project. We'll also set up a secure and scalable way to store our Terraform state using an S3 bucket.
Installing Terraform on Ubuntu (amd64)
If you're using Ubuntu on an amd64 system, follow these commands to install Terraform:
sudo apt-get update && sudo apt-get install -y gnupg software-properties-common
wget -O- https://apt.releases.hashicorp.com/gpg | \
gpg --dearmor | \
sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg > /dev/null
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
https://apt.releases.hashicorp.com $(grep -oP '(?<=UBUNTU_CODENAME=).*' /etc/os-release || lsb_release -cs) main" | \
sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update
sudo apt-get install terraform
After this, you can verify the installation with:
terraform -v
If you're on a different operating system or architecture, follow the official installation guide here:
Terraform Install Guide
AWS CLI + Terraform: Working Together
Since we've already configured the AWS CLI, Terraform will automatically use the credentials (access key and secret key) stored by aws configure. This means you're ready to provision AWS resources securely and seamlessly.
Best Practice: Use a Remote Backend for Terraform State
Terraform tracks the state of your infrastructure in a file called terraform.tfstate. By default, it's stored locally, but that's risky and not scalable. So, we'll follow best practice and store this file remotely in an S3 bucket.
Here's how to create an S3 bucket to act as your Terraform backend:
Create an S3 Bucket for State Storage
aws s3api create-bucket \
--bucket pravesh-terra-state-bucket \
--region us-east-1
Enable Versioning for State History
aws s3api put-bucket-versioning \
--bucket pravesh-terra-state-bucket \
--versioning-configuration Status=Enabled
Enable Default Encryption
aws s3api put-bucket-encryption \
--bucket pravesh-terra-state-bucket \
--server-side-encryption-configuration '{
"Rules": [{
"ApplyServerSideEncryptionByDefault": {
"SSEAlgorithm": "AES256"
}
}]
}'
And that's it! You now have a secure, versioned, and encrypted S3 bucket ready to store your Terraform state files, a key step toward building production-grade infrastructure.
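If you'd like to double-check the bucket settings (optional, read-only calls), the following should report versioning as Enabled and show the AES256 encryption rule:
# Optional: confirm versioning and encryption on the state bucket
aws s3api get-bucket-versioning --bucket pravesh-terra-state-bucket
aws s3api get-bucket-encryption --bucket pravesh-terra-state-bucket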
Step 3: Clone the Project and Provision Infrastructure with Terraform
With all the groundwork done (AWS CLI set up, Terraform installed, and the backend ready), it's time to move on to the actual project!
The codebase for our three-tier application is available on my GitHub repository:
GitHub Repo: https://github.com/Pravesh-Sudha/3-tier-app-Deployment
Clone the Repository
To get started, open your terminal and run the following commands:
git clone https://github.com/Pravesh-Sudha/3-tier-app-Deployment
cd 3-tier-app-Deployment/
Inside the cloned repo, you'll find a folder named terra-config/. That's where all the Terraform magic happens. Navigate into that directory:
cd terra-config/
Now initialize the Terraform backend (which we configured to use your S3 bucket earlier):
terraform init
This will configure Terraform to use the remote backend for storing the state file. If your bucket name is different from mine (pravesh-terra-state-bucket), make sure to update the name in backend.tf.
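For reference, a backend.tf for this setup looks roughly like the sketch below; the key path shown (eks/terraform.tfstate) matches the state path used in the cleanup step later, but the exact contents of the file in the repo may differ slightly:
terraform {
  backend "s3" {
    bucket = "pravesh-terra-state-bucket"  # replace with your own bucket name
    key    = "eks/terraform.tfstate"       # object path of the state file inside the bucket
    region = "us-east-1"
  }
}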
Understanding the Terraform Code Structure
Instead of dumping everything into a single main.tf file, I've broken the configuration into logical modules for clarity and scalability. Here's a quick overview:
- provider.tf: Specifies the cloud provider. In our case, it's AWS (no surprise there!).
- backend.tf: Configures Terraform to store state remotely in our S3 bucket.
- ecr.tf: Creates two public repositories in ECR, 3-tier-frontend and 3-tier-backend, for storing Docker images.
- vpc.tf: Fetches the default VPC and subnet details.
- role.tf: Defines IAM roles: one for the EKS cluster (includes AmazonEKSClusterPolicy) and one for the node group (includes policies like AmazonEKSWorkerNodePolicy, AmazonEC2ContainerRegistryReadOnly, and AmazonEKS_CNI_Policy).
- eks.tf: Provisions the EKS cluster named Three-tier-cloud (a rough sketch of this file and the node group follows this list).
- node_group.tf: Creates the worker node group for the cluster with one t2.medium EC2 instance.
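To give a feel for what eks.tf and node_group.tf contain, here is a minimal, illustrative sketch. The IAM role and subnet references are placeholders standing in for whatever names the repo actually uses; treat this as a sketch, not the repo's exact code:
resource "aws_eks_cluster" "three_tier" {
  name     = "Three-tier-cloud"
  role_arn = aws_iam_role.eks_cluster_role.arn   # cluster role from role.tf (name assumed)

  vpc_config {
    subnet_ids = data.aws_subnets.default.ids    # default subnets looked up in vpc.tf (name assumed)
  }
}

resource "aws_eks_node_group" "three_tier_nodes" {
  cluster_name    = aws_eks_cluster.three_tier.name
  node_group_name = "three-tier-node-group"
  node_role_arn   = aws_iam_role.node_group_role.arn  # node role from role.tf (name assumed)
  subnet_ids      = data.aws_subnets.default.ids
  instance_types  = ["t2.medium"]

  scaling_config {
    desired_size = 1
    max_size     = 1
    min_size     = 1
  }
}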
Apply the Terraform Configuration
Now we're ready to provision the infrastructure! Run the following command:
terraform apply --auto-approve
This might take 15-20 minutes, especially since provisioning EKS clusters and node groups can take some time. Be patient; AWS is building your cloud infrastructure behind the scenes.
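Once the apply finishes, an optional way to confirm the cluster is up is to ask for its status, which should come back as ACTIVE:
# Optional: check that the EKS cluster reports ACTIVE
aws eks describe-cluster --name Three-tier-cloud --region us-east-1 --query "cluster.status"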
Push Docker Images to ECR
Once the infrastructure is up, it's time to push our Docker images for the frontend and backend to AWS ECR.
- Go to your AWS Console > ECR > Repositories.
- Click on the 3-tier-frontend repository.
- Click on "View push commands"; AWS will show you four CLI commands (a typical sequence is sketched after these steps).
- Now, go to the frontend/ folder in your project directory:
cd ../frontend/
Run each of the four commands one by one to build the image and push it to ECR.
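The console shows the exact commands for your account, but for reference a typical sequence for a private ECR repository looks like this (the account ID is a placeholder; if the repository was created as a public ECR repository, the login command and registry URL differ slightly):
# Illustrative push sequence; prefer the exact commands shown in the ECR console
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin <account-id>.dkr.ecr.us-east-1.amazonaws.com
docker build -t 3-tier-frontend .
docker tag 3-tier-frontend:latest <account-id>.dkr.ecr.us-east-1.amazonaws.com/3-tier-frontend:latest
docker push <account-id>.dkr.ecr.us-east-1.amazonaws.com/3-tier-frontend:latest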
Repeat the same steps for the 3-tier-backend repository:
- Go back to ECR > Repositories.
- Select 3-tier-backend and click "View push commands".
- Navigate to the backend directory:
cd ../backend/
Run the ECR commands provided to push the backend Docker image.
Once done, your container images will be hosted in your private AWS ECR repositories, ready to be deployed to your EKS cluster!
Step 4: Deploy to EKS with kubectl and Set Up Ingress via ALB
Now that your EKS cluster and ECR repositories are ready, it's time to interact with the cluster, deploy your workloads, and expose your application to the internet. We'll use kubectl for that: the command-line tool for managing Kubernetes clusters.
Install kubectl
If you're using Ubuntu on amd64, run the following to install kubectl:
curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin
kubectl version --short --client
If you're using a different OS/architecture, install it using the official instructions:
kubectl Install Guide
Connect kubectl to Your EKS Cluster
Now configure kubectl to use your EKS cluster:
aws eks update-kubeconfig --region us-east-1 --name Three-tier-cloud
This updates your ~/.kube/config file so that you can interact with your new EKS cluster using kubectl.
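A quick optional check that the connection works is to list the worker nodes; you should see the single t2.medium node from the node group:
# Optional: confirm kubectl can reach the cluster
kubectl get nodes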
Update Kubernetes Manifests
Inside the repo directory 3-tier-app-Deployment/k8s_manifests/, you'll find the Kubernetes manifests for deploying the frontend, backend, and MongoDB services.
Before applying them, update the image URIs in both deployment files with the correct values from ECR.
Update backend_deployment.yml:
Find this block:
spec:
  containers:
  - name: backend
    image: <YOUR_IMAGE_URI>
    imagePullPolicy: Always
Replace <YOUR_IMAGE_URI> with the full image URI from your 3-tier-backend ECR repo (latest tag).
Update frontend_deployment.yml:
Do the same in the frontend manifest with the image URI from the 3-tier-frontend ECR repo.
Create a Namespace for the App
Let's keep things clean by isolating our app into a dedicated Kubernetes namespace:
kubectl create namespace workshop
kubectl config set-context --current --namespace workshop
Deploy the App Components
Apply the deployment and service files for each component:
kubectl apply -f frontend-deployment.yaml -f frontend-service.yaml
kubectl apply -f backend-deployment.yaml -f backend-service.yaml
# Deploy MongoDB
cd mongo/
kubectl apply -f .
At this point, your services are up and running within the cluster, but we still need a way to expose them to the outside world.
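To confirm everything started correctly (optional), list the pods and services in the namespace:
# Optional: all pods should reach the Running state
kubectl get pods,svc -n workshop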
Set Up Application Load Balancer (ALB) and Ingress
To route external traffic into your Kubernetes services, we'll use an AWS Application Load Balancer along with an Ingress Controller.
Create an IAM Policy for the Load Balancer
The IAM policy JSON file is included in the Kubernetes manifests directory:
cd k8s_manifests/
Create the IAM policy in AWS:
aws iam create-policy \
--policy-name AWSLoadBalancerControllerIAMPolicy \
--policy-document file://iam_policy.json
Associate OIDC Provider with EKS
To enable IAM roles for Kubernetes service accounts, associate an OIDC provider with your EKS cluster.
First, install eksctl:
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
eksctl version
Then associate the OIDC provider:
eksctl utils associate-iam-oidc-provider \
--region=us-east-1 \
--cluster=Three-tier-cloud \
--approve
Create a Service Account for the Load Balancer
Replace <Your-Account-Number> with your actual AWS account ID and run:
eksctl create iamserviceaccount \
--cluster=Three-tier-cloud \
--namespace=kube-system \
--name=aws-load-balancer-controller \
--role-name AmazonEKSLoadBalancerControllerRole \
--attach-policy-arn=arn:aws:iam::<Your-Account-Number>:policy/AWSLoadBalancerControllerIAMPolicy \
--approve \
--region=us-east-1
Install Helm and Deploy the Load Balancer Controller
We'll use Helm to install the AWS Load Balancer Controller:
sudo snap install helm --classic
helm repo add eks https://aws.github.io/eks-charts
helm repo update eks
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
-n kube-system \
--set clusterName=Three-tier-cloud \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller
Check if it's running:
kubectl get deployment -n kube-system aws-load-balancer-controller
Apply Ingress Configuration
Now go back to the k8s_manifests/ directory and apply the ingress resource:
kubectl apply -f full_stack_lb.yaml
Wait 5-7 minutes to allow the ingress and ALB to be fully provisioned.
Access Your Application
To get the ALB endpoint:
kubectl get ing -n workshop
You'll see an ADDRESS field in the output. Copy that URL, paste it into your browser, and voilà: your three-tier application is live on AWS!
To see the errors I encountered while deploying the ALB, check out this LinkedIn post.
Step 5: Clean Up AWS Resources
Congratulations on successfully deploying your three-tier application on AWS EKS using Terraform!
Before we wrap things up, it's important to clean up the resources we created to avoid any unexpected AWS charges.
Delete Docker Images from ECR
- Head over to the ECR dashboard in the AWS Console.
- Under Private Repositories, select both 3-tier-backend and 3-tier-frontend.
- Delete the images from each repository (a CLI alternative is sketched below).
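If you prefer the command line, something like the following should remove a tagged image (the repository names must match whatever ecr.tf actually created; if the repositories are public ECR repositories, use the equivalent aws ecr-public commands instead):
# Optional CLI alternative: delete the pushed images
aws ecr batch-delete-image --repository-name 3-tier-frontend --image-ids imageTag=latest
aws ecr batch-delete-image --repository-name 3-tier-backend --image-ids imageTag=latest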
Destroy Infrastructure with Terraform
Now let's destroy the entire infrastructure from your terminal. Navigate to the terra-config/ directory and run:
terraform destroy --auto-approve
Terraform will tear down the EKS cluster, node group, IAM roles, VPC config, ECR repositories, and more.
Delete Terraform State File and S3 Bucket
After destroying your resources, don't forget to remove the Terraform state file and the bucket itself:
aws s3 rm s3://pravesh-terra-state-bucket/eks/terraform.tfstate
Then go to the S3 Dashboard, empty the bucket manually (if needed), and delete the bucket to finish the cleanup process.
Make sure to delete the bucket; otherwise it will incur unwanted charges.
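If you'd rather finish this from the terminal, a forceful delete is sketched below; note that on a versioned bucket this can fail if older object versions remain, in which case emptying the bucket from the console is the reliable path:
# Optional: attempt to remove the bucket and its current objects in one go
aws s3 rb s3://pravesh-terra-state-bucket --force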
Conclusion: What You've Learned
And that's a wrap!
In this project, you've gone through the complete lifecycle of deploying a real-world three-tier application using modern DevOps tools and cloud infrastructure:
- You learned how to use Terraform to provision infrastructure as code.
- You created and managed AWS resources like EKS, ECR, IAM, and S3.
- You containerized applications and deployed them with Kubernetes.
- You exposed your app to the internet using an Application Load Balancer and Ingress.
- And finally, you followed best practices like remote state management and safe resource cleanup.
This project isn't just a demo; it's a strong foundation you can build on for production-grade cloud-native applications.
If this blog helped you, consider sharing it with others or giving the GitHub repo a star!
Have questions, suggestions, or want to collaborate?
Reach out to me on Twitter, LinkedIn, or explore more on blog.praveshsudha.com