Deploying a 3-Tier Microservices Platform on AWS EKS

by David Oyewole, Cloud & DevOps Engineer


Introduction

Every DevOps engineer eventually faces that "big one": the project that brings together all the skills we've learned across infrastructure, automation, cloud, and observability.

For me, that project was deploying the Sock Shop microservices application on Amazon EKS using Terraform, Helm, AWS Load Balancer Controller, Route53, ACM, ExternalDNS, Prometheus, and Grafana.

What started as an academic capstone evolved into a fully automated, production-ready microservices deployment — complete with HTTPS, DNS automation, and end-to-end monitoring.

This article walks through my design, implementation, challenges, and lessons learned from building this platform from scratch.


The Vision

I wanted to prove I could:

  • Provision and manage AWS infrastructure as code using Terraform
  • Deploy Kubernetes workloads with Helm charts
  • Configure secure ingress with HTTPS and DNS automation
  • Add observability through Prometheus and Grafana
  • Do it all without manual steps, in a reproducible, production-grade setup

In other words, to go beyond “it works” and build something that would scale, self-document, and represent how DevOps is done in the real world.


Architecture Overview

Setup Architecture

At the heart of my setup is AWS EKS, orchestrating dozens of containers across two main namespaces:

  • sock-shop — the microservices application (frontend + internal services)
  • monitoring — the observability stack (Prometheus + Grafana)

The Cloud Stack

| Layer | Technology | Role |
| --- | --- | --- |
| Infrastructure | Terraform | Provisions VPC, EKS, subnets, IAM roles, ACM, Route53 |
| Ingress | AWS Load Balancer Controller | Manages ingress traffic via ALB |
| DNS & SSL | Route53 + ACM + ExternalDNS | Automated DNS record creation and TLS certificates |
| Deployment | Helm | Deploys Sock Shop microservices and the monitoring stack |
| Monitoring | Prometheus + Grafana | Cluster and application observability |
| Security | IRSA | Fine-grained IAM for the ALB and DNS controllers |

Traffic flows like this:
User → Route53 (DNS) → ALB (HTTPS termination) → Ingress → Frontend Service → Internal ClusterIP Services


Step 1: Infrastructure as Code with Terraform

I started by writing Terraform modules to create the networking and compute foundation.

Terraform handled:

  • A VPC with public and private subnets
  • EKS cluster and managed node groups
  • IAM roles for both controllers (external-dns and aws-load-balancer-controller) using IRSA
  • A Route53 hosted zone for my subdomain (sock.blessedc.org)
  • An ACM wildcard certificate for HTTPS termination
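
Here is a condensed sketch of what that code boils down to. It leans on the community terraform-aws-modules registry modules; the names, CIDRs, and versions are illustrative placeholders, not my exact configuration:

```hcl
# Minimal sketch: VPC + EKS via the community terraform-aws-modules
# (all values below are illustrative placeholders)
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "sockshop-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-east-1a", "us-east-1b"]
  public_subnets  = ["10.0.1.0/24", "10.0.2.0/24"]
  private_subnets = ["10.0.11.0/24", "10.0.12.0/24"]

  enable_nat_gateway = true # private nodes still need outbound access
}

module "eks" {
  source = "terraform-aws-modules/eks/aws"

  cluster_name    = "sockshop"
  cluster_version = "1.29"

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets # worker nodes live in private subnets

  eks_managed_node_groups = {
    default = {
      instance_types = ["t3.medium"]
      min_size       = 2
      desired_size   = 2
      max_size       = 4
    }
  }
}
```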

Once the code was ready, I ran:

```bash
terraform init
terraform apply -auto-approve
```

In minutes, AWS spun up a fully functional Kubernetes environment with all IAM and networking policies correctly set.

Step 2: Deploying Microservices with Helm

With the infrastructure live, I used Helm to deploy the Sock Shop microservices.

```bash
helm repo add sock-shop https://microservices-demo.github.io/helm
helm install sockshop sock-shop/sock-shop -n sock-shop --create-namespace
```

Helm made it easy to templatize everything and redeploy consistently after every test or rebuild.

I configured only the frontend service to be public (via Ingress), while keeping all other services (catalogue, user, orders, shipping, etc.) internal-only with ClusterIP.

This separation ensured the architecture was both secure and realistic, mimicking real-world microservice deployments.
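
A quick way to sanity-check that split once the release settles (service names depend on the chart's defaults):

```bash
# Every service should be ClusterIP; external traffic only enters via the Ingress/ALB
kubectl get svc -n sock-shop
kubectl get ingress -n sock-shop
```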

Step 3: HTTPS & DNS Automation

Manual certificate management and DNS updates are the enemies of scalability.

So I automated both using:

  • AWS Certificate Manager (ACM) — provisioned via Terraform with a wildcard certificate covering sock.blessedc.org and its subdomains
  • ExternalDNS — dynamically updates Route53 A-records based on Ingress resources
  • AWS Load Balancer Controller — automatically creates the load balancer and attaches the certificate

A single Ingress.yaml file for the frontend handled all of this:

```yaml
annotations:
  kubernetes.io/ingress.class: alb
  alb.ingress.kubernetes.io/scheme: internet-facing
  alb.ingress.kubernetes.io/target-type: ip
  alb.ingress.kubernetes.io/certificate-arn: <acm-arn>
```
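
For context, the full manifest around those annotations can look like the sketch below. The host, service name, and port follow the Sock Shop chart's defaults, and <acm-arn> remains a placeholder:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: front-end
  namespace: sock-shop
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/certificate-arn: <acm-arn>
    # listen on 80 and 443, and bounce plain HTTP to HTTPS at the ALB
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/ssl-redirect: "443"
spec:
  rules:
    - host: sock.blessedc.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: front-end
                port:
                  number: 80
```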

In less than five minutes, a fully functional HTTPS endpoint appeared:

https://sock.blessedc.org

And yes, ExternalDNS took care of the Route53 record automatically. I installed it with Helm:

```bash
helm repo add external-dns https://kubernetes-sigs.github.io/external-dns/
helm repo update

helm install external-dns external-dns/external-dns \
  --namespace external-dns \
  --create-namespace \
  --set provider=aws \
  --set registry=txt \
  --set txtOwnerId=sockshop \
  --set domainFilters={sock.blessedc.org} \
  --set aws.zoneType=public \
  --set serviceAccount.create=true \
  --set serviceAccount.name=external-dns \
  --set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"=arn:aws:iam::<IAM-ID>:role/eks-externaldns-role
```
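
For reference, the Route53 permissions behind eks-externaldns-role follow the upstream ExternalDNS policy. A Terraform sketch of the minimum (resource names are illustrative; in practice, narrow the hosted-zone ARN to your zone ID):

```hcl
# The standard ExternalDNS AWS policy: change records, plus list zones/records
resource "aws_iam_policy" "external_dns" {
  name = "eks-externaldns-policy"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["route53:ChangeResourceRecordSets"]
        Resource = ["arn:aws:route53:::hostedzone/*"] # narrow to your zone
      },
      {
        Effect   = "Allow"
        Action   = ["route53:ListHostedZones", "route53:ListResourceRecordSets"]
        Resource = ["*"]
      }
    ]
  })
}
```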

Step 4: Monitoring with Prometheus and Grafana

No production cluster is complete without observability.

I deployed a complete monitoring stack using the kube-prometheus-stack Helm chart.

```bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack -n monitoring --create-namespace
kubectl apply -f grafana-ingress.yaml -n monitoring
kubectl apply -f prometheus-ingress.yaml -n monitoring
```

This deployed:

  • Prometheus — scraping cluster and application metrics
  • Grafana — for dashboard visualization
  • Alertmanager — ready for future integrations

I created the grafana-ingress.yaml and prometheus-ingress.yaml files before executing those commands; they created the Grafana and Prometheus subdomains:

```
grafana.sock.blessedc.org
prometheus.sock.blessedc.org
```

and exposed them via Ingress, secured by the same ACM certificate.
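
As an illustration, a grafana-ingress.yaml for this setup can look like the sketch below (the monitoring-grafana service name and port 80 are the kube-prometheus-stack defaults for a release named monitoring):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana
  namespace: monitoring
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/certificate-arn: <acm-arn>
spec:
  rules:
    - host: grafana.sock.blessedc.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: monitoring-grafana # <release>-grafana, the chart default
                port:
                  number: 80
```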

Custom Grafana Dashboard

I built a custom “Sock Shop Microservices Overview” dashboard showing:

  • Pod CPU & Memory usage per microservice
  • Pod restart count
  • Node resource utilization
  • Cluster-wide health
  • Request and error rates
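
Under the hood, panels like these are driven by metrics that kube-prometheus-stack ships out of the box (cAdvisor and kube-state-metrics). The queries look roughly like this:

```promql
# CPU per pod in the sock-shop namespace (5-minute rate)
sum by (pod) (rate(container_cpu_usage_seconds_total{namespace="sock-shop", container!=""}[5m]))

# Memory working set per pod
sum by (pod) (container_memory_working_set_bytes{namespace="sock-shop", container!=""})

# Restart counts per pod
sum by (pod) (kube_pod_container_status_restarts_total{namespace="sock-shop"})
```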

It was a beautiful moment seeing real-time metrics flow across a system I built entirely from scratch.

Grafana Dashboard

Step 5: Troubleshooting and Challenges

No project is complete without a few scars.

Here are some of the issues I hit — and how I fixed them:

| Issue | Root Cause | Fix |
| --- | --- | --- |
| AccessDenied for Route53 | Missing IAM permissions for ExternalDNS | Attached a route53:* policy to eks-externaldns-role |
| ALB not creating | Missing ingressClassName and IAM for the ALB controller | Updated Ingress annotations and the IRSA role |
| Ingress stuck without an address | ACM ARN mismatch | Ensured the certificate region matched the cluster region (us-east-1) |
| Hosted zone not deleting | Route53 still had A/TXT records | Cleaned them up manually before running terraform destroy |
| Grafana unreachable | Ingress hostname typo | Fixed the host rule to grafana.sock.blessedc.org |
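
When debugging issues like these, the controller logs were the fastest route to a root cause. A few commands worth keeping handy (resource and deployment names assume the default Helm installs above):

```bash
# Why hasn't my Ingress been given an ALB address?
kubectl describe ingress front-end -n sock-shop

# What are the controllers actually doing?
kubectl logs -n kube-system deployment/aws-load-balancer-controller
kubectl logs -n external-dns deployment/external-dns
```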

Each issue taught me something about how AWS and Kubernetes actually communicate under the hood, and that's where true learning happens.

Security Highlights

  • Used IRSA (IAM Roles for Service Accounts) for both aws-load-balancer-controller and external-dns

  • Ensured all backend services stayed internal (ClusterIP)

  • Enabled HTTPS-only ingress (redirecting all HTTP to HTTPS)

  • Isolated monitoring stack in a separate namespace
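
IRSA is worth a closer look, since it's what keeps node IAM roles clean. A sketch of the pattern (the role name is illustrative; the Helm charts above create annotations like this for you):

```yaml
# The ServiceAccount carries the IAM role; pods using it receive scoped
# AWS credentials instead of inheriting the node's instance profile.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: aws-load-balancer-controller
  namespace: kube-system
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<IAM-ID>:role/eks-alb-controller-role
```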

This aligns with cloud-native security best practices for production workloads.

Results

After deployment, I had:

  • A fully automated and reproducible AWS EKS cluster

  • Publicly available frontend accessible via HTTPS

  • DNS records auto-managed by Kubernetes (no manual updates)

  • Centralized monitoring for all microservices

  • A clean teardown/rebuild process via Terraform in minutes

Lessons Learned

  • Networking is everything. Understanding subnets, routes, and public vs private traffic is key before touching EKS.

  • The AWS Load Balancer Controller is incredibly powerful but requires precise IAM permissions.

  • Automation = Reliability. The fewer manual steps, the fewer production mistakes.

  • Monitoring early helps debug faster and ensures confidence in the platform.

  • IRSA changed the way I think about security in Kubernetes: no more over-permissioned nodes.

Final Thoughts

This project made me appreciate the power of Infrastructure as Code and declarative configuration.
It taught me to think like a systems designer, not just an implementer: building platforms that can scale, heal, and redeploy themselves.

Today, my sock.blessedc.org setup stands as more than a capstone; it's my personal proof that DevOps is about automation, resilience, and continuous learning.

Summary

This project demonstrates how to design and deploy a secure, scalable, observable microservices architecture on AWS, using Terraform for infrastructure, Helm for Kubernetes deployments, and modern DevOps best practices for automation, monitoring, and reliability.

👨‍💻 Author

David Oyewole
Cloud & DevOps Engineer
LinkedIn
GitHub
