DEV Community

Victor Ojeje


How I went from Docker Compose to production EKS without burning AWS budget on mistakes

The Problem with Learning Kubernetes Directly on AWS

Most tutorials for Kubernetes on AWS tell you to:

  1. Spin up an EKS cluster
  2. Apply your manifests
  3. Debug why nothing works
  4. Watch your bill climb while you figure it out

EKS charges $0.10 per hour just for the control plane, before a single EC2 node is added. That is about $72/month for a cluster sitting idle while you troubleshoot ingress routing, secret injection failures, and container networking issues.

That is fine for a company budget. It hurts when you are learning on your own.

There is a better workflow. Test locally, prove it works, then pay for AWS.


What I Built

A production-grade containerized web application with a staged deployment workflow:

  • Stage 1: Docker Compose for fast local iteration
  • Stage 2: Minikube to validate Kubernetes behavior before touching AWS
  • Stage 3: EKS with full CI/CD, AWS Secrets Manager, and automated ALB provisioning

App repo: https://github.com/escanut/fastapi-k8s-project

Infra repo: https://github.com/escanut/fastapi-aws-infra

The app is a FastAPI backend with async request handling and connection pooling against PostgreSQL, plus a vanilla JS frontend. A simple product catalog with create, retrieve, and delete operations. Nothing fancy, on purpose. The focus is the infrastructure pattern and delivery workflow.


Stage 1: Docker Compose

Goal: instant feedback on code changes with zero cluster overhead.

services:
  postgres:
    image: postgres:15-alpine
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U admin -d products"]

  backend:
    build: ./backend
    environment:
      DB_HOST: postgres
    volumes:
      - ./backend:/app

  frontend:
    image: nginx:alpine
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf

Nginx routes /api traffic to FastAPI. Credentials are plain env variables here, intentionally different from production. That contrast is the lesson.
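The /api routing rule is small; here is a sketch of what that nginx.conf might look like (the backend service name comes from the Compose file, but port 8000 is an assumption, the uvicorn default, not copied from the repo):

```nginx
server {
    listen 80;

    # Serve the static frontend
    location / {
        root /usr/share/nginx/html;
        index index.html;
    }

    # Proxy API traffic to the FastAPI container; "backend" resolves
    # via the Compose service name (port 8000 assumed)
    location /api {
        proxy_pass http://backend:8000;
        proxy_set_header Host $host;
    }
}
```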

Validates: app logic, queries, routing, frontend-backend communication.
Does not validate: Kubernetes behavior.

Stage 2: Minikube

Goal: catch Kubernetes failures at zero cost before provisioning AWS.

minikube start
eval $(minikube docker-env)
minikube addons enable ingress
kubectl apply -f dev/k8s/

Setup mirrors production closely:

  • Secrets from Kubernetes Secrets, not env vars

  • Postgres as a pod with a PVC

  • Ingress configured like production

Broken ingress, wrong ports, and secret issues show up here instead of on a paid cluster. Minikube is not identical to ALB, but it is close enough to surface real problems early.
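A minimal sketch of that dev setup (resource names, the dev password, and the port are illustrative assumptions, not copied from the repo):

```yaml
# dev/k8s/secret.yaml -- a Kubernetes Secret instead of a plain env var
apiVersion: v1
kind: Secret
metadata:
  name: postgres-credentials
type: Opaque
stringData:
  DB_PASSWORD: dev-only-password   # fine locally; production syncs from Secrets Manager
---
# dev/k8s/ingress.yaml -- same ingress shape production will use
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
    - http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: backend
                port:
                  number: 8000
```

In production the only real change is the ingress class and annotations pointing at the ALB controller; the manifests otherwise stay the same shape.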

The CI/CD Pipeline

On every push to main:

  1. Configure AWS credentials, using aws-actions/configure-aws-credentials@v4
  2. Build and push the frontend and backend images, via docker build and docker push
  3. Update the deployments, via kubectl set image ...
  4. Verify the rollout, via kubectl rollout status ...

Each image is tagged with commit SHA. Rollback just means redeploying a previous SHA. No guesswork.

OIDC is used for short-lived credentials. No static keys stored.
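Put together, the workflow looks roughly like this (the role ARN, registry, region, and deployment names are placeholders, not taken from the repos):

```yaml
name: deploy
on:
  push:
    branches: [main]

permissions:
  id-token: write   # required for OIDC; no static AWS keys stored
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      ECR_REPO: 123456789012.dkr.ecr.us-east-1.amazonaws.com  # placeholder registry
    steps:
      - uses: actions/checkout@v4

      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/github-actions-deploy  # placeholder
          aws-region: us-east-1

      - name: Build and push the backend image, tagged with the commit SHA
        run: |
          docker build -t $ECR_REPO/backend:${{ github.sha }} ./backend
          docker push $ECR_REPO/backend:${{ github.sha }}

      - name: Update the deployment to the new image
        run: kubectl set image deployment/backend backend=$ECR_REPO/backend:${{ github.sha }}

      - name: Verify the rollout completed
        run: kubectl rollout status deployment/backend --timeout=120s
```

Because every tag is a commit SHA, rolling back is just rerunning the kubectl set image step with an earlier SHA.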

VPC Endpoints: Why They Matter Beyond Security

Without endpoints:

Worker → NAT → Internet → AWS API → Internet → NAT → Worker

NAT data processing costs $0.045/GB. Image pulls, logs, and secrets traffic add up fast: 100 GB of pulls and log shipping in a month is about $4.50 in data processing alone, on top of the NAT gateway's hourly charge.

Eight endpoints route AWS-internal traffic privately. NAT remains only for external access during bootstrap.

The result is lower cost and reduced exposure.

Secrets Management Progression

Stage 1: Plain env var  
Stage 2: K8s Secret  
Stage 3: ExternalSecret synced from Secrets Manager

Production password is randomly generated and stored in Secrets Manager. Terraform state is encrypted. Password is not floating around in plaintext.
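The Stage 3 wiring, sketched (the store and secret names are assumptions for illustration):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: postgres-credentials
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager    # a SecretStore pointing at AWS Secrets Manager
    kind: SecretStore
  target:
    name: postgres-credentials   # the K8s Secret the operator creates and keeps in sync
  data:
    - secretKey: DB_PASSWORD
      remoteRef:
        key: prod/postgres       # Secrets Manager secret name (placeholder)
        property: password
```

The app never changes: it reads the same Kubernetes Secret in every stage. Only where that Secret comes from changes.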

IAM Design: Least Privilege

  • GitHub Actions: scoped OIDC role

  • Node role: read-only ECR

  • Backend pods: read specific secret via IRSA

  • ALB Controller: ALB-only permissions

Each component gets only what it needs to perform its task.
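IRSA itself boils down to one annotation on the pod's service account; a sketch, with a placeholder role ARN:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: backend
  annotations:
    # Pods running under this service account assume this IAM role through
    # the cluster's OIDC provider -- no node-wide credentials involved
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/backend-secrets-reader
```

The role's trust policy is scoped to this namespace and service account, and its permissions grant secretsmanager:GetSecretValue on just the one secret.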

Post-Install: Why Not Everything in Terraform

post-install.sh installs:

  • AWS Load Balancer Controller

  • External Secrets Operator

CRDs and dependency ordering make managing these Helm releases through Terraform messy. A separate post-install step is predictable and aligns with upstream guidance.
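A sketch of what such a script typically contains (the cluster name is a placeholder, and chart values change between releases, so check each chart's docs):

```shell
#!/usr/bin/env bash
set -euo pipefail

# AWS Load Balancer Controller -- provisions ALBs from Ingress resources.
# The service account already exists with an IRSA annotation, so the chart
# is told not to create one.
helm repo add eks https://aws.github.io/eks-charts
helm upgrade --install aws-load-balancer-controller eks/aws-load-balancer-controller \
  --namespace kube-system \
  --set clusterName=my-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller

# External Secrets Operator -- syncs Secrets Manager entries into K8s Secrets
helm repo add external-secrets https://charts.external-secrets.io
helm upgrade --install external-secrets external-secrets/external-secrets \
  --namespace external-secrets --create-namespace
```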

Infrastructure Cost Snapshot

Service             Monthly
EKS control plane   $72
EC2 nodes           ~$30
RDS                 ~$12
NAT gateway         ~$65
VPC endpoints       ~$11.60
ALB                 ~$16
Total               ~$207

Endpoints reduce NAT data charges on AWS API traffic. In active pipelines, that tradeoff makes sense.

What This Demonstrates

  • Cost-aware Kubernetes learning path

  • OIDC instead of static keys

  • IRSA for pod-level IAM

  • VPC endpoints for cost and security

  • External Secrets integration

  • Fully automated ALB provisioning

Try it

Repos are public and documented. Start local, move to Minikube, then EKS.

If something breaks or you want to discuss design choices, reach out. I am always refining this setup myself.

Open to Work

Currently seeking remote roles in Cloud, DevOps, and Platform Engineering.
