Timur Galeev for AWS Community Builders

Posted on • Originally published at tgaleev.com

ECS vs EKS: When You DON'T Need Kubernetes - A Practical Guide to Choosing AWS Container Services

Introduction

You know what? I see teams spinning up Kubernetes clusters for three microservices all the time. Then they spend two months figuring out pods, ingress controllers, and all that magic. And then they pay $210 per month just for three clusters in different regions ($70 each), not counting the actual servers.

Here's the honest truth: Kubernetes is a powerful tool but you don't always need it. Amazon ECS is a simpler alternative that handles most tasks faster and cheaper.

In this article I'll show you:

  • When ECS beats EKS (and saves you tons of money)
  • Real scenarios with numbers and examples
  • Ready-to-use code snippets for deploying to both platforms
  • How to make the decision without headaches

Let's dive in!

Quick Comparison: ECS vs EKS

First let's look at the main differences in a simple table:

| Feature | AWS ECS | AWS EKS |
| --- | --- | --- |
| Cluster cost | $0 | $0.10/hour (~$70/month) |
| Setup complexity | Low (2-4 hours) | High (1-2 days) |
| Learning curve | A few days | Several weeks |
| Management | AWS Console/CLI | kubectl + AWS Console |
| Ecosystem | AWS services | Entire Kubernetes world |
| Portability | AWS only | Any cloud/on-prem |
| Updates | Automatic | Manual (control plane) |
| Best for | 1-10 services | 10-100+ services |

Architecture: How It Works

ECS Architecture:

Your Application
    ↓
Docker Image (you need this!)
    ↓
Task Definition (container description)
    ↓
ECS Service (manages launch)
    ↓
EC2 or Fargate (where it runs)
    ↓
Container running

EKS Architecture:

Your Application
    ↓
Docker Image
    ↓
Kubernetes Pod specification
    ↓
Deployment/StatefulSet
    ↓
Kubernetes Control Plane ($$$)
    ↓
Worker Nodes
    ↓
Container in Pod

See the difference? ECS has two fewer steps and each one is easier to understand.

When ECS is Your Best Choice

This is where it gets interesting. Many people think you always need Kubernetes, but that's not true. Let's break down real situations where ECS wins.

Scenario 1: Multi-Regional Deployment (3-5 Services)

Imagine: you have a simple API and a couple of supporting services. You need to deploy them in three regions: Europe, Asia, and the USA. For redundancy, you know.

With EKS you pay:

  • Europe cluster: $70/month
  • Asia cluster: $70/month
  • USA cluster: $70/month
  • Total: $210/month just for the right to run containers

With ECS you pay:

  • Cluster is free: $0
  • Total: $0 for management

In other words, you save $2,520 per year just on the control plane! And you still have to pay for the actual servers.
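This is purely back-of-the-envelope math, but it's worth sanity-checking. A quick sketch using the article's rounded $70/cluster figure (the exact EKS control-plane price is $0.10/hour, about $73 for a 730-hour month):

```python
# EKS control-plane cost across regions vs ECS (whose cluster is free)
PER_CLUSTER_MONTHLY = 70  # article's rounded figure; exact: 0.10 * ~730h ≈ $73
N_REGIONS = 3

eks_monthly = PER_CLUSTER_MONTHLY * N_REGIONS  # ECS equivalent is $0
annual_savings = eks_monthly * 12

print(eks_monthly)     # 210
print(annual_savings)  # 2520
```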

Real Example

I had a project - e-commerce backend. Five services:

  1. API Gateway (Node.js)
  2. Order Service (Python)
  3. Payment Service (Go)
  4. Notification Service (Node.js)
  5. Analytics Worker (Python)

Each service needed a Docker image. Here's a simple Dockerfile example for the Node.js API:

# Dockerfile for API Gateway
FROM node:18-alpine

WORKDIR /app

COPY package*.json ./
RUN npm ci --production

COPY . .

EXPOSE 3000
CMD ["node", "server.js"]

We deployed across three regions using ECS Fargate. Setup time: 4 hours, including the Terraform code. If we'd done it with EKS, that's a minimum of a week with Helm charts, ingress controllers, and all the rest of that machinery.

Here's how we defined the task in ECS (simplified):

# ECS Task Definition - just the container part
resource "aws_ecs_task_definition" "api_gateway" {
  family                   = "api-gateway"
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  cpu                      = "256"
  memory                   = "512"

  container_definitions = jsonencode([{
    name      = "api"
    image     = "123456789.dkr.ecr.us-east-1.amazonaws.com/api-gateway:latest"
    essential = true

    portMappings = [{
      containerPort = 3000
      protocol      = "tcp"
    }]

    environment = [
      { name = "NODE_ENV", value = "production" },
      { name = "PORT", value = "3000" }
    ]
  }])
}

Compare this to Kubernetes - you'd need Deployment YAML, Service YAML, maybe Ingress, ConfigMaps... it adds up.

Scenario 2: Quick Start and Simplicity

You're a startup. You have an MVP that needs to ship yesterday. A team of three people, none of whom knows Kubernetes deeply.

ECS gives you:

  • Launch in a couple of hours (not days!)
  • AWS integration out of the box
  • No need to hire a Kubernetes expert
  • Fewer moving parts = fewer things to break

Look, I'm not saying Kubernetes is bad. It's awesome! But do you need it when you just want to run a container? It's like buying a truck to pick up bread from the store.

Time to learn:

  • ECS: 2-3 days to work comfortably
  • EKS: 2-3 weeks minimum (or even a month)

Here's a complete minimal ECS setup with Terraform:

# Minimal ECS cluster
resource "aws_ecs_cluster" "main" {
  name = "my-app-cluster"
}

# ECS Service - runs 2 copies of your container
resource "aws_ecs_service" "app" {
  name            = "my-app"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.api_gateway.arn
  desired_count   = 2
  launch_type     = "FARGATE"

  network_configuration {
    subnets          = ["subnet-xxx", "subnet-yyy"]
    security_groups  = ["sg-xxx"]
    assign_public_ip = true
  }
}

That's it! No Helm, no kubectl, no YAML soup.

Scenario 3: AWS-Native Project

Your project is fully in AWS:

  • Database - RDS
  • Files - S3
  • Queues - SQS
  • Cache - ElastiCache
  • Logs - CloudWatch

Why Kubernetes here? ECS integrates with these services natively and more simply.

Example - S3 access:

ECS Task Role (simple):

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "s3:*",
    "Resource": "arn:aws:s3:::my-bucket/*"
  }]
}

Attach the role to Task Definition - done.

In EKS you do the same through IRSA (IAM Roles for Service Accounts):

  • Set up the OIDC provider
  • Create ServiceAccount in Kubernetes
  • Link with IAM role
  • Annotate the pod

More steps = more places to mess up.

When EKS Becomes Necessary

Alright, enough praising ECS. Let's be honest: there are situations where EKS really is better.

Scenario 1: Large Microservices Architecture (20+ Services)

When you have 20, 30, 50 microservices - that's different math.

Why EKS wins:

  • $70 per cluster is a fixed price (whether you run 5 services or 50)
  • Kubernetes scales complexity better
  • Ecosystem: Helm, Operators, service mesh (Istio, Linkerd)
  • Centralized management of all services

Cost example:

With 30 services in one region:

  • ECS: 30 separate ECS Services = lots of config that's hard to manage
  • EKS: one cluster, all services in namespaces, managed through GitOps

Here $70/month pays for convenience.

A typical Kubernetes deployment:

# Kubernetes Deployment - simpler at scale
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-gateway
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api-gateway
  template:
    metadata:
      labels:
        app: api-gateway
    spec:
      containers:
      - name: api
        image: my-registry/api-gateway:v1.2.3
        ports:
        - containerPort: 3000
        resources:
          requests:
            memory: "512Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "250m"

With Kubernetes you get built-in health checks, rolling updates, easy rollbacks.

Scenario 2: Multi-Cloud or Hybrid Infrastructure

Your company wants:

  • Work in AWS and GCP simultaneously
  • Keep some workloads on-premise
  • Have ability to migrate between clouds

EKS (Kubernetes) gives portability:

  • Same YAML manifests work everywhere
  • Can move applications between clouds
  • Standardization across all infra

ECS is AWS-only; you can't move it to another cloud. (ECS Anywhere lets you run tasks on your own hardware, but the control plane still lives in AWS.)

Scenario 3: Advanced Features

GPU workloads for ML/AI: EKS supports GPU nodes out of the box + all tooling like Kubeflow.

Complex networking policies: Network Policies in Kubernetes give precise traffic control between pods.

# Network Policy example
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-policy
spec:
  podSelector:
    matchLabels:
      app: api-gateway
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 3000

Stateful applications: StatefulSets and Persistent Volumes give stateful workloads first-class support in Kubernetes.

Practical Deployment Examples

Enough theory, let's get our hands dirty. I'll show how to deploy a simple application to both ECS and EKS. The same application, for comparison.

Our application: Nginx + simple Node.js API (both need Docker images)

Building Docker Images First

Before deploying anywhere you need Docker images. Here's our setup:

# Dockerfile for our Node.js app
FROM node:18-alpine

WORKDIR /app
COPY package*.json ./
RUN npm ci --production
COPY . .

EXPOSE 3000
CMD ["node", "index.js"]

Build and push:

# Build image
docker build -t my-app:latest .

# Tag for ECR
docker tag my-app:latest 123456789.dkr.ecr.us-east-1.amazonaws.com/my-app:latest

# Push to ECR
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789.dkr.ecr.us-east-1.amazonaws.com
docker push 123456789.dkr.ecr.us-east-1.amazonaws.com/my-app:latest

ECS Deployment with Terraform

Let's start with the simpler one - ECS.

Step 1: VPC Setup

# Create VPC for containers
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true
}

# AZs referenced by the subnets below
data "aws_availability_zones" "available" {}

# Public subnets
resource "aws_subnet" "public" {
  count                   = 2
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.${count.index}.0/24"
  availability_zone       = data.aws_availability_zones.available.names[count.index]
  map_public_ip_on_launch = true
}

# Internet Gateway (public route table omitted for brevity)
resource "aws_internet_gateway" "main" {
  vpc_id = aws_vpc.main.id
}

# Security group for the tasks: app port in, everything out
resource "aws_security_group" "ecs_tasks" {
  vpc_id = aws_vpc.main.id
  ingress {
    from_port   = 3000
    to_port     = 3000
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

Step 2: ECS Cluster and Service

# Create ECS cluster
resource "aws_ecs_cluster" "main" {
  name = "my-app-cluster"
}

# Task Definition - describes your Docker container
resource "aws_ecs_task_definition" "app" {
  family                   = "my-app"
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  cpu                      = "256"
  memory                   = "512"

  execution_role_arn = aws_iam_role.ecs_execution.arn
  task_role_arn      = aws_iam_role.ecs_task.arn

  container_definitions = jsonencode([{
    name      = "app"
    image     = "123456789.dkr.ecr.us-east-1.amazonaws.com/my-app:latest"
    essential = true

    portMappings = [{
      containerPort = 3000
      protocol      = "tcp"
    }]

    logConfiguration = {
      logDriver = "awslogs"
      options = {
        # the log group must exist first (aws_cloudwatch_log_group resource)
        "awslogs-group"         = "/ecs/my-app"
        "awslogs-region"        = "us-east-1"
        "awslogs-stream-prefix" = "app"
      }
    }
  }])
}

# ECS Service - runs and maintains containers
resource "aws_ecs_service" "app" {
  name            = "my-app-service"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.app.arn
  desired_count   = 2
  launch_type     = "FARGATE"

  network_configuration {
    subnets          = aws_subnet.public[*].id
    security_groups  = [aws_security_group.ecs_tasks.id]
    assign_public_ip = true
  }
}

Step 3: IAM Roles

# Role for ECS to pull Docker images and write logs
resource "aws_iam_role" "ecs_execution" {
  name = "ecs-execution-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        Service = "ecs-tasks.amazonaws.com"
      }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "ecs_execution_policy" {
  role       = aws_iam_role.ecs_execution.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}

# Role for your application (e.g., S3 access)
resource "aws_iam_role" "ecs_task" {
  name = "ecs-task-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        Service = "ecs-tasks.amazonaws.com"
      }
    }]
  })
}

Deploy It

terraform init
terraform plan
terraform apply

Done! Container is running.

EKS Deployment with Terraform

Now the same thing but in EKS.

Step 1: EKS Cluster

# EKS cluster
resource "aws_eks_cluster" "main" {
  name     = "my-eks-cluster"
  role_arn = aws_iam_role.eks_cluster.arn
  version  = "1.28"

  vpc_config {
    # assumes private subnets exist alongside the public ones from the ECS example
    subnet_ids = concat(aws_subnet.public[*].id, aws_subnet.private[*].id)
  }
}

# Worker nodes
resource "aws_eks_node_group" "main" {
  cluster_name    = aws_eks_cluster.main.name
  node_group_name = "main-nodes"
  node_role_arn   = aws_iam_role.eks_node.arn
  subnet_ids      = aws_subnet.private[*].id

  scaling_config {
    desired_size = 2
    max_size     = 4
    min_size     = 1
  }

  instance_types = ["t3.medium"]
}

Step 2: IAM for EKS

# Cluster role
resource "aws_iam_role" "eks_cluster" {
  name = "eks-cluster-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        Service = "eks.amazonaws.com"
      }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "eks_cluster_policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.eks_cluster.name
}

# Node role
resource "aws_iam_role" "eks_node" {
  name = "eks-node-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        Service = "ec2.amazonaws.com"
      }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "eks_worker_node" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.eks_node.name
}

resource "aws_iam_role_policy_attachment" "eks_cni" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.eks_node.name
}

Step 3: Kubernetes Manifests

After the cluster is created, deploy the application with kubectl:

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: 123456789.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
        ports:
        - containerPort: 3000
        resources:
          requests:
            memory: "512Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "250m"
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 3000

Deploy It

# 1. Apply Terraform
terraform init
terraform apply

# 2. Configure kubectl
aws eks update-kubeconfig --name my-eks-cluster --region us-east-1

# 3. Check nodes
kubectl get nodes

# 4. Deploy application
kubectl apply -f deployment.yaml

# 5. Check status
kubectl get pods
kubectl get svc

Difference:

  • ECS: one terraform apply and done
  • EKS: terraform apply + kubectl commands + wait for everything to come up

Complexity Comparison

| Action | ECS | EKS |
| --- | --- | --- |
| Config files | 3-4 Terraform files | 4-5 Terraform files + YAML manifests |
| First deploy time | 5-7 minutes | 15-20 minutes |
| Commands to run | 2 (terraform init, apply) | 5+ (terraform + kubectl) |
| Need to know | AWS, Terraform, Docker | AWS, Terraform, Kubernetes, kubectl, Docker |

Real Cases and Economics

Let's calculate concrete numbers for typical scenarios.

Case 1: Startup with 5 Microservices in 3 Regions

Requirements:

  • 5 services (API, workers, background jobs)
  • 3 regions: US, EU, Asia
  • 2 instances of each service
  • All need Docker images built and stored in ECR

ECS Fargate:

Cluster cost: $0
ECR storage: ~$5/month (for Docker images)
Compute (Fargate):
  - 5 services × 2 instances × 3 regions = 30 tasks
  - Each task: 0.25 vCPU, 512 MB
  - $0.04048/hour per vCPU, $0.004445/hour per GB
  - (0.25 × $0.04048 + 0.5 × $0.004445) × 730 hours ≈ $9/task/month
  - 30 tasks × $9 = $270/month

Total: ~$275/month
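The per-task figure is worth double-checking, since everything else scales off it. A small sketch using the us-east-1 Fargate rates quoted above (rates differ slightly by region):

```python
# Monthly cost of one 0.25 vCPU / 512 MB Fargate task at us-east-1 rates
VCPU_HOURLY = 0.04048   # $ per vCPU-hour
GB_HOURLY = 0.004445    # $ per GB-hour
HOURS_PER_MONTH = 730

task_monthly = (0.25 * VCPU_HOURLY + 0.5 * GB_HOURLY) * HOURS_PER_MONTH

tasks = 5 * 2 * 3  # 5 services x 2 instances x 3 regions
compute_monthly = tasks * round(task_monthly)

print(round(task_monthly))  # 9
print(compute_monthly)      # 270
```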

EKS:

Cluster cost: $70 × 3 regions = $210/month
ECR storage: ~$5/month (same Docker images)
Compute (EC2 nodes):
  - Minimum 2× t3.medium per region = 6 instances
  - t3.medium = $0.0416/hour × 730 = ~$30/month
  - 6 × $30 = $180/month

Total: $210 + $5 + $180 = $395/month

Savings with ECS: $120/month, or $1,440/year
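Putting the two cost breakdowns side by side, using the rounded figures from the blocks above, the savings fall straight out:

```python
# Monthly totals for the 5-services / 3-regions scenario (rounded figures)
ecs_total = 0 + 5 + 270    # free cluster + ECR storage + Fargate compute
eks_total = 210 + 5 + 180  # 3 control planes + ECR storage + 6x t3.medium

monthly_savings = eks_total - ecs_total
print(ecs_total, eks_total)  # 275 395
print(monthly_savings * 12)  # 1440
```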

Plus with ECS you don't pay a DevOps engineer to manage Kubernetes :)

Case 2: Large Project with 30 Services in 1 Region

ECS:

Cluster: $0
Management: 30 separate ECS Services (hard to manage!)
Compute: depends on load

EKS:

Cluster: $70/month
Management: one cluster with namespaces, GitOps, Helm (easier!)
Compute: same + better resource utilization

Here EKS wins on management convenience. $70 pays for itself.

Time for Setup and Maintenance

Real numbers from my experience:

Initial setup:

  • ECS: 4 hours (Terraform + tests + Docker builds)
  • EKS: 2 days (cluster + addons + monitoring setup + Docker builds)

Weekly maintenance:

  • ECS: ~30 minutes (check logs, apply updates)
  • EKS: ~2 hours (updates, cluster health checks, monitoring)

Platform updates:

  • ECS: automatic
  • EKS: need to update the control plane at least once a year (takes half a day with tests)

Decision Checklist: What to Choose?

So here's a simple flowchart for decision making:

Choose ECS if:

✅ You have less than 10-15 microservices
✅ Project is AWS only (no multi-cloud plans)
✅ Team doesn't know Kubernetes (and doesn't want to learn)
✅ Need to launch quickly (MVP startup)
✅ Budget is limited
✅ Simple application without complex dependencies
✅ Multi-regional deploy (save on clusters)
✅ Comfortable with Docker basics

Choose EKS if:

✅ 20 or more microservices
✅ Need portability (multi-cloud, hybrid)
✅ Team knows Kubernetes
✅ Need advanced features (service mesh, operators)
✅ GPU workloads for ML/AI
✅ Already using Kubernetes elsewhere
✅ Complex microservices architecture
✅ Want access to Kubernetes ecosystem
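The two checklists above boil down to a handful of questions. Here's a toy decision helper; the thresholds are this article's rules of thumb, not hard limits:

```python
def choose_platform(services: int, multi_cloud: bool = False,
                    team_knows_k8s: bool = False,
                    needs_k8s_features: bool = False) -> str:
    """Rough ECS-vs-EKS rule of thumb from the checklists above."""
    if multi_cloud or needs_k8s_features:
        return "EKS"      # portability or advanced features force Kubernetes
    if services >= 20:
        return "EKS"      # scale is where the $70/month pays for itself
    if services <= 15 and not team_knows_k8s:
        return "ECS"      # small, AWS-only, no K8s expertise: keep it simple
    return "either"       # gray zone: 15-20 services, or K8s skills on hand

print(choose_platform(5))                        # ECS
print(choose_platform(30, team_knows_k8s=True))  # EKS
print(choose_platform(3, multi_cloud=True))      # EKS
```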

Middle Ground

You can start with ECS and migrate later!

Many companies do this:

  1. Start on ECS (fast and cheap)
  2. Grow to 15-20 services
  3. Kubernetes developers join the team
  4. Gradually migrate to EKS

This is normal evolution. Don't go Kubernetes just "because it's cool".

Conclusions

Here's what's important to remember:

ECS is not a "second-rate" option. It's a full-fledged solution that handles many tasks excellently. Yes, EKS is more powerful, but most projects simply don't need those extra capabilities.

Main ECS advantages:

  • Free control plane (save $70-210+ per month)
  • Simplicity and launch speed
  • Less operational overhead
  • Native AWS integration
  • Perfect for multi-regional deployment of small services

When EKS is really needed:

  • Large scale (20+ services)
  • Code portability
  • Advanced Kubernetes features
  • Already have expertise in team

My advice: don't chase the hype. Start with ECS if the task allows. Save time, money, and nerves. And when you really grow into Kubernetes, then migrate.

Kubernetes is like a Ferrari: a cool car, but for a trip to the store a regular Toyota works fine. And it uses less gas 😄

Final Thoughts

The choice between ECS and EKS isn't about "better" or "worse" - it's about the right tool for the job.

Start simple. ECS lets you ship fast without the Kubernetes learning curve. Your Docker skills transfer directly. AWS handles the orchestration.

As you grow, reassess. When you hit 15-20 services or need multi-cloud, EKS makes sense. But many successful companies run production on ECS for years.

Remember: complexity is a cost. Every abstraction layer you add costs time, money, and mental overhead. Sometimes the best architecture is the simplest one that works.

Both platforms use Docker. Both run containers. Both scale. The question is: how much complexity do you actually need?

Choose wisely!

