Data Tech Bridge

AWS Container Services Demystified: A Coffee Shop Conversation

Table of Contents

  1. The Foundation: Understanding the Players
  2. Enter the Kubernetes Ecosystem
  3. A Real-World Scenario
  4. The Newcomer: App Runner
  5. The Supporting Cast: ECR and More
  6. Cost Considerations
  7. Making the Decision
  8. Real-World Migration Story
  9. Practical Tips and Gotchas
  10. Monitoring and Observability
  11. Advanced Patterns
  12. The Future: Where Things Are Heading
  13. Wrapping Up: Your Action Plan
  14. Quick Reference Guide
  15. Additional Resources

Picture this: It's a rainy Tuesday afternoon at a downtown coffee shop. Alex, a developer working on modernizing their company's applications, sits down with Sam, an AWS Solutions Architect who's been working with containers for years.


Alex: sips coffee Okay, Sam, I need your help. My manager keeps talking about "containerizing everything," and I'm drowning in AWS acronyms. ECS, EKS, Fargate... what's the deal with all these services?

Sam: laughs I hear you! It can be overwhelming. But here's the thing – AWS didn't create all these services to confuse you. Each one solves different problems. Think of it like choosing transportation: sometimes you need a bicycle, sometimes a car, sometimes a train. They all get you places, but the journey and requirements are different.

Alex: Okay, I like that analogy. So let's start simple – what exactly are we talking about here?

Sam: At the core, we're talking about running containerized applications – think Docker containers – in the cloud. AWS gives you several ways to do this, and they fall into a few categories:

  1. Orchestration engines – the brains that decide where containers run (ECS and EKS)
  2. Compute options – where containers actually execute (EC2 or Fargate)
  3. Supporting services – things like ECR for storing images and App Runner for simple deployments

Alex: Wait, so orchestration and compute are different things?

Sam: Exactly! This is crucial to understand. The orchestration engine is like an air traffic controller – it decides which containers should run, where they should run, and manages their lifecycle. The compute layer is the actual runway and planes – the infrastructure where containers execute.

The Foundation: Understanding the Players

Alex: Alright, break down the main services for me. What's ECS?

Sam: Amazon ECS – Elastic Container Service – is AWS's homegrown container orchestration platform. Think of it as AWS saying, "We built a system specifically optimized for running on AWS."

The architecture is pretty straightforward:

  • You define tasks (one or more containers that work together)
  • You create services (ensuring a certain number of tasks are always running)
  • ECS places these tasks on your compute infrastructure

Alex: And that compute infrastructure – that's where EC2 vs. Fargate comes in?

Sam: snaps fingers Now you're getting it! With ECS, you can choose:

ECS on EC2: You manage a cluster of EC2 instances. You're responsible for:

  • Patching the OS
  • Scaling the cluster
  • Optimizing instance utilization
  • But you get full control and potentially lower costs at scale

ECS on Fargate: AWS manages everything for you:

  • No servers to manage
  • Pay per task, not per instance
  • Automatic scaling of infrastructure
  • But you pay a bit more for the convenience

Alex: So Fargate is like... serverless containers?

Sam: Exactly! Fargate is the serverless compute engine for containers. It works with both ECS and EKS. You just say "run this container with these resources," and Fargate handles the rest.

Let me show you a simple architecture diagram in words:

ECS Architecture:

┌─────────────────────────────────────────┐
│          Application Load Balancer       │
└──────────────┬──────────────────────────┘
               │
    ┌──────────▼──────────┐
    │    ECS Service       │
    │  (Desired Count: 3)  │
    └──────────┬───────────┘
               │
    ┌──────────▼───────────────────────┐
    │         ECS Cluster              │
    │  ┌────────────┐  ┌────────────┐ │
    │  │   Task 1   │  │   Task 2   │ │
    │  │ [Container]│  │ [Container]│ │
    │  └────────────┘  └────────────┘ │
    │                                  │
    │  Compute: EC2 or Fargate        │
    └──────────────────────────────────┘

Alex: Okay, that makes sense. So what about EKS? I keep hearing about Kubernetes.

Enter the Kubernetes Ecosystem

Sam: leans forward Ah, Amazon EKS – Elastic Kubernetes Service. This is AWS's managed Kubernetes offering.

Kubernetes is the industry-standard container orchestration platform. It was created at Google, open-sourced, and is now maintained by the CNCF with massive community support. If ECS is AWS's proprietary language, Kubernetes is like English – spoken everywhere.

Alex: So why would I choose one over the other?

Sam: Great question! Let me paint you some scenarios:

Choose ECS when:

  • You're all-in on AWS
  • You want simpler management and fewer concepts to learn
  • You need deep AWS service integration
  • Your team is small and wants something straightforward

Choose EKS when:

  • You need multi-cloud portability
  • You have existing Kubernetes expertise
  • You want access to the huge Kubernetes ecosystem
  • You need advanced features like custom scheduling, operators, etc.

Alex: What do you mean by "more concepts to learn" with Kubernetes?

Sam: chuckles Okay, so ECS has: Clusters, Services, Tasks, Task Definitions. Pretty straightforward.

Kubernetes has: Clusters, Namespaces, Pods, Deployments, ReplicaSets, StatefulSets, DaemonSets, Services, Ingresses, ConfigMaps, Secrets, ServiceAccounts, and I'm just getting started...
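To give you a taste of that vocabulary, here's a minimal sketch of just two of those concepts working together – a Deployment that manages Pods, exposed through a Service. All names, the namespace, and the image are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: demo
spec:
  replicas: 3                  # the Deployment keeps 3 Pods running via a ReplicaSet
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:v1.0
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
  namespace: demo
spec:
  selector:
    app: my-app                # routes traffic to Pods with this label
  ports:
    - port: 80
      targetPort: 8080
```

And that's before you touch Ingresses, ConfigMaps, or Secrets.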

Alex: eyes widen Whoa.

Sam: Right? Kubernetes is incredibly powerful, but it's also complex. The architecture looks like this:

EKS Architecture:

┌────────────────────────────────────────────┐
│           AWS Load Balancer                │
└──────────────┬─────────────────────────────┘
               │
    ┌──────────▼──────────┐
    │    Ingress/Service   │
    └──────────┬───────────┘
               │
    ┌──────────▼─────────────────────────────┐
    │         EKS Cluster                     │
    │                                         │
    │  ┌────────────────────────────────┐    │
    │  │     Control Plane (Managed)    │    │
    │  │    - API Server                │    │
    │  │    - Scheduler                 │    │
    │  │    - Controller Manager        │    │
    │  └────────────────────────────────┘    │
    │               │                         │
    │  ┌────────────▼──────────────────┐     │
    │  │     Worker Nodes              │     │
    │  │  ┌─────────┐    ┌─────────┐  │     │
    │  │  │  Pod 1  │    │  Pod 2  │  │     │
    │  │  │[Contain]│    │[Contain]│  │     │
    │  │  └─────────┘    └─────────┘  │     │
    │  │                               │     │
    │  │  Compute: EC2 or Fargate     │     │
    │  └───────────────────────────────┘     │
    └─────────────────────────────────────────┘

Alex: Wait, so with EKS, AWS manages the control plane?

Sam: Exactly! That's the "managed" part of managed Kubernetes. You don't worry about the master nodes, API server, etcd database, etc. AWS handles all that. You just work with your worker nodes and deploy your applications.

A Real-World Scenario

Alex: Let me give you a real scenario. We have a monolithic application we want to break into microservices. About 10-15 services, some need to scale independently. What would you recommend?

Sam: thinks for a moment Tell me more. What's your team's expertise? Have they worked with containers before?

Alex: We've played with Docker locally. No one's touched Kubernetes. We have about 4 developers and one ops person.

Sam: In that case, I'd strongly recommend ECS with Fargate. Here's why:

  1. Lower learning curve: Your team can be productive quickly
  2. No infrastructure management: With Fargate, your ops person isn't managing clusters
  3. Independent scaling: Each ECS service can scale independently
  4. Cost-effective at your scale: With 10-15 services, Fargate's pricing makes sense
  5. AWS integration: Easy connections to RDS, S3, CloudWatch, etc.

Here's how I'd architect it:

Microservices on ECS + Fargate:

┌──────────────────────────────────────────────┐
│         Amazon CloudFront (CDN)              │
└──────────────┬───────────────────────────────┘
               │
┌──────────────▼───────────────────────────────┐
│     Application Load Balancer                │
│  - Path-based routing                        │
│  - /api/users → User Service                 │
│  - /api/orders → Order Service               │
└──────────┬──────────────┬────────────────────┘
           │              │
    ┌──────▼─────┐   ┌───▼─────────┐
    │User Service│   │Order Service│  ... (10-15 services)
    │            │   │             │
    │ ECS Task   │   │  ECS Task   │
    │ on Fargate │   │  on Fargate │
    └──────┬─────┘   └───┬─────────┘
           │             │
           │   ┌─────────▼─────────┐
           │   │  Amazon RDS       │
           └───►  (PostgreSQL)     │
               └───────────────────┘

┌──────────────────────────────────────────────┐
│         Amazon ECR                           │
│  (Container Image Registry)                  │
│  - user-service:v1.2.3                      │
│  - order-service:v2.1.0                     │
└──────────────────────────────────────────────┘

Alex: That looks manageable. But what if we grow? What if we have 100 services in two years?

Sam: Valid concern! Here's the beautiful part – you can migrate later if needed. But honestly, companies run hundreds of services on ECS successfully.

However, if you get to the point where you're:

  • Running on multiple clouds
  • Need advanced networking policies
  • Want to use Kubernetes-native tools and operators
  • Hiring people with Kubernetes skills

Then migrating to EKS makes sense. The containers themselves don't change much – it's mainly the orchestration layer.

The Newcomer: App Runner

Alex: I saw something called App Runner on AWS. Where does that fit?

Sam: grins Oh, App Runner! That's AWS's answer to "I just want to run my containerized app, I don't care about ANY of this orchestration stuff."

App Runner is the simplest option:

  • Point it to your container image or source code
  • It builds, deploys, and runs everything
  • Automatic HTTPS
  • Automatic scaling
  • Automatic deployments from source
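For source-based deploys, App Runner is driven by a small config file in your repo. A minimal `apprunner.yaml` sketch – the runtime and commands here are illustrative assumptions, not from any particular project:

```yaml
version: 1.0
runtime: nodejs16          # a managed runtime; build-from-Dockerfile is also supported
build:
  commands:
    build:
      - npm install
run:
  command: npm start
  network:
    port: 8080             # the port your app listens on
```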

Alex: That sounds... too easy?

Sam: It is easy! But it has limitations:

Use App Runner when:

  • Simple web apps or APIs
  • Don't need complex networking (VPC integration is limited)
  • Don't need to run background jobs or scheduled tasks
  • Want zero infrastructure management

Don't use App Runner when:

  • Complex microservices architectures
  • Need VPC-only resources
  • Need fine-grained control over networking
  • Running batch jobs or worker processes

Think of it this way:

Complexity vs Control Spectrum:

App Runner          ECS/Fargate          ECS/EC2          EKS
   │                    │                   │              │
   ▼                    ▼                   ▼              ▼
Easiest              Easy              Moderate        Complex
Least Control    Moderate Control   More Control   Maximum Control

The Supporting Cast: ECR and More

Alex: You mentioned ECR. That's just where images live, right?

Sam: Yep! Amazon ECR – Elastic Container Registry – is AWS's Docker registry. Like Docker Hub, but:

  • Private by default
  • Integrated with IAM for security
  • Scan images for vulnerabilities
  • Automatic encryption
  • Lifecycle policies to clean up old images
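Those lifecycle policies are plain JSON attached to a repository. A sketch that expires everything beyond the ten most recent images – the count and description are up to you:

```json
{
  "rules": [
    {
      "rulePriority": 1,
      "description": "Keep only the 10 most recent images",
      "selection": {
        "tagStatus": "any",
        "countType": "imageCountMoreThan",
        "countNumber": 10
      },
      "action": { "type": "expire" }
    }
  ]
}
```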

Every container service needs a registry, and ECR is the natural choice on AWS.

Container Workflow with ECR:

Developer → Builds Image → Pushes to ECR → ECS/EKS Pulls → Runs Container

┌──────────────┐
│  Developer   │
│   Machine    │
└──────┬───────┘
       │ docker build
       │ docker push
       ▼
┌──────────────────┐
│   Amazon ECR     │
│                  │
│  repo/app:v1.0   │
│  repo/app:v1.1   │
│  repo/app:latest │
└──────┬───────────┘
       │ ECS/EKS pulls
       ▼
┌──────────────────┐
│  Running Tasks/  │
│      Pods        │
└──────────────────┘

Alex: What about security? We handle sensitive data.

Sam: Great question! All these services integrate with AWS security features:

  1. IAM Roles: Assign permissions to containers, not instances
  2. Secrets Manager: Store database passwords, API keys
  3. VPC: Keep containers in private subnets
  4. Security Groups: Control network traffic
  5. ECR Image Scanning: Find vulnerabilities before deployment

Example ECS task with security:

TaskDefinition:
  TaskRole: arn:aws:iam::xxx:role/MyAppRole  # What the app can do
  ExecutionRole: arn:aws:iam::xxx:role/ECSExecution  # What ECS can do
  NetworkMode: awsvpc  # Each task gets its own network interface
  ContainerDefinitions:
    - Name: my-app
      Image: xxx.ecr.region.amazonaws.com/my-app:v1.0
      Secrets:
        - Name: DB_PASSWORD
          ValueFrom: arn:aws:secretsmanager:region:xxx:secret:db-pass

Cost Considerations

Alex: Let's talk money. What's this going to cost?

Sam: pulls out napkin and pen

The pricing varies significantly:

ECS on EC2:

  • Pay for EC2 instances (24/7)
  • No ECS orchestration fees
  • Best for: steady-state workloads, high utilization
  • Example: t3.medium = ~$30/month, can run multiple containers

ECS on Fargate:

  • Pay per vCPU and memory per second
  • No idle costs
  • Best for: variable workloads, bursts, small scale
  • Example: 0.25 vCPU, 0.5GB RAM = roughly $9-15/month if running 24/7, depending on region

EKS:

  • $0.10/hour for control plane (~$73/month per cluster)
  • Plus compute costs (EC2 or Fargate)
  • Best when the management value exceeds the control plane cost

App Runner:

  • Pay for active compute (vCPU + memory); idle provisioned instances are billed for memory only
  • Plus build time when deploying from source
  • Best for apps with variable traffic

Alex: So for my 15 microservices...

Sam: calculating With Fargate, assuming each service averages 0.25 vCPU and 0.5GB:

  • roughly $9-15/service/month depending on region = about $150-225/month total
  • Plus ALB (~$16/month) and data transfer

With EC2 cluster, you'd probably run:

  • 2-3 t3.small instances = ~$15-25 each = $45-75/month
  • But you need to manage them
  • Better utilization if you can pack containers efficiently

For your case, Fargate makes sense. You get predictable costs and zero management overhead.

Making the Decision

Alex: Okay, let me see if I've got this straight. counts on fingers

Sam: Go for it!

Alex:

  • App Runner: I have a simple web app, just deploy it
  • ECS + Fargate: I have microservices, want it managed, AWS-only is fine
  • ECS + EC2: I have steady workloads, want to optimize costs, willing to manage infrastructure
  • EKS: I need Kubernetes for portability or team expertise, or I'm using K8s ecosystem tools

Sam: slow clap Perfect! That's exactly it.

Let me add a few more decision points:

Go with ECS if:

  • Starting fresh on AWS
  • Small to medium team
  • Need quick time-to-market
  • Want AWS-native integrations
  • Don't need multi-cloud

Go with EKS if:

  • Need multi-cloud portability
  • Have Kubernetes expertise
  • Want to use Helm, Operators, etc.
  • Need advanced K8s features (Custom Resource Definitions, complex scheduling)
  • Part of a larger organization standardizing on K8s

Alex: What about mixing them?

Sam: Totally valid! Many companies run:

  • ECS for simple internal tools
  • EKS for complex applications
  • App Runner for quick prototypes

There's no rule saying you must choose one.

Real-World Migration Story

Alex: Tell me about a real migration. What have you seen?

Sam: settles back Oh, I have a good one. A retail company I worked with:

Starting point:

  • Traditional monolith on EC2
  • Slow deployments (hours)
  • Scaling was manual and slow

Phase 1: Containerize and ECS

  • Broke monolith into 8 services
  • Started with ECS on EC2
  • Deployment time: 10 minutes
  • Auto-scaling worked

Phase 2: Move to Fargate

  • As confidence grew, moved to Fargate
  • Removed ops overhead
  • Costs increased slightly but ops time decreased 70%

Phase 3: Add EKS for specific needs

  • Needed to run Apache Kafka
  • Used Strimzi operator on EKS
  • Kept ECS for application services
  • Best of both worlds

Timeline: 18 months total. Gradual, safe, successful.

Alex: That's reassuring. I don't have to decide everything upfront.

Sam: Exactly! Start simple, evolve as needed. That's the beauty of containers – the applications are portable. The orchestration is just the control layer.

Practical Tips and Gotchas

Alex: Any gotchas I should watch out for?

Sam: laughs Oh, plenty! Let me share the top ones:

ECS Gotchas:

  1. Task Definition revisions: Every change creates a new revision. You'll accumulate hundreds eventually – deregister (and delete) old ones periodically
  2. Service quotas: There's a default limit on tasks per service. You can raise it via Service Quotas, but design accordingly
  3. Load balancer stickiness: Be careful with session affinity and health checks

EKS Gotchas:

  1. Version upgrades: K8s versions expire. Plan for regular cluster upgrades
  2. Node group management: Even with managed node groups, you need to plan updates
  3. Cost visibility: Kubernetes makes it harder to see per-service costs
  4. Add-ons: Don't forget about CoreDNS, kube-proxy, VPC CNI versions

Fargate Gotchas:

  1. Fixed CPU/memory combinations: Can't do 0.3 vCPU – must be 0.25, 0.5, 1, etc., each with a set range of memory sizes
  2. Startup time: Cold starts can be 60+ seconds
  3. Limited persistent storage: EFS works everywhere; EBS volume support is newer and ECS-only
  4. Cost at scale: Cheaper than EC2 up to a point, then well-utilized EC2 wins

Alex: The startup time could be an issue for us...

Sam: Here's a pro tip: Keep a minimum task count. If you need instant response, always have at least 1-2 tasks warm. They're cheap insurance.
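That minimum-count insurance is just ECS service auto scaling with a floor. In rough shorthand (a sketch of the idea, not literal CloudFormation):

```yaml
Service:
  DesiredCount: 2            # start above the warm floor
ScalingPolicy:
  MinCapacity: 2             # never scale below 2 warm tasks
  MaxCapacity: 10
  TargetTrackingPolicy:
    MetricType: ECSServiceAverageCPUUtilization
    TargetValue: 60          # add tasks before existing ones saturate
```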

Also, use Application Load Balancer health checks wisely:

HealthCheck:
  GracePeriodSeconds: 60  # Give containers time to start
  HealthyThreshold: 2      # Don't be too eager
  UnhealthyThreshold: 3    # But catch real failures
  Interval: 30

Monitoring and Observability

Alex: How do I know if everything's working?

Sam: Critical question! Container monitoring is different from traditional monitoring because things are ephemeral.

CloudWatch Container Insights:

  • Automatic metrics for ECS and EKS
  • CPU, memory, network at task/pod level
  • Log collection and analysis
  • Cost: billed by metrics and logs ingested – small per container, but it adds up

Basic monitoring stack:

┌─────────────────────────────────────┐
│      Application Containers         │
│   (ECS Tasks or K8s Pods)          │
└──────────┬─────────────┬────────────┘
           │             │
           │             │
    ┌──────▼──────┐  ┌──▼─────────┐
    │ CloudWatch  │  │   X-Ray    │
    │   Logs      │  │ (Tracing)  │
    └──────┬──────┘  └──┬─────────┘
           │             │
           └──────┬──────┘
                  │
         ┌────────▼─────────┐
         │  CloudWatch      │
         │  Dashboards      │
         └──────────────────┘

Alex: What should I monitor?

Sam: Start with these key metrics:

Container metrics:

  • CPU utilization (aim for 60-70% average)
  • Memory utilization (watch for memory leaks)
  • Task/Pod restarts (should be rare)
  • Deployment success rate

Application metrics:

  • Request rate
  • Error rate
  • Response time
  • Saturation (queue depth, etc.)

Infrastructure metrics (if using EC2):

  • Cluster utilization
  • Available capacity
  • Instance health

Example CloudWatch alarm:

HighCPUAlarm:
  MetricName: CPUUtilization
  Namespace: AWS/ECS
  Statistic: Average
  Period: 300
  EvaluationPeriods: 2
  Threshold: 80
  AlarmActions:
    - SNS notification
    - Auto-scaling policy

Advanced Patterns

Alex: feeling confident What are some advanced patterns I might grow into?

Sam: Love the forward thinking! Here are patterns I see successful teams use:

1. Blue/Green Deployments:

Current State:
- ALB → Target Group Blue (running v1.0)

Deployment:
- Deploy v1.1 to Target Group Green
- Test Green
- Switch ALB to Green
- Keep Blue running for rollback

ECS and EKS both support this natively!
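On ECS, blue/green deployments are typically driven by CodeDeploy through a small appspec file. A sketch with placeholder names and port:

```yaml
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: <TASK_DEFINITION_ARN>   # the new (green) revision
        LoadBalancerInfo:
          ContainerName: my-app
          ContainerPort: 8080
```

CodeDeploy shifts the ALB target group from blue to green and keeps blue around for the rollback window.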

2. Service Mesh (for EKS):

Without Service Mesh:
Service A → Service B (how to secure? route? retry?)

With Service Mesh (AWS App Mesh):
Service A → Envoy Proxy → Envoy Proxy → Service B
           (mTLS, retry, routing, observability)

3. Hybrid Compute:

┌──────────────────────────────┐
│      ECS Service             │
│                              │
│  ┌──────────┐  ┌──────────┐ │
│  │  Tasks   │  │  Tasks   │ │
│  │   on     │  │   on     │ │
│  │ Fargate  │  │   EC2    │ │
│  │ (burst)  │  │(baseline)│ │
│  └──────────┘  └──────────┘ │
└──────────────────────────────┘

Use EC2 for baseline, Fargate for peaks!

4. Multi-Region Active-Active:

       ┌──────────────────────┐
       │       Route 53       │
       │    (Geolocation)     │
       └───┬──────────────┬───┘
           │              │
     ┌─────▼─────┐  ┌─────▼─────┐
     │  Region   │  │  Region   │
     │  us-east  │  │  eu-west  │
     │           │  │           │
     │  ECS/EKS  │  │  ECS/EKS  │
     │ Services  │  │ Services  │
     └─────┬─────┘  └─────┬─────┘
           │              │
     ┌─────▼────┐   ┌─────▼────┐
     │   RDS    │◄─►│   RDS    │
     │ Primary  │   │ Replica  │
     └──────────┘   └──────────┘

The Future: Where Things Are Heading

Alex: What's coming next? Should I be prepared for anything?

Sam: Good question. Here's what I'm watching:

1. Serverless containers everywhere:

  • Fargate improving (faster cold starts, more features)
  • Lambda + containers (run container images on Lambda)
  • More abstraction, less infrastructure

2. GitOps becoming standard:

  • Flux, ArgoCD for EKS
  • ECS deploy pipelines getting more sophisticated
  • Infrastructure as code is table stakes

3. FinOps and cost optimization:

  • Graviton (ARM) processors for roughly 20% lower compute costs
  • Fargate Spot for interruption-tolerant workloads at a steep discount
  • Better tooling for right-sizing

4. Enhanced security:

  • Runtime security scanning
  • Automated vulnerability patching
  • Zero-trust networking

Alex: Graviton?

Sam: AWS's ARM-based processors. You can run containers on them for significant cost savings. Works with both ECS and EKS. Highly recommend for new projects – most modern applications just work.

TaskDefinition:
  RuntimePlatform:
    CpuArchitecture: ARM64  # Use Graviton
    OperatingSystemFamily: LINUX

Wrapping Up: Your Action Plan

Alex: Okay, this has been incredibly helpful. Give me the 30-second elevator pitch for my manager.

Sam: stands up, as if presenting

"We'll start with Amazon ECS on Fargate for our microservices migration. It gives us container orchestration without infrastructure management, scales automatically, and integrates seamlessly with our AWS services. Costs are predictable at roughly $150-225/month for our initial services. We can deploy new services in minutes instead of days, scale independently, and have zero servers to patch. If we need Kubernetes later for multi-cloud or specific tools, we can migrate, but for now, this gets us to market fast with minimal operational overhead."

Alex: applauds Perfect! One last thing – how do I actually get started?

Sam: Here's your week one action plan:

Day 1-2: Setup

# Configure AWS credentials and default region
aws configure

# Create an ECR repository
aws ecr create-repository --repository-name my-app

# Build, tag, and push the image
# (ECR_URL is your registry URI, e.g. <account-id>.dkr.ecr.<region>.amazonaws.com)
docker build -t my-app .
aws ecr get-login-password | docker login --username AWS --password-stdin ECR_URL
docker tag my-app:latest ECR_URL/my-app:latest
docker push ECR_URL/my-app:latest

Day 3: Create ECS cluster

  • Use AWS Console (it's easier the first time)
  • Create cluster → Fargate
  • Name it, done!

Day 4-5: Deploy first service

  • Create Task Definition
  • Define container, image, CPU, memory
  • Create Service
  • Attach Load Balancer
  • Set desired count
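Those Day 4-5 steps start from a task definition. Here's a minimal Fargate example in JSON – the family name, image, and port are illustrative; register it with `aws ecs register-task-definition --cli-input-json file://taskdef.json`:

```json
{
  "family": "my-app",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "my-app",
      "image": "ECR_URL/my-app:latest",
      "portMappings": [{ "containerPort": 8080 }],
      "essential": true
    }
  ]
}
```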

Alex: And if I get stuck?

Sam: AWS documentation is actually pretty good for ECS. Also:

  • AWS re:Post for Q&A
  • GitHub examples from AWS
  • This conversation! taps notebook

But here's my phone number. Text me if you're stuck. I love seeing people succeed with containers.

Alex: smiling Thanks, Sam. I feel way more confident now. It's not as scary as I thought.

Sam: That's the secret – it's all just running your code in little boxes. We've just given those boxes fancy names and smart tools to manage them.

Start simple with ECS and Fargate. Get something working. Learn. Iterate. You'll be fine.

Alex: One more coffee before I head back?

Sam: Make it a double espresso. You've got containerizing to do!


Quick Reference Guide

Your Decision Tree:

Do you just want to run a web app simply?
├─ YES → Try App Runner first
└─ NO → Continue

Do you need Kubernetes specifically?
├─ YES → EKS (+ Fargate or EC2)
└─ NO → Continue

Do you have steady, predictable workloads & want to optimize costs?
├─ YES → ECS on EC2
└─ NO → ECS on Fargate

Is your team small or new to containers?
├─ YES → Definitely ECS on Fargate
└─ NO → Either works, evaluate based on requirements

Service Comparison Table:

| Feature | App Runner | ECS + Fargate | ECS + EC2 | EKS + Fargate | EKS + EC2 |
| --- | --- | --- | --- | --- | --- |
| Management Overhead | Lowest | Low | Medium | Medium | High |
| Flexibility | Low | Medium | High | High | Highest |
| Learning Curve | Easiest | Easy | Moderate | Steep | Steepest |
| Cost (small scale) | $$ | $$ | $ | $$$ | $$ |
| Cost (large scale) | $$$ | $$ | $ | $$$ | $ |
| Multi-cloud Portable | No | No | No | Yes | Yes |
| Best For | Simple apps | Most workloads | Cost optimization | K8s ecosystem | K8s + control |

Key Terms Cheatsheet:

  • Cluster: Group of compute resources
  • Task (ECS) / Pod (K8s): One or more containers running together
  • Service: Maintains desired number of tasks/pods
  • Task Definition / Deployment: Blueprint for running containers
  • Fargate: Serverless compute for containers
  • ECR: Container image registry
  • Load Balancer: Distributes traffic across containers

Alex (texting Sam later that evening): "Just deployed my first container on ECS! 🎉"

Sam: "Told you! Now do 14 more 😄"

Alex: "Tomorrow. Tonight I'm celebrating. Thanks again!"

Sam: "Anytime. Welcome to the container club."


And that's how Alex's company successfully modernized their applications, one container at a time. The journey from confusion to clarity doesn't require a computer science degree – just a good conversation, some coffee, and the willingness to start simple.

The End... or rather, The Beginning! 🚀


Additional Resources

Hands-on Labs:

  • AWS Workshop: ECS Workshop (ecsworkshop.com)
  • AWS Workshop: EKS Workshop (eksworkshop.com)

Remember: Every expert was once a beginner. Start where you are, use what you have, learn as you go. Happy containerizing! ☁️🐳
