Building Serverless Microservices on AWS with ECS Fargate, ECR, and Terraform

In the evolving world of cloud-native architectures, serverless doesn’t always mean Lambda. With Amazon ECS Fargate, you can run containers without managing servers, combining the scalability of containers with the simplicity of serverless operations.

In this article, we’ll build a serverless microservices architecture using AWS ECS Fargate, ECR, and Terraform.
Our setup includes two Django-based microservices:

Reader Service – handles read operations from a shared database.

Writer Service – handles write operations to the same database.

Both services connect to a shared Amazon RDS PostgreSQL instance. The entire infrastructure is defined and provisioned through Infrastructure as Code (IaC) using Terraform.
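
The whole stack is driven by Terraform, so it helps to pin down the provider skeleton first. Below is a minimal sketch; the region and version constraint are assumptions, so adjust them to your environment.

terraform {
  required_version = ">= 1.5"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  # Assumed region; use the region you plan to push your ECR images to
  region = "us-east-1"
}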

Key Components

Amazon ECS Fargate – Runs containerized Django microservices without provisioning EC2 instances.

Amazon ECR (Elastic Container Registry) – Hosts Docker images for both services.

Amazon RDS (PostgreSQL) – Central database for the microservices.

Application Load Balancer (ALB) – Distributes traffic between services.

Terraform – Automates infrastructure provisioning.

1. Containerizing the Microservices

Each Django service (Reader and Writer) is containerized with Docker. The Docker Compose file below builds both services along with a local PostgreSQL container, so you can test the containers locally before pushing anything to the cloud.

services:
  db:
    image: postgres:15
    container_name: postgres-db
    environment:
      - POSTGRES_DB=${DB_NAME:-library_db}
      - POSTGRES_USER=${DB_USER:-postgres}
      - POSTGRES_PASSWORD=${DB_PASSWORD:-password}
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER:-postgres}"]
      interval: 10s
      timeout: 5s
      retries: 5

  reader-service:
    container_name: reader-svc
    build: ./app/reader-service/reader
    ports:
      - "8000:8000"
    environment:
      - DB_USER=${DB_USER:-postgres}
      - DB_PASSWORD=${DB_PASSWORD:-password}
      - DB_NAME=${DB_NAME:-library_db}
      - DB_HOST=db
      - DB_PORT=5432
    depends_on:
      db:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health/"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    deploy:
      resources:
        limits:
          memory: 512M
        reservations:
          memory: 256M
    restart: unless-stopped

  writer-service:
    container_name: writer-svc
    build: ./app/writer-service/writer
    ports:
      - "8001:8000"
    environment:
      - DB_USER=${DB_USER:-postgres}
      - DB_PASSWORD=${DB_PASSWORD:-password}
      - DB_NAME=${DB_NAME:-library_db}
      - DB_HOST=db
      - DB_PORT=5432
    depends_on:
      db:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health/"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    deploy:
      resources:
        limits:
          memory: 512M
        reservations:
          memory: 256M
    restart: unless-stopped

volumes:
  postgres_data:

2. Building and Pushing Images to ECR

After testing locally, create an ECR repository for each service, authenticate Docker with your registry, then build, tag, and push each image.

aws ecr create-repository --repository-name reader-service
aws ecr create-repository --repository-name writer-service

# Authenticate Docker with your ECR registry
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com

docker build -t reader-service .
docker tag reader-service:latest <account-id>.dkr.ecr.<region>.amazonaws.com/reader-service:latest
docker push <account-id>.dkr.ecr.<region>.amazonaws.com/reader-service:latest

# Repeat the build, tag, and push steps for writer-service


3. Defining Infrastructure with Terraform

Your Terraform project will include these key modules:

VPC and Networking

RDS Instance

ECR Repositories (a sketch follows right after this list)

ECS Cluster and Task Definitions

Application Load Balancer
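
Starting with the ECR repositories, here is a minimal sketch. The reader resource name matches the aws_ecr_repository.reader reference used in the task definition below; the writer repository is the assumed counterpart.

resource "aws_ecr_repository" "reader" {
  name = "reader-service"

  image_scanning_configuration {
    scan_on_push = true # optional, but flags vulnerable images at push time
  }
}

resource "aws_ecr_repository" "writer" {
  name = "writer-service"

  image_scanning_configuration {
    scan_on_push = true
  }
}

The ECS cluster, task definition, and service for the Reader then look like this: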

resource "aws_ecs_cluster" "main" {
  name = "serverless-cluster"
}

resource "aws_ecs_task_definition" "reader_task" {
  family                   = "reader-service"
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  cpu                      = 256
  memory                   = 512
  execution_role_arn       = aws_iam_role.ecs_task_execution_role.arn
  task_role_arn            = aws_iam_role.ecs_task_execution_role.arn

  container_definitions = jsonencode([
    {
      name      = "reader"
      image     = "${aws_ecr_repository.reader.repository_url}:latest"
      essential = true
      portMappings = [
        {
          containerPort = 8000
          hostPort      = 8000
        }
      ]
      environment = [
        { name = "DB_HOST", value = aws_db_instance.app_db.address },
        { name = "DB_NAME", value = "app_db" },
        { name = "DB_USER", value = "admin" },
        { name = "DB_PASS", value = var.db_password }
      ]
    }
  ])
}

resource "aws_ecs_service" "reader_service" {
  name            = "reader-service"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.reader_task.arn
  desired_count   = 2
  launch_type     = "FARGATE"

  network_configuration {
    subnets          = aws_subnet.public[*].id
    assign_public_ip = true
    security_groups  = [aws_security_group.ecs_service.id]
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.reader.arn
    container_name   = "reader"
    container_port   = 8000
  }

  depends_on = [aws_lb_listener.frontend]
}


Repeat a similar configuration for the Writer Service.
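
The ECS service above points at a target group and listener that we have not defined yet. Here is a minimal sketch of the load-balancing resources; the resource names match the references used elsewhere in this article (aws_lb_target_group.reader, aws_lb_listener.frontend, aws_lb.application_load_balancer), while the ALB security group and VPC names are assumptions.

resource "aws_lb" "application_load_balancer" {
  name               = "microservices-alb"
  load_balancer_type = "application"
  subnets            = aws_subnet.public[*].id
  security_groups    = [aws_security_group.alb.id] # assumed ALB security group
}

resource "aws_lb_target_group" "reader" {
  name        = "reader-tg"
  port        = 8000
  protocol    = "HTTP"
  vpc_id      = aws_vpc.main.id # assumed VPC resource name
  target_type = "ip"            # required for Fargate tasks in awsvpc mode

  health_check {
    path = "/health/"
  }
}

resource "aws_lb_listener" "frontend" {
  load_balancer_arn = aws_lb.application_load_balancer.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.reader.arn
  }
}

In a real deployment you would typically add a listener rule (path- or host-based) that routes write traffic to the Writer service's target group.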

4. RDS Database and Networking

Both services share the same RDS instance, accessed via private networking for security.

# create database subnet group
resource "aws_db_subnet_group" "database_subnet_group" {
  name        = "${var.project_name}-${var.environment}-database-subnets"
  subnet_ids  = [aws_subnet.private_data_subnet_az1.id, aws_subnet.private_data_subnet_az2.id]
  description = "subnets for database instance"

  tags = {
    Name = "${var.project_name}-${var.environment}-database-subnets"
  }
}

# create the rds instance
resource "aws_db_instance" "database_instance" {
  engine                 = "postgres"
  engine_version         = "14"
  multi_az               = var.multi_az_deployment
  identifier             = var.database_instance_identifier
  username               = var.db_user
  password               = var.db_password
  db_name                = var.db_name
  instance_class         = var.database_instance_class
  allocated_storage      = 200
  db_subnet_group_name   = aws_db_subnet_group.database_subnet_group.name
  vpc_security_group_ids = [aws_security_group.database_security_group.id]
  availability_zone      = data.aws_availability_zones.available_zones.names[0]
  skip_final_snapshot    = true
  publicly_accessible    = var.publicly_accessible
}
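
The database security group referenced above is what enforces the private networking: it only admits PostgreSQL traffic from the ECS tasks' own security group. A minimal sketch (the VPC reference is an assumption):

resource "aws_security_group" "database_security_group" {
  name   = "${var.project_name}-${var.environment}-database-sg"
  vpc_id = aws_vpc.vpc.id # assumed VPC resource name

  ingress {
    description     = "PostgreSQL from the ECS service security group only"
    from_port       = 5432
    to_port         = 5432
    protocol        = "tcp"
    security_groups = [aws_security_group.ecs_service.id]
  }

  tags = {
    Name = "${var.project_name}-${var.environment}-database-sg"
  }
}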



Find the terraform template at: ECS Django Microservices

then run:

terraform init

terraform plan

terraform apply

5. Why ECS Fargate is Serverless

When most developers hear serverless, they immediately think of AWS Lambda. But serverless is more than just event-driven functions; it's a paradigm built on abstracting away infrastructure management, automating scaling, and paying only for what you use.
That’s exactly what Amazon ECS Fargate delivers for containerized workloads.

Serverless Principles and How Fargate Fits In

Let’s break it down:

No Server Management

With ECS Fargate, you don’t provision, scale, or patch EC2 instances.
You define your task requirements (vCPU, memory, networking), and AWS automatically runs your containers in a managed compute environment. This removes the operational overhead of managing ECS clusters on EC2, configuring capacity providers, or dealing with auto scaling groups.

Your focus shifts entirely to application logic, not infrastructure maintenance.

Automatic Scaling and Orchestration

Fargate integrates natively with ECS Service Auto Scaling and Application Auto Scaling.
When your traffic spikes, ECS launches more Fargate tasks; when traffic drops, it scales down automatically, no manual intervention needed.

This elasticity delivers cost-efficiency and stable performance without you having to manage the underlying compute fleet or EC2 lifecycle; you only declare the scaling targets and policies.
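
In Terraform, that declaration is an Application Auto Scaling target plus a target-tracking policy. Here is a minimal sketch for the Reader service, reusing the cluster and service names from section 3; the capacity bounds and CPU target are assumptions.

resource "aws_appautoscaling_target" "reader" {
  service_namespace  = "ecs"
  resource_id        = "service/${aws_ecs_cluster.main.name}/${aws_ecs_service.reader_service.name}"
  scalable_dimension = "ecs:service:DesiredCount"
  min_capacity       = 2
  max_capacity       = 6
}

resource "aws_appautoscaling_policy" "reader_cpu" {
  name               = "reader-cpu-target-tracking"
  policy_type        = "TargetTrackingScaling"
  service_namespace  = aws_appautoscaling_target.reader.service_namespace
  resource_id        = aws_appautoscaling_target.reader.resource_id
  scalable_dimension = aws_appautoscaling_target.reader.scalable_dimension

  target_tracking_scaling_policy_configuration {
    target_value = 70 # aim to keep average CPU around 70%

    predefined_metric_specification {
      predefined_metric_type = "ECSServiceAverageCPUUtilization"
    }
  }
}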

Pay-per-Use Model

You pay only for the vCPU and memory you allocate to each task, billed per second while the tasks are running.
There's no idle server cost, no over-provisioned clusters, and no unused capacity. This aligns with serverless economics: true usage-based billing.

Seamless AWS Integration

Fargate runs as part of the broader AWS ecosystem. It integrates smoothly with:

CloudWatch Logs for centralized logging,

CloudWatch Alarms for health and performance metrics,

IAM Roles for Tasks for fine-grained permissions (the execution role from section 3 is sketched below),

AWS X-Ray for distributed tracing, and

ECR for secure container image storage and retrieval.

This ecosystem synergy allows teams to build secure, observable, and automated serverless container platforms end-to-end.
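
As a concrete example of the IAM integration: the task definition in section 3 referenced aws_iam_role.ecs_task_execution_role without defining it. A minimal sketch is below; it attaches the AWS-managed AmazonECSTaskExecutionRolePolicy, which lets ECS pull images from ECR and write logs to CloudWatch (the role name is an assumption).

data "aws_iam_policy_document" "ecs_tasks_assume" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ecs-tasks.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "ecs_task_execution_role" {
  name               = "ecs-task-execution-role" # assumed name
  assume_role_policy = data.aws_iam_policy_document.ecs_tasks_assume.json
}

resource "aws_iam_role_policy_attachment" "ecs_task_execution" {
  role       = aws_iam_role.ecs_task_execution_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}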

Security and Isolation

Each Fargate task runs in its own isolated compute environment, with dedicated kernel and network interfaces.
Unlike EC2-backed ECS, where containers share the host's kernel, Fargate gives each task stronger isolation that is closer to the Lambda security boundary, making it well suited to multi-tenant and microservice architectures.

Infrastructure as Code Ready

By pairing Fargate with Terraform, you extend serverless principles to your entire infrastructure.
Your ECS services, networking, IAM roles, and monitoring configurations all live in code that is reproducible, version-controlled, and automated. This enables serverless operations not just in runtime, but also in provisioning and deployment.
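
One small sketch to make that concrete: storing the Terraform state in versioned, shared storage keeps provisioning reproducible across machines and pipelines. The bucket, key, and lock-table names below are hypothetical.

terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket" # hypothetical bucket name
    key            = "ecs-microservices/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks" # optional state locking
    encrypt        = true
  }
}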

6. Observability and Monitoring

Add logging and metrics using:

AWS CloudWatch Logs – For container logs

AWS CloudWatch Alarms – For task health and CPU/memory usage

Both are provisioned with Terraform:

# ECS CloudWatch Monitoring
resource "aws_cloudwatch_log_group" "ecs_logs" {
  name              = "/ecs/${var.project_name}"
  retention_in_days = 7
}

# ECS Service Alarms
resource "aws_cloudwatch_metric_alarm" "ecs_cpu_high" {
  alarm_name          = "${var.project_name}-ecs-cpu-high"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = "2"
  metric_name         = "CPUUtilization"
  namespace           = "AWS/ECS"
  period              = "300"
  statistic           = "Average"
  threshold           = "80"
  alarm_description   = "ECS CPU utilization is too high"
  alarm_actions       = [aws_sns_topic.alerts.arn]

  dimensions = {
    ServiceName = aws_ecs_service.reader_service.name
    ClusterName = aws_ecs_cluster.main.name
  }
}

resource "aws_cloudwatch_metric_alarm" "ecs_memory_high" {
  alarm_name          = "${var.project_name}-ecs-memory-high"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = "2"
  metric_name         = "MemoryUtilization"
  namespace           = "AWS/ECS"
  period              = "300"
  statistic           = "Average"
  threshold           = "80"
  alarm_description   = "ECS Memory utilization is too high"
  alarm_actions       = [aws_sns_topic.alerts.arn]

  dimensions = {
    ServiceName = aws_ecs_service.reader_service.name
    ClusterName = aws_ecs_cluster.main.name
  }
}

# ALB Target Health
resource "aws_cloudwatch_metric_alarm" "alb_unhealthy_targets" {
  alarm_name          = "${var.project_name}-alb-unhealthy-targets"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = "2"
  metric_name         = "UnHealthyHostCount"
  namespace           = "AWS/ApplicationELB"
  period              = "300"
  statistic           = "Average"
  threshold           = "0"
  alarm_description   = "ALB has unhealthy targets"
  alarm_actions       = [aws_sns_topic.alerts.arn]

  dimensions = {
    LoadBalancer = aws_lb.application_load_balancer.arn_suffix
  }
}

# SNS Topic for Alerts
resource "aws_sns_topic" "alerts" {
  name = "${var.project_name}-alerts"
}

resource "aws_sns_topic_subscription" "email" {
  topic_arn = aws_sns_topic.alerts.arn
  protocol  = "email"
  endpoint  = var.alert_email
}

Conclusion

This setup demonstrates that serverless microservices don’t have to rely on AWS Lambda.
With ECS Fargate, ECR, and Terraform, you can build production-grade, scalable, and cost-efficient systems while maintaining full control over your architecture.

It’s the perfect middle ground between full container control and the simplicity of serverless.

Happy coding!

Follow me for more demos and networking: Kevin Kiruri LinkedIn
Find the source code here: ECS Django Microservices
