DEV Community

Oluwatobiloba Oludare

How I Containerized and Deployed a Dynamic Web Application on AWS using Docker & Amazon ECS (Fargate)

As part of my DevOps learning journey, I worked on a hands-on project where I containerized a dynamic web application and deployed it on AWS using Docker and Amazon ECS (Fargate).

This project helped me move from theory to real-world cloud implementation, covering networking, security, containerization, and deployment.

In this article, I will walk through the exact steps I took.

Project Overview

The goal was to:

  • Containerize a dynamic web application using Docker
  • Store the image in Amazon ECR
  • Deploy the container using Amazon ECS (Fargate)
  • Use an Application Load Balancer for traffic routing
  • Securely connect to an RDS MySQL database
  • Configure a production-like cloud environment

Architecture Overview

User → ALB → ECS (Fargate Containers) → RDS (MySQL Database)

Supporting services:

  • VPC (Networking)
  • Security Groups (Access control)
  • Secrets Manager (Credentials)
  • ECR (Container registry)
  • Route 53 (DNS)

Explanation:
The user’s browser sends a request to the domain name, which Route 53 resolves. The request is routed to an Application Load Balancer in the public subnet, which forwards it through a target group to ECS containers running in private subnets. The container processes the request, talks to the RDS database if needed, and the response travels back through the load balancer to the user.

πŸ” Step 1: Create a Secure VPC

I started by creating a custom VPC with:

  • Public subnets (for ALB)
  • Private subnets (for ECS tasks and RDS)

This ensures that sensitive resources are not exposed to the internet.
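I built the VPC in the console, but the same layout can be sketched with the AWS CLI (the VPC ID, CIDR ranges, and AZ below are placeholders for your own network plan):

```shell
# Custom VPC for the application
aws ec2 create-vpc --cidr-block 10.0.0.0/16 \
  --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=app-vpc}]'

# Public subnet (for the ALB)
aws ec2 create-subnet --vpc-id vpc-0abc123 --cidr-block 10.0.1.0/24 \
  --availability-zone us-east-1a

# Private subnet (for ECS tasks and RDS)
aws ec2 create-subnet --vpc-id vpc-0abc123 --cidr-block 10.0.2.0/24 \
  --availability-zone us-east-1a
```

Repeat the subnet pair in a second Availability Zone so the ALB and RDS each span at least two AZs.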

Step 2: Configure Security Groups

I implemented strict access control:

  • ALB Security Group

    • Allow HTTP (80) and HTTPS (443) from the internet
  • Container Security Group

    • Allow traffic only from ALB
  • RDS Security Group

    • Allow MySQL (3306) only from:

      • ECS containers
      • Migration server
  • Migration Server (EC2)

    • SSH access only via controlled security group

This setup ensures a secure, production-like environment.
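The rules above can be sketched with the AWS CLI; the group IDs (`sg-alb`, `sg-container`, `sg-rds`) are placeholders for the IDs AWS generates:

```shell
# ALB SG: allow HTTP and HTTPS from the internet
aws ec2 authorize-security-group-ingress --group-id sg-alb \
  --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-alb \
  --protocol tcp --port 443 --cidr 0.0.0.0/0

# Container SG: allow traffic only from the ALB's security group
aws ec2 authorize-security-group-ingress --group-id sg-container \
  --protocol tcp --port 80 --source-group sg-alb

# RDS SG: allow MySQL only from the container SG
aws ec2 authorize-security-group-ingress --group-id sg-rds \
  --protocol tcp --port 3306 --source-group sg-container
```

Referencing a source security group instead of a CIDR is what keeps the chain tight: only traffic that already passed the previous layer gets through.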

Step 3: Set Up RDS MySQL Database

  • Created a DB subnet group using private subnets across multiple AZs
  • Launched a MySQL database (Free Tier)
  • Enabled auto-generated credentials
  • Attached the appropriate security group

I used MySQL because the application required structured data, relationships, and strong consistency. It works well with Amazon RDS and is easy to manage. The tradeoff is that it’s less flexible and harder to scale horizontally compared to NoSQL databases like MongoDB, so I would choose those alternatives for highly scalable or unstructured workloads.
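I did this in the RDS console, but roughly the same setup could be scripted as below (the identifiers, subnet IDs, and security group ID are placeholders; `--manage-master-user-password` has RDS generate the credentials and store them in Secrets Manager):

```shell
# Subnet group spanning private subnets in two AZs
aws rds create-db-subnet-group --db-subnet-group-name app-db-subnets \
  --db-subnet-group-description "Private subnets for RDS" \
  --subnet-ids subnet-priv-a subnet-priv-b

# Free Tier MySQL instance, not publicly accessible
aws rds create-db-instance --db-instance-identifier app-mysql \
  --engine mysql --db-instance-class db.t3.micro \
  --allocated-storage 20 \
  --master-username admin --manage-master-user-password \
  --db-subnet-group-name app-db-subnets \
  --vpc-security-group-ids sg-rds \
  --no-publicly-accessible
```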

Step 4: Store Credentials in AWS Secrets Manager

Instead of hardcoding credentials:

  • Stored database username & password securely
  • Created a named secret
  • Allowed access via IAM roles

This keeps credentials out of the codebase and follows AWS security best practices.
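A minimal sketch of this with the AWS CLI, assuming a secret named `app/db-credentials` (the name and values are placeholders):

```shell
# Store credentials as a named secret
aws secretsmanager create-secret --name app/db-credentials \
  --secret-string '{"username":"admin","password":"CHANGE_ME"}'

# Read them back at runtime (requires secretsmanager:GetSecretValue on the role)
aws secretsmanager get-secret-value --secret-id app/db-credentials \
  --query SecretString --output text
```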

Step 5: Data Migration Setup (EC2)

To migrate application data:

  • Launched an EC2 instance in a private subnet
  • Attached an IAM role with access to:

    • Secrets Manager
    • S3
  • Used user-data scripts to automate migration
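A hypothetical user-data sketch of that migration flow, assuming Amazon Linux 2, an S3 bucket called `my-app-bucket`, and the secret from Step 4 (bucket name, secret name, and RDS endpoint are all placeholders):

```shell
#!/bin/bash
# Install the MySQL client and jq for parsing the secret JSON
yum install -y mysql jq

# Pull the database credentials from Secrets Manager via the instance role
SECRET=$(aws secretsmanager get-secret-value --secret-id app/db-credentials \
  --query SecretString --output text)
DB_USER=$(echo "$SECRET" | jq -r .username)
DB_PASS=$(echo "$SECRET" | jq -r .password)

# Fetch the SQL dump from S3 and load it into RDS
aws s3 cp s3://my-app-bucket/migration.sql /tmp/migration.sql
mysql -h app-mysql.xxxxxx.us-east-1.rds.amazonaws.com \
  -u "$DB_USER" -p"$DB_PASS" < /tmp/migration.sql
```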

βš™οΈ Step 6: Configure AWS CLI

  • Created an IAM user with required permissions
  • Generated access keys
  • Configured locally using:

```shell
aws configure
```

Step 7: GitHub Integration

  • Generated an SSH key:

```shell
ssh-keygen -t ed25519
```
  • Added public key to GitHub
  • Created a private repository and uploaded application code
  • Generated a Personal Access Token (PAT) and stored it securely

🐳 Step 8: Dockerize the Application

  • Created a Dockerfile for the application
  • Structured project directory properly
  • Built Docker image
  • Pushed image to Amazon ECR

This step converts the app into a portable container.
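The ECR workflow follows the standard login/build/tag/push sequence; the account ID, region, and repository name below are placeholders:

```shell
# Authenticate Docker against the private ECR registry
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Build the image, tag it with the ECR repository URI, and push
docker build -t my-app .
docker tag my-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
```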

🎯 Step 9: Create Target Group

  • Target type: IP
  • Configured health checks:

    • HTTP codes: 200, 301, 302

Step 🔟: Create Application Load Balancer (ALB)

  • Internet-facing
  • Attached to public subnets
  • Configured:

    • HTTP → HTTPS redirect
    • HTTPS listener with SSL certificate
  • Linked to target group
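The two listeners can be sketched with the AWS CLI (`$ALB_ARN`, `$CERT_ARN`, and `$TG_ARN` stand in for the real ARNs):

```shell
# HTTP listener that permanently redirects to HTTPS
aws elbv2 create-listener --load-balancer-arn $ALB_ARN \
  --protocol HTTP --port 80 \
  --default-actions 'Type=redirect,RedirectConfig={Protocol=HTTPS,Port=443,StatusCode=HTTP_301}'

# HTTPS listener terminating TLS and forwarding to the target group
aws elbv2 create-listener --load-balancer-arn $ALB_ARN \
  --protocol HTTPS --port 443 \
  --certificates CertificateArn=$CERT_ARN \
  --default-actions Type=forward,TargetGroupArn=$TG_ARN
```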

Step 11: Create IAM Roles

Created multiple roles for:

  • ECS Task Execution
  • ECS Task Role (S3, Logs, Secrets access)
  • Load Balancer integration

This ensures least privilege access control.

Step 12: Create ECS Cluster (Fargate)

  • Launch type: Fargate (serverless)
  • Enabled Container Insights

Step 13: Create Task Definition

  • Linked to:

    • ECR image
    • Task role
    • Execution role
  • Defined container configuration
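A trimmed-down sketch of what such a task definition might look like (the account ID, ARNs, and image URI are placeholders; the `secrets` block injects the database password from Secrets Manager as an environment variable):

```json
{
  "family": "my-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "taskRoleArn": "arn:aws:iam::123456789012:role/ecsTaskRole",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
      "portMappings": [{ "containerPort": 80 }],
      "secrets": [
        {
          "name": "DB_PASSWORD",
          "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:app/db-credentials:password::"
        }
      ]
    }
  ]
}
```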

Step 14: Deploy ECS Service

  • Used Fargate launch type
  • Deployed in private subnets
  • Disabled public IP
  • Attached ALB
  • Enabled auto-scaling:

    • Min: 1
    • Max: 2
    • CPU target: 70%
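Target-tracking scaling with those numbers can be sketched like this (cluster and service names are placeholders):

```shell
# Register the service's desired count as a scalable target (1-2 tasks)
aws application-autoscaling register-scalable-target \
  --service-namespace ecs \
  --resource-id service/my-cluster/my-service \
  --scalable-dimension ecs:service:DesiredCount \
  --min-capacity 1 --max-capacity 2

# Scale to keep average CPU utilization around 70%
aws application-autoscaling put-scaling-policy \
  --service-namespace ecs \
  --resource-id service/my-cluster/my-service \
  --scalable-dimension ecs:service:DesiredCount \
  --policy-name cpu-70 --policy-type TargetTrackingScaling \
  --target-tracking-scaling-policy-configuration \
  '{"TargetValue":70.0,"PredefinedMetricSpecification":{"PredefinedMetricType":"ECSServiceAverageCPUUtilization"}}'
```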

Step 15: Configure Domain (Route 53)

  • Created DNS record
  • Mapped domain to ALB

Now the application is publicly accessible 🎉

Key Things I Learned

  • How to design a secure VPC architecture
  • Importance of security groups and IAM roles
  • How to containerize applications using Docker
  • Deploying containers using ECS Fargate
  • Using Secrets Manager for secure credential handling
  • Integrating multiple AWS services in a real project

Problem It Solved

This project addresses key challenges commonly found in traditional application deployments:

  • Lack of scalability → Applications fail under increased traffic
  • Single point of failure → Downtime when a server crashes
  • Security vulnerabilities → Public databases and hardcoded credentials
  • Manual infrastructure management → High operational overhead
  • Inconsistent environments → Deployment failures across stages

❌ Challenges I Faced

  • Debugging security group rules between ECS and RDS
  • Handling Docker build and push errors
  • Configuring ALB listeners correctly

Each challenge improved my troubleshooting skills significantly.

What I Would Improve Next

  • Add CI/CD pipeline (GitHub Actions)
  • Use Terraform for Infrastructure as Code
  • Implement monitoring with CloudWatch & alerts
  • Extend auto-scaling with request-based policies, beyond the CPU target already configured

Final Thoughts

This project helped me transition from theory to real-world DevOps practice. It gave me a deeper understanding of how cloud infrastructure, containers, and networking work together.

If you’re learning DevOps or AWS, I highly recommend building something like this; it changes everything.

👋 About Me

I’m currently transitioning into DevOps and sharing my learning journey publicly. Follow me for more hands-on cloud, Docker, and Kubernetes content.
