Oluwatofunmi okedeji
From GitHub to AWS ECS: A Step-by-Step Guide to Deploying Flask Apps with Docker and Docker Hub.

Deploying applications seamlessly is a crucial skill in today's software engineering and DevOps landscape. In this guide, I’ll show you how to deploy a Flask app using GitHub Actions, Docker Hub, and AWS ECS.
We’ll explore the repository structure, including the Dockerfile, docker-compose.yml, and cicd.yml, which collectively form the backbone of this deployment pipeline.
By the end of this tutorial, you’ll have a fully automated CI/CD pipeline for your Flask app that deploys updates directly to ECS.

Prerequisites
Before starting, ensure you have:

  1. A Flask application in a GitHub repository.
  2. A Docker Hub account and repository.
  3. An AWS account.
  4. Basic familiarity with Docker.
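
This guide assumes your repository contains a Flask app built with an application factory (the workflow later sets FLASK_APP="app:create_app()"). If you need a placeholder to follow along, a minimal sketch might look like this (the module name and route are illustrative):

```python
# app.py - minimal Flask application factory (illustrative placeholder)
from flask import Flask, jsonify

def create_app():
    """Application factory, matching the FLASK_APP="app:create_app()" setting used later."""
    app = Flask(__name__)

    @app.route("/")
    def index():
        # Simple health/landing endpoint so the deployed service has something to serve
        return jsonify(status="ok")

    return app
```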

The Flow (High Level)

  1. Write Code: You write and commit your Flask app code to GitHub.
  2. GitHub Actions: When you push your changes:
     • The workflow builds a Docker image.
     • The image is pushed to Docker Hub.
     • AWS ECS is updated to use the latest image.
  3. AWS ECS: ECS pulls the Docker image from Docker Hub and runs your app in the cloud.
  4. Access Your App: Your app is live and accessible via a URL or public IP.

Why This Pipeline Works Well

  • GitHub: Centralized code repository with version control.
  • Docker: Standardizes your app for consistent performance.
  • Docker Hub: Reliable storage for container images.
  • GitHub Actions: Automates the entire process, saving time and reducing human error.

Step 1: Creating IAM Policies for ECS Deployment

To ensure your ECS deployment has the necessary permissions, follow the steps below to create two critical IAM policies. These policies allow ECS to pull images from Docker Hub or Amazon ECR and create CloudWatch log streams.

Steps to Create Policies

  1. Go to the IAM Console.
  2. In the left navigation menu, click Policies.
  3. Click Create Policy.
  4. Select the JSON tab and replace the default content with one of the JSON configurations provided below.
  5. Click Next: Tags (you can skip adding tags).
  6. Click Next: Review.
  7. Provide a name and description for the policy:
     • Example name: ECS-Pull-Image-Policy
     • Example description: Permissions for ECS to pull images from Docker Hub or ECR.
  8. Click Create Policy.

Repeat the process for the second JSON configuration.

Policy 1: Pull Images from Docker Hub or Amazon ECR
This policy grants ECS permissions to pull container images.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ecr:GetAuthorizationToken",
                "ecr:BatchCheckLayerAvailability",
                "ecr:GetDownloadUrlForLayer",
                "ecr:BatchGetImage"
            ],
            "Resource": "*"
        }
    ]
}

Policy 2: Create CloudWatch Log Streams
This policy allows ECS to create and write log streams to CloudWatch.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:PutLogEvents",
                "logs:CreateLogGroup"
            ],
            "Resource": [
                "arn:aws:logs:*:*:log-group:/ecs/*:*",
                "arn:aws:logs:*:*:log-group:/ecs/*:log-stream:*"
            ]
        }
    ]
}
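The console steps above can also be scripted. A sketch with boto3 (the policy documents mirror the JSON above and the names are the example names; the AWS calls require credentials, so they are left commented out):

```python
# Sketch: create the two IAM policies programmatically with boto3.
import json

ECS_PULL_IMAGE_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ecr:GetAuthorizationToken",
                "ecr:BatchCheckLayerAvailability",
                "ecr:GetDownloadUrlForLayer",
                "ecr:BatchGetImage",
            ],
            "Resource": "*",
        }
    ],
}

ECS_LOG_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["logs:CreateLogStream", "logs:PutLogEvents", "logs:CreateLogGroup"],
            "Resource": [
                "arn:aws:logs:*:*:log-group:/ecs/*:*",
                "arn:aws:logs:*:*:log-group:/ecs/*:log-stream:*",
            ],
        }
    ],
}

# Requires AWS credentials; uncomment to run:
# import boto3
# iam = boto3.client("iam")
# iam.create_policy(PolicyName="ECS-Pull-Image-Policy",
#                   PolicyDocument=json.dumps(ECS_PULL_IMAGE_POLICY))
# iam.create_policy(PolicyName="ECS-Log-Policy",
#                   PolicyDocument=json.dumps(ECS_LOG_POLICY))
```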

Additional Notes

  • Why These Policies Are Important:
     • Pulling Images: ECS needs to fetch your container images from Docker Hub or Amazon ECR to deploy your app.
     • CloudWatch Logs: Logs are essential for debugging and monitoring. This policy ensures ECS can write logs for your app's tasks.

Best Practices:

  1. Assign descriptive names to your policies (e.g., ECS-Log-Policy or ECS-Pull-Image-Policy).
  2. Add clear descriptions for better management.

Step 2: Creating IAM Roles for ECS Deployment

IAM roles allow ECS to perform specific actions like pulling images and writing logs on your behalf. Follow the steps below to create a role for ECS tasks.

Steps to Create the Role

  1. Go to the IAM console.
  2. In the left navigation menu, click Roles.
  3. Click Create Role.
  4. Under Trusted entity type, select Custom trust policy.
  5. Paste the following JSON code into the Custom trust policy editor:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Allow",
            "Principal": {
                "Service": "ecs-tasks.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}

  6. Click Next.
  7. On the Permissions page, attach the two policies you created earlier:
     • ECS-Pull-Image-Policy
     • ECS-Log-Policy
  8. Click Next.
  9. Provide a Role Name and Description:
     • Example role name: ECSExecutionRole
     • Example description: Role to allow ECS tasks to pull images and write logs.
  10. Click Create Role.

What Are Trust Policies?

A trust policy defines WHO can assume a role (the "Principal"). The principal can be:

  • AWS services (like ecs-tasks.amazonaws.com)
  • Other AWS accounts
  • IAM users
  • Web identity providers (like Google or Facebook)

Trust policies always use the sts:AssumeRole action, and each role can have only one trust policy.
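
The role creation can be scripted as well. A hedged sketch (the role and policy names are the examples above; "123456789012" is a placeholder account ID, and the AWS calls require credentials, so they are commented out):

```python
import json

# Trust policy from the step above: only ECS tasks may assume the role.
TRUST_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Allow",
            "Principal": {"Service": "ecs-tasks.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

# Requires AWS credentials; uncomment to run:
# import boto3
# iam = boto3.client("iam")
# iam.create_role(RoleName="ECSExecutionRole",
#                 AssumeRolePolicyDocument=json.dumps(TRUST_POLICY),
#                 Description="Role to allow ECS tasks to pull images and write logs")
# for policy in ("ECS-Pull-Image-Policy", "ECS-Log-Policy"):
#     iam.attach_role_policy(RoleName="ECSExecutionRole",
#                            PolicyArn=f"arn:aws:iam::123456789012:policy/{policy}")
```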

Step 3: Set Up ECS Cluster

Understanding ECS: Key Concepts and Nomenclature
Amazon Elastic Container Service (ECS) is a managed container orchestration service that helps deploy, manage, and scale containerized applications. Let’s break down the critical components and concepts you need to know for working with ECS effectively.

Core ECS Dynamics

  1. Task Definitions: A task definition is like a blueprint for your application. It defines:
     • The container images to use (e.g., from Docker Hub or ECR).
     • The resource requirements (CPU, memory).
     • Network settings and environment variables.
  2. Tasks: A task is an instantiation of a task definition. It's the actual running instance of your application.
  3. Services: A service manages and maintains the desired number of task instances. Example: If you want three containers of your app running, the service ensures this even if one fails.
  4. Clusters: A cluster is a logical grouping of resources. It hosts the infrastructure where your tasks run (either EC2 instances or Fargate).
  5. Container Instances: If using the EC2 launch type, these are the EC2 instances where containers are deployed. If using Fargate, AWS manages the infrastructure for you (serverless approach).
  6. Launch Types:
     • Fargate: Serverless; AWS manages the instances. Ideal for most use cases as it eliminates infrastructure management.
     • EC2: You manage the instances (more flexibility but also more responsibility).
  7. Load Balancers (Optional): Distribute incoming traffic across multiple containers to ensure availability and fault tolerance.

ECS Workflow in Practice

  1. Define a Task:
    • Write a task definition with the container details.
  2. Create a Cluster:
    • Set up a logical grouping for the infrastructure.
  3. Deploy Services:
    • Use a service to manage and scale tasks.
  4. Monitor:
    • Use CloudWatch for logs and metrics to ensure the application runs as expected.

Define A Task

  1. Create a task definition with a family name similar to the project you are trying to deploy.
  2. Select a launch type; I will go with Fargate.
  3. For Operating system/Architecture, I will go with Linux/X86_64.
  4. For CPU, I will select 1 vCPU, and for memory, 2 GB.
  5. For Task execution role and Task role, choose the role you created before: ECSExecutionRole.

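The same task definition can be registered from code. A minimal sketch matching the console choices above (the family name, container details, and the account ID in the role ARNs are placeholders; substitute your own):

```python
# Sketch of a Fargate task definition: Linux/X86_64, 1 vCPU, 2 GB memory,
# with ECSExecutionRole as both the execution role and the task role.
TASK_DEFINITION = {
    "family": "question-answer",              # pick a family name close to your project
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",                  # required for Fargate
    "runtimePlatform": {"operatingSystemFamily": "LINUX", "cpuArchitecture": "X86_64"},
    "cpu": "1024",                            # 1 vCPU
    "memory": "2048",                         # 2 GB
    # Placeholder account ID - substitute your own:
    "executionRoleArn": "arn:aws:iam::123456789012:role/ECSExecutionRole",
    "taskRoleArn": "arn:aws:iam::123456789012:role/ECSExecutionRole",
    "containerDefinitions": [
        {
            "name": "flask_app",
            "image": "oluwatofunmi/question-answer:latest",
            "portMappings": [{"containerPort": 5000, "protocol": "tcp"}],
            "essential": True,
        }
    ],
}

# Requires AWS credentials; uncomment to run:
# import boto3
# boto3.client("ecs").register_task_definition(**TASK_DEFINITION)
```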

Create a Cluster:

  • Open the ECS console and click Create Cluster.
  • Enter a cluster name (e.g., DevCluster).
  • A namespace with the same name as the cluster is created automatically; you can change it to something else in the dropdown.
  • In the dropdown for Infrastructure, select Fargate for serverless deployments, or EC2 if you want to manage instances manually.
  • Click on the dropdown for Monitoring and select "Container Insights with enhanced observability".
  • Leave everything else as it is and click Create. It takes a while for the cluster to be completely created.
  • After it has been created, click on the cluster, and on the Services tab near the bottom of the page, click Create.

Compute configuration

Let me explain the compute configuration options available when setting up an ECS service:

Compute Options - You have two main choices:

a) Capacity Provider Strategy
  • More flexible and automated approach.
  • Allows mixing of different compute types (like FARGATE and FARGATE_SPOT).
  • Can set up rules for task distribution.
  • Better for cost optimization and availability.

b) Launch Type
  • Simpler, more direct approach.
  • Choose a single compute type (like FARGATE or EC2).
  • Less flexible but easier to understand.
  • Good for simple use cases.

Capacity Provider Strategy Options:

a) Use Cluster Default
  • Uses whatever strategy is set at the cluster level.
  • Good for consistency across services.

b) Use Custom
  • FARGATE: Regular, predictable pricing.
  • FARGATE_SPOT: Up to 70% cheaper, but tasks can be interrupted.
  • For each provider you can set:
     • Base: Minimum number of tasks.
     • Weight: Relative distribution of tasks.

Platform Version
  • LATEST: Automatically uses the newest platform features.
  • Specific versions are also available:
     • 1.4.0: Enhanced network performance, security patches.
     • 1.3.0: Earlier version with basic features.
  • Choose a specific version if you need stability or particular features.

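As a sketch, a custom strategy mixing FARGATE and FARGATE_SPOT could look like this (the base and weights are illustrative, and the service/cluster names in the commented call are examples; this is the capacityProviderStrategy shape that boto3's create_service accepts):

```python
# Illustrative mixed strategy: keep at least one task on regular FARGATE,
# then spread remaining tasks 1:3 between FARGATE and cheaper FARGATE_SPOT.
CAPACITY_PROVIDER_STRATEGY = [
    {"capacityProvider": "FARGATE", "base": 1, "weight": 1},
    {"capacityProvider": "FARGATE_SPOT", "base": 0, "weight": 3},
]

# Passed to create_service (requires AWS credentials; networking config omitted here):
# import boto3
# boto3.client("ecs").create_service(
#     cluster="DevCluster",
#     serviceName="flask-app-service",   # hypothetical service name
#     taskDefinition="question-answer",
#     desiredCount=4,
#     capacityProviderStrategy=CAPACITY_PROVIDER_STRATEGY,
# )
```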

Complete the setup with networking configurations (e.g., VPC and subnets).

Step 4: Clone the Flask App Repository
Start by cloning the repository containing the Flask application. Use the following command:

git clone https://github.com/21toffy/docker-ecs-deployment-test.git

cd docker-ecs-deployment-test


Step 5: Explore the Repository

Let’s break down the critical files in the repository:

Dockerfile
The Dockerfile describes how to containerize the Flask app:

FROM python:3.10-slim

WORKDIR /app

# Install build dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code
COPY . .

# Set environment variables
ENV PYTHONPATH=/app
ENV FLASK_APP="app:create_app()"
ENV FLASK_ENV=development
ENV PYTHONUNBUFFERED=1

EXPOSE 5000

CMD ["flask", "run", "--host=0.0.0.0", "--port=5000"]


Explanation:

  1. Base Image: Uses python:3.10-slim for a lightweight Python environment.
  2. Dependencies: Installs essential tools and Python dependencies from requirements.txt.
  3. Environment Variables: Configures Flask to run in development mode and sets the app entry point (app:create_app()).
  4. Expose Port: Opens port 5000 for the Flask app.
  5. CMD: Runs the Flask app on 0.0.0.0 to make it accessible externally.

Docker Compose File (docker-compose.yml)

The docker-compose.yml simplifies running the container and connecting services:

version: "3.8"

services:
  web:
    build: .
    image: oluwatofunmi/question-answer:v1
    container_name: flask_app
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
      - DATABASE_URL=${DATABASE_URL}
      - FLASK_ENV=${FLASK_ENV}
      - FLASK_APP=app:create_app()
    env_file:
      - .env
    ports:
      - "5000:5000"
    volumes:
      - .:/app
    networks:
      - app-network
    command: flask run --host=0.0.0.0 --port=5000

Explanation:

  1. Service Definition: Defines a web service using the Flask app.
  2. Environment Variables: Reads sensitive credentials from the .env file.
  3. Ports: Maps port 5000 of the container to port 5000 of the host.
  4. Volumes: Mounts the project directory into the container for live updates during development.
  5. Networks: Sets up an isolated network for communication between services.

GitHub Actions Workflow (cicd.yml)
The cicd.yml automates the CI/CD process:

name: CI/CD Pipeline

on:
  push:
    branches:
      - main

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v2

      - name: Build Docker image
        run: |
          docker build -t oluwatofunmi/question-answer:latest .

      - name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Push Docker image to DockerHub
        run: docker push oluwatofunmi/question-answer:latest

  deploy-to-ecs:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v3
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - name: Deploy to ECS
        run: |
          aws ecs update-service \
            --cluster selftestcluster \
            --service selftestservice \
            --force-new-deployment


Explanation:

  1. Trigger: Runs on every push to the main branch.
  2. Build Job:
     • Checks out the code.
     • Builds the Docker image.
     • Pushes the image to Docker Hub.
  3. Deploy Job:
     • Configures AWS credentials using repository secrets.
     • Updates the ECS service to deploy the new Docker image.
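
If you want to confirm from a script that the rollout finished, a sketch using boto3's services_stable waiter (the cluster and service names match the workflow above; the call needs AWS credentials, so it is commented out):

```python
# Sketch: wait until the ECS service finishes rolling out the new deployment.
DEPLOY_TARGET = {"cluster": "selftestcluster", "services": ["selftestservice"]}

# Requires AWS credentials; uncomment to run:
# import boto3
# ecs = boto3.client("ecs", region_name="us-east-1")
# ecs.get_waiter("services_stable").wait(**DEPLOY_TARGET)
```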

Step 6: Set Up Environment Variables

  1. Create a .env File: Add the following to a .env file in your repository:
DATABASE_URL=
OPENAI_API_KEY=
SECRET_KEY=
POSTGRES_USER=
POSTGRES_PASSWORD=
POSTGRES_DB=
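Since the container reads these values at startup, it helps to fail fast when one is missing. A small sketch (the variable list mirrors the .env file above; the helper name is my own):

```python
import os

# Variables the app expects, mirroring the .env file above.
REQUIRED_VARS = [
    "DATABASE_URL", "OPENAI_API_KEY", "SECRET_KEY",
    "POSTGRES_USER", "POSTGRES_PASSWORD", "POSTGRES_DB",
]

def missing_env(environ=os.environ):
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not environ.get(name)]

# Example: call missing_env() early in create_app() and raise if it is non-empty.
```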
  2. Add Secrets to GitHub: In the GitHub repository, go to Settings > Secrets and Variables > Actions > New Repository Secret, and add the following secrets:
     • DOCKERHUB_USERNAME
     • DOCKERHUB_TOKEN
     • AWS_ACCESS_KEY_ID
     • AWS_SECRET_ACCESS_KEY
