Deploying applications seamlessly is a crucial skill in today's software engineering and DevOps landscape. In this guide, I’ll show you how to deploy a Flask app using GitHub Actions, Docker Hub, and AWS ECS.
We’ll explore the repository structure, including the Dockerfile, docker-compose.yml, and cicd.yml, which collectively form the backbone of this deployment pipeline.
By the end of this tutorial, you’ll have a fully automated CI/CD pipeline for your Flask app that deploys updates directly to ECS.
Prerequisites
Before starting, ensure you have:
- A Flask application in a GitHub repository.
- A Docker Hub account and repository.
- An AWS account.
- Basic familiarity with Docker.
The Flow (High Level)
- Write Code: You write and commit your Flask app code to GitHub.
- GitHub Actions: When you push your changes:
- The workflow builds a Docker image.
- The image is pushed to Docker Hub.
- AWS ECS is updated to use the latest image.
- AWS ECS: ECS pulls the Docker image from Docker Hub and runs your app on the cloud.
- Access Your App: Your app is live and accessible via a URL or public IP.
Why This Pipeline Works Well
- GitHub: Centralized code repository with version control.
- Docker: Packages your app so it behaves consistently across environments.
- Docker Hub: Reliable storage for container images.
- GitHub Actions: Automates the entire process, saving time and reducing human error.
Step 1: Creating IAM Policies for ECS Deployment
To ensure your ECS deployment has the necessary permissions, follow the steps below to create two critical IAM policies. These policies allow ECS to pull images from Docker Hub or Amazon ECR and create CloudWatch log streams.
**Steps to Create Policies**
- Go to the IAM Console.
- In the left navigation menu, click Policies.
- Click Create Policy.
- Select the JSON tab and replace the default content with one of the JSON configurations provided below.
- Click Next: Tags (you can skip adding tags).
- Click Next: Review.
- Provide a name and description for the policy:
  - Example Name: ECS-Pull-Image-Policy
  - Example Description: Permissions for ECS to pull images from Docker Hub or ECR.
- Click Create Policy.

Repeat the process for the second JSON configuration.
**Policy 1: Pull Images from Docker Hub or Amazon ECR**
This policy grants ECS permissions to pull container images.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}
```
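If you prefer the command line, the same policy can be created with the AWS CLI. A sketch, assuming the policy JSON is saved locally (the file name is my own choice, and the final `aws iam create-policy` call requires configured AWS credentials, so it is left commented out):

```shell
# Save the pull-image policy JSON to a file.
cat > ecs-pull-image-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}
EOF

# Sanity-check the JSON before sending it to AWS.
python3 -m json.tool ecs-pull-image-policy.json > /dev/null && echo "policy JSON is valid"

# Run once your AWS CLI is configured:
# aws iam create-policy --policy-name ECS-Pull-Image-Policy --policy-document file://ecs-pull-image-policy.json
```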
**Policy 2: Create CloudWatch Log Streams**
This policy allows ECS to create and write log streams to CloudWatch.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:CreateLogGroup"
      ],
      "Resource": [
        "arn:aws:logs:*:*:log-group:/ecs/*:*",
        "arn:aws:logs:*:*:log-group:/ecs/*:log-stream:*"
      ]
    }
  ]
}
```
Additional Notes
- Why These Policies Are Important:
- Pulling Images: ECS needs to fetch your container images from Docker Hub or Amazon ECR to deploy your app.
- CloudWatch Logs: Logs are essential for debugging and monitoring. This policy ensures ECS can write logs for your app's tasks.
- Best Practices:
- Assign descriptive names to your policies (e.g., ECS-Log-Policy or ECS-Pull-Image-Policy).
- Add clear descriptions for better management.
Step 2: Creating IAM Roles for ECS Deployment
IAM roles allow ECS to perform specific actions like pulling images and writing logs on your behalf. Follow the steps below to create a role for ECS tasks.
Steps to Create the Role
- Go to the IAM console.
- In the left navigation menu, click Roles.
- Click Create Role.
- Under Trusted entity type, select Custom trust policy.
- Paste the following JSON code into the Custom trust policy editor:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Effect": "Allow",
      "Principal": {
        "Service": "ecs-tasks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```
- Click Next.
- On the Permissions page, attach the two policies you created earlier:
  - ECS-Pull-Image-Policy
  - ECS-Log-Policy
- Click Next.
- Provide a Role Name and Description:
  - Example Role Name: ECSExecutionRole
  - Example Description: Role to allow ECS tasks to pull images and write logs.
- Click Create Role.
What are trust policies?
A trust policy defines WHO can assume a role (the "Principal"). The principal can be:
- An AWS service (like ecs-tasks.amazonaws.com)
- Other AWS accounts
- IAM users
- Web identity providers (like Google or Facebook)
Trust policies always use the "sts:AssumeRole" action, and each role can have only one trust policy.
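The same role can be created from the CLI. A sketch, assuming the trust policy is saved locally (the file name is my own choice; the `aws iam` calls need configured credentials and your account ID in place of `<account-id>`, so they are left commented out):

```shell
# Save the trust policy JSON to a file.
cat > ecs-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Effect": "Allow",
      "Principal": { "Service": "ecs-tasks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Sanity-check the JSON before sending it to AWS.
python3 -m json.tool ecs-trust-policy.json > /dev/null && echo "trust policy JSON is valid"

# Run once your AWS CLI is configured:
# aws iam create-role --role-name ECSExecutionRole --assume-role-policy-document file://ecs-trust-policy.json
# aws iam attach-role-policy --role-name ECSExecutionRole --policy-arn arn:aws:iam::<account-id>:policy/ECS-Pull-Image-Policy
# aws iam attach-role-policy --role-name ECSExecutionRole --policy-arn arn:aws:iam::<account-id>:policy/ECS-Log-Policy
```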
Step 3: Set Up ECS Cluster
Understanding ECS: Key Concepts and Nomenclature
Amazon Elastic Container Service (ECS) is a managed container orchestration service that helps deploy, manage, and scale containerized applications. Let’s break down the critical components and concepts you need to know for working with ECS effectively.
Core ECS Dynamics
- Task Definitions: A task definition is the blueprint for your application. It defines:
  - The container images to use (e.g., from Docker Hub or ECR).
  - The resource requirements (CPU, memory).
  - Network settings and environment variables.
- Tasks: A task is an instantiation of a task definition; it is the actual running instance of your application.
- Services: A service maintains the desired number of task instances. Example: if you want three containers of your app running, the service ensures this even if one fails.
- Clusters: A cluster is a logical grouping of resources. It hosts the infrastructure where your tasks run (either EC2 instances or Fargate).
- Container Instances: With the EC2 launch type, these are the EC2 instances where containers are deployed. With Fargate, AWS manages the infrastructure for you (a serverless approach).
- Launch Types:
  - Fargate: Serverless; AWS manages the instances. Ideal for most use cases because it eliminates infrastructure management.
  - EC2: You manage the instances (more flexibility, but also more responsibility).
- Load Balancers (Optional): Distribute incoming traffic across multiple containers to ensure availability and fault tolerance.
ECS Workflow in Practice
- Define a Task: Write a task definition with the container details.
- Create a Cluster: Set up a logical grouping for the infrastructure.
- Deploy Services: Use a service to manage and scale tasks.
- Monitor: Use CloudWatch logs and metrics to ensure the application runs as expected.
**Define a Task**
- Create a task definition with a family name similar to the project you are deploying.
- Select a launch type; I will go with Fargate.
- For Operating system/Architecture, I will go with Linux/X86_64.
- For CPU, I will select 1 vCPU, and for Memory, 2 GB.
- For Task execution role and Task role, choose the role you created before: ECSExecutionRole.
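The console turns these choices into a task definition JSON. A trimmed sketch of what it produces under the selections above (the family name and container name are illustrative assumptions, and `<account-id>` is a placeholder for your AWS account ID):

```json
{
  "family": "flask-app-task",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "1024",
  "memory": "2048",
  "runtimePlatform": {
    "operatingSystemFamily": "LINUX",
    "cpuArchitecture": "X86_64"
  },
  "executionRoleArn": "arn:aws:iam::<account-id>:role/ECSExecutionRole",
  "taskRoleArn": "arn:aws:iam::<account-id>:role/ECSExecutionRole",
  "containerDefinitions": [
    {
      "name": "flask-app",
      "image": "oluwatofunmi/question-answer:latest",
      "portMappings": [{ "containerPort": 5000, "protocol": "tcp" }],
      "essential": true
    }
  ]
}
```

Note how `cpu` and `memory` are expressed in CPU units (1024 = 1 vCPU) and MiB.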
**Create a Cluster**
- Open the ECS console and click Create Cluster.
- Enter a cluster name, e.g. DevCluster. A namespace is created automatically with the same name as the cluster; you can change it to something else in the dropdown.
- In the Infrastructure dropdown, select Fargate for serverless deployments, or EC2 if you want to manage instances manually.
- In the Monitoring dropdown, select "Container Insights with enhanced observability".
- Leave everything else as it is and click Create. It takes a while to be completely created.
- After it has been created, click on the cluster, and on the Services tab near the bottom of the page, click Create.
Compute configuration
Let me explain the compute configuration options available when setting up an ECS service:
Compute Options: you have two main choices.

a) Capacity Provider Strategy
- More flexible and automated approach
- Allows mixing of different compute types (like FARGATE and FARGATE_SPOT)
- Can set up rules for task distribution
- Better for cost optimization and availability

b) Launch Type
- Simpler, more direct approach
- Choose a single compute type (like FARGATE or EC2)
- Less flexible but easier to understand
- Good for simple use cases
Capacity Provider Strategy Options:

a) Use Cluster Default
- Uses whatever strategy is set at the cluster level
- Good for consistency across services

b) Use Custom
- FARGATE: Regular, predictable pricing
- FARGATE_SPOT: Up to 70% cheaper, but tasks can be interrupted
- For each provider you can set:
  - Base: Minimum number of tasks
  - Weight: Relative distribution of tasks

Platform Version
- LATEST: Automatically uses the newest platform features
- Specific versions are available (like 1.4.0):
  - 1.4.0: Enhanced network performance, security patches
  - 1.3.0: Earlier version with basic features
- Choose a specific version if you need stability or specific features
Example strategy:
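A sketch of what a custom strategy looks like in JSON form, for example as passed to `aws ecs create-service --capacity-provider-strategy`; the 1:3 weighting toward FARGATE_SPOT is an illustrative assumption:

```json
[
  { "capacityProvider": "FARGATE", "base": 1, "weight": 1 },
  { "capacityProvider": "FARGATE_SPOT", "base": 0, "weight": 3 }
]
```

Here `base: 1` on FARGATE guarantees one always-on task on regular Fargate, and the remaining tasks are distributed 1:3 between FARGATE and the cheaper, interruptible FARGATE_SPOT.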
Complete the setup with networking configurations (e.g., VPC and subnets).
Step 4: Clone the Flask App Repository
Start by cloning the repository containing the Flask application. Use the following command:
```shell
git clone https://github.com/21toffy/docker-ecs-deployment-test.git
cd docker-ecs-deployment-test
```
Step 5: Explore the Repository
Let’s break down the critical files in the repository:
Dockerfile
The Dockerfile describes how to containerize the Flask app:
```dockerfile
FROM python:3.10-slim

WORKDIR /app

# Install build dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code
COPY . .

# Set environment variables
ENV PYTHONPATH=/app
ENV FLASK_APP="app:create_app()"
ENV FLASK_ENV=development
ENV PYTHONUNBUFFERED=1

EXPOSE 5000

CMD ["flask", "run", "--host=0.0.0.0", "--port=5000"]
```
Explanation:
- Base Image: Uses python:3.10-slim for a lightweight Python environment.
- Dependencies: Installs essential tools and Python dependencies from requirements.txt.
- Environment Variables: Configures Flask to run in development mode and sets the app entry point (app:create_app()).
- Expose Port: Opens port 5000 for the Flask app.
- CMD: Runs the Flask app on 0.0.0.0 to make it accessible externally.
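The setting FLASK_APP="app:create_app()" assumes the app uses Flask's application-factory pattern. A minimal app.py compatible with that setting might look like this (the route and response text are illustrative assumptions, not the repo's actual code):

```python
from flask import Flask


def create_app():
    """Application factory: builds and returns the Flask app.

    Matches the Dockerfile's FLASK_APP="app:create_app()" setting.
    """
    app = Flask(__name__)

    @app.route("/")
    def index():
        # Placeholder route so the container has something to serve.
        return "Hello from ECS!"

    return app


if __name__ == "__main__":
    # Mirrors the container CMD: listen on all interfaces, port 5000.
    create_app().run(host="0.0.0.0", port=5000)
```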
**Docker Compose File (docker-compose.yml)**
The docker-compose.yml simplifies running the container and connecting services:
```yaml
version: "3.8"

services:
  web:
    build: .
    image: oluwatofunmi/question-answer:v1
    container_name: flask_app
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
      - DATABASE_URL=${DATABASE_URL}
      - FLASK_ENV=${FLASK_ENV}
      - FLASK_APP=app:create_app()
    env_file:
      - .env
    ports:
      - "5000:5000"
    volumes:
      - .:/app
    networks:
      - app-network
    command: flask run --host=0.0.0.0 --port=5000

networks:
  app-network:
    driver: bridge
Explanation:
- Service Definition: Defines a web service using the Flask app.
- Environment Variables: Reads sensitive credentials from the .env file.
- Ports: Maps port 5000 of the container to port 5000 of the host.
- Volumes: Mounts the project directory into the container for live updates during development.
- Networks: Sets up an isolated network for communication between services.
**GitHub Actions Workflow (cicd.yml)**
The cicd.yml automates the CI/CD process:
```yaml
name: CI/CD Pipeline

on:
  push:
    branches:
      - main

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2

      - name: Build Docker image
        run: |
          docker build -t oluwatofunmi/question-answer:latest .

      - name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Push Docker image to DockerHub
        run: docker push oluwatofunmi/question-answer:latest

  deploy-to-ecs:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v3
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

      - name: Deploy to ECS
        run: |
          aws ecs update-service \
            --cluster selftestcluster \
            --service selftestservice \
            --force-new-deployment
```
**Explanation:**
- Trigger: Runs on every push to the main branch.
- Build Job:
  - Checks out the code.
  - Builds the Docker image.
  - Pushes the image to Docker Hub.
- Deploy Job:
  - Configures AWS credentials using repository secrets.
  - Updates the ECS service to deploy the new Docker image.
Step 6: Set Up Environment Variables
- Create a .env File: Add the following to a .env file in your repository:

```
DATABASE_URL=
OPENAI_API_KEY=
SECRET_KEY=
POSTGRES_USER=
POSTGRES_PASSWORD=
POSTGRES_DB=
```
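Inside the app, these values surface as ordinary environment variables that docker-compose injects into the container. A small sketch of how they are read (the fallback defaults are my own local-development assumptions, not the repo's values):

```python
import os

# docker-compose passes the .env values into the container's environment;
# os.environ is how the Flask app picks them up at runtime.
database_url = os.environ.get("DATABASE_URL", "sqlite:///local-dev.db")
postgres_user = os.environ.get("POSTGRES_USER", "postgres")
flask_env = os.environ.get("FLASK_ENV", "development")

print(f"db={database_url} user={postgres_user} env={flask_env}")
```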
- Add Secrets to GitHub: In the GitHub repository, go to Settings > Secrets and Variables > Actions > New Repository Secret, and add the following secrets:
  - DOCKERHUB_USERNAME
  - DOCKERHUB_TOKEN
  - AWS_ACCESS_KEY_ID
  - AWS_SECRET_ACCESS_KEY