What is AWS ECS?
ECS stands for Elastic Container Service, and it exists to simplify the deployment, management, and scaling of containerized applications running in Docker containers within AWS.
In summary: wanna run Docker Containers? AWS ECS is the way to go.
By the way, AWS Lambdas can run Docker Containers as well.
We are going to learn about the main pieces that compose AWS ECS:
- ECS Task and Task Definition
- ECS Service
- ECS Cluster
Keep in mind that you'll need to know at least the basics of Docker.
Let's get started!
Before going over each one of the parts that compose AWS ECS, I would like to explain the relationship between them in an SQL-like way:
- One single ECS Cluster can have multiple ECS Services (a 1-n type of relationship)
- One single ECS Service can have multiple ECS Tasks (a 1-n type of relationship)
- One single ECS Service can have only one ECS Task Definition attached/associated to it (a 1-1 type of relationship)
A Task Definition is just a blueprint to create one or more ECS Tasks.
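If you like seeing this from the API's point of view, here is a minimal sketch (assuming the boto3 SDK and a hypothetical Cluster called my-cluster) that walks these relationships: the Services in a Cluster, the Tasks in each Service, and the single Task Definition each Service points at:

```python
import boto3

ecs = boto3.client("ecs")

# One Cluster -> many Services (1-n)
service_arns = ecs.list_services(cluster="my-cluster")["serviceArns"]

for service_arn in service_arns:
    service_name = service_arn.split("/")[-1]

    # One Service -> many Tasks (1-n)
    task_arns = ecs.list_tasks(cluster="my-cluster", serviceName=service_name)["taskArns"]

    # One Service -> one Task Definition (1-1)
    service = ecs.describe_services(cluster="my-cluster", services=[service_arn])["services"][0]

    print(f"{service_name}: {len(task_arns)} task(s) from {service['taskDefinition']}")
```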
With that clarified, let's go!
1. ECS Task Definition
A Task Definition is a blueprint that describes how the Docker container(s) should run. You can use a .json file to create a Task Definition. Let's check the example below.
{
  // This is not an exhaustive list of all the parameters!
  "family": "my-app",
  "networkMode": "bridge",
  "containerDefinitions": [
    {
      "name": "web-server",
      "image": "nginx:latest",
      "memory": 512,
      "cpu": 256,
      "environment": [
        {
          "name": "ENV_VAR_NAME",
          "value": "some_value"
        }
      ],
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 80
        }
      ]
    }
  ]
}
- `family` sets the family name of the Task Definition. It must be unique within your AWS account and region.
- `networkMode` defines how containers within the same Task can communicate with each other and with external resources. It's important to note that if you need different containers within a Task to use different network modes, you would need to create separate Tasks with their respective network modes. Let's check the available options now:
  - `bridge` (the default when no network mode is specified, EC2 launch type only)
    - In this mode, each Docker container within the Task gets its own network stack on Docker's built-in bridge network, isolated from the host's network.
    - The containers communicate with each other and with the outside world through the port mappings you define.
  - `host`
    - In this mode, the Docker containers within the Task share the network namespace with the host EC2 instance (this mode is not available on Fargate).
    - The Docker containers have direct access to the host's network interfaces, which means they can bind to host ports without port mapping.
    - This mode can provide lower network latency and higher throughput but may have security implications since containers share the host network stack.
  - `awsvpc`
    - In this mode, each Task gets its own Elastic Network Interface (ENI) and its own private IP address within your Amazon VPC. Containers within the same Task share that network namespace and can communicate with each other through localhost (127.0.0.1).
    - You can use Security Groups and Network ACLs to control traffic to and from the Task.
    - It allows containers to access resources in your VPC, making it suitable for scenarios where your containers need to interact with other AWS services or resources within your VPC. It is also the only mode supported by Fargate.
  - `none`
    - In the `none` network mode, the containers within the Task don't have external network connectivity. This mode is typically used for Tasks that don't require network access, such as batch processing or certain background jobs.
- `containerDefinitions` is an array of container definitions. Remember, one Task can be "in charge" of one single container or multiple containers, all depending on the use case.
If you want to dive into it more:
- Here you can see each individual available property: Task definition parameters - Amazon Elastic Container Service
- Here you can see a JSON template with all available properties: Task definition template - Amazon Elastic Container Service
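If you prefer to register the Task Definition programmatically instead of pasting JSON into the console, here is a minimal sketch using the boto3 SDK; the values simply mirror the example above:

```python
import boto3

ecs = boto3.client("ecs")

# Each call registers a new revision of the "my-app" family (my-app:1, my-app:2, ...).
response = ecs.register_task_definition(
    family="my-app",
    networkMode="bridge",
    containerDefinitions=[
        {
            "name": "web-server",
            "image": "nginx:latest",
            "memory": 512,
            "cpu": 256,
            "environment": [{"name": "ENV_VAR_NAME", "value": "some_value"}],
            "portMappings": [{"containerPort": 80, "hostPort": 80}],
        }
    ],
)

print(response["taskDefinition"]["taskDefinitionArn"])
```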
2. ECS Service
Before moving on to ECS Services, a quick recap: the Task Definition is used to define what Docker Images you want to use and how the Docker Containers within the Tasks — if there's more than one — will communicate with each other.
Also, don't forget that the Task Definition has a name, which is the value passed to the `family` property, and if you haven't noticed yet, the word family is not a coincidence!
All Docker Containers defined within a given Task Definition will be grouped together (like when using Docker Compose). Why is that important, you may ask? Well, because the ECS Service is what manages the Tasks.
An ECS Service is a wrapper that allows you to define how many Tasks (instances of a Task Definition) should be running at any given time and how they should be managed. It ensures that a specified number of Tasks are running and maintains a desired state for your application.
- An ECS Service needs one Task Definition to use as a blueprint to create the necessary number of Tasks.
- You must choose EC2 or Fargate for the ECS Service, which determines where the Tasks will run.
- On the ECS Service, you can manually define the desired number of Tasks that should be running concurrently.
- You can make this automatic using Auto Scaling Rules. This ensures that your application can automatically scale up or down to handle changes in load.
- You can configure an ECS Service to use Load Balancers for distributing incoming traffic to the Tasks.
- ECS Services can be configured to perform health checks on Tasks to determine if they are healthy and responsive. If a Task fails a health check, the service scheduler replaces it with a new Task to maintain the desired count of healthy Tasks.
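To make that list concrete, here is a minimal sketch of creating a Service with the boto3 SDK. The cluster name, service name, and target group ARN are placeholders, and I'm using the EC2 launch type so it matches the bridge-mode Task Definition from earlier (Fargate would require the awsvpc network mode plus a network configuration):

```python
import boto3

ecs = boto3.client("ecs")

ecs.create_service(
    cluster="development",        # the ECS Cluster the Service lives in
    serviceName="my-app-service",
    taskDefinition="my-app",      # the Task Definition family (latest revision)
    desiredCount=2,               # keep 2 Tasks running at all times
    launchType="EC2",             # or "FARGATE"
    loadBalancers=[
        {
            # The ALB Target Group the Tasks will be registered into (placeholder ARN).
            "targetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/my-app/abc123",
            "containerName": "web-server",
            "containerPort": 80,
        }
    ],
)
```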
If you want to dive deep into the ins and outs of ECS Services, go here: Amazon ECS services - Amazon Elastic Container Service
3. ECS Cluster
There isn't much mystery involved when talking about Clusters; they are pretty straightforward. An ECS Cluster is a wrapper for your ECS Services, backed by either EC2 Instances or Fargate capacity, on which you can run containerized applications.
Clusters are used to isolate resources and define the scope of where your Containers can be deployed. It is pretty common to see companies using the following configuration when using only one AWS account:
- One Development Cluster
- One Staging Cluster
- One Production Cluster
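Since a Cluster is really just a named grouping, setting up that layout is a one-liner per environment. A minimal sketch with the boto3 SDK, using the environment names above:

```python
import boto3

ecs = boto3.client("ecs")

# One Cluster per environment to keep resources isolated within a single AWS account.
for environment in ["development", "staging", "production"]:
    ecs.create_cluster(clusterName=environment)
```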
An ECS Cluster can have multiple ECS Services running within it. That is useful when creating multiple environments to test an application.
For instance, let's say there are 2 developers working on new features on WordPress, and after testing things out locally both want to test in the development environment. With only one dev environment that is not possible, but we can make it possible by doing this:
We basically duplicated Service 1, so now we can use development-env-1 and development-env-2 to test things out.
But how would they be accessible to the outside world?
Well, we would create a Target Group for each environment, plus a Listener with some Rules on the ALB/Load Balancer to split the traffic to the right ECS Service. The ALB/Load Balancer knows how to split the demand between the ECS Tasks/Docker Containers inside the ECS Service within the ECS Cluster.
Once the ALB/Load Balancer is set up, we would use the AWS Route 53 service to create a DNS Record — assuming you already have a domain — that would point to the Load Balancer we want to expose to the outside world.
ALB stands for Application Load Balancer and is part of the AWS ELB (Elastic Load Balancer) service.
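As a rough sketch of those two steps with the boto3 SDK (the listener ARN, target group ARN, hosted zone IDs, and domain below are all placeholders, and a host-header Rule is just one possible way to split the traffic):

```python
import boto3

elbv2 = boto3.client("elbv2")
route53 = boto3.client("route53")

# 1. Add a Rule to the ALB Listener that forwards one environment's hostname
#    to that environment's Target Group.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/my-alb/abc/def",
    Priority=10,
    Conditions=[{"Field": "host-header", "Values": ["dev1.example.com"]}],
    Actions=[
        {
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/development-env-1/123",
        }
    ],
)

# 2. Create a Route 53 alias record that points the hostname at the ALB.
route53.change_resource_record_sets(
    HostedZoneId="Z_MY_HOSTED_ZONE",  # the hosted zone of your domain
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "dev1.example.com",
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": "Z_ALB_ZONE_ID",  # the ALB's own hosted zone ID
                        "DNSName": "my-alb-1234567890.us-east-1.elb.amazonaws.com",
                        "EvaluateTargetHealth": False,
                    },
                },
            }
        ]
    },
)
```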
Soon, I'll create a post with a step-by-step guide on how to set up all this infrastructure using Terraform.
Hope you find this guide useful. If you have any questions or feedback, please feel free to reach out in the comments section below.
Thank you!