In the previous post, we learned how to create an AWS architecture to support our Python application. In this post, we will learn how to create a task definition from a docker-compose file.
Before diving deep into the tutorial, let us define what a docker-compose file is and recall from the previous tutorial what a task definition is.
What is docker-compose?
From this tutorial, docker-compose is defined as:
Docker Compose is a way to create reproducible Docker containers using a config file instead of extremely long Docker commands. By using a structured config file, mistakes are easier to pick up and container interactions are easier to define.
What is a task definition?
Let's recall what a task definition is: it is just a specification. You use it to define one or more containers that you want to run together, along with other details such as environment variables, CPU/memory requirements, etc.
From the two definitions, we can see that a task definition's role is similar to that of a docker-compose file.
We will therefore use the docker-compose file to generate the task-definition.
The real stuff: the transformation
To make our transformation, we can go back to the project we introduced in the first part and cd to the project directory.
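For reference, here is roughly what the docker-compose.yml from part one looks like. This is a sketch reconstructed from the generated output shown below, so the exact options may differ from the real file:

version: "2"
services:
  celery-beat:
    image: task_runner
    command: celery -A celery_factory:celery beat -S redbeat.RedBeatScheduler --loglevel=info
    links:
      - redis
  celery-worker:
    image: task_runner
    command: celery worker -A celery_factory:celery --loglevel=info -E
    links:
      - redis
  flower:
    image: task_runner
    command: ./start_flower
    environment:
      - FLOWER_PORT=5556
    ports:
      - "5556:5556"
    links:
      - redis
  redis:
    image: redis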
We will leverage a Python tool called container-transform to accomplish our transformation. You can install it in your project's virtual environment with:
pip install container-transform
With the tool installed, we can now use it to generate the task definition file.
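Note that the command below writes into a .aws directory; if your project does not have one yet, create it first:

mkdir -p .aws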
cat docker-compose.yml | container-transform -v > .aws/task-definition.json
The output of this command is written to the file .aws/task-definition.json. If everything went well, you will have something like this:
{
"containerDefinitions": [
{
"command": [
"celery",
"-A",
"celery_factory:celery",
"beat",
"-S",
"redbeat.RedBeatScheduler",
"--loglevel=info"
],
"essential": true,
"image": "task_runner",
"links": [
"redis"
],
"name": "celery-beat"
},
{
"command": [
"celery",
"worker",
"-A",
"celery_factory:celery",
"--loglevel=info",
"-E"
],
"essential": true,
"image": "task_runner",
"links": [
"redis"
],
"name": "celery-worker"
},
{
"command": [
"./start_flower"
],
"environment": [
{
"name": "FLOWER_PORT",
"value": "5556"
}
],
"essential": true,
"image": "task_runner",
"links": [
"redis"
],
"name": "flower",
"portMappings": [
{
"containerPort": 5556,
"hostPort": 5556
}
]
},
{
"essential": true,
"image": "redis",
"name": "redis"
}
],
"family": "",
"volumes": []
}
What to note here: all the services we had in the docker-compose file are now in the containerDefinitions section of our task definition. However, the file is not yet complete. We will have to update it with other keys such as the network mode, the resources, the execution role we created before, and the logging options for sending logs to CloudWatch. Let's edit the file by adding the following. We also need to remove the links key from each container definition; the jq sketch after the snippet shows one way to do that.
"requiresCompatibilities": [
"FARGATE"
],
"inferenceAccelerators": [],
"volumes": [],
"networkMode": "awsvpc",
"memory": "512",
"cpu": "256",
"executionRoleArn": "arn:aws:iam::Your-id-from-aws:role/ecs-devops-execution-role",
"family": "ecs-devops-task-definition",
"taskRoleArn": "",
"placementConstraints": []
What are those elements?
requiresCompatibilities: here, we are specifying that our launch type is Fargate.
networkMode: the Docker networking mode to use for the containers in the task. AWS offers the following network modes: none, bridge, awsvpc, and host. With the Fargate launch type, the awsvpc network mode is required. With this setting, the task is allocated its own elastic network interface (ENI) and a primary private IPv4 address, which gives the task the same networking properties as Amazon EC2 instances. Learn more about networking modes here.
memory: the amount of RAM to allocate to the task. If your cluster does not have any registered container instances with the requested memory available, the task will fail.
cpu: the number of CPU units that the Amazon ECS container agent will reserve for the task. Note that on Fargate, cpu and memory must form a valid combination; for example, 256 CPU units (0.25 vCPU) can be paired with 512 MB, 1 GB, or 2 GB of memory.
executionRoleArn: the Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent permission to make AWS API calls on your behalf. As you can see, it is the IAM role we created in our CloudFormation stack.
family: the name of the task definition we created in the CloudFormation stack.
In each container definition, we need to add this code to send container logs to CloudWatch.
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "ecs-devops-service-logs-groups",
"awslogs-region": "us-east-2",
"awslogs-stream-prefix": "celery-beat"
}
},
Add those lines to each container definition, changing the awslogs-stream-prefix value to the corresponding container name. To learn more about task definition parameters, check the AWS documentation.
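One caveat: the awslogs driver does not create the log group for you. If the ecs-devops-service-logs group was not already created by the CloudFormation stack in part one, you can create it yourself with the AWS CLI:

aws logs create-log-group --log-group-name ecs-devops-service-logs --region us-east-2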
With those parameters edited, we end up with the following task definition.
{
"containerDefinitions": [
{
"command": [
"celery",
"-A",
"celery_factory:celery",
"beat",
"--scheduler=redbeat.RedBeatScheduler",
"--loglevel=debug"
],
"essential": true,
"image": "task_runner",
"environment": [
{
"name": "CELERY_BROKER_URL",
"value": "redis://127.0.0.1:6379"
}
],
"name": "celery-beat",
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "ecs-devops-service-logs",
"awslogs-region": "us-east-2",
"awslogs-stream-prefix": "celery-beat"
}
}
},
{
"command": [
"celery",
"-A",
"celery_factory:celery",
"worker",
"--loglevel=error",
"-E"
],
"essential": true,
"image": "task_runner",
"name": "celery-worker",
"environment": [
{
"name": "CELERY_BROKER_URL",
"value": "redis://127.0.0.1:6379"
}
],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "ecs-devops-service-logs",
"awslogs-region": "us-east-2",
"awslogs-stream-prefix": "celery-worker"
}
}
},
{
"command": ["./start_flower"],
"environment": [
{
"name": "FLOWER_PORT",
"value": "5556"
},
{
"name": "CELERY_BROKER_URL",
"value": "redis://127.0.0.1:6379"
}
],
"essential": true,
"image": "task_runner",
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "ecs-devops-service-logs",
"awslogs-region": "us-east-2",
"awslogs-stream-prefix": "celery-flower"
}
},
"name": "flower",
"portMappings": [
{
"containerPort": 5556,
"hostPort": 5556
}
]
},
{
"essential": true,
"image": "redis",
"name": "redis",
"portMappings": [
{
"containerPort": 6379
}
],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "ecs-devops-service-logs",
"awslogs-region": "us-east-2",
"awslogs-stream-prefix": "celery-redis"
}
}
}
],
"requiresCompatibilities": ["FARGATE"],
"inferenceAccelerators": [],
"volumes": [],
"networkMode": "awsvpc",
"memory": "512",
"cpu": "256",
"executionRoleArn":"arn:aws:iam::****youraws id*****:role/ecs-devops-execution-role",
"family": "ecs-devops-task-definition",
"taskRoleArn": "",
"placementConstraints": []
}
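Before moving on, it is worth sanity-checking the file. python -m json.tool will catch JSON syntax errors, and, assuming your AWS credentials are configured locally, you can even try registering the task definition by hand (the GitHub Actions workflow in the next part will do this for us; you may need to drop the empty taskRoleArn field first):

python -m json.tool .aws/task-definition.json
aws ecs register-task-definition --cli-input-json file://.aws/task-definition.json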
In this tutorial, we learned how to use the container-transform tool to convert a docker-compose file into an AWS task definition.
With our task definition in place, we can now move on to the third part of this tutorial, where we will use the task definition to deploy our containers to the CloudFormation stack created in part one, using GitHub Actions.
See you then.