Motivation:
Sometimes your service is not a single application that is easy to dockerize and deploy. What if your service consists of multiple Docker containers, and one of them consumes data from another? To achieve this, you need some sort of communication channel between the containers. The easy way is to create one EC2 instance per application, but this increases your costs and is clearly not cost-effective. Instead, you want to get the most out of a single EC2 instance and run both dockerized applications on it.
This is a step-by-step guide on how to create an AWS ECS task definition consisting of two Docker containers that can communicate with each other on a single EC2 instance.
We are going to use the ECS service to manage the Docker containers on the EC2 instance.
Create ECS Cluster & ECS service
Let's start with the basic stack from the official AWS ECS example, with small modifications.
declare const vpc: ec2.Vpc;

// Create an ECS cluster
const cluster = new ecs.Cluster(this, 'Cluster', {
  vpc,
});

// Add capacity to it
cluster.addCapacity('DefaultAutoScalingGroupCapacity', {
  instanceType: new ec2.InstanceType('t2.micro'),
  desiredCapacity: 1,
});

// Place for adding the task definition,
// docker container options,
// and all other options from this article below

// Instantiate an Amazon ECS Service
const ecsService = new ecs.Ec2Service(this, 'Service', {
  cluster,
  taskDefinition,
});
Add Task definition
The main thing here is to create the task definition with Bridge network mode. This mode enables communication between Docker containers connected to the same internal Docker network, or bridge.
const taskDefinition = new ecs.Ec2TaskDefinition(this, 'TaskDef', {
  networkMode: ecs.NetworkMode.BRIDGE,
});
Add ProducerApp container
Now let's add the producer container. The Producer Application will be a simple HTTP server on port 8080 with a /data endpoint where consumers can get data.
const producerContainer = taskDefinition.addContainer('ProducerContainer', {
  image: ecs.ContainerImage.fromRegistry('producer-image'),
  memoryLimitMiB: 256,
});

// The container port must match the port the application listens on (8080),
// since linked containers talk to each other directly on container ports.
producerContainer.addPortMappings({
  containerPort: 8080,
  hostPort: 8080,
  protocol: ecs.Protocol.TCP,
});
Add Consumer container
Next, let's add the consumer container. The Consumer Application will be an HTTP server on port 8081 that also acts as an HTTP client, retrieving data from the Producer Application every minute.
But how will the Consumer Application know how to reach the Producer Application? What endpoint should the Consumer App ping? We will clarify this in a moment.
const consumerContainer = taskDefinition.addContainer('ConsumerContainer', {
  image: ecs.ContainerImage.fromRegistry('consumer-image'),
  memoryLimitMiB: 256,
});

// Again, the container port matches the application's listening port (8081).
consumerContainer.addPortMappings({
  containerPort: 8081,
  hostPort: 8081,
  protocol: ecs.Protocol.TCP,
});
Add communication between containers
And now the final trick: we need to create a way for ProducerContainer and ConsumerContainer to communicate. We will use the addLink method for this. The good thing about this method is that we don't need to worry about mapping ports or anything else. Internally, it adds an alias to /etc/hosts, which allows the containers to reach each other by that alias, and since both containers are on the same bridge network, they are reachable.
consumerContainer.addLink(producerContainer);
That's it from the CDK point of view. But we are still missing the Producer's endpoint, from which the Consumer Application should retrieve data.
Diving deeper to find the Producer endpoint
After a successful CDK deployment, let's find out what the Producer endpoint looks like.
Connect to the EC2 instance via SSH or Session Manager.
~$ sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d6a008b35bb2 producerContainer "/usr/src/app/produc…" 10 min ago Up 10 min producerContainer
a85bbfa3fae3 consumerContainer "/usr/src/app/consum…" 10 min ago Up 10 min consumerContainer
First of all, we can clearly see that both Docker containers are up and running. Let's dive deep into the consumerContainer.
~$ sudo docker exec -it a85bbfa3fae3 /bin/sh
~$ cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
172.17.0.2 producerContainer d6a008b35bb2
172.17.0.3 a85bbfa3fae3
As you can see, we now know the alias for the producerContainer:
172.17.0.2 producerContainer d6a008b35bb2.
Let’s get some data from it!
~$ curl producerContainer:8080/data
{data: "Very important data"}
It worked like a charm! Now you just need to set producerContainer:8080/data
as the target endpoint for the Consumer Application's HTTP client and retrieve the data from the Producer Application!
As an additional idea: if you retrieve data more often, or need two-way communication between the containers, you may consider establishing a WebSocket connection between them to produce and consume data in a more convenient and faster way!
Conclusion:
This trick will help you not only understand more about Docker & AWS, but also better utilise your EC2 instance and save some money! Hope you enjoyed it!