DEV Community

Dmitriy A. for appfleet

Posted on • Originally published at appfleet.com

How to route traffic to your Docker container in AWS ECS using an Application Load Balancer

Deploying containers into AWS Elastic Container Service (ECS) is straightforward, especially when using CloudFormation. Once you've got a basic ECS cluster deployed, it's important to think about how to provide high availability for your service so your customers don't experience any downtime.

In this article, we'll be extending the example provided in Automate Docker container deployment to AWS ECS using CloudFormation, to include multiple replicas which register automatically into an Application Load Balancer.

Load balancing in AWS

To achieve high availability with ECS, we can deploy multiple replicas of an ECS Task, so that if one task dies we can fail over to another to continue serving traffic.

We can provide further protection by ensuring that our ECS Tasks are deployed to multiple AWS availability zones. This means that if Availability Zone 1 becomes compromised, we have a physically and operationally separate Availability Zone 2 to fall back on.

Introducing the Application Load Balancer

The Application Load Balancer is a flavour of AWS's Elastic Load Balancer resource. It is a Layer 7 load balancer, meaning it can make routing decisions at the HTTP level. We can have rules that direct traffic based on HTTP request parameters such as headers, request methods, paths, and more.

To connect ECS with an Application Load Balancer, we need to understand the following resources:

  • Load Balancer Listener: checks for connections from clients. Uses configurable rules to determine how to route requests.
  • Target: an end destination to which requests are routed. Can be an EC2 instance ID or an IP address. In the case of ECS, it will be the IP address associated with the ECS Task.
  • Target Group: a logical grouping of targets. We can set properties on the group as a whole to apply to all targets, such as the load balancing algorithm and health check path.
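To make the listener-rule idea concrete, here's a hedged sketch of a CloudFormation listener rule that would forward any request whose path matches /api/* to a second target group. ApiTargetGroup is hypothetical and not part of this article's template, and LoadBalancerListener refers to a listener like the one we create later in this article:

```yaml
  # Hypothetical rule: send /api/* traffic to a separate target group.
  # ApiTargetGroup does not exist in this article's template.
  ApiListenerRule:
    Type: AWS::ElasticLoadBalancingV2::ListenerRule
    Properties:
      ListenerArn: !Ref LoadBalancerListener
      Priority: 1
      Conditions:
        - Field: path-pattern
          PathPatternConfig:
            Values:
              - /api/*
      Actions:
        - Type: forward
          TargetGroupArn: !Ref ApiTargetGroup
```

We won't need rules like this for the simple single-service setup below, where the listener's default action is enough.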

Note that the Application Load Balancer must be created with at least 2 subnets, each in a different availability zone, which helps force us to design highly available architectures.

[Diagram: an Application Load Balancer with a listener forwarding to a target group, whose registered targets are the IP addresses of the ECS Tasks]

The diagram above shows the setup for linking ECS with an Application Load Balancer. As long as we point the ECS service to the correct target group, ECS will handle registering a target with the correct IP for us.

That leaves us to handle creating the following resources:

  • Load Balancer
  • Load Balancer Listener
  • Load Balancer Security Group
  • Target Group

CloudFormation for connecting ECS to an ALB

We'll continue where we left off in the last article, where we'd created an ECS Service for an NGINX task exposed via a public IP. Check out the article to grab the CloudFormation script which we'll be building on in this example.

Scaling up

Our previous example only had 1 NGINX instance, so let's scale that up to 2 for high availability.

Multiple subnets, multiple availability zones

As mentioned earlier, the ALB needs 2 subnets, so we're going to change the CloudFormation template to take the subnet IDs as parameters. The default VPC created by AWS should contain multiple subnets in different availability zones, so you can always use these.
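If you're unsure which IDs to pass, the AWS CLI can look them up. This assumes the AWS CLI is configured with credentials and a default region:

```shell
# Find the ID of the default VPC
aws ec2 describe-vpcs --filters Name=isDefault,Values=true \
    --query 'Vpcs[0].VpcId' --output text

# List the subnet IDs and availability zones in that VPC
# (substitute the VPC ID returned above for <vpc-id>)
aws ec2 describe-subnets --filters Name=vpc-id,Values=<vpc-id> \
    --query 'Subnets[].[SubnetId,AvailabilityZone]' --output text
```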

[Screenshot: the default VPC's subnets, each in a different availability zone]

While we're at it, we'll add a VPCID parameter as well, since this will be required by the target group later.

Change the parameters in the CloudFormation template to this:

Parameters:
  Subnet1ID:
    Type: String
  Subnet2ID:
    Type: String
  VPCID:
    Type: String

Scaling up to 2 replicas

Now that we have included multiple subnets, let's make use of these in our ECS Service by:

  1. including the new subnet in the ECS Service definition, so it can deploy tasks to both subnets
  2. scaling the service up to 2 replicas

Your CloudFormation for the service should now look like this:

Service:
    Type: AWS::ECS::Service
    Properties:
      ServiceName: deployment-example-service
      Cluster: !Ref Cluster
      TaskDefinition: !Ref TaskDefinition
      DesiredCount: 2       #              <--- Increase replicas to 2
      LaunchType: FARGATE
      NetworkConfiguration:
        AwsvpcConfiguration:
          AssignPublicIp: ENABLED
          Subnets:
            - !Ref Subnet1ID #             <--- Add subnet 1
            - !Ref Subnet2ID #             <--- Add subnet 2
          SecurityGroups:
            - !GetAtt ContainerSecurityGroup.GroupId

Note: if you get the below error saying that CloudFormation can't update the stack, change the ServiceName above to something different, e.g. deployment-example-svc.

[Screenshot: the CloudFormation update error for the ECS Service]

Deploy the changes

We'll update the CloudFormation stack, which if you followed along with the example in the previous article, was called example-deployment. Note that we need to add parameters now for Subnet1ID, Subnet2ID, and VPCID.

$ aws cloudformation update-stack --stack-name example-deployment \
    --template-body file://./ecs.yml --capabilities CAPABILITY_NAMED_IAM \
    --parameters ParameterKey=Subnet1ID,ParameterValue=<subnet-1-id> \
                 ParameterKey=Subnet2ID,ParameterValue=<subnet-2-id> \
                 ParameterKey=VPCID,ParameterValue=<vpc-id>

Once this stack has finished updating (check out Services > CloudFormation in the AWS Console to get its status), head on over to Services > ECS > deployment-example-cluster > Tasks, and you should see multiple tasks running:

[Screenshot: the ECS console showing two running tasks]

If you click on each individual task and look at the network section, you'll see each task is deployed into a separate subnet and has a distinct IP address:

[Screenshots: each task's network details, showing different subnets and IP addresses]

So now that we've got our replicas, we just need to connect them up to a load balancer. ✅

Adding new application load balancer resources

Next we're going to create the four resources mentioned earlier, which will provide the gateway through which a user can access our NGINX containers over the internet.

Add the following resources to the end of your CloudFormation template:

  LoadBalancerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: LoadBalancerSecurityGroup
      GroupDescription: Security group for load balancer
      VpcId: !Ref VPCID #              <--- Create in the same VPC as the target group
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0
  LoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Name: deployment-example-load-balancer
      Subnets:
        - !Ref Subnet1ID
        - !Ref Subnet2ID
      SecurityGroups:
        - !GetAtt LoadBalancerSecurityGroup.GroupId
  LoadBalancerListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref LoadBalancer
      Port: 80
      Protocol: HTTP
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref TargetGroup
  TargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      TargetType: ip
      Name: deployment-example-target-group
      Port: 80
      Protocol: HTTP
      VpcId: !Ref VPCID

We're configuring:

  • Security Group: allows inbound traffic to the load balancer on port 80 from any IP
  • Load Balancer: an Application Load Balancer (the default type), with an associated security group
  • Load Balancer Listener: listening on port 80 for HTTP traffic, this will forward requests onto the target group as its default behaviour
  • Target group: receiving HTTP traffic on port 80, this target group is ready for any targets to be registered to it by ECS
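The target group above relies on AWS's default health check (path /, expecting an HTTP 200). If you later need to tune that behaviour, the settings can be made explicit; the values below are illustrative rather than required:

```yaml
  TargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      TargetType: ip
      Name: deployment-example-target-group
      Port: 80
      Protocol: HTTP
      VpcId: !Ref VPCID
      HealthCheckPath: /              # same as the default
      HealthCheckIntervalSeconds: 30  # seconds between checks per target
      HealthyThresholdCount: 3        # consecutive passes before "healthy"
      UnhealthyThresholdCount: 3      # consecutive failures before "unhealthy"
      Matcher:
        HttpCode: "200"               # response codes treated as success
```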

Once again, apply this update to your CloudFormation stack and wait for it to complete:

$ aws cloudformation update-stack --stack-name example-deployment \
    --template-body file://./ecs.yml --capabilities CAPABILITY_NAMED_IAM \
    --parameters ParameterKey=Subnet1ID,ParameterValue=<subnet-1-id> \
                 ParameterKey=Subnet2ID,ParameterValue=<subnet-2-id> \
                 ParameterKey=VPCID,ParameterValue=<vpc-id>

New target group

If you head on over to Services > EC2 > Load Balancers you'll be able to see information about your new load balancer, including its DNS name, which we'll need later. If you try hitting it now, you'll get a 503 error.

[Screenshot: the load balancer's details, including its DNS name]

If you select Target Groups in the left-hand navigation, you can also see information on the new target group. The default load balancing algorithm chosen by AWS is Round robin. 🐦 This means requests will be distributed to each target equally.

[Screenshot: the new target group's details]

Link the ECS Service to the new load balancer

We only have one change to make now, and that's to update our ECS Service so that it knows to auto-register targets for each task in our new target group. This can be achieved by adding the following LoadBalancer section to the end of the Service resource:

Service:
    Type: AWS::ECS::Service
    Properties:
      ...
      LoadBalancers:
        - TargetGroupArn: !Ref TargetGroup
          ContainerPort: 80
          ContainerName: deployment-example-container

The ContainerName must match the Name defined in the ContainerDefinitions section of the AWS::ECS::TaskDefinition resource.
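For reference, the matching container definition from the previous article looks roughly like this (the image name is an assumption; use whatever your task definition already specifies):

```yaml
  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      ...
      ContainerDefinitions:
        - Name: deployment-example-container # <--- matches ContainerName in the Service
          Image: nginx #                     <--- assumed image from the previous article
          PortMappings:
            - ContainerPort: 80 #            <--- matches ContainerPort in the Service
```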

For the final time, update the CloudFormation stack with the same aws cloudformation update-stack command from before.

Navigate to Services > EC2 > Target Groups > Targets and once the CloudFormation stack has finished updating you'll see that two new targets have been registered for us.

[Screenshot: the target group's Targets tab, showing two registered, healthy IP targets]

The IP addresses here are the ones which were already assigned to our two individual ECS Tasks. You can see that the status is healthy since the default health check hits / on port 80, which in the case of our NGINX container returns a 200 response.

Test it out

Grab the DNS name from your Application Load Balancer as described above, then navigate to it in your browser.
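You can also check from the command line; substitute your own load balancer's DNS name for the placeholder:

```shell
# -I fetches only the response headers; a correctly wired setup
# returns an HTTP 200 from NGINX
curl -I http://<alb-dns-name>/
```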

[Screenshot: the default "Welcome to nginx!" page]

You'll get the above default NGINX page proving that our load balancer has been configured correctly to route requests to ECS.

Conclusion

We now have a highly available ECS cluster deployed, accessible over the internet via an Application Load Balancer. If one of our NGINX instances fails, we have a failover instance available in another availability zone to take over.

This is of course all handled automatically for us by ECS. When one of our NGINX instances becomes unhealthy, requests will no longer be routed to it, and ECS will eventually replace the failed task. Likewise, if ECS decides to proactively remove a task (for example when we're scaling down the number of replicas), it will automatically remove the target from the target group.

Next steps: This example is a simple setup for demonstration purposes. If you want to run a setup like this in production, make sure to:

  • configure the load balancer for HTTPS rather than HTTP access
  • limit the outbound traffic from the load balancer to only those destinations we know it should access, such as ECS. You can do this in the load balancer security group's SecurityGroupEgress rules.
  • set AssignPublicIp to DISABLED in the AWS::ECS::Service definition
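As a sketch of the first point, an HTTPS listener would look something like the following, assuming you already have a certificate in AWS Certificate Manager (<certificate-arn> is a placeholder for its ARN). The load balancer security group's ingress rule would also need to allow port 443:

```yaml
  HttpsListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref LoadBalancer
      Port: 443
      Protocol: HTTPS
      Certificates:
        - CertificateArn: <certificate-arn> # placeholder: your ACM certificate
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref TargetGroup
```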

Top comments (1)

Javier Aguirre

Thanks! Your article helped me understand some things I needed :-)