Jose Hidalgo

Cloud Portfolio Challenge: Load balancing and content delivery network

The problem

Build an image delivery service that, when queried, returns at least one image matching the search criteria of your choice.

Solution design

The solution in Figure 1 shows how I built the image delivery system using Amazon CloudFront, Elastic Load Balancing, and an Amazon EC2 Auto Scaling group.

The four key areas I learned about through this project are:

  1. Virtual machines
  2. Networking
  3. Load balancing
  4. Content delivery networks (CDNs)

Figure 1 illustrates the architecture of this solution.

Architecture

  1. The user accesses the site through the CloudFront URL. Amazon CloudFront works seamlessly with Amazon EC2 to accelerate the delivery of dynamic content, and you can specify which AWS origin to use: Amazon S3, Amazon EC2, or Elastic Load Balancing. When I set up the CloudFront distribution, I specified the load balancer's DNS name as the origin (see the first sketch after this list). Before describing how I set up the public-facing load balancer, it is worth explaining why CloudFront belongs in front of a web application: with a traditional server-based approach, every request adds load to the web server. With CloudFront, the end user's connection is terminated at an edge location close to them, which reduces the round trips required to establish a connection. Without a CDN, every user would have to connect to the origin server; with a CDN, the user connects to the nearest CloudFront edge location, where the content is cached for subsequent users.
  2. I set up an internet-facing load balancer in front of Amazon EC2 instances that live in private subnets, and associated the public subnets with the load balancer. It was really straightforward to do, and this website helped me a lot. One of the key benefits of a load balancer over an API Gateway is the ability to distribute load intelligently across resources, which makes it ideal for services such as Amazon EC2 or Amazon ECS, where a single instance or task could become non-operational if flooded with requests.
  3. I registered an Auto Scaling group with the load balancer (the second sketch after this list shows how the pieces fit together). I could have used the load balancer on its own, but since this was my first time using an Auto Scaling group, I wanted to learn how to set one up. Auto Scaling is an automated process that adjusts capacity for predictable performance and cost. There are two ways to define how the EC2 instances are launched: a Launch Configuration or a Launch Template. In this project I used a Launch Configuration because it was the simpler option, but either works. The Auto Scaling group uses the Launch Configuration to launch our instances, and scaling is driven by a set of threshold parameters: we can scale out or in depending on our requirements, AWS Auto Scaling lets us monitor the application, and, most importantly, scaling happens automatically.
  4. The EC2 instances host a web application and an API application, both served through NGINX.
    1. The web application is a simple static page with a search input; when the user enters a term and presses search, it calls the API.
    2. The API is a Python FastAPI application that exposes an endpoint, /image/{image_name}. When it is called, it looks for images stored on the EC2 instance; if one matches the search criteria, it returns that image, otherwise nothing is shown on the site (the last sketch after this list shows the rough shape of this endpoint).

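To make step 1 a bit more concrete, here is a minimal sketch, using boto3, of creating a CloudFront distribution whose origin is the load balancer's DNS name. The DNS name and the origin and cache settings below are placeholder assumptions for illustration only; I set the real distribution up through the console.

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

# Placeholder: the DNS name of the internet-facing load balancer (assumption).
ALB_DNS_NAME = "my-image-alb-1234567890.us-east-1.elb.amazonaws.com"

response = cloudfront.create_distribution(
    DistributionConfig={
        # Unique string so repeated calls don't create duplicate distributions.
        "CallerReference": f"image-delivery-{int(time.time())}",
        "Comment": "Image delivery service fronted by an ALB origin",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "alb-origin",
                    "DomainName": ALB_DNS_NAME,
                    # A load balancer is a custom origin, not an S3 origin.
                    "CustomOriginConfig": {
                        "HTTPPort": 80,
                        "HTTPSPort": 443,
                        "OriginProtocolPolicy": "http-only",
                    },
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "alb-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            # Legacy cache settings; a managed cache policy would also work.
            "ForwardedValues": {
                "QueryString": True,
                "Cookies": {"Forward": "none"},
            },
            "MinTTL": 0,
        },
    }
)

print("Distribution domain:", response["Distribution"]["DomainName"])
```
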
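For steps 2 and 3, the following sketch outlines how the load balancer, target group, Launch Configuration, and Auto Scaling group fit together in boto3. Every ID, subnet, and AMI here is a placeholder assumption; the actual setup was done through the console, so treat this as an outline of the moving parts rather than the exact configuration.

```python
import boto3

elbv2 = boto3.client("elbv2")
autoscaling = boto3.client("autoscaling")

# Placeholders: replace with your own VPC resources (assumptions).
PUBLIC_SUBNETS = ["subnet-aaa111", "subnet-bbb222"]
VPC_ID = "vpc-0123456789abcdef0"
ALB_SECURITY_GROUP = "sg-0123456789abcdef0"
AMI_ID = "ami-0123456789abcdef0"

# 1. Internet-facing load balancer in the public subnets.
alb = elbv2.create_load_balancer(
    Name="image-delivery-alb",
    Subnets=PUBLIC_SUBNETS,
    SecurityGroups=[ALB_SECURITY_GROUP],
    Scheme="internet-facing",
    Type="application",
)

# 2. Target group that the Auto Scaling group will register instances into.
target_group = elbv2.create_target_group(
    Name="image-delivery-targets",
    Protocol="HTTP",
    Port=80,
    VpcId=VPC_ID,
    TargetType="instance",
)
target_group_arn = target_group["TargetGroups"][0]["TargetGroupArn"]

# 3. Listener that forwards HTTP traffic from the ALB to the target group.
elbv2.create_listener(
    LoadBalancerArn=alb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": target_group_arn}],
)

# 4. Launch Configuration describing how each instance is started.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="image-delivery-lc",
    ImageId=AMI_ID,
    InstanceType="t3.micro",
)

# 5. Auto Scaling group in the private subnets, attached to the target group
#    so new instances are registered with the load balancer automatically.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="image-delivery-asg",
    LaunchConfigurationName="image-delivery-lc",
    MinSize=1,
    MaxSize=3,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-ccc333,subnet-ddd444",  # placeholder private subnets
    TargetGroupARNs=[target_group_arn],
)
```
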
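Finally, for step 4, this is a minimal sketch of what the /image/{image_name} endpoint could look like in FastAPI. The image directory and the matching logic are assumptions made for illustration; the repository linked below contains the actual implementation.

```python
from pathlib import Path

from fastapi import FastAPI, HTTPException
from fastapi.responses import FileResponse

app = FastAPI()

# Assumed location of the images stored on the EC2 instance (placeholder).
IMAGE_DIR = Path("/var/www/images")


@app.get("/image/{image_name}")
def get_image(image_name: str):
    """Return the first stored image whose file name matches the search term."""
    matches = [
        p for p in IMAGE_DIR.glob("*")
        if p.is_file() and image_name.lower() in p.stem.lower()
    ]
    if not matches:
        # No match: the front end simply shows nothing for a 404.
        raise HTTPException(status_code=404, detail="No matching image found")
    return FileResponse(matches[0])
```
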
For a preview of how the web application works, download the video here.

Code repository

The code for this solution is available on GitHub.

Next Steps

Something I have learned over the years is that there is no perfect architecture. While building this project, I realized there were things that could have been done differently. Here are the improvements I would like to tackle in the future to make the presented architecture stronger:

  • Use Amazon Elastic File System (EFS) to store the images and decouple them from the EC2 instances. EFS is easy to provision and, unlike an EBS volume, which is normally attached to a single instance, an EFS file system can be mounted by many EC2 instances at once (a provisioning sketch follows this list).
  • Run the applications on Amazon ECS. By using ECS, we can focus on building the application rather than maintaining the infrastructure it runs on. A major benefit of ECS is that it supports Docker containers, so we can containerize our applications.

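As a rough idea of the first item, here is what provisioning a shared EFS file system and a mount target looks like with boto3; the subnet and security group IDs are placeholders. Each EC2 instance would then mount the same file system, so the images no longer live on any single instance.

```python
import boto3

efs = boto3.client("efs")

# Create the shared file system (the creation token makes the call idempotent).
fs = efs.create_file_system(
    CreationToken="image-delivery-efs",
    PerformanceMode="generalPurpose",
    Encrypted=True,
)

# One mount target per private subnet lets every instance mount the same files.
efs.create_mount_target(
    FileSystemId=fs["FileSystemId"],
    SubnetId="subnet-ccc333",                 # placeholder private subnet
    SecurityGroups=["sg-0123456789abcdef0"],  # placeholder security group
)
```
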
Conclusion

Cloud engineers who work with large applications every day often struggle to reason about how to build highly scalable systems. This project pushed me to dive deeper into Amazon CloudFront, Elastic Load Balancing, and EC2 Auto Scaling: how these services connect and how they can improve the efficiency of a multi-tier application. They not only offer automatic scaling and built-in high availability but also provide the better experience that matters to the end user. I'd like to thank Lars Klint for creating this amazing project, which taught me so many things I can use in my work. Most importantly, completing it put me one step closer to taking the AWS Certified Solutions Architect Associate exam soon.

If you are as interested as I was and would like to get your hands dirty learning cloud, go to the site to learn more about the project requirements.
