AntonNguyen97
Failover Architecture on AWS (Part 3/4)

Hi there! Let’s continue where we left off. If you are new to this series, I recommend reading the two previous parts, here and here. In part three, we will create a launch template and an Auto Scaling group.

So, go to the EC2 service, and in the “Instances” section of the left menu you will find “Launch Templates”. Click “Create launch template” and give your launch template a name and a description. Then choose any AMI you want; I will use Ubuntu Server 18.04 LTS (HVM). Pick an instance type; I will use t2.micro, because we do not need a powerful server for this example. Then create a new key pair or choose one you already have; I will pick the one I created earlier. Next, choose or create a security group; I will use the one created in part two of this series. You can also add more volumes and network interfaces if you want to, but I will skip that for this example. Finally, expand “Advanced details”; all we need here is the user data field, where we paste the script that will run after the system boots.
All it does is update the system, configure NGINX, modify the default NGINX start page, then start NGINX and enable it to start at boot. Below you can find the script:

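This is a minimal version of such a script, assuming Ubuntu 18.04 and NGINX installed from the default apt repositories; the contents of the start page are just an example:

```bash
#!/bin/bash
# Update the package index and upgrade installed packages
apt-get update -y
apt-get upgrade -y

# Install NGINX
apt-get install -y nginx

# Replace the default start page with a simple custom page
# (including the hostname makes it easy to see which instance answered)
echo "<h1>Hello from $(hostname -f)</h1>" > /var/www/html/index.html

# Start NGINX and make sure it comes back up after a reboot
systemctl start nginx
systemctl enable nginx
```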

Then click the “Create launch template” button. Once you see the green “Success” checkmark, congratulations, your launch template is created.
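If you prefer the AWS CLI, roughly the same launch template can be created as shown below; the AMI ID, key pair name, security group ID, and user-data file name are placeholders for your own values:

```bash
# Base64-encode the user-data script (the API expects it encoded);
# on macOS use "base64 < user-data.sh" instead of -w0
USER_DATA=$(base64 -w0 user-data.sh)

# Create a launch template matching the console setup above
aws ec2 create-launch-template \
  --launch-template-name failover-nginx-template \
  --version-description "Ubuntu 18.04 + NGINX" \
  --launch-template-data "{
    \"ImageId\": \"ami-0123456789abcdef0\",
    \"InstanceType\": \"t2.micro\",
    \"KeyName\": \"my-key-pair\",
    \"SecurityGroupIds\": [\"sg-0123456789abcdef0\"],
    \"UserData\": \"${USER_DATA}\"
  }"
```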
OK, let’s go back to the EC2 service; in the “Auto Scaling” section of the left menu you will find “Auto Scaling Groups”.
Now we will create our Auto Scaling group. Click “Create Auto Scaling group”, give your ASG (Auto Scaling group) a name, and choose the launch template created earlier. Check the configuration and then hit the “Next” button.

In step 2, all we need to do is add subnets to our ASG. We will select all of them: as I mentioned earlier, we want a highly available architecture, so instances will be spread across every Availability Zone.

In step 3, we will enable load balancing and choose the Classic Load Balancer we configured earlier. The other options we will leave at their defaults.

In step 4, we will configure the group size and scaling policies. As the desired capacity I want 2 instances, with a minimum of 2 and a maximum of 5. For this example, I will specify the desired outcome and let the scaling policy add and remove capacity as needed to achieve it. I will leave the policy name as “Target Tracking Policy”, with metric type “Average CPU utilization” and a target value of 50.
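For reference, the same target tracking policy expressed with the AWS CLI looks roughly like this; the Auto Scaling group name is a placeholder and the group has to exist already (the console does this for you, so the command is only needed if you script the setup):

```bash
# Keep the group's average CPU utilization around 50%;
# the ASG adds or removes instances to stay near that target
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name failover-demo-asg \
  --policy-name "Target Tracking Policy" \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{
    "PredefinedMetricSpecification": {
      "PredefinedMetricType": "ASGAverageCPUUtilization"
    },
    "TargetValue": 50.0
  }'
```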
I will skip step 5; we do not need notifications in this example, but you can configure them if you want to.

In step 6 we add tags. I will add Key=Name, Value=Appus Studio so our instances will be named Appus Studio; you can give them any name you want. Then review the configuration and hit the “Create Auto Scaling group” button. The Auto Scaling group will now launch new instances; you can go to “Instances” and check them.
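If you script the whole thing, the group itself can be created with a single CLI call; the launch template name, subnet IDs, and Classic Load Balancer name below are placeholders for the resources created in the earlier parts:

```bash
# Create the Auto Scaling group: desired 2, minimum 2, maximum 5,
# spread across subnets in all Availability Zones and attached
# to the Classic Load Balancer from part two
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name failover-demo-asg \
  --launch-template LaunchTemplateName=failover-nginx-template,Version='$Latest' \
  --min-size 2 \
  --max-size 5 \
  --desired-capacity 2 \
  --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222,subnet-cccc3333" \
  --load-balancer-names my-classic-elb \
  --tags Key=Name,Value="Appus Studio",PropagateAtLaunch=true
```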


Wait until the “Status check” column shows “2/2 checks passed”, then go to the Load Balancers section and check the instance status for our balancer. Once you see “2 of 2 instances in service”, all instances have passed their health checks. You can check that the load balancer works by opening its DNS name in a browser (https://your-load-balancer-domain). For now, the page is not secured, because the SSL certificate was issued for the domain name you specified in part one of this series. We will fix that in the next part.
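You can also do the same check from a terminal; the hostname below is a placeholder for your load balancer’s DNS name (shown on its description tab). If your start page includes the instance hostname, as in the script sketch above, the responses will alternate between the two instances:

```bash
# Request the page through the load balancer several times;
# alternating hostnames show that both instances are serving traffic
for i in 1 2 3 4; do
  curl -s http://my-classic-elb-1234567890.us-east-1.elb.amazonaws.com/
done
```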

Later on, you can modify any of the Auto Scaling group’s parameters, such as the scaling policy or the attached load balancer.

This is the end of this part, and I’m looking forward to seeing you in the next one!
