DEV Community

PaddyAdallah

Deploy a high-availability web app using CloudFormation — Servers and Security Groups

In **Part 1** of this walkthrough, we deployed and verified the networking infrastructure for a web application on AWS. This article walks through the next set of resources, built on top of that networking layer. By the end of this article, we will:

  • Specify firewall rules using security groups

  • Create autoscaling groups for the elasticity of servers

  • Code the launch configuration for the web application

  • Add the target groups and listeners

  • Update the stack with the load balancer

  • Learn to debug the security groups

Network Diagram

Throughout, we refer back to the network diagram designed earlier in Lucidchart to make sense of the CloudFormation script.

Here in the servers.yml file, we have one parameter: the same environment name used in the network infrastructure deployment. This lets us refer to the output values exported by our network creation script.
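As a minimal sketch (the parameter name and export suffix below are assumptions, not necessarily the exact ones in the repository), the Parameters section and a cross-stack reference could look like:

```yaml
Parameters:
  # Must match the EnvironmentName used when deploying the network stack,
  # so that Fn::ImportValue can resolve its exported outputs.
  EnvironmentName:
    Description: Environment name prefixed to resource names and exports
    Type: String
```

Elsewhere in the template, an exported value such as the VPC ID is then consumed with `Fn::ImportValue: !Sub "${EnvironmentName}-VPCID"`, assuming the network stack exported it under that name.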

Resources

**Security Groups** — are associated with the specific resources we deploy, not with subnets. They filter traffic to and from those resources through ingress and egress rules on particular ports. By default, all traffic is blocked, so we must explicitly open ports to allow traffic in and out.
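A sketch of a web-server security group, assuming a hypothetical logical ID and that the network stack exports its VPC ID as `${EnvironmentName}-VPCID`:

```yaml
WebServerSecGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Allow HTTP to our hosts and unrestricted outbound
    VpcId:
      Fn::ImportValue: !Sub "${EnvironmentName}-VPCID"
    SecurityGroupIngress:
      # Open port 80 so web traffic can reach the server
      - IpProtocol: tcp
        FromPort: 80
        ToPort: 80
        CidrIp: 0.0.0.0/0
    SecurityGroupEgress:
      # Allow all outbound TCP, e.g. for downloading packages and updates
      - IpProtocol: tcp
        FromPort: 0
        ToPort: 65535
        CidrIp: 0.0.0.0/0
```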


**Auto Scaling Group** — creates or removes servers based on user-defined criteria. We provide the Auto Scaling group with a launch configuration. You can use an Auto Scaling group even when your application needs only one server: it keeps track of that one server, and if anything ever happens to it, it spins up a replacement.

Under the parameters, we specify the Amazon Machine Image (AMI) and the instance type that AWS will deploy in the Auto Scaling group. Amazon documents the available AMIs and instance types.
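A sketch of the Auto Scaling group; the logical IDs, group sizes, and the private-subnet export name are illustrative assumptions:

```yaml
WebAppGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    # Launch instances in the private subnets exported by the network stack
    VPCZoneIdentifier:
      - Fn::ImportValue: !Sub "${EnvironmentName}-PRIV-NETS"
    LaunchConfigurationName: !Ref WebAppLaunchConfig
    MinSize: "2"
    MaxSize: "4"
    # Register new instances with the load balancer's target group
    TargetGroupARNs:
      - !Ref WebAppTargetGroup
```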


**Launch configuration** — specifies the configuration of the instances we launch, such as volume size, instance type, image ID, SSH keys, and the instance profile associated with them. For our deployment, we install an Apache server and serve a static web page to verify a successful deployment.
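A sketch of the launch configuration, assuming Ubuntu-based instances; the AMI parameter name, instance type, volume size, and test-page text are placeholders:

```yaml
WebAppLaunchConfig:
  Type: AWS::AutoScaling::LaunchConfiguration
  Properties:
    ImageId: !Ref WebAppAMI        # hypothetical parameter holding the AMI ID
    InstanceType: t3.small
    IamInstanceProfile: !Ref ProfileWithRolesForOurApp
    SecurityGroups:
      - !Ref WebServerSecGroup
    BlockDeviceMappings:
      - DeviceName: /dev/sdk
        Ebs:
          VolumeSize: "10"         # GiB
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash
        # Install Apache and serve a static test page
        apt-get update -y
        apt-get install -y apache2
        systemctl start apache2.service
        echo "It works!" > /var/www/html/index.html
```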

**Load balancer** — distributes traffic across the instances in the Auto Scaling group. It requires subnets in at least two Availability Zones so that it is not a single point of failure: it offers a single entry point into the web app but runs more than one copy internally for redundancy.

We add health checks to the load balancer to monitor our application. If the application stops responding, that server is reported unhealthy, and the Auto Scaling group terminates it and spins up a new one.
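A sketch of the load balancer resource, assuming the network stack exports two public subnets and that a separate load-balancer security group exists (all names here are assumptions):

```yaml
WebAppLB:
  Type: AWS::ElasticLoadBalancingV2::LoadBalancer
  Properties:
    # Subnets in two Availability Zones so the LB itself is redundant
    Subnets:
      - Fn::ImportValue: !Sub "${EnvironmentName}-PUB1-SN"
      - Fn::ImportValue: !Sub "${EnvironmentName}-PUB2-SN"
    SecurityGroups:
      - !Ref LBSecGroup
```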


**Target group** — its main components are the health checks, which monitor the status of the registered targets. For our case, we monitor port 8080 of the Auto Scaling group's instances for an HTTP response. An UnhealthyThresholdCount of five means a target is marked unhealthy after five consecutive failed checks by the load balancer; the checks run at 10-second intervals. If the LB gets two valid responses in those 10-second intervals, the server is presumed healthy, and the LB will forward traffic to it.
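The thresholds described above map onto the target group properties like this (the logical ID and health-check path are assumptions; the counts and interval match the text):

```yaml
WebAppTargetGroup:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    Port: 8080
    Protocol: HTTP
    HealthCheckPath: /
    HealthCheckProtocol: HTTP
    HealthCheckIntervalSeconds: 10   # check every 10 seconds
    HealthCheckTimeoutSeconds: 8
    HealthyThresholdCount: 2         # two successes -> healthy
    UnhealthyThresholdCount: 5       # five failures -> unhealthy
    VpcId:
      Fn::ImportValue: !Sub "${EnvironmentName}-VPCID"
```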

**Application Load Balancer Listener** — sends traffic to the target group. We specify a path that it listens on. Our service port is 80 for HTTP; we could instead use port 443 (HTTPS), which would require an SSL certificate.
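A sketch of the listener and a path-based rule (logical IDs are assumptions):

```yaml
Listener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    LoadBalancerArn: !Ref WebAppLB
    Port: 80
    Protocol: HTTP
    DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref WebAppTargetGroup

ListenerRule:
  Type: AWS::ElasticLoadBalancingV2::ListenerRule
  Properties:
    ListenerArn: !Ref Listener
    Priority: 1
    Conditions:
      # Forward requests for the root path to the target group
      - Field: path-pattern
        Values: ["/"]
    Actions:
      - Type: forward
        TargetGroupArn: !Ref WebAppTargetGroup
```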

Outputs

We retrieve the URL of our load balancer as an output to test the successful deployment of the server infrastructure.
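The output might be defined as follows (the output and export names are assumptions):

```yaml
Outputs:
  LoadBalancerURL:
    Description: Public URL of the load balancer
    # DNSName is an attribute of the load balancer resource
    Value: !Sub "http://${WebAppLB.DNSName}"
    Export:
      Name: !Sub "${EnvironmentName}-LB-URL"
```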


**S3 Bucket**

From the design diagram, the deployed web app uses an S3 bucket for storage. We include the specifications for the S3 bucket in our CloudFormation script. Under the resources section, we create a role that permits access to the S3 bucket. The role is then attached to an instance profile.
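A sketch of the role and instance profile; the names and read-only S3 permissions are assumptions, and in practice you would scope `Resource` to the specific bucket rather than `"*"`:

```yaml
S3ReadOnlyEC2Role:
  Type: AWS::IAM::Role
  Properties:
    # Allow EC2 instances to assume this role
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            Service: [ec2.amazonaws.com]
          Action: ["sts:AssumeRole"]
    Policies:
      - PolicyName: S3ReadAccess
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Action: ["s3:GetObject", "s3:ListBucket"]
              Resource: "*"

# The instance profile attaches the role to EC2 instances
ProfileWithRolesForOurApp:
  Type: AWS::IAM::InstanceProfile
  Properties:
    Roles:
      - !Ref S3ReadOnlyEC2Role
```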


JSON File

Our JSON file refers to the network infrastructure deployed in Part 1 of this walkthrough. Again, keeping the actual parameter values in this separate file lets you change the data used by your CloudFormation script without modifying the script itself and risking a typo or a logic error.
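The parameter file is a JSON array of key/value pairs; a minimal sketch (the value is a placeholder, not the repository's actual one):

```json
[
  {
    "ParameterKey": "EnvironmentName",
    "ParameterValue": "MyWebApp"
  }
]
```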


Finally, with the AWS CLI installed on our local machine, we update the CloudFormation stack by passing the YAML template and the JSON parameter file to the CloudFormation call:

```shell
aws cloudformation update-stack --stack-name ourNetworkInfra \
    --template-body file://server_deployment.yml \
    --parameters file://server_parameters.json \
    --region us-east-1
```

You can view the deployment status through the AWS console or with the `describe-stacks` subcommand of the CloudFormation CLI, as below:

```shell
aws cloudformation describe-stacks --stack-name ourNetworkInfra
```

Conclusion

For more information on your deployed stack, read the CloudFormation CLI documentation reference. Feel free to drop your queries in the comment section. I will be happy to respond and learn from you as well.

You will find the GitHub repository for the project, containing the complete YAML files and configurations, at the link below:

https://github.com/PaddyAdallah/AWS-Projects/tree/main/cloudformation_project
