<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Taiwo Akinbolaji</title>
    <description>The latest articles on DEV Community by Taiwo Akinbolaji (@taiwoakinbolaji).</description>
    <link>https://dev.to/taiwoakinbolaji</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3203875%2Fda83ee41-e656-4f01-b562-f87ad763b165.jpeg</url>
      <title>DEV Community: Taiwo Akinbolaji</title>
      <link>https://dev.to/taiwoakinbolaji</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/taiwoakinbolaji"/>
    <language>en</language>
    <item>
      <title>How to Automate Instance Management with AWS SDK for Python (Boto3)</title>
      <dc:creator>Taiwo Akinbolaji</dc:creator>
      <pubDate>Wed, 26 Nov 2025 15:29:49 +0000</pubDate>
      <link>https://dev.to/taiwoakinbolaji/how-to-automate-instance-management-with-aws-sdk-for-python-boto3-33i3</link>
      <guid>https://dev.to/taiwoakinbolaji/how-to-automate-instance-management-with-aws-sdk-for-python-boto3-33i3</guid>
      <description>&lt;p&gt;&lt;strong&gt;INTRODUCTION&lt;/strong&gt;: &lt;/p&gt;

&lt;p&gt;As many of us know, DevOps focuses on automating tasks to save valuable time in the workplace. In this project, we’ll explore how Python can help streamline the management of AWS resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Cloud9&lt;/strong&gt; is a cloud-based Integrated Development Environment (IDE) that enables you to write, run, and debug code directly in the cloud. It’s especially useful for collaborative projects, as multiple developers can work on the same codebase simultaneously.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Boto3&lt;/strong&gt; is a Python library that provides a simple and intuitive interface for interacting with AWS services such as EC2, S3, and DynamoDB, making automation easier and more efficient.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS CLI and Boto3 installed&lt;/li&gt;
&lt;li&gt;AWS account with IAM user access (not the root user)&lt;/li&gt;
&lt;li&gt;Basic AWS command line knowledge&lt;/li&gt;
&lt;li&gt;Basic Python programming knowledge&lt;/li&gt;
&lt;li&gt;Basic familiarity with the AWS Cloud9 Integrated Development Environment (IDE)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The Project&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PART I&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Our DevOps team frequently uses a development lab to test new releases of our application. However, management has raised concerns about the rising costs of maintaining the lab. To reduce expenses, we need to stop all of the lab’s EC2 instances (three, in this example) once all engineers have finished work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Task:&lt;/strong&gt; Create a Python script that can be executed to stop all EC2 instances.&lt;/p&gt;
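&lt;p&gt;Before walking through the full setup, here is a minimal sketch of what the Part I script boils down to: find every running instance and stop it. The helper name and filter constant below are illustrative, not taken from the screenshots later in this post:&lt;/p&gt;

```python
# Sketch of Part I: stop every running EC2 instance in the configured region.
RUNNING_FILTER = [{'Name': 'instance-state-name', 'Values': ['running']}]

def stop_all_running_instances():
    """Stop every EC2 instance currently in the 'running' state and return their IDs."""
    import boto3  # imported inside the function so the file can be read without AWS access

    ec2 = boto3.resource('ec2')
    running = ec2.instances.filter(Filters=RUNNING_FILTER)
    stopped_ids = [instance.id for instance in running]
    running.stop()  # issues a batch StopInstances call for every matched instance
    return stopped_ids

if __name__ == '__main__':
    print(stop_all_running_instances())
```

Running this requires AWS credentials with EC2 stop permissions; the full walkthrough below builds up to an equivalent script step by step.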

&lt;p&gt;&lt;strong&gt;PART II – Advanced&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To avoid affecting production workloads, we want to ensure that only development instances are stopped. Enhance your script to include logic that stops only the running instances that have the tag Environment: Dev, leaving all other instances untouched.&lt;/p&gt;

&lt;p&gt;For more background on AWS Cloud9 and Boto3, see the official AWS and Cloud9 documentation on the AWS website.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Install Boto3&lt;/strong&gt;&lt;br&gt;
First, we need to install Boto3 in the IDE environment so that our Python scripts can interact with our instances.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fooc712pjeywaae62re6d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fooc712pjeywaae62re6d.png" alt=" " width="800" height="106"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflg8ixqeoqqxd5tjhxod.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflg8ixqeoqqxd5tjhxod.png" alt=" " width="703" height="254"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We also need to install the AWS CLI by running the command below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3bmhxpfprijrc77wd2km.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3bmhxpfprijrc77wd2km.png" alt=" " width="800" height="109"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq7eq4hf3yp060bghj3dt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq7eq4hf3yp060bghj3dt.png" alt=" " width="800" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We already have a repo “gold-member” set up from our previous project, and we’ll create a branch on this repo for our project. Follow the steps below to create a new branch named “Week14ProjectBranch”:&lt;/p&gt;

&lt;p&gt;Go to Cloud9 &amp;gt; click Main &amp;gt; click Create new branch &amp;gt; name the branch “Week14ProjectBranch” &amp;gt; then hit the Enter key on your keyboard to switch automatically to this new branch.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu9vquefiv1dn91merdbf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu9vquefiv1dn91merdbf.png" alt=" " width="800" height="613"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg3fobi2nrs7avqd5u3zj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg3fobi2nrs7avqd5u3zj.png" alt=" " width="800" height="574"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the Cloud9 CLI, clone the newly created repository by running the command below.&lt;/p&gt;

&lt;p&gt;As you can see above, our repo has now been cloned from our remote GitHub to our local Cloud9 IDE.&lt;/p&gt;

&lt;p&gt;Next, we need to create a Python file to work in. Follow the steps below:&lt;/p&gt;

&lt;p&gt;Go to Cloud9 &amp;gt; select File &amp;gt; New From Template &amp;gt; Python File &amp;gt; Save As… and give it a name. Make sure you KEEP the .py extension.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg4xezt9qxacg6k0eo5v2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg4xezt9qxacg6k0eo5v2.png" alt=" " width="800" height="569"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhxctc79igkvbvs35amn5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhxctc79igkvbvs35amn5.png" alt=" " width="800" height="529"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, switch to our working directory for this project, “gold-member”, using the command below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmuv84jls18pzgaliw48t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmuv84jls18pzgaliw48t.png" alt=" " width="800" height="106"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Create a Python script that you can run to create, start, and stop all instances.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Our objective is to create a Python script that will create, and later stop, all three EC2 instances. To achieve this we’ll use the code below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6qogi5vyo6ya7lnuk31.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6qogi5vyo6ya7lnuk31.png" alt=" " width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s walk through how each line of the code works:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Line 1: import boto3&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fum237ourc0qa1ptjbewk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fum237ourc0qa1ptjbewk.png" alt=" " width="800" height="106"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This line is used to import the Boto3 library, which is a Python library for interacting with AWS services programmatically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Line 2:&lt;/strong&gt; ec2 = boto3.resource('ec2')&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq4378q554855rominzzb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq4378q554855rominzzb.png" alt=" " width="800" height="107"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This line creates an EC2 resource object by calling the resource() method of Boto3, with the ec2 variable holding the resulting object. This object represents the EC2 service in AWS and is what we use to interact with our EC2 instances.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lines 3 to 8:&lt;/strong&gt; ec2.create_instances()&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;instance = ec2.create_instances(
    ImageId='ami-09c5c62bac0d0634e',
    InstanceType='t2.micro',
    MinCount=3,
    MaxCount=3,
    TagSpecifications=[{'ResourceType':'instance','Tags':[{'Key':'Name','Value':'EmmanueEnv'}]}]
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The call instance = ec2.create_instances(), shown above with several parameters, lets us launch one or more EC2 instances by specifying the following details:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ImageId:&lt;/strong&gt; This is the Amazon Machine Image (AMI) we want to use for our instance. We will be using a Linux AMI with the ID ami-09c5c62bac0d0634e.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;InstanceType:&lt;/strong&gt; This is the type of EC2 instance we want to launch. In this case we’re using t2.micro, which is free tier eligible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MinCount:&lt;/strong&gt; This is the minimum number of instances we want to launch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MaxCount:&lt;/strong&gt; This is the maximum number of instances we want to launch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TagSpecifications&lt;/strong&gt;: This defines the tags on our instances. Adding a tag with the key Name and a chosen value allows us to name our instances.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Line 9: print(instance)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgj449li6d5z4rwon2qqs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgj449li6d5z4rwon2qqs.png" alt=" " width="800" height="104"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, we print the instance object using the print(instance) statement, giving us the output of our newly created instances, including their IDs, state, and so on.&lt;/p&gt;
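&lt;p&gt;Note that print(instance) prints a fairly terse list of Instance objects. If you want a friendlier summary, you could format the IDs and states yourself. The helper below is an illustrative sketch, not part of the original script; the FakeInstance class merely stands in for real boto3 Instance objects so the example runs anywhere:&lt;/p&gt;

```python
def summarize_instances(instances):
    """Return 'id: state' summary strings for a list of boto3 Instance objects."""
    return [f"{inst.id}: {inst.state['Name']}" for inst in instances]

# Demo with simple stand-in objects; in the real script you would pass the
# list returned by ec2.create_instances(...) instead.
class FakeInstance:
    def __init__(self, instance_id, state_name):
        self.id = instance_id
        self.state = {'Name': state_name}

demo = [FakeInstance('i-0abc123', 'pending'), FakeInstance('i-0def456', 'running')]
print(summarize_instances(demo))  # prints ['i-0abc123: pending', 'i-0def456: running']
```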

&lt;p&gt;Now let’s create our EC2 Instances. For the purpose of this project we are going to create 3 instances for production and another 3 for development.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create Development Instances&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We will use the same code above for both sets of instances. The only difference will be the TagSpecifications: the value Production identifies our production EC2 instances and Development identifies our development EC2 instances. For the Development instances, use the code below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3


ec2 = boto3.resource('ec2')


dev = ec2.create_instances(
    ImageId='ami-09c5c62bac0d0634e',  # AMI ID is specific to your account
    InstanceType='t2.micro',
    MaxCount=3,
    MinCount=3,
    TagSpecifications=[
        {
            'ResourceType': 'instance',
            'Tags': [
                {'Key': 'Name', 'Value': 'Development'},  # Change the tag values as required
                {'Key': 'ENV', 'Value': 'Development'}
            ]
        }
    ]
)

print(dev)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fosxrwcw5wv1yyl0fzleg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fosxrwcw5wv1yyl0fzleg.png" alt=" " width="800" height="776"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbldt3i9j0e2yhoclmtfs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbldt3i9j0e2yhoclmtfs.png" alt=" " width="800" height="144"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create Production Instances&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To create the production instances we use the same code as above, changing the TagSpecifications values from Development to Production as shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

ec2 = boto3.resource('ec2')


prod = ec2.create_instances(
    ImageId='ami-09c5c62bac0d0634e',  # AMI ID is specific to your account
    InstanceType='t2.micro',
    MaxCount=3,
    MinCount=3,
    TagSpecifications=[
        {
            'ResourceType': 'instance',
            'Tags': [
                {'Key': 'Name', 'Value': 'Production'},  # Change the tag values as required
                {'Key': 'ENV', 'Value': 'Production'}
            ]
        }
    ]
)

print(prod)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe2121cdvakl4peh3c9d7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe2121cdvakl4peh3c9d7.png" alt=" " width="800" height="662"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0yh6cqm25gyyd0xsk1x7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0yh6cqm25gyyd0xsk1x7.png" alt=" " width="800" height="161"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Script to Stop All Three Development Instances&lt;/strong&gt;&lt;br&gt;
To stop all three development instances, use the script below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2czdvzceumwwi7n86px4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2czdvzceumwwi7n86px4.png" alt=" " width="800" height="508"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Friplfo2vbsas4p7414p7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Friplfo2vbsas4p7414p7.png" alt=" " width="800" height="777"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwnv6kqq9cc6490i7sx48.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwnv6kqq9cc6490i7sx48.png" alt=" " width="800" height="168"&gt;&lt;/a&gt;&lt;/p&gt;
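&lt;p&gt;For readers who prefer text over screenshots, here is a minimal sketch of a stop script along the lines of the one shown above. It relies on the ENV: Development tag we applied when creating the instances; the function name and filter constant are illustrative, not from the screenshots:&lt;/p&gt;

```python
# Part II sketch: stop only running instances tagged ENV=Development.
DEV_FILTERS = [
    {'Name': 'tag:ENV', 'Values': ['Development']},
    {'Name': 'instance-state-name', 'Values': ['running']},
]

def stop_dev_instances():
    """Stop running ENV=Development instances, leaving production untouched."""
    import boto3  # imported inside the function so the file can be read without AWS access

    ec2 = boto3.resource('ec2')
    targets = ec2.instances.filter(Filters=DEV_FILTERS)
    stopped_ids = [instance.id for instance in targets]
    targets.stop()  # batch StopInstances for the matched development instances only
    return stopped_ids

if __name__ == '__main__':
    print(stop_dev_instances())
```

Because the filter matches on both the tag and the running state, production instances and already-stopped instances are never touched.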

&lt;p&gt;&lt;strong&gt;Push Python Script Code to GitHub&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Awesome! That completes the first part of the project! Now make sure to push your code to GitHub, as this is always good practice. With our code on GitHub we can always access it wherever and whenever we need to. Click here to see how to push our project to GitHub.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Challenges Encountered&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When I initially tried to run the code, I got the error message shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;traceback (most recent call last): file "/home/ec2-user/environment/gold-member/week14projectfile.py", line 1, in &amp;lt;module&amp;gt; import boto3 modulenotfounderror: no module named 'boto3'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;From research online, I observed that this happened because Boto3 was not installed in the same Python environment used to run our file at “/gold-member/Week14ProjectFile.py”. Navigate to your Python environment (the location of the python executable) on the command line and then reinstall Boto3 by running the command below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftrag7kspbpve5x416wo7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftrag7kspbpve5x416wo7.png" alt=" " width="800" height="108"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I hope you have also learnt something about automating EC2 instance management with Boto3.&lt;/p&gt;

&lt;p&gt;If this was insightful to you, please give my blog a follow.&lt;/p&gt;

</description>
      <category>python</category>
      <category>aws</category>
      <category>ec2</category>
      <category>cloud</category>
    </item>
    <item>
      <title>How to Create Auto Scaling Groups of EC2 Instances for High Availability</title>
      <dc:creator>Taiwo Akinbolaji</dc:creator>
      <pubDate>Wed, 26 Nov 2025 14:56:38 +0000</pubDate>
      <link>https://dev.to/taiwoakinbolaji/how-to-create-auto-scaling-groups-of-ec2-instances-for-high-availability-3mj4</link>
      <guid>https://dev.to/taiwoakinbolaji/how-to-create-auto-scaling-groups-of-ec2-instances-for-high-availability-3mj4</guid>
      <description>&lt;p&gt;&lt;strong&gt;INTRODUCTION&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you’ve ever launched an EC2 instance on AWS, you were already working inside a Virtual Private Cloud (VPC). A VPC is essentially the virtual network that controls all your networking activities. Its setup determines how different parts of your infrastructure communicate with each other and how they access the public internet.&lt;/p&gt;

&lt;p&gt;In this project, the goal is to build an environment where an auto-scaling group automatically handles the provisioning and termination of EC2 instances, while elastic load balancers evenly distribute incoming traffic across those instances.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DEFINITION OF TERMS&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;VPC:&lt;/strong&gt;&lt;br&gt;
A Virtual Private Cloud (VPC) is essentially a private section of the cloud that exists within a larger public cloud environment. It creates a dedicated, logically isolated space where your resources can operate securely, giving you greater control over networking and access.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Subnets:&lt;/strong&gt;&lt;br&gt;
A subnet, or subnetwork, is a smaller network carved out of a larger one. Subnetting helps improve network performance and organization. By dividing a network into subnets, data can move more efficiently since it doesn’t have to travel through unnecessary routers to reach its destination. Subnets are a fundamental part of setting up a VPC.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Internet Gateway:&lt;/strong&gt;&lt;br&gt;
An Internet Gateway is the component that enables communication between your AWS VPC and the public internet. It serves as the entry and exit point for internet-bound traffic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Load Balancer:&lt;/strong&gt;&lt;br&gt;
An Application Load Balancer distributes incoming HTTP and HTTPS traffic across multiple targets—such as EC2 instances, containers, or microservices. It evaluates each incoming request using a set of prioritized listener rules and then directs the traffic to the appropriate target group based on those rules.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Route Table:&lt;/strong&gt;&lt;br&gt;
A Route Table contains routing rules that determine how network traffic flows within the VPC. These rules guide traffic coming from subnets or from the internet gateway to their intended destinations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Launch Template:&lt;/strong&gt;&lt;br&gt;
A Launch Template stores predefined configuration settings used to create AWS resources like EC2 instances. It ensures consistency and simplifies the process of launching new instances.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;High Availability:&lt;/strong&gt;&lt;br&gt;
High availability refers to a system’s capability to remain operational for as long as required, with minimal downtime. It focuses on eliminating single points of failure so that applications continue running even if one of the underlying components, such as a server, experiences a failure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;USE CASES&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Being able to scale a web application and evenly distribute incoming traffic across multiple instances is crucial for maintaining high availability in modern applications. In this guide, we’ll demonstrate how AWS Application Load Balancers and Auto Scaling Groups can be used to achieve this.&lt;/p&gt;

&lt;p&gt;An Auto Scaling Group automatically provisions and terminates EC2 instances based on demand, while an Elastic Load Balancer ensures that incoming requests are efficiently routed across all active instances.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Setup VPC&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;On the AWS console, go to the VPC dashboard and select Create VPC to set up a new VPC named project7vpc, as illustrated below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2dwqc4qm3o7ypx5gns5o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2dwqc4qm3o7ypx5gns5o.png" alt=" " width="800" height="179"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Create three Public Subnets With 10.10.1.0/24 &amp;amp; 10.10.2.0/24 &amp;amp; 10.10.3.0/24&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;i. Click Subnets on the left-hand panel of the VPC dashboard.&lt;/p&gt;

&lt;p&gt;ii. Click Create subnet.&lt;/p&gt;

&lt;p&gt;iii. Under VPC ID, select the VPC we created earlier, i.e. project7vpc.&lt;/p&gt;

&lt;p&gt;iv. In the subnet name field, enter project7subnet1.&lt;/p&gt;

&lt;p&gt;v. Enter the IPv4 CIDR block 10.10.1.0/24 and click Create subnet to create our first subnet, project7subnet1.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8vqqg8adpy1okvxfset0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8vqqg8adpy1okvxfset0.png" alt=" " width="800" height="573"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click add subnet to create subnets project7subnet2 and project7subnet3 with CIDR blocks 10.10.2.0/24 &amp;amp; 10.10.3.0/24 respectively.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx5zzjshlsaiwshlt7r7j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx5zzjshlsaiwshlt7r7j.png" alt=" " width="800" height="88"&gt;&lt;/a&gt;&lt;/p&gt;
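&lt;p&gt;In keeping with the automation theme of this series, the console steps above could also be scripted with Boto3. The sketch below mirrors the VPC and subnet setup; note that the walkthrough does not state the VPC’s own CIDR block, so 10.10.0.0/16 is an assumption chosen to contain the three subnet ranges:&lt;/p&gt;

```python
# Sketch of the console steps: create project7vpc and its three public subnets.
SUBNET_CIDRS = ['10.10.1.0/24', '10.10.2.0/24', '10.10.3.0/24']

def create_vpc_with_subnets():
    """Create project7vpc and three tagged subnets, returning the new resources."""
    import boto3  # imported inside the function so the file can be read without AWS access

    ec2 = boto3.resource('ec2')
    # Assumption: the VPC CIDR is not given in the walkthrough; 10.10.0.0/16
    # is chosen here because it contains all three subnet ranges above.
    vpc = ec2.create_vpc(CidrBlock='10.10.0.0/16')
    vpc.create_tags(Tags=[{'Key': 'Name', 'Value': 'project7vpc'}])
    subnets = []
    for index, cidr in enumerate(SUBNET_CIDRS, start=1):
        subnet = vpc.create_subnet(CidrBlock=cidr)
        subnet.create_tags(Tags=[{'Key': 'Name', 'Value': f'project7subnet{index}'}])
        subnets.append(subnet)
    return vpc, subnets
```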

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; A VPC is completely private by default. While resources within the same VPC—such as subnets—can communicate with one another, they cannot access other VPCs or the public internet for security reasons. To make our subnets publicly accessible and allow them to reach the internet, we must attach an Internet Gateway to the VPC and associate it with the subnets.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3a: Connect Subnets to the Internet Gateway&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;(i) Create an internet gateway&lt;/p&gt;

&lt;p&gt;(ii) Attach the gateway to the VPC&lt;/p&gt;

&lt;p&gt;(iii) Create a route table and then&lt;/p&gt;

&lt;p&gt;(iv) Create a route for the gateway&lt;/p&gt;

&lt;p&gt;(v) Attach the route table to the public subnets&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;(a) Create an Internet Gateway&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;From the left navigation pane in the AWS console, select Internet Gateways and then choose Create internet gateway.&lt;/p&gt;

&lt;p&gt;Enter a name in the Name tag field—let’s use project7igw—and proceed to create it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;(b) Now to attach the gateway to the VPC&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;From the AWS console, go to Internet Gateways and select the gateway we created, project7igw.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4sipa5fnk8no0eya2uh8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4sipa5fnk8no0eya2uh8.png" alt=" " width="800" height="539"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click Actions and then select Attach to VPC.&lt;/p&gt;

&lt;p&gt;Select the custom VPC we created earlier from the available VPCs and then click Attach internet gateway.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;(c) Create a Route Table&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click VPC, then Route tables, then Create route table.&lt;/li&gt;
&lt;li&gt;Give the route table a name (let’s call it project7rtb), select the VPC we want to attach it to, and then click Create route table.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;(d) Create a Route for the Gateway&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;On the route table we just created:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select Edit routes&lt;/li&gt;
&lt;li&gt;Select Add route to route traffic to the public internet&lt;/li&gt;
&lt;li&gt;Enter 0.0.0.0/0 as the destination, set the target to the internet gateway we created earlier, and save the changes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;(e) Attach the Route Table to the Public Subnets&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to Subnets and then to the public subnets&lt;/li&gt;
&lt;li&gt;Select each of the 3 subnets&lt;/li&gt;
&lt;li&gt;Go to the Route table tab, click Actions, and then Edit route table association&lt;/li&gt;
&lt;li&gt;Then change the route table ID to the route table we created&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4hs6xamnsqrtg16z4vio.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4hs6xamnsqrtg16z4vio.png" alt=" " width="800" height="570"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Create An Autoscaling Group Using The t2.micro Instances&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Next, we’ll proceed to create an Auto Scaling Group using t2.micro instances. Make sure you have your Apache installation script ready as well.&lt;/p&gt;

&lt;p&gt;Go to the EC2 Dashboard, and in the left-hand menu under the Auto Scaling section, select Auto Scaling Groups to begin the setup.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fal5zm4j0umz2km94ymma.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fal5zm4j0umz2km94ymma.png" alt=" " width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on Create Auto Scaling group to begin. The Auto Scaling Group controls how many EC2 instances should be running at any given time and manages all the automatic scaling operations for the environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa3bq6ddqwe3ltekq5m5m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa3bq6ddqwe3ltekq5m5m.png" alt=" " width="800" height="222"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we need to create a Launch Template. This template contains all the necessary configuration details for launching our instances, such as the AMI, security group, and other settings. We will use this Launch Template to create our Auto Scaling Group, which will then scale instances up or down as needed based on the template. Click Create Launch Template to proceed, as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvdck1qoqury1zkxlgx6v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvdck1qoqury1zkxlgx6v.png" alt=" " width="800" height="597"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, fill in the Launch Template details with the required names and configuration. Choose the Amazon Machine Image (AMI) for your instances—either an Amazon Linux AMI or an Ubuntu AMI works well for installing Apache. Make sure to set the instance type to t2.micro.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzgivuifl2nka802p3e1e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzgivuifl2nka802p3e1e.png" alt=" " width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To stay consistent with past projects, let’s choose the Amazon Linux AMI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffd8kgdujj78fo63u9z5a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffd8kgdujj78fo63u9z5a.png" alt=" " width="800" height="618"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are going to use the Free Tier-eligible t2.micro instance type.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcv8k97xsirm8gyh5219d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcv8k97xsirm8gyh5219d.png" alt=" " width="800" height="261"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To verify that our instances are public and accessible, we’ll need to SSH into them. Use your existing key pair, or create a new one if you prefer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F37huhtqilhck1rn2bjfz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F37huhtqilhck1rn2bjfz.png" alt=" " width="800" height="215"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the networking settings section, select the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fllmruuzca9f3dqzmotx2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fllmruuzca9f3dqzmotx2.png" alt=" " width="800" height="674"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we need to configure the security group to allow incoming traffic. Add a rule to permit internet traffic on port 8080, and also include port 22 so we can SSH into the instances. Click Add Security Group Rule to set this up.&lt;/p&gt;
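
&lt;p&gt;For reference, the same two inbound rules can be expressed programmatically. Below is a sketch in the parameter format used by boto3’s EC2 &lt;code&gt;authorize_security_group_ingress&lt;/code&gt; call; the security group ID in the comment is a placeholder, not a real ID.&lt;/p&gt;

```python
# The same two inbound rules, as boto3 "IpPermissions" structures.
ingress_rules = [
    {   # allow internet traffic to the application on port 8080
        "IpProtocol": "tcp",
        "FromPort": 8080,
        "ToPort": 8080,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    },
    {   # allow SSH so we can connect to the instances
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    },
]

# With AWS credentials configured, these would be applied with:
#   import boto3
#   ec2 = boto3.client("ec2")
#   ec2.authorize_security_group_ingress(GroupId="sg-0123...",  # placeholder ID
#                                        IpPermissions=ingress_rules)
print([rule["FromPort"] for rule in ingress_rules])  # [8080, 22]
```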

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffgu0hwd97plkeriviaw1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffgu0hwd97plkeriviaw1.png" alt=" " width="800" height="217"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now select the drop-down arrow for ‘Advanced network configuration’.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs3c42om1xkcy1fzsdw21.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs3c42om1xkcy1fzsdw21.png" alt=" " width="800" height="120"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We want to enable Auto-assign public IP.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqjc57535gdo8hr9trg49.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqjc57535gdo8hr9trg49.png" alt=" " width="800" height="265"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note: When hosting a web application or service on an EC2 instance in the cloud, you generally want it to be reachable from the public internet. Enabling auto-assign public IP automatically gives the instance a public IP address, making it accessible online without any extra setup.&lt;/p&gt;

&lt;p&gt;Now scroll all the way down and expand Advanced details. Locate the User data field, where we can input our bash script.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2i6qgq3irq41qe1huwaf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2i6qgq3irq41qe1huwaf.png" alt=" " width="800" height="530"&gt;&lt;/a&gt;&lt;/p&gt;
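
&lt;p&gt;As a sketch, a minimal Apache install script for the User data field might look like the example below, together with the shape of the launch-template parameters boto3 would expect. The template name and AMI ID are placeholders; the console accepts the script as plain text, while the &lt;code&gt;create_launch_template&lt;/code&gt; API expects it base64-encoded.&lt;/p&gt;

```python
import base64

# A minimal Apache user-data script for Amazon Linux (runs once at first boot).
user_data = """#!/bin/bash
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
"""

# Sketch of the launch-template parameters; the name and AMI ID are placeholders.
template_params = {
    "LaunchTemplateName": "project7template",
    "LaunchTemplateData": {
        "ImageId": "ami-0123456789abcdef0",   # placeholder AMI ID
        "InstanceType": "t2.micro",
        "UserData": base64.b64encode(user_data.encode()).decode(),
    },
}

# With credentials: boto3.client("ec2").create_launch_template(**template_params)
decoded = base64.b64decode(template_params["LaunchTemplateData"]["UserData"]).decode()
print("httpd" in decoded)  # True
```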

&lt;p&gt;Successful!&lt;/p&gt;

&lt;p&gt;That completes the configuration of the EC2 instances that will be launched. Moving forward, we will begin to create our Load Balancer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Next, We Need to Create Our Load Balancer&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb8l29bbqq2kryay78a9a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb8l29bbqq2kryay78a9a.png" alt=" " width="800" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select Attach to a new load balancer, as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foafghmvoruhjzoauk6vd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foafghmvoruhjzoauk6vd.png" alt=" " width="800" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can now fill in the information as shown below; our Auto Scaling Group is automatically selected.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjpqu945x2u4yepnde75e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjpqu945x2u4yepnde75e.png" alt=" " width="800" height="579"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; An Application Load Balancer (ALB) is ideal for managing HTTP/HTTPS traffic and offers application-level routing features, whereas a Network Load Balancer (NLB) is better for TCP/UDP traffic and provides high-performance load balancing.&lt;/p&gt;

&lt;p&gt;In the Availability Zones and Subnets section, create a target group. You can leave most settings at their default values, but make sure to specify a name for the target group. Once the target group is created, return to the Load Balancer tab, refresh the list, and select the newly created group.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6: Launch and configure application load balancer&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Configure network mappings&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Go to the EC2 Dashboard, and in the left-hand menu, scroll down and select Load Balancers, then click Create Load Balancer.&lt;/p&gt;

&lt;p&gt;Choose to create an Application Load Balancer, give it a name, and keep the remaining basic settings at their default values. Proceed to the Network Mappings section, select your VPC, and then choose the three Availability Zones along with their corresponding subnets, as illustrated below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8rrltuycus253blpltx5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8rrltuycus253blpltx5.png" alt=" " width="800" height="272"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We give our load balancer a name; let’s call it “project7loadbalancer”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc2ik6njh5iioigbj6xz6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc2ik6njh5iioigbj6xz6.png" alt=" " width="800" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the Network mapping section, we select our VPC, i.e. “project7vpc”. See below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbemgqu6gilb797lgs2u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbemgqu6gilb797lgs2u.png" alt=" " width="800" height="236"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F05wokn6vneuxjzwdl4jq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F05wokn6vneuxjzwdl4jq.png" alt=" " width="800" height="653"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 7: Create Web Server Security group&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Next, move on to the Security Group settings and create a new security group. Give the security group a name and ensure it’s associated with the VPC you created earlier. Add an inbound rule to allow HTTP traffic from Anywhere (0.0.0.0/0), as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs3g563fb16zllvv96k4z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs3g563fb16zllvv96k4z.png" alt=" " width="800" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Also, add another rule of type SSH with the source set to Anywhere. &lt;/p&gt;

&lt;p&gt;Note: This poses a security risk, but for demonstration purposes, we’ll allow it in this example.&lt;/p&gt;

&lt;p&gt;Afterward, click Create Security Group. Then, select the newly created security group from the list.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkcu6jddixuxe4rmwvkkl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkcu6jddixuxe4rmwvkkl.png" alt=" " width="800" height="284"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Scroll to the bottom, review the summary, then click “Create load balancer”.&lt;/p&gt;

&lt;p&gt;Now our load balancer is up and running as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqabps6mgx9wzdbgjs10d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqabps6mgx9wzdbgjs10d.png" alt=" " width="800" height="227"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 8: Create An Autoscaling Group (ASG) Using The t2.micro Instances&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Configure new ASG launch options&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;From the Launch Template, click Create Auto Scaling Group, as shown below. Then, in the left-hand menu, scroll down, select Auto Scaling Groups, and click Create Auto Scaling Group.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqrew7tl9brwk8zz0sh17.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqrew7tl9brwk8zz0sh17.png" alt=" " width="800" height="354"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn5asscuv1he4ekuzluqk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn5asscuv1he4ekuzluqk.png" alt=" " width="800" height="527"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click Next. On the next page, we can select our VPC and the subnets we created earlier. Hit Next when finished.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feyikem4xjn1upnveavaz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feyikem4xjn1upnveavaz.png" alt=" " width="800" height="653"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we can create an ASG along with the required load balancer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuvrg17n9dutr15ju3nqd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuvrg17n9dutr15ju3nqd.png" alt=" " width="800" height="263"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F52geb1rv4zlw7kou1hdo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F52geb1rv4zlw7kou1hdo.png" alt=" " width="800" height="508"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fugfzz3jjszxqc4t603lk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fugfzz3jjszxqc4t603lk.png" alt=" " width="800" height="532"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configure ASG Group Size and CloudWatch Monitoring&lt;/strong&gt;&lt;br&gt;
For our use case, set the desired and minimum capacity to 2, and the maximum capacity to 5. Choose Target Scaling Policy, ensure the metric type is Average CPU Utilization, and set the target value to 50. Then click Next to proceed.&lt;/p&gt;
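
&lt;p&gt;As a sketch, the same capacity settings and target-tracking policy look like this in boto3 terms. The group and policy names are placeholders, and a real &lt;code&gt;create_auto_scaling_group&lt;/code&gt; call would additionally need the launch template and subnet IDs.&lt;/p&gt;

```python
# Capacity settings for the Auto Scaling Group (names are placeholders).
asg_params = {
    "AutoScalingGroupName": "project7asg",
    "MinSize": 2,
    "DesiredCapacity": 2,
    "MaxSize": 5,
}

# Target-tracking policy: keep average CPU utilization around 50%.
scaling_policy = {
    "AutoScalingGroupName": "project7asg",
    "PolicyName": "cpu50-target-tracking",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
}

# With credentials, these would be applied via the autoscaling client:
#   asc = boto3.client("autoscaling")
#   asc.create_auto_scaling_group(**asg_params, ...)  # plus launch template/subnets
#   asc.put_scaling_policy(**scaling_policy)
print(asg_params["MinSize"], asg_params["MaxSize"])  # 2 5
```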

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmyuiy9uydwnmclwhoucd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmyuiy9uydwnmclwhoucd.png" alt=" " width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqkur3a6cn8c1iaurni8q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqkur3a6cn8c1iaurni8q.png" alt=" " width="800" height="563"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Proceed through the setup until you reach the Review page. Check all the final configurations, then click Create Auto Scaling Group. You should see your ASG displaying a status of “updating capacity…” as it launches EC2 instances based on your predefined settings.&lt;/p&gt;

&lt;p&gt;Next, go to the EC2 Dashboard to confirm that two EC2 instances have been created and are running under your Auto Scaling Group.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqntoar8gih0ok25y602e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqntoar8gih0ok25y602e.png" alt=" " width="800" height="171"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 9: Connect to Servers running Apache Web Server&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Get the public IP address of each EC2 instance from the Networking tab in the Amazon EC2 dashboard. Copy and paste the IP address into your browser’s address bar. You should see the default Apache Web Server webpage displayed, as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fneo1tv8n5ttsbg72gckz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fneo1tv8n5ttsbg72gckz.png" alt=" " width="800" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmxu82z53dh0711qca29q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmxu82z53dh0711qca29q.png" alt=" " width="800" height="309"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Congratulations!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We have done an excellent job! We’ve successfully set up an infrastructure where an Auto Scaling Group automatically handles the creation and termination of EC2 instances, while Elastic Load Balancers efficiently distribute network traffic across those instances.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ADVANCED&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stress testing our Auto Scaling group&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this section, we’ll demonstrate how to test and verify that our Auto Scaling Group can maintain high availability by automatically scaling EC2 instances under stress. We’ll use CPU utilization exceeding 50% as the trigger for scaling.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: SSH into EC2 Instance And run stress command&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this step, we’ll connect to the instance via the command line using SSH, run a stress command to put load on the system, and observe how it handles the increased demand.&lt;/p&gt;

&lt;p&gt;Once connected, run the following command to put stress on the system, as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Futw08ckchnsw73ne1vte.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Futw08ckchnsw73ne1vte.png" alt=" " width="800" height="121"&gt;&lt;/a&gt;&lt;/p&gt;
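
&lt;p&gt;If the &lt;code&gt;stress&lt;/code&gt; utility isn’t installed on the instance, a short Python busy-loop can generate comparable CPU load. This is only a sketch of the idea, not the exact command from the screenshot:&lt;/p&gt;

```python
import multiprocessing
import os
import time

def burn(seconds):
    """Spin the CPU until `seconds` have elapsed."""
    end = time.time() + seconds
    while time.time() < end:
        pass

def stress_cpus(seconds, workers=None):
    """Peg `workers` cores (default: all of them) for `seconds` seconds,
    similar in spirit to `stress --cpu N --timeout T`."""
    workers = workers or os.cpu_count() or 1
    procs = [multiprocessing.Process(target=burn, args=(seconds,))
             for _ in range(workers)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

if __name__ == "__main__":
    # For the scaling test you would run this long enough (e.g. several
    # minutes) for average CPU utilization to stay above the 50% threshold.
    stress_cpus(1)  # short demo run
```

&lt;p&gt;On the instance itself, the equivalent with the &lt;code&gt;stress&lt;/code&gt; tool would be along the lines of &lt;code&gt;sudo stress --cpu 2 --timeout 300&lt;/code&gt; (flags taken from the tool’s standard usage).&lt;/p&gt;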

&lt;p&gt;&lt;strong&gt;Step 2: Review CloudWatch Alarms&lt;/strong&gt;&lt;br&gt;
Navigate to CloudWatch alarms. Select “All alarms” in the left pane.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxf1dtcjzudwyac7m8wd5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxf1dtcjzudwyac7m8wd5.png" alt=" " width="800" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You should now see our ASG alarm in the “In alarm” state, as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fylzxahcak95wdeyxfto0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fylzxahcak95wdeyxfto0.png" alt=" " width="800" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For a closer look, click on the alarm to view the CPU utilization. As shown below, it increased from 0.234% up to approximately 83.9%, well above our 50% threshold.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Farzl7k33qe16mb1c0fm4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Farzl7k33qe16mb1c0fm4.png" alt=" " width="800" height="393"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Review EC2 Instances&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When we check our running EC2 instances, we can see that the Auto Scaling Group launched an additional instance to handle the increased load. This helped reduce the CPU utilization back below our 50% threshold, as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fykcw3i5ml1g96ky6wifo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fykcw3i5ml1g96ky6wifo.png" alt=" " width="800" height="247"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GREAT RESULT!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We have just stress-tested our infrastructure to demonstrate high availability. When the load exceeds our 50% threshold, the Auto Scaling Group automatically responds by launching additional EC2 instances to handle the increased demand.&lt;/p&gt;

&lt;p&gt;If you have followed along this far, thank you! I hope you found it helpful.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reminder:&lt;/strong&gt; Don’t forget to clean up your environment by deleting all the resources you created and configured to avoid unnecessary charges.&lt;/p&gt;

</description>
      <category>ec2</category>
      <category>automation</category>
      <category>webdev</category>
      <category>aws</category>
    </item>
    <item>
      <title>Automate NGINX Deployment on AWS EC2 Server using Bash Script</title>
      <dc:creator>Taiwo Akinbolaji</dc:creator>
      <pubDate>Fri, 14 Nov 2025 00:36:14 +0000</pubDate>
      <link>https://dev.to/taiwoakinbolaji/automate-nginx-deployment-on-aws-ec2-server-using-bash-script-5a92</link>
      <guid>https://dev.to/taiwoakinbolaji/automate-nginx-deployment-on-aws-ec2-server-using-bash-script-5a92</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this guide, we’ll walk through how to launch a free t2.micro EC2 instance running Amazon Linux, and use the user data section to run a bash script that updates the system packages, installs NGINX, and starts the service automatically. After deployment, we’ll confirm that NGINX is properly installed and running by accessing the instance through its public IP address.&lt;/p&gt;

&lt;p&gt;Next, we will take things a step further by carrying out the same process using the AWS CLI, so you can see how to automate everything from the command line.&lt;/p&gt;

&lt;p&gt;If you want to explore even more, we will also create an Amazon Machine Image (AMI) from the configured instance and launch a new EC2 instance from that AMI to verify that the web server works right out of the box.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Background&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS&lt;/strong&gt;&lt;br&gt;
Amazon Web Services (AWS) is an extensive cloud computing platform that delivers both Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) solutions. It offers flexible, scalable tools for computing power, storage, databases, analytics, and a wide range of other cloud services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NGINX&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;NGINX is an open-source tool used for web serving, reverse proxying, caching, load balancing, media streaming, and various other functions. It was originally developed as a high-performance, highly stable web server. Its long-standing popularity largely comes from its ability to scale efficiently—even on limited hardware—and its low resource consumption.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon EC2 (Elastic Compute Cloud) Instance&lt;/strong&gt;&lt;br&gt;
An Amazon EC2 instance is essentially a virtual machine provided by AWS that delivers on-demand computing power. With EC2, there’s no need to purchase physical hardware upfront, allowing you to build and deploy applications much more quickly and at a reduced cost.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Launch EC2 Instance&lt;/strong&gt;&lt;br&gt;
To begin, let’s launch our EC2 instance. In my earlier guide, “Create EC2 Instance and Install Apache Web Server,” I walked through the full process of setting up a Linux-based EC2 instance.&lt;/p&gt;

&lt;p&gt;For this tutorial, we’ll name our instance “NGINX_Instance” and select Amazon Linux 2 AMI, which is eligible for the Free Tier, as our base image—just as shown in the illustration below.&lt;/p&gt;


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuxfsonqx41gta0p2rzmk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuxfsonqx41gta0p2rzmk.png" alt=" " width="800" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the Instance type section, let’s select the t2.micro instance type, which is Free Tier eligible.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flb9w220c34pz5s2dwj1g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flb9w220c34pz5s2dwj1g.png" alt=" " width="800" height="250"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create a new key pair or select an existing one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4xg1ghouuhx5o0yv467q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4xg1ghouuhx5o0yv467q.png" alt=" " width="800" height="202"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security Group&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Next, we will create a security group for the instance. A security group acts as a virtual firewall, managing the inbound and outbound traffic that can access the EC2 instance.&lt;/p&gt;

&lt;p&gt;We need to add rules to our firewall (security group) settings.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rules&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Allow SSH traffic from the internet.&lt;/li&gt;
&lt;li&gt;Allow HTTP traffic from the internet.&lt;/li&gt;
&lt;/ol&gt;
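&lt;p&gt;For reference, the two rules above can also be expressed programmatically. The sketch below (Python with Boto3; the group is assumed to already exist) builds the equivalent ingress permissions:&lt;/p&gt;

```python
def web_ssh_ingress():
    """Ingress rules matching the two rules above: SSH (22) and HTTP (80),
    open to the internet."""
    anywhere = [{"CidrIp": "0.0.0.0/0"}]
    return [
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22, "IpRanges": anywhere},
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80, "IpRanges": anywhere},
    ]

def apply_rules(group_id):
    """Attach the rules to an existing security group (requires credentials)."""
    import boto3  # imported here so web_ssh_ingress stays dependency-free
    ec2 = boto3.client("ec2")
    return ec2.authorize_security_group_ingress(
        GroupId=group_id, IpPermissions=web_ssh_ingress()
    )
```

&lt;p&gt;Opening SSH to 0.0.0.0/0 is fine for a short-lived tutorial instance; for anything longer-lived, restrict it to your own IP.&lt;/p&gt;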

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fywsj4kpvc0u01s5n3y3c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fywsj4kpvc0u01s5n3y3c.png" alt=" " width="800" height="559"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Create Bash Script&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now let’s head to the Advanced details section, as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8dzwgybzgvjm8h59liv6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8dzwgybzgvjm8h59liv6.png" alt=" " width="800" height="75"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Scroll down to the User Data section. This is where we can enter a bash script that will run automatically when the instance starts. The script will update all system packages, install NGINX, and start the service without any manual steps.&lt;/p&gt;
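&lt;p&gt;A minimal script along these lines would do the job. This sketch assumes Amazon Linux 2, where NGINX is installed through amazon-linux-extras; the base64 helper matters only if you call the raw EC2 API, since the console (and Boto3) accept the plain script:&lt;/p&gt;

```python
import base64

# A sketch of the user-data script, assuming Amazon Linux 2 (where NGINX
# is provided through the amazon-linux-extras repository).
USER_DATA = """#!/bin/bash
yum update -y
amazon-linux-extras install -y nginx1
systemctl start nginx
systemctl enable nginx
"""

def encoded_user_data(script=USER_DATA):
    """The raw EC2 API expects user data as base64; note that Boto3's
    run_instances encodes it for you, so pass the plain string there."""
    return base64.b64encode(script.encode()).decode()

print(encoded_user_data())
```

&lt;p&gt;In the console, simply paste the bash script itself (everything inside the triple quotes) into the User Data box.&lt;/p&gt;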

&lt;p&gt;Ensure 1 is selected in the “Number of instances” section on the right-hand side, as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fny6hqfib0uwmwgunucbn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fny6hqfib0uwmwgunucbn.png" alt=" " width="800" height="303"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click “Launch instance” at the bottom right to launch our instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft6oayozankgq5jzojija.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft6oayozankgq5jzojija.png" alt=" " width="800" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Successful!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Click on “View instances” to see our “NGINX_Instance” up and running.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpcs796x58271029n28bf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpcs796x58271029n28bf.png" alt=" " width="800" height="175"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Well done! If you have followed along up to this stage, we have successfully completed what I like to call “Scripting NGINX on EC2.” The next step is to confirm that our instance (NGINX_Instance) actually has the NGINX web server installed and running. We can verify this by entering the instance’s public IPv4 address in a browser.&lt;/p&gt;

&lt;p&gt;To see the public IPv4 address of our instance, click on the Instance ID. For me, the IPv4 address is 35.176.102.1.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Verify If NGINX Web server Is Installed&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now let’s confirm that NGINX was installed correctly on our instance. To do this, open a web browser and enter the instance’s public IPv4 address, for example: &lt;a href="http://35.176.102.1" rel="noopener noreferrer"&gt;http://35.176.102.1&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note: be sure to use http:// (not https://) in front of the IPv4 address, since our security group only allows HTTP traffic.&lt;/p&gt;

&lt;p&gt;If you see the page below, the NGINX web server is successfully installed and working.&lt;/p&gt;
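&lt;p&gt;If you prefer to check from a script rather than a browser, a small helper like this works; it mirrors the note above by building a plain-HTTP URL (no https):&lt;/p&gt;

```python
from urllib.request import urlopen

def nginx_url(public_ip):
    """Build the plain-HTTP URL for the instance; only port 80 is open,
    so no https."""
    return "http://" + public_ip

def is_nginx_up(public_ip, timeout=5):
    """Return True if the instance answers on port 80. Needs network
    access to a live instance to be meaningful."""
    try:
        with urlopen(nginx_url(public_ip), timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False
```

&lt;p&gt;For example, &lt;code&gt;is_nginx_up("35.176.102.1")&lt;/code&gt; should return True while the instance from this walkthrough is running.&lt;/p&gt;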

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frogwq8ph0qvz2jiblhym.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frogwq8ph0qvz2jiblhym.png" alt=" " width="800" height="215"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Congratulations!!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You have successfully launched an EC2 t2.micro instance using the Amazon Linux 2 AMI, which is eligible for the free tier. &lt;/p&gt;

&lt;p&gt;An automated bash script was used to update all packages, install NGINX, and start the service. The setup was confirmed by accessing the instance's public IPv4 address in a web browser.&lt;/p&gt;

&lt;p&gt;If you have made it this far, well done! &lt;/p&gt;

&lt;p&gt;Thanks for reading. I hope it was worthwhile to you.&lt;/p&gt;

</description>
      <category>nginx</category>
      <category>aws</category>
      <category>bash</category>
      <category>ec2</category>
    </item>
    <item>
      <title>The DevOps Blindspot: Why Reliability and Security Are Two Sides of the Same Coin</title>
      <dc:creator>Taiwo Akinbolaji</dc:creator>
      <pubDate>Tue, 11 Nov 2025 03:24:24 +0000</pubDate>
      <link>https://dev.to/taiwoakinbolaji/why-reliability-and-security-are-two-sides-of-the-same-coin-lcn</link>
      <guid>https://dev.to/taiwoakinbolaji/why-reliability-and-security-are-two-sides-of-the-same-coin-lcn</guid>
      <description>&lt;p&gt;DevOps transformed how software is delivered. By merging development and operations, teams began releasing features faster and collaborating more efficiently. But in the rush for speed, a crucial oversight emerged — the connection between reliability and security.&lt;/p&gt;

&lt;p&gt;Many teams still treat uptime as a separate issue from threat prevention. In reality, they are inseparable. A system cannot be reliable if it is insecure, and insecure systems inevitably become unreliable. Failures create vulnerabilities, and vulnerabilities create failures.&lt;/p&gt;

&lt;p&gt;This article explores why that blindspot persists, how it undermines DevOps practices, and what strategies help teams unify reliability and security through DevSecOps and SRE principles.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Birth of a Blindspot
&lt;/h2&gt;

&lt;p&gt;When DevOps emerged, the focus was mainly on &lt;strong&gt;speed and collaboration&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Speed:&lt;/strong&gt; Shorter release cycles, continuous integration, and continuous deployment.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Collaboration:&lt;/strong&gt; Breaking down silos between dev and ops teams for smoother handoffs and shared ownership.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In pursuit of faster releases, security was often added at the last minute — if at all — while reliability was assumed to be a byproduct of good code and agile processes.&lt;/p&gt;

&lt;p&gt;As DevOps practices matured, teams realized that speed without strong reliability or security creates an illusion of progress. Code reaches production faster, but systems become fragile and more exposed to breaches or cascading failures.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why the Blindspot Exists
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cultural inertia:&lt;/strong&gt; “Ship it” overshadowed “Secure it.”
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tooling gaps:&lt;/strong&gt; CI/CD pipelines were built for speed, not necessarily for resilience or security checks.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Competing priorities:&lt;/strong&gt; When deadlines approach, reliability and security enhancements are often postponed for feature delivery.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The result is a DevOps culture that measures velocity but often overlooks resilience.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Reliability and Security Are Inseparable
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Reliability&lt;/strong&gt; ensures your system performs its intended functions consistently under expected (and unexpected) conditions.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Security&lt;/strong&gt; protects your system from malicious activity and ensures data integrity, confidentiality, and availability.&lt;/p&gt;

&lt;p&gt;When reliability falters — for example, through frequent crashes, weak configurations, or unmanaged dependencies — the attack surface grows, making exploitation easier.&lt;br&gt;&lt;br&gt;
Conversely, a security breach nearly always impacts reliability through downtime, emergency fixes, or user trust erosion.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key synergy points:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Shared goals:&lt;/strong&gt; Both reliability and security aim to protect users and systems.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Failover strategies:&lt;/strong&gt; Redundant systems for reliability also isolate or contain attacks.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Incident response:&lt;/strong&gt; Many reliability incidents mirror security ones — both require detection, triage, and recovery.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Integrating Reliability and Security with DevSecOps and SRE
&lt;/h2&gt;

&lt;p&gt;To address this blindspot, organizations are combining &lt;strong&gt;DevSecOps&lt;/strong&gt; and &lt;strong&gt;Site Reliability Engineering (SRE)&lt;/strong&gt; principles.&lt;br&gt;&lt;br&gt;
Together, they form a holistic approach that treats reliability and security as shared responsibilities rather than separate disciplines.&lt;/p&gt;

&lt;h3&gt;
  
  
  DevSecOps
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Shift Left:&lt;/strong&gt; Integrate security early in development through static analysis, threat modeling, and dependency scanning.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automation:&lt;/strong&gt; Build pipelines that automatically run security checks and compliance tests.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Culture:&lt;/strong&gt; Reinforce that security is everyone’s job, not just the security team’s.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Site Reliability Engineering (SRE)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reliability as a Feature:&lt;/strong&gt; Plan for it intentionally with defined objectives and measurable outcomes.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error Budgets:&lt;/strong&gt; Balance innovation and stability by defining acceptable failure thresholds.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Proactive Testing:&lt;/strong&gt; Use chaos engineering and game days to understand system behavior under stress.&lt;/li&gt;
&lt;/ul&gt;
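&lt;p&gt;The error-budget idea is simple arithmetic. For example, a 99.9% availability objective over a 30-day window leaves roughly 43 minutes of acceptable downtime:&lt;/p&gt;

```python
def error_budget_minutes(slo_percent, window_days=30):
    """Minutes of acceptable downtime implied by an availability SLO."""
    total_minutes = window_days * 24 * 60          # 43,200 for 30 days
    return total_minutes * (1 - slo_percent / 100)

print(round(error_budget_minutes(99.9), 1))  # roughly 43 minutes per 30 days
```

&lt;p&gt;While the budget lasts, teams can ship risky changes; once it is spent, the same number becomes the argument for pausing features in favor of reliability work.&lt;/p&gt;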

&lt;p&gt;When combined, these practices embed continuous security checks into the same feedback loops that maintain system uptime, ensuring you are not just building fast, but building right.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real-World Strategies
&lt;/h2&gt;

&lt;p&gt;Concrete steps to make reliability and security complementary instead of competing goals:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Embed Security Engineers with Reliability Teams&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Place security champions inside SRE or platform teams to raise concerns in real time and eliminate last-minute patching.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Include Threat Modeling in Reliability Assessments&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Evaluate security risks like DDoS attacks or insider threats alongside failure modes to understand their combined impact on uptime.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Extend Chaos Engineering to Security&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Simulate security incidents such as token leaks, privilege escalations, or exfiltration attempts to test detection and recovery capabilities.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Unify Incident Response&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Reliability and security incidents share the same life cycle: detection, triage, mitigation, and postmortem. Use one coordinated response process.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Continuous Monitoring and Auditing&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Expand observability beyond performance metrics. Integrate intrusion detection and anomaly tracking into existing dashboards for a complete picture of system health.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Overcoming Common Barriers
&lt;/h2&gt;

&lt;p&gt;Even with awareness of DevSecOps and SRE, many organizations struggle to integrate reliability and security fully.&lt;/p&gt;

&lt;h3&gt;
  
  
  Common challenges:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Organizational silos:&lt;/strong&gt; Dev, SRE, and security teams operate separately.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Perceived complexity:&lt;/strong&gt; New tooling or practices can seem heavy to implement.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Limited executive support:&lt;/strong&gt; Leadership often prioritizes delivery speed over resilience investments.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Breaking through:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Educate stakeholders:&lt;/strong&gt; Show measurable ROI — reduced downtime, fewer breaches, stronger trust.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Foster culture:&lt;/strong&gt; Include reliability and security metrics in team KPIs and performance goals.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Invest in automation and training:&lt;/strong&gt; Automation prevents errors, and education empowers teams to prevent incidents proactively.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Reliability and security are often managed as parallel efforts, yet they both aim to protect the same thing: your system and your users. When one weakens, the other collapses.&lt;/p&gt;

&lt;p&gt;Modern DevOps maturity means moving beyond speed as the primary metric. True performance lies in the ability to deploy quickly and sustain availability, integrity, and trust over time.&lt;/p&gt;

&lt;p&gt;By integrating DevSecOps and SRE practices — embedding security checks early, treating reliability as a planned feature, and unifying incident response — teams can close the DevOps blindspot and build systems that are fast, stable, and secure by design.&lt;/p&gt;

&lt;p&gt;🧠 &lt;strong&gt;Final Thought:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
The effectiveness of DevOps is measured not just by how fast you deliver, but by how consistently you can keep your systems secure, reliable, and trusted in production.&lt;/p&gt;

&lt;p&gt;💬 &lt;em&gt;If you found this article helpful, consider sharing it with your team or network. Together we can close the gap between reliability and security and build more resilient systems.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;#DevOps #SRE #Security #DevSecOps #CloudEngineering&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Building Scalable, Fault-Tolerant, and Highly Available Cloud Architectures with AWS Best Practices.</title>
      <dc:creator>Taiwo Akinbolaji</dc:creator>
      <pubDate>Tue, 11 Nov 2025 00:38:45 +0000</pubDate>
      <link>https://dev.to/aws-builders/building-scalable-fault-tolerant-and-highly-available-cloud-architectures-with-aws-best-practices-2a5f</link>
      <guid>https://dev.to/aws-builders/building-scalable-fault-tolerant-and-highly-available-cloud-architectures-with-aws-best-practices-2a5f</guid>
      <description>&lt;p&gt;Modern applications live in an era of relentless demand. Users expect them to load instantly, scale automatically, and recover from failure without interruption. In the cloud, that level of reliability does not happen by accident. It is the result of intentional design.&lt;/p&gt;

&lt;p&gt;In this article, we will look at how to build scalable, fault-tolerant, and highly available architectures using AWS best practices. The focus is on the classic three-tier model: Web, Application, and Database. You will see how AWS services work together to deliver resilience at scale, and how good architectural choices make that possible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Cloud Resilience Matters
&lt;/h2&gt;

&lt;p&gt;Scalability, fault tolerance, and high availability are often used together, but they address different goals.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt; ensures that your system can handle growth, whether that means more users, traffic, or data, by dynamically allocating resources.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fault tolerance&lt;/strong&gt; keeps your system running even when some components fail. It depends on removing single points of failure and designing with redundancy.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High availability&lt;/strong&gt; focuses on minimizing downtime and keeping the system accessible at all times.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;AWS provides the building blocks to achieve all three through elastic compute, distributed load balancing, managed databases, and global infrastructure. The architecture you design determines how well these services work together.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Three-Tier Architecture Model
&lt;/h2&gt;

&lt;p&gt;A three-tier architecture is one of the most proven models for designing resilient cloud systems. It divides your application into logical layers, each with its own function and the ability to scale independently.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Web Tier (Presentation Layer)
&lt;/h3&gt;

&lt;p&gt;The Web Tier is the public entry point of your system. It serves static or dynamic content and forwards requests to the backend.&lt;/p&gt;

&lt;p&gt;On AWS, this layer often includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Elastic Load Balancer (ALB)&lt;/strong&gt; for routing user traffic
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auto Scaling EC2 instances&lt;/strong&gt; in &lt;strong&gt;public subnets&lt;/strong&gt; for serving web content
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Amazon CloudFront&lt;/strong&gt; for caching and faster global delivery
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This design provides elasticity during traffic spikes and maintains availability by running across multiple Availability Zones.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Application Tier (Logic Layer)
&lt;/h3&gt;

&lt;p&gt;The Application Tier processes business logic and handles communication between the web layer and the database. It manages requests, executes application code, and ensures that data flows securely between layers.&lt;/p&gt;

&lt;p&gt;On AWS, it is usually built with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;EC2 Auto Scaling Groups&lt;/strong&gt; in &lt;strong&gt;private subnets&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Internal Application Load Balancer&lt;/strong&gt; for distributing internal traffic
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ElastiCache (Redis or Memcached)&lt;/strong&gt; for improved performance
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Separating this layer improves security, since it is not publicly exposed, and makes it easier to scale or recover without affecting other parts of the system.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Database Tier (Data Layer)
&lt;/h3&gt;

&lt;p&gt;The Database Tier manages data storage and retrieval. It is the foundation of the architecture and is typically implemented using &lt;strong&gt;Amazon RDS&lt;/strong&gt; or &lt;strong&gt;Amazon Aurora&lt;/strong&gt; within &lt;strong&gt;private subnets&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Best practices for this layer include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enabling &lt;strong&gt;Multi-AZ replication&lt;/strong&gt; for durability and failover
&lt;/li&gt;
&lt;li&gt;Restricting inbound traffic to the Application Tier only
&lt;/li&gt;
&lt;li&gt;Enabling &lt;strong&gt;automated backups&lt;/strong&gt; and &lt;strong&gt;read replicas&lt;/strong&gt; for recovery and performance&lt;/li&gt;
&lt;/ul&gt;
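&lt;p&gt;With Boto3, those practices map onto a handful of parameters to create_db_instance. The sketch below uses placeholder identifiers and sizes; it builds the parameter set without calling AWS:&lt;/p&gt;

```python
def rds_params(db_id, password):
    """Parameter sketch for Boto3's rds.create_db_instance; identifiers,
    engine, and sizes here are placeholders, not recommendations."""
    return {
        "DBInstanceIdentifier": db_id,
        "Engine": "mysql",
        "DBInstanceClass": "db.t3.micro",
        "AllocatedStorage": 20,
        "MasterUsername": "admin",
        "MasterUserPassword": password,
        "MultiAZ": True,                  # standby replica in another AZ
        "BackupRetentionPeriod": 7,       # enables automated daily backups
        "PubliclyAccessible": False,      # keep it in private subnets
    }
```

&lt;p&gt;Restricting inbound traffic to the Application Tier is then done at the security-group level, by allowing the database port only from the app tier's security group.&lt;/p&gt;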

&lt;h2&gt;
  
  
  How AWS Enables Resilience by Design
&lt;/h2&gt;

&lt;p&gt;AWS infrastructure is designed for redundancy and fault isolation. However, reliability is not achieved by using AWS services alone. It comes from how you combine and configure them.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Design for Failure
&lt;/h3&gt;

&lt;p&gt;Assume that any component can fail. Deploy resources across multiple Availability Zones, enable health checks, and use Auto Scaling to replace unhealthy instances automatically.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Automate Infrastructure
&lt;/h3&gt;

&lt;p&gt;Infrastructure as Code is essential for consistency and repeatability. Use &lt;strong&gt;AWS CloudFormation&lt;/strong&gt;, &lt;strong&gt;Terraform&lt;/strong&gt;, or the &lt;strong&gt;AWS CDK&lt;/strong&gt; to define and deploy environments. Combine them with CI/CD tools such as &lt;strong&gt;CodePipeline&lt;/strong&gt; or &lt;strong&gt;GitHub Actions&lt;/strong&gt; to automate deployment.&lt;/p&gt;
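&lt;p&gt;As a flavor of Infrastructure as Code, here is a minimal CloudFormation skeleton for an auto-scaled web tier, built as a Python dictionary; the logical name and launch-template ID are placeholders:&lt;/p&gt;

```python
import json

# A minimal CloudFormation skeleton for the web tier; the logical name
# and launch-template ID are placeholders.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "WebAsg": {
            "Type": "AWS::AutoScaling::AutoScalingGroup",
            "Properties": {
                "MinSize": "2",   # at least two instances for availability
                "MaxSize": "6",
                "LaunchTemplate": {
                    "LaunchTemplateId": "lt-0abcdef1234567890",
                    "Version": "1",
                },
                "AvailabilityZones": {"Fn::GetAZs": ""},
            },
        }
    },
}

print(json.dumps(template, indent=2))
```

&lt;p&gt;Checked into version control and deployed through a pipeline, a template like this makes every environment reproducible instead of hand-built.&lt;/p&gt;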

&lt;h3&gt;
  
  
  3. Isolate and Secure Each Layer
&lt;/h3&gt;

&lt;p&gt;Place each tier in its own subnet and control communication using Security Groups and routing rules. Avoid exposing internal services such as databases or APIs to the public internet. All inbound traffic should pass through load balancers.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Use Managed Services When Possible
&lt;/h3&gt;

&lt;p&gt;Leverage AWS managed offerings such as &lt;strong&gt;RDS&lt;/strong&gt;, &lt;strong&gt;ElastiCache&lt;/strong&gt;, &lt;strong&gt;S3&lt;/strong&gt;, and &lt;strong&gt;CloudFront&lt;/strong&gt;. Managed services reduce operational effort, handle scaling and patching automatically, and provide higher availability guarantees.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Monitor and Optimize Continuously
&lt;/h3&gt;

&lt;p&gt;Monitoring is essential for resilience. Use &lt;strong&gt;Amazon CloudWatch&lt;/strong&gt;, &lt;strong&gt;AWS X-Ray&lt;/strong&gt;, and &lt;strong&gt;VPC Flow Logs&lt;/strong&gt; to observe performance, track latency, and detect failures early. Data-driven insights allow you to make informed scaling and cost decisions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bringing It All Together
&lt;/h2&gt;

&lt;p&gt;In a complete implementation, this is how the architecture operates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Users access the application through &lt;strong&gt;Route 53&lt;/strong&gt; and &lt;strong&gt;CloudFront&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;Requests are routed through the &lt;strong&gt;Application Load Balancer&lt;/strong&gt; to EC2 instances in the &lt;strong&gt;Web Tier&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Application Tier&lt;/strong&gt; processes logic and interacts with &lt;strong&gt;Amazon RDS&lt;/strong&gt; in the &lt;strong&gt;Database Tier&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;Each layer is deployed across at least two Availability Zones for redundancy.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This layered structure isolates responsibilities, reduces the impact of failures, and allows independent scaling. It forms the backbone of many production systems on AWS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F959noer2u8clz4qam30k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F959noer2u8clz4qam30k.png" alt=" " width="800" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt; is achieved through elasticity, load balancing, and stateless design.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fault tolerance&lt;/strong&gt; relies on redundancy, automation, and distribution across Availability Zones.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High availability&lt;/strong&gt; depends on isolation, health monitoring, and recovery automation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When combined in a three-tier design, these principles create an architecture that can scale on demand and remain operational during disruptions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Building for reliability and scale requires both technical knowledge and a mindset of resilience. By following AWS best practices, distributing workloads, and automating recovery, you can design systems that adapt to change and recover gracefully from failure.&lt;/p&gt;

&lt;p&gt;The three-tier model remains one of the most effective patterns for building modern cloud applications. It provides clarity, structure, and control while integrating smoothly with AWS services. Whether for a startup project or an enterprise workload, this design remains a foundation for long-term cloud success.&lt;/p&gt;

&lt;p&gt;If you found this helpful, follow me for more insights on AWS architecture, DevOps, and cloud infrastructure best practices.  &lt;/p&gt;

&lt;p&gt;#AWS #CloudArchitecture #DevOps #Scalability #HighAvailability&lt;/p&gt;

</description>
      <category>aws</category>
      <category>tutorial</category>
      <category>cloud</category>
      <category>architecture</category>
    </item>
  </channel>
</rss>
