<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: SAFI-ULLAH SAFEER</title>
    <description>The latest articles on DEV Community by SAFI-ULLAH SAFEER (@safi-ullah).</description>
    <link>https://dev.to/safi-ullah</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1293689%2F0943daa9-8124-4aaf-b94b-20a43dc48a2d.jpg</url>
      <title>DEV Community: SAFI-ULLAH SAFEER</title>
      <link>https://dev.to/safi-ullah</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/safi-ullah"/>
    <language>en</language>
    <item>
      <title>End-to-End Deployment of a Two-Tier Application Using Docker, Kubernetes, Helm, and AWS</title>
      <dc:creator>SAFI-ULLAH SAFEER</dc:creator>
      <pubDate>Thu, 01 Jan 2026 22:58:25 +0000</pubDate>
      <link>https://dev.to/safi-ullah/end-to-end-deployment-of-a-two-tier-application-using-docker-kubernetes-helm-and-aws-3gma</link>
      <guid>https://dev.to/safi-ullah/end-to-end-deployment-of-a-two-tier-application-using-docker-kubernetes-helm-and-aws-3gma</guid>
      <description>&lt;p&gt;In modern cloud-native environments, deploying applications manually is no longer scalable or reliable. DevOps practices and container orchestration tools help us automate, standardize, and scale applications efficiently.&lt;/p&gt;

&lt;p&gt;In this article, I’ll walk you through an &lt;strong&gt;end-to-end deployment&lt;/strong&gt; of a &lt;strong&gt;Two-Tier Application&lt;/strong&gt; using:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub&lt;/strong&gt;, &lt;strong&gt;Docker &amp;amp; DockerHub&lt;/strong&gt;, &lt;strong&gt;Kubernetes&lt;/strong&gt;, &lt;strong&gt;Helm&lt;/strong&gt; &amp;amp;  &lt;strong&gt;AWS&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This architecture represents a real-world DevOps workflow, commonly used in production systems.&lt;/p&gt;

&lt;p&gt;Before diving into the implementation, it’s important to first understand the term “two-tier application”.&lt;/p&gt;

&lt;p&gt;A two-tier application consists of:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Application Tier (Backend):&lt;/strong&gt; a &lt;strong&gt;Flask-based backend service&lt;/strong&gt; that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Handles business logic&lt;/li&gt;
&lt;li&gt;Processes API requests&lt;/li&gt;
&lt;li&gt;Communicates with the database&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This layer is responsible for how the application behaves and responds to user actions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Database Tier:&lt;/strong&gt; a &lt;strong&gt;MySQL database&lt;/strong&gt; that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stores and manages application data&lt;/li&gt;
&lt;li&gt;Handles queries from the backend&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This tier ensures data persistence and consistency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deployment Approach&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To keep things simple and structured, the deployment is initially done in two stages:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dockerfile (Single-Container Focus)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, we containerize the Flask backend using a Dockerfile.&lt;br&gt;
This helps us understand:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How Docker images are built&lt;/li&gt;
&lt;li&gt;How application dependencies are managed&lt;/li&gt;
&lt;li&gt;How a single service runs inside a container&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Docker Compose (Multi-Container Setup)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Next, we use Docker Compose to run both tiers together:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Flask backend container&lt;/li&gt;
&lt;li&gt;MySQL database container&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Docker Compose allows both services to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run on the same network&lt;/li&gt;
&lt;li&gt;Communicate using service names&lt;/li&gt;
&lt;li&gt;Start and stop with a single command&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach represents a real-world development setup and forms the foundation for moving toward Kubernetes in later stages.&lt;/p&gt;

&lt;p&gt;First of all, launch an EC2 instance named “2-tier-App-DEPLOYMENT” with a private key and Ubuntu OS, keeping all other settings at their defaults.&lt;br&gt;
Connect to it using the EC2 Instance Connect option; the following screen will appear.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fri8qw3vzsaiqxbcm2nnh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fri8qw3vzsaiqxbcm2nnh.png" alt=" " width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Installing Docker on Ubuntu&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To install Docker on Ubuntu, you can use the following commands in your terminal:&lt;/p&gt;

&lt;p&gt;ls&lt;br&gt;
sudo apt update&lt;br&gt;
sudo apt install docker.io&lt;/p&gt;
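&lt;p&gt;After installation, it is worth confirming that Docker is actually installed and the daemon is running before going further. A quick check:&lt;/p&gt;

```shell
# Print the installed Docker client version
docker --version

# Verify the Docker daemon is active (Ubuntu uses systemd)
sudo systemctl status docker --no-pager
```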

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwrpknqqlt42quwb76xtp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwrpknqqlt42quwb76xtp.png" alt=" " width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To check running containers, we run &lt;strong&gt;docker ps&lt;/strong&gt;. Initially we get the error “Permission denied while trying to connect to the Docker daemon socket”.&lt;br&gt;
This happens because your user does not have permission to access the Docker daemon. Let’s see how to resolve it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgtiyjytpgetepf2dv0e7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgtiyjytpgetepf2dv0e7.png" alt=" " width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quick troubleshooting:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Check the current user with “whoami” =&amp;gt; it returns “ubuntu”&lt;br&gt;
sudo chown $USER /var/run/docker.sock&lt;br&gt;
Now run the “docker ps” command again; it will succeed.&lt;/p&gt;
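&lt;p&gt;Note that chown-ing the socket works, but the ownership can be reset when the Docker daemon restarts. The more durable fix, assuming a standard Docker installation, is adding your user to the docker group:&lt;/p&gt;

```shell
# Add the current user to the docker group (takes effect on next login)
sudo usermod -aG docker $USER

# Apply the new group membership in the current shell without logging out
newgrp docker
```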

&lt;p&gt;Now the next step is to &lt;strong&gt;clone the code from github&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;git clone &lt;a href="https://github.com/SAFI-ULLAHSAFEER/two-tier-flask-app.git" rel="noopener noreferrer"&gt;https://github.com/SAFI-ULLAHSAFEER/two-tier-flask-app.git&lt;/a&gt;&lt;br&gt;
cd two-tier-flask-app&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbdxjftviy5cmif6ktpb5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbdxjftviy5cmif6ktpb5.png" alt=" " width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now remove the existing Dockerfile with “rm Dockerfile” and create your own from scratch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Understanding the Dockerfile:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The first line in a Dockerfile usually starts with &lt;strong&gt;FROM&lt;/strong&gt;. This specifies the base image for your container — essentially, the operating system with pre-installed software. For example:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;FROM python:3.9-slim&lt;/strong&gt;&lt;br&gt;
Here 3.9 is the Python version, and slim means the image is a lightweight variant.&lt;/p&gt;

&lt;p&gt;WORKDIR: the working directory inside the container where the application runs.&lt;br&gt;
&lt;strong&gt;WORKDIR /app&lt;/strong&gt;&lt;br&gt;
RUN apt-get update -y \ =&amp;gt; update the package lists in the container&lt;br&gt;
&amp;amp;&amp;amp; apt-get upgrade -y \ =&amp;gt; upgrade installed packages&lt;br&gt;
&amp;amp;&amp;amp; apt-get install -y gcc default-libmysqlclient-dev pkg-config \ =&amp;gt; install the libraries needed to build the MySQL client&lt;br&gt;
&amp;amp;&amp;amp; rm -rf /var/lib/apt/lists/* =&amp;gt; remove the cached package lists&lt;/p&gt;

&lt;p&gt;COPY requirements.txt . =&amp;gt; copies the file in which all the packages the app needs are listed&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install Python packages&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;RUN pip install mysqlclient&lt;br&gt;
RUN pip install -r requirements.txt&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Copy application code into the container&lt;/strong&gt;&lt;br&gt;
COPY . .&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;br&gt;
The first dot (.) is the source — your local folder containing the code&lt;br&gt;
The second dot (.) is the destination inside the container (/app)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Define the command to run the application&lt;/strong&gt;&lt;br&gt;
CMD ["python", "app.py"]&lt;/p&gt;

&lt;p&gt;The Dockerfile is finally complete. Now press Esc and type :wq to save the file in the Vim editor.&lt;/p&gt;
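&lt;p&gt;Putting the fragments above together, the complete Dockerfile looks roughly like this (a sketch assembled from the steps described; adjust it to your repository):&lt;/p&gt;

```dockerfile
# Lightweight Python base image
FROM python:3.9-slim

# All subsequent commands run from /app
WORKDIR /app

# System packages needed to build the MySQL client
RUN apt-get update -y \
    &amp;&amp; apt-get upgrade -y \
    &amp;&amp; apt-get install -y gcc default-libmysqlclient-dev pkg-config \
    &amp;&amp; rm -rf /var/lib/apt/lists/*

# Install Python dependencies first to take advantage of layer caching
COPY requirements.txt .
RUN pip install mysqlclient
RUN pip install -r requirements.txt

# Copy the application code into /app
COPY . .

# Start the Flask app
CMD ["python", "app.py"]
```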

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp01xdc421i023d9nlm0t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp01xdc421i023d9nlm0t.png" alt=" " width="800" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Now build an image from the Dockerfile:&lt;/strong&gt;&lt;br&gt;
docker build -t flaskapp .&lt;br&gt;
Here . is the build context (the current path), -t is the tag flag, and flaskapp is the name of the image.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4t81jp0ac8e4f3nb5ds.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4t81jp0ac8e4f3nb5ds.png" alt=" " width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;After that we have to run the MySQL container:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F71i2nv15yw1hy9ubghw5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F71i2nv15yw1hy9ubghw5.png" alt=" " width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;After that we have to run the flaskapp container:&lt;/strong&gt;&lt;br&gt;
To see Docker images:&lt;br&gt;
docker images&lt;br&gt;
To run a container from the image:&lt;br&gt;
docker run -d -p 5000:5000 flaskapp:latest&lt;br&gt;
-d = run the container in the background (detached mode)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnq7uw34lmqu4mzk83kx1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnq7uw34lmqu4mzk83kx1.png" alt=" " width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To access your application on port 5000, first open that port in the instance’s security group.&lt;/p&gt;

&lt;p&gt;Then copy the public IP of your instance and open it in the browser, appending port 5000 at the end.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7feemg1h6g0q3fdhvkpa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7feemg1h6g0q3fdhvkpa.png" alt=" " width="800" height="393"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker Networking:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;docker network create &lt;strong&gt;twotier&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;From the flaskapp image we create a container that listens on port 5000, joins the twotier network, and receives all the required environment variables.&lt;/p&gt;
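&lt;p&gt;As a sketch, the two containers can be started on the shared network like this. The environment variable names and credentials below are illustrative assumptions; match them to what your Flask app actually reads:&lt;/p&gt;

```shell
# Create the shared network (if not created already)
docker network create twotier

# Start MySQL first: the Flask app needs a live database at startup
docker run -d --name mysql --network twotier \
  -e MYSQL_ROOT_PASSWORD=admin \
  -e MYSQL_DATABASE=mydb \
  mysql:latest

# Then start the Flask backend on the same network,
# pointing it at the MySQL container by its service name
docker run -d -p 5000:5000 --network twotier \
  -e MYSQL_HOST=mysql \
  -e MYSQL_USER=root \
  -e MYSQL_PASSWORD=admin \
  -e MYSQL_DB=mydb \
  flaskapp:latest
```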

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5cq0v5ijwhgdydwpcne2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5cq0v5ijwhgdydwpcne2.png" alt=" " width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Point (Memorable)&lt;/strong&gt;&lt;br&gt;
“In a multi-container setup, Flask is application-level dependent on MySQL: the Flask app requires a live database connection to function properly at startup. Therefore, always start the MySQL container first, then the Flask container — otherwise the Flask app will crash even though it runs in a standalone container.”&lt;/p&gt;

&lt;p&gt;Now, to check the containers on a Docker network:&lt;br&gt;
docker network inspect twotier&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fziwcwzjqxks3udxthmsb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fziwcwzjqxks3udxthmsb.png" alt=" " width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, your application is successfully deployed as a two-tier setup, with Flask running on the backend and MySQL managing the database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbww1fq7b9ylwhpnqiurj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbww1fq7b9ylwhpnqiurj.png" alt=" " width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Accessing a Docker Container:&lt;/strong&gt;&lt;br&gt;
docker exec -it container-id bash&lt;br&gt;
mysql -u admin -p&lt;br&gt;
SHOW DATABASES;&lt;/p&gt;
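&lt;p&gt;Once inside the MySQL shell you can verify the seeded data. The database and table names below are assumptions for illustration:&lt;/p&gt;

```sql
SHOW DATABASES;
USE mydb;                 -- assumed database name
SHOW TABLES;
SELECT * FROM messages;   -- assumed table seeded by message.sql
```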

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx71rbv79ywtu4zo6jbrg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx71rbv79ywtu4zo6jbrg.png" alt=" " width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Messages entered by me: "&lt;strong&gt;AWS Cloud Club MUST&lt;/strong&gt;" and "&lt;strong&gt;AWS Student Community Day Mirpur 2025&lt;/strong&gt;"&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ych8mfz9pibrbjgz6ij.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ych8mfz9pibrbjgz6ij.png" alt=" " width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Here is the view inside the MySQL container:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8alwfekdmk1fdvj0qgv7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8alwfekdmk1fdvj0qgv7.png" alt=" " width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pushing Docker Image to Docker Hub&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flpzqzvdmllr75j2wtk3p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flpzqzvdmllr75j2wtk3p.png" alt=" " width="800" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;docker login&lt;/strong&gt;&lt;br&gt;
After that, tag the flaskapp image for Docker Hub:&lt;br&gt;
docker tag flaskapp:latest safi221/flaskapp&lt;br&gt;
Then push it:&lt;br&gt;
docker push safi221/flaskapp&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faa3xe7v5pq9hx3w1g34t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faa3xe7v5pq9hx3w1g34t.png" alt=" " width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, our image is successfully pushed to Docker Hub. This means it is publicly available, and anyone can pull and run the application from anywhere.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flef1sxi3o1cjd2vwp7pc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flef1sxi3o1cjd2vwp7pc.png" alt=" " width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker Compose&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Next, you might wonder: “How can I run both the backend and database containers simultaneously with a single command?”&lt;/p&gt;

&lt;p&gt;This is where &lt;strong&gt;Docker Compose&lt;/strong&gt; comes in — it allows you to define and run multi-container applications with ease.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Installing Docker Compose&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, install Docker Compose on your system (on Ubuntu: sudo apt install docker-compose).&lt;br&gt;
Once installed, you can create and edit the docker-compose.yml file.&lt;/p&gt;

&lt;p&gt;YAML stands for “YAML Ain’t Markup Language”; its syntax is based on key-value pairs.&lt;/p&gt;
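&lt;p&gt;For example, key-value pairs, nested mappings, and lists in YAML look like this (the keys here are made up purely to show the syntax):&lt;/p&gt;

```yaml
# Scalar key-value pairs
app_name: flaskapp
port: 5000

# A nested mapping
database:
  host: mysql
  user: root

# A list
features:
  - logging
  - metrics
```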

&lt;p&gt;&lt;strong&gt;Understanding the Docker Compose File (docker-compose.yml)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Docker Compose allows you to define and run multi-container applications. Let’s break down a typical docker-compose.yml for our two-tier Flask + MySQL app.&lt;/p&gt;

&lt;p&gt;Why &lt;strong&gt;depends_on&lt;/strong&gt; is important&lt;/p&gt;

&lt;p&gt;The depends_on option ensures that the MySQL container is started before the Flask backend. Without it, Docker Compose might start the backend first, causing connection errors because the database is not yet available. Note that depends_on only controls startup order; it does not wait for MySQL to be fully ready to accept connections, so a restart policy or healthcheck is still useful in practice.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flno0lrsh2sygkeauvxot.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flno0lrsh2sygkeauvxot.png" alt=" " width="800" height="364"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;volumes → Persist data even if the container stops or is removed. Here, mysql-data binds the container’s MySQL data to system storage.&lt;/p&gt;

&lt;p&gt;./message.sql:/docker-entrypoint-initdb.d/message.sql → Initializes the database with tables or seed data on container startup. Docker automatically executes scripts in the /docker-entrypoint-initdb.d/ directory.&lt;/p&gt;
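&lt;p&gt;A seed script of the sort described here might contain something like the following; the database name, table name, and columns are assumptions for illustration:&lt;/p&gt;

```sql
-- Executed automatically by the MySQL image on first startup
CREATE DATABASE IF NOT EXISTS mydb;
USE mydb;

CREATE TABLE IF NOT EXISTS messages (
    id INT AUTO_INCREMENT PRIMARY KEY,
    message TEXT
);
```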

&lt;p&gt;depends_on → Ensures the database container is ready before starting the backend.&lt;/p&gt;

&lt;p&gt;💡 &lt;strong&gt;Tip:&lt;/strong&gt; You can use an online YAML formatter to ensure proper indentation and readability.&lt;/p&gt;
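&lt;p&gt;Pulling the pieces above together, a minimal docker-compose.yml for this setup might look like the sketch below. The service names, credentials, and image tags are assumptions; adapt them to your own app:&lt;/p&gt;

```yaml
version: "3.8"

services:
  mysql:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: admin   # assumed credentials
      MYSQL_DATABASE: mydb
    volumes:
      - mysql-data:/var/lib/mysql                              # persist data across restarts
      - ./message.sql:/docker-entrypoint-initdb.d/message.sql  # seed the database on first start

  flask-app:
    image: safi221/flaskapp:latest
    ports:
      - "5000:5000"
    environment:
      MYSQL_HOST: mysql            # the service name doubles as the hostname
      MYSQL_USER: root
      MYSQL_PASSWORD: admin
      MYSQL_DB: mydb
    depends_on:
      - mysql                      # start MySQL before the backend

volumes:
  mysql-data:
```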

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fykdea3vgaqnl870s9t75.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fykdea3vgaqnl870s9t75.png" alt=" " width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Running the Application&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Save the file (:wq in Vim).&lt;/p&gt;

&lt;p&gt;Stop any previous standalone containers using docker kill, so they don’t conflict with the Compose-managed ones.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7xez4m8o2air675kzenq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7xez4m8o2air675kzenq.png" alt=" " width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Start the application with Docker Compose:&lt;/strong&gt;&lt;br&gt;
docker-compose up -d&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0tzkokghgvx88mr5l51t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0tzkokghgvx88mr5l51t.png" alt=" " width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final Deployment Through Docker-Compose:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After this, your two-tier Flask + MySQL application is fully deployed using Docker Compose.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7im4fd599e8bnalmqbxy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7im4fd599e8bnalmqbxy.png" alt=" " width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;End-to-End Two-Tier Application Deployment on Kubernetes (Flask + MySQL)&lt;/h2&gt;

&lt;p&gt;Deploying a two-tier application on Kubernetes requires understanding not only containers but also core Kubernetes components: Pods, Deployments, Services, and persistent storage.&lt;br&gt;
In this article, we will deploy a two-tier Flask + MySQL application on a Kubernetes cluster created using kubeadm.&lt;/p&gt;

&lt;p&gt;Before diving deep into the project, it’s important to first understand the Kubernetes architecture and how it works, as shown in the diagram below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fimxjdx3slrigpr8efvel.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fimxjdx3slrigpr8efvel.png" alt=" " width="800" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Kubernetes architecture defines how the control plane (Master) and worker nodes communicate to deploy, manage, scale, and heal applications automatically.&lt;/p&gt;

&lt;p&gt;It separates cluster management (API Server, Scheduler, Controller, etcd) from application execution (Pods, Kubelet, Services), which ensures high availability, scalability, and fault tolerance.&lt;/p&gt;

&lt;p&gt;Without understanding this architecture, it’s difficult to design reliable, production-ready Kubernetes deployments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubernetes Architecture (Master &amp;amp; Node)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before moving into the project implementation, it is important to understand the Kubernetes architecture, as shown in the image above. Kubernetes follows a master–worker (node) architecture, where the responsibilities are clearly divided to manage and run applications efficiently.&lt;/p&gt;

&lt;p&gt;In Kubernetes, we mainly have two types of servers:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Master Server (Control Plane)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Node Server (Worker Node)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To understand this better, we can compare Kubernetes to a software company structure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Master Server (Decision-Making Team)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Master server acts like the administration or decision-making team in a software company. It does not run application containers directly; instead, it manages and controls the entire cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key components of the Master server are:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;API Server&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The API Server acts like a Team Lead. It is the central communication point of Kubernetes. All requests from users, kubectl, or internal components go through the API Server. It communicates with the Scheduler and Node components to decide where and how applications should run.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scheduler&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Scheduler works like an HR team in an organization. Its job is to decide which node should run which Pod or container, based on available resources and constraints.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;etcd&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;etcd is the database of Kubernetes. It stores all cluster data such as Pod states, node information, configurations, and secrets. Just like a company maintains records of employees and projects, Kubernetes stores all cluster information in etcd.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Controller Manager&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Controller Manager acts like a Project Manager. It continuously monitors the cluster and ensures that the desired state matches the actual state. If a Pod crashes, the Controller Manager makes sure a new one is created automatically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Node Server (Worker / Execution Team)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Node server, also called the Worker node, is like the Research and Development (R&amp;amp;D) team in a software company. This is where the actual application runs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key components of the Node server are:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubelet&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Kubelet works like a reporting employee. It runs on every node and continuously reports the status of Pods and containers back to the API Server. It ensures that containers are running as instructed by the Master.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Service Proxy (kube-proxy)&lt;/strong&gt;&lt;br&gt;
The service proxy acts as a network connector. It enables communication between Pods and makes the application reachable from the outside world by routing traffic to the correct Pod.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pods &amp;amp; Containers&lt;/strong&gt;&lt;br&gt;
Pods contain one or more containers where the actual application code runs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;kubectl &amp;amp; Networking&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;kubectl acts like the CEO of the organization. It issues commands to the API Server to deploy, scale, or manage applications in the cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CNI (Container Network Interface)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;CNI (such as Calico or Weave Net) acts like the internal communication system of the company, enabling seamless networking between Pods across different nodes.&lt;/p&gt;

&lt;p&gt;Understanding this architecture helps us design scalable, fault-tolerant, and production-ready Kubernetes applications.&lt;/p&gt;
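&lt;p&gt;As a preview of what the objects discussed above look like in practice, here is a minimal sketch of a Deployment and a NodePort Service for the Flask tier. The names, labels, replica count, and NodePort are assumptions, not the project’s final manifests:&lt;/p&gt;

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-app
spec:
  replicas: 2                      # the Controller Manager keeps two Pods running
  selector:
    matchLabels:
      app: flask-app
  template:
    metadata:
      labels:
        app: flask-app
    spec:
      containers:
        - name: flask-app
          image: safi221/flaskapp:latest
          ports:
            - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: flask-service
spec:
  type: NodePort                   # exposes the app outside the cluster
  selector:
    app: flask-app
  ports:
    - port: 5000
      targetPort: 5000
      nodePort: 30007              # assumed NodePort in the allowed range
```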

&lt;p&gt;After that, open the AWS console and create two instances:&lt;br&gt;
one is the k8s-master server and the other is the k8s-node server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffgqztoh108np8sc3h3lg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffgqztoh108np8sc3h3lg.png" alt=" " width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Add Rules to the Security Group:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Allow SSH traffic (port 22):&lt;/p&gt;

&lt;p&gt;Type: SSH&lt;br&gt;
Port Range: 22&lt;br&gt;
Source: 0.0.0.0/0 (Anywhere) or your specific IP&lt;/p&gt;

&lt;p&gt;Allow Kubernetes API traffic (port 6443):&lt;/p&gt;

&lt;p&gt;Type: Custom TCP&lt;br&gt;
Port Range: 6443&lt;br&gt;
Source: 0.0.0.0/0 (Anywhere) or specific IP ranges&lt;/p&gt;

&lt;p&gt;Save the rules:&lt;/p&gt;

&lt;p&gt;Click on Create Security Group to save the settings.&lt;/p&gt;
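&lt;p&gt;The same rules can also be added from the AWS CLI with authorize-security-group-ingress. The group ID below is a placeholder; substitute your security group’s actual ID:&lt;/p&gt;

```shell
# Placeholder group ID: replace sg-0123456789abcdef0 with your own
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 --cidr 0.0.0.0/0

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 6443 --cidr 0.0.0.0/0
```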

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ev2p9bhob3oqn8w96xm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ev2p9bhob3oqn8w96xm.png" alt=" " width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, connect via SSH to both the Master and Node servers:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fleecbkxvjf1ff4e4bune.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fleecbkxvjf1ff4e4bune.png" alt=" " width="800" height="68"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Install Kubernetes Prerequisites (Run on Both Master &amp;amp; Node)&lt;/p&gt;

&lt;p&gt;Before initializing the Kubernetes cluster, we need to install the required &lt;strong&gt;Kubernetes components&lt;/strong&gt; on both the &lt;strong&gt;Master&lt;/strong&gt; and Worker &lt;strong&gt;(Node)&lt;/strong&gt; servers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Update System &amp;amp; Install Required Packages&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;sudo apt-get update&lt;br&gt;
sudo apt-get install -y apt-transport-https ca-certificates curl gpg&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Purpose:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This command prepares the system to securely download Kubernetes packages from external HTTPS repositories. It:&lt;/p&gt;

&lt;p&gt;Enables HTTPS-based package downloads&lt;/p&gt;

&lt;p&gt;Verifies SSL certificates to prevent compromised sources&lt;/p&gt;

&lt;p&gt;Uses curl to fetch remote data&lt;/p&gt;

&lt;p&gt;Allows GPG verification to ensure package authenticity&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Add Kubernetes GPG Signing Key&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;sudo mkdir -p /etc/apt/keyrings&lt;br&gt;
curl -fsSL &lt;a href="https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key" rel="noopener noreferrer"&gt;https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key&lt;/a&gt; | \&lt;br&gt;
sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Purpose:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This downloads the official Kubernetes GPG key and stores it locally (the mkdir first ensures the keyring directory exists; on newer Ubuntu releases it is created by default).&lt;br&gt;
It ensures that only trusted, signed Kubernetes packages can be installed on the system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Add Kubernetes APT Repository:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] \&lt;br&gt;
&lt;a href="https://pkgs.k8s.io/core:/stable:/v1.29/deb/" rel="noopener noreferrer"&gt;https://pkgs.k8s.io/core:/stable:/v1.29/deb/&lt;/a&gt; /' | \&lt;br&gt;
sudo tee /etc/apt/sources.list.d/kubernetes.list&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Purpose:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This adds the official Kubernetes repository to the system’s APT sources and links it with the GPG key, ensuring secure and verified package installation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Update Again&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;sudo apt-get update&lt;/p&gt;

&lt;p&gt;This updates the system package index so APT becomes aware of the newly added Kubernetes repository and can download Kubernetes packages.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Install Kubernetes Components&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;sudo apt-get install -y kubelet kubeadm kubectl&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Purpose:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This installs the core Kubernetes components:&lt;/p&gt;

&lt;p&gt;kubelet – Runs on every node and manages Pods&lt;/p&gt;

&lt;p&gt;kubeadm – Used to bootstrap and manage the Kubernetes cluster&lt;/p&gt;

&lt;p&gt;kubectl – Command-line tool to interact with the Kubernetes cluster&lt;/p&gt;

&lt;p&gt;Together, these components enable cluster initialization, node communication, and workload management.&lt;/p&gt;

&lt;p&gt;After executing all the commands, the Master server will look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F81nopdx0wwuvfnpsxup2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F81nopdx0wwuvfnpsxup2.png" alt=" " width="800" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, configure the Node server the same way:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flk1c42bgtkxit2rllygn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flk1c42bgtkxit2rllygn.png" alt=" " width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now verify on the Master by running "kubectl get nodes":&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fra8tukg4uyw2q9htjis7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fra8tukg4uyw2q9htjis7.png" alt=" " width="800" height="364"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the Node server, the output will confirm "This node has joined the cluster":&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fptx4hq900nqelmvxulwg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fptx4hq900nqelmvxulwg.png" alt=" " width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can check out the complete documentation on how to install and run a &lt;strong&gt;Kubernetes cluster using kubeadm&lt;/strong&gt; here:&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://github.com/SAFI-ULLAHSAFEER/kubestarter/tree/main/Kubeadm_Installation_Scripts_and_Documentation" rel="noopener noreferrer"&gt;https://github.com/SAFI-ULLAHSAFEER/kubestarter/tree/main/Kubeadm_Installation_Scripts_and_Documentation&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Concept of a Pod in Kubernetes:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A Pod is where your Docker containers run.&lt;br&gt;
It is the smallest deployable unit in Kubernetes, and it is where your application actually runs.&lt;/p&gt;

&lt;p&gt;Kubernetes talks to the container runtime through the Container Runtime Interface (CRI); in most clusters that runtime is containerd, which actually runs the containers.&lt;/p&gt;

&lt;p&gt;A Pod acts like a house for containers. Inside a Pod, you can define:&lt;/p&gt;

&lt;p&gt;Environment variables&lt;/p&gt;

&lt;p&gt;Resource limits&lt;/p&gt;

&lt;p&gt;Application configuration&lt;/p&gt;

&lt;p&gt;All required resources are enclosed inside a Pod.&lt;/p&gt;

&lt;p&gt;Containers cannot scale alone. In Kubernetes, we scale Pods, not individual containers.&lt;br&gt;
Multiple containers run inside Pods, and creating multiple Pods is called scaling.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Clone the Application Repository:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;git clone &lt;a href="https://github.com/SAFI-ULLAHSAFEER/two-tier-flask-app.git" rel="noopener noreferrer"&gt;https://github.com/SAFI-ULLAHSAFEER/two-tier-flask-app.git&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After cloning, move into the Kubernetes directory where all manifest files are present:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F78gqbpcvy6etwwszjjpe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F78gqbpcvy6etwwszjjpe.png" alt=" " width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now configure the "twotier-app-pod.yml" file:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fimwydmib498k8dixooc5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fimwydmib498k8dixooc5.png" alt=" " width="800" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Press Esc, then type :wq and press Enter to save the file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhytiw7g0cacep126qscq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhytiw7g0cacep126qscq.png" alt=" " width="800" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After that run "Kubectl apply –f two-tier-app-pod.yml"&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv2bfa1o6qecwaoirudwk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv2bfa1o6qecwaoirudwk.png" alt=" " width="800" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The pod is successfully configured now.&lt;/p&gt;

&lt;p&gt;Next, we move on to the &lt;strong&gt;Deployment&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deployment&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A Deployment is used to manage and configure Pods.&lt;/p&gt;

&lt;p&gt;In a Deployment:&lt;/p&gt;

&lt;p&gt;We define a Pod template&lt;/p&gt;

&lt;p&gt;Kubernetes creates multiple replicas of that Pod based on our requirement&lt;/p&gt;

&lt;p&gt;In production, Deployments provide:&lt;/p&gt;

&lt;p&gt;Auto-scaling&lt;/p&gt;

&lt;p&gt;Auto-healing&lt;/p&gt;

&lt;p&gt;High availability&lt;/p&gt;

&lt;p&gt;If any Pod or container crashes, Kubernetes automatically creates a new one, ensuring the application runs smoothly.&lt;/p&gt;

&lt;p&gt;Configure the Deployment file "twotier-app-Deployment.yml":&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbhfyf2bev18rysrfp3w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbhfyf2bev18rysrfp3w.png" alt=" " width="800" height="364"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then press :wq and Enter to save the file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8qbz2a0l3j3ekyr4jd3y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8qbz2a0l3j3ekyr4jd3y.png" alt=" " width="800" height="364"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After that run "Kubectl apply –f two-tier-app-Deployment"&lt;br&gt;
and then check pods by running "kubectl get pods"&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F96mz07nupr61yb9f05bw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F96mz07nupr61yb9f05bw.png" alt=" " width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Service&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To give access to a Deployment from the outside world, we need a Service.&lt;/p&gt;

&lt;p&gt;A Service provides a single, stable entry point to access the application.&lt;/p&gt;

&lt;p&gt;When a user wants to access the application from outside, they cannot directly access Pods, because Pods are dynamic and each Pod has its own IP address.&lt;/p&gt;

&lt;p&gt;Instead, the user first accesses the Service, and the Service then forwards the request to the Deployment.&lt;br&gt;
Since a Deployment can have multiple Pods and multiple container IPs, the Service acts as a single logical node and load-balances traffic across all Pods.&lt;/p&gt;

&lt;p&gt;These three components (Pod, Deployment, and Service) are essential for running a single-tier application in Kubernetes.&lt;/p&gt;

&lt;p&gt;In a multi-tier application, such as when we also have a database layer, additional components like Persistent Volumes and Persistent Volume Claims are required.&lt;/p&gt;

&lt;p&gt;Now configure "twotier-app-svc.yml":&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5ki3be77a9e9zxdi8ug.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5ki3be77a9e9zxdi8ug.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then check it with "kubectl get service":&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdamfdvei7x7davr1aujy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdamfdvei7x7davr1aujy.png" alt=" " width="800" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, create a directory for the persistent volume at /home/ubuntu/two-tier-flask-app/mysqldata.&lt;/p&gt;

&lt;p&gt;Then, inside two-tier-flask-app/k8s, create mysql-pv.yml with "vim mysql-pv.yml":&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftgmradcokcn3fgr1x3cy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftgmradcokcn3fgr1x3cy.png" alt=" " width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, create mysql-pvc.yml in the same directory with "vim mysql-pvc.yml".&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Persistent Volume&lt;/strong&gt; vs &lt;strong&gt;Persistent Volume Claim&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Persistent Volume (PV) is used to create or allocate storage at the cluster level.&lt;br&gt;
Persistent Volume Claim (PVC) is used to request storage by specifying how much storage an application needs.&lt;br&gt;
In simple terms:&lt;br&gt;
PV provides the actual storage&lt;br&gt;
PVC asks for the required amount of storage&lt;br&gt;
Kubernetes automatically matches a PVC with an appropriate PV and attaches it to the pod.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqf67sql8axixekki5ezf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqf67sql8axixekki5ezf.png" alt=" " width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now after that run "kubectl apply -f twotier-app-pv.yml"&lt;br&gt;
 and "kubectl apply -f twotier-app-pvc.yml"&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frpk2ikcehnk6o7yf1hic.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frpk2ikcehnk6o7yf1hic.png" alt=" " width="800" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, create mysql-deployment.yml with "vim mysql-deployment.yml":&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frz4bszafsqlaohi30om1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frz4bszafsqlaohi30om1.png" alt=" " width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Press :wq and Enter to save, then apply it by running "kubectl apply -f mysql-deployment.yml":&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmw0bu5144r3vxbgs7ppw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmw0bu5144r3vxbgs7ppw.png" alt=" " width="800" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, in two-tier-flask-app/k8s, create mysql-svc.yml with "vim mysql-svc.yml":&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiow7bgqa0gdwaro0d5gu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiow7bgqa0gdwaro0d5gu.png" alt=" " width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, verify all the nodes, Pods, and Services:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Futsihpqfxrnclmr3olp4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Futsihpqfxrnclmr3olp4.png" alt=" " width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you can access the Flask app from your browser:&lt;/p&gt;

&lt;p&gt;Use any node’s IP (control-plane or worker). In my case:&lt;br&gt;
&lt;strong&gt;Master node&lt;/strong&gt;: ip-172-31-33-58&lt;br&gt;
&lt;strong&gt;Worker node&lt;/strong&gt;: ip-172-31-37-23&lt;/p&gt;

&lt;p&gt;Combine it with the NodePort 30004 (I initially used port 30007 and later updated it to 30004; you can configure any port in the default NodePort range, 30000-32767):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkxqinitsi1gd2oz3qx5u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkxqinitsi1gd2oz3qx5u.png" alt=" " width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Finally, the application is live and running successfully!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fed1axhfcy6cs5feyph7d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fed1axhfcy6cs5feyph7d.png" alt=" " width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Wrapping Up&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We’ve successfully taken a two-tier Flask + MySQL application from containerization with Docker and Docker Compose all the way to production-ready deployment on Kubernetes.&lt;/p&gt;

&lt;p&gt;From building images and pushing them to Docker Hub to orchestrating services, scaling pods, and exposing the application via Kubernetes, this journey covered the complete modern DevOps workflow.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>docker</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>"How to Automate Spring Boot Deployment with Shell Script &amp; Vagrant (Step-by-Step for Beginners)"</title>
      <dc:creator>SAFI-ULLAH SAFEER</dc:creator>
      <pubDate>Sun, 01 Jun 2025 23:56:02 +0000</pubDate>
      <link>https://dev.to/safi-ullah/how-to-automate-spring-boot-deployment-with-shell-script-vagrant-step-by-step-for-beginners-1p88</link>
      <guid>https://dev.to/safi-ullah/how-to-automate-spring-boot-deployment-with-shell-script-vagrant-step-by-step-for-beginners-1p88</guid>
      <description>&lt;p&gt;🎯 Level: Beginner-friendly&lt;/p&gt;

&lt;p&gt;📂 Project Repo: &lt;a href="https://github.com/SAFI-ULLAHSAFEER/Shell-Script" rel="noopener noreferrer"&gt;https://github.com/SAFI-ULLAHSAFEER/Shell-Script&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  🛠️ Why Shell Script Automation?
&lt;/h2&gt;

&lt;p&gt;Automating application deployments saves time and reduces human errors. In this blog, I’ll walk you through deploying a Spring Boot WAR app inside a Vagrant-managed Ubuntu machine using a Shell Script. All actions are visualized step-by-step with real screenshots. Perfect for beginners in DevOps, scripting, or system setup!&lt;/p&gt;

&lt;h2&gt;
  
  
  📦  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Installed: Git, Vagrant, VirtualBox&lt;/p&gt;

&lt;p&gt;Basic knowledge of Linux terminal&lt;/p&gt;

&lt;p&gt;A Spring Boot WAR file (included in the GitHub repo)&lt;/p&gt;

&lt;h2&gt;
  
  
  📁 Step 1: Clone the Project
&lt;/h2&gt;

&lt;p&gt;Open VS Code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35xo5vvvrep310fn8huu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35xo5vvvrep310fn8huu.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Type the following command:&lt;br&gt;
git clone &lt;a href="https://github.com/SAFI-ULLAHSAFEER/Shell-Script" rel="noopener noreferrer"&gt;https://github.com/SAFI-ULLAHSAFEER/Shell-Script&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fohyhy78b19rxm0l9xd5c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fohyhy78b19rxm0l9xd5c.png" alt="Image description" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  📂 Step 2: Navigate to the Project Directory
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feqrv3mql5vcpg9jgvma1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feqrv3mql5vcpg9jgvma1.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Start the Vagrant Box&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd53nzh9gf6bo5k5snkd7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd53nzh9gf6bo5k5snkd7.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🛠️ This command automatically creates and configures a virtual Ubuntu machine for you; everything that follows will run inside it.&lt;/p&gt;
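&lt;p&gt;The commands behind these steps are, in sketch form (assuming the Vagrantfile is already present in the cloned repo):&lt;/p&gt;

```shell
# Create and boot the VM defined in the Vagrantfile
vagrant up

# Log in to the running VM
vagrant ssh

# Inside the VM: the project directory is shared at /vagrant
cd /vagrant
```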

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgc9jxhcrk4si4m1cv7eq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgc9jxhcrk4si4m1cv7eq.png" alt="Image description" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, the VM boots to a text-only console; in Linux this mode is called the multi-user.target.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frfo97vpilvp02ldn9mc1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frfo97vpilvp02ldn9mc1.png" alt="Image description" width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next step is to SSH into the machine:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8okq8anvy337a6o6e46a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8okq8anvy337a6o6e46a.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, navigate to your Spring Boot app directory.&lt;br&gt;
By default, Vagrant shares your local project directory inside the VM at /vagrant.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi14piy93uf593v46ulu3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi14piy93uf593v46ulu3.png" alt="Image description" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next step is to create the setup.sh script:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5dzs133vqb2st8bxjfr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5dzs133vqb2st8bxjfr.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5n20poadiqsm6lliy2u4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5n20poadiqsm6lliy2u4.png" alt="Image description" width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now the next step is to Make the Script Executable and Run&lt;/p&gt;

&lt;p&gt;chmod +x setup.sh&lt;br&gt;
sudo ./setup.sh&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpuxczf8hjq4pzp7rxbts.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpuxczf8hjq4pzp7rxbts.png" alt="Image description" width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This will:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Update packages&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install Java and Tomcat&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deploy your WAR file&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Restart Tomcat&lt;/strong&gt;&lt;/p&gt;
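&lt;p&gt;The steps above can be sketched as a minimal setup.sh (the package names, Java/Tomcat versions, and WAR filename are assumptions for Ubuntu; compare with the screenshots for the exact script):&lt;/p&gt;

```shell
#!/bin/bash
set -e  # stop on the first error

# Update packages
sudo apt-get update -y

# Install Java and Tomcat
sudo apt-get install -y openjdk-11-jdk tomcat9

# Deploy the WAR file into Tomcat's webapps directory
sudo cp /vagrant/app.war /var/lib/tomcat9/webapps/

# Restart Tomcat so it picks up the new WAR
sudo systemctl restart tomcat9
```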

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F111pvg5tb9xki0rsky1b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F111pvg5tb9xki0rsky1b.png" alt="Image description" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next step is to check the VM's IP address by running:&lt;br&gt;
ip a&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftoi1mcyr9lfjz5jlta4j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftoi1mcyr9lfjz5jlta4j.png" alt="Image description" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The IP address of the machine will appear:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faixiy6bhp0t128m16hbs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faixiy6bhp0t128m16hbs.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next step is to access your app in the browser.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqk6gxsi8umwnpnsahoea.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqk6gxsi8umwnpnsahoea.png" alt="Image description" width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then press Enter, and the following screen will appear:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0m6rdws3yrndwv2dnvzh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0m6rdws3yrndwv2dnvzh.png" alt="Image description" width="800" height="429"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🚀 &lt;strong&gt;Congratulations!&lt;/strong&gt; If everything worked, you'll see your Spring Boot app live!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8230ozp82zx8viu2em6j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8230ozp82zx8viu2em6j.png" alt="Image description" width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Exit and Destroy the Vagrant Machine (Cleanup)&lt;/strong&gt;&lt;br&gt;
After verifying the app in the browser, clean up your VM:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;exit
vagrant destroy&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqwxs611brpsej77iihph.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqwxs611brpsej77iihph.png" alt="Image description" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;These commands:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Log out of the VM&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Prompt you to confirm destroying the environment&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Free up system resources&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
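&lt;p&gt;If you want to skip the confirmation prompt, for example in a cleanup script, Vagrant accepts a force flag. A quick sketch (run from the directory containing your Vagrantfile):&lt;/p&gt;

```shell
# Destroy the VM without the interactive "Are you sure?" prompt
vagrant destroy -f
# Check the result; the machine should now show as "not created"
vagrant status
```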

&lt;p&gt;&lt;strong&gt;Final Thoughts&lt;/strong&gt; ❤️&lt;/p&gt;

&lt;p&gt;This hands-on guide is your stepping stone into automation with shell scripting. Share this with your peers and drop your questions below. Happy scripting!&lt;/p&gt;

</description>
      <category>bash</category>
      <category>linux</category>
      <category>shellscripting</category>
    </item>
    <item>
      <title>Build a Secure Web Server on AWS: A Step-by-Step Guide Deploying a secure and scalable web application on AWS using AWS services</title>
      <dc:creator>SAFI-ULLAH SAFEER</dc:creator>
      <pubDate>Sun, 24 Nov 2024 09:03:22 +0000</pubDate>
      <link>https://dev.to/safi-ullah/build-a-secure-web-server-on-aws-a-step-by-step-guide-deploying-a-secure-and-scalable-web-1ad0</link>
      <guid>https://dev.to/safi-ullah/build-a-secure-web-server-on-aws-a-step-by-step-guide-deploying-a-secure-and-scalable-web-1ad0</guid>
      <description>&lt;p&gt;Deploying a secure and scalable web application on AWS may seem challenging, but with proper guidance, it’s achievable. This article follows a structured approach to set up a fully functional web server using AWS services like Amazon VPC, IAM, EC2, and Systems Manager.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Design Your Architecture&lt;/strong&gt;&lt;br&gt;
Before jumping into implementation, take a moment to review the architecture diagram for your web application. It will guide you as we configure each AWS service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw09k3xl061zgfkaxgovi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw09k3xl061zgfkaxgovi.png" alt="Image description" width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Key points:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Create a VPC and Subnets&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;An Amazon VPC is a logically isolated virtual network you define, allowing you to launch AWS resources in a secure, isolated environment. We'll use the VPC wizard to quickly set up the entire virtual network for our web server, including subnets, routing, and other resources.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;&lt;strong&gt;Set Up Security Groups&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Security groups control inbound and outbound traffic for associated resources, like servers. Your VPC comes with a default security group, but you can create additional groups with custom inbound and outbound rules.&lt;/p&gt;

&lt;p&gt;We'll create two security groups to secure our website. One will protect the resources in the public subnets, allowing only the necessary traffic. The other will specifically secure the web server instance.&lt;/p&gt;
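&lt;p&gt;For reference, the same pair of groups can be created from the AWS CLI. This is only a sketch: the VPC ID and security group IDs below are placeholders you would replace with your own values.&lt;/p&gt;

```shell
# Load balancer group: allow HTTP (port 80) from anywhere
aws ec2 create-security-group \
  --group-name LoadBalancerSecurityGroup \
  --description "Allow HTTP from the Internet" \
  --vpc-id vpc-PLACEHOLDER

aws ec2 authorize-security-group-ingress \
  --group-id sg-ALB-PLACEHOLDER \
  --protocol tcp --port 80 --cidr 0.0.0.0/0

# Web server group: allow HTTP only from the load balancer's group
aws ec2 create-security-group \
  --group-name WebserverSecurityGroup \
  --description "Allow HTTP from the load balancer only" \
  --vpc-id vpc-PLACEHOLDER

aws ec2 authorize-security-group-ingress \
  --group-id sg-WEB-PLACEHOLDER \
  --ip-permissions 'IpProtocol=tcp,FromPort=80,ToPort=80,UserIdGroupPairs=[{GroupId=sg-ALB-PLACEHOLDER}]'
```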

&lt;ol start="3"&gt;
&lt;li&gt;&lt;strong&gt;Configure IAM Roles&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;AWS Identity and Access Management (IAM) is a service for securely controlling access to AWS services. With IAM, you can centrally manage users, security credentials such as access keys, and permissions that control which AWS resources users and applications can access.&lt;/p&gt;

&lt;p&gt;We'll configure IAM to tightly control which AWS resources our web server can access, granting only the necessary permissions.&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;&lt;strong&gt;Launch an EC2 Instance&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Amazon Elastic Compute Cloud (Amazon EC2) provides on-demand, scalable computing capacity in the AWS Cloud. Using Amazon EC2 reduces hardware costs so you can develop and deploy applications faster. You can launch as many or as few virtual servers as you need, configure security and networking, and manage storage.&lt;/p&gt;

&lt;p&gt;In the following section, we'll deploy our web server using Amazon EC2.&lt;/p&gt;

&lt;ol start="5"&gt;
&lt;li&gt;&lt;strong&gt;Manage the Instance with AWS Systems Manager&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Session Manager is a fully managed AWS Systems Manager capability. With Session Manager, you can manage your Amazon Elastic Compute Cloud (Amazon EC2) instances, edge devices, on-premises servers, and virtual machines (VMs).&lt;/p&gt;

&lt;p&gt;We'll use Session Manager to securely access the web server for administrative purposes.&lt;/p&gt;

&lt;ol start="6"&gt;
&lt;li&gt;&lt;strong&gt;Create an Application Load Balancer&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;AWS offers several types of load balancers to distribute traffic across your infrastructure.&lt;/p&gt;

&lt;p&gt;In this section, we'll be setting up an Application Load Balancer (ALB). With the ALB, we'll be able to route incoming web traffic to our single EC2 web server instance. The load balancer will handle the network configuration and security policies to enable secure communication between clients and the web server.&lt;/p&gt;

&lt;ol start="7"&gt;
&lt;li&gt;&lt;strong&gt;Create an S3 Bucket and Upload Files&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance. Millions of customers of all sizes and industries store, manage, analyze, and protect any amount of data for virtually any use case, such as data lakes, cloud-native applications, and mobile apps.&lt;/p&gt;

&lt;p&gt;We'll store files in an Amazon S3 bucket, allowing users to access them directly from the website.&lt;/p&gt;
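&lt;p&gt;The bucket setup later in the walkthrough can also be done from the AWS CLI. A sketch (the bucket name is a placeholder; S3 bucket names must be globally unique):&lt;/p&gt;

```shell
# Create the bucket
aws s3 mb s3://my-webserver-files-placeholder
# Upload local files recursively, then list the bucket to verify
aws s3 cp ./site-files/ s3://my-webserver-files-placeholder/ --recursive
aws s3 ls s3://my-webserver-files-placeholder/
```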

&lt;ol start="8"&gt;
&lt;li&gt;&lt;strong&gt;Test Your Setup&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Browse to the website!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let’s dive in!&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Instructions&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Navigate to the AWS Management Console and locate the VPC service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdt2y5yhx29w2cwj3wby4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdt2y5yhx29w2cwj3wby4.png" alt="Image description" width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on Create VPC Select VPC and more. This will start the VPC wizard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuq4seqd2wm544z1ahaxz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuq4seqd2wm544z1ahaxz.png" alt="Image description" width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Great job!&lt;/strong&gt; You've successfully set up the network infrastructure for our new web server.&lt;/p&gt;

&lt;p&gt;Now Browse to the &lt;strong&gt;Security Groups&lt;/strong&gt; part of the Amazon EC2 service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fasz200syva2y1zeyibq1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fasz200syva2y1zeyibq1.png" alt="Image description" width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Now Create two Security Groups here with the following settings
&lt;/h2&gt;

&lt;p&gt;For the first one, set the &lt;strong&gt;Security group name&lt;/strong&gt; to Load Balancer Security Group.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxowpecsxym2pn8xeuyfe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxowpecsxym2pn8xeuyfe.png" alt="Image description" width="800" height="204"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After defining all the rules, click Create.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm1cxfnmgxeb922vqvqnh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm1cxfnmgxeb922vqvqnh.png" alt="Image description" width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, repeat the process to create a second Security Group with the following settings.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftbfz7gwu9lg7q746xje6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftbfz7gwu9lg7q746xje6.png" alt="Image description" width="800" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Make sure to add this rule to the second group, &lt;strong&gt;WebserverSecurityGroup&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3r35qu51t3cx03zrd7ic.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3r35qu51t3cx03zrd7ic.png" alt="Image description" width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After that, confirm that both security groups have been created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqvr0wv3r4n3zwe56jzdv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqvr0wv3r4n3zwe56jzdv.png" alt="Image description" width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Great,&lt;/strong&gt; we've created two new Security Groups to limit traffic to specific ports. We'll be using these later on in the setup.&lt;/p&gt;

&lt;p&gt;Now We'll configure IAM to tightly control which AWS resources our web server can access, granting only the necessary permissions.&lt;/p&gt;

&lt;p&gt;Create a new IAM role and associate it with the EC2 instance profile for the web server.&lt;br&gt;
Select Roles, then click Create role.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnpuigyt42bpkrz0bgkrc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnpuigyt42bpkrz0bgkrc.png" alt="Image description" width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select &lt;strong&gt;EC2 Role for AWS Systems Manager&lt;/strong&gt; and click Next.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Systems Manager&lt;/strong&gt; is a service that allows you to securely administer and manage your EC2 instances, without needing to access them over the public Internet. This role will grant the necessary permissions for Systems Manager to connect to and manage our web server instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2cclrs7jx73c8klzrr61.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2cclrs7jx73c8klzrr61.png" alt="Image description" width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Confirm that the &lt;strong&gt;AmazonSSMManagedInstanceCore&lt;/strong&gt; policy has been added to the role and click Next&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ynscr3naul5v2co2su9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ynscr3naul5v2co2su9.png" alt="Image description" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F499kyk8hz14gnxjf39mf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F499kyk8hz14gnxjf39mf.png" alt="Image description" width="800" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Congratulations!&lt;/strong&gt; You've created an IAM role which will be associated with the EC2 instance profile for our web server. This role provides the necessary permissions for the instance to access other AWS resources, as well as allowing secure administration through AWS Systems Manager, without needing to expose the instance directly to the public Internet.&lt;/p&gt;
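&lt;p&gt;The console wizard above can also be reproduced with the AWS CLI. A sketch, where the role name is a placeholder (the instance profile name matches the one used later in this guide):&lt;/p&gt;

```shell
# Create a role that EC2 instances are allowed to assume
aws iam create-role \
  --role-name WebServerRole \
  --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}'

# Attach the managed policy that Session Manager needs
aws iam attach-role-policy \
  --role-name WebServerRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore

# Wrap the role in an instance profile so EC2 can use it
aws iam create-instance-profile --instance-profile-name WebServerInstanceProfile
aws iam add-role-to-instance-profile \
  --instance-profile-name WebServerInstanceProfile \
  --role-name WebServerRole
```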

&lt;h2&gt;
  
  
  Now we'll deploy our web server using Amazon EC2
&lt;/h2&gt;

&lt;p&gt;Browse to the EC2 service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fez87ma69lxur933e5y12.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fez87ma69lxur933e5y12.png" alt="Image description" width="800" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Some points to remember while configuring EC2
&lt;/h2&gt;

&lt;p&gt;Customers have the flexibility to launch Amazon EC2 instances with a wide selection of operating systems and pre-configured images.&lt;br&gt;
For our simple web server, we'll select the &lt;strong&gt;Amazon Linux 2023 AMI (Amazon Machine Image) in the 64-bit (x86) architecture.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Normally, you'd create a key pair to enable secure SSH access to the EC2 instance. But in this case, we'll skip the key pair since we'll be using AWS Systems Manager to connect, rather than direct SSH.&lt;br&gt;
&lt;strong&gt;Select Proceed without a key pair (Not recommended)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In Network settings, click the Edit button to configure the EC2 instance's networking. Associate the new instance with the Amazon VPC and &lt;strong&gt;private subnet&lt;/strong&gt; we set up earlier.&lt;/p&gt;

&lt;h2&gt;
  
  
  Expand Advanced details
&lt;/h2&gt;

&lt;p&gt;Under &lt;strong&gt;IAM instance profile&lt;/strong&gt;, choose &lt;strong&gt;WebServerInstanceProfile&lt;/strong&gt;. This is the instance profile we created earlier, which will allow us to privately connect to the server.&lt;/p&gt;

&lt;p&gt;We want the server to run a script on boot that installs the necessary PHP web server components. We can accomplish this by specifying user data.&lt;br&gt;
Enter the code below into the user data field.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;#!/bin/bash
yum update -y

# Install Session Manager agent
yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
systemctl enable amazon-ssm-agent

# Install and start the PHP web server
dnf install -y httpd wget php-json php
chkconfig httpd on
systemctl start httpd
systemctl enable httpd

# Install AWS SDK for PHP
wget https://docs.aws.amazon.com/aws-sdk-php/v3/download/aws.zip
unzip aws.zip -d /var/www/html/sdk
rm aws.zip

# Install the web pages for our lab
# (remove the default index.html, if present, so index.php is served)
if [ -f /var/www/html/index.html ]; then
  rm /var/www/html/index.html
fi
cd /var/www/html
wget https://ws-assets-prod-iad-r-iad-ed304a55c2ca1aee.s3.us-east-1.amazonaws.com/2aa53d6e-6814-4705-ba90-04dfa93fc4a3/index.php

# Update existing packages
dnf update -y
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;After that, click &lt;strong&gt;Launch Instance&lt;/strong&gt; to complete the configuration and launch the new web server.&lt;/p&gt;
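&lt;p&gt;The same launch can be scripted with the AWS CLI. A sketch only: the AMI, subnet, and security group IDs are placeholders, and &lt;code&gt;user-data.sh&lt;/code&gt; is assumed to contain the boot script above.&lt;/p&gt;

```shell
aws ec2 run-instances \
  --image-id ami-PLACEHOLDER \
  --instance-type t2.micro \
  --subnet-id subnet-PRIVATE-PLACEHOLDER \
  --security-group-ids sg-WEB-PLACEHOLDER \
  --iam-instance-profile Name=WebServerInstanceProfile \
  --user-data file://user-data.sh \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=mywebserver}]'
```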

&lt;p&gt;Once the instance is launched, you'll see a success message. Click on the underlined Amazon EC2 instance ID to navigate back to the EC2 dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwrves9a3vize05j3eant.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwrves9a3vize05j3eant.png" alt="Image description" width="800" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Excellent work!&lt;/strong&gt; You've successfully created the web server, leveraging all the foundational components we set up previously.&lt;/p&gt;

&lt;p&gt;Now We'll use &lt;strong&gt;Session Manager&lt;/strong&gt; to securely access the web server for administrative purposes.&lt;br&gt;
Session Manager is a fully managed AWS Systems Manager capability. With Session Manager, you can manage your Amazon Elastic Compute Cloud (Amazon EC2) instances, edge devices, on-premises servers, and virtual machines (VMs).&lt;/p&gt;
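&lt;p&gt;As an alternative to the console, a session can also be started from the AWS CLI. A sketch, assuming the Session Manager plugin is installed and the instance ID is replaced with your own:&lt;/p&gt;

```shell
# Open an interactive shell on the instance over Systems Manager
# (no SSH key, no open inbound port required)
aws ssm start-session --target i-PLACEHOLDER
```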

&lt;p&gt;In the Amazon EC2 dashboard, select the web server instance. You'll notice it only has a &lt;strong&gt;private IP address&lt;/strong&gt;, not a public one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpx1us1yrr9dpbh17j77m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpx1us1yrr9dpbh17j77m.png" alt="Image description" width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now select your &lt;strong&gt;EC2 instance&lt;/strong&gt; and click &lt;strong&gt;Connect&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F49wykoha1g0nx5lmw9kz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F49wykoha1g0nx5lmw9kz.png" alt="Image description" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Take a moment to marvel at the &lt;strong&gt;web server shell&lt;/strong&gt;, then proceed to run the following commands:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;echo -n 'Private IPv4 Address: ' &amp;amp;&amp;amp; ifconfig enX0 | grep -i mask | awk '{print $2}' | cut -f2 -d: &amp;amp;&amp;amp; \
echo -n 'Public IPv4 Address: ' &amp;amp;&amp;amp; curl checkip.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fynpb01izwyyd3r55njpg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fynpb01izwyyd3r55njpg.png" alt="Image description" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now From the navigation menu, click on &lt;strong&gt;the Load Balancers link&lt;/strong&gt;, then click &lt;strong&gt;Create load balancer&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The Application Load Balancer (ALB) operates at the application layer, providing advanced traffic routing capabilities, in contrast to other load balancer options like the Network Load Balancer which functions at the network layer.&lt;/p&gt;

&lt;p&gt;Click Create under &lt;strong&gt;Application Load Balancer&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1opcuv2iwbrsvw2c6ggb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1opcuv2iwbrsvw2c6ggb.png" alt="Image description" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Configure the Application load balancer with the following basic and network settings:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6h6z2bzx0f0w8pk2we22.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6h6z2bzx0f0w8pk2we22.png" alt="Image description" width="744" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F869lsbtf22y89dgcqzei.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F869lsbtf22y89dgcqzei.png" alt="Image description" width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A target group defines the targets (e.g. EC2 instances) that the load balancer will route traffic to. Configure the new target group with the following settings:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpq2jhxswka944uqxuw2v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpq2jhxswka944uqxuw2v.png" alt="Image description" width="800" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select &lt;strong&gt;mywebserver&lt;/strong&gt; and click &lt;strong&gt;Include as pending below&lt;/strong&gt;. This will configure the load balancer to route web traffic from the Internet to the EC2 web server instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ayuky9xq9wucl7a9scj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ayuky9xq9wucl7a9scj.png" alt="Image description" width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click Create &lt;strong&gt;target group&lt;/strong&gt; to finalize the setup, then close the browser tab to return to the load balancer configuration.&lt;/p&gt;

&lt;p&gt;In the Listeners and routing section, click the refresh button and select the &lt;strong&gt;WebServerTargetGroup&lt;/strong&gt; we just created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhkodvggjhrtevz5da6k2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhkodvggjhrtevz5da6k2.png" alt="Image description" width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Leave the remaining settings as default and click &lt;strong&gt;Create load balancer&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Filpigqc6rdkepw3d63bg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Filpigqc6rdkepw3d63bg.png" alt="Image description" width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Awesome! You have created an Application Load Balancer&lt;/strong&gt;. For this workshop, it is configured to route incoming HTTP (port 80) web traffic from the Internet to your EC2 web server instance. In a production environment, you would want to configure the load balancer to use HTTPS for secure communication.&lt;/p&gt;
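&lt;p&gt;For the production HTTPS setup mentioned above, an HTTPS listener can be added with the AWS CLI. A sketch only: the load balancer, certificate, and target group ARNs are placeholders, and the certificate is assumed to already exist in AWS Certificate Manager.&lt;/p&gt;

```shell
# Add an HTTPS (port 443) listener that forwards to the existing target group
aws elbv2 create-listener \
  --load-balancer-arn arn:aws:elasticloadbalancing:REGION:ACCOUNT:loadbalancer/app/PLACEHOLDER \
  --protocol HTTPS --port 443 \
  --certificates CertificateArn=arn:aws:acm:REGION:ACCOUNT:certificate/PLACEHOLDER \
  --ssl-policy ELBSecurityPolicy-TLS13-1-2-2021-06 \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:REGION:ACCOUNT:targetgroup/PLACEHOLDER
```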

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5kxq3gazt1se4s5j03l1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5kxq3gazt1se4s5j03l1.png" alt="Image description" width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Navigate to the Listeners and Rules tab and click on the &lt;strong&gt;WebServerTargetGroup link&lt;/strong&gt;. Verify that there is one healthy target listed.&lt;/p&gt;

&lt;p&gt;Initially, the target may not yet be shown as healthy:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzj1k5r4we6dtkclxnp77.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzj1k5r4we6dtkclxnp77.png" alt="Image description" width="800" height="372"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tip&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If the load balancer is not fully provisioned or the target group doesn't show a healthy instance yet, give it a few minutes; provisioning usually takes 3-5 minutes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbpyoynpkcuqkhtyzva2w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbpyoynpkcuqkhtyzva2w.png" alt="Image description" width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, there is one &lt;strong&gt;healthy&lt;/strong&gt; target listed.&lt;/p&gt;
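
&lt;p&gt;If you prefer the command line, this wait-and-check step can be sketched as a small polling loop. This is only a sketch: the stubbed &lt;code&gt;check_health&lt;/code&gt; function stands in for the real AWS CLI call shown in the comment, which assumes the AWS CLI is installed and configured with credentials.&lt;/p&gt;

```shell
# Poll until the target group reports a healthy target.
# In the real workshop the check would be something like:
#   aws elbv2 describe-target-health --target-group-arn "$TG_ARN" \
#     --query 'TargetHealthDescriptions[0].TargetHealth.State' --output text
check_health() {
  echo "healthy"   # stub standing in for the AWS CLI call above
}

state="unknown"
for attempt in 1 2 3 4 5; do
  state=$(check_health)
  if [ "$state" = "healthy" ]; then
    break
  fi
  sleep 60   # health checks typically settle within 3-5 minutes
done
echo "final state: $state"
```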

&lt;p&gt;Now let's locate the public URL for the load balancer. You can find this under the &lt;strong&gt;DNS name&lt;/strong&gt; on the Load Balancer page.&lt;/p&gt;

&lt;p&gt;Copy the DNS name from the Load Balancer page and paste it into a new browser tab.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8cbs6ml4cgto7nlrr9e5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8cbs6ml4cgto7nlrr9e5.png" alt="Image description" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgzs00gq92gxbgnbf65vy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgzs00gq92gxbgnbf65vy.png" alt="Image description" width="800" height="158"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The following screen will appear. &lt;strong&gt;We have a functioning website!&lt;/strong&gt; You can browse to the load balancer's public DNS address from any device. When you do, you'll see the website with options to perform various actions. The first option is related to Amazon S3 storage, so let's continue by provisioning the necessary storage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fofhyfzg0uln4i68qbfcy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fofhyfzg0uln4i68qbfcy.png" alt="Image description" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now browse to the &lt;strong&gt;Amazon S3 service&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcvxzfw4484ff8d40v6gi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcvxzfw4484ff8d40v6gi.png" alt="Image description" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Give your bucket a globally unique name; in this case I have used &lt;strong&gt;awslearningclubmust&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdm4n6nc1a7mbji11cxi9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdm4n6nc1a7mbji11cxi9.png" alt="Image description" width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Leave the other settings as the defaults, then click Create bucket.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F37dtbimhxl9mj03uhsaf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F37dtbimhxl9mj03uhsaf.png" alt="Image description" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, let's upload some files to the bucket. Download the required files from &lt;a href="https://ws-assets-prod-iad-r-iad-ed304a55c2ca1aee.s3.us-east-1.amazonaws.com/2aa53d6e-6814-4705-ba90-04dfa93fc4a3/UnzipAndUpload.zip" rel="noopener noreferrer"&gt;here&lt;/a&gt; and unarchive them locally.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Or you can upload your own files&lt;/strong&gt;&lt;/p&gt;
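
&lt;p&gt;The console steps above could equivalently be done with the AWS CLI. This is a sketch only: it assumes the CLI is configured with credentials, and that the unarchived files live in a local folder named &lt;code&gt;UnzipAndUpload&lt;/code&gt; (the folder name is an assumption).&lt;/p&gt;

```shell
# Create the bucket in us-east-1 (bucket names are globally unique)
aws s3 mb s3://awslearningclubmust --region us-east-1

# Upload the unarchived workshop files
aws s3 cp ./UnzipAndUpload s3://awslearningclubmust/ --recursive
```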

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4c6x8yuwfk2v36g9grxa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4c6x8yuwfk2v36g9grxa.png" alt="Image description" width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After uploading the objects to your bucket, go back to the load balancer DNS URL you pasted into your browser earlier. In the S3 connection tester, enter your bucket name (in this case &lt;strong&gt;awslearningclubmust&lt;/strong&gt;) and your region, &lt;strong&gt;us-east-1&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F83qk40sqxhkz4lha65y1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F83qk40sqxhkz4lha65y1.png" alt="Image description" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click Browse. Interesting, it looks like an error occurred. Can you investigate and figure out what might be causing it?&lt;br&gt;
As expected, you get the following error when trying to access this page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffuk7dwijnrwqc80eh4ni.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffuk7dwijnrwqc80eh4ni.png" alt="Image description" width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But don't worry, here is the last twist:&lt;br&gt;
browse to the &lt;strong&gt;IAM service&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwpjsz0sv4d6un7uqmd4t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwpjsz0sv4d6un7uqmd4t.png" alt="Image description" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Under Permission policies, click &lt;strong&gt;Add permissions&lt;/strong&gt; and select &lt;strong&gt;Attach policies&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5bgu41umhj3z2rz2ldp2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5bgu41umhj3z2rz2ldp2.png" alt="Image description" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Search for s3. Select the &lt;strong&gt;AmazonS3ReadOnlyAccess&lt;/strong&gt; AWS managed policy and click &lt;strong&gt;Add permissions&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi4d45okfs8eguwij3xvt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi4d45okfs8eguwij3xvt.png" alt="Image description" width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0uzz34lxwlfru8twzm3x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0uzz34lxwlfru8twzm3x.png" alt="Image description" width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;
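
&lt;p&gt;The same attachment can be sketched with the AWS CLI. The role name &lt;code&gt;WebServerRole&lt;/code&gt; is a placeholder; substitute the instance role from your own setup. The policy ARN is the real ARN of the AWS managed policy.&lt;/p&gt;

```shell
# Attach the AWS managed read-only S3 policy to the instance role
aws iam attach-role-policy \
  --role-name WebServerRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess

# Verify that it is attached
aws iam list-attached-role-policies --role-name WebServerRole
```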

&lt;p&gt;Switch back to the website and try using the Amazon S3 bucket object browser again.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyn2c571styrxxn9d281h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyn2c571styrxxn9d281h.png" alt="Image description" width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fantastic work!&lt;/strong&gt; You've completed the full implementation of the web server and S3 integration, showcasing your ability to deploy an AWS-powered web application. This hands-on experience has equipped you with valuable skills in areas like &lt;strong&gt;networking&lt;/strong&gt;, &lt;strong&gt;security&lt;/strong&gt;, &lt;strong&gt;compute&lt;/strong&gt;, and &lt;strong&gt;storage&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F60d5edntc9a8s1obksxt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F60d5edntc9a8s1obksxt.png" alt="Image description" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Test your Knowledge&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;What is an Availability Zone and why use more than one?&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;An Availability Zone is a group of one or more data centers within an AWS Region. Using multiple Availability Zones provides redundancy and high availability for your resources, protecting against failures in a single location.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;&lt;strong&gt;What's the maximum number of subnets in an Amazon VPC?&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;By default, you can create up to 200 subnets per VPC; this quota can be raised through a service quota increase request if needed.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;&lt;strong&gt;What's the difference between an IAM role and an IAM permission?&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;An IAM role is a container that holds IAM permissions, which define the specific actions and resources that are allowed; the role can then be assumed by trusted entities such as EC2 instances.&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;&lt;strong&gt;What are the key benefits of using AWS Systems Manager to manage the web server instance?&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The key benefit of using AWS Systems Manager is the ability to securely manage and maintain the web server instance without exposing management ports to the public Internet, along with a range of other administrative capabilities.&lt;/p&gt;

&lt;ol start="5"&gt;
&lt;li&gt;&lt;strong&gt;What security principle does the IAM setup we just completed aim to follow?&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The security principle that IAM and the process we followed adheres to is the &lt;strong&gt;principle of least privilege&lt;/strong&gt;; only granting the minimum permissions necessary for the EC2 instance to perform its required functions.&lt;/p&gt;
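
&lt;p&gt;To follow least privilege even more strictly than the broad &lt;strong&gt;AmazonS3ReadOnlyAccess&lt;/strong&gt; managed policy, you could scope a custom policy to just this walkthrough's bucket. This is a sketch using the bucket name from the article:&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::awslearningclubmust"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::awslearningclubmust/*"
    }
  ]
}
```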

&lt;ol start="6"&gt;
&lt;li&gt;&lt;strong&gt;Approximately how many different Amazon EC2 instance types are available?&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;There are over &lt;strong&gt;800 Amazon EC2 instance types&lt;/strong&gt; to choose from, allowing you to select the right compute, memory, storage, and networking capabilities to match the requirements of your specific workloads.&lt;/p&gt;

&lt;ol start="7"&gt;
&lt;li&gt;&lt;strong&gt;What are the default inbound and outbound rules when creating a new Security Group?&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;By default, a newly created Security Group denies all inbound traffic and allows all outbound traffic.&lt;/p&gt;
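
&lt;p&gt;You can confirm this from the CLI. This is a sketch; &lt;code&gt;sg-0123456789abcdef0&lt;/code&gt; is a placeholder for your own security group's ID.&lt;/p&gt;

```shell
# A brand-new security group shows an empty IpPermissions list (no inbound rules)
# and one IpPermissionsEgress entry allowing all traffic to 0.0.0.0/0
aws ec2 describe-security-groups \
  --group-ids sg-0123456789abcdef0 \
  --query 'SecurityGroups[0].{Inbound:IpPermissions,Outbound:IpPermissionsEgress}'
```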

</description>
      <category>iam</category>
      <category>ec2</category>
      <category>s3</category>
      <category>aws</category>
    </item>
    <item>
      <title>Establishing a Site-to-Site VPN Connection on AWS: A Real-Time Project</title>
      <dc:creator>SAFI-ULLAH SAFEER</dc:creator>
      <pubDate>Sat, 12 Oct 2024 10:19:29 +0000</pubDate>
      <link>https://dev.to/safi-ullah/establishing-a-site-to-site-vpn-connection-on-aws-a-real-time-project-4j0f</link>
      <guid>https://dev.to/safi-ullah/establishing-a-site-to-site-vpn-connection-on-aws-a-real-time-project-4j0f</guid>
      <description>&lt;p&gt;When you launch instances into an Amazon Virtual Private Cloud (VPC), they cannot, by default, communicate with your on-premises network. To enable secure communication between your on-premises network and AWS resources, you need to establish a Site-to-Site VPN connection. This article will guide you through the key concepts involved in setting up a Site-to-Site VPN connection, ensuring secure and reliable connectivity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Project Overview:&lt;/strong&gt;&lt;br&gt;
The goal is to create a secure communication channel between the AWS side in the Mumbai region and the customer end in Singapore through a site-to-site VPN. This connection enables seamless data transfer between an on-premises network and the AWS cloud. Below is a step-by-step guide for setting up this VPN connection using AWS services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is a VPN Connection?&lt;/strong&gt;&lt;br&gt;
A VPN connection creates a secure, encrypted communication channel between your on-premises equipment (such as servers or devices) and your AWS VPC. This connection enables your on-premises network and AWS to communicate securely over the internet, protecting sensitive data from exposure to public networks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Understanding VPN Tunnel&lt;/strong&gt;&lt;br&gt;
A VPN tunnel is an encrypted link through which data can pass from the customer network to or from AWS. Each VPN connection includes two VPN tunnels that can be used simultaneously for high availability, ensuring that even if one tunnel fails, the other continues to function.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is a Virtual Private Gateway?&lt;/strong&gt;&lt;br&gt;
A Virtual Private Gateway (VGW) is the AWS side of a VPN connection that acts as an entry point for traffic coming from an on-premises network via a Site-to-Site VPN. It's a critical component that enables communication between your VPC and your Customer Gateway (CGW), facilitating a secure connection.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Customer Gateway&lt;/strong&gt;&lt;br&gt;
The Customer Gateway is an AWS resource that provides information about your on-premises network, such as the public IP address, and facilitates the secure connection between the AWS and customer sides.&lt;/p&gt;

&lt;p&gt;Step 1: Setting up the Singapore VPC (Customer End)&lt;br&gt;
First, create a VPC in the Singapore region for the customer or on-premises end with &lt;strong&gt;CIDR 192.168.0.0/24&lt;/strong&gt;. Be sure to attach a public subnet to this VPC.&lt;/p&gt;
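
&lt;p&gt;For reference, the same VPC and public subnet could be sketched with the AWS CLI. The /25 subnet CIDR, the availability zone, and the placeholder VPC ID are assumptions for illustration; use the &lt;code&gt;VpcId&lt;/code&gt; returned by the first command.&lt;/p&gt;

```shell
# Customer-end VPC in Singapore (ap-southeast-1)
aws ec2 create-vpc --cidr-block 192.168.0.0/24 --region ap-southeast-1

# Public subnet inside it (substitute the VpcId returned above)
aws ec2 create-subnet \
  --vpc-id vpc-0123456789abcdef0 \
  --cidr-block 192.168.0.0/25 \
  --availability-zone ap-southeast-1a \
  --region ap-southeast-1
```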

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkpld41o8papxu0w35eqp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkpld41o8papxu0w35eqp.png" alt="Image description" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyfq9lu62r9a6pr869l4c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyfq9lu62r9a6pr869l4c.png" alt="Image description" width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;EC2 Instance Configuration&lt;br&gt;
Launch an EC2 instance in the Singapore region using the Amazon Linux 2 AMI, which is available under the free tier. Make sure the instance's security group allows these three protocols: &lt;strong&gt;ICMP, SSH, and TCP&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm799w7emoy20ehoyad5x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm799w7emoy20ehoyad5x.png" alt="Image description" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now create another VPC on the AWS side, in the Mumbai region, with CIDR &lt;strong&gt;172.16.100.0/24&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmm6x8w3mb8m9dqskpxln.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmm6x8w3mb8m9dqskpxln.png" alt="Image description" width="640" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8ks8n4giytcp7g614396.png" rel="noopener noreferrer"&gt;&lt;img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8ks8n4giytcp7g614396.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After setting up both VPCs, the next step is to create a Virtual Private Gateway in the Mumbai region (AWS side).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0u926ua87s2purl5q771.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0u926ua87s2purl5q771.png" alt="Image description" width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foj0gxpsr7p7fu3hszrbp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foj0gxpsr7p7fu3hszrbp.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After creating the Virtual Private Gateway in the Mumbai region, attach it to your VPC in the Mumbai region.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftvioaqb4ckcr1trr4pii.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftvioaqb4ckcr1trr4pii.png" alt="Image description" width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now select the VPC that you established in the Mumbai region and attach it to your Virtual Private Gateway.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqolew8i813gam4bf7zcw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqolew8i813gam4bf7zcw.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After that, create a Customer Gateway in the Mumbai region and provide the public IP of the EC2 instance in the Singapore region that we recently created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdn5jfsrfjn6ali6cgmw5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdn5jfsrfjn6ali6cgmw5.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Provide the public IP of the on-premises (customer-end) instance in the Singapore region.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsyhllcp9jl8jdecmw7l2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsyhllcp9jl8jdecmw7l2.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Copy it from here and provide it to your Customer Gateway on the AWS side, in the Mumbai region.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn6y7k7qrjilf90set23f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn6y7k7qrjilf90set23f.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Paste it here&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2l51o9ftv4lolkq7v8kd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2l51o9ftv4lolkq7v8kd.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, in the Mumbai region, create a Site-to-Site VPN connection.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2rrp69qy1k7t8hbf82cw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2rrp69qy1k7t8hbf82cw.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select the Virtual Private Gateway and the Customer Gateway that you created earlier.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffpmpnqzv1kd0nzij8gi4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffpmpnqzv1kd0nzij8gi4.png" alt="Image description" width="800" height="357"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After that, set Routing to &lt;strong&gt;Static&lt;/strong&gt; and add the Singapore VPC CIDR, &lt;strong&gt;192.168.0.0/24&lt;/strong&gt;, as the static IP prefix. Provide your customer-end CIDR (&lt;strong&gt;192.168.0.0/24&lt;/strong&gt;) as the local IPv4 network CIDR, and your AWS-side CIDR (&lt;strong&gt;172.16.100.0/24&lt;/strong&gt;) as the remote IPv4 network CIDR.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F75n9feah517k5gim3h1x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F75n9feah517k5gim3h1x.png" alt="Image description" width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;
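
&lt;p&gt;The equivalent CLI calls can be sketched like this. The gateway and connection IDs are placeholders; note how the local and remote CIDRs match the console fields described earlier.&lt;/p&gt;

```shell
# Create a static-routing Site-to-Site VPN connection in the Mumbai region
aws ec2 create-vpn-connection \
  --region ap-south-1 \
  --type ipsec.1 \
  --customer-gateway-id cgw-0123456789abcdef0 \
  --vpn-gateway-id vgw-0123456789abcdef0 \
  --options StaticRoutesOnly=true,LocalIpv4NetworkCidr=192.168.0.0/24,RemoteIpv4NetworkCidr=172.16.100.0/24

# Add the static route toward the customer network
aws ec2 create-vpn-connection-route \
  --region ap-south-1 \
  --vpn-connection-id vpn-0123456789abcdef0 \
  --destination-cidr-block 192.168.0.0/24
```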

&lt;h2&gt;
  After creating the Site-to-Site VPN connection, wait 2 to 3 minutes until the status becomes available, then click Download configuration and open the file in your notepad. This configuration will help you throughout this lab.
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi42abqib59ye3e3z0swa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi42abqib59ye3e3z0swa.png" alt="Image description" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Now the next step is to go to the route table in the Mumbai region and edit route propagation; make sure to enable it.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm1zfiw6g4jqlkgo4tl49.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm1zfiw6g4jqlkgo4tl49.png" alt="Image description" width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enable it by checking the box&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyxgyiv0nlxrg40ruo1ac.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyxgyiv0nlxrg40ruo1ac.png" alt="Image description" width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Copy the IP of the Singapore EC2 instance for SSH&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe10vcdl4rdoqhzabte6s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe10vcdl4rdoqhzabte6s.png" alt="Image description" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now open the terminal you want to SSH from. I am using &lt;strong&gt;MobaXterm&lt;/strong&gt;, which is friendly to use. Paste your public IP into Remote host, specify the username &lt;strong&gt;ec2-user&lt;/strong&gt;, check Use private key, provide your private key file, and click OK.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Floq8y1ha8l4iuzvw67ts.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Floq8y1ha8l4iuzvw67ts.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Now you have SSH access to your EC2 machine in the Singapore region&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8yzqda6tu1cwkxsra616.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8yzqda6tu1cwkxsra616.png" alt="Image description" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  At the end of the article I have attached all the commands needed to establish the site-to-site VPN.
&lt;/h2&gt;

&lt;p&gt;First, run sudo -i to log in as the root user and gain admin rights.&lt;br&gt;
Then install the VPN software with yum install libreswan -y (Libreswan is the actively maintained successor to Openswan).&lt;br&gt;
In my case I used the command &lt;strong&gt;yum install libreswan -y&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwb3fssffqzbqd0b8ey9o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwb3fssffqzbqd0b8ey9o.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fptcp3f1sdgtc4so3gy6x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fptcp3f1sdgtc4so3gy6x.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After the installation, &lt;strong&gt;open the IPsec configuration with the second command, vim /etc/ipsec.conf, make sure the line include /etc/ipsec.d/*.conf is uncommented, and save the file in vim:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyc0nb4olp8lwin12qgra.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyc0nb4olp8lwin12qgra.png" alt="Image description" width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Then type :wq and press Enter to save and quit.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdvh11c13akod1ftp33of.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdvh11c13akod1ftp33of.png" alt="Image description" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Back at the shell prompt, open the system control (kernel) configuration file with:&lt;br&gt;
vim /etc/sysctl.conf&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5lwcbxu3mt9a3o1d87u5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5lwcbxu3mt9a3o1d87u5.png" alt="Image description" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Paste these settings into the file:&lt;br&gt;
net.ipv4.ip_forward = 1 &lt;br&gt;
net.ipv4.conf.all.accept_redirects = 0 &lt;br&gt;
net.ipv4.conf.all.send_redirects = 0&lt;/strong&gt;&lt;/p&gt;
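&lt;p&gt;The same edit can be scripted. The sketch below appends the three settings to a temporary file so it is safe to run anywhere; on the actual VPN instance the target file is /etc/sysctl.conf, and you would then apply the change with sysctl -p as root:&lt;/p&gt;

```shell
# Append the IP forwarding / redirect settings (temporary file used here for safety;
# on the VPN instance, write to /etc/sysctl.conf and run `sysctl -p` afterwards).
conf=$(mktemp)
cat >> "$conf" <<'EOF'
net.ipv4.ip_forward = 1
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
EOF
grep -c '^net\.ipv4' "$conf"   # prints 3: all three settings were written
```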

&lt;p&gt;&lt;em&gt;Press i to enter insert mode and add the lines on the next line; then press Escape, type :wq, and press Enter.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6a3aslmdfjnwis7ij9fd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6a3aslmdfjnwis7ij9fd.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then restart the network service with:&lt;br&gt;
   &lt;strong&gt;service network restart&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Also open, in Notepad, the &lt;em&gt;configuration file you downloaded from the Mumbai region site-to-site VPN connection&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fszaqrn7c9yk4nuglpf63.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fszaqrn7c9yk4nuglpf63.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this configuration file you will see outside and inside IP addresses. &lt;em&gt;The outside IP addresses are the public IP addresses of your customer end and the AWS end.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdzc6wdw15251zeuhrc6z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdzc6wdw15251zeuhrc6z.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next step is to make some changes to the Tunnel 1 configuration so the tunnel can come up. Copy the entire Tunnel 1 configuration into a separate Notepad file, just like below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fug7f8gfkkn8orep6wxd7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fug7f8gfkkn8orep6wxd7.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You have to make four changes in this Notepad tunnel configuration file.&lt;br&gt;
First change, set leftid:&lt;br&gt;
leftid = the customer gateway outside IP address from your configuration file.&lt;br&gt;
Second change, set right:&lt;br&gt;
right = the virtual private gateway outside IP address from the configuration file.&lt;br&gt;
Third change, set leftsubnet = the Singapore region (customer or on-premises) subnet, in this case 192.168.0.0/28.&lt;br&gt;
Fourth change, set rightsubnet = the Mumbai region (AWS-side) VPC CIDR.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Make sure your Tunnel 1 configuration looks similar to this:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fly5z4brqxudnpjo3beab.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fly5z4brqxudnpjo3beab.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Paste it into the terminal, then type :wq and press Enter.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foefbs3fbr9kxd4y96pru.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foefbs3fbr9kxd4y96pru.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next command needs three things: the customer gateway outside IP address and the virtual private gateway outside IP address that we used earlier, plus one new item, the pre-shared key. All of these are present in the downloaded configuration file; just paste them in.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw9q6v84jmm3tqtplk3e6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw9q6v84jmm3tqtplk3e6.png" alt="Image description" width="800" height="194"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Copy the pre-shared key.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl66z83g1zvxlh6okprmp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl66z83g1zvxlh6okprmp.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now paste these three values into the secrets file opened with &lt;em&gt;vim /etc/ipsec.d/aws-vpn.secrets&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Few74qv4ky66x6urt5sce.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Few74qv4ky66x6urt5sce.png" alt="Image description" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After that, enter &lt;strong&gt;these three commands one by one to activate the VPN and bring the Tunnel 1 status up:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Commands to enable/start the ipsec service:&lt;br&gt;
       $ chkconfig ipsec on&lt;br&gt;
       $ service ipsec start&lt;br&gt;
       $ service ipsec status&lt;/li&gt;
&lt;/ol&gt;


&lt;p&gt;&lt;strong&gt;Finally, the site-to-site VPN connection is established, active, and running.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbhhn0hx8j3kw4hgmoc6y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbhhn0hx8j3kw4hgmoc6y.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://dev.tourl"&gt;&lt;/a&gt;&lt;br&gt;
You can also check your tunnel is active and running. As there are two tunnel you can also configure second tunnel. The main of both tunnel is to make it available on every time and provide not any down time  while connecting your customer end with aws side&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk4xqhrwkz4iig1regz22.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk4xqhrwkz4iig1regz22.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Here is the list of commands required to establish this site-to-site VPN connection.
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Commands for installation of Libreswan&lt;br&gt;
i. Change to the root user:&lt;br&gt;
            $ sudo su&lt;br&gt;
ii. Install libreswan:&lt;br&gt;
            $ yum install libreswan -y&lt;br&gt;
iii. In /etc/ipsec.conf, uncomment the following line if it is not already&lt;br&gt;
      uncommented:&lt;br&gt;
             include /etc/ipsec.d/*.conf&lt;br&gt;
iv. Update /etc/sysctl.conf to contain the following:&lt;br&gt;
net.ipv4.ip_forward = 1&lt;br&gt;
net.ipv4.conf.all.accept_redirects = 0&lt;br&gt;
net.ipv4.conf.all.send_redirects = 0&lt;br&gt;
v. Restart the network service:&lt;br&gt;
             $ service network restart&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Contents of /etc/ipsec.d/aws-vpn.conf&lt;br&gt;
conn Tunnel1&lt;br&gt;
    authby=secret&lt;br&gt;
    auto=start&lt;br&gt;
    left=%defaultroute&lt;br&gt;
    leftid=customer gateway outside (public) IP&lt;br&gt;
    right=AWS virtual private gateway outside (public) IP&lt;br&gt;
    type=tunnel&lt;br&gt;
    ikelifetime=8h&lt;br&gt;
    keylife=1h&lt;br&gt;
    phase2alg=aes128-sha1;modp1024&lt;br&gt;
    ike=aes128-sha1;modp1024&lt;br&gt;
    keyingtries=%forever&lt;br&gt;
    keyexchange=ike&lt;br&gt;
    leftsubnet=customer end VPC CIDR&lt;br&gt;
    rightsubnet=AWS end VPC CIDR&lt;br&gt;
    dpddelay=10&lt;br&gt;
    dpdtimeout=30&lt;br&gt;
    dpdaction=restart_by_peer&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Contents for  /etc/ipsec.d/aws-vpn.secrets&lt;br&gt;
customer_public_ip aws_vgw_public_ip: PSK "shared secret"&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Commands to enable/start ipsec service&lt;br&gt;
       $ chkconfig ipsec on&lt;br&gt;
       $ service ipsec start&lt;br&gt;
       $ service ipsec status&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
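&lt;p&gt;For clarity, the secrets line in item 3, with purely hypothetical placeholder values (take the real IP addresses and key from your downloaded configuration file), would look like:&lt;/p&gt;

```text
203.0.113.10 198.51.100.20: PSK "example-pre-shared-key"
```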

</description>
      <category>aws</category>
      <category>vpnsitetosite</category>
      <category>realtimeproject</category>
      <category>networking</category>
    </item>
    <item>
      <title>Creating a WordPress Server on Azure App Service</title>
      <dc:creator>SAFI-ULLAH SAFEER</dc:creator>
      <pubDate>Mon, 23 Sep 2024 05:25:20 +0000</pubDate>
      <link>https://dev.to/safi-ullah/creating-a-wordpress-server-on-azure-app-service-16d8</link>
      <guid>https://dev.to/safi-ullah/creating-a-wordpress-server-on-azure-app-service-16d8</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
Microsoft Azure provides a scalable platform for deploying web applications, including popular content management systems (CMS) like WordPress. With Azure App Service, you can easily host a WordPress site without worrying about managing infrastructure. This article will guide you through setting up a WordPress server on Azure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;br&gt;
Azure Account: If you don't have one, sign up for a free account at azure.microsoft.com.&lt;br&gt;
Basic Understanding of Web Hosting: Familiarity with WordPress and hosting concepts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Log in to the Azure Portal&lt;/strong&gt;&lt;br&gt;
Head over to portal.azure.com and log in with your credentials. Once logged in, you'll have access to the Azure dashboard.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Create a New Resource&lt;/strong&gt;&lt;br&gt;
In the Azure portal, click on Create a Resource from the left-hand sidebar. Type WordPress in the search bar and select the WordPress option under Web App.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjdzvfg0h6j1czaqs2m2f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjdzvfg0h6j1czaqs2m2f.png" alt="Image description" width="640" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Configure WordPress App Service&lt;/strong&gt;&lt;br&gt;
You'll be prompted to configure your new WordPress web app:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Subscription:&lt;/strong&gt; Select your Azure subscription.&lt;br&gt;
&lt;strong&gt;Resource Group:&lt;/strong&gt; Create a new resource group or use an existing one. A resource group is a container that holds related resources for an Azure solution.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;App Name:&lt;/strong&gt; Choose a unique name for your WordPress application (this will be the domain name for your app, such as yourapp.azurewebsites.net)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwcx1g2gauha2xkff0rdy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwcx1g2gauha2xkff0rdy.png" alt="Image description" width="800" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Set Up Database&lt;/strong&gt;&lt;br&gt;
WordPress requires a MySQL database for its backend. Azure provides the Azure Database for MySQL service, which is automatically suggested when creating a WordPress site. You will need to:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Database Provider:&lt;/strong&gt; Select MySQL in-app or Azure Database for MySQL.&lt;br&gt;
&lt;strong&gt;Database Name:&lt;/strong&gt; Azure generates one for you, but you can customize it.&lt;br&gt;
For production workloads, it's recommended to use Azure Database for MySQL, as it offers better performance and reliability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Choose a Hosting Plan&lt;/strong&gt;&lt;br&gt;
Azure App Service Plan determines the pricing tier and features for your WordPress site:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyelk9s053klbz5a64q1f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyelk9s053klbz5a64q1f.png" alt="Image description" width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pricing Tier&lt;/strong&gt;: Select the plan that fits your needs. For testing or low-traffic websites, the free or shared plans (like B1) work well. For larger websites, consider a Standard or Premium plan, which offers scaling options and better performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6: Deploy WordPress&lt;/strong&gt;&lt;br&gt;
After selecting your database and pricing plan, click Review + Create. Azure will validate the configuration, and once it passes, click Create to deploy your WordPress application. This may take a few minutes.&lt;/p&gt;
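&lt;p&gt;For readers who prefer scripting, roughly the same resources can be created with the Azure CLI. This is only a sketch, not the exact portal flow: every resource name here is hypothetical, the runtime string varies by CLI version, and the portal's WordPress offering also provisions pieces (such as the MySQL database) that these generic commands do not:&lt;/p&gt;

```shell
# Hypothetical resource names throughout; requires `az login` first.
az group create --name wp-demo-rg --location eastus
az appservice plan create --name wp-demo-plan --resource-group wp-demo-rg \
  --is-linux --sku B1
az webapp create --name my-unique-wp-site --resource-group wp-demo-rg \
  --plan wp-demo-plan --runtime "PHP:8.2"
```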

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkddyq33y0y8kjlshf6f1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkddyq33y0y8kjlshf6f1.png" alt="Image description" width="800" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configure WordPress&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you open the site URL, the WordPress installation screen appears. You'll need to configure the basic settings:&lt;/p&gt;

&lt;p&gt;Language: Select your preferred language.&lt;br&gt;
Database Name: Azure should already pre-configure the database details.&lt;br&gt;
Admin Username and Password: Choose your WordPress admin credentials.&lt;br&gt;
Site Title: Give your site a title (you can change this later).&lt;br&gt;
Click Install WordPress to complete the installation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwze2exszfynkwquo5r41.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwze2exszfynkwquo5r41.png" alt="Image description" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdd03wx3lg0bfhsgy3150.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdd03wx3lg0bfhsgy3150.png" alt="Image description" width="640" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9wzu9z15b1iue1h4rgya.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9wzu9z15b1iue1h4rgya.png" alt="Image description" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0kiqyxu3ph0es0kgl1na.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0kiqyxu3ph0es0kgl1na.png" alt="Image description" width="800" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyy5a7jx1tpan0tllx1ci.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyy5a7jx1tpan0tllx1ci.png" alt="Image description" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Access Your WordPress Site&lt;/strong&gt;&lt;br&gt;
Once deployment is complete, navigate to the App Service resource you just created. You can access your WordPress site by clicking the URL in the App Service overview.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Filooec0z8ytswarhxvv3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Filooec0z8ytswarhxvv3.png" alt="Image description" width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Paste your domain URL into the browser:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;eastus2-01.azurewebsites.net&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiw6oaawmx03rqrwwnbh1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiw6oaawmx03rqrwwnbh1.png" alt="Image description" width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fez5l2ymefgt29v5c37t7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fez5l2ymefgt29v5c37t7.png" alt="Image description" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqq9mlbv3pd884dj9g536.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqq9mlbv3pd884dj9g536.png" alt="Image description" width="640" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is your WordPress website home page:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7410ci8qexpdl94sdoop.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7410ci8qexpdl94sdoop.png" alt="Image description" width="800" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can make changes to it from the Pages option.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F38aywpr8piu2pscdqo7f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F38aywpr8piu2pscdqo7f.png" alt="Image description" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also make changes on the main page&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff7ofqu02q17scch548e4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff7ofqu02q17scch548e4.png" alt="Image description" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Like this one, I have added my bio to this page&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw1ogfr87b36nzq6shrm2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw1ogfr87b36nzq6shrm2.png" alt="Image description" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, you have created a WordPress server using Azure's default domain and customized it; you can upload pictures or make any other changes you want.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>wordpress</category>
      <category>azureappservice</category>
    </item>
    <item>
      <title>I have prepared comprehensive notes for the Microsoft Azure AZ-900 exam using various resources and updated Azure documentation</title>
      <dc:creator>SAFI-ULLAH SAFEER</dc:creator>
      <pubDate>Sun, 22 Sep 2024 18:26:59 +0000</pubDate>
      <link>https://dev.to/safi-ullah/i-have-prepared-comprehensive-notes-for-the-microsoft-azure-az-900-exam-using-various-resources-and-updated-azure-documentation-4opm</link>
      <guid>https://dev.to/safi-ullah/i-have-prepared-comprehensive-notes-for-the-microsoft-azure-az-900-exam-using-various-resources-and-updated-azure-documentation-4opm</guid>
      <description>&lt;p&gt;&lt;strong&gt;Cloud Computing&lt;/strong&gt;&lt;br&gt;
Cloud computing is the delivery of computing services over the Internet. These services include essential IT infrastructure such as virtual machines, storage, databases, and networking. Moreover, cloud computing enhances traditional IT services by incorporating technologies like the Internet of Things (IoT), machine learning (ML), and artificial intelligence (AI).&lt;br&gt;
One of the main advantages of cloud computing is its scalability, which allows users to rapidly increase their IT infrastructure without the constraints of physical data centers​.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud Service Providers
&lt;/h2&gt;

&lt;p&gt;Some popular cloud service providers are:&lt;br&gt;
AWS (Amazon Web Services), launched in 2006&lt;br&gt;
Microsoft Azure, launched in February 2010&lt;br&gt;
GCP (Google Cloud Platform)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Azure&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Azure is Microsoft’s cloud computing platform, with an ever-expanding set of services to help you build solutions that meet your business goals.&lt;br&gt;
Azure supports infrastructure, platform, and software as a service computing, with services such as virtual machines running in the cloud, website and database hosting, and advanced services like artificial intelligence, machine learning, and IoT.&lt;br&gt;
Most Azure services are pay-as-you-go: you pay only for the computing resources you use.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Shared Responsibility Model&lt;/strong&gt;&lt;br&gt;
In traditional corporate data centers, organizations are responsible for managing physical infrastructure, security, and maintaining server operations. With the shared responsibility model in the cloud, responsibilities are divided between the cloud provider and the consumer.&lt;br&gt;
The cloud provider is responsible for physical infrastructure (such as data centers, power, cooling, and network connectivity), while consumers are responsible for their data, access management, and security settings. This model ensures that cloud services are secure and reliable​.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud Service Models
&lt;/h2&gt;

&lt;p&gt;There are several cloud service models, each offering different levels of control, flexibility, and management:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Infrastructure as a Service (IaaS):&lt;/strong&gt; Provides virtualized computing resources over the internet, allowing businesses to manage their applications and data but leaving infrastructure maintenance to the cloud provider.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Platform as a Service (PaaS):&lt;/strong&gt; Enables users to build, manage, and deploy applications without worrying about the underlying infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Software as a Service (SaaS):&lt;/strong&gt; Delivers software applications over the internet, where the cloud provider manages both the application and infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpc9l6vq6g5wrlzhkb3aj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpc9l6vq6g5wrlzhkb3aj.png" alt="Image description" width="800" height="459"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3cjw505mf093gkdandj2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3cjw505mf093gkdandj2.png" alt="Image description" width="800" height="574"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsa7ohjcm77yy4nnwgn1d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsa7ohjcm77yy4nnwgn1d.png" alt="Image description" width="800" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud Deployment Models
&lt;/h2&gt;

&lt;p&gt;Azure supports several cloud deployment models, allowing businesses to choose the best setup for their needs:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Public Cloud:&lt;/strong&gt; Resources are owned and operated by a third-party cloud provider like Azure.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F43sj6bctsb06ueo7d3lr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F43sj6bctsb06ueo7d3lr.png" alt="Image description" width="800" height="510"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0x634atrytv9ecwpa7pf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0x634atrytv9ecwpa7pf.png" alt="Image description" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Private Cloud:&lt;/strong&gt; Resources are used exclusively by a single organization and can be hosted on-premises or by a third-party provider.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fup42zllcfhpqr609iudk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fup42zllcfhpqr609iudk.png" alt="Image description" width="800" height="523"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxs937lz8ambbmtw2x0dl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxs937lz8ambbmtw2x0dl.png" alt="Image description" width="800" height="443"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hybrid Cloud:&lt;/strong&gt; Combines public and private clouds, allowing for data and applications to be shared between them​.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnd2wypwo71kvvnhmcqwz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnd2wypwo71kvvnhmcqwz.png" alt="Image description" width="800" height="503"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4pjze95qrc2zp397sfu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4pjze95qrc2zp397sfu.png" alt="Image description" width="800" height="443"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flukutudkd4j3w6zc1hwb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flukutudkd4j3w6zc1hwb.png" alt="Image description" width="800" height="104"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fleibuzx1k23jk59amugl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fleibuzx1k23jk59amugl.png" alt="Image description" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Azure Physical Infrastructure
&lt;/h2&gt;

&lt;p&gt;As a global cloud provider, Azure has data centers around the world. However, these individual data centers aren’t directly accessible. Datacenters are grouped into Azure Regions or Azure Availability Zones that are designed to help you achieve resiliency and reliability for your business-critical workloads.&lt;/p&gt;

&lt;p&gt;A region is a geographical area on the planet that contains at least one, but potentially multiple datacenters that are nearby and networked together with a low-latency network. Azure intelligently assigns and controls the resources within each region to ensure workloads are appropriately balanced.&lt;br&gt;
When you deploy a resource in Azure, you'll often need to choose the region where you want your resource deployed.&lt;br&gt;
&lt;strong&gt;Availability Zones:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Availability zones are physically separate data centers within an Azure region. Each availability zone is made up of one or more data centers equipped with independent power, cooling, and networking. An availability zone is set up to be an isolation boundary: if one zone goes down, the others continue working. Availability zones are connected through high-speed, private fiber-optic networks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo7o7kfn53jcaa06v0rmg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo7o7kfn53jcaa06v0rmg.png" alt="Image description" width="636" height="628"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Region pairs&lt;/strong&gt;&lt;br&gt;
Most Azure regions are paired with another region within the same geography (such as the US, Europe, or Asia) at least 300 miles away. This approach allows for the replication of resources across a geography that helps reduce the likelihood of interruptions because of events such as natural disasters, civil unrest, power outages, or physical network outages that affect an entire region. For example, if a region in a pair was affected by a natural disaster, services would automatically failover to the other region in its region pair.&lt;/p&gt;
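&lt;p&gt;The failover behavior can be illustrated with a small Python sketch. The pairings below are a tiny subset of Azure’s published region pairs, and the health-check input is invented for the example; real failover is handled by the platform, not by application code like this.&lt;/p&gt;

```python
# Minimal sketch of region-pair failover logic.
# REGION_PAIRS is a small illustrative subset of Azure's published
# region pairs, not an exhaustive or authoritative list.
REGION_PAIRS = {
    "eastus": "westus",
    "westus": "eastus",
    "northeurope": "westeurope",
    "westeurope": "northeurope",
}

def failover_target(region: str, healthy: set[str]) -> str:
    """Return the region to serve from, preferring the primary."""
    if region in healthy:
        return region                     # primary is fine, stay put
    paired = REGION_PAIRS.get(region)
    if paired and paired in healthy:
        return paired                     # fail over to the paired region
    raise RuntimeError(f"No healthy region available for {region}")

# Example: East US goes down, traffic shifts to its pair.
print(failover_target("eastus", healthy={"westus", "westeurope"}))  # westus
```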

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnlog7ch1ihfw0zypbw9v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnlog7ch1ihfw0zypbw9v.png" alt="Image description" width="800" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8s4gfli1ct69ocnnsmog.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8s4gfli1ct69ocnnsmog.png" alt="Image description" width="800" height="67"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Content Delivery Network&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To serve frequently accessed data closer to users in different countries, Azure uses edge servers.&lt;br&gt;
An edge server caches frequently accessed resources and delivers them to nearby customers, reducing latency.&lt;/p&gt;
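&lt;p&gt;The caching idea behind an edge server can be sketched in a few lines of Python. This is a toy in-memory model to show the hit/miss behavior only, not how Azure CDN is actually implemented; real CDNs add TTLs, cache invalidation, and many geographically distributed edge locations.&lt;/p&gt;

```python
# Toy sketch of CDN edge caching: serve from the edge cache when
# possible, otherwise fetch from the origin and cache the result.
# The origin content here is invented example data.
origin = {"/logo.png": b"logo-bytes", "/app.js": b"js-bytes"}
edge_cache: dict[str, bytes] = {}

def serve(path: str) -> tuple[bytes, str]:
    if path in edge_cache:
        return edge_cache[path], "HIT"      # fast: answered at the edge
    content = origin[path]                  # slow: round trip to the origin
    edge_cache[path] = content              # cache for the next request
    return content, "MISS"

print(serve("/logo.png")[1])  # MISS (first request goes to the origin)
print(serve("/logo.png")[1])  # HIT  (now served from the edge cache)
```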

&lt;h2&gt;
  
  
  Describe the consumption-based model:
&lt;/h2&gt;

&lt;p&gt;When comparing IT infrastructure models, there are two types of expenses to consider. &lt;strong&gt;Capital expenditure (CapEx) and operational expenditure (OpEx).&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;CapEx is typically a one-time, up-front expenditure to purchase or secure tangible resources. A new building, repaving the parking lot, building a data center, or buying a company vehicle are examples of CapEx.&lt;/p&gt;

&lt;p&gt;In contrast, OpEx is spending money on services or products over time. Renting a convention center, leasing a company vehicle, or signing up for cloud services are all examples of OpEx.&lt;/p&gt;

&lt;p&gt;Cloud computing falls under OpEx because cloud computing operates on a consumption-based model. With cloud computing, you don’t pay for the physical infrastructure, the electricity, the security, or anything else associated with maintaining a data center. Instead, you pay for the IT resources you use. If you don’t use any IT resources this month, you don’t pay for any IT resources.&lt;br&gt;
This consumption-based model has many benefits, including:&lt;br&gt;
No upfront costs.&lt;br&gt;
No need to purchase and manage costly infrastructure that users might not use to its fullest potential.&lt;br&gt;
The ability to pay for more resources when they're needed.&lt;br&gt;
The ability to stop paying for resources that are no longer needed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Describe the benefits of using cloud services
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;High availability&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you’re deploying an application, a service, or any IT resources, the resources must be available when needed. High availability focuses on ensuring maximum availability, regardless of disruptions or events that may occur.&lt;br&gt;
When you’re architecting your solution, you’ll need to account for service availability guarantees. Azure is a highly available cloud environment with uptime guarantees depending on the service. These guarantees are part of the service-level agreements (SLAs).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Scalability refers to the ability to adjust resources to meet demand. One benefit of scalability is that you aren't overpaying for services: because the cloud is a consumption-based model, you only pay for what you use. If demand drops off, you can reduce your resources and thereby reduce your costs.&lt;br&gt;
Scaling generally comes in two varieties: vertical and horizontal. Vertical scaling focuses on increasing or decreasing the capabilities of a resource. Horizontal scaling adds or subtracts the number of resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vertical scaling&lt;/strong&gt;&lt;br&gt;
With vertical scaling, if you were developing an app and you needed more processing power, you could vertically scale up to add more CPUs or RAM to the virtual machine. Conversely, if you realized you had over-specified the needs, you could vertically scale down by lowering the CPU or RAM specifications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Horizontal scaling&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With horizontal scaling, if you suddenly experience a steep jump in demand, your deployed resources can be scaled out, either automatically or manually. For example, you could add more virtual machines or containers. In the same manner, if there is a significant drop in demand, deployed resources can be scaled in, either automatically or manually.&lt;/p&gt;
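&lt;p&gt;A threshold-based scale-out/scale-in rule, of the kind autoscalers commonly apply, can be sketched in a few lines of Python. The thresholds and instance counts here are arbitrary examples, not Azure defaults.&lt;/p&gt;

```python
# Toy horizontal-autoscaling rule: scale out when average load is high,
# scale in when it is low. Thresholds are arbitrary illustration values.
def desired_instances(current: int, avg_cpu: float,
                      scale_out_at: float = 0.75,
                      scale_in_at: float = 0.25,
                      min_instances: int = 1) -> int:
    if avg_cpu > scale_out_at:
        return current + 1                 # scale out: add an instance
    if scale_in_at > avg_cpu and current > min_instances:
        return current - 1                 # scale in: remove an instance
    return current                         # load is moderate: hold steady

print(desired_instances(2, avg_cpu=0.90))  # 3 (steep jump in demand)
print(desired_instances(3, avg_cpu=0.10))  # 2 (demand dropped off)
print(desired_instances(2, avg_cpu=0.50))  # 2 (no change needed)
```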

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5pg0q4tbyujyxatz5t7c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5pg0q4tbyujyxatz5t7c.png" alt="Image description" width="800" height="560"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reliability or Fault Tolerance&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Reliability is the ability of a system to recover from failures and continue to function. It's also one of the pillars of the Microsoft Azure Well-Architected Framework.&lt;/p&gt;

&lt;p&gt;The cloud, by its decentralized design, naturally supports a reliable and resilient infrastructure. With a decentralized design, the cloud enables you to have resources deployed in regions around the world. &lt;/p&gt;

&lt;p&gt;With this global scale, even if one region has a catastrophic event other regions are still up and running. You can design your applications to automatically take advantage of this increased reliability.&lt;/p&gt;

&lt;p&gt;In some cases, your cloud environment itself will automatically shift to a different region for you, with no action needed on your part. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftkhgqcnqczjsjry5no78.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftkhgqcnqczjsjry5no78.png" alt="Image description" width="800" height="668"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disaster Recovery:&lt;/strong&gt;&lt;br&gt;
Azure provides built-in disaster recovery solutions by enabling the replication of data across regions, ensuring minimal downtime during disruptions​.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdsohy9ztre6icqdcbizs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdsohy9ztre6icqdcbizs.png" alt="Image description" width="800" height="675"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Popular Microsoft Azure certifications:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0vclp7brh3md206vcqlr.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0vclp7brh3md206vcqlr.jpg" alt="Image description" width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You can explore more certifications here:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Azure Training + Certification Guide (microsoft.com)&lt;br&gt;
&lt;a href="https://aka.ms/AzureTrainCertDeck?WT.mc_id=Azure_BoM-wwl" rel="noopener noreferrer"&gt;https://aka.ms/AzureTrainCertDeck?WT.mc_id=Azure_BoM-wwl&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary:
&lt;/h2&gt;

&lt;p&gt;Learning objectives&lt;br&gt;
You should now be able to:&lt;br&gt;
Define cloud computing.&lt;br&gt;
Describe the shared responsibility model.&lt;br&gt;
Define cloud models, including public, private, and hybrid.&lt;br&gt;
Identify appropriate use cases for each cloud model.&lt;br&gt;
Describe the consumption-based model.&lt;br&gt;
Compare cloud pricing models.&lt;br&gt;
Describe Azure global infrastructure.&lt;br&gt;
Describe different Azure certifications and roles.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>microsoft</category>
      <category>az900</category>
      <category>azurefundamentals</category>
    </item>
    <item>
      <title>Visualize Data using Amazon Quick Sight</title>
      <dc:creator>SAFI-ULLAH SAFEER</dc:creator>
      <pubDate>Sun, 15 Sep 2024 06:37:06 +0000</pubDate>
      <link>https://dev.to/safi-ullah/visualize-data-using-amazon-quick-sight-355c</link>
      <guid>https://dev.to/safi-ullah/visualize-data-using-amazon-quick-sight-355c</guid>
      <description>&lt;p&gt;In today's world of big data, it's crucial to use clear and effective visualizations to make smart decisions. Amazon Quick Sight, a tool from Amazon Web Services (AWS), helps turn complex data into easy-to-understand visual reports. This project explores how to use Amazon Quick Sight to analyze a large dataset of Amazon best sellers, demonstrating how AWS services can handle and make sense of large amounts of data.&lt;/p&gt;

&lt;p&gt;The dataset, comprised of 50,000 records detailing Amazon's top-selling products, was initially sourced from Bright Data. This rich collection of information offers a snapshot of consumer preferences and market trends. To facilitate analysis, the dataset was first stored in an Amazon S3 bucket, a secure and scalable storage solution provided by AWS. Amazon Quick Sight was then employed to create interactive dashboards and visualizations, enabling a deeper understanding of the data.&lt;br&gt;
Step-by-Step Procedure for Visualizing Data with Amazon Quick Sight:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Download and Prepare the Dataset:&lt;/strong&gt;&lt;br&gt;
Obtain the CSV dataset file from Bright Data, containing 50,000 Amazon best seller records.&lt;br&gt;
Here is the link to the Bright Data documentation:&lt;br&gt;
&lt;a href="https://docs.brightdata.com/introduction" rel="noopener noreferrer"&gt;https://docs.brightdata.com/introduction&lt;/a&gt;&lt;br&gt;
Ensure the dataset is clean and formatted correctly for analysis.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Upload the Dataset to Amazon S3:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Log in to the AWS Management Console.&lt;br&gt;
Navigate to the Amazon S3 service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmc5bgt5kn2t53pk877t0.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmc5bgt5kn2t53pk877t0.jpg" alt="Image description" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create a new S3 bucket or select an existing one.&lt;br&gt;
Upload the CSV file to the S3 bucket, ensuring proper access permissions are set.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbb58wd2l2kr4zx49hwkl.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbb58wd2l2kr4zx49hwkl.jpg" alt="Image description" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here, both files have been uploaded to our bucket, named visualizedata-using-amazon-quicksightproject.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8bvdnvvt018i2fo926tw.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8bvdnvvt018i2fo926tw.jpg" alt="Image description" width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now the next step is to copy the bucket name and paste it into manifest.json.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F205u5hpw20jcgz40x4s4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F205u5hpw20jcgz40x4s4.jpg" alt="Image description" width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpks2sn3sxwp1vco8a7uz.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpks2sn3sxwp1vco8a7uz.jpg" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
The following update will appear in the manifest.json file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8p87z7t7ufjz8als6fx2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8p87z7t7ufjz8als6fx2.jpg" alt="Image description" width="800" height="324"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, after setting up S3, the next step is:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Set Up Amazon QuickSight:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Access Amazon QuickSight from the AWS Management Console.&lt;br&gt;
Sign up for or log in to your QuickSight account.&lt;br&gt;
Configure the necessary permissions and settings for your QuickSight environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F43h6zwlwd0x6z8oygqno.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F43h6zwlwd0x6z8oygqno.jpg" alt="Image description" width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, to sign up, provide your email address and also select the S3 bucket to allow access.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuj1ubz2k4n73364hkrmh.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuj1ubz2k4n73364hkrmh.jpg" alt="Image description" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now select the bucket you want QuickSight to be able to access.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ojksqht083pvywwuhd8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ojksqht083pvywwuhd8.jpg" alt="Image description" width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After creating the account, the following screen will appear.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frbod84lxgnheoxt8fftn.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frbod84lxgnheoxt8fftn.jpg" alt="Image description" width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5mr0zhlipoj5d3mpgo91.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5mr0zhlipoj5d3mpgo91.jpg" alt="Image description" width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Now the next step is:&lt;br&gt;
Create a New Data Source in QuickSight:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In QuickSight, go to the “Datasets” section and choose to create a new dataset.&lt;br&gt;
Select “S3” as the data source.&lt;br&gt;
Provide the S3 bucket path where your CSV file is stored.&lt;br&gt;
Configure the dataset options, such as file format and delimiter, if required.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe1jv70zxk7s5ninebqii.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe1jv70zxk7s5ninebqii.jpg" alt="Image description" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now the next step is to provide a data source name and the URL of the manifest.json file you uploaded to the bucket.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu4id9vn5nd55hhfh1k9g.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu4id9vn5nd55hhfh1k9g.jpg" alt="Image description" width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now click on Visualize to finish creating the dataset.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe409yedugdynj8yum0fg.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe409yedugdynj8yum0fg.jpg" alt="Image description" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now select the interactive sheet format and click on Create.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb6g7gy4q6fele4o7nzjn.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb6g7gy4q6fele4o7nzjn.jpg" alt="Image description" width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you are ready to create different visualizations, as in this case.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fypl0l8yyq36hd511iasa.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fypl0l8yyq36hd511iasa.jpg" alt="Image description" width="800" height="505"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The first visualization in this case is a word cloud of Amazon’s best-selling product brands
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftwpwbijh0xjb6ki8nw8b.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftwpwbijh0xjb6ki8nw8b.jpg" alt="Image description" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can also compare the prices of the different best-selling brands.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwxl7ol6x4it9ztcgf5gh.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwxl7ol6x4it9ztcgf5gh.jpg" alt="Image description" width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, we can also compare brands by availability.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftz8bmu6iyut8rwfdn5ro.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftz8bmu6iyut8rwfdn5ro.jpg" alt="Image description" width="800" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After completing the project, make sure you terminate your QuickSight account. The following screen will appear after termination.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnb5or5cxluj9xn6ye1qb.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnb5or5cxluj9xn6ye1qb.jpg" alt="Image description" width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>visulaizedata</category>
      <category>quicksight</category>
      <category>s3</category>
    </item>
    <item>
      <title>AWS Global Infrastructure: The Backbone of Modern Cloud Computing</title>
      <dc:creator>SAFI-ULLAH SAFEER</dc:creator>
      <pubDate>Sun, 16 Jun 2024 04:33:10 +0000</pubDate>
      <link>https://dev.to/safi-ullah/aws-global-infrastructure-the-backbone-of-modern-cloud-computing-1gd4</link>
      <guid>https://dev.to/safi-ullah/aws-global-infrastructure-the-backbone-of-modern-cloud-computing-1gd4</guid>
      <description>&lt;p&gt;Key Components of AWS Global Infrastructure&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;AWS Regions&lt;br&gt;
AWS divides its global operations into geographical regions. Each region is a separate geographic area, and every region consists of multiple, isolated locations known as Availability Zones (AZs). As of 2024, AWS has 31 regions worldwide, with several more announced or under development.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Availability Zones&lt;br&gt;
An Availability Zone is one or more discrete data centers with redundant power, networking, and connectivity housed in separate facilities. Each region contains multiple AZs, allowing customers to design resilient and fault-tolerant applications. By deploying applications across multiple AZs, businesses can achieve high availability and disaster recovery.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Edge Locations&lt;br&gt;
Edge locations are part of AWS’s content delivery network (CDN) known as Amazon CloudFront. They cache copies of your data closer to users, reducing latency and improving performance for content delivery. AWS has over 400 edge locations globally, ensuring fast content delivery to users regardless of their location.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Local Zones&lt;br&gt;
AWS Local Zones are extensions of AWS regions that place compute, storage, database, and other select AWS services closer to large population and industry centers. This reduces latency and improves performance for applications that require single-digit millisecond latencies. Local Zones are particularly beneficial for real-time gaming, live video streaming, and machine learning.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Wavelength Zones&lt;br&gt;
AWS Wavelength Zones embed AWS compute and storage services within telecommunications providers’ data centers at the edge of the 5G network. This allows developers to build applications that require ultra-low latency, such as IoT devices, machine learning inference at the edge, and augmented reality.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Benefits of AWS Global Infrastructure&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;High Availability and Fault Tolerance&lt;br&gt;
AWS's infrastructure is designed for high availability. By using multiple AZs within a region, businesses can ensure their applications remain available even if one AZ fails. Regions are also isolated from one another, providing an additional layer of fault tolerance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Global Reach&lt;br&gt;
With regions and edge locations spread across the globe, AWS provides businesses with a global footprint. This extensive reach enables companies to serve their customers with low latency and high performance, no matter where they are located.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scalability and Flexibility&lt;br&gt;
AWS infrastructure allows businesses to scale their applications seamlessly. Whether you need to scale up for a global event or down during off-peak times, AWS provides the flexibility to adjust your resources according to your needs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Security and Compliance&lt;br&gt;
AWS places a strong emphasis on security. Each AWS region and AZ is built to the highest security standards, with multiple layers of physical and network security. AWS also complies with numerous global regulatory standards and certifications, making it a trusted platform for industries with stringent compliance requirements.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Performance Optimization&lt;br&gt;
The global infrastructure is optimized for performance. By strategically placing data centers and edge locations, AWS minimizes latency and maximizes throughput. Services like AWS Direct Connect provide dedicated network connections to AWS, further enhancing performance for critical applications.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Innovations and Continuous Expansion&lt;br&gt;
AWS continuously innovates and expands its infrastructure to meet the growing demands of its customers. Recent developments include new regions in strategic locations, additional edge locations to improve content delivery, and specialized infrastructure such as Local Zones and Wavelength Zones to cater to emerging technological needs.&lt;/p&gt;

&lt;p&gt;New Regions and AZs&lt;br&gt;
AWS frequently announces new regions and AZs to expand its global presence. These additions provide more options for data residency and disaster recovery planning, allowing customers to deploy their applications closer to their user base.&lt;/p&gt;

&lt;p&gt;Green Energy Initiatives&lt;br&gt;
AWS is committed to sustainability and aims to power its global infrastructure with 100% renewable energy by 2025. AWS has already made significant investments in solar and wind projects around the world, reducing the carbon footprint of its operations.&lt;/p&gt;

&lt;p&gt;Conclusion&lt;br&gt;
The AWS global infrastructure is a cornerstone of its cloud services, providing the foundation for high availability, scalability, and security. By leveraging a vast network of regions, availability zones, edge locations, local zones, and wavelength zones, AWS ensures that businesses can deliver high-performance applications to users worldwide. As AWS continues to innovate and expand, its global infrastructure will remain a critical asset for organizations looking to harness the power of cloud computing.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>infrastructure</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Understanding AWS Identity and Access Management (IAM)</title>
      <dc:creator>SAFI-ULLAH SAFEER</dc:creator>
      <pubDate>Sun, 16 Jun 2024 04:17:40 +0000</pubDate>
      <link>https://dev.to/safi-ullah/understanding-aws-identity-and-access-management-iam-4aab</link>
      <guid>https://dev.to/safi-ullah/understanding-aws-identity-and-access-management-iam-4aab</guid>
      <description>&lt;p&gt;This article explores the key features, benefits, and best practices of AWS IAM, illustrating how it can help organizations manage their AWS environments securely and efficiently.&lt;/p&gt;

&lt;p&gt;What is AWS IAM?&lt;br&gt;
AWS Identity and Access Management (IAM) is a web service that enables you to manage access to AWS services and resources securely. With IAM, you can create and manage AWS users and groups and use permissions to allow or deny their access to AWS resources. IAM helps you manage identities (users, groups, roles, and policies) and provides fine-grained access control.&lt;/p&gt;

&lt;p&gt;Key Features of AWS IAM&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;User Management&lt;br&gt;
IAM allows you to create individual user accounts for people within your organization. Each user gets unique security credentials, which they can use to interact with AWS resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Groups&lt;br&gt;
You can create groups in IAM and add users to these groups. This allows you to assign permissions to a group rather than to each individual user, simplifying permission management.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Roles&lt;br&gt;
IAM roles provide a way to delegate access with temporary credentials. Roles can be assumed by users, applications, or services that need to perform actions on AWS resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Policies&lt;br&gt;
Policies are JSON documents that define permissions. They specify what actions are allowed or denied for which resources. You attach policies to users, groups, or roles to define their permissions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Multi-Factor Authentication (MFA)&lt;br&gt;
MFA adds an extra layer of security by requiring users to provide a second form of authentication in addition to their password.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Federated Access&lt;br&gt;
IAM supports federated access, allowing users to access AWS resources using existing corporate credentials or through identity providers.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
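&lt;p&gt;To make point 4 concrete, here is a minimal sketch of what an IAM policy document looks like, built as a plain Python dictionary. The bucket name &lt;code&gt;example-bucket&lt;/code&gt; is a placeholder for illustration, not something from this article:&lt;/p&gt;

```python
import json

# A hypothetical least-privilege policy: read-only access to one S3 bucket.
# "example-bucket" is a placeholder bucket name.
read_only_s3_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}

print(json.dumps(read_only_s3_policy, indent=2))
```

&lt;p&gt;A document like this would be attached to a user, group, or role to grant exactly these permissions and nothing more.&lt;/p&gt;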

&lt;p&gt;Benefits of AWS IAM&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Enhanced Security&lt;br&gt;
IAM helps ensure that the right people have the appropriate access to your resources, reducing the risk of unauthorized access and potential security breaches.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Granular Control&lt;br&gt;
With IAM policies, you can specify detailed permissions, providing precise control over who can access what resources and what actions they can perform.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scalability&lt;br&gt;
IAM scales with your AWS environment, allowing you to manage access for a growing number of users and resources without sacrificing control or security.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Centralized Management&lt;br&gt;
IAM provides a centralized way to manage user access across all AWS services, simplifying administrative tasks and enhancing oversight.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Compliance and Auditing&lt;br&gt;
IAM helps meet compliance requirements by providing detailed logs and auditing capabilities, ensuring that access is monitored and can be reviewed.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Best Practices for Using AWS IAM&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Least Privilege Principle&lt;br&gt;
Always follow the principle of least privilege, granting users only the permissions they need to perform their tasks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use Groups for Permissions&lt;br&gt;
Assign permissions to groups rather than individual users to simplify management and ensure consistency.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enable MFA&lt;br&gt;
Enable MFA for all users, especially for users with privileged access, to add an extra layer of security.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Regularly Review IAM Policies&lt;br&gt;
Regularly review and update IAM policies to ensure they reflect the current needs and security posture of your organization.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use Roles for Applications and Services&lt;br&gt;
Use IAM roles instead of storing credentials in applications. This enhances security by leveraging temporary credentials.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Monitor and Audit IAM Activity&lt;br&gt;
Utilize AWS CloudTrail to monitor and log IAM activity. Regularly review these logs to detect and respond to suspicious activities.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
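&lt;p&gt;As one way to enforce the MFA best practice above, AWS documents the &lt;code&gt;aws:MultiFactorAuthPresent&lt;/code&gt; condition key. The sketch below is an illustrative deny-unless-MFA policy built as a Python dictionary; attaching it to a group (rather than individual users) follows best practice 2 as well:&lt;/p&gt;

```python
import json

# Illustrative sketch: deny all actions when the request was not made
# with MFA, using the documented aws:MultiFactorAuthPresent condition key.
require_mfa_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAllWithoutMFA",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
            },
        }
    ],
}

print(json.dumps(require_mfa_policy, indent=2))
```

&lt;p&gt;Because an explicit deny always overrides allows, this blocks non-MFA sessions regardless of what other policies grant.&lt;/p&gt;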

&lt;p&gt;Conclusion&lt;br&gt;
AWS Identity and Access Management (IAM) is a critical service for managing access to your AWS resources securely and efficiently. By leveraging IAM's robust features and following best practices, organizations can ensure that their cloud environment is secure and that access to resources is appropriately controlled. IAM's ability to provide fine-grained permissions, support for multi-factor authentication, and integration with existing identity systems makes it an essential tool for any organization using AWS.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>iam</category>
      <category>cloud</category>
      <category>awscloud</category>
    </item>
    <item>
      <title>Unlocking Opportunities: The Microsoft Learn Student Ambassadors Program 2024</title>
      <dc:creator>SAFI-ULLAH SAFEER</dc:creator>
      <pubDate>Sat, 15 Jun 2024 07:58:49 +0000</pubDate>
      <link>https://dev.to/safi-ullah/unlocking-opportunities-the-microsoft-learn-student-ambassadors-program-2024-2cnm</link>
      <guid>https://dev.to/safi-ullah/unlocking-opportunities-the-microsoft-learn-student-ambassadors-program-2024-2cnm</guid>
      <description>&lt;p&gt;In the ever-evolving world of technology, staying ahead of the curve is paramount for aspiring tech professionals. The Microsoft Learn Student Ambassadors (MLSA) program has been a beacon of opportunity for students worldwide, offering a unique blend of learning, leadership, and networking. As we step into 2024, the MLSA program continues to empower students with an even more robust platform to grow, innovate, and lead.&lt;/p&gt;

&lt;p&gt;What is the MLSA Program?&lt;br&gt;
The MLSA program is a prestigious initiative by Microsoft designed to cultivate a global community of students passionate about technology. It aims to provide these students with the tools, resources, and support they need to develop their skills and contribute meaningfully to the tech community. By joining the program, students become part of an elite network of like-minded individuals, gaining access to exclusive events, mentorship opportunities, and cutting-edge learning resources.&lt;/p&gt;

&lt;p&gt;Why Join the MLSA Program?&lt;/p&gt;

&lt;p&gt;Comprehensive Learning Resources:&lt;br&gt;
The MLSA program offers unparalleled access to Microsoft’s extensive suite of learning tools. Participants can dive into a myriad of topics ranging from cloud computing with Azure to the intricacies of AI and machine learning. With resources like Microsoft Learn, students can tailor their learning paths to suit their career goals and interests.&lt;/p&gt;

&lt;p&gt;Leadership Development:&lt;br&gt;
Becoming an MLSA is not just about acquiring technical knowledge; it’s also about honing leadership skills. Ambassadors are encouraged to lead local tech communities, organize events, and share their knowledge through workshops and webinars. This hands-on experience in community building and public speaking is invaluable for personal and professional growth.&lt;/p&gt;

&lt;p&gt;Networking Opportunities:&lt;br&gt;
The program offers a unique platform to connect with industry professionals, Microsoft experts, and fellow students from around the globe. These connections can lead to collaborations on projects, internships, and even job opportunities. The global network of MLSA alumni is a testament to the program's ability to foster meaningful professional relationships.&lt;/p&gt;

&lt;p&gt;My recent MLSA Event Recording&lt;br&gt;
Recording Url:  &lt;a href="https://youtu.be/clMZ7Ip0gUU?si=XzdxmOh-HxnOWnXx" rel="noopener noreferrer"&gt;https://youtu.be/clMZ7Ip0gUU?si=XzdxmOh-HxnOWnXx&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Exclusive Events and Challenges:&lt;br&gt;
MLSA participants gain access to exclusive events such as Microsoft Build, Ignite, and various hackathons. These events provide firsthand insights into the latest technological advancements and trends. Additionally, challenges and competitions within the program offer a chance to test skills and earn recognition.&lt;/p&gt;

&lt;p&gt;Success Stories from the MLSA Community&lt;br&gt;
Many MLSA alumni have gone on to achieve remarkable success in their careers. For instance, Jane Doe, an MLSA from the class of 2020, leveraged her experience to secure a position as a Software Engineer at a leading tech firm. Her journey highlights how the program’s resources and networking opportunities can pave the way for significant career advancements.&lt;/p&gt;

&lt;p&gt;How to Apply for the 2024 Cohort&lt;br&gt;
Applying to the MLSA program is a straightforward process designed to identify passionate and driven students. Here’s a step-by-step guide:&lt;/p&gt;

&lt;p&gt;Eligibility Check:&lt;br&gt;
Ensure you are enrolled in an accredited academic institution and are at least 16 years old.&lt;/p&gt;

&lt;p&gt;Application Form:&lt;br&gt;
Fill out the online application form available on the Microsoft Learn Student Ambassadors website. Be prepared to share your academic background, technical skills, and reasons for wanting to join the program.&lt;/p&gt;

&lt;p&gt;Video Submission:&lt;br&gt;
Create a short video (1-2 minutes) explaining why you would make a great Student Ambassador. Highlight your passion for technology and any relevant experiences or projects.&lt;br&gt;
My MLSA application video&lt;br&gt;
Video Url:   &lt;a href="https://youtu.be/fKepaLvFZgg?si=SKabg2YZgeNMTsbc" rel="noopener noreferrer"&gt;https://youtu.be/fKepaLvFZgg?si=SKabg2YZgeNMTsbc&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Submit and Await Results:&lt;br&gt;
Submit your application and wait for the review process. Successful candidates will be notified and inducted into the program.&lt;/p&gt;

</description>
      <category>microsoft</category>
      <category>ambassadors</category>
      <category>mlsa</category>
      <category>microsoftambassadors</category>
    </item>
    <item>
      <title>A Comprehensive Guide to Amazon Simple Storage Service AWS S3</title>
      <dc:creator>SAFI-ULLAH SAFEER</dc:creator>
      <pubDate>Thu, 22 Feb 2024 21:14:42 +0000</pubDate>
      <link>https://dev.to/safi-ullah/a-comprehensive-guide-to-amazon-simple-storage-service-aws-s3-4kj4</link>
      <guid>https://dev.to/safi-ullah/a-comprehensive-guide-to-amazon-simple-storage-service-aws-s3-4kj4</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Data is the key to success in today's digital environment for companies of all sizes, and managing enormous volumes of it safely and efficiently is critical. This is where AWS S3 comes in. Because of its unmatched scalability, reliability, and performance, AWS S3 (Amazon Simple Storage Service) is the preferred option for data storage and protection across a wide range of applications and industries.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding the Basics of AWS S3
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What is AWS S3?&lt;/strong&gt;&lt;br&gt;
Amazon Web Services (AWS) offers object storage through its S3 service, short for Amazon Simple Storage Service. It provides industry-leading performance, security, data availability, and scalability.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Features of AWS S3
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Scalability:&lt;/strong&gt; With no upfront costs or setup required, AWS S3 can grow to handle any volume of data, from gigabytes to petabytes and beyond. &lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Data Availability:&lt;/strong&gt; High availability and durability are ensured by AWS S3 by replicating data across several availability zones within a region. &lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Security:&lt;/strong&gt; To protect data from unwanted access and breaches, AWS S3 offers strong security features including encryption, access control methods, and compliance certifications. &lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Performance:&lt;/strong&gt; Low-latency performance from Amazon S3 makes it possible to access data quickly and reliably from any location in the world.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Storage Classes Offered by AWS S3
&lt;/h2&gt;

&lt;p&gt;AWS S3 provides a range of storage classes designed for various use cases and access patterns. Let's explore some of the key storage classes:&lt;/p&gt;

&lt;h2&gt;
  
  
  1.   S3 Intelligent-Tiering
&lt;/h2&gt;

&lt;p&gt;Based on usage patterns, S3 Intelligent-Tiering automatically moves data between frequent-access and infrequent-access tiers to optimize storage costs.&lt;/p&gt;

&lt;h2&gt;
  
  
  2.   S3 Standard
&lt;/h2&gt;

&lt;p&gt;S3 Standard is ideal for frequently accessed data that requires low-latency access times and high throughput.&lt;/p&gt;

&lt;h2&gt;
  
  
  3.   S3 Express One Zone
&lt;/h2&gt;

&lt;p&gt;Applications that need low latency and high availability for data access inside a single AWS availability zone can use S3 Express One Zone.&lt;/p&gt;

&lt;h2&gt;
  
  
  4.    S3 Standard-Infrequent Access (S3 Standard-IA)
&lt;/h2&gt;

&lt;p&gt;S3 Standard-IA is suitable for data that is accessed less frequently but requires immediate access when needed.&lt;/p&gt;

&lt;h2&gt;
  
  
  5.   S3 One Zone-Infrequent Access (S3 One Zone-IA)
&lt;/h2&gt;

&lt;p&gt;S3 One Zone-IA provides inexpensive storage for rarely accessed data, much like S3 regular-IA. However, because it only stores data in one availability zone, it is less durable than the regular IA class.&lt;/p&gt;

&lt;h2&gt;
  
  
  6.   S3 Glacier Instant Retrieval
&lt;/h2&gt;

&lt;p&gt;S3 Glacier Instant Retrieval is designed for archive data that requires immediate access. It offers fast retrieval times for archived data.&lt;/p&gt;

&lt;h2&gt;
  
  
  7.   S3 Glacier Flexible Retrieval
&lt;/h2&gt;

&lt;p&gt;This storage type, formerly known as S3 Glacier, is appropriate for long-term data that is infrequently accessed and does not need to be accessible right away.&lt;/p&gt;

&lt;h2&gt;
  
  
  8.   Amazon S3 Glacier Deep Archive
&lt;/h2&gt;

&lt;p&gt;For long-term archiving and digital preservation, Amazon S3 Glacier Deep Archive is the most cost-effective storage solution, with retrieval times measured in hours. You can find more details about S3 storage classes in the official AWS documentation: &lt;a href="https://aws.amazon.com/s3/storage-classes/" rel="noopener noreferrer"&gt;https://aws.amazon.com/s3/storage-classes/&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Optimizing Data Management with AWS S3
&lt;/h2&gt;

&lt;p&gt;In order to optimize costs and fully realize the advantages of AWS S3, efficient data management is essential. Here are some best practices:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Organize Data Efficiently&lt;/strong&gt;&lt;br&gt;
Properly organizing data within AWS S3 buckets and folders ensures easy accessibility and maintenance. Implement a logical structure that reflects your organization's data hierarchy and access requirements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Configure Access Controls&lt;/strong&gt;&lt;br&gt;
To manage user access and permissions efficiently, make use of AWS Identity and Access Management (IAM). Establish precise access controls to limit access to private information and prevent unauthorized activity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Implement Lifecycle Policies&lt;/strong&gt;&lt;br&gt;
Use AWS S3 lifecycle rules to automate data management activities like archiving and removing old data and switching between storage classes. This helps optimize storage costs and compliance.&lt;/p&gt;
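&lt;p&gt;As a sketch of what such a lifecycle rule looks like, here is an illustrative configuration built as a Python dictionary, in the shape that boto3's &lt;code&gt;put_bucket_lifecycle_configuration&lt;/code&gt; expects. The prefix and day thresholds are assumptions for illustration, not recommendations from this article:&lt;/p&gt;

```python
# Illustrative S3 lifecycle configuration: transition objects under the
# "logs/" prefix to cheaper storage classes over time, then expire them.
# The prefix and day counts are placeholder assumptions.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-then-expire",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},
        }
    ]
}

# With AWS credentials configured, you would apply it roughly like this:
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="example-bucket",
#     LifecycleConfiguration=lifecycle_configuration,
# )
```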

&lt;p&gt;&lt;strong&gt;4. Monitor and Analyze Storage Usage&lt;/strong&gt;&lt;br&gt;
Regularly monitor AWS S3 usage metrics and analyze storage patterns to identify opportunities for optimization. Use AWS Cost Explorer to visualize storage costs and identify areas for cost savings.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Secure Data with Encryption&lt;/strong&gt;&lt;br&gt;
To safeguard data stored in AWS S3, enable encryption both in transit and at rest. For increased security and regulatory compliance, use server-side encryption using AWS Key Management Service (KMS).&lt;/p&gt;
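&lt;p&gt;As an illustration, these are the kinds of parameters an SSE-KMS upload takes. The bucket, object key, and KMS key alias below are placeholders; with credentials configured you would pass them to boto3's &lt;code&gt;put_object&lt;/code&gt;:&lt;/p&gt;

```python
# Illustrative request parameters for an SSE-KMS upload. The bucket name,
# object key, and KMS key alias are placeholder assumptions.
put_object_params = {
    "Bucket": "example-bucket",
    "Key": "reports/2024/summary.csv",
    "Body": b"example,data\n",
    "ServerSideEncryption": "aws:kms",
    "SSEKMSKeyId": "alias/example-key",
}

# With credentials configured:
# import boto3
# boto3.client("s3").put_object(**put_object_params)
```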

&lt;h2&gt;
  
  
  Frequently Asked Questions (FAQs)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;How does AWS S3 ensure the availability and durability of data?&lt;/strong&gt;&lt;br&gt;
By duplicating data across multiple geographically separated data centers within a region, AWS S3 ensures redundancy and fault tolerance while achieving excellent durability and availability.&lt;/p&gt;

&lt;p&gt;*&lt;em&gt;Can I limit who has access to the data I save on Amazon S3? *&lt;/em&gt;&lt;br&gt;
Yes, you may create granular permissions and access restrictions with AWS S3's comprehensive access control methods, which include bucket rules, access control lists (ACLs), and IAM roles.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What are the cost implications of using AWS S3?&lt;/strong&gt;&lt;br&gt;
AWS S3 uses pay-as-you-go pricing, meaning you only pay for the storage and data transfer you actually use. Costs depend on storage capacity, storage class, and the amount of data transferred.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What distinguishes AWS S3 Glacier Deep Archive from other storage classes?&lt;/strong&gt;&lt;br&gt;
Amazon S3 Glacier Deep Archive provides the least expensive storage for long-term data retention and digital preservation. Retrieval times for archived data are longer than for other storage classes, but the cost savings are substantial.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can I migrate my existing data to Amazon S3?&lt;/strong&gt;&lt;br&gt;
Yes. AWS offers a number of services and tools, such as AWS Snowball, AWS DataSync, and AWS Transfer Family, to make it easier to move data from on-premises storage systems and other cloud platforms to AWS S3.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can I host static websites on Amazon S3?&lt;/strong&gt;&lt;br&gt;
Yes, static websites can be hosted effectively and affordably on AWS S3. S3 buckets can be configured for website hosting, and you can take advantage of capabilities like website redirection.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
To sum up, Amazon S3 is a flexible and strong object storage solution that enables businesses to efficiently handle, store, and safeguard their data. Businesses looking for dependable and secure storage solutions choose Amazon S3 because of its extensive feature set, scalable design, and affordable price. Organizations may promote innovation in the digital age, cut costs, and improve data management efficiency by utilizing AWS S3 to its fullest potential and adhering to best practices.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
