<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mubin</title>
    <description>The latest articles on DEV Community by Mubin (@mubinkhalife).</description>
    <link>https://dev.to/mubinkhalife</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1092531%2F3785f300-b042-4b86-a328-60c2973fdf5a.jpg</url>
      <title>DEV Community: Mubin</title>
      <link>https://dev.to/mubinkhalife</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mubinkhalife"/>
    <language>en</language>
    <item>
      <title>MERN CI/CD Project Using AWS Cloud Native Tools</title>
      <dc:creator>Mubin</dc:creator>
      <pubDate>Sun, 25 Jun 2023 03:34:24 +0000</pubDate>
      <link>https://dev.to/mubinkhalife/mern-cicd-project-using-aws-cloud-native-tools-l7f</link>
      <guid>https://dev.to/mubinkhalife/mern-cicd-project-using-aws-cloud-native-tools-l7f</guid>
<description>&lt;p&gt;Turn &lt;strong&gt;any&lt;/strong&gt; MERN stack application into a CI/CD project using the cloud-native tools provided by AWS!&lt;/p&gt;

&lt;p&gt;This article will walk you through doing just that.&lt;/p&gt;

&lt;p&gt;We will need to modify the MERN app source code to fit the CI/CD flow.&lt;/p&gt;

&lt;p&gt;The flow is straightforward: the developer pushes code to GitHub, which triggers AWS CodePipeline. The pipeline pulls the source code and generates a build using AWS CodeBuild, the build artifacts are stored as versions in an S3 bucket, and the application is deployed by AWS CodeDeploy.&lt;/p&gt;

&lt;p&gt;Let’s get started!&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: AWS Role creation
&lt;/h2&gt;

&lt;p&gt;AWS CodeDeploy and the EC2 instance will need permission to access the build artifacts in S3 storage,&lt;br&gt;
so we’ll begin by creating roles for them.&lt;br&gt;
Go to Roles inside the IAM dashboard on the AWS console.&lt;br&gt;
Click the "Create role" button. Select “AWS service” as the entity type and the “EC2” option under common use cases.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8pl9RoP7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h8q4a5llu5ucrbv71rtx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8pl9RoP7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h8q4a5llu5ucrbv71rtx.png" alt="Role Creation for EC2" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the Permissions policies step, search for "s3readonly", select "AmazonS3ReadOnlyAccess" from the results, and click the Next button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uW-6BL5i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2ATnedVGadM1AI4i9O6jbLqw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uW-6BL5i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2ATnedVGadM1AI4i9O6jbLqw.png" alt="S3 storage permission for EC2" width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Give this role a name (like "EC2S3ReadPermission") in the text box and click the "Create role" button at the bottom of the screen.&lt;br&gt;
Go back to the create role page and select "AWS service" as before, scroll down to the bottom of the page, and select "CodeDeploy" from the dropdown under use cases for other AWS services.&lt;br&gt;
After selecting this option, select the first radio option with the text "CodeDeploy".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QCS1wCXV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2AC_bEE3kbxry0y8LejnKLFg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QCS1wCXV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2AC_bEE3kbxry0y8LejnKLFg.png" alt="Role creation for AWS CodeDeploy" width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you click next, the "AWSCodeDeployRole" policy will be attached. Click next, and on the final page give the role a name (like "CodeDeployPermission") and click "Create role".&lt;/p&gt;
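
&lt;p&gt;If you prefer the command line, the same two roles can be created with the AWS CLI. This is only a sketch using the role names chosen above; it assumes your CLI is configured with credentials that are allowed to create IAM roles:&lt;/p&gt;

```shell
# EC2 role with read-only access to S3
aws iam create-role --role-name EC2S3ReadPermission \
  --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
aws iam attach-role-policy --role-name EC2S3ReadPermission \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess

# CodeDeploy role with the managed AWSCodeDeployRole policy
aws iam create-role --role-name CodeDeployPermission \
  --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"codedeploy.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
aws iam attach-role-policy --role-name CodeDeployPermission \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole
```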
&lt;h2&gt;
  
  
  Step 2: Launch the EC2 instance
&lt;/h2&gt;

&lt;p&gt;Now, we will launch an EC2 instance where we will run Docker.&lt;br&gt;
On the EC2 launch instance page, give it any name ("mern-devops") and select the Amazon Linux 2 AMI image.&lt;br&gt;
In the Network Settings, allow SSH and HTTP traffic from anywhere.&lt;br&gt;
If you already have a key pair, select it; otherwise create a new one.&lt;br&gt;
Ensure "Auto-assign public IP" is enabled in Network Settings.&lt;br&gt;
Scroll down and open the "Advanced Details" section.&lt;br&gt;
For the "IAM instance profile" field, open the dropdown and select the role created for the EC2 instance (EC2S3ReadPermission) as shown in the image below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oJe8W4tx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2AtXIc5HsbYIjW-2BTWkbxtA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oJe8W4tx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2AtXIc5HsbYIjW-2BTWkbxtA.png" alt="EC2 launch" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the instance is created and ready, ssh into it:&lt;br&gt;
&lt;code&gt;ssh -i &amp;lt;login-key-file&amp;gt; ec2-user@&amp;lt;public-ip-of-ec2-instance&amp;gt;&lt;/code&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 3: Configuration &amp;amp; Installation
&lt;/h2&gt;

&lt;p&gt;Inside the terminal connected to the EC2 instance via SSH, run the update command: &lt;code&gt;sudo yum update -y&lt;/code&gt;&lt;br&gt;
Then add a user (give any name and password):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo su
useradd mubin
passwd mubin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now make this user a sudoer:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;echo "mubin ALL=(ALL) NOPASSWD: ALL" &amp;gt;&amp;gt; /etc/sudoers&lt;/code&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Note: The above setting is not suitable for a production environment.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
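
&lt;p&gt;A slightly safer sketch of the same idea is to put the rule in its own drop-in file under /etc/sudoers.d/ and let visudo syntax-check it (the user name "mubin" is just the one chosen above):&lt;/p&gt;

```shell
# as root: write a drop-in file instead of appending to /etc/sudoers directly
echo "mubin ALL=(ALL) NOPASSWD: ALL" &gt; /etc/sudoers.d/mubin

# validate the drop-in; a syntax error here is caught before it can break sudo
visudo -cf /etc/sudoers.d/mubin
```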

&lt;p&gt;Now switch to this user and install Docker:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;su - mubin

sudo yum install docker -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Start the service: &lt;code&gt;sudo service docker start&lt;/code&gt;&lt;br&gt;
Switch to the root user and add the user to the docker group:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo su

usermod -aG docker mubin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Switch back to the normal user: &lt;code&gt;su - mubin&lt;/code&gt;&lt;br&gt;
Check the Docker installation by running &lt;code&gt;docker ps&lt;/code&gt;&lt;br&gt;
Next, install the Docker Compose tool and verify its installation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#download the binary
sudo curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

#make binary executable
sudo chmod +x /usr/local/bin/docker-compose

#check installation
docker-compose --version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we need to set up the CodeDeploy agent:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum install ruby
sudo yum install wget
wget https://aws-codedeploy-us-east-1.s3.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check the status of the service: &lt;code&gt;sudo service codedeploy-agent status&lt;/code&gt;&lt;br&gt;
Let's check whether this EC2 instance has access to S3 storage. Run &lt;code&gt;aws s3 ls&lt;/code&gt;.&lt;br&gt;
If we don't get any error in the console output, our instance has access.&lt;br&gt;
Go to the AWS console and create a bucket. Give it a name like "mern-artifact-bucket", enable Bucket Versioning, and click the "Create bucket" button. You can verify bucket creation by going back to the console connected to EC2 via SSH and running &lt;code&gt;aws s3 ls&lt;/code&gt; again; you should see the name of the bucket appear.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 4: Prep existing MERN app source code
&lt;/h2&gt;

&lt;p&gt;Now we need the source code of a MERN app, from GitHub or any other platform.&lt;br&gt;
You can search for one or use your own existing repo.&lt;br&gt;
I am going to follow an architecture with a "&lt;code&gt;client&lt;/code&gt;" folder for the front-end and a "&lt;code&gt;server&lt;/code&gt;" folder for the back-end. You can also take front-end source code from one repo and place it inside the "client" directory, and back-end source code from another repo and place it inside the "server" directory. &lt;strong&gt;Please ensure that there is no .git folder in either the "&lt;code&gt;client&lt;/code&gt;" or the "&lt;code&gt;server&lt;/code&gt;" directory.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I am going to use my own MERN stack app, a StackOverflow clone. The repo for this project can be found &lt;a href="https://github.com/khalifemubin/stackoverflow-clone"&gt;here&lt;/a&gt;. I'll remove the .git folder from the root of the repo, since I'll be modifying it and publishing it as a new repo for our CI/CD project.&lt;/p&gt;

&lt;p&gt;Once you have the source code in your local system, open it up in your favorite code editor.&lt;/p&gt;

&lt;p&gt;First, create a folder named "nginx" in the root of the repo, with an nginx.conf file inside it containing the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#vi nginx.conf
upstream nodeapp{
     server node_app:3001;
}

server {
  listen 80;
  root /usr/share/nginx/html;
  index index.html;

  location / {
   try_files $uri /index.html =404;
  }

  location /test {
 proxy_pass http://nodeapp/test;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
  }

  location /user/ {
        proxy_pass http://nodeapp/user/;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /questions/ {
        proxy_pass http://nodeapp/questions/;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /answer/ {
        proxy_pass http://nodeapp/answer/;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /static/css {
        alias /usr/share/nginx/html/static/css;
    }

    location /static/js {
        alias /usr/share/nginx/html/static/js;
    }

    location = /favicon.ico {
        alias /usr/share/nginx/html/favicon.ico;
    }

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As seen in the file above, we are configuring Nginx as a reverse proxy, with an API endpoint defined in each location directive &lt;strong&gt;based on the routes defined in the back-end of the application source code&lt;/strong&gt;. The upstream "nodeapp" points at "node_app", the service name of our back-end container in docker-compose.&lt;/p&gt;

&lt;p&gt;Then create a Dockerfile inside the server directory with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#vi server/Dockerfile
FROM node:latest

WORKDIR /app

COPY . .

RUN npm install

EXPOSE 3001
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a .dockerignore file with the content &lt;code&gt;node_modules&lt;/code&gt;, both in the project root and inside the server directory.&lt;/p&gt;

&lt;p&gt;Next, create a Dockerfile in the root of the repo with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#vi Dockerfile
FROM nginx:latest

WORKDIR /usr/share/nginx/html

COPY . .

RUN rm /etc/nginx/conf.d/default.conf

COPY ./nginx.conf /etc/nginx/conf.d

ENTRYPOINT [ "nginx" , "-g" , "daemon off;" ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this file, we are setting up a working directory for the client side, copying all files from the build into it, replacing the default nginx configuration with our custom configuration file, and running nginx in the foreground (so that the container keeps running instead of exiting).&lt;/p&gt;

&lt;p&gt;Copy this Dockerfile inside the nginx folder as well.&lt;/p&gt;

&lt;p&gt;Then create a "docker-compose.yml" file with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#vi docker-compose.yml
version: "3.3"
services:
  nginx_app:
    container_name: nginx_app
    build:
      context: ./nginx
      dockerfile: Dockerfile
    ports:
      - 80:80
    restart: always

  node_app:
    container_name: node_app
    build:
      context: ./server
      dockerfile: Dockerfile
    command: npm start
    restart: always
    expose:
      - 3001
    ports:
      - 3001:3001
    links:
      - mongo_db

  mongo_db:
    container_name: mongo_db
    image: mongo
    volumes:
      - mongo_volume:/data/db
    expose:
      - 27017
    ports:
      - 27017:27017

volumes:
  mongo_volume:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this file, we are defining a service, and a container for it, for each of Nginx, the Node server, and MongoDB.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Important: Please use the version specified as is, or you will encounter errors during deployment.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We are using persistent storage, as seen in the volumes section of the mongo_db service.&lt;/p&gt;

&lt;p&gt;Next, create a scripts folder in the root of the repo and put two bash files inside it (&lt;code&gt;start-containers.sh&lt;/code&gt; and &lt;code&gt;stop-containers.sh&lt;/code&gt;) with the following respective content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#start-containers.sh

#!/bin/sh
cd /home/mubin/devopspipeline

docker-compose build
docker-compose up -d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#stop-containers.sh

#!/bin/sh

cd /home/mubin/devopspipeline
sudo cp -r build/* nginx

if ! docker info &amp;gt; /dev/null 2&amp;gt;&amp;amp;1; then
    service docker start
fi

docker-compose down
echo $?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the &lt;code&gt;start-containers.sh&lt;/code&gt; script we navigate to the "devopspipeline" directory inside our user's home directory and run &lt;code&gt;docker-compose build&lt;/code&gt;, which reads our &lt;code&gt;docker-compose.yml&lt;/code&gt;, finds all the services, and runs a docker build for each one. Finally, &lt;code&gt;docker-compose up -d&lt;/code&gt; creates and starts the containers in the background.&lt;/p&gt;

&lt;p&gt;In &lt;code&gt;stop-containers.sh&lt;/code&gt;, we copy all the files from the build into the nginx directory and shut down the containers. The &lt;code&gt;docker info&lt;/code&gt; check starts the Docker service if it is not already running; this matters when you stop your instance for some reason and resume it later. On a fresh deployment there are no containers to stop, so the last command, &lt;code&gt;echo $?&lt;/code&gt;, will output a non-zero value.&lt;/p&gt;
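
&lt;p&gt;The special variable &lt;code&gt;$?&lt;/code&gt; holds the exit status of the most recent command, which is what that final &lt;code&gt;echo&lt;/code&gt; prints. A quick local sketch of the behavior:&lt;/p&gt;

```shell
# a command that succeeds exits with status 0
true
echo $?          # prints 0

# capture a failing command's status without stopping the script
false || status=$?
echo $status     # prints 1
```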

&lt;p&gt;Now open your .env file, or wherever the Mongoose connection is made, and change the Mongo URL to&lt;br&gt;
&lt;code&gt;mongodb://mongo_db:27017/stack-overflow-clone&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Instead of using localhost or "127.0.0.1", we use "mongo_db", since that is the service name we specified in &lt;code&gt;docker-compose.yml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;In the package.json file located inside the client directory, add the proxy entry pointing at the back-end container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"proxy": "http://node_app:3001",
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Also, on the client side, replace any localhost references to the server/back-end (in .env or wherever they occur) with the public IP of the EC2 instance.&lt;br&gt;
To wind up our MERN source code we need two more files: &lt;code&gt;appspec.yml&lt;/code&gt; and &lt;code&gt;buildspec.yml&lt;/code&gt;.&lt;br&gt;
The &lt;code&gt;appspec.yml&lt;/code&gt; will be used by AWS CodeDeploy to manage the deployment, while &lt;code&gt;buildspec.yml&lt;/code&gt; contains a collection of build commands and related settings that will be used by AWS CodeBuild.&lt;br&gt;
So in the root path of the repo, create these two files with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#vi appspec.yml
version: 0.0
os: linux
files:
  - source: /
    destination: /home/mubin/devopspipeline
permissions:
  - object: scripts/
    mode: 777
    type:
      - directory
hooks:
  AfterInstall:
    - location: scripts/stop-containers.sh
      timeout: 300
      runas: root
  ApplicationStart:
    - location: scripts/start-containers.sh
      timeout: 300
      runas: mubin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#vi buildspec.yml
version: 0.2
phases:
  install:
    commands:
      - echo "the installation phase begins"
  pre_build:
    commands:
      - echo "the prebuild phase begins"
      - cd client
      - npm install
  build:
    commands:
      - echo "the build phase begins"
      # - echo `pwd`
      - npm run build
      # - echo `ls -la`

  post_build:
    commands:
      - echo "the post build phase. navigating back to root path"
      - cp -R build/ ../

artifacts:
  files:
    - build/**/*
    - appspec.yml
    - server/**/*
    - nginx/*
    - scripts/*
    - docker-compose.yml
    - Dockerfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the &lt;code&gt;appspec.yml&lt;/code&gt; file, we define our deployment configuration: the operating system, a files section that recursively copies everything from the revision root &lt;code&gt;/&lt;/code&gt; to the "devopspipeline" directory inside our user's home directory (created during deployment if it doesn't exist), execute permissions for all files in the scripts directory, and the scripts to run in the respective lifecycle hooks.&lt;br&gt;
The "AfterInstall" hook runs after CodeDeploy has copied the files onto the instance, and the "ApplicationStart" hook then builds and starts the containers.&lt;/p&gt;

&lt;p&gt;In the &lt;code&gt;buildspec.yml&lt;/code&gt; file, we specify commands for the different phases (mainly to build the front-end/client side) and finally define our list of artifacts.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Important: Please ensure that the version mentioned in appspec.yml and in buildspec.yml is the same as written in the snippets above. Otherwise you will encounter errors.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: &lt;code&gt;**/*&lt;/code&gt; matches all files recursively.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
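
&lt;p&gt;The buildspec's &lt;code&gt;build/**/*&lt;/code&gt; pattern matches every file at every depth under the build directory. A small local sketch of the file set it covers, which is the same set &lt;code&gt;find&lt;/code&gt; reports:&lt;/p&gt;

```shell
# build a tiny directory tree like the one CodeBuild produces
mkdir -p /tmp/globdemo/build/static/css
touch /tmp/globdemo/build/index.html
touch /tmp/globdemo/build/static/css/main.css

# list every file at any depth under build/ -- the set that build/**/* matches
find /tmp/globdemo/build -type f | sort
```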

&lt;p&gt;That's it! With the above structure in place, push this project to a new repo on GitHub.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 5: AWS Tools setup and manual flow
&lt;/h2&gt;

&lt;p&gt;Go to the CodeBuild page on the AWS console and click the "Create build project" button.&lt;br&gt;
Give it a name like "MyMernBuild".&lt;br&gt;
Scroll down to the Source section, select "GitHub" as the source provider, and select the "Repository in my GitHub account" option in the Repository field. Click "Connect to GitHub". In the pop-up window, provide your GitHub credentials and confirm access.&lt;br&gt;
You should see a message like "You are connected to GitHub using OAuth".&lt;/p&gt;

&lt;p&gt;Enter the URL of your repo. Enter "main" as the source version. Scroll down to the Environment section and select "Amazon Linux 2" as the operating system. Select "Standard" as the runtime.&lt;br&gt;
Select the "aws/codebuild/amazonlinux2-arch-x86_64-standard:4.0" image. Select "Linux" as the environment type.&lt;br&gt;
For the "Service role" field, select the "New service role" option and give it a name ("MyMernBuild") in the Role ARN field.&lt;/p&gt;

&lt;p&gt;Scroll to the Buildspec section and make sure the "Use a buildspec file" option is selected.&lt;br&gt;
Since we named our file "buildspec.yml", we can leave the Buildspec name field blank. Had the file name been different, we would have had to enter it in this field.&lt;/p&gt;

&lt;p&gt;Scroll down to the Artifacts section and select the Amazon S3 option in the Type field.&lt;br&gt;
Enter the name of the bucket ("mern-artifact-bucket") that we created earlier.&lt;br&gt;
Select the "Zip" option for the Artifacts packaging field.&lt;br&gt;
Enable CloudWatch logs, as they will help us read logs in case something fails in the pipeline. Give any group name ("mymernbuildlogs") and stream name ("mymernlogs").&lt;/p&gt;

&lt;p&gt;Click the "Create build project" button.&lt;/p&gt;

&lt;p&gt;Since we created a new service role, we need to give it S3 permission.&lt;/p&gt;

&lt;p&gt;So go to the Roles section on the IAM page and click on the new role name we gave (MyMernBuild). Click "Add permissions" to open the dropdown, then click "Attach policies". In the new window, search for "s3fullaccess", check "AmazonS3FullAccess", and click the "Add permissions" button. Once this permission is added, come back to the CodeBuild page.&lt;/p&gt;

&lt;p&gt;Click the "Start build" button. This will start to create a build and also upload it to our S3 bucket, wait until the build succeeds.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WSqOsSin--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2AfKnGrtCYakYJwbTQcxWRcQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WSqOsSin--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2AfKnGrtCYakYJwbTQcxWRcQ.png" alt="Start CodeBuild" width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can then verify whether all artifacts are present in the bucket by going into the S3 bucket page and clicking the Download button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EgKW6Fug--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2AB6xDHrRkxOJTzhaS-W2TLQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EgKW6Fug--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2AB6xDHrRkxOJTzhaS-W2TLQ.png" alt="Download S3 artifact" width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can tick the checkbox against your build name and click Download. You should see a zip file being downloaded.&lt;br&gt;
You can unzip it and verify that all the artifact files and folders specified in the &lt;code&gt;buildspec.yml&lt;/code&gt; file are present.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ck91qXS2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2AXzOkg8Q30ISEw8hCcZPD8w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ck91qXS2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2AXzOkg8Q30ISEw8hCcZPD8w.png" alt="Downloaded zip content" width="800" height="336"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now go back to the AWS console in the browser, search for CodeDeploy in the search bar, and open the AWS CodeDeploy page. Click Applications in the left menu.&lt;br&gt;
Then click the "Create application" button.&lt;br&gt;
Give it a name like "MyMernDeploy" and, for the Compute platform, select the "EC2/On-premises" option.&lt;br&gt;
Click the Create button.&lt;/p&gt;

&lt;p&gt;Once it is created, click the "Create deployment group" button.&lt;br&gt;
On its creation page, enter a deployment group name (like "MyMernDeploymentGroup").&lt;br&gt;
In the Service role section, type the name of the role we created earlier for CodeDeploy ("CodeDeployPermission") and select the matching option.&lt;br&gt;
In the environment configuration, tick "Amazon EC2 instances" and, in the tags field, enter "Name" as the key and "mern-devops" as the value (the name we gave our EC2 instance at the beginning). Once you do this, you should see a message like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NZa6ymr2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2AbC_bz2BxZfR0ZQnNnLa6JA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NZa6ymr2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2AbC_bz2BxZfR0ZQnNnLa6JA.png" alt="CodeDeploy Environment Configuration" width="800" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Scroll down and uncheck Load Balancer, since we will be creating it manually later. Click Create button.&lt;/p&gt;

&lt;p&gt;Once the Deployment group is created, click the "Create deployment" button.&lt;br&gt;
In this screen, ensure that for the Revision type field "My application is stored in Amazon S3" is selected.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9kMBTAU7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2A7vQs47WpGC3kiT3B-yjDZg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9kMBTAU7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2A7vQs47WpGC3kiT3B-yjDZg.png" alt="Copy S3 bucket URI" width="800" height="277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Copy the S3 URI from the S3 bucket page and paste it into the "Revision location" field.&lt;br&gt;
Then select "zip" as the Revision file type.&lt;br&gt;
Click the "Create deployment" button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HM1_YFXG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2AoqRoAx5vd1VJLkivF9d69A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HM1_YFXG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2AoqRoAx5vd1VJLkivF9d69A.png" alt="CodeDeploy success message" width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the deployment completes, you should see all the artifacts in the "/home/mubin/devopspipeline" directory.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UWrM3ruR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2Ae27O8ow4PRbcbcKNkVXBwQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UWrM3ruR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2Ae27O8ow4PRbcbcKNkVXBwQ.png" alt="listing artifacts in the user home directory" width="800" height="274"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Note: To see deployment logs, open terminal connected to EC2 instance via SSH &lt;br&gt;
&lt;code&gt;tail -50 /var/log/aws/codedeploy-agent/codedeploy-agent.log&lt;/code&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now run, &lt;code&gt;docker ps&lt;/code&gt; and you should see the containers running.&lt;/p&gt;

&lt;p&gt;If you copy the EC2 instance public IP and paste it into the browser, you should see our app running.&lt;/p&gt;
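
&lt;p&gt;From the SSH session you can also check that Nginx is answering before trying the browser (a quick sketch, run on the instance itself):&lt;/p&gt;

```shell
# ask the local Nginx container for the front-end;
# an HTTP 200 status code means the app is being served on port 80
curl -s -o /dev/null -w "%{http_code}\n" http://localhost/
```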

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wP7rViRW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2AdXoFEfNWDYnVe4f6BGivPA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wP7rViRW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2AdXoFEfNWDYnVe4f6BGivPA.png" alt="Working App in browser" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Try out registering as a new user and then log in. The application should be working.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yaFnrZN_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2AqDQgqSyBCJhhA-kg6dXg-w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yaFnrZN_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2AqDQgqSyBCJhhA-kg6dXg-w.png" alt="Working login of app" width="800" height="317"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we need to automate the flow, so that every time we push the changes, AWS CodeBuild will be triggered, which will in turn trigger AWS CodeDeploy.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 6: AWS CodePipeline
&lt;/h2&gt;

&lt;p&gt;Back in the AWS console in the browser, search for CodePipeline.&lt;br&gt;
On the CodePipeline page, click the "Create pipeline" button.&lt;br&gt;
Give it a name like "MyMernPipeline".&lt;br&gt;
Select "New service role" in the Service role field.&lt;br&gt;
Expand the "Advanced Settings" section, select the "Custom location" option, enter the bucket name in the Bucket field, and click the Next button. On the next screen, for the Source provider field, select the "GitHub (Version 1)" option, since we will be using OAuth-based authentication.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;AWS recommends using GitHub version 2.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Click the "Connect to GitHub" button.&lt;br&gt;
In a new pop-up window, click confirm button and you should see that the pop window gets closed and a success message.&lt;br&gt;
Enter the repository name and main as the branch. Select "GitHub webhooks" as the Change detection option and click next.&lt;br&gt;
On the next page, select AWS CodeBuild as the Build provider. Enter "MyMernBuild" in the Project name and click next.&lt;/p&gt;

&lt;p&gt;On the next page, select AWS CodeDeploy as the Deploy provider. Enter "MyMernDeploy" in the Application name field.&lt;br&gt;
Enter "MyMernDeploymentGroup" in the Deployment group field and click next.&lt;br&gt;
Click the "Create pipeline" button.&lt;/p&gt;

&lt;p&gt;Once the pipeline is created, you should see three stages: the first pulls the source from GitHub, the second builds with CodeBuild, and the third deploys with CodeDeploy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mSUpZKUU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2AFEUeRM6D0Hk2pfHMTS0NEQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mSUpZKUU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2AFEUeRM6D0Hk2pfHMTS0NEQ.png" alt="CodePipeline in action" width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So every time we push any changes to the repository, the pipeline will trigger and stages will run.&lt;/p&gt;
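&lt;p&gt;The same pipeline state can also be inspected from the AWS CLI instead of the console. A minimal sketch, assuming the pipeline name used above and credentials already configured via &lt;code&gt;aws configure&lt;/code&gt;:&lt;/p&gt;

```shell
# Show the latest status of each stage of the pipeline
aws codepipeline get-pipeline-state \
  --name MyMernPipeline \
  --query 'stageStates[].{stage:stageName,status:latestExecution.status}' \
  --output table
```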

&lt;p&gt;Also, every time new changes are incorporated, artifacts are generated with the same name. Hence the need for versioning in the S3 bucket; otherwise, the build will fail.&lt;/p&gt;
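&lt;p&gt;If versioning was not enabled when the artifact bucket was created, it can be turned on afterwards. A hedged sketch; YOUR_ARTIFACT_BUCKET is a placeholder for the bucket you configured for the pipeline artifacts:&lt;/p&gt;

```shell
# Enable object versioning on the artifact bucket
aws s3api put-bucket-versioning \
  --bucket YOUR_ARTIFACT_BUCKET \
  --versioning-configuration Status=Enabled
```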

&lt;p&gt;Try changing some text or styling and push to GitHub; you should see the pipeline start. After the pipeline flow is complete, your new changes should be reflected on the web page.&lt;br&gt;
&lt;br&gt;
Meanwhile, in the terminal connected to the EC2 instance via SSH, go into the container running Mongo: &lt;code&gt;docker exec -it &amp;lt;container-id-of-mongo&amp;gt; bash&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Type &lt;code&gt;mongosh&lt;/code&gt; and its shell should open.&lt;br&gt;
Run &lt;code&gt;show dbs&lt;/code&gt; to list all databases; you should see your database.&lt;br&gt;
Switch to our database &lt;code&gt;use stackoverflow-clone&lt;/code&gt;&lt;br&gt;
Show all users: &lt;code&gt;db.users.find().pretty()&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;If you have registered in the application, you should see the user entry here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--U86BlJzJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2AhOkwD3yAjX3mdP-d0vqeSw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--U86BlJzJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2AhOkwD3yAjX3mdP-d0vqeSw.png" alt="User entry in mongo shell" width="800" height="232"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Exit out of the shell by typing: &lt;code&gt;exit&lt;/code&gt;. Type &lt;code&gt;exit&lt;/code&gt; again to come out of the container.&lt;/p&gt;
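&lt;p&gt;The interactive checks above can also be done in one shot from the host, without entering the container. A sketch, assuming the database name used above; CONTAINER_ID is a placeholder for the Mongo container's ID:&lt;/p&gt;

```shell
# List the registered users directly, without an interactive shell
docker exec CONTAINER_ID mongosh stackoverflow-clone \
  --quiet --eval 'db.users.find().pretty()'
```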
&lt;h2&gt;
  
  
  Step 7 (Optional): Separate Volume for MongoDB
&lt;/h2&gt;

&lt;p&gt;A recommended approach is to use persistent storage for MongoDB so that, in the event of the instance being terminated or corrupted, our database is safe.&lt;/p&gt;

&lt;p&gt;So we need to create a separate EBS volume, attach it to our instance, and then mount a directory on it for MongoDB to use.&lt;/p&gt;

&lt;p&gt;Go to the EC2 dashboard in the browser and click Volumes under the Elastic Block Store section. Click Create Volume. Select "gp2" as the volume type; you can keep any size, but a minimum of 8GB is required. Ensure that this volume is in the same availability zone (AZ) as the instance. You can check that in the details of the EC2 instance: the AZ is mentioned under the Networking tab.&lt;br&gt;
Add a tag with key "Name" and value "mern-mongodb-volume", then click Create volume.&lt;/p&gt;

&lt;p&gt;Once created, tick the checkbox against the volume name, click the Actions button at the top, and click Attach Volume. Select the instance by typing its tag and click Attach.&lt;/p&gt;

&lt;p&gt;Now, check that the volume is listed on our instance by running the &lt;code&gt;lsblk&lt;/code&gt; command in the terminal connected via SSH. You should see a new volume.&lt;/p&gt;

&lt;p&gt;To check whether the EBS volume already contains a filesystem, run:&lt;br&gt;
&lt;code&gt;sudo file -s /dev/xvdf&lt;/code&gt;&lt;br&gt;
The output &lt;code&gt;/dev/xvdf: data&lt;/code&gt; implies there is no filesystem on it yet.&lt;/p&gt;

&lt;p&gt;In order to use it, we need to format it first:&lt;br&gt;
&lt;code&gt;sudo mkfs -t xfs /dev/xvdf&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then create a directory:&lt;br&gt;
&lt;code&gt;sudo mkdir /mongodbvolume&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now mount the volume to the created directory:&lt;br&gt;
&lt;code&gt;sudo mount /dev/xvdf /mongodbvolume&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You can now access this directory and it should be empty.&lt;/p&gt;
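&lt;p&gt;Note that a mount made with &lt;code&gt;mount&lt;/code&gt; alone does not survive a reboot. To make it persistent, an entry can be added to /etc/fstab. A minimal sketch; the UUID is a placeholder, obtain the real one with &lt;code&gt;sudo blkid /dev/xvdf&lt;/code&gt;:&lt;/p&gt;

```
# /etc/fstab entry ("nofail" lets the instance boot even if the volume is absent)
UUID=YOUR-VOLUME-UUID  /mongodbvolume  xfs  defaults,nofail  0  2
```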

&lt;p&gt;Now, go to the &lt;code&gt;docker-compose.yaml&lt;/code&gt; file and make the following change:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;volumes:
 - mongo_volume:/data/db
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;to&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;volumes:
 - /mongodbvolume:/data/db
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now push this code change via git which will trigger the AWS CodePipeline.&lt;/p&gt;
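&lt;p&gt;For reference, after the change the Mongo service in &lt;code&gt;docker-compose.yaml&lt;/code&gt; might look like the sketch below. The service name and image tag are assumptions; only the volumes mapping comes from this article:&lt;/p&gt;

```yaml
services:
  mongo:                          # service name is an assumption
    image: mongo                  # image tag is an assumption
    restart: always
    volumes:
      - /mongodbvolume:/data/db   # host bind mount from this article
```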

&lt;p&gt;Now, in the terminal connected to our EC2 instance via SSH, go inside the container (&lt;code&gt;docker exec -it &amp;lt;container-id-of-mongo&amp;gt; bash&lt;/code&gt;) and again run the commands to show all users; you will find the collection empty, i.e. there will be no output.&lt;br&gt;
This is because we have changed the volume. So go back to the application in the browser and register as a new user.&lt;br&gt;
Then come back to the mongo shell in this container and you should see the newly created user.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 8 (Optional): Create LoadBalancer
&lt;/h2&gt;

&lt;p&gt;In this final step, we will make our application accessible via AWS ELB.&lt;/p&gt;

&lt;p&gt;So open the EC2 dashboard in the AWS console in the browser.&lt;/p&gt;

&lt;p&gt;First, go to Security Groups and click Create. Give a name like "my-mern-sg". Give the same description as the name. Select the default VPC. Click "Add rule" in the Inbound rules section. Enter port 80 in the Port range. Enter "0.0.0.0/0" as the CIDR block in Source and click create.&lt;/p&gt;

&lt;p&gt;Next, in the left menu click Load Balancers and click the "Create load balancer" button. Create an "Application Load Balancer". Enter a name (like my-mern-lb). Select at least two AZs. In the Security groups field, select the &lt;code&gt;my-mern-sg&lt;/code&gt; we created above.&lt;/p&gt;

&lt;p&gt;In the "Listeners and routing" section, click "Create Target Group".It will open a new window. In the new window of the Target Group creation screen, select Instances as the target type and give a target group name (like my-mern-tg). Ensure that the VPC is the default one. Expand the Tags section and enter "Name" as the key and "mern-devops" as the value (same tag as the EC2 instance). Keeping the rest of the field the same, click the next button at the bottom. In the next screen select our instance in the "Available instances" section and then click the "Include as pending below" button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Gzq4uDEE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2A9VSMHv2zU0zD4VhsMYgTHg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Gzq4uDEE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2A9VSMHv2zU0zD4VhsMYgTHg.png" alt="Register instance with Target Group" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then click "Create target group" at the bottom of the screen. Back in the "Listeners and routing" section on the load balancer screen, click the refresh icon and select the target group we just created, then finally click Create load balancer.&lt;/p&gt;

&lt;p&gt;Now go to the EC2 instance and, under the Security tab, click the security group link. Click Edit inbound rules. Click Add rule. Select the "my-mern-sg" that we created and click Save.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KcShIOkb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2AJAx3gdFaWpVFZxlBCq9A0w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KcShIOkb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2AJAx3gdFaWpVFZxlBCq9A0w.png" alt="Inbound rule for Security Group" width="800" height="357"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now if you go back to the browser, our application will not load, as we have configured our instance to run behind the LB.&lt;br&gt;
So go to the created LB page, copy the DNS name, and paste it into the browser.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FmoQDGPU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2AJl3HkhxQ2Ru6rEi71ckj5Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FmoQDGPU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2AJl3HkhxQ2Ru6rEi71ckj5Q.png" alt="Web app working with LoadBalancer" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's it!!!&lt;/p&gt;

&lt;p&gt;The Git repo for this article can be found &lt;a href="https://github.com/khalifemubin/mern-aws-devops-project"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Hope you found the article useful.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Happy Coding!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you found this post helpful, please like, share, and follow me. I am a developer, transitioning to DevOps, not a writer - so each article is a big leap outside my comfort zone.&lt;/p&gt;

&lt;p&gt;If you need a technical writer or a DevOps engineer, do connect with me on LinkedIn: &lt;a href="https://www.linkedin.com/in/mubin-khalife/"&gt;https://www.linkedin.com/in/mubin-khalife/&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Thank you for reading and for your support!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>DevOps project using Terraform, Jenkins, and EKS</title>
      <dc:creator>Mubin</dc:creator>
      <pubDate>Tue, 20 Jun 2023 10:53:27 +0000</pubDate>
      <link>https://dev.to/mubinkhalife/devops-project-using-terraform-jenkins-and-eks-1914</link>
      <guid>https://dev.to/mubinkhalife/devops-project-using-terraform-jenkins-and-eks-1914</guid>
      <description>&lt;p&gt;In this article, we are going to create a DevOps project where we'll use Terraform, Jenkins, GitHub, and EKS.&lt;/p&gt;

&lt;p&gt;The first part of the article deals with setting up a Jenkins server through Terraform.&lt;/p&gt;

&lt;p&gt;The second part will deal with EKS setup using Terraform and setting up the Jenkins pipeline.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: It is assumed that you have Terraform and AWS CLI installed and configured on your local system. Please check this page for Terraform installation and this page for AWS CLI installation.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let's begin!!!&lt;/p&gt;

&lt;h2&gt;
  
  
  Part 1: Jenkins server setup using Terraform
&lt;/h2&gt;

&lt;p&gt;Terraform needs to store state information somewhere. We are going to keep it in AWS S3 storage.&lt;br&gt;
So head over to your AWS panel and create a bucket. The bucket name has to be globally unique; I am going to create mine with the name mubin-devops-cicd-terraform-eks, so you'll have to choose a different name. I'll be using US East (N. Virginia) (us-east-1) as the region for the bucket and keeping the rest of the settings the same to create the bucket.&lt;/p&gt;

&lt;p&gt;We'll be needing a key to log in to the Jenkins EC2 instance.&lt;br&gt;
So go to the EC2 dashboard and scroll down to the Key Pairs link inside Network &amp;amp; Security menu.&lt;br&gt;
Click the "Create key pair" button. Give it a name like "&lt;code&gt;jenkins-server-key&lt;/code&gt;". With the rest of the form fields left as default, download the pem file and set its permission to 400:&lt;br&gt;
&lt;code&gt;chmod 400 jenkins-server-key.pem&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AtjcdHds--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n6z2hl715fcofo43zmkh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AtjcdHds--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n6z2hl715fcofo43zmkh.png" alt="Create Key Pair" width="800" height="478"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's utilize the S3 storage to store terraform state by creating a terraform state file like below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#backend.tf
terraform {
  backend "s3" {
    bucket = "mubin-devops-cicd-terraform-eks"
    region = "us-east-1"
    key = "jenkins-server/terraform.tfstate"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since we will be using AWS as a cloud provider with the "us-east-1" region, let's configure Terraform with this information as well.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#provider.tf
provider "aws" {
  region = "us-east-1"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let's declare terraform variables that we will be using.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#variables.tf
variable "vpc_cidr_block" {
 type = string
 description = "To set cidr for vpc"
}
variable "subnet_cidr_block" {
 type = string
 description = "To set cidr for subnet"
}
variable "availability_zone" {
 type = string
 description = "To set the AWS availability zone"
}
variable "env_prefix" {
 type = string
 description = "Set as dev or prod or qa etc. based on desired environment"
}
variable "instance_type" {
 type = string
 description = "The desired instance type for the AWS EC2 instance"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Initialize the above variables in the "terraform.tfvars" file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vpc_cidr_block      = "10.0.0.0/16"
subnet_cidr_block   = "10.0.10.0/24"
availability_zone   = "us-east-1a"
env_prefix          = "dev"
instance_type       = "t2.small"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We are using '&lt;code&gt;dev&lt;/code&gt;' as the environment. You can change the &lt;code&gt;env_prefix&lt;/code&gt; variable according to your preference.&lt;br&gt;
We are going to use the latest image of Amazon Linux 2 for our Jenkins server instance. If you go to the EC2 dashboard and select Amazon Linux 2, you will see the following details:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--crnauG2y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2A24kUuBpDm0knclpzcYflbw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--crnauG2y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2A24kUuBpDm0knclpzcYflbw.png" alt="AMI information" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, the AMI name is "amzn2-ami-kernel-5.10-hvm-2.0.20230612.0-x86_64-gp2", which can be generalized using a wildcard pattern: "amzn2-ami-kernel-*-hvm-*-x86_64-gp2"&lt;/p&gt;

&lt;p&gt;Now we'll set up infrastructure for Jenkins through code.&lt;/p&gt;

&lt;p&gt;Add the following content to the "server.tf" file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_ami" "latest-amazon-linux-image" {
  most_recent = true
  owners      = ["amazon"]
  filter {
    name   = "name"
    values = ["amzn2-ami-kernel-*-hvm-*-x86_64-gp2"]
  }
  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}

resource "aws_instance" "myjenkins-server" {
  ami                         = data.aws_ami.latest-amazon-linux-image.id
  instance_type               = var.instance_type
  key_name                    = "jenkins-server-key"
  subnet_id                   = aws_subnet.myjenkins-server-subnet-1.id
  vpc_security_group_ids      = [aws_default_security_group.default-sg.id]
  availability_zone           = var.availability_zone
  associate_public_ip_address = true
  user_data                   = "${file("jenkins-server-setup.sh")}"
  tags = {
    Name = "${var.env_prefix}-server"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We customized the above using the official references (here and here) for the data and resource blocks respectively.&lt;/p&gt;

&lt;p&gt;Once the instance is created, let's get its public IP by appending the below snippet to the above file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "ec2_public_ip" {
  value = aws_instance.myjenkins-server.public_ip
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the resource block, we used the &lt;code&gt;jenkins-server-setup.sh&lt;/code&gt; script file to initialize packages on our Jenkins server. Let's create that file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# install jenkins

sudo yum update
sudo wget -O /etc/yum.repos.d/jenkins.repo \
    https://pkg.jenkins.io/redhat-stable/jenkins.repo
sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io-2023.key
sudo yum upgrade -y
sudo amazon-linux-extras install java-openjdk11 -y
sudo yum install jenkins -y
sudo systemctl enable jenkins
sudo systemctl start jenkins

# then install git
sudo yum install git -y

#then install terraform
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo
sudo yum -y install terraform

#finally install kubectl
sudo curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.23.6/bin/linux/amd64/kubectl
sudo chmod +x ./kubectl
sudo mkdir -p $HOME/bin &amp;amp;&amp;amp; sudo cp ./kubectl $HOME/bin/kubectl &amp;amp;&amp;amp; export PATH=$PATH:$HOME/bin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above script file, we are installing Jenkins, Git, Terraform, and Kubectl.&lt;/p&gt;

&lt;p&gt;Now we need to set up the networking of our ec2 instance.&lt;/p&gt;

&lt;p&gt;Let's start by creating "vpc.tf" file and entering the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_vpc" "myjenkins-server-vpc" {
  cidr_block = var.vpc_cidr_block
  tags = {
    Name = "${var.env_prefix}-vpc"
  }
}

resource "aws_subnet" "myjenkins-server-subnet-1" {
  vpc_id            = aws_vpc.myjenkins-server-vpc.id
  cidr_block        = var.subnet_cidr_block
  availability_zone = var.availability_zone
  tags = {
    Name = "${var.env_prefix}-subnet-1"
  }
}

resource "aws_internet_gateway" "myjenkins-server-igw" {
  vpc_id = aws_vpc.myjenkins-server-vpc.id
  tags = {
    Name = "${var.env_prefix}-igw"
  }
}

resource "aws_default_route_table" "main-rtbl" {
  default_route_table_id = aws_vpc.myjenkins-server-vpc.default_route_table_id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.myjenkins-server-igw.id
  }
  tags = {
    Name = "${var.env_prefix}-main-rtbl"
  }
}

resource "aws_default_security_group" "default-sg" {
  vpc_id = aws_vpc.myjenkins-server-vpc.id
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name = "${var.env_prefix}-default-sg"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above code, we create a VPC with the CIDR block initialized in our variables file, then a subnet in this VPC with its own CIDR block, and an Internet Gateway (IG) in the VPC. The default route table is then associated with the created IG. Finally, we create a Security Group (SG) that allows SSH (port 22) and the Jenkins web application (port 8080) from everywhere.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It's not good practice to allow all traffic from everywhere; we are doing this only for testing purposes. If you want, you can add your local system's public IP as the CIDR block in the ingress rule.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Alright now our terraform scripts are ready, let's execute them.&lt;/p&gt;

&lt;p&gt;Now open your favorite terminal and navigate to the directory where our above terraform scripts are located.&lt;/p&gt;

&lt;p&gt;The first command to run is &lt;code&gt;terraform init&lt;/code&gt;, which initializes Terraform.&lt;/p&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qGoZAGFg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2Aua7pcQAmWWAE8HiQSr-O5Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qGoZAGFg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2Aua7pcQAmWWAE8HiQSr-O5Q.png" alt="Terraform init output" width="800" height="347"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The second command does a dry run, i.e. it shows what will happen when we apply the scripts. So run &lt;code&gt;terraform plan&lt;/code&gt;. You should see output like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Z92mikWF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2ANXZeLrcLFQbulq-rf4Bz1Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Z92mikWF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2ANXZeLrcLFQbulq-rf4Bz1Q.png" alt="Terrafrom plan output" width="754" height="194"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It implies that 6 resources will be created: one route table, one subnet, one internet gateway, one security group, one EC2 instance, and one VPC.&lt;br&gt;
Cool! Let's apply this. Run &lt;code&gt;terraform apply -auto-approve&lt;/code&gt;. The auto-approve flag skips the confirmation prompt. Once it completes successfully, you can go to your AWS console and see all 6 resources created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5QX-m3Wm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2A1zhZt4OsQZGs2oCxRRhLDw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5QX-m3Wm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2A1zhZt4OsQZGs2oCxRRhLDw.png" alt="Terrafrom apply output" width="800" height="535"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice the EC2 public IP printed in the output.&lt;/p&gt;

&lt;p&gt;Most importantly you should see the S3 bucket containing Terraform's state file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DsLDD00E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2AKDsm4tMtqj1OhJxXByuZtQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DsLDD00E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2AKDsm4tMtqj1OhJxXByuZtQ.png" alt="Amazon S3 bucket with Terraform state file" width="800" height="587"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let us set up the admin login of the Jenkins server. Copy the public IP of the EC2 instance and open it on port 8080 (http://&amp;lt;public-ip&amp;gt;:8080) in the address bar of your favorite browser. You should see the installation screen like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gqCHgiob--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2ArTQCbaGmABEIJY70IvxJqg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gqCHgiob--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2ArTQCbaGmABEIJY70IvxJqg.png" alt="Jenkins installation screen" width="800" height="471"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Log in to the instance via SSH by running:&lt;br&gt;
&lt;code&gt;ssh -i jenkins-server-key.pem ec2-user@&amp;lt;public-ip-of-instance&amp;gt;&lt;/code&gt; . Then copy the default password from &lt;code&gt;/var/lib/jenkins/secrets/initialAdminPassword&lt;/code&gt; by running&lt;br&gt;
&lt;code&gt;sudo cat /var/lib/jenkins/secrets/initialAdminPassword&lt;/code&gt; and paste it into the input text box and click "Continue".&lt;/p&gt;

&lt;p&gt;Click to install the selected plugins. Once done, create an admin user.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EdDvz1K4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/0%2Akgsgru8VDKg6KmyE.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EdDvz1K4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/0%2Akgsgru8VDKg6KmyE.png" alt="Jenkins Admin creation screen" width="800" height="595"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we need to establish a connection between Jenkins and GitHub. So go to Manage Jenkins by clicking on its link. Then click "Credentials". &lt;br&gt;
Then in the tabular data, select the "global" link under the "Domain" column, as shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UhM6GQyP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2AKxAYz5Q51jweOaYbmfhfAA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UhM6GQyP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2AKxAYz5Q51jweOaYbmfhfAA.png" alt="Jenkins global credential screen" width="800" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then in the Global credentials screen, click "Add Credentials".&lt;br&gt;
Here we will first add our GitHub username and password to connect it to this Jenkins instance.&lt;br&gt;
Select "Username and password" from the kind field. Ensure that "Global" is selected as the scope.&lt;br&gt;
Enter your GitHub username and password. Give a recognizable ID, like "my-github-creds" and click create.&lt;br&gt;
Once again back on the Global Credentials page click the "Add Credentials" button.&lt;br&gt;
We need to add the AWS account access key ID. Select "Secret text" as the kind. In the ID field enter "AWS_ACCESS_KEY_ID", and in the Secret field enter your AWS account's access key ID. Then click the Create button.&lt;br&gt;
Once again, back on the Global Credentials page, click the "Add Credentials" button. We need to add the AWS secret access key. Select "Secret text" as the kind. In the ID field enter "AWS_SECRET_ACCESS_KEY", and in the Secret field enter your AWS account's secret access key. Then click the Create button.&lt;/p&gt;

&lt;p&gt;Next, we need to configure AWS CLI on this Jenkins EC2 instance. In the terminal connected to the instance via SSH, run &lt;code&gt;aws configure&lt;/code&gt; and enter the values for AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. Enter "us-east-1" for the AWS_DEFAULT_REGION.&lt;/p&gt;
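&lt;p&gt;The same configuration can be done non-interactively, which is handy when scripting the server setup. A sketch with placeholder values in place of the real credentials:&lt;/p&gt;

```shell
# Non-interactive equivalent of the "aws configure" prompts (placeholders)
aws configure set aws_access_key_id     YOUR_ACCESS_KEY_ID
aws configure set aws_secret_access_key YOUR_SECRET_ACCESS_KEY
aws configure set region                us-east-1
```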

&lt;p&gt;This brings an end to our first part.&lt;/p&gt;
&lt;h2&gt;
  
  
  Part 2: Kubernetes setup using Terraform and Jenkins
&lt;/h2&gt;

&lt;p&gt;Since two separate processes will be involved, one dealing with EKS setup and another with the pipeline, we will create two directories. Create the first one named "terraform-for-cluster" and the second named "kubernetes".&lt;/p&gt;

&lt;p&gt;Navigate into the "terraform-for-cluster" directory. Let's start writing the required terraform files.&lt;/p&gt;

&lt;p&gt;We will create the same "provider.tf" as the one created in part 1.&lt;/p&gt;

&lt;p&gt;Also, we will use the same S3 backend for state but with a different key. Using the same key would overwrite the Jenkins server's state file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#backend.tf
terraform {
  backend "s3" {
    bucket = "mubin-devops-cicd-terraform-eks"
    region = "us-east-1"
    key = "eks/terraform.tfstate"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we will create our custom VPC module in which our cluster will operate. We will use the code provided by Terraform here and tweak it a bit.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#vpc.tf
data "aws_availability_zones" "azs" {}
module "myjenkins-server-vpc" {
  source          = "terraform-aws-modules/vpc/aws"
  name            = "myjenkins-server-vpc"
  cidr            = var.vpc_cidr_block
  private_subnets = var.private_subnet_cidr_blocks
  public_subnets  = var.public_subnet_cidr_blocks
  azs             = data.aws_availability_zones.azs.names

  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  tags = {
    "kubernetes.io/cluster/myjenkins-server-eks-cluster" = "shared"
  }

  public_subnet_tags = {
    "kubernetes.io/cluster/myjenkins-server-eks-cluster" = "shared"
    "kubernetes.io/role/elb"                  = 1
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/myjenkins-server-eks-cluster" = "shared"
    "kubernetes.io/role/internal-elb"         = 1
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above script, we are setting our availability zones and our public and private subnets, with CIDR block values from "terraform.tfvars", along with the NAT gateway.&lt;/p&gt;

&lt;p&gt;The next step is to write a Terraform module for the EKS cluster utilizing the above VPC module. We are going to use the code provided here by Terraform and tweak it to our requirements.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#eks-cluster.tf
module "eks" {
    source  = "terraform-aws-modules/eks/aws"
    version = "~&amp;gt; 19.0"
    cluster_name = "myjenkins-server-eks-cluster"
    cluster_version = "1.24"

    cluster_endpoint_public_access  = true

    vpc_id = module.myjenkins-server-vpc.vpc_id
    subnet_ids = module.myjenkins-server-vpc.private_subnets

    tags = {
        environment = "development"
        application = "myjenkins-server"
    }

    eks_managed_node_groups = {
        dev = {
            min_size = 1
            max_size = 3
            desired_size = 2

            instance_types = ["t2.small"]
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note the number of min, max, and desired nodes that we have set.&lt;/p&gt;

&lt;p&gt;Let's configure terraform variables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#terraform.tfvars
vpc_cidr_block = "10.0.0.0/16"
private_subnet_cidr_blocks=["10.0.1.0/24","10.0.2.0/24","10.0.3.0/24"]
public_subnet_cidr_blocks=["10.0.4.0/24","10.0.5.0/24","10.0.6.0/24"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Declare the above variables in their own file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#variables.tf 
variable "vpc_cidr_block" {
 type = string
}

variable "private_subnet_cidr_blocks" {
 type = list(string)
}
variable "public_subnet_cidr_blocks" {
 type = list(string)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that the Terraform part is complete, let's look at the Jenkins pipeline code that we will run from the Jenkins web panel.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Jenkinsfile
pipeline {
    agent any
    environment {
        AWS_ACCESS_KEY_ID = credentials('AWS_ACCESS_KEY_ID')
        AWS_SECRET_ACCESS_KEY = credentials('AWS_SECRET_ACCESS_KEY')
        AWS_DEFAULT_REGION = "us-east-1"
    }
    stages {
        stage("Create an EKS Cluster") {
            steps {
                script {
                    // Give the location of the terraform scripts directory
                    // relative to the repo
                    dir('part2-cluster-from-terraform-and-jenkins/terraform-for-cluster') {
                        sh "terraform init"
                        sh "terraform apply -auto-approve"
                    }
                }
            }
        }
        stage("Deploy to EKS") {
            steps {
                script {
                    // Give the location of the kubernetes scripts directory
                    // relative to the repo
                    dir('part2-cluster-from-terraform-and-jenkins/kubernetes') {
                        sh "aws eks update-kubeconfig --name myjenkins-server-eks-cluster"
                        sh "kubectl apply -f deployment.yaml"
                        sh "kubectl apply -f service.yaml"
                    }
                }
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's go through the overview of the above script. &lt;br&gt;
We are first setting our environment variables related to AWS. Then we are creating two stages for the pipeline. &lt;br&gt;
In the first stage, we are creating a cluster for Kubernetes by specifying from which directory the terraform scripts should run.&lt;br&gt;
In the second stage, we are deploying the default Nginx app on Kubernetes. We specify "kubernetes" as the directory as that is where our scripts for deployment and service will be located.&lt;br&gt;
In that directory, we have two files, one for deployment and another one for exposing it as a service of type LoadBalancer.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, go to the Jenkins panel. From the dashboard, click "New Item" in the left side menu. In the new screen, give the pipeline a name, select "Pipeline", and click OK. This creates the pipeline. Scroll down to the Pipeline field and select "Pipeline script from SCM". Then select "Git" as the option in the SCM field and enter the above repository URL in the "Repository URL" field. In credentials, select the git credentials that we created earlier in Jenkins. Then change the branch specifier from "*/master" to "*/main". In the "Script Path" field, give the location of the Jenkinsfile, e.g. "part2-cluster-from-terraform-and-jenkins/Jenkinsfile".&lt;br&gt;
Finally, click "Save".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4QSzFc7U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2AhONW26m0ANdmtPUgykSiEg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4QSzFc7U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2AhONW26m0ANdmtPUgykSiEg.png" alt="Jenkins pipeline Git configure" width="800" height="538"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Part 3: Witness the Action
&lt;/h2&gt;

&lt;p&gt;Go to the created Jenkins pipeline in the web browser and click on "Build Now" from the left menu of the pipeline to set up EKS.&lt;/p&gt;

&lt;p&gt;It will take roughly 15 minutes to set up everything on AWS.&lt;/p&gt;

&lt;p&gt;Once you see the "success" message for the build, go to EKS in the AWS console. You will see that our cluster has been created. Also, when you go to S3 in the AWS console, you should see one more folder named "eks" with a Terraform state file inside it, which belongs to the EKS cluster.&lt;/p&gt;
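&lt;p&gt;You can also verify the cluster from your local terminal. A quick sketch, assuming the AWS CLI and kubectl are installed and configured with the same credentials and region as the pipeline:&lt;/p&gt;

```shell
# List EKS clusters in the region; "myjenkins-server-eks-cluster" should appear
aws eks list-clusters --region us-east-1

# Point kubectl at the new cluster and confirm the worker nodes are Ready
aws eks update-kubeconfig --name myjenkins-server-eks-cluster --region us-east-1
kubectl get nodes
```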

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VyelfLbL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2AgpWqqemk31UpjSVNCD0q2Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VyelfLbL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2AgpWqqemk31UpjSVNCD0q2Q.png" alt="EKS S3 bucket" width="800" height="559"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then go to EC2; you should see that our LoadBalancer was also created. Click on it, copy the DNS name from the description section, paste it into the browser, and you should see Nginx running.&lt;/p&gt;
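&lt;p&gt;Alternatively, the LoadBalancer DNS name can be fetched with kubectl and tested with curl. A sketch, assuming kubectl is pointed at the cluster and the service name matches the one in service.yaml:&lt;/p&gt;

```shell
# Print the external hostname assigned to the nginx service
kubectl get service nginx -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'

# Fetch the default Nginx page through the LoadBalancer
curl http://$(kubectl get service nginx -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
```

&lt;p&gt;It can take a few minutes after deployment before the hostname resolves.&lt;/p&gt;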

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--l-LhBiuh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2Aj0DQRdkZnSPzUjNwXJwuCw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--l-LhBiuh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2Aj0DQRdkZnSPzUjNwXJwuCw.png" alt="EC2 LoadBalancer" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XVV6lYti--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2Az4xak7p4_Rv0fCu2uyJSFg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XVV6lYti--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1600/1%2Az4xak7p4_Rv0fCu2uyJSFg.png" alt="Nginx web page display in browser" width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To clean up the created resources (things like LoadBalancers are not free on AWS), navigate on your local system to the directory containing the second part of the Terraform scripts and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform destroy - auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it! All your dynamically created resources will now be destroyed.&lt;/p&gt;

&lt;p&gt;The Git repo for this article can be found here.&lt;/p&gt;

&lt;p&gt;Hope you found the article useful.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Happy Coding!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you found this post helpful, please like, share, and follow me. I am a developer, transitioning to DevOps, not a writer - so each article is a big leap outside my comfort zone.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you need a technical writer or a DevOps engineer, do connect with me on LinkedIn:&lt;/strong&gt; &lt;a href="https://www.linkedin.com/in/mubin-khalife/"&gt;https://www.linkedin.com/in/mubin-khalife/&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Thank you for reading and for your support!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>ElasticSearch in PHP</title>
      <dc:creator>Mubin</dc:creator>
      <pubDate>Fri, 09 Jun 2023 12:39:10 +0000</pubDate>
      <link>https://dev.to/mubinkhalife/elasticsearch-in-php-567b</link>
      <guid>https://dev.to/mubinkhalife/elasticsearch-in-php-567b</guid>
      <description>&lt;p&gt;&lt;strong&gt;A little overview of Elasticsearch&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Elasticsearch is a real-time distributed and open-source full-text search engine.&lt;/p&gt;

&lt;p&gt;It is accessible from the RESTful web service interface and uses schema-less JSON (JavaScript Object Notation) documents to store data.&lt;/p&gt;

&lt;p&gt;Elasticsearch can be used as a replacement for document stores like MongoDB.&lt;/p&gt;

&lt;p&gt;The following are key terminologies associated with Elasticsearch:&lt;/p&gt;

&lt;p&gt;i) Node: It is a single-running instance of Elasticsearch. Multiple nodes could be running on a single server depending on its resource capabilities.&lt;/p&gt;

&lt;p&gt;ii) Cluster: As the name suggests, it is a collection of one or more nodes that provides collective indexing and search capabilities across all nodes.&lt;/p&gt;

&lt;p&gt;iii) Index: Index is a collection of different types of documents and their properties. This means that you can have a collection of documents that contains data for a specific part of the application. In RDBMS it is analogous to a table.&lt;/p&gt;

&lt;p&gt;iv) Field: A field is analogous to a column in an RDBMS.&lt;/p&gt;

&lt;p&gt;v) Document: A document is a collection of fields in JSON format. In an RDBMS it is analogous to a row. Every document has a unique identifier, stored in its "_id" field.&lt;/p&gt;

&lt;p&gt;vi) Shard: Indexes are horizontally subdivided into shards. Each shard carries all the properties of the index but holds a smaller number of JSON documents; it is a subset of the entire index. A shard acts like an independent unit and can be stored on any node. The primary shard is the original horizontal part of an index.&lt;/p&gt;

&lt;p&gt;vii) Replicas: Elasticsearch allows users to create replicas of their indexes and shards. Replication increases the availability of data in case of failure and also improves performance by allowing search operations to run on replicas in parallel.&lt;/p&gt;

&lt;p&gt;Alright! Now that you are familiar with Elasticsearch, let’s put it to use in a simple PHP application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Installation &amp;amp; Configuration&lt;/strong&gt;&lt;br&gt;
We need to set up an Elasticsearch node. So install it from the instructions provided &lt;a href="https://www.elastic.co/downloads/elasticsearch"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I’m going to use Ubuntu (running on Vagrant) for this article. We could install via apt-get, but I’ll download the Debian package instead and install it.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;#download with a simple name&lt;br&gt;
wget -c https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.8.0-amd64.deb -O elastic.deb&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;#run installer&lt;br&gt;
sudo dpkg -i elastic.deb&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Elasticsearch also needs Java to run. So let’s install Java.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apt-get update&lt;br&gt;
sudo apt install openjdk-11-jre-headless&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;We cannot run Elasticsearch as the root user, so we need to give our user the required permissions on certain directories:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo chown -R $USER:$GROUP /etc/default/elasticsearch&lt;br&gt;
sudo chown -R $USER:$GROUP /usr/share/elasticsearch/&lt;br&gt;
sudo chown -R $USER:$GROUP /etc/elasticsearch&lt;br&gt;
sudo chown -R $USER:$GROUP /var/lib/elasticsearch&lt;br&gt;
sudo chown -R $USER:$GROUP /var/log/elasticsearch&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Modify the configuration file located at /etc/elasticsearch/elasticsearch.yml with the content below (this disables the security features, which is acceptable only for local development):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;xpack.security.enabled: false&lt;br&gt;
xpack.security.enrollment.enabled: false&lt;br&gt;
xpack.security.http.ssl:&lt;br&gt;
  enabled: false&lt;br&gt;
  keystore.path: certs/http.p12&lt;br&gt;
xpack.security.transport.ssl:&lt;br&gt;
  enabled: false&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Configure the Java home path for Elasticsearch. Open &lt;code&gt;/etc/default/elasticsearch&lt;/code&gt; and enter the following (the value must be the Java installation directory, not the binary; adjust it to match your system):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ES_JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Start the Elasticsearch service&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo systemctl daemon-reload&lt;br&gt;
sudo systemctl enable elasticsearch.service&lt;br&gt;
sudo systemctl start elasticsearch.service&lt;/code&gt;&lt;br&gt;
Alternatively, to run Elasticsearch in the foreground (useful for watching its logs), run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;/usr/share/elasticsearch/bin/elasticsearch&lt;/code&gt;&lt;/p&gt;
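&lt;p&gt;Once the node is up, you can quickly verify that it is responding. A sketch, assuming the default HTTP port of 9200:&lt;/p&gt;

```shell
# Ping the node; it should answer with its name and version info
curl -s http://127.0.0.1:9200

# Check overall cluster health
curl -s "http://127.0.0.1:9200/_cluster/health?pretty"
```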

&lt;p&gt;&lt;strong&gt;Manual Testing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We need a REST client to perform manual operations on Elasticsearch. We will use the “Thunder Client” extension for Visual Studio Code, so download the editor if you don’t have it already. Open Visual Studio Code and click the Extensions icon in the left sidebar. A drawer will open. In the search text field, enter “Thunder Client”, click the first item in the list, and then click the Install button in the content area.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8t1IwHkO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cgfvx8d0noem3g2q7ews.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8t1IwHkO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cgfvx8d0noem3g2q7ews.png" alt="Visual Studio Thunder Client" width="800" height="537"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once this is installed, you’ll find the “Thunder Client” icon in the left sidebar. Click on it to open a side drawer, then click the “New Request” button.&lt;/p&gt;

&lt;p&gt;In the content area, select the “PUT” method and enter the URL as “&lt;a href="http://127.0.0.1:9200/blog"&gt;http://127.0.0.1:9200/blog&lt;/a&gt;" and hit the send button.&lt;/p&gt;

&lt;p&gt;It’ll create an index by the name of “blog” as verified in the response:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2H09TwnR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zyzbjbt3aq0wz1p2ujck.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2H09TwnR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zyzbjbt3aq0wz1p2ujck.png" alt="Create Blog Index" width="800" height="833"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s test further by adding a record inside this index.&lt;/p&gt;

&lt;p&gt;Click the “New Request” button on the left side of the editor, select the “POST” method, and enter the URL as &lt;a href="http://127.0.0.1:9200/blog/_doc/1"&gt;http://127.0.0.1:9200/blog/_doc/1&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Here “1” is the id for our intended document, which we are providing manually. Enter the following JSON content:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;{&lt;br&gt;
  "title": "This is a test blog post",&lt;br&gt;
  "body":"A dummy content for body of the post"&lt;br&gt;
}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now hit the Send button and you should see output like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uqJM7l4V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eahh1czgeau3foz8db3i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uqJM7l4V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eahh1czgeau3foz8db3i.png" alt="Document Insert Screenshot" width="800" height="860"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you send the data again, Elasticsearch will increment the document’s version; this is essentially an update operation.&lt;/p&gt;

&lt;p&gt;Let’s do that, since we want to store a “tags” field as well. Modify the JSON content as below and hit send:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;{&lt;br&gt;
  "title": "This is a test blog post",&lt;br&gt;
  "body":"A dummy content for body of the post",&lt;br&gt;
  "tags":["blog","post","test"]&lt;br&gt;
}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You should see output similar to this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4kpE-a8O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xi104wirl33ce25bxbr8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4kpE-a8O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xi104wirl33ce25bxbr8.png" alt="Update document screenshot" width="800" height="883"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, the version of the record got updated from 1 to 2.&lt;/p&gt;

&lt;p&gt;Next, insert another record without providing an id manually. Click the “New Request” button on the left side of the editor, select the “POST” method, and enter the URL as &lt;a href="http://127.0.0.1:9200/blog/_doc"&gt;http://127.0.0.1:9200/blog/_doc&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Enter the following JSON content:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;{&lt;br&gt;
  "title": "What is Lorem Ipsum?",&lt;br&gt;
  "body":"Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.",&lt;br&gt;
  "tags":["lorem","ipsum","capsicum"]&lt;br&gt;
}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ELgWWXav--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4jmsmxtk603n64xsw0zf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ELgWWXav--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4jmsmxtk603n64xsw0zf.png" alt="Insert without id parameter" width="800" height="890"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice the “_id” value. It got assigned automatically.&lt;/p&gt;

&lt;p&gt;To fetch all documents in the index, click the “New Request” button on the left side of the editor, select the “GET” method, enter the URL as &lt;a href="http://127.0.0.1:9200/blog/_search"&gt;http://127.0.0.1:9200/blog/_search&lt;/a&gt;, and hit the Send button:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OeKTBfFG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dz5dtahmy5zpzbs17g99.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OeKTBfFG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dz5dtahmy5zpzbs17g99.png" alt="Search Index screenshot" width="800" height="979"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s delete the index so that we can start afresh.&lt;/p&gt;

&lt;p&gt;Click the “New Request” button on the left side of the editor, select the “DELETE” method, enter the URL as &lt;a href="http://127.0.0.1:9200/blog"&gt;http://127.0.0.1:9200/blog&lt;/a&gt;, and hit the Send button.&lt;/p&gt;

&lt;p&gt;Now when you search again you should get “index_not_found_exception”.&lt;/p&gt;
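&lt;p&gt;For reference, the same sequence of requests can be reproduced from a terminal with curl instead of a GUI client. A sketch against the local node:&lt;/p&gt;

```shell
# Create the "blog" index
curl -X PUT http://127.0.0.1:9200/blog

# Insert a document with an explicit id of 1
curl -X POST http://127.0.0.1:9200/blog/_doc/1 \
  -H "Content-Type: application/json" \
  -d '{"title": "This is a test blog post", "body": "A dummy content for body of the post"}'

# Insert a document and let Elasticsearch assign the id
curl -X POST http://127.0.0.1:9200/blog/_doc \
  -H "Content-Type: application/json" \
  -d '{"title": "What is Lorem Ipsum?", "body": "Dummy text.", "tags": ["lorem", "ipsum"]}'

# Fetch all documents in the index
curl http://127.0.0.1:9200/blog/_search

# Delete the index
curl -X DELETE http://127.0.0.1:9200/blog
```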

&lt;p&gt;We are now set to perform operations programmatically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Set up the client&lt;/strong&gt;&lt;br&gt;
First, install the Apache web server.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apt update &amp;amp;&amp;amp; sudo apt install apache2 &amp;amp;&amp;amp; sudo apt install libapache2-mod-php&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then give index.php higher priority than its HTML counterpart. Open &lt;code&gt;/etc/apache2/mods-enabled/dir.conf&lt;/code&gt; and change:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;DirectoryIndex index.html index.cgi index.pl index.php index.xhtml index.htm&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;To&lt;/p&gt;

&lt;p&gt;&lt;em&gt;DirectoryIndex index.php index.html index.cgi index.pl index.xhtml index.htm&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Restart Apache service:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo systemctl restart apache2&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now we are ready to code our web application. Since it is going to be a PHP application, let’s install the PHP CLI.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apt install php-cli unzip&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;We need a PHP client for Elasticsearch. For that we need Composer. Composer is a PHP package dependency manager, like NPM for NodeJS.&lt;/p&gt;

&lt;p&gt;To install composer, simply run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;curl -sS https://getcomposer.org/installer -o /tmp/composer-setup.php&lt;br&gt;
sudo php /tmp/composer-setup.php --install-dir=/usr/local/bin --filename=composer&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Check the installation by running:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;composer --version&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now go to /var/www/html and run this command to download the client:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;composer require elasticsearch/elasticsearch&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This will create a vendor directory and composer.json file.&lt;/p&gt;

&lt;p&gt;Now create an additional directory with a client initialization script&lt;/p&gt;

&lt;p&gt;&lt;code&gt;mkdir app&lt;br&gt;
cd app&lt;br&gt;
touch init.php&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Paste the following content in the init.php file&lt;/p&gt;

&lt;p&gt;&lt;code&gt;&amp;lt;?php&lt;br&gt;
    require_once 'vendor/autoload.php';&lt;br&gt;
    // The client is built via ClientBuilder (elasticsearch/elasticsearch v8.x),&lt;br&gt;
    // not constructed directly&lt;br&gt;
    $es = Elastic\Elasticsearch\ClientBuilder::create()&lt;br&gt;
        -&amp;gt;setHosts(['http://127.0.0.1:9200'])&lt;br&gt;
        -&amp;gt;build();&lt;br&gt;
?&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The above code initializes the Elasticsearch client and points it at the node running on localhost at port 9200.&lt;/p&gt;

&lt;p&gt;Time for the web interface.&lt;/p&gt;

&lt;p&gt;We are going to create two screens, one for searching and another one for adding data. We are going to keep the interface to a minimum.&lt;/p&gt;

&lt;p&gt;So let’s create the first screen, index.php, and enter the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;?php
require_once "app/init.php";

if(isset($_GET['q'])){
        $q = $_GET['q'];
        $query = $es-&amp;gt;search([
            'body'=&amp;gt; [
                'query' =&amp;gt; [
                    'bool' =&amp;gt; [
                        'should' =&amp;gt; [
                            [ 'match' =&amp;gt; [ 'title' =&amp;gt; $q ] ],
                            [ 'match' =&amp;gt; [ 'body' =&amp;gt; $q ] ],
                        ]
                    ]
                ]
            ]
        ]);

     if($query['hits']['total']["value"] &amp;gt;= 1){
        $results = $query['hits']['hits'];
     }
}
?&amp;gt;
&amp;lt;!doctype html&amp;gt;
&amp;lt;html&amp;gt;
    &amp;lt;head&amp;gt;
        &amp;lt;meta charset="utf-8"&amp;gt;
        &amp;lt;title&amp;gt;Search | ElasticSearch Demo&amp;lt;/title&amp;gt;
 &amp;lt;link href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-9ndCyUaIbzAi2FUVXJi0CjmCapSmO7SnpJef0486qhLnuZ2cdeRhO02iuK6FUUVM" crossorigin="anonymous"&amp;gt;    
   &amp;lt;/head&amp;gt;
    &amp;lt;body&amp;gt;
     &amp;lt;div class="row mt-3"&amp;gt;
            &amp;lt;div class="mx-auto col-10 col-md-8 col-lg-6"&amp;gt;
                &amp;lt;!-- We are setting the method as GET for the form post, so we can see the entered text in the url --&amp;gt;
                &amp;lt;form action="&amp;lt;?=$_SERVER["PHP_SELF"]?&amp;gt;" method="GET" autocomplete="off"&amp;gt;
                    &amp;lt;div class="row mb-3"&amp;gt;
                        &amp;lt;div class="col"&amp;gt;
                        &amp;lt;input type="text" class="form-control" name="q" placeholder="Enter text to Search Blog" /&amp;gt;
                        &amp;lt;/div&amp;gt;
                    &amp;lt;/div&amp;gt;
                    &amp;lt;div class="row mb-3"&amp;gt;
                        &amp;lt;div class="col"&amp;gt;
                            &amp;lt;input type="submit" class="btn btn-primary" value="Search" /&amp;gt;
                        &amp;lt;/div&amp;gt;
                    &amp;lt;/div&amp;gt;
  &amp;lt;/form&amp;gt;
  &amp;lt;?php
        if(isset($results)){
            foreach($results as $r){
    ?&amp;gt;
            &amp;lt;div class="row mb-3"&amp;gt;
                &amp;lt;div class="col"&amp;gt;
   &amp;lt;div class="alert alert-secondary" role="alert"&amp;gt;
                        &amp;lt;p class="fw-bolder"&amp;gt;&amp;lt;?=$r["_source"]["title"]?&amp;gt;&amp;lt;/p&amp;gt;
                        &amp;lt;?=implode(",",$r["_source"]["tags"])?&amp;gt;
                    &amp;lt;/div&amp;gt; 
  &amp;lt;/div&amp;gt;
            &amp;lt;/div&amp;gt;
    &amp;lt;?php
            }
 }else{
   echo '&amp;lt;div class="alert alert-danger" role="alert"&amp;gt;
   No data found
    &amp;lt;/div&amp;gt;';
 }
    ?&amp;gt;
            &amp;lt;/div&amp;gt;
        &amp;lt;/div&amp;gt;
            &amp;lt;!-- We are going to skip validation checks --&amp;gt;
    &amp;lt;/body&amp;gt;
&amp;lt;script src="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/js/bootstrap.bundle.min.js" integrity="sha384-geWF76RCwLtnZ8qwWowPQNguL3RmwHVBC9FhGdlKrxdiJJigb/j/68SIy3Te4Bkz" crossorigin="anonymous"&amp;gt;&amp;lt;/script&amp;gt;
&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s analyze the code above. The PHP code before the HTML doctype declaration is sub-divided into three parts:&lt;br&gt;
i) Requiring the Elasticsearch client, which we created earlier, into this script.&lt;br&gt;
ii) If we receive the URL parameter ‘q’, we perform a search query on Elasticsearch against the “title” and “body” fields.&lt;br&gt;
iii) $query[‘hits’][‘total’][“value”] checks whether at least one record is present. If there is, we initialize a “results” variable like this:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$results = $query['hits']['hits'];&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;After the closing of the HTML form tag, we check whether the “results” variable is set; if so, we loop through the documents it contains and display the “title” and “tags” field values.&lt;/p&gt;
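&lt;p&gt;For reference, the query body that the PHP code builds corresponds to this raw search request, which you can run with curl to inspect the response directly (here “test” is a sample search term):&lt;/p&gt;

```shell
# bool/should query matching either the title or the body field
curl -s http://127.0.0.1:9200/blog/_search \
  -H "Content-Type: application/json" \
  -d '{
    "query": {
      "bool": {
        "should": [
          { "match": { "title": "test" } },
          { "match": { "body": "test" } }
        ]
      }
    }
  }'
```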

&lt;p&gt;Let’s create the final web interface for creating documents in the index. Create an “add.php” file and enter the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;?php
    require_once "app/init.php";

    if(!empty($_POST)){
        if(isset($_POST["title"]) &amp;amp;&amp;amp; isset($_POST["body"]) &amp;amp;&amp;amp; isset($_POST["tags"])){
            $title = $_POST["title"];
            $body = $_POST["body"];
            $tags = explode("," , $_POST["tags"]);

            $indexed = $es-&amp;gt;index([
                "index" =&amp;gt; "blog",
                "body" =&amp;gt; [
                    'title' =&amp;gt; $title,
                    'body' =&amp;gt; $body,
                    'tags' =&amp;gt; $tags
                ]
            ]);

            if($indexed){
                echo '&amp;lt;div class="alert alert-success mt-3 mb-3" role="alert"&amp;gt;
                        Document inserted successfully!
                      &amp;lt;/div&amp;gt;';
            }
        }
    }
?&amp;gt;
&amp;lt;!doctype html&amp;gt;
&amp;lt;html&amp;gt;
    &amp;lt;head&amp;gt;
        &amp;lt;meta charset="utf-8"&amp;gt;
        &amp;lt;title&amp;gt;Create | ElasticSearch Demo&amp;lt;/title&amp;gt;
        &amp;lt;link href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-9ndCyUaIbzAi2FUVXJi0CjmCapSmO7SnpJef0486qhLnuZ2cdeRhO02iuK6FUUVM" crossorigin="anonymous"&amp;gt; 
   &amp;lt;/head&amp;gt;
    &amp;lt;body&amp;gt;
 &amp;lt;div class="row mt-3"&amp;gt;
   &amp;lt;div class="mx-auto col-10 col-md-8 col-lg-6"&amp;gt;
        &amp;lt;!-- This form submits with the POST method --&amp;gt;
        &amp;lt;form action="&amp;lt;?=$_SERVER["PHP_SELF"]?&amp;gt;" method="POST" autocomplete="off"&amp;gt;
            &amp;lt;div class="row mb-3"&amp;gt;
    &amp;lt;div class="col"&amp;gt;
                 &amp;lt;input type="text" name="title" class="form-control" placeholder="Enter Title" /&amp;gt;
  &amp;lt;/div&amp;gt;
     &amp;lt;/div&amp;gt;
            &amp;lt;div class="row mb-3"&amp;gt;
    &amp;lt;div class="col"&amp;gt;
                 &amp;lt;textarea name="body" rows="8" class="form-control" placeholder="Enter Body content"&amp;gt;&amp;lt;/textarea&amp;gt;
             &amp;lt;/div&amp;gt;
     &amp;lt;/div&amp;gt;
            &amp;lt;div class="row mb-3"&amp;gt;
    &amp;lt;div class="col"&amp;gt;
                 &amp;lt;input type="text" name="tags" class="form-control" placeholder="Enter comma separated Tags" /&amp;gt;
             &amp;lt;/div&amp;gt;
     &amp;lt;/div&amp;gt;    
    &amp;lt;div class="row mb-3"&amp;gt;
    &amp;lt;div class="col"&amp;gt;
                 &amp;lt;input type="submit" class="btn btn-primary"  value="Create" /&amp;gt;
             &amp;lt;/div&amp;gt;
    &amp;lt;/div&amp;gt;
 &amp;lt;/form&amp;gt;
 &amp;lt;/div&amp;gt;
 &amp;lt;/div&amp;gt;
 &amp;lt;!-- We are going to skip validation checks --&amp;gt;
 &amp;lt;script src="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/js/bootstrap.bundle.min.js" integrity="sha384-geWF76RCwLtnZ8qwWowPQNguL3RmwHVBC9FhGdlKrxdiJJigb/j/68SIy3Te4Bkz" crossorigin="anonymous"&amp;gt;&amp;lt;/script&amp;gt;
    &amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s analyze the code above. The PHP code before the HTML doctype declaration is divided into three parts:&lt;br&gt;
i) As on the search page, we include the Elasticsearch client in this script.&lt;br&gt;
ii) If the page is submitted with all the form fields filled in, we perform an insert operation on the “blog” index in Elasticsearch.&lt;br&gt;
iii) The result of the operation is stored in the “indexed” variable. If it is not empty, we display a success message using the Bootstrap alert component.&lt;/p&gt;
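
&lt;p&gt;The insert operation described in part (ii) might look roughly like the sketch below. This is an assumption of the general shape, not the article’s exact code: the client namespace varies with the elasticsearch-php version, and the host address is a placeholder for your own setup. The index name “blog” and the form field names match those used in this article.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&amp;lt;?php
require 'vendor/autoload.php';

use Elasticsearch\ClientBuilder;

// Build the client (host is a placeholder; adjust for your environment)
$client = ClientBuilder::create()-&amp;gt;setHosts(['localhost:9200'])-&amp;gt;build();

$indexed = null;
if (isset($_POST['title'], $_POST['body'], $_POST['tags'])) {
    $params = [
        'index' =&amp;gt; 'blog',   // same index the search page queries
        'body'  =&amp;gt; [
            'title' =&amp;gt; $_POST['title'],
            'body'  =&amp;gt; $_POST['body'],
            // split the comma separated tags into an array
            'tags'  =&amp;gt; array_map('trim', explode(',', $_POST['tags'])),
        ],
    ];
    // A non-empty response in $indexed signals a successful insert
    $indexed = $client-&amp;gt;index($params);
}
&lt;/code&gt;&lt;/pre&gt;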

&lt;p&gt;You can then go back to the search screen (index.php) in your browser and try searching for something. Below is a screenshot of the results for a generic search on the character ‘a’:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_GfAl6GI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uzknzep7b8vcenvbp75i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_GfAl6GI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uzknzep7b8vcenvbp75i.png" alt="Web interface of Search" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Git repo for this article can be found here.&lt;/p&gt;

&lt;p&gt;That’s it! Hope you found the article useful.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Happy Coding!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you found this post helpful, please like, share and follow me. I am a developer transitioning to DevOps, not a writer, so each article is a big leap outside my comfort zone.&lt;/p&gt;

&lt;p&gt;If you need a technical writer or a DevOps engineer, do connect with me on LinkedIn: &lt;a href="https://www.linkedin.com/in/mubin-khalife/"&gt;https://www.linkedin.com/in/mubin-khalife/&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Thank you for reading and for your support!&lt;/p&gt;

</description>
      <category>elasticsearch</category>
      <category>php</category>
      <category>vagrant</category>
      <category>ubuntu</category>
    </item>
  </channel>
</rss>
