<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sri Vishnuvardhan A</title>
    <description>The latest articles on DEV Community by Sri Vishnuvardhan A (@vishnuswmech).</description>
    <link>https://dev.to/vishnuswmech</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F603300%2F74e195ea-cf41-4513-9ec9-4591b67832e6.png</url>
      <title>DEV Community: Sri Vishnuvardhan A</title>
      <link>https://dev.to/vishnuswmech</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/vishnuswmech"/>
    <language>en</language>
    <item>
      <title>Face Recognition System for provisioning AWS instance using Terraform scripts and generating Mail &amp; WhatsApp alerts</title>
      <dc:creator>Sri Vishnuvardhan A</dc:creator>
      <pubDate>Thu, 24 Jun 2021 09:52:47 +0000</pubDate>
      <link>https://dev.to/vishnuswmech/face-recognition-system-for-launching-aws-instance-using-terraform-and-generating-mail-whatsapp-alerts-5hk2</link>
      <guid>https://dev.to/vishnuswmech/face-recognition-system-for-launching-aws-instance-using-terraform-and-generating-mail-whatsapp-alerts-5hk2</guid>
      <description>&lt;p&gt;In this article, We are going to know how to create a face recognition system by CV2 and use it for performing some automation tasks in &lt;strong&gt;Real-time&lt;/strong&gt; such as Provisioning AWS instance and its dependencies such as VPC, Subnet, Security Groups, Route tables, Internet Gateway, and 5GB EBS Volume which is done by triggered Terraform scripts.&lt;/p&gt;

&lt;p&gt;We will also use Python to generate WhatsApp and email &lt;strong&gt;alerts&lt;/strong&gt; whenever a face is &lt;strong&gt;recognized&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technologies used:&lt;/strong&gt; Python CV2 and Terraform&lt;/p&gt;

&lt;h1&gt;
  
  
  Contents Overview:
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Model1&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data Collection&lt;/li&gt;
&lt;li&gt;Model Training&lt;/li&gt;
&lt;li&gt;Face Recognition&lt;/li&gt;
&lt;li&gt;Running Terraform Scripts to provision AWS Instance&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;Model2&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data Collection&lt;/li&gt;
&lt;li&gt;Model Training&lt;/li&gt;
&lt;li&gt;Face Recognition&lt;/li&gt;
&lt;li&gt;Generating WhatsApp and Email Alerts&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Data Collection
&lt;/h2&gt;

&lt;p&gt;The first step in any machine learning workflow is data collection; without data, nothing can be achieved. In this step, we collect &lt;strong&gt;100 samples&lt;/strong&gt; of my face, each &lt;strong&gt;200 x 200 pixels&lt;/strong&gt;, and store them in a folder named Vishnu. &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_Wvu0Zm6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3167d2nmd2ssl8kj9wwf.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_Wvu0Zm6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3167d2nmd2ssl8kj9wwf.PNG" alt="11"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The folder name is crucial: it becomes the &lt;strong&gt;object name/face name&lt;/strong&gt; that our model reports later, so give the folder a person's name.&lt;/p&gt;

&lt;p&gt;Here, we use the haarcascade_frontalface_default.xml file as the face detector.&lt;/p&gt;
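&lt;p&gt;A minimal sketch of this collection step might look as follows (an illustration, not the article's exact script; it assumes OpenCV is installed via pip install opencv-python and a webcam is available):&lt;/p&gt;

```python
# Sketch of the data-collection step. The folder name "Vishnu" becomes
# the label the trained model reports later.
import os

SAMPLES = 100        # the article collects 100 face crops
SIZE = (200, 200)    # each crop is stored at 200 x 200 pixels

def sample_path(folder, index):
    """Path where the index-th cropped face is saved, e.g. Vishnu/7.jpg."""
    return os.path.join(folder, "{}.jpg".format(index))

def collect_samples(folder="Vishnu", camera=0):
    import cv2  # imported here so the pure helpers above work without OpenCV
    os.makedirs(folder, exist_ok=True)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(camera)
    count = 0
    while True:
        if count >= SAMPLES:
            break
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
            face = cv2.resize(gray[y:y + h, x:x + w], SIZE)
            cv2.imwrite(sample_path(folder, count), face)
            count += 1
    cap.release()
```

&lt;p&gt;The Haar cascade crops each detected face, resizes it to 200 x 200 pixels, and writes it into the folder until 100 samples exist.&lt;/p&gt;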

&lt;h2&gt;
  
  
  Model Training
&lt;/h2&gt;

&lt;p&gt;In this phase, the model is built with the &lt;strong&gt;LBPHFaceRecognizer&lt;/strong&gt; algorithm and trained to recognize my face whenever it appears, whether in real-time video or in an image.&lt;/p&gt;

&lt;p&gt;Once training finishes, the model is ready and can identify or recognize my face.&lt;/p&gt;
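&lt;p&gt;A hedged sketch of the training phase is below (it assumes the opencv-contrib-python package, which provides the cv2.face module; the folder and model file names are placeholders):&lt;/p&gt;

```python
# Minimal LBPH training sketch: load every collected face crop, assign
# all of them a single numeric label, train, and persist the model.
import os

def load_dataset(folder):
    """Return (paths, labels); every image in the folder gets label 0."""
    paths = sorted(os.path.join(folder, name)
                   for name in os.listdir(folder) if name.endswith(".jpg"))
    return paths, [0] * len(paths)

def train(folder="Vishnu", model_path="vishnu_model.yml"):
    import cv2        # needs opencv-contrib-python for cv2.face
    import numpy as np
    paths, labels = load_dataset(folder)
    faces = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in paths]
    model = cv2.face.LBPHFaceRecognizer_create()
    model.train(faces, np.asarray(labels))
    model.write(model_path)   # persist so the recognition step can reload it
    return model
```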

&lt;h2&gt;
  
  
  Face Recognition
&lt;/h2&gt;

&lt;p&gt;The final step is face recognition. The desired output: the model recognizes my face, displays the confidence score, says "Hi Vishnu", and asks me to hit Enter to perform further operations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--r2zyhH2V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0h5ousvm1xmpdy5koj6z.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--r2zyhH2V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0h5ousvm1xmpdy5koj6z.PNG" alt="fr"&gt;&lt;/a&gt;&lt;br&gt;
The above figure shows that our model predicted successfully. When we press Enter, our Terraform scripts run automatically and create the VPC, subnet, security groups, route tables, internet gateway, and 5 GB EBS volume, and finally provision the AWS instance.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RVY1X0is--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h6v12anmxhu0jwxrpgm0.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RVY1X0is--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h6v12anmxhu0jwxrpgm0.PNG" alt="2 success"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  EC2 Dashboard
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--D8HvS0Tu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fjnyosprmuj3sleuwfha.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--D8HvS0Tu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fjnyosprmuj3sleuwfha.PNG" alt="4"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  VPC Dashboard
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DvOonLrj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eqqkb7yyzwlhzaxx6bga.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DvOonLrj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eqqkb7yyzwlhzaxx6bga.PNG" alt="5"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Model2
&lt;/h2&gt;

&lt;p&gt;Now we create a second model to demonstrate automatic email and WhatsApp alerts when a face is detected and recognized. This time we use the face of actor Vijay.&lt;/p&gt;

&lt;h2&gt;
  
  
  Data Collection
&lt;/h2&gt;

&lt;p&gt;Same procedure: samples are collected and stored in another folder, named "Vijay", to train the model.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Y5GmybqY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mu9pnnnqizl607uv12kc.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Y5GmybqY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mu9pnnnqizl607uv12kc.PNG" alt="Capture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Model Training
&lt;/h2&gt;

&lt;p&gt;The model is trained on the samples of actor Vijay's face using the &lt;strong&gt;LBPH (Local Binary Pattern Histogram)&lt;/strong&gt; algorithm.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--b4iotEcO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gn0ch1kuruh84h72vojw.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--b4iotEcO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gn0ch1kuruh84h72vojw.PNG" alt="v2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The figure above shows that the model trained successfully and is now capable of face recognition.&lt;/p&gt;

&lt;h2&gt;
  
  
  Face Recognition
&lt;/h2&gt;

&lt;p&gt;When Vijay's face is recognized, the program shows the confidence score, says "Hi Vijay", and asks us to hit Enter to perform further operations. When we hit Enter, it sends email and WhatsApp alert messages to the configured mail ID and WhatsApp number.&lt;/p&gt;
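&lt;p&gt;The alert step can be sketched as below. smtplib is part of the Python standard library; for WhatsApp the article does not name a library, so pywhatkit is shown purely as one hypothetical choice. All addresses, numbers, and credentials are placeholders:&lt;/p&gt;

```python
# Build and send the alert email with the standard library; the WhatsApp
# helper uses pywhatkit as one assumed choice of library.
import smtplib
from email.message import EmailMessage

def build_alert(sender, recipient, name="Vijay"):
    msg = EmailMessage()
    msg["Subject"] = "Face recognized: {}".format(name)
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content("Hi, {}'s face was just recognized by the model.".format(name))
    return msg

def send_email(msg, password, host="smtp.gmail.com", port=465):
    # SMTP over SSL; for Gmail this needs an app password (assumption).
    with smtplib.SMTP_SSL(host, port) as smtp:
        smtp.login(msg["From"], password)
        smtp.send_message(msg)

def send_whatsapp(number, text):
    import pywhatkit  # assumed library; it drives WhatsApp Web in a browser
    pywhatkit.sendwhatmsg_instantly(number, text)
```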

&lt;h3&gt;
  
  
  WhatsApp Alert
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jSuCLpLy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nj3v4ljjkvd2eg6yvtoa.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jSuCLpLy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nj3v4ljjkvd2eg6yvtoa.PNG" alt="ww"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Email Alert
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JM_gHQcN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e4oc68gsh5nv2ewqzebj.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JM_gHQcN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e4oc68gsh5nv2ewqzebj.PNG" alt="vv"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The output will be:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6i54RUKg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5r67s05j2dad784a368e.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6i54RUKg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5r67s05j2dad784a368e.PNG" alt="ffffffffffffffffffff"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The figures above show that we got the desired output: our model was trained successfully and predicted correctly.&lt;/p&gt;

&lt;p&gt;If you want to experiment, feel free to download the code from GitHub by clicking &lt;a href="https://github.com/vishnuswmech/Face-Recognition-System-for-launching-AWS-instance-using-Terraform-and-generating-Mail-WhatsApp-al.git"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;That's it. Thank you for reading. Stay tuned for more interesting articles!!&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>aws</category>
      <category>cnn</category>
      <category>terraform</category>
      <category>cv2</category>
    </item>
    <item>
      <title>JavaScript: The Company use cases</title>
      <dc:creator>Sri Vishnuvardhan A</dc:creator>
      <pubDate>Tue, 22 Jun 2021 16:25:58 +0000</pubDate>
      <link>https://dev.to/vishnuswmech/javascript-the-company-use-cases-1n1g</link>
      <guid>https://dev.to/vishnuswmech/javascript-the-company-use-cases-1n1g</guid>
      <description>&lt;p&gt;JavaScript is high-level, often just-in-time compiled, and multi-paradigm. It has curly-bracket syntax, dynamic typing, prototype-based object-orientation, and first-class functions.&lt;/p&gt;

&lt;p&gt;JavaScript is used on roughly 95% of the more than 1.6 billion websites on the Internet.&lt;/p&gt;

&lt;p&gt;Common examples of JavaScript uses and applications include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Presentations&lt;/li&gt;
&lt;li&gt;Web Development&lt;/li&gt;
&lt;li&gt;Server Applications&lt;/li&gt;
&lt;li&gt;Web Applications&lt;/li&gt;
&lt;li&gt;Games&lt;/li&gt;
&lt;li&gt;Mobile Applications &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Microsoft
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sS8x2PDV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qrf6ovpf6oz11k678t4r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sS8x2PDV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qrf6ovpf6oz11k678t4r.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Microsoft works closely with JavaScript to build its Edge web browser. Every browser needs to process and execute JavaScript efficiently, so Microsoft developed and maintains its own JavaScript engine for Edge.&lt;/p&gt;

&lt;h2&gt;
  
  
  PayPal
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--74b7tZAW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/044x8i7aoxlctquednez.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--74b7tZAW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/044x8i7aoxlctquednez.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;PayPal has obviously been using JavaScript on the frontend of its website for a long time, but that's only the beginning.&lt;/p&gt;

&lt;p&gt;The online payment giant was one of the earliest adopters of NodeJS. During an overhaul of its account overview page, PayPal decided to try building the page in Node in parallel with its usual Java development.&lt;/p&gt;

&lt;h2&gt;
  
  
  Uber
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xfiro_Hh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3jehd5dv7i5wa1wvlbvw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xfiro_Hh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3jehd5dv7i5wa1wvlbvw.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Uber has to handle huge amounts of data in real time. Millions of requests come in continuously, and they are not just page hits: Uber tracks driver locations, rider locations, and incoming ride requests.&lt;/p&gt;

&lt;p&gt;It has to seamlessly sort that data and match riders with drivers as quickly as possible.&lt;/p&gt;

&lt;h2&gt;
  
  
  LinkedIn
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hsAT0GBm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j86gqn9ue2m2kp7xw69a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hsAT0GBm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j86gqn9ue2m2kp7xw69a.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;LinkedIn relies on NodeJS for its mobile site. A few years back, LinkedIn used Rails for it, and like many other large Rails applications, it was slow, monolithic, and scaled poorly.&lt;/p&gt;

&lt;p&gt;LinkedIn switched to &lt;strong&gt;NodeJS&lt;/strong&gt; to solve its scaling problems. Node's asynchronous capabilities allowed the LinkedIn mobile site to perform faster than before while using fewer resources.&lt;/p&gt;

&lt;p&gt;These are some of the companies that use JavaScript, especially for the user-interface side of their products. But the list is hardly limited to these few: millions of companies rely on it.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I hope this article gave you some insight into JavaScript and its real-life use cases. Have a good day!!&lt;/p&gt;
&lt;/blockquote&gt;

</description>
    </item>
    <item>
      <title>Setting up Fast &amp; Secure Globally scaled Content Delivery Network with high availability using AWS Cloudfront &amp; S3</title>
      <dc:creator>Sri Vishnuvardhan A</dc:creator>
      <pubDate>Wed, 16 Jun 2021 02:16:25 +0000</pubDate>
      <link>https://dev.to/vishnuswmech/setting-up-fast-secure-globally-scaled-content-delivery-network-with-high-availability-using-aws-cloudfront-12ln</link>
      <guid>https://dev.to/vishnuswmech/setting-up-fast-secure-globally-scaled-content-delivery-network-with-high-availability-using-aws-cloudfront-12ln</guid>
      <description>&lt;p&gt;In this article, we are going to see about what is the Content Delivery Network, why we need them, what are its use cases and then finally we are going to set up our own Content Delivery Network with Fast, Secure and high availability using one of the most powerful services provided by AWS namely Cloudfront.&lt;/p&gt;

&lt;h2&gt;
  
  
  Content Delivery Network
&lt;/h2&gt;

&lt;p&gt;A Content Delivery Network (CDN) is a globally distributed network of web servers whose purpose is to provide faster content delivery. The content is replicated and stored throughout the CDN so the user can access the data that is stored at a location that is geographically closest to the user.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fptkuvtwyujlug7c4laso.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fptkuvtwyujlug7c4laso.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is different and more efficient than the traditional method of storing content on just one, central server. A client accesses a copy of the data near to the client, as opposed to all clients accessing the same central server, in order to avoid bottlenecks near that server.&lt;/p&gt;

&lt;p&gt;High content loading speed == positive user experience&lt;/p&gt;

&lt;h2&gt;
  
  
  CDN Architecture model
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxo1jdtu78lutj7zvsis0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxo1jdtu78lutj7zvsis0.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above figure illustrates the typical CDN model. The first time a user requests content, the request goes to the content provider; the provider then sends its copy of the document, known as the source, to the &lt;strong&gt;CDN&lt;/strong&gt;, where it is stored as digital information: created, licensed, and ready for distribution to an end user. &lt;/p&gt;

&lt;p&gt;If the user requests the content again, it is served from the CDN location geographically closest to the user, &lt;strong&gt;not&lt;/strong&gt; from the content provider. This is how we reduce &lt;strong&gt;latency&lt;/strong&gt; and ensure &lt;strong&gt;high availability.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits of CDN over Traditional method
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;CDN enables global reach&lt;/li&gt;
&lt;li&gt;100% availability&lt;/li&gt;
&lt;li&gt;Your reliability and response times get a huge boost&lt;/li&gt;
&lt;li&gt;Decrease server load&lt;/li&gt;
&lt;li&gt;Analytics&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Use-Cases of CDN
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Optimized file delivery for emerging startups&lt;/li&gt;
&lt;li&gt;Fast and secure E-Commerce&lt;/li&gt;
&lt;li&gt;Netflix-grade video streaming&lt;/li&gt;
&lt;li&gt;Software Distribution, Game Delivery and IoT &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  CloudFront
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsvxm66yxeyyx5pdxec0a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsvxm66yxeyyx5pdxec0a.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
                     &lt;strong&gt;credits:&lt;/strong&gt; whizlabs&lt;/p&gt;

&lt;p&gt;Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment. CloudFront can use an S3 bucket as the origin for its content delivery network.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdih89tvfmufp9eg8n24o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdih89tvfmufp9eg8n24o.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;CloudFront uses &lt;strong&gt;Edge locations&lt;/strong&gt; to store the cached data. AWS currently spans &lt;strong&gt;77&lt;/strong&gt; Availability Zones within &lt;strong&gt;24&lt;/strong&gt; geographic regions around the world and has announced plans for &lt;strong&gt;18&lt;/strong&gt; more Availability Zones and &lt;strong&gt;6&lt;/strong&gt; more AWS Regions in Australia, India, Indonesia, Japan, Spain, and Switzerland.&lt;br&gt;
AWS uses &lt;strong&gt;DNS&lt;/strong&gt; to find the nearest data centers for storing the &lt;strong&gt;caches&lt;/strong&gt;. Data comes from &lt;strong&gt;Origin&lt;/strong&gt; to &lt;strong&gt;Edge location&lt;/strong&gt; and &lt;strong&gt;Edge location&lt;/strong&gt; to our &lt;strong&gt;PC&lt;/strong&gt;.&lt;/p&gt;
&lt;h1&gt;
  
  
  Practicals
&lt;/h1&gt;

&lt;p&gt;Now I am going to show you how to set up your own custom Content Delivery Network for content such as images, videos, etc. &lt;/p&gt;
&lt;h2&gt;
  
  
  Pre-requisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;AWS account&lt;/li&gt;
&lt;li&gt;AWS instance&lt;/li&gt;
&lt;li&gt;EBS volume&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To save time, I have already launched an Amazon EC2 instance and created a 10 GB EBS volume attached to it.&lt;/p&gt;
&lt;h2&gt;
  
  
  Steps
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Installing HTTPD server&lt;/li&gt;
&lt;li&gt;Making its document root persistent&lt;/li&gt;
&lt;li&gt;Storing the content in S3&lt;/li&gt;
&lt;li&gt;Deploying it through CloudFront&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Installing HTTPD server
&lt;/h2&gt;

&lt;p&gt;The package manager YUM comes preinstalled on Amazon Linux 2, so run the following commands to configure the HTTPD server on the instance.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;yum install httpd -y&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Then we have to start the service.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;systemctl start httpd&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;We also enable the service so that we need not start it again after every reboot.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;systemctl enable httpd&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Making its document root persistent
&lt;/h2&gt;

&lt;p&gt;Amazon Linux 2 behaves like RHEL, so the document root of the HTTPD server is &lt;strong&gt;/var/www/html.&lt;/strong&gt; The document root is the location from which the HTTPD server &lt;strong&gt;reads&lt;/strong&gt; the webpage and serves it. &lt;/p&gt;

&lt;p&gt;We have to make the document root persistent so its data is not lost if the OS crashes.&lt;br&gt;
For that, have the previously created EBS volume attached, partitioned, and formatted (here the partition is assumed to be /dev/xvdf1), then run the following command.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;mount /dev/xvdf1 /var/www/html&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7xqrjtqml7gctil5413u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7xqrjtqml7gctil5413u.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Simple Storage Service(S3)
&lt;/h2&gt;

&lt;p&gt;Since the origin for CloudFront is S3, we have to set up S3 first so that we can get the origin domain name for CloudFront. In S3, folders are called &lt;strong&gt;buckets&lt;/strong&gt; and files are called &lt;strong&gt;objects.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa6c1m8i5bxwr6aigzdjm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa6c1m8i5bxwr6aigzdjm.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first step is to create a bucket using the following command syntax.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws s3 mb s3://bucketname&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;The second step is to move or copy the objects into the bucket using the following command syntax.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws s3 mv object-location s3://bucketname&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;By default, public access to S3 buckets is blocked. We have to open up access by running the following commands (shown here in a Windows Command Prompt).&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws s3api put-public-access-block --bucket bucketname --public-access-block-configuration "BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false"&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;



&lt;p&gt;&lt;code&gt;aws s3api put-object-acl --acl "public-read" --bucket "bucketname"  --key objectname&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;
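&lt;p&gt;The same S3 steps can also be scripted with boto3 (a sketch, assuming AWS credentials are already configured via aws configure; the bucket and object names are placeholders):&lt;/p&gt;

```python
# boto3 equivalent of the CLI steps: make a bucket, upload the object,
# unblock public access, and mark the object public-read.
def object_url(bucket, key):
    """Public URL of an object; the address later swapped for CloudFront's."""
    return "https://{}.s3.amazonaws.com/{}".format(bucket, key)

def publish(bucket, filename, key):
    import boto3
    s3 = boto3.client("s3")
    s3.create_bucket(Bucket=bucket)                # aws s3 mb
    s3.upload_file(filename, bucket, key)          # aws s3 mv/cp
    s3.put_public_access_block(                    # unblock public access
        Bucket=bucket,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": False, "IgnorePublicAcls": False,
            "BlockPublicPolicy": False, "RestrictPublicBuckets": False})
    s3.put_object_acl(ACL="public-read", Bucket=bucket, Key=key)
    return object_url(bucket, key)
```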

&lt;p&gt;That completes the S3 setup; the HTML code of the webpage is shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzep9diuhubkv5p5vvfxg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzep9diuhubkv5p5vvfxg.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  CloudFront
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgexgv5kscejlnmki0v3v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgexgv5kscejlnmki0v3v.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the S3 origin ready, we now set up CloudFront by creating a distribution that uses the origin domain name from S3 and the object name as the default root object.&lt;/p&gt;

&lt;p&gt;We have to run the following command to create a distribution.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws cloudfront create-distribution --origin-domain-name vishnu1234.s3.amazonaws.com --default-root-object Scam-1992.jpg&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;After creating the distribution, CloudFront returns a domain name; we paste that domain name into the HTML code in place of the S3 domain name.&lt;/p&gt;

&lt;p&gt;Finally, our &lt;strong&gt;output&lt;/strong&gt; will be…&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd590ng3o3cpgo0ceflsd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd590ng3o3cpgo0ceflsd.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Thank you all for your patience in reading this article. Stay tuned for my upcoming interesting articles. Have a good day.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>aws</category>
      <category>s3</category>
      <category>cloudfront</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Network Topology Setup in such a way that System A can ping to two Systems B &amp; C but both systems not able to ping each other</title>
      <dc:creator>Sri Vishnuvardhan A</dc:creator>
      <pubDate>Fri, 11 Jun 2021 04:58:36 +0000</pubDate>
      <link>https://dev.to/vishnuswmech/network-topology-setup-in-such-a-way-that-system-a-can-ping-to-two-systems-b-c-but-both-systems-not-able-to-ping-each-other-1f0p</link>
      <guid>https://dev.to/vishnuswmech/network-topology-setup-in-such-a-way-that-system-a-can-ping-to-two-systems-b-c-but-both-systems-not-able-to-ping-each-other-1f0p</guid>
      <description>&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; We are not going to use any firewall, since it is simple and everyone aware of this, so we are going to use unique way to achieve this topology setup.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Concepts used:&lt;/strong&gt; Routing tables and Netmask&lt;/p&gt;

&lt;p&gt;In this practical, we use three Red Hat Enterprise Linux virtual machines hosted on Oracle VirtualBox.&lt;/p&gt;

&lt;p&gt;I already explained the routing table and netmask concepts in my previous blog; if you want those basics, please refer to &lt;a href="https://dev.to/vishnuswmech/setting-up-your-own-network-that-can-ping-google-but-not-able-to-ping-facebook-in-the-same-system-without-using-a-firewall-1c8p"&gt;this&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Before starting these VMs, make sure each is connected in Host-only Adapter mode and that the adapter name is the same in all three VMs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mR7966Z4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fe5z56jwfpicju3olq0a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mR7966Z4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fe5z56jwfpicju3olq0a.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Environment Setup
&lt;/h2&gt;

&lt;h3&gt;
  
  
  System A
&lt;/h3&gt;

&lt;p&gt;The first step is to change the IP and set the Netmask by running the following command.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ifconfig enp0s3 162.168.1.1/29&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9-2fI1RQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pzb1zotgnaqycapex1oh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9-2fI1RQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pzb1zotgnaqycapex1oh.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
The second step is to create the route rule by running the following command.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;route add -net 162.168.1.0 netmask 255.255.255.248 enp0s3&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uhOTxPd4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hxwqzpillvaj48jhr2mq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uhOTxPd4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hxwqzpillvaj48jhr2mq.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  System B
&lt;/h3&gt;

&lt;p&gt;The first step is to change the IP and set the Netmask by running the following command.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ifconfig enp0s3 162.168.1.2/31&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aSu5KEL0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rs4hm9gom94co3pc786p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aSu5KEL0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rs4hm9gom94co3pc786p.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
The second step is to create the route rule by running the following command.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;route add -net 162.168.1.0 netmask 255.255.255.254 enp0s3&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zQy-58uy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7lz7oaabi2ltqukz76sk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zQy-58uy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7lz7oaabi2ltqukz76sk.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  System C
&lt;/h3&gt;

&lt;p&gt;The first step is to change the IP and set the Netmask by running the following command.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ifconfig enp0s3 162.168.1.4/31&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--u3dfxGqs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qy4sy5czqctyo2ofggxp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--u3dfxGqs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qy4sy5czqctyo2ofggxp.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
The second step is to create the route rule by running the following command.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;route add -net 162.168.1.0 netmask 255.255.255.254 enp0s3&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--M699txvk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q9yhhzsg5akiqj79amhp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--M699txvk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q9yhhzsg5akiqj79amhp.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Ping Checking
&lt;/h3&gt;

&lt;p&gt;Now that the environment setup is complete, we have to run the ping command to check the status of network connectivity.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pinging System B from System A
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qd39mDkq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6u959kszum1usd4s4bmc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qd39mDkq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6u959kszum1usd4s4bmc.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Pinging System A from System B
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5NzmkIxf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xbfmnu6wnq6oqk5mkplk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5NzmkIxf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xbfmnu6wnq6oqk5mkplk.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Pinging System C from System A
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IZP5twRQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jiwugp146foyqhva20cd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IZP5twRQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jiwugp146foyqhva20cd.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Pinging System A from System C
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--r-6Z96DB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p79xh5bsl85rdofilk9l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--r-6Z96DB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p79xh5bsl85rdofilk9l.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Pinging System B from System C
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--chOSTtDj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/utortk5s43w9qp73vq39.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--chOSTtDj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/utortk5s43w9qp73vq39.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Pinging System C from System B
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EviVriTm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f04weibaq73zijvmhbzl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EviVriTm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f04weibaq73zijvmhbzl.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How is this Possible?
&lt;/h2&gt;

&lt;p&gt;In simple words, this behavior comes from the routing tables, which use the Netmask to decide the reachable IP range.&lt;/p&gt;

&lt;p&gt;Let's take a look at the routing rule in System A&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;route add -net 162.168.1.0 netmask 255.255.255.248 enp0s3&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Here, the Netmask 255.255.255.248 defines the range of IPs that can reach each other.&lt;br&gt;
In binary form, it looks like this.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;11111111 11111111 11111111 11111000&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The last &lt;strong&gt;three bits&lt;/strong&gt; (where the zeros are located) allow &lt;strong&gt;8 combinations&lt;/strong&gt; (2^3 = 8), and this decides the number of IPs (the IP range) that can connect to each other.&lt;/p&gt;

&lt;p&gt;So the range of IPs that can reach each other is &lt;strong&gt;162.168.1.0–162.168.1.7.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So, in this way, System A &lt;strong&gt;can ping&lt;/strong&gt; System B (162.168.1.2) and System C (162.168.1.4).&lt;/p&gt;
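&lt;p&gt;The arithmetic above can be double-checked with a short shell calculation (a minimal sketch for illustration, not part of the original setup):&lt;/p&gt;

```shell
# For a /29 (netmask 255.255.255.248) the last 3 mask bits are zero,
# so 2^3 = 8 addresses share the network 162.168.1.0.
prefix=29
count=$(( 1 << (32 - prefix) ))   # number of addresses in the block
last=$(( count - 1 ))             # highest final octet in the range
echo "/$prefix holds $count addresses: 162.168.1.0 - 162.168.1.$last"
```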

&lt;p&gt;Now take a look at the routing rule in System B.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;route add -net 162.168.1.0 netmask 255.255.255.254 enp0s3&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;The netmask is specified as 255.255.255.254.&lt;br&gt;
In binary form, it looks like this.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;11111111 11111111 11111111 11111110&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The last &lt;strong&gt;bit&lt;/strong&gt; (where the zero is located) allows only &lt;strong&gt;2 combinations&lt;/strong&gt; (2^1 = 2), and this decides the number of IPs (the IP range) that can connect to each other.&lt;/p&gt;

&lt;p&gt;So the range of IPs that are connected to each other contains only &lt;strong&gt;two&lt;/strong&gt; addresses, namely &lt;strong&gt;162.168.1.0 and 162.168.1.1&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In this way, System B &lt;strong&gt;can ping&lt;/strong&gt; System A. Note that the IP of System C (162.168.1.4) is &lt;strong&gt;not&lt;/strong&gt; in this range, which is why that ping fails.&lt;/p&gt;

&lt;p&gt;Finally, take a look at the routing rule in System C.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;route add -net 162.168.1.0 netmask 255.255.255.254 enp0s3&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Here, the Netmask specified is again 255.255.255.254, so the last &lt;strong&gt;bit&lt;/strong&gt; (where the zero is located) allows &lt;strong&gt;2 combinations&lt;/strong&gt;, which decides the number of IPs (the IP range) that can &lt;strong&gt;connect&lt;/strong&gt; to each other.&lt;/p&gt;

&lt;p&gt;So, again, the range of IPs that are connected to each other contains only &lt;strong&gt;two&lt;/strong&gt; addresses, namely &lt;strong&gt;162.168.1.0 and 162.168.1.1&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In this way, System C can &lt;strong&gt;ping&lt;/strong&gt; System A. Note that the IP of System B (162.168.1.2) is &lt;strong&gt;not in this range&lt;/strong&gt;, which is why that ping fails.&lt;/p&gt;
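&lt;p&gt;This reasoning can also be verified numerically: a route for 162.168.1.0 with netmask 255.255.255.254 (/31) only covers the addresses whose first 31 bits match the network, i.e. .0 and .1. A minimal sketch (the &lt;code&gt;ip_to_int&lt;/code&gt; helper is just for illustration):&lt;/p&gt;

```shell
# Convert a dotted-quad IP into a 32-bit integer.
ip_to_int() { IFS=. read -r a b c d <<< "$1"; echo $(( (a<<24) | (b<<16) | (c<<8) | d )); }

net=$(ip_to_int 162.168.1.0)
mask=$(( 0xFFFFFFFF ^ 1 ))   # 255.255.255.254: all bits set except the last

# An address is covered by the route when (ip & mask) == network.
for ip in 162.168.1.1 162.168.1.2 162.168.1.4; do
  if [ $(( $(ip_to_int "$ip") & mask )) -eq "$net" ]; then
    echo "$ip is covered by the route"
  else
    echo "$ip is NOT covered by the route"
  fi
done
```

Only 162.168.1.1 (System A) is covered, which matches the ping results above.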

&lt;p&gt;That's it. I hope this practical helps you understand the concepts of Netmask and Routing tables and their use cases in Networking. Stay tuned for my next article!!&lt;/p&gt;

</description>
      <category>networking</category>
      <category>netmask</category>
      <category>route</category>
      <category>linux</category>
    </item>
    <item>
      <title>Setting up your own High Availability managed WordPress hosting using Amazon RDS</title>
      <dc:creator>Sri Vishnuvardhan A</dc:creator>
      <pubDate>Fri, 11 Jun 2021 04:27:10 +0000</pubDate>
      <link>https://dev.to/vishnuswmech/setting-up-your-own-high-availability-managed-wordpress-hosting-using-amazon-rds-55c2</link>
      <guid>https://dev.to/vishnuswmech/setting-up-your-own-high-availability-managed-wordpress-hosting-using-amazon-rds-55c2</guid>
      <description>&lt;p&gt;Hosting your own WordPress website is interesting right!! Ok, come on let's do it!!&lt;/p&gt;

&lt;p&gt;We are going to do this practical from scratch: from the creation of our own &lt;strong&gt;VPC&lt;/strong&gt;, Subnets, Internet Gateway, and Route tables to the deployment of &lt;strong&gt;WordPress.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here, We are going to use Amazon Web Service’s RDS service for hosting our own WordPress site. Before that, let's take a look at a basic introduction to RDS service.&lt;/p&gt;

&lt;p&gt;Amazon Relational Database Service is a distributed relational database service by Amazon Web Services (AWS). It is a web service running in the cloud designed to simplify the setup, operation, and scaling of a relational database for use in applications.&lt;/p&gt;

&lt;p&gt;Administration processes like patching the database software, backing up databases and enabling point-in-time recovery are managed automatically.&lt;/p&gt;

&lt;h2&gt;
  
  
  Features of AWS RDS
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Lower administrative burden. Easy to use&lt;/li&gt;
&lt;li&gt;Performance. General Purpose (SSD) Storage&lt;/li&gt;
&lt;li&gt;Scalability. Push-button compute scaling&lt;/li&gt;
&lt;li&gt;Availability and durability. Automated backups&lt;/li&gt;
&lt;li&gt;Security. Encryption at rest and in transit&lt;/li&gt;
&lt;li&gt;Manageability. Monitoring and metrics&lt;/li&gt;
&lt;li&gt;Cost-effectiveness. Pay only for what you use&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Ok, let's jump into the practical part!! We will do this practical from scratch. Since it will be a long process, we have divided it into 5 small parts, namely:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creating a MySQL database with RDS&lt;/li&gt;
&lt;li&gt;Creating an EC2 instance&lt;/li&gt;
&lt;li&gt;Configuring your RDS database&lt;/li&gt;
&lt;li&gt;Configuring WordPress on EC2&lt;/li&gt;
&lt;li&gt;Deployment of WordPress website&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Creating a MySQL database with RDS
&lt;/h2&gt;

&lt;p&gt;Before that, we have to complete some pre-work, namely the creation of a Virtual Private Cloud (VPC), Subnets and Security Groups. These are important because, in order to have a reliable connection between WordPress and the MySQL database, they should be located in the &lt;strong&gt;same VPC&lt;/strong&gt; and should have the &lt;strong&gt;same Security Group.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Instances are launched inside Subnets, and RDS itself runs your MySQL database on an EC2 instance that we cannot see, since it is fully managed by AWS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lRE-XP4e--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0kdvmdm5yo0e9gn9xqfn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lRE-XP4e--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0kdvmdm5yo0e9gn9xqfn.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are going to create our own VPC. For that, we have to specify the IP range and CIDR; we specified the &lt;strong&gt;IP&lt;/strong&gt; and &lt;strong&gt;CIDR&lt;/strong&gt; as 192.168.0.0/16.&lt;br&gt;
What is CIDR? I explained this in detail in my previous blog. You can refer to it &lt;a href="https://dev.to/vishnuswmech/setting-up-your-own-network-that-can-ping-google-but-not-able-to-ping-facebook-in-the-same-system-without-using-a-firewall-1c8p"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FtTsmLUb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x4mr8fbq60rluszc9gri.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FtTsmLUb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x4mr8fbq60rluszc9gri.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's come to the point. After specifying the IP range and CIDR, enter your VPC name.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--M1mySwto--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zb11c029ese4882yc6x3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--M1mySwto--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zb11c029ese4882yc6x3.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, the VPC is successfully created with our specified details.&lt;br&gt;
The next step is to launch the subnet in the above VPC.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NetguBOz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3l7kpefg6tuzlmdzizn4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NetguBOz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3l7kpefg6tuzlmdzizn4.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For creating Subnets, you have to specify the VPC in which the subnet should be launched. We already have our own VPC named “myvpc123”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cyrQtfPG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t29qsxobardihqc7is95.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cyrQtfPG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t29qsxobardihqc7is95.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then we have to specify the Subnet IP range and CIDR. Please note that the &lt;strong&gt;Subnet range&lt;/strong&gt; should fall within the &lt;strong&gt;VPC range&lt;/strong&gt;; it should &lt;strong&gt;not exceed&lt;/strong&gt; it.&lt;/p&gt;
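&lt;p&gt;The "subnet must stay within the VPC range" rule can be checked with a bit of shell arithmetic (a sketch for illustration; 192.168.1.0/24 is a hypothetical subnet inside the 192.168.0.0/16 VPC):&lt;/p&gt;

```shell
# Convert a dotted-quad IP into a 32-bit integer.
ip_to_int() { IFS=. read -r a b c d <<< "$1"; echo $(( (a<<24) | (b<<16) | (c<<8) | d )); }

vpc_net=$(ip_to_int 192.168.0.0); vpc_prefix=16
vpc_mask=$(( (0xFFFFFFFF << (32 - vpc_prefix)) & 0xFFFFFFFF ))

# A subnet is inside the VPC CIDR when its network address, masked with
# the VPC prefix, equals the VPC network address.
subnet=$(ip_to_int 192.168.1.0)   # hypothetical subnet 192.168.1.0/24
if [ $(( subnet & vpc_mask )) -eq "$vpc_net" ]; then
  echo "192.168.1.0/24 is inside 192.168.0.0/16"
fi
```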

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4FAdR3Ud--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lfosjpjgdhma8czmm5ql.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4FAdR3Ud--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lfosjpjgdhma8czmm5ql.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To achieve High Availability, we have to launch a minimum of &lt;strong&gt;two&lt;/strong&gt; subnets, so that Amazon RDS can place its database across both; if one subnet fails, it won't cause any trouble.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--F3RiNrPB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q0n6v72md2hu14s9bt3g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--F3RiNrPB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q0n6v72md2hu14s9bt3g.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, two Subnets with their specified range of IPs and CIDR are launched successfully inside our own VPC and they are available.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--brcVV4dR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gi8e0jv9534gb7rx7rt0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--brcVV4dR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gi8e0jv9534gb7rx7rt0.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next step is to create a security group to secure the WordPress and MySQL databases. Note that both should have the &lt;strong&gt;same Security Group&lt;/strong&gt;, or else they &lt;strong&gt;won't&lt;/strong&gt; connect.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7LhjjrbS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1nxd41x2ticipiqmzjc2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7LhjjrbS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1nxd41x2ticipiqmzjc2.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For creating a Security Group, we have to specify the VPC in which it should be launched, and adding a Description is mandatory.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8SdneCum--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mrpdsrqzbwd0w7bfevrz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8SdneCum--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mrpdsrqzbwd0w7bfevrz.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then we have to specify the inbound rules; to keep this practical simple, we are allowing all traffic to access our instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SpXby_1f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dmhlqluqgfpzb2bz4pke.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SpXby_1f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dmhlqluqgfpzb2bz4pke.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Now, the Security Group is successfully created with our specified details.&lt;br&gt;
Now let's jump into part 1 which is about Creating a MySQL database with RDS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AJH-KPv0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nm4dzrykwyxtyvr68xkw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AJH-KPv0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nm4dzrykwyxtyvr68xkw.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Select Create database, then select Standard create and specify the database type.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PSTVQ46Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e4q4sn0u5tbg8hy3cn82.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PSTVQ46Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e4q4sn0u5tbg8hy3cn82.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then you have to specify the Version. The version plays a major role when integrating MySQL with WordPress, so select a &lt;strong&gt;compatible version&lt;/strong&gt; or it will cause serious trouble at the end. Then select the template; here we are using the Free tier since it is not chargeable.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8nOR0PxZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d30bb3qna9unbnja4r7e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8nOR0PxZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d30bb3qna9unbnja4r7e.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then you have to specify the credentials such as Database Instance name, Master username and Master password.&lt;/p&gt;

&lt;p&gt;The most important part is the selection of the VPC: you should select the &lt;strong&gt;same VPC&lt;/strong&gt; where you will launch the EC2 instance for your WordPress, and the VPC can't be modified once the database is created. Then set Public access to &lt;strong&gt;No&lt;/strong&gt; to provide more security for our database. Now, people outside of your VPC can't connect to your database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Rh_5wwZw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2aein7hciho8kurl1zjv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Rh_5wwZw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2aein7hciho8kurl1zjv.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UCb4mGrg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hjlf2ckp669wpwxygsba.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UCb4mGrg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hjlf2ckp669wpwxygsba.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then you have to specify the &lt;strong&gt;Security group&lt;/strong&gt; for your database. Note that the Security Group for your database and WordPress should be the &lt;strong&gt;same&lt;/strong&gt; or else it will cause serious trouble.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FV8njX0D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tbsrtr7ecint4t3mhdrt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FV8njX0D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tbsrtr7ecint4t3mhdrt.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Note that Security Groups are created &lt;strong&gt;per VPC&lt;/strong&gt;. After selecting the Security Group, click Ok to create the RDS database.&lt;/p&gt;
&lt;h2&gt;
  
  
  Creating an EC2 instance
&lt;/h2&gt;

&lt;p&gt;Before creating an instance, there are two things you have to configure, namely an Internet Gateway and Route tables. They provide outside internet connectivity to an instance launched in the subnet.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Z-4Zhpr6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z2axumoc2t9o0gy5f7lr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Z-4Zhpr6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z2axumoc2t9o0gy5f7lr.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
An Internet Gateway is created &lt;strong&gt;per VPC&lt;/strong&gt;. First, we have to create a new Internet Gateway with the specified details.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DbB5Os7q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r8ty7pf17lsirx4838l0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DbB5Os7q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r8ty7pf17lsirx4838l0.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Then you have to attach the Internet Gateway to the VPC.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XQye70d2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8ya5rjw4v8e92hrccjxy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XQye70d2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8ya5rjw4v8e92hrccjxy.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
The next step is to create the Route table. Note that a Route table is associated per Subnet.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--m1NbrBQ1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ethal6wsjhgbeqb4nlq6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--m1NbrBQ1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ethal6wsjhgbeqb4nlq6.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
We have to specify the VPC in which your subnet is available to attach the routing table to it, specify a Name, and click Create to create the route table.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oAqc41O5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w2jz5zfxi3n0juyc2xxh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oAqc41O5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w2jz5zfxi3n0juyc2xxh.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Then click Edit routes to edit the route details, namely the destination and target. Enter 0.0.0.0/0 as the destination so that any IP anywhere on the Internet can be reached, and set the target to your Internet Gateway.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1rfVXJm---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s05tldzct5f5617e38ma.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1rfVXJm---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s05tldzct5f5617e38ma.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
After entering the details, click Save routes.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IbjHxQYZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rddoujmq4u96ufxjuoiy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IbjHxQYZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rddoujmq4u96ufxjuoiy.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
We created a Route table; now we have to attach it to your Subnet. For that, click Edit route table association and select the subnet to which you want to attach the route table.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XhplebcR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aq91f33djna03ivk78cz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XhplebcR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aq91f33djna03ivk78cz.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;
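&lt;p&gt;A destination of 0.0.0.0/0 matches every address because a /0 mask keeps no bits. A quick sketch (the helper function and sample IPs are illustrative only):&lt;/p&gt;

```shell
# Convert a dotted-quad IP into a 32-bit integer.
ip_to_int() { IFS=. read -r a b c d <<< "$1"; echo $(( (a<<24) | (b<<16) | (c<<8) | d )); }

mask=0   # netmask for a /0 route: no bits set

# Any destination ANDed with an all-zero mask equals the route's
# network (0), so the 0.0.0.0/0 route matches everything.
for ip in 8.8.8.8 192.168.1.10; do
  [ $(( $(ip_to_int "$ip") & mask )) -eq 0 ] && echo "$ip matches 0.0.0.0/0"
done
```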

&lt;p&gt;Now, let's jump into the task of creating an EC2 instance. First, you have to choose the AMI from which the instance will be created; here I selected the Amazon Linux 2 AMI.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MpbHTde3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7fsb3zezn9pvd0bmyk9p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MpbHTde3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7fsb3zezn9pvd0bmyk9p.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Then you have to select the instance type; here I selected t2.micro since it comes under the free tier.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VUEGgKDc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d7cqxavrnhhyeqoa6izp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VUEGgKDc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d7cqxavrnhhyeqoa6izp.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Then you have to specify the VPC and subnet for your instance, and enable Auto-assign Public IP so that your instance receives a public IP.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DAPFUreQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9q2nv6cy1c6v8k5umk39.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DAPFUreQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9q2nv6cy1c6v8k5umk39.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then you can add storage for your instance; this step is optional.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OfddNxKd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zt4ywdf1ufrvc3h819m9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OfddNxKd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zt4ywdf1ufrvc3h819m9.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then you have to specify tags, which are especially useful for &lt;strong&gt;automation.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yr2fNQF5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o07qfetjp0yfgrugg09v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yr2fNQF5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o07qfetjp0yfgrugg09v.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then you have to select the Security Group for your instance. It should be the &lt;strong&gt;same&lt;/strong&gt; one your database uses.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pH9HcbYe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2jujhw4zgi0jabgn6gpd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pH9HcbYe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2jujhw4zgi0jabgn6gpd.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Then click Review and Launch. You have to select a key pair to launch your EC2 instance; if you don't have one, you can create it at that point.&lt;/p&gt;
&lt;h2&gt;
  
  
  Configuring your RDS database
&lt;/h2&gt;

&lt;p&gt;At this point, you have created an RDS database and an EC2 instance. Now, we will configure the RDS database to allow access to specific entities.&lt;/p&gt;

&lt;p&gt;You have to run the below command in your EC2 instance in order to establish the connection with your database.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;export MYSQL_HOST=&amp;lt;your-endpoint&amp;gt;&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;You can find your endpoint by clicking your database in the RDS &lt;br&gt;
dashboard. Then you have to run the following command.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;mysql --user=&amp;lt;user&amp;gt; --password=&amp;lt;password&amp;gt; dbname&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--O_yuy4Qj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7gkrprqaci4hafjhyi11.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--O_yuy4Qj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7gkrprqaci4hafjhyi11.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This output shows the database is successfully connected to the EC2 instance.&lt;br&gt;
In the MySQL terminal, run the following commands to create a user and grant it all privileges on the database.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;CREATE USER 'vishnu' IDENTIFIED BY 'vishnupassword';&lt;br&gt;
GRANT ALL PRIVILEGES ON dbname.* TO vishnu;&lt;br&gt;
FLUSH PRIVILEGES;&lt;br&gt;
EXIT;&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuring WordPress on EC2
&lt;/h2&gt;

&lt;p&gt;To configure WordPress on the EC2 instance, the first step is to set up the web server; here I am using the Apache web server. For that, you have to run the following commands.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo yum install -y httpd&lt;br&gt;
sudo service httpd start&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;The next step is to download the WordPress application from the internet using the wget command and extract it. Run the following commands.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;wget https://wordpress.org/latest.tar.gz&lt;br&gt;
tar -xzf latest.tar.gz&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--W22AHdDc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rtl2a0bhetww48n9zkds.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--W22AHdDc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rtl2a0bhetww48n9zkds.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Then we have to do some configuration; for this, follow the steps below.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cd wordpress&lt;br&gt;
cp wp-config-sample.php wp-config.php&lt;br&gt;
vi wp-config.php&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Open the wp-config.php file and enter your database credentials (including your password).&lt;/p&gt;
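
&lt;p&gt;For reference, the relevant lines in wp-config.php look like the following. The values shown are the ones used earlier in this article; replace them with your own database name, user, password, and the RDS endpoint ("your-rds-endpoint" below is a placeholder).&lt;/p&gt;

&lt;p&gt;&lt;code&gt;define( 'DB_NAME', 'dbname' );&lt;br&gt;
define( 'DB_USER', 'vishnu' );&lt;br&gt;
define( 'DB_PASSWORD', 'vishnupassword' );&lt;br&gt;
define( 'DB_HOST', 'your-rds-endpoint' );&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;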

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PqvuQyee--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b07j1vmlb9ln1upymry8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PqvuQyee--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b07j1vmlb9ln1upymry8.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Then, go to this &lt;a href="https://api.wordpress.org/secret-key/1.1/salt/"&gt;link&lt;/a&gt;, copy everything, and paste it in place of the existing authentication key lines.   &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZaEJlIUa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yvrhp3aqnenq3g8h6a4w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZaEJlIUa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yvrhp3aqnenq3g8h6a4w.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
The next step is to deploy the WordPress application. For that, you have to run the following commands to install the dependencies and deploy WordPress on the web server.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo amazon-linux-extras install -y lamp-mariadb10.2-php7.2 php7.2&lt;br&gt;
sudo cp -r wordpress/* /var/www/html/&lt;br&gt;
sudo service httpd restart&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;That’s it. You have a live, publicly accessible WordPress installation using a fully managed MySQL database on Amazon RDS.&lt;br&gt;
If you enter your WordPress instance's IP in your browser, you will land on the WordPress setup page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qYTvg4A3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uwcmr891mr7j1011bib6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qYTvg4A3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uwcmr891mr7j1011bib6.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
After you fill in your credentials, you will get your own homepage.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--viswmQQO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zk0bcvta4gliq55rk4mt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--viswmQQO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zk0bcvta4gliq55rk4mt.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
That's it. You launched your own application on your own instance, with the database managed by the AWS RDS service.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Thank you all for your reads. Stay tuned for my upcoming articles!!  &lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>amazon</category>
      <category>rds</category>
      <category>wordpress</category>
      <category>mysql</category>
    </item>
    <item>
      <title>How to limit the size of contribution as a data node in HDFS cluster?
</title>
      <dc:creator>Sri Vishnuvardhan A</dc:creator>
      <pubDate>Wed, 09 Jun 2021 02:42:41 +0000</pubDate>
      <link>https://dev.to/vishnuswmech/how-to-limit-the-size-of-contribution-as-a-data-node-in-hdfs-cluster-5mi</link>
      <guid>https://dev.to/vishnuswmech/how-to-limit-the-size-of-contribution-as-a-data-node-in-hdfs-cluster-5mi</guid>
<description>&lt;p&gt;Hopefully the title gives you a sense of what we are going to discuss. &lt;/p&gt;

&lt;p&gt;In this article, we will see this from scratch. Here, we use &lt;strong&gt;Linux partition&lt;/strong&gt; concepts to limit the size of the contribution of a &lt;strong&gt;data node&lt;/strong&gt; in an &lt;strong&gt;HDFS Cluster&lt;/strong&gt;. You may wonder why we need to limit the size: we can't simply shut down a data node when its storage is exhausted, and an unlimited contribution also prevents us from managing storage dynamically.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Note:&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In this task, the OS used is &lt;strong&gt;Red Hat Linux (RHEL8)&lt;/strong&gt;, installed on top of Oracle VirtualBox; you can use any Linux OS.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Pre-requisites:&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Hadoop 1.2.1&lt;/li&gt;
&lt;li&gt;Java jdk-8u171&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both should be installed on your system.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Contents:&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Hadoop&lt;/li&gt;
&lt;li&gt;Linux partitions&lt;/li&gt;
&lt;li&gt;Configuration of HDFS cluster&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Hadoop&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WKDIYLWm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6i49r96nn6c85vtu94ip.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WKDIYLWm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6i49r96nn6c85vtu94ip.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Apache Hadoop is an open-source framework used to store massive amounts of data, ranging from kilobytes to petabytes. It works by clustering multiple computers with distributed storage instead of relying on one single large computer, which reduces both cost and time.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Linux partitions&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NRmfqMPG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/34hikzdqex1e2uan3l12.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NRmfqMPG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/34hikzdqex1e2uan3l12.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In RHEL8 Linux, there are three types of partitions, namely &lt;strong&gt;Primary, Extended and Logical partitions&lt;/strong&gt;. Normally only &lt;strong&gt;four partitions&lt;/strong&gt; can be created per hard disk, because the &lt;strong&gt;metadata&lt;/strong&gt; of all partitions is stored in only &lt;strong&gt;64 bytes&lt;/strong&gt; and the metadata for one partition takes &lt;strong&gt;16 bytes&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;So we have to use a trick: we divide the hard disk into &lt;strong&gt;three Primary partitions&lt;/strong&gt; and &lt;strong&gt;one Extended partition&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;The Extended partition is treated as a new hard disk, so we can apply this trick again inside it to get more partitions.&lt;/p&gt;

&lt;p&gt;In total, we can create three Primary partitions, one Extended partition and sixty Logical partitions. That gives &lt;strong&gt;64 partitions&lt;/strong&gt;, of which we can use &lt;strong&gt;63 partitions,&lt;/strong&gt; since the Extended partition itself cannot store any data.&lt;/p&gt;

&lt;p&gt;Before creating a partition, we require a &lt;strong&gt;raw hard disk&lt;/strong&gt;. You can buy a new hard disk or create a virtual hard disk in Oracle VirtualBox. &lt;/p&gt;

&lt;p&gt;Please follow the steps below to create a virtual hard disk for RHEL8 using Oracle VirtualBox.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Open Settings in Oracle VirtualBox&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--f_RGaWna--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2w18tw7r5xsf04gat0r9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--f_RGaWna--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2w18tw7r5xsf04gat0r9.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select Add hard disk in Controller: SATA menu&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--r-e-liO0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6l92jw2ulkae1c0o02ns.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--r-e-liO0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6l92jw2ulkae1c0o02ns.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select Create Disk Image icon&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zRfNvPYg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/97aklaudqv303qamupe7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zRfNvPYg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/97aklaudqv303qamupe7.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select Virtual Disk Image option&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--W0RT_8yY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/221o1je1yrzyujv76fda.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--W0RT_8yY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/221o1je1yrzyujv76fda.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Then select Dynamically allocated option in order to make storage Dynamically filled.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ny3BM30v--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gh7vake8ug1i75tshqed.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ny3BM30v--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gh7vake8ug1i75tshqed.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Give your required storage and then click Create button&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--u8p16xLO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yf6ykugj69ivbxz63gwd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--u8p16xLO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yf6ykugj69ivbxz63gwd.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Then attach the storage and click choose button&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FDhJ4qEx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qresxq9f3bsoxqpyx8je.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FDhJ4qEx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qresxq9f3bsoxqpyx8je.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can find your created storage details in your dashboard&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MH91fvq7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qik7g6tmwsre9i1r5k1d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MH91fvq7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qik7g6tmwsre9i1r5k1d.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now we have a raw hard disk, but we can’t store any data on it yet. We have to partition it in order to utilize this hard disk.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Steps involved to create a partition:&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Physical partition&lt;/li&gt;
&lt;li&gt;Format&lt;/li&gt;
&lt;li&gt;Mount&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Physical partition:&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;We have to decide how much space we require for our partition. After that, run&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;fdisk -l&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;to find out the details of the hard disks.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SYmWPjmk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wjsiza64ijhtcozuy75j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SYmWPjmk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wjsiza64ijhtcozuy75j.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can find the hard disk named &lt;strong&gt;/dev/sdb&lt;/strong&gt; which we previously created. Then execute the following command&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;fdisk /dev/sdb&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;and enter n for a new partition, then p for a primary partition, select the size in sectors or GB according to your requirement, and enter “w” to save. You can see these steps in the below image.&lt;/p&gt;
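
&lt;p&gt;For illustration, a minimal fdisk session creating a 1 GB primary partition looks roughly like this (the exact prompts vary slightly between fdisk versions):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Command (m for help): n&lt;br&gt;
Select (default p): p&lt;br&gt;
Partition number (1-4, default 1): 1&lt;br&gt;
First sector: (press Enter for the default)&lt;br&gt;
Last sector, +sectors or +size{K,M,G,T,P}: +1G&lt;br&gt;
Command (m for help): w&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;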

&lt;p&gt;Enter the below command to check whether the partition was created or not.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;fdisk -l /dev/sdb&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wzlbRzvk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7kdvh1nvx6vzf71j8vwl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wzlbRzvk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7kdvh1nvx6vzf71j8vwl.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But we can create only &lt;strong&gt;4 partitions&lt;/strong&gt; this way, so we have to use the trick: we create &lt;strong&gt;3&lt;/strong&gt; Primary partitions and &lt;strong&gt;one&lt;/strong&gt; Extended partition from the remaining size.&lt;/p&gt;

&lt;p&gt;The Extended partition is treated like a raw hard disk: we can create Logical partitions inside it, and the Logical partitions’ sector range lies within the range of the Extended partition. You can see this in the below image.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jTxtPwOr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8itsb2nojwu6b1lg0zaj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jTxtPwOr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8itsb2nojwu6b1lg0zaj.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Format:&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The first step is done. Next, we have to format the partition. Formatting is like &lt;strong&gt;creating an index&lt;/strong&gt; on the hard disk, which the OS uses for &lt;strong&gt;searching&lt;/strong&gt; a file and displaying it to us when we click to &lt;strong&gt;open&lt;/strong&gt; it. Enter the following command to format the partition with the "ext4" filesystem. You can use any filesystem according to your requirements.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;mkfs.ext4 /dev/sdb1&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8gYbLUoH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/63wzwapined4s232zhso.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8gYbLUoH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/63wzwapined4s232zhso.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Mount:&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The next step is to mount the partition on a folder/directory in the OS, because we can’t directly access the data on the hard disk; we have to &lt;strong&gt;link/mount&lt;/strong&gt; it to an OS folder. &lt;/p&gt;

&lt;p&gt;For this, we have to enter the following commands in the Linux terminal. Here we created a drive1 folder in the root directory, and the partition size is 1GB.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;mkdir /drive1&lt;br&gt;
mount /dev/sdb1 /drive1&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;
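
&lt;p&gt;Note that a mount made this way lasts only until reboot. To make it permanent, you can add an entry like the following to /etc/fstab (assuming the same device and mount point as above):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;/dev/sdb1   /drive1   ext4   defaults   0   0&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;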

&lt;h2&gt;
  
  
  &lt;strong&gt;Configuring HDFS cluster&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Hadoop works on clusters comprising three types of nodes, namely the Master node, Client node and Data node.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--slytpk2K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dp0sojnyo6r6dh07r8nt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--slytpk2K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dp0sojnyo6r6dh07r8nt.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Master node&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;It stores the metadata of the files stored in the data nodes. It provides this metadata to the client, thereby acting as a connecting bridge between them.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Client node&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This node is an end-user node that decides the number of replication blocks and the block size. The risk factor plays a major role in deciding the number of replication blocks and is directly proportional to it. By default, the number of replication blocks is &lt;strong&gt;three&lt;/strong&gt;, which can be increased or decreased based on our application.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Data node&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;It is also known as the slave node; it stores the data provided by the client node.&lt;/p&gt;

&lt;p&gt;These three nodes combine to form a Hadoop Distributed File System (HDFS) cluster. The Linux partitioning we did earlier will help us achieve our goal of limiting the size of the data node’s contribution to its master node.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Master node configuration:&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This configuration involves two files: hdfs-site.xml, which specifies the storage folder for the metadata, and core-site.xml, which handles the networking part. &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EgGFkLgd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a8h1keowqvz2h12hr28n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EgGFkLgd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a8h1keowqvz2h12hr28n.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--T5os0Q1J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9d7a3rzprccym981vq2y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--T5os0Q1J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9d7a3rzprccym981vq2y.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the core-site.xml file, we have to enter the IP as 0.0.0.0 in order to allow connections to the master node from any IP. We have to enter the port number as 9001 since it is the default port number for Hadoop. The configuration of the core-site.xml file is shown below. &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7KmlvRMT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/be1kpryo9d9m8ibg0jr4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7KmlvRMT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/be1kpryo9d9m8ibg0jr4.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then we have to start the namenode by using the following command.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;hadoop-daemon.sh start namenode&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;You can check whether the node is running or not by using the jps (Java Process Status) command.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Data node configuration:&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;It also involves two files: hdfs-site.xml, which here specifies the storage folder the data node contributes, and core-site.xml for the networking part. The configuration of hdfs-site.xml is shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--b_whLkUA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eo7gnsrk3yvnlbs5iu15.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--b_whLkUA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eo7gnsrk3yvnlbs5iu15.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the core-site.xml file, we have to enter the master node’s IP in order to contribute our storage to it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tqVSLdMC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dwothitj8jyftqc7eses.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tqVSLdMC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dwothitj8jyftqc7eses.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then run the following command to start the datanode.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;hadoop-daemon.sh start datanode&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Finally, your data node is connected with the master node, forming an HDFS cluster. You can check this with the web GUI by opening the following URL in your browser.&lt;/p&gt;

&lt;p&gt;&lt;a href="http://masterip:50070"&gt;http://masterip:50070&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, we have limited the data node’s contribution to the namenode to 1 GiB, since its storage directory sits on the 1 GiB partition we mounted. You can see this in the following picture.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7okWQ2-s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wb441a3ufdo5egp78z4b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7okWQ2-s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wb441a3ufdo5egp78z4b.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Thank you for reading this article. I hope you gained clarity about setting up an HDFS cluster and Linux partitions. Please share your views in the comments so that I can improve and give you quality content.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
    </item>
    <item>
      <title>An Impact of Confusion Matrix on Cyber Security</title>
      <dc:creator>Sri Vishnuvardhan A</dc:creator>
      <pubDate>Sat, 05 Jun 2021 15:21:33 +0000</pubDate>
      <link>https://dev.to/vishnuswmech/an-impact-of-confusion-matrix-on-cyber-security-2ce4</link>
      <guid>https://dev.to/vishnuswmech/an-impact-of-confusion-matrix-on-cyber-security-2ce4</guid>
      <description>&lt;p&gt;In this article, we are going to see the use cases of Confusion Matrix and its impact on Cyber Security.&lt;/p&gt;

&lt;h2&gt;
  
  
  Confusion Matrix
&lt;/h2&gt;

&lt;p&gt;A confusion matrix is a table that is often used to describe the performance of a classification model on a set of test data for which the true values are known. The confusion matrix itself is relatively simple to understand, but the related terminology can be confusing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Zz2y1Mby--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9elhbr10vt4my8po9o1x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Zz2y1Mby--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9elhbr10vt4my8po9o1x.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are two possible predicted classes: &lt;strong&gt;"yes"&lt;/strong&gt; and &lt;strong&gt;"no"&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If we were predicting the presence of a disease, for example, "yes" would mean they have the disease, and "no" would mean they don't have the disease.&lt;/p&gt;

&lt;p&gt;The classifier made a total of 165 predictions.&lt;br&gt;
Out of those 165 cases, the classifier predicted "yes" &lt;strong&gt;110 times&lt;/strong&gt; and "no" &lt;strong&gt;55 times&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In reality, 105 patients in the sample have the disease, and 60 patients do not.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;True positives (TP):&lt;/strong&gt; These are cases in which we predicted yes (they have the disease), and they do have the disease.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;True negatives (TN):&lt;/strong&gt; We predicted no, and they don't have the disease.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;False positives (FP):&lt;/strong&gt; We predicted yes, but they don't actually have the disease. (Also known as a &lt;strong&gt;"Type I error"&lt;/strong&gt;)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;False negatives (FN):&lt;/strong&gt; We predicted no, but they actually do have the disease. (Also known as a &lt;strong&gt;"Type II error"&lt;/strong&gt;)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--07xKjJ-W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m5wgz9ate6wfmmy8mau3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--07xKjJ-W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m5wgz9ate6wfmmy8mau3.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Real life use cases in Cyber Security
&lt;/h2&gt;

&lt;p&gt;Here we are going to investigate the performance of our classifier in identifying ransomware as a whole. We then move on to discuss how well our classifier can identify a specific type of crypto ransomware.&lt;/p&gt;

&lt;h2&gt;
  
  
  Data Collection
&lt;/h2&gt;

&lt;p&gt;We collect over 100MB of ransomware traffic traces from malware-traffic-analysis.net, resulting in 265 unique bidirectional ransomware-related flows. We collect another 100MB of network traffic that is malware-free (clean) to use as a baseline. The clean data consists of flows corresponding to web browsing, file streaming, and file downloading.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;false positive&lt;/strong&gt; rate describes how often clean traffic is misclassified as ransomware. It needs to be as low as possible to prevent the unwarranted blocking of clean traffic. Furthermore, a high false positive rate runs into the &lt;strong&gt;base rate fallacy&lt;/strong&gt;: because clean traffic vastly outnumbers ransomware traffic, even a small error rate quickly produces a massive number of flows falsely identified as ransomware.&lt;/p&gt;

&lt;p&gt;To measure the classifier’s success, we also look at the &lt;strong&gt;F1 score.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The F1 score is the &lt;strong&gt;harmonic mean&lt;/strong&gt; of the recall and precision scores and provides an idea of the balance between the false negative and false positive rates.&lt;/p&gt;
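&lt;p&gt;As a quick check of the scores reported for the 28-feature classifier, the harmonic mean can be computed in a couple of lines of Python:&lt;/p&gt;

```python
def f1_score(precision, recall):
    # F1 is the harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

# Precision 0.83 and recall 0.89, as reported for the 28-feature model:
print(round(f1_score(0.83, 0.89), 2))  # → 0.86
```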

&lt;h2&gt;
  
  
  Initial Classification Model
&lt;/h2&gt;

&lt;p&gt;We first tune our stream processor to extract 28 unique features&lt;br&gt;
from our collected network traffic. These features are fed into the classifier, which first ensures the data contains the same number of malicious flows as clean flows in order to prevent classification bias.&lt;/p&gt;

&lt;p&gt;The data is then split into two, unequal sets. One set consists of&lt;br&gt;
&lt;strong&gt;70%&lt;/strong&gt; of the data and is used for training and the other set holds the remaining &lt;strong&gt;30%&lt;/strong&gt; of traffic and is used for testing the learned model.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;10-fold cross validation (CV)&lt;/strong&gt; is performed on our data split to ensure the splitting is unbiased. The confusion matrix shows the results of our classifier using 28 different features. Even with a smaller set of traffic data, 200MB, we are able to achieve a respectable recall of 0.89, a precision of 0.83, and an F1 score of 0.86.&lt;/p&gt;

&lt;h2&gt;
  
  
  Feature Reduction
&lt;/h2&gt;

&lt;p&gt;Feature reduction is a key method used in machine learning to increase classification accuracy while simultaneously reducing the computational cost of the model. In order to reduce the number of features in our model, we identify the top eight most influential features for classifying ransomware traffic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qZJHWcUA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/78vxs3bdbam6x02gj6cm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qZJHWcUA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/78vxs3bdbam6x02gj6cm.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
fig 1:The confusion matrix of our 28-feature random&lt;br&gt;
forest classifier shows a recall of 0.89, a precision of 0.83, and&lt;br&gt;
an F1 score of 0.86.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IqIp8ymx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6rn50anq17e3jct4uan6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IqIp8ymx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6rn50anq17e3jct4uan6.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
 fig 2: The confusion matrix of our 8-feature classifier shows similar results to that of our 28-feature classifier with a recall of 0.87, precision of 0.86, and F1 score of 0.87.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The 8-feature model has a slightly lower recall score at 0.87 but produces higher precision and F1 scores of 0.86 and 0.87, respectively. However, Figure 1b shows a slightly smaller AUC for the 8-feature ROC, indicating that the 8-feature classifier performs about 1.4% worse than the 28-feature model. This slight performance loss is worth the computational savings when running classification at line rate.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Create, Crop, Swap and Collate a BGR image using Python </title>
      <dc:creator>Sri Vishnuvardhan A</dc:creator>
      <pubDate>Thu, 03 Jun 2021 15:22:30 +0000</pubDate>
      <link>https://dev.to/vishnuswmech/create-crop-swap-and-collate-a-colored-image-using-python-2ig4</link>
      <guid>https://dev.to/vishnuswmech/create-crop-swap-and-collate-a-colored-image-using-python-2ig4</guid>
      <description>&lt;p&gt;In this article, We are going to see how to &lt;strong&gt;create&lt;/strong&gt; a coloured image using Python. Not only that, we are also going to see how to make changes in that image which includes &lt;strong&gt;Crop&lt;/strong&gt;, &lt;strong&gt;Swap&lt;/strong&gt; and &lt;strong&gt;collate&lt;/strong&gt; the images.&lt;/p&gt;

&lt;p&gt;In Simple words, an Image is just a &lt;strong&gt;2D/3D array&lt;/strong&gt;. If it is &lt;strong&gt;grey&lt;/strong&gt; image, then it is a &lt;strong&gt;2D&lt;/strong&gt; array and if it is a &lt;strong&gt;color&lt;/strong&gt; image, then it is a &lt;strong&gt;3D&lt;/strong&gt; array.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Concepts used:&lt;/strong&gt; Arrays&lt;/p&gt;

&lt;h2&gt;
  
  
  Array Slicing
&lt;/h2&gt;

&lt;p&gt;In Python, there are multiple ways to print a whole array with all its elements, but to print a specific range of elements from the array, we use the slice operation. &lt;/p&gt;

&lt;p&gt;Slicing is performed on an array using the colon (:). &lt;/p&gt;

&lt;p&gt;To print elements from the beginning up to an index, use [:Index]; to drop elements from the end, use [:-Index]; to print elements from a specific index to the end, use [Index:]; to print elements within a range, use [Start Index:End Index]; and to print the whole list, use [:]. Further, to print the whole array in reverse order, use [::-1].&lt;/p&gt;
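&lt;p&gt;The slicing forms above can be tried on a small list:&lt;/p&gt;

```python
a = [10, 20, 30, 40, 50]

print(a[:3])    # from the beginning up to index 3 → [10, 20, 30]
print(a[:-2])   # everything except the last two → [10, 20, 30]
print(a[2:])    # from index 2 to the end → [30, 40, 50]
print(a[1:4])   # a range of elements → [20, 30, 40]
print(a[:])     # the whole list → [10, 20, 30, 40, 50]
print(a[::-1])  # the whole list reversed → [50, 40, 30, 20, 10]
```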

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl98g8twus5s5wez3grq5.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl98g8twus5s5wez3grq5.PNG" alt="list"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;credit:&lt;/strong&gt; GeeksForGeeks&lt;/p&gt;

&lt;h2&gt;
  
  
  Array
&lt;/h2&gt;

&lt;p&gt;A Python array is a &lt;strong&gt;collection&lt;/strong&gt; of elements of a &lt;strong&gt;common data type&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;Arrays are similar to lists. The difference is that a &lt;strong&gt;list&lt;/strong&gt; is meant only for &lt;strong&gt;row-wise&lt;/strong&gt; operations, while &lt;strong&gt;arrays&lt;/strong&gt; support &lt;strong&gt;both row-wise&lt;/strong&gt; and &lt;strong&gt;column-wise&lt;/strong&gt; operations.&lt;/p&gt;

&lt;p&gt;The arrays are especially useful when you have to process the data dynamically. &lt;/p&gt;

&lt;p&gt;Python arrays can be &lt;strong&gt;faster than&lt;/strong&gt; lists because they use &lt;strong&gt;less memory&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating your own image
&lt;/h2&gt;

&lt;p&gt;We already know that an image is just a &lt;strong&gt;2D/3D&lt;/strong&gt; array. The first step is to create an array with our desired &lt;strong&gt;dimensions/pixels&lt;/strong&gt; using the following commands.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;import cv2&lt;br&gt;
import numpy as np&lt;br&gt;
size= 600,600,3&lt;br&gt;
zero=np.zeros(size, dtype=np.uint8)&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Here, we create an image with dimensions of 600x600 pixels (and 3 color channels); the array is named zero.&lt;br&gt;
Its output will look like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9qandehxn8xql8rqcbjk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9qandehxn8xql8rqcbjk.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next step is to &lt;strong&gt;give/assign&lt;/strong&gt; the &lt;strong&gt;color&lt;/strong&gt; to that image since it is a plain black image.&lt;/p&gt;

&lt;p&gt;It is done by using the &lt;strong&gt;concept of arrays&lt;/strong&gt; and &lt;strong&gt;color codes&lt;/strong&gt;, which are available on the internet. You can refer to the color codes at the link below.&lt;br&gt;
 &lt;a href="https://www.rapidtables.com/web/color/RGB_Color.html" rel="noopener noreferrer"&gt;https://www.rapidtables.com/web/color/RGB_Color.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One thing you should note: the &lt;strong&gt;color order&lt;/strong&gt; followed in these arrays is &lt;strong&gt;BGR&lt;/strong&gt;, not RGB.&lt;/p&gt;

&lt;p&gt;The code and its outputs are given below.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;zero[0:200,0:200]=[255,153,51] zero[0:200,200:400]=[0,0,0]&lt;br&gt;
zero[0:200,400:600]=[0,255,255]&lt;br&gt;
zero[200:400,0:200]=[204,0,204]&lt;br&gt;
zero[200:400,200:400]=[0,255,128]&lt;br&gt;
zero[200:400,400:600]=[0,0,255]&lt;br&gt;
zero[400:600,0:200]=[0,102,204]&lt;br&gt;
zero[400:600,200:400]=[51,153,255]&lt;br&gt;
zero[400:600,400:600]=[0,153,0]&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;The Output is&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F999ey99ynv9ff8xe2m9b.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F999ey99ynv9ff8xe2m9b.PNG" alt="1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Left side image is our &lt;strong&gt;generated image&lt;/strong&gt; and right side image is the &lt;strong&gt;master-piece&lt;/strong&gt; developed in Paint.&lt;/p&gt;

&lt;h2&gt;
  
  
  Crop and Swap the image
&lt;/h2&gt;

&lt;p&gt;Again, we use array slicing to &lt;strong&gt;crop&lt;/strong&gt; and &lt;strong&gt;swap&lt;/strong&gt; parts of the image, as shown in the code below.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;#cropping and replace(swap) the squares&lt;br&gt;
zero[100:200,100:200]=[0,153,0]&lt;br&gt;
zero[100:200,400:500]=[255,153,51]&lt;br&gt;
zero[200:400,100:200]=[0,255,255]&lt;br&gt;
zero[200:400,200:400]=[204,0,204]&lt;br&gt;
zero[200:400,400:500]=[0,255,128]&lt;br&gt;
zero[400:500,100:200]=[51,153,255]&lt;br&gt;
zero[400:500,200:400]=[0,102,204]&lt;br&gt;
zero[400:500,400:500]=[0,0,0]&lt;br&gt;
zero[100:200,200:400]=[0,255,255]&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;The output is &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F702kcf5cyeo2hglvubkr.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F702kcf5cyeo2hglvubkr.PNG" alt="3 replace2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have successfully swapped the boxes.&lt;/p&gt;
&lt;h2&gt;
  
  
  Collate the image
&lt;/h2&gt;

&lt;p&gt;You may have noticed this option in the &lt;strong&gt;image editor&lt;/strong&gt; on your phone. Now we are going to create our &lt;strong&gt;own image collager&lt;/strong&gt; using the &lt;strong&gt;concatenate function&lt;/strong&gt; in Python.&lt;/p&gt;

&lt;p&gt;There are two types of &lt;strong&gt;concatenation&lt;/strong&gt;: &lt;strong&gt;horizontal&lt;/strong&gt; and &lt;strong&gt;vertical&lt;/strong&gt;.&lt;/p&gt;
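&lt;p&gt;A pure-Python sketch of the two directions, using tiny nested lists in place of real image arrays (np.concatenate does the same on NumPy arrays):&lt;/p&gt;

```python
a = [[1, 2],
     [3, 4]]
b = [[5, 6],
     [7, 8]]

# Vertical concatenation (axis=0): stack rows, the "image" gets taller.
vertical = a + b                                # [[1,2],[3,4],[5,6],[7,8]]

# Horizontal concatenation (axis=1): join rows side by side, it gets wider.
horizontal = [ra + rb for ra, rb in zip(a, b)]  # [[1,2,5,6],[3,4,7,8]]

print(vertical)
print(horizontal)
```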

&lt;p&gt;We are going to create another 3D array and concatenate it horizontally with the existing array/image.&lt;/p&gt;

&lt;p&gt;The code is listed below.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;#Creating another same array(image) for horizontal concatenating&lt;br&gt;
size= 600,600,3&lt;br&gt;
Hconcate=np.zeros(size, dtype=np.uint8)&lt;br&gt;
Hconcate[0:200,0:200]=[255,153,51]&lt;br&gt;
Hconcate[0:200,200:400]=[0,0,0]&lt;br&gt;
Hconcate[0:200,400:600]=[0,255,255]&lt;br&gt;
Hconcate[200:400,0:200]=[204,0,204]&lt;br&gt;
Hconcate[200:400,200:400]=[0,255,128]&lt;br&gt;
Hconcate[200:400,400:600]=[0,0,255]&lt;br&gt;
Hconcate[400:600,0:200]=[0,102,204]&lt;br&gt;
Hconcate[400:600,200:400]=[51,153,255]&lt;br&gt;
Hconcate[400:600,400:600]=[0,153,0]&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;



&lt;p&gt;&lt;code&gt;#Horizontal concatenating: axis=1&lt;br&gt;
#Vertical concatenating: axis=0&lt;br&gt;
horizontal_concate=np.concatenate((zero,Hconcate),axis=1)&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;After the concatenation is done, we can view the output using the cv2 library with the code below.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cv2.imshow("photo",horizontal_concate)&lt;br&gt;
cv2.waitKey()&lt;br&gt;
cv2.destroyAllWindows()&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;waitKey&lt;/strong&gt; holds the window open until a key is pressed; &lt;strong&gt;destroyAllWindows&lt;/strong&gt; then closes all the OpenCV windows and the process shuts down.&lt;/p&gt;

&lt;p&gt;The output of Concatenation is shown in below figure. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj7estvt5kf93jd707y6o.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj7estvt5kf93jd707y6o.PNG" alt="4 concate"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, the concatenation is done too. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Thank you all for reading. Stay tuned for more interesting articles.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>python</category>
      <category>deeplearning</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Launch a GUI Application inside the Docker Container</title>
      <dc:creator>Sri Vishnuvardhan A</dc:creator>
      <pubDate>Mon, 31 May 2021 14:35:14 +0000</pubDate>
      <link>https://dev.to/vishnuswmech/launch-a-gui-application-inside-the-docker-container-3k6m</link>
      <guid>https://dev.to/vishnuswmech/launch-a-gui-application-inside-the-docker-container-3k6m</guid>
      <description>&lt;p&gt;In this article, we are going to see how to run/launch the Jupyter notebook inside the docker container.&lt;/p&gt;

&lt;p&gt;By default, we can't run GUI applications in a Docker container. With a small trick, we can run them inside the container.&lt;/p&gt;

&lt;p&gt;First, we have to find the DISPLAY variable of our localhost by running the following command.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;echo $DISPLAY&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvu9mvr139ad1plof0g8v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvu9mvr139ad1plof0g8v.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here, the DISPLAY variable is :0.&lt;/p&gt;

&lt;p&gt;Then we create the Docker container using the following command.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run -it --name mygui --net=host --env="DISPLAY" -v /root:/root centos:latest&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Here, we set the Docker network to the host network so the container can reach the host environment, since we are going to use the host's graphical interface to run GUI applications in the container.&lt;/p&gt;

&lt;p&gt;We also pass the DISPLAY environment variable into the container so that GUI applications know which display to render to.&lt;/p&gt;

&lt;p&gt;Now you are inside the Docker container. But since we used the host network, you may notice that the hostname in your bash prompt has not changed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fayziqs1x232s026070yq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fayziqs1x232s026070yq.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can verify that you are in the container by checking the IP address or by running applications that are not installed on the host.&lt;/p&gt;

&lt;p&gt;The next step is to install Firefox and Python 3: Jupyter is a Python library, so we need Python, and the notebook interface runs in a browser such as Firefox.&lt;/p&gt;

&lt;p&gt;You can install those packages using the following command.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;yum install firefox python3 -y&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;After installing those packages, final step is to install Jupyter using pip3 by using following command.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;pip3 install jupyter&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Now, your Jupyter library is installed. You can use it by running the following command.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;jupyter notebook --allow-root&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk8eott74e3so21o3foac.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk8eott74e3so21o3foac.PNG" alt="jupyter cli"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Due to security concerns, Jupyter won't run as root by default, so we add --allow-root in order to launch it in root mode.&lt;br&gt;
     &lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ewtkl4mriyxfu3pt1vk.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ewtkl4mriyxfu3pt1vk.PNG" alt="jupyter"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, we have launched the Jupyter notebook inside the container. Now you can run any ML models in Jupyter.&lt;/p&gt;

&lt;p&gt;That's it. Thank you for reading. Stay tuned for my upcoming articles!&lt;/p&gt;

</description>
      <category>docker</category>
      <category>python</category>
      <category>jupyter</category>
    </item>
    <item>
      <title>Providing Idempotence property to HTTPD Service restart using Ansible handlers</title>
      <dc:creator>Sri Vishnuvardhan A</dc:creator>
      <pubDate>Wed, 31 Mar 2021 08:01:24 +0000</pubDate>
      <link>https://dev.to/vishnuswmech/providing-idempotence-property-to-httpd-service-restart-using-ansible-handlers-i0</link>
      <guid>https://dev.to/vishnuswmech/providing-idempotence-property-to-httpd-service-restart-using-ansible-handlers-i0</guid>
      <description>&lt;p&gt;Before moving to the topic, let's familiar with one unique unfamiliar concept/vocabulary ie &lt;strong&gt;Idempotence&lt;/strong&gt;. Actually, this concept is borrowed from &lt;strong&gt;Mathematics&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;We are all familiar with &lt;strong&gt;zero&lt;/strong&gt;. If any number is &lt;strong&gt;multiplied&lt;/strong&gt; by zero "N" times, the output stays the &lt;strong&gt;same&lt;/strong&gt; throughout. This property is called &lt;strong&gt;idempotence&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;An operation is said to be idempotent if the result of performing it &lt;strong&gt;once&lt;/strong&gt; is &lt;strong&gt;exactly equal&lt;/strong&gt; to the result of performing it &lt;strong&gt;multiple&lt;/strong&gt; times. &lt;/p&gt;

&lt;p&gt;In other words, An Idempotent operation is one that can be applied multiple times &lt;strong&gt;without changing&lt;/strong&gt; the result beyond the initial application.&lt;/p&gt;
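&lt;p&gt;A tiny illustration in Python: applying an idempotent operation once or many times yields the same result.&lt;/p&gt;

```python
def clamp_to_zero(x):
    # Multiplying by zero is idempotent: repeating it changes nothing.
    return x * 0

once = clamp_to_zero(42)
many = clamp_to_zero(clamp_to_zero(clamp_to_zero(42)))
print(once == many)  # → True
```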

&lt;p&gt;Ok, let's take one real industry use-case so it will make more sense to you. &lt;/p&gt;

&lt;p&gt;It is assumed that you are familiar with the HTTPD web server which is used for the deployment of your code to your web-client.&lt;/p&gt;

&lt;p&gt;We all know that HTTPD server's default port number is &lt;strong&gt;80&lt;/strong&gt; and its default &lt;strong&gt;document root&lt;/strong&gt; is /var/www/html. HTTPD server can only &lt;strong&gt;deploy&lt;/strong&gt; the files present in Document root.&lt;/p&gt;

&lt;p&gt;We can change all these settings by editing the HTTPD configuration file, but after making changes we have to &lt;strong&gt;restart&lt;/strong&gt; HTTPD; only then will the changes take &lt;strong&gt;effect&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;We already know how to launch the webserver in Docker container using Ansible. If not, please refer to my previous blog by clicking &lt;a href="https://dev.to/vishnuswmech/launching-the-web-server-and-deploying-the-code-on-top-of-the-docker-by-using-ansible-playbook-3mcm"&gt;&lt;strong&gt;here&lt;/strong&gt;&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;Here we are going to launch the web server in Managed node's EC2 instance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges
&lt;/h2&gt;

&lt;p&gt;If you run the following playbook to start the webserver,&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--v5sNdtmS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s1lupj3tnlt5pfgiwef2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--v5sNdtmS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s1lupj3tnlt5pfgiwef2.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;if there is any change in the configuration file, like changing the port number or the document root, the change won't be applied the next time the playbook runs. &lt;/p&gt;

&lt;p&gt;This can be solved by replacing &lt;strong&gt;started&lt;/strong&gt; with &lt;strong&gt;restarted&lt;/strong&gt;, but that causes another problem: every time you run the playbook, the service restarts, because Ansible has &lt;strong&gt;no intelligence&lt;/strong&gt; here to check whether anything changed, and this &lt;strong&gt;consumes&lt;/strong&gt; more resources. &lt;/p&gt;

&lt;p&gt;Ideally, the restart should not run every time; it should run only when the configuration file actually changes. &lt;/p&gt;

&lt;p&gt;To solve this challenge, we are going to use &lt;strong&gt;Ansible handlers&lt;/strong&gt;. A handler runs &lt;strong&gt;only&lt;/strong&gt; when it gets &lt;em&gt;notified&lt;/em&gt;.&lt;/p&gt;
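&lt;p&gt;A minimal sketch of the idea (task names, paths, and the port value are illustrative, not the exact playbook in the screenshots):&lt;/p&gt;

```yaml
# Sketch of an Ansible play using a handler (illustrative names/paths).
- hosts: webservers
  tasks:
    - name: Change the HTTPD port in the config file
      replace:
        path: /etc/httpd/conf/httpd.conf
        regexp: 'Listen 80'
        replace: 'Listen 8080'
      notify: Restart HTTPD   # fires only if this task reports "changed"

    - name: Start HTTPD
      service:
        name: httpd
        state: started        # idempotent: does nothing if already running

  handlers:
    - name: Restart HTTPD
      service:
        name: httpd
        state: restarted      # runs at most once, only when notified
```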

&lt;h2&gt;
  
  
  Code with Explanation
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FJ02q0y2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hkahlx16x9to5f2spd7c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FJ02q0y2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hkahlx16x9to5f2spd7c.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here, the Installation of the HTTPD web server is done using &lt;strong&gt;yum module&lt;/strong&gt; and Copying the files to its document root by using &lt;strong&gt;copy module&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;So far, so simple.&lt;/p&gt;

&lt;p&gt;Then we change the &lt;strong&gt;default&lt;/strong&gt; port number of the HTTPD server in the managed node's configuration file using a &lt;strong&gt;Regular Expression (regex)&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cedMFQFh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k3tm4o539egnluxwdtms.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cedMFQFh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k3tm4o539egnluxwdtms.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We also use the &lt;strong&gt;notify&lt;/strong&gt; keyword, which triggers the restart operation. &lt;br&gt;
&lt;strong&gt;Note&lt;/strong&gt;: notify signals the &lt;strong&gt;restart handler&lt;/strong&gt; only when the replacement task actually changes something; otherwise it does not send the trigger to restart HTTPD.&lt;/p&gt;

&lt;p&gt;If there is no change, then "Start HTTPD" runs and simply starts the service.&lt;/p&gt;

&lt;p&gt;In this way, we save &lt;strong&gt;CPU &amp;amp; RAM&lt;/strong&gt; resources: HTTPD restarts only when there is a need, not every time the playbook is executed.  &lt;/p&gt;

&lt;p&gt;You can find these YAML scripts on my &lt;strong&gt;GitHub&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Github link:&lt;/strong&gt; &lt;a href="https://github.com/vishnuswmech/Providing-Idempotence-property-to-HTTPD-Service-restart-using-Ansible-handlers.git"&gt;https://github.com/vishnuswmech/Providing-Idempotence-property-to-HTTPD-Service-restart-using-Ansible-handlers.git&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you have queries, you can also connect with me through:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mail:&lt;/strong&gt; &lt;a href="mailto:vishnuanand97udt@gmail.com"&gt;vishnuanand97udt@gmail.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Linkedin:&lt;/strong&gt; &lt;a href="https://www.linkedin.com/in/sri-vishnuvardhan/"&gt;https://www.linkedin.com/in/sri-vishnuvardhan/&lt;/a&gt; &lt;/p&gt;

</description>
    </item>
    <item>
      <title>Launching the Web server and deploying the code on top of the Docker by using Ansible Playbook</title>
      <dc:creator>Sri Vishnuvardhan A</dc:creator>
      <pubDate>Mon, 29 Mar 2021 11:36:40 +0000</pubDate>
      <link>https://dev.to/vishnuswmech/launching-the-web-server-and-deploying-the-code-on-top-of-the-docker-by-using-ansible-playbook-3mcm</link>
      <guid>https://dev.to/vishnuswmech/launching-the-web-server-and-deploying-the-code-on-top-of-the-docker-by-using-ansible-playbook-3mcm</guid>
      <description>&lt;p&gt;In this article,  We are going to configuring the &lt;strong&gt;Apache Web server&lt;/strong&gt; and deploying the code on it on the docker container in the managed node by using &lt;strong&gt;ansible-playbook&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Before jumping to the practical, let's get familiar with Ansible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ansible
&lt;/h2&gt;

&lt;p&gt;It is an open-source tool mainly used for &lt;strong&gt;configuration management&lt;/strong&gt; and &lt;strong&gt;application deployment&lt;/strong&gt;, and in some cases also for &lt;strong&gt;software provisioning&lt;/strong&gt;. Ansible is written in &lt;strong&gt;Python&lt;/strong&gt; and uses the &lt;strong&gt;SSH protocol&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It uses &lt;strong&gt;YAML&lt;/strong&gt; files for the configuration management of the system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7V_UIGqK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1616993554083/8dVhQvhJDq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7V_UIGqK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1616993554083/8dVhQvhJDq.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can configure multiple systems while sitting at one system. The systems that need configuration are called managed nodes, and the system that configures those managed nodes is called the controller node. Typically, there will be only &lt;strong&gt;one&lt;/strong&gt; controller node configuring &lt;strong&gt;"N"&lt;/strong&gt; managed nodes.&lt;/p&gt;

&lt;p&gt;Ansible has an &lt;strong&gt;agentless architecture&lt;/strong&gt;, which means the software that performs the configuration needs to be installed only on the single controller node. This is a core competency unique to Ansible, and it has helped Ansible reach a market share of &lt;strong&gt;74.67%&lt;/strong&gt; in the &lt;strong&gt;configuration-management&lt;/strong&gt; market.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5VjUjMUt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1616993840825/6En92Jlkx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5VjUjMUt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1616993840825/6En92Jlkx.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above figure reflects the growing interest in using Ansible for the configuration management of managed nodes.&lt;/p&gt;

&lt;p&gt;Let's get into the practical.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; I used &lt;strong&gt;Amazon Linux 2 EC2&lt;/strong&gt; instances for this practical, with &lt;strong&gt;httpd&lt;/strong&gt; as the container image.&lt;/p&gt;

&lt;h2&gt;
  
  
  Controller node setup
&lt;/h2&gt;

&lt;p&gt;The first step is to install the Ansible package on the controller node. Since Ansible is written in Python, we can install it by using the pip3 command.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;yum install python3 -y&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;pip3 install ansible&lt;/code&gt; &lt;/p&gt;

&lt;p&gt;Then create an inventory file, which contains all the metadata of the managed nodes you want to configure. You can find the inventory file I configured in the figure below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--j0EqAYgm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1616994812027/ipnh6kbF0.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--j0EqAYgm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1616994812027/ipnh6kbF0.jpeg" alt="1 vim ip.JPG"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After creating the inventory file, you have to &lt;strong&gt;register&lt;/strong&gt; it in the &lt;strong&gt;configuration file&lt;/strong&gt;. Ansible's configuration file is located at &lt;code&gt;/etc/ansible/ansible.cfg&lt;/code&gt;. If it is not there, you can create the ansible folder using the &lt;code&gt;mkdir&lt;/code&gt; command and ansible.cfg using the &lt;code&gt;touch&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sd76zcku--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1616995040233/vHVBRtFDE.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sd76zcku--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1616995040233/vHVBRtFDE.jpeg" alt="config file.JPG"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above figure shows the configuration file and how to update the inventory in it.&lt;/p&gt;
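&lt;p&gt;A minimal ansible.cfg of the kind shown above might look like this; the inventory path is whatever location you chose for your file, and host_key_checking is optional but convenient on fresh instances:&lt;/p&gt;

```shell
# Sketch of a minimal ansible.cfg (adjust the inventory path to your setup).
mkdir -p /tmp/ansible-demo
printf '%s\n' \
  '[defaults]' \
  'inventory = /root/ip.txt' \
  'host_key_checking = False' \
  > /tmp/ansible-demo/ansible.cfg
cat /tmp/ansible-demo/ansible.cfg
```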

&lt;p&gt;After that, you can use the command below to verify that the controller node can reach the managed nodes.&lt;br&gt;
&lt;code&gt;ansible all -m ping&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vC2Wsn1K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1616995254520/xoGPFKV8E.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vC2Wsn1K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1616995254520/xoGPFKV8E.jpeg" alt="2 ping.JPG"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above figure shows that the controller node has a reliable connection with the managed nodes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Practical
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Steps involved
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Configuring the Docker repo&lt;/li&gt;
&lt;li&gt;Creating a directory for the mount&lt;/li&gt;
&lt;li&gt;Copying the file to that directory&lt;/li&gt;
&lt;li&gt;Python3 installation&lt;/li&gt;
&lt;li&gt;Installation of the Docker module&lt;/li&gt;
&lt;li&gt;Installation of iptables&lt;/li&gt;
&lt;li&gt;Docker installation&lt;/li&gt;
&lt;li&gt;Starting/enabling the Docker service&lt;/li&gt;
&lt;li&gt;Launching the Docker container&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  1. Configuring the Docker repo
&lt;/h2&gt;

&lt;p&gt;For the installation of Docker, the first step is to create a &lt;strong&gt;yum repo&lt;/strong&gt; called docker.repo. Using this repo, we can install Docker on our OS. &lt;br&gt;
 &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--L8FGl4S9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1617000088862/MIop1FxDW.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--L8FGl4S9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1617000088862/MIop1FxDW.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Creating directory for mount
&lt;/h2&gt;

&lt;p&gt;Usually code is stored in files, and files are stored in directories. Deploying code in a Docker container requires &lt;strong&gt;storage&lt;/strong&gt;, which we provide by &lt;strong&gt;mounting&lt;/strong&gt; the Docker container's storage onto this directory. That way, whenever you want to deploy a webapp in the container, you can store its program files in this base-OS directory.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--e20Aaeo2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1617001007796/XnlgDlmTP.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--e20Aaeo2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1617001007796/XnlgDlmTP.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Copy
&lt;/h2&gt;

&lt;p&gt;After creating the directory, the next step is to copy the file you want to deploy on the webserver into the above-mentioned directory.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kTrUHAJ---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1617001061732/DyBRlxlJx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kTrUHAJ---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1617001061732/DyBRlxlJx.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Python3 installation
&lt;/h2&gt;

&lt;p&gt;The next step is to install Python 3 using the yum module, since Ansible modules execute with Python on the managed node and we will need pip to install the Python docker module in the next step.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yFRvwwTH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1617001108153/49WMWeXzp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yFRvwwTH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1617001108153/49WMWeXzp.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Installation of Docker module
&lt;/h2&gt;

&lt;p&gt;Then we have to install the docker module for Python by using pip.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BQXBtWgZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1617001144069/sifz7D8GP.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BQXBtWgZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1617001144069/sifz7D8GP.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Installing IP tables
&lt;/h2&gt;

&lt;p&gt;In some cases this step is important, since iptables is required to start the Docker service on some systems: some versions of Docker do not ship with iptables pre-installed. To avoid those conflicts, we install iptables using the yum module.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VOL-hCss--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r8mnqbz810lhfetdqz5w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VOL-hCss--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r8mnqbz810lhfetdqz5w.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  7. Docker Installation
&lt;/h2&gt;

&lt;p&gt;The next step is to install Docker by using the &lt;strong&gt;yum&lt;/strong&gt; module.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--30ZHHj6Z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1617001243687/x0qLR3gbN.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--30ZHHj6Z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1617001243687/x0qLR3gbN.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  8. Start and make Docker service permanent
&lt;/h2&gt;

&lt;p&gt;Then we have to start the Docker &lt;strong&gt;service&lt;/strong&gt; and enable it so that it is not stopped by a system &lt;strong&gt;reboot&lt;/strong&gt;.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FfL16Ix3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1617001300907/i5hMGGE5i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FfL16Ix3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1617001300907/i5hMGGE5i.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  9. Pull image from DockerHub
&lt;/h2&gt;

&lt;p&gt;We pull the httpd image from Docker Hub by using the &lt;strong&gt;docker pull&lt;/strong&gt; command.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--f-pEzsws--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1617001336605/t1dGXU_Zg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--f-pEzsws--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1617001336605/t1dGXU_Zg.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  10. Launching Container using docker image
&lt;/h2&gt;

&lt;p&gt;Finally, we launch a container named &lt;strong&gt;httpd140&lt;/strong&gt; with port &lt;strong&gt;8040&lt;/strong&gt; exposed, and we attach the httpd container's document root to the base-OS (host) directory, since only files in the &lt;strong&gt;document root&lt;/strong&gt; are deployed to the webserver.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eswl-R3m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1617001372022/EMHKetvgT.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eswl-R3m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1617001372022/EMHKetvgT.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Output
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3nv8Wkyv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1617001521382/ol0U-CBTL.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3nv8Wkyv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1617001521382/ol0U-CBTL.jpeg" alt="7 yml output 1.JPG"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7Lx_4bR_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1617001545153/rdSs9Fwje.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7Lx_4bR_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1617001545153/rdSs9Fwje.jpeg" alt="8 yml output 2.JPG"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zTeulymH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1617001565196/H6gTnvkUm.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zTeulymH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1617001565196/H6gTnvkUm.jpeg" alt="expose checking.JPG"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can check these scripts on my GitHub.&lt;br&gt;
GitHub URL: &lt;a href="https://github.com/vishnuswmech/Launching-the-Web-server-and-deploying-the-code-on-top-of-the-Docker-by-using-Ansible-Playbook.git"&gt;https://github.com/vishnuswmech/Launching-the-Web-server-and-deploying-the-code-on-top-of-the-Docker-by-using-Ansible-Playbook.git&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are done!! Thank you all for reading. Stay tuned for upcoming articles!! Suggestions are always welcome!!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Setting up your own Network that can ping Google but not able to ping Facebook in the same system without using a firewall</title>
      <dc:creator>Sri Vishnuvardhan A</dc:creator>
      <pubDate>Sat, 27 Mar 2021 12:34:17 +0000</pubDate>
      <link>https://dev.to/vishnuswmech/setting-up-your-own-network-that-can-ping-google-but-not-able-to-ping-facebook-in-the-same-system-without-using-a-firewall-1c8p</link>
      <guid>https://dev.to/vishnuswmech/setting-up-your-own-network-that-can-ping-google-but-not-able-to-ping-facebook-in-the-same-system-without-using-a-firewall-1c8p</guid>
<description>&lt;p&gt;The first question that arises in your mind after seeing this title is “Why would I want to block Facebook? What is the need for this?”. The answer is that maybe you have kids at home who tell you they are attending online classes but are actually wasting their precious time on social networks. This has become more common since the pandemic began.&lt;/p&gt;

&lt;p&gt;Ok. Let's come to the point.&lt;/p&gt;

&lt;p&gt;Here we are going to see how to block Facebook while ensuring access to Google on the same system. Maybe you have seen this setup on your college systems, where you are not allowed to use certain websites.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Note:&lt;/em&gt;&lt;/strong&gt; Here I am using Red Hat Enterprise Linux (RHEL8), hosted on Oracle VirtualBox.&lt;/p&gt;

&lt;p&gt;But before starting this practical, you should know some basic Linux Networking concepts and terminologies.&lt;/p&gt;

&lt;h3&gt;
  
  
  IP address
&lt;/h3&gt;

&lt;p&gt;In simple words, it is like your mobile number, which is used to identify you uniquely. Every computer has its own unique IP address. IP stands for “Internet Protocol”, which is the set of rules governing the format of data sent via the internet or a local network.&lt;/p&gt;

&lt;p&gt;IP addresses are not random. They are mathematically produced and allocated by the Internet Assigned Numbers Authority (IANA), a division of the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is a non-profit organization that was established in the United States in 1998 to help maintain the security of the internet and allow it to be usable by all.&lt;/p&gt;

&lt;p&gt;Each time anyone registers a domain on the internet, they go through a domain name registrar, who pays a small fee to ICANN to register the domain.&lt;/p&gt;

&lt;p&gt;IP addresses come in two types based on their length, namely IPv4 and IPv6.&lt;/p&gt;

&lt;h3&gt;
  
  
  IPv4
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tiBEuwRR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yomr7bsdp0m7m2olhfn9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tiBEuwRR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yomr7bsdp0m7m2olhfn9.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above figure clearly explains IPv4. Its size is 32 bits, or 4 bytes. Each number in the set can range from 0 to 255, so the full IP addressing range goes from 0.0.0.0 to 255.255.255.255.&lt;/p&gt;

&lt;p&gt;That means it can support 2³² IP addresses in total, around 4.29 billion. That may seem like a lot, but all 4.29 billion IP addresses have now been assigned, leading to the address shortage issues we face today.&lt;/p&gt;

&lt;h3&gt;
  
  
  IPv6
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---S8Vi9_4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pxc0v8ldelgtp6dn9j10.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---S8Vi9_4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pxc0v8ldelgtp6dn9j10.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;IPv6 utilizes 128-bit Internet addresses. Therefore, it can support 2¹²⁸ Internet addresses — 340,282,366,920,938,463,463,374,607,431,768,211,456 of them to be exact. The number of IPv6 addresses is about 10²⁸ times the number of IPv4 addresses, so there are more than enough IPv6 addresses to allow Internet devices to expand for a very long time.&lt;/p&gt;

&lt;p&gt;You can find your system's IP address on RHEL8 by using the following command.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ifconfig enp0s3&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mBECsEEW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qtt51c22gy8u5lsmzav8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mBECsEEW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qtt51c22gy8u5lsmzav8.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here our IPv4 address is 192.168.43.97 and IPv6 address is fe80::ad91:551e:e05a:5ab8.&lt;/p&gt;

&lt;h3&gt;
  
  
  Netmask
&lt;/h3&gt;

&lt;p&gt;The netmask plays a major role in finding the range of IPs that can ping each other. It splits an IP address into two parts, namely the network ID and the host ID.&lt;/p&gt;

&lt;p&gt;For example, with an IP address of 192.168.100.1 and a subnet mask of 255.255.255.0, the network ID is 192.168.100.0 and the host ID is 1. With an IP of 192.168.100.1 and a subnet mask of 255.0.0.0, the network ID is 192 and the host ID is 168.100.1&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DrL79App--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/34o9g673an3z9l0uorhs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DrL79App--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/34o9g673an3z9l0uorhs.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the above example, the netmask is 255.255.255.0, and if we write the host portion in binary, it has 8 zeros, so 2⁸ = 256 IPs are available to connect.&lt;/p&gt;

&lt;h3&gt;
  
  
  CIDR
&lt;/h3&gt;

&lt;p&gt;Classless Inter-Domain Routing, or CIDR, was developed as an alternative to traditional subnetting. The idea is that you can add a specification in the IP address itself as to the number of significant bits that make up the routing or networking portion.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QoUhHWCS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yajtfwrvseiqthffer1x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QoUhHWCS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yajtfwrvseiqthffer1x.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For example, we could express the idea that the IP address 192.168.0.15 is associated with the netmask 255.255.255.0 by using the CIDR notation of 192.168.0.15/24. This means that the first 24 bits of the IP address given are considered significant for the network routing.&lt;/p&gt;

&lt;p&gt;In simple words, the CIDR prefix is the count of the total number of ones in the netmask.&lt;/p&gt;
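&lt;p&gt;That counting rule can be checked mechanically. The small function below, a sketch in plain shell, counts the 1-bits in each octet of a dotted netmask to produce the CIDR prefix:&lt;/p&gt;

```shell
# Convert a dotted netmask to its CIDR prefix by counting the 1-bits.
mask_to_cidr() {
  local cidr=0 octet
  for octet in $(echo "$1" | tr '.' ' '); do
    # Count the set bits of this octet.
    while [ "$octet" -gt 0 ]; do
      cidr=$((cidr + octet % 2))
      octet=$((octet / 2))
    done
  done
  echo "$cidr"
}
mask_to_cidr 255.255.255.0   # prints 24
mask_to_cidr 255.0.0.0       # prints 8
```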

&lt;h3&gt;
  
  
  Gateway
&lt;/h3&gt;

&lt;p&gt;A gateway is a router that provides access for IP packets into and/or out of the local network. The term “default gateway” is used to mean the router on your LAN which has the responsibility of being the first point of contact for traffic to computers outside the LAN.&lt;/p&gt;

&lt;p&gt;The default gateway IP of your system can be found by using the following command on RHEL8.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;route -n&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--y8s9TXtf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ii9wbxrllcmpf3nysyph.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--y8s9TXtf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ii9wbxrllcmpf3nysyph.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here, in the routing table, the gateway IP is 192.168.43.146. The destination 0.0.0.0 indicates that we can go anywhere on the Internet and access any website without restriction.&lt;/p&gt;

&lt;p&gt;The concepts explained so far are enough to understand this practical. Now comes the practical part.&lt;/p&gt;

&lt;p&gt;The first step is to delete one of the route rules in the routing table: the rule that permits the user to access any website. This is done by running the following command.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;route del -net 0.0.0.0 netmask 0.0.0.0 gw 192.168.43.146 enp0s3&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;After this, if you try to ping Google or Facebook, it won't be possible.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--f7E68WSK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cuiah625b13yfc1pex5e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--f7E68WSK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cuiah625b13yfc1pex5e.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For now, even if you have an internet connection, you feel like you are offline, because your system doesn't know the gateway address and so cannot send packets out.&lt;/p&gt;

&lt;p&gt;For this, you have to add one rule to your routing table granting access to Google only. This is done with the command below.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;route add -net googleip netmask 255.255.255.0 gw 192.168.43.146&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You can find the Google IP for your PC by running the command below.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;nslookup google.com&lt;/code&gt;&lt;/p&gt;
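&lt;p&gt;Since the route rule wants a network rather than a single host, a common approach is to take the IP that nslookup returns and zero its last octet to get a /24 rule. A sketch, using a hypothetical resolved address:&lt;/p&gt;

```shell
# Hypothetical IP returned by: nslookup google.com
GOOGLE_IP=142.250.77.46
# Zero the last octet to get the /24 network for the route rule.
GOOGLE_NET=${GOOGLE_IP%.*}.0
echo "route add -net $GOOGLE_NET netmask 255.255.255.0 gw 192.168.43.146"
```

&lt;p&gt;Google serves from many address ranges, so the rule may need refreshing when DNS returns a different IP.&lt;/p&gt;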

&lt;p&gt;After running these commands, you can notice that the Facebook IP does not ping, while at the same time the Google IP pings and you have good connectivity with Google.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--m6LJa950--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ivynd6tg57odngsslr7m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--m6LJa950--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ivynd6tg57odngsslr7m.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's it.&lt;/p&gt;

&lt;p&gt;Thank you all for reading. Stay tuned for more upcoming interesting articles!!&lt;/p&gt;

</description>
      <category>google</category>
      <category>networking</category>
      <category>netmask</category>
      <category>redhat</category>
    </item>
  </channel>
</rss>
