<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: mohammed afif ahmed</title>
    <description>The latest articles on DEV Community by mohammed afif ahmed (@afif_ahmed).</description>
    <link>https://dev.to/afif_ahmed</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F415360%2F25551331-31d2-454a-ae80-f330192ef0ae.jpg</url>
      <title>DEV Community: mohammed afif ahmed</title>
      <link>https://dev.to/afif_ahmed</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/afif_ahmed"/>
    <language>en</language>
    <item>
      <title>Everything you need to know about linux File System directories!</title>
      <dc:creator>mohammed afif ahmed</dc:creator>
      <pubDate>Tue, 15 Jun 2021 03:05:53 +0000</pubDate>
      <link>https://dev.to/afif_ahmed/everything-you-need-to-know-about-linux-directories-57ha</link>
      <guid>https://dev.to/afif_ahmed/everything-you-need-to-know-about-linux-directories-57ha</guid>
      <description>&lt;h2&gt;
  
  
  History of Linux and Windows
&lt;/h2&gt;

&lt;p&gt;Okay, before starting with the actual topic, let’s discuss what makes Linux different from Windows. If you are a Windows user, you have seen drives such as C, D, and E. Linux has no such drive letters; instead it has directories named /bin, /sbin, /usr, /etc, and so on.&lt;/p&gt;

&lt;p&gt;For new Linux users, here is a short history of how Linux and Windows evolved. Windows was originally installed on top of DOS (Disk Operating System), a command-line environment in which you could run programs, games, and so on. DOS used letters for removable disks such as floppy drives, i.e. A and B. When the hard drive was introduced, the letter C was assigned to the internal disk, and each additional disk got the next available letter.&lt;/p&gt;

&lt;p&gt;Microsoft gradually evolved its kernel so that Windows booted with less dependence on DOS, and eventually without DOS at all. Linux follows the Unix tradition, which is why it uses the forward slash as a path separator, unlike the backslash in Windows. It is also case sensitive. macOS behaves similarly because it shares a common Unix ancestry with Linux.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  So now let’s jump into the Linux filesystem.
&lt;/h2&gt;

&lt;p&gt;A filesystem controls how data is stored on and retrieved from a physical storage device such as an HDD or SSD. Its main purpose is to let users and the operating system organize files into a structure of directories so that data can be located and used efficiently.&lt;/p&gt;

&lt;p&gt;The OS acts as an intermediary that moves data to and from the storage device. Linux uses a directory tree to manage directories and files. The tree information is itself stored on a storage device, and the top of this tree is called the root file system or root directory.&lt;/p&gt;

&lt;p&gt;The root directory is the most important one: all other directories descend from it, and it contains everything needed for booting, repairing, and restoring the Linux system.&lt;/p&gt;

&lt;p&gt;The Linux directory structure is defined by the Filesystem Hierarchy Standard (FHS). We refer to nested directories by joining directory names with a forward slash (/), such as /var/log and /var/spool/mail. These are called paths.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;
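The top level of this tree is easy to inspect yourself; listing the root directory shows the sub-directories discussed below (exact entries vary by distro):

```shell
# List the children of the root directory: /bin, /etc, /home, /usr, /var, ...
ls /
```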

&lt;h3&gt;
  
  
  Now let us explore these sub-directories of the root directory one by one.
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.rs-online.com%2Fdesignspark%2Frel-assets%2Fdsauto%2Ftemp%2Fuploaded%2Flinux-filesystem.png%3Fw%3D1042" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.rs-online.com%2Fdesignspark%2Frel-assets%2Fdsauto%2Ftemp%2Fuploaded%2Flinux-filesystem.png%3Fw%3D1042" alt="lfs"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  /bin
&lt;/h3&gt;

&lt;p&gt;This subdirectory of the root stands for binaries and contains the essential executable programs needed for minimal functionality when booting or repairing the system. It holds core shell commands such as cp (copy), rm (remove), and ls, as well as programs that boot scripts may depend on. Bin directories can also be found in other parts of the file system tree, such as /usr/bin.&lt;br&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;
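As a quick check, you can list a few of these binaries and locate the one behind a command you use every day (paths vary by distro; on many modern systems /bin is a symlink to /usr/bin):

```shell
# Peek at a few essential user binaries
ls /bin | head -n 5

# Locate the executable the shell runs for `ls`
command -v ls
```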

&lt;h3&gt;
  
  
  /sbin
&lt;/h3&gt;

&lt;p&gt;This stands for system binaries, which a system administrator uses and which a standard user cannot run without elevated permissions. This directory, along with /bin, contains the files that must be accessible while running in single-user mode (a mode that boots you in as the root user so you can perform repairs, updates, and testing) rather than multi-user mode.&lt;br&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;
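Listing the directory gives a feel for what counts as a system binary; contents vary by distro, and actually running most of these tools requires root:

```shell
# Peek at administrative binaries reserved for the system administrator
ls /sbin | head -n 5
```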

&lt;h3&gt;
  
  
  /boot
&lt;/h3&gt;

&lt;p&gt;This directory contains the files responsible for booting up a Linux machine, whether it is Ubuntu, Kali, or Mint. Everything used before the Linux kernel itself is running, such as the bootloader configuration and the kernel image, is stored in the /boot directory.&lt;br&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  /dev
&lt;/h3&gt;

&lt;p&gt;This directory houses device-specific files. In Linux, everything is represented as a file or a directory, including hardware devices, as this command illustrates.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-la&lt;/span&gt; /dev 


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here we can see all the partitions on the system; /dev/cdrom, for example, represents the CD-ROM drive. The files nested here represent hardware devices, and writing to them affects the corresponding hardware. For instance, on older systems /dev/dsp represented the sound card, and anything you wrote to it, even with a simple cat, was played through the speakers.  &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; /boot/vminux &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /dev/dsp


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  /etc
&lt;/h3&gt;

&lt;p&gt;This directory contains your system-wide configuration, as opposed to settings for a particular user. For example, /etc/apt contains the sources list, which defines the repositories the system connects to, along with their various settings.&lt;br&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;
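Because /etc holds plain-text configuration, you can read it directly; for instance, most modern distros describe themselves in /etc/os-release:

```shell
# Configuration in /etc is plain text; this file identifies the distribution
cat /etc/os-release
```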

&lt;h3&gt;
  
  
  /lib, /lib32, /lib64
&lt;/h3&gt;

&lt;p&gt;These are the directories where libraries are stored. Libraries are files that applications use to perform various functions, and they are required by the binaries in /bin and /sbin.&lt;br&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  /media and /mnt (mount)
&lt;/h3&gt;

&lt;p&gt;These are the directories where other mounted drives, such as USB sticks, floppy disks, or external hard drives, appear. Originally there was only /mnt, but nowadays most Linux distros automatically mount devices under the /media directory. So why two directories for mounting? By convention, when mounting a filesystem manually we use the /mnt directory and leave the /media directory to the operating system.&lt;br&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;
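A minimal sketch of that convention, assuming a hypothetical USB stick that appears as /dev/sdb1 (the device name will differ on your machine); only the df line is safe to run as-is:

```shell
# See what is currently mounted and where
df -h | head -n 5

# Manually mounting the hypothetical USB stick under /mnt (requires root):
# sudo mkdir -p /mnt/usb
# sudo mount /dev/sdb1 /mnt/usb
# ...and unmounting it when finished:
# sudo umount /mnt/usb
```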

&lt;h3&gt;
  
  
  /opt (this is the optional folder)
&lt;/h3&gt;

&lt;p&gt;This folder usually contains manually installed software from third-party vendors. It is also the conventional place to install software you have written yourself.&lt;br&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  /proc
&lt;/h3&gt;

&lt;p&gt;This directory contains pseudo files that expose information about system processes and resources. Every process has a directory (named after its process ID) containing all the relevant information about that process; none of this is saved on the hard drive. The files here are generated on the fly by the kernel when you read them. For example: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; /proc/cpuinfo 


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This command prints out the information about the CPU.&lt;br&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;
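The per-process directories mentioned above work the same way; /proc/self is a handy alias for the directory of whichever process reads it:

```shell
# Inspect the pseudo files describing the current process
cat /proc/self/status | head -n 3
ls /proc/self | head -n 5
```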

&lt;h3&gt;
  
  
  /root
&lt;/h3&gt;

&lt;p&gt;This is the home directory of the root user. Unlike the home directories of regular users, it does not reside under /home. You can store files here, but you need root access to do so. Its location directly under the root also means root always has access to its home folder, even if /home is on a separate partition that fails to mount.&lt;br&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  /run
&lt;/h3&gt;

&lt;p&gt;This directory is relatively new, and various distributions use it in different ways. It is a tmpfs file system, which means it lives in memory and everything in it is gone when the system is rebooted or shut down. It stores runtime information that processes need early in the boot process.&lt;br&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;
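You can confirm that /run is memory-backed by asking df for its filesystem type (expect tmpfs on most modern distros):

```shell
# Show the filesystem type backing /run
df -hT /run
```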

&lt;h3&gt;
  
  
  /srv
&lt;/h3&gt;

&lt;p&gt;This is called the service directory, where data served by the system is stored. For most desktop users it will be empty, but if you run a web server or an FTP server, this is where you would store the files accessed by other users. Keeping that data in its own directory at the root of the drive also makes it easier to secure.&lt;br&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  /sys
&lt;/h3&gt;

&lt;p&gt;This is called the system directory, and it has been around for quite some time. It is a way of communicating with the kernel. Like /run, it is not physically written to disk: it is generated every time the machine boots, so you wouldn't save anything here, and nothing is mounted here by the user.&lt;br&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;
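For example, the kernel exposes device and driver attributes under /sys as ordinary-looking files and directories:

```shell
# Browse kernel objects exposed through /sys
ls /sys/class | head -n 5
```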

&lt;h3&gt;
  
  
  /tmp
&lt;/h3&gt;

&lt;p&gt;This is the temp (temporary) directory, where programs store files they only need during a session. For example, when you write a document in an editor such as VS Code, it may regularly save a temporary copy of what you are writing here. If the program crashes, you can look here to see whether there is a recently saved copy you can restore.&lt;/p&gt;

&lt;p&gt;When you reboot your computer, this folder is normally emptied. Some files or directories may still be present if the machine was unable to remove them. This isn't a concern unless hundreds of files are taking up disk space, in which case you may need to log in as root and delete them manually.&lt;br&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;
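The conventional way for a script to claim its own scratch file in /tmp is mktemp, which generates a unique name so concurrent programs don't collide:

```shell
# Create a uniquely named temporary file, use it, then clean up
tmpfile=$(mktemp)
echo "scratch data" > "$tmpfile"
cat "$tmpfile"    # prints: scratch data
rm "$tmpfile"
```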

&lt;h3&gt;
  
  
  /usr
&lt;/h3&gt;

&lt;p&gt;In contrast to the /bin directory, which is used by the system and system administrator for maintenance, this is the user application space, where programs used by ordinary users are installed. Any program installed here is considered non-essential for basic system operation. The name is sometimes expanded as Unix System Resources. &lt;/p&gt;

&lt;p&gt;Installed programs can be found in a variety of locations, including /usr/bin, /usr/sbin, and /usr/local. The /usr/local directory is where most programs installed from source code end up, while /usr/share holds architecture-independent shared data such as documentation and icons.&lt;br&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  /var
&lt;/h3&gt;

&lt;p&gt;This is the variable directory. It contains files and directories that are expected to grow in size over time, i.e. dynamic data. /var/log contains system and application log files, which grow as you use the system. Other items kept here include mail databases and temporary storage for printer queues under /var/spool.&lt;br&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;
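You can watch this growth directly by checking how much space the usual suspects under /var occupy (entries and sizes vary per system):

```shell
# Summarize disk usage of common /var subdirectories
du -sh /var/log 2>/dev/null || true
ls /var
```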

&lt;h3&gt;
  
  
  /home
&lt;/h3&gt;

&lt;p&gt;Each user has a directory here. The /home directory is where you store your files and documents. Each user can access only their own folder unless they have admin permissions.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;
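Your shell records your own home directory in the HOME variable, and the directory's permission bits are what keep other users out:

```shell
# Show the current user's home directory and its permissions
echo "$HOME"
ls -ld "$HOME"
```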

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;And, phew...! We have finally reached the end. In this article, we covered all the main Linux file system directories. We also saw what makes the Linux FS different from the Windows FS and how Linux evolved. I certainly hope you now know what each directory in the Linux FS means, what it actually does, and the role it plays in the OS.&lt;/p&gt;

&lt;p&gt;liked the post? &lt;br&gt;
&lt;a href="https://ko-fi.com/I2I639WWJ" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fko-fi.com%2Fimg%2Fgithubbutton_sm.svg" alt="ko-fi"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>linux</category>
      <category>tutorial</category>
      <category>os</category>
      <category>windows</category>
    </item>
    <item>
      <title>Introduction to code-splitting in reactjs.</title>
      <dc:creator>mohammed afif ahmed</dc:creator>
      <pubDate>Tue, 25 May 2021 11:00:46 +0000</pubDate>
      <link>https://dev.to/afif_ahmed/introduction-to-code-splitting-in-reactjs-2fkk</link>
      <guid>https://dev.to/afif_ahmed/introduction-to-code-splitting-in-reactjs-2fkk</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Imagine you have an application with many different components and pages, with routes for those pages. When you run your application, it takes a long time to load and display content. So what is the problem, and how can it be solved?&lt;/p&gt;

&lt;p&gt;That's where code-splitting comes in: it makes sure that only the components currently displayed on the webpage are fetched. For example, suppose you have a &lt;code&gt;Homepage&lt;/code&gt; component and an &lt;code&gt;AboutUs&lt;/code&gt; component. The Homepage component is displayed on the root route, i.e. &lt;code&gt;/&lt;/code&gt;, and AboutUs at &lt;code&gt;/about&lt;/code&gt;. When you're on the home route you don't need the AboutUs JavaScript, right? Yet it is fetched on the initial load, which makes the site slow to load and eventually costs you viewers.&lt;/p&gt;

&lt;p&gt;We will look at an example site and how to perform code-splitting with just a few lines of code.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Let's get started:
&lt;/h3&gt;

&lt;p&gt;Fork or clone (might give a star as well 😀) &lt;a href="https://github.com/afif1400/chamb"&gt;this&lt;/a&gt; repo from GitHub; it is a single-page application built using React. You can apply code-splitting anywhere in the components, for example wherever you import a third-party library. But an easier place to start is at the route level, where you write the routing rules.&lt;br&gt;
In the repo you cloned, navigate to &lt;code&gt;/src/MainComponent.js&lt;/code&gt; to see all the routes.&lt;/p&gt;

&lt;p&gt;We have a route called &lt;code&gt;/pricing&lt;/code&gt; which renders the PricingPage.js component, we will split this particular component.&lt;/p&gt;

&lt;p&gt;But, before applying code-splitting let's see what the browser fetches or tries to load.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Start the app
To start the app, run the below commands (assuming you have some React knowledge)
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;npm start 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The app should now be live at &lt;a href="http://localhost:3000"&gt;http://localhost:3000&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In Chrome dev tools, open the network tab and select JS as a filter; you can see that on the initial page load the browser fetches bundle.js.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BCE4MrYn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/charcha/image/upload/v1621934524/netowrk-before_wlgdug.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BCE4MrYn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/charcha/image/upload/v1621934524/netowrk-before_wlgdug.png" alt="network-before"&gt;&lt;/a&gt;This is where react takes all the javascript written in the application and it into this file(it contains all the components). &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JRMBunav--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/charcha/image/upload/v1621935401/index_gq3ize.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JRMBunav--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/charcha/image/upload/v1621935401/index_gq3ize.png" alt="index"&gt;&lt;/a&gt;The index page contains all the js. &lt;br&gt;
As a result, the page load is slow. We are going to exclude some components from that bundle and instead fetch them only when needed, in this case when someone navigates to &lt;strong&gt;/pricing&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The latest version of React uses a combination of two things to accomplish this: &lt;code&gt;Suspense&lt;/code&gt; and &lt;code&gt;React.lazy&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Replace the code in MainComponent.js with the below code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;React&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Component&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;Suspense&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;lazy&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;BrowserRouter&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;Route&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;react-router-dom&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;HomePage&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./HomePage&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;ProductDetailsPage&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./ProductDetailsPage&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;HowItWorks&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./HowItWorks&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;PricingPage&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;lazy&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;import&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./PricingPage&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nx"&gt;MainComponent&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nx"&gt;Component&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;render&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Suspense&lt;/span&gt; &lt;span class="nx"&gt;fallback&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;h1&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nx"&gt;Loading&lt;/span&gt;&lt;span class="p"&gt;.....&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/h1&amp;gt;}&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;                &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;BrowserRouter&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
                    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Route&lt;/span&gt; &lt;span class="nx"&gt;exact&lt;/span&gt; &lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="nx"&gt;component&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;HomePage&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;                    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Route&lt;/span&gt;
                        &lt;span class="nx"&gt;exact&lt;/span&gt;
                        &lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/products/:productId&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
                        &lt;span class="nx"&gt;component&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;ProductDetailsPage&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
                    &lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;                    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Route&lt;/span&gt; &lt;span class="nx"&gt;exact&lt;/span&gt; &lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/pricing&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="nx"&gt;component&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;PricingPage&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;/&amp;gt;&lt;/span&gt;
                    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Route&lt;/span&gt; &lt;span class="nx"&gt;exact&lt;/span&gt; &lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/working&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="nx"&gt;component&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;HowItWorks&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;                &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/BrowserRouter&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;            &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/Suspense&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;        &lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="nx"&gt;MainComponent&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now go back to the network tab and check the sources: when you navigate to &lt;code&gt;/pricing&lt;/code&gt;, you can see one more file, 1.chunk.js,&lt;br&gt;
which contains only the &lt;code&gt;PricingPage&lt;/code&gt; component.&lt;/p&gt;

&lt;p&gt;Also, when you run npm run build, the split components are built into separate chunks instead of being bundled all together, as they would be without code-splitting. Below are the build logs from before and after applying code-splitting.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;File sizes after &lt;span class="nb"&gt;gzip&lt;/span&gt;:

  76.03 KB  build/static/js/2.d23bfa23.chunk.js
  28.43 KB  build/static/js/main.b229bef3.chunk.js
  770 B     build/static/js/runtime-main.e43a4c19.js
  306 B     build/static/css/main.0fc1fc64.chunk.css
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;File sizes after &lt;span class="nb"&gt;gzip&lt;/span&gt;:

  76.03 KB &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nt"&gt;-2&lt;/span&gt; B&lt;span class="o"&gt;)&lt;/span&gt;    build/static/js/2.8bab3079.chunk.js
  28.07 KB &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nt"&gt;-368&lt;/span&gt; B&lt;span class="o"&gt;)&lt;/span&gt;  build/static/js/main.b6b9360c.chunk.js
  1.18 KB            build/static/js/3.58e0fecc.chunk.js
  1.16 KB &lt;span class="o"&gt;(&lt;/span&gt;+418 B&lt;span class="o"&gt;)&lt;/span&gt;   build/static/js/runtime-main.01e4ec24.js
  306 B              build/static/css/main.0fc1fc64.chunk.css
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, there is one extra file, which is the JavaScript for the PricingPage component; you can also see the reduction in the size of the other files, because the pricing component has been excluded from them.&lt;/p&gt;

&lt;p&gt;And... that's a wrap. I hope you have learned how to go about splitting a React app; you can now apply the same approach to your own application.&lt;br&gt;
We looked at code-splitting with react-router in a create-react-app template, which uses webpack under the hood, but you can apply the same idea with other bundlers such as Parcel.&lt;/p&gt;

&lt;p&gt;Liked the post?&lt;br&gt;
&lt;a href="https://ko-fi.com/I2I639WWJ"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FKanlt08--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://ko-fi.com/img/githubbutton_sm.svg" alt="ko-fi"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>node</category>
      <category>javascript</category>
      <category>react</category>
    </item>
    <item>
      <title>Using multiple versions of nodejs.</title>
      <dc:creator>mohammed afif ahmed</dc:creator>
      <pubDate>Mon, 24 May 2021 18:35:35 +0000</pubDate>
      <link>https://dev.to/afif_ahmed/using-multiple-versions-of-nodejs-37bd</link>
      <guid>https://dev.to/afif_ahmed/using-multiple-versions-of-nodejs-37bd</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;We often need different Node.js versions for different projects, and they are hard to manage by hand. Fortunately, there is a tool called NVM (Node Version Manager) that helps you manage your Node versions and switch between them per project.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Windows installation
&lt;/h3&gt;

&lt;p&gt;Unfortunately the nvm project is Linux/macOS only, but there is a very similar project by Corey Butler known as nvm-windows. Download the nvm-setup.zip file from its GitHub releases page and install it the usual Windows way.&lt;br&gt;
After installation you can use the same commands as on Linux/macOS.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Linux installation
&lt;/h3&gt;

&lt;p&gt;In your terminal, use curl to download and run the install script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;-o-&lt;/span&gt; https://raw.githubusercontent.com/nvm-sh/v0.34.0/install.sh | 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After installation you need to add a bit of configuration to your .bashrc, .zshrc, or similar shell startup file. Open the file and append the lines below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;NVM_DIR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;XDG_CONFIG_HOME&lt;/span&gt;&lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="k"&gt;:-&lt;/span&gt;&lt;span class="nv"&gt;$HOME&lt;/span&gt;&lt;span class="p"&gt;/.&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;nvm"&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; :&lt;span class="nv"&gt;$NVM_DIR&lt;/span&gt;/nvm.sh&lt;span class="s2"&gt;" ] &amp;amp;&amp;amp; &lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="s2"&gt; "&lt;/span&gt;&lt;span class="nv"&gt;$NVM_DIR&lt;/span&gt;/nvm.sh&lt;span class="s2"&gt;"
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This sets the path to the installation directory.&lt;br&gt;
Reload your terminal for the changes to take effect, and we are good to go.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Usage
&lt;/h3&gt;

&lt;p&gt;So let's jump into the terminal and look at some of the commands.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To install the latest version of node
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;nvm &lt;span class="nb"&gt;install &lt;/span&gt;node
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Install specific version
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;nvm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;node_verion&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;#example&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;nvm &lt;span class="nb"&gt;install &lt;/span&gt;10.0.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;To list all the installed versions
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;nvm &lt;span class="nb"&gt;ls&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Switching between different node versions
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# to use latest version&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;nvm use node  

&lt;span class="c"&gt;# for a specific verion&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;nvm use 10.0.0  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Uninstall a Node version
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ nvm uninstall {node_version}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;As this was an introductory post, we looked at some of the most-used commands. nvm is a very useful tool if you are working on multiple projects that require different versions of Node. Have a look at the official nvm repo at &lt;a href="https://github.com/nvm-sh/nvm"&gt;https://github.com/nvm-sh/nvm&lt;/a&gt; for a thorough reference.&lt;/p&gt;

&lt;p&gt;Liked the content?&lt;br&gt;
&lt;a href="https://ko-fi.com/I2I639WWJ"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FKanlt08--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://ko-fi.com/img/githubbutton_sm.svg" alt="ko-fi"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>node</category>
      <category>cli</category>
      <category>nvm</category>
    </item>
    <item>
      <title>Everything you need to know about Docker Swarm</title>
      <dc:creator>mohammed afif ahmed</dc:creator>
      <pubDate>Sun, 23 May 2021 05:16:10 +0000</pubDate>
      <link>https://dev.to/afif_ahmed/everything-you-need-to-know-about-docker-swarm-3dck</link>
      <guid>https://dev.to/afif_ahmed/everything-you-need-to-know-about-docker-swarm-3dck</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Docker is a free and open platform for building, delivering, and running apps. Docker lets you decouple your applications from your infrastructure, allowing you to deliver apps quickly and to manage the infrastructure the same way you manage your apps. One characteristic of Docker is that you can run multiple containers in the same host environment. &lt;br&gt;
However, a single Docker engine can maintain only a certain number of containers, because it runs on a single node. But what if you want to work with thousands of containers?&lt;/p&gt;

&lt;p&gt;This is where Docker swarm comes into the picture. A Docker swarm presents a large number of nodes (each running the Docker engine) as a single virtual cluster. These nodes can communicate with each other, helping developers maintain multiple nodes in a single environment.&lt;/p&gt;
&lt;h2&gt;
  
  
  Docker swarm
&lt;/h2&gt;

&lt;p&gt;A swarm consists of multiple Docker hosts which serve as managers (to handle membership and delegation) and workers (which run swarm services). A Docker host may be a manager, a worker, or both at the same time. You define the desired state of a service when you create it (the number of replicas, the network and storage resources available to it, the ports the service exposes to the outside world, and more), and Docker works to maintain that desired state. If a worker node becomes inaccessible, Docker schedules its tasks on other nodes. A task, as opposed to a standalone container, is a running container that is part of a swarm service and managed by a swarm manager.&lt;/p&gt;

&lt;p&gt;When you're in swarm mode, you can change the configuration of a service, including the networks and volumes it's connected to, without having to restart it manually. Docker will update the configuration, stop any service tasks with out-of-date configurations, and start new ones that match the desired configuration.&lt;br&gt;
So, the difference between swarm mode and standalone Docker containers is that in swarm mode only managers can manage containers or the swarm, unlike standalone containers, which can be started on any daemon. A daemon can participate in a swarm as a manager, a worker, or both.&lt;/p&gt;
&lt;h2&gt;
  
  
  Docker swarm architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://camo.githubusercontent.com/2414d3a74f17d5601aa0abb140ca97ea3eab8ad4/68747470733a2f2f692e706f7374696d672e63632f4d4b4850383959792f737761726d2e6a7067" class="article-body-image-wrapper"&gt;&lt;img src="https://camo.githubusercontent.com/2414d3a74f17d5601aa0abb140ca97ea3eab8ad4/68747470733a2f2f692e706f7374696d672e63632f4d4b4850383959792f737761726d2e6a7067" alt="architecture diagram"&gt;&lt;/a&gt;&lt;br&gt;
Previously, we used terms such as manager, worker, and node; now let us understand what they mean and how a Docker swarm works.&lt;/p&gt;
&lt;h3&gt;
  
  
  Node
&lt;/h3&gt;

&lt;p&gt;A node is a Docker engine instance that is part of the swarm; it can also be thought of as a Docker server. One or more nodes may run on a single physical machine or cloud server, but in production the nodes of a swarm cluster are typically spread over several machines in the cloud.&lt;br&gt;
There are two types of nodes: manager nodes and worker nodes.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Manager&lt;br&gt;
In the above image, we can see a swarm manager, which is responsible for managing what the workers do. It keeps track of all of its workers: the manager knows which task each worker is working on, how tasks are distributed across workers, and whether each worker is up and running.&lt;br&gt;
The Docker manager's API is used to create a new service and orchestrate it, and the manager assigns tasks to workers using the workers' IP addresses.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Worker&lt;br&gt;
The Docker Manager has complete power over a Docker Worker. The Docker Worker accepts and executes the tasks/instructions that the Docker Manager has delegated to it. A Docker Worker is a client agent that informs the manager about the state of the node it’s been running on through a REST API over HTTP protocol.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Services
&lt;/h2&gt;

&lt;p&gt;A service describes the tasks to be executed on the manager or worker nodes. It is the swarm system's central mechanism and the main point of user interaction with the swarm.&lt;br&gt;
When you create a service, you specify which container image to use and which commands must be executed inside the running containers.&lt;br&gt;
In the replicated services model, the swarm manager distributes a specific number of replica tasks among the nodes, depending on the scale you set in the desired state.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Load Balancing&lt;br&gt;
The swarm manager uses ingress load balancing to expose the services you want to make available externally to the swarm. The swarm manager can automatically assign the service a PublishedPort, or you can configure one manually. If you don't specify a port, the swarm manager assigns the service a port in the 30000-32767 range.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;External components, such as cloud load balancers, can access the service on the PublishedPort of any node in the cluster, regardless of whether that node is currently running the service's task. All nodes in the swarm route ingress connections to a running task instance.&lt;/p&gt;
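&lt;p&gt;The auto-assigned range is easy to recognize when unfamiliar ports later show up in service listings. A small illustrative helper (the function name is ours, not part of Docker):&lt;/p&gt;

```shell
# Ports published without an explicit value land in the 30000-32767 range.
in_ingress_range() {
  if [ "$1" -ge 30000 ]; then
    if [ "$1" -le 32767 ]; then
      echo yes
      return 0
    fi
  fi
  echo no
}

in_ingress_range 30012   # prints yes (auto-assigned style port)
in_ingress_range 8080    # prints no (explicitly published port)
```

&lt;p&gt;To pick the port yourself instead, pass --publish published=8080,target=80 (for example) to docker service create.&lt;/p&gt;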
&lt;h2&gt;
  
  
  Swarm features
&lt;/h2&gt;

&lt;p&gt;After looking at what Docker swarm is and its related terminology, let us see the different features that swarm mode offers on the Docker engine:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use the Docker Engine CLI to create a swarm of Docker Engines where you can deploy application services. You don’t need additional orchestration software to create or manage a swarm.&lt;/li&gt;
&lt;li&gt;The Docker Engine handles role specialization at runtime, rather than fixing the distinction between node roles at deployment time. The Docker Engine can deploy both manager and worker nodes, which means an entire swarm can be built from a single disk image.&lt;/li&gt;
&lt;li&gt;Docker Engine takes a declarative approach of defining the optimal state of your application stack's different resources. For example, a web front-end server with message queueing services and a backend database might be described as an application.&lt;/li&gt;
&lt;li&gt;You declare the number of tasks to run for each service. When the swarm manager scales up or down (that is, when the desired number of services or containers changes), it automatically adapts by adding or removing tasks to preserve the desired state.&lt;/li&gt;
&lt;li&gt;The swarm manager node checks the cluster state continuously and reconciles any differences between the current state and the desired state. For instance, if you set up a service running 10 container replicas across 5 workers and two of the replicas on one worker crash, the manager creates two new replicas and assigns them to workers that are up and running.&lt;/li&gt;
&lt;li&gt;An overlay network may be specified for your services. The swarm manager automatically assigns addresses to the containers on the overlay network when it initializes or updates the application. Service ports may be exposed to an external load balancer, while internally you decide how service containers are distributed between nodes.&lt;/li&gt;
&lt;/ul&gt;
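&lt;p&gt;The reconciliation behaviour described above can be sketched in a few lines of shell. This toy function is purely illustrative; the real logic lives inside the swarm manager, and as a user you only declare the desired state (for example with docker service scale):&lt;/p&gt;

```shell
# Compare the desired replica count with the number of running tasks and
# report the corrective action the manager would take.
reconcile() {
  desired=$1
  running=$2
  if [ "$running" -lt "$desired" ]; then
    echo "start $((desired - running)) task(s)"
  elif [ "$running" -gt "$desired" ]; then
    echo "stop $((running - desired)) task(s)"
  else
    echo "converged"
  fi
}

reconcile 10 8   # prints "start 2 task(s)": two replicas crashed
reconcile 10 10  # prints "converged": desired state reached
```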

&lt;p&gt;Now, let us get practical: we are going to create a cluster, add two worker nodes, and then deploy services to that swarm.&lt;br&gt;
Prerequisites:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Three Linux machines that can communicate over a network&lt;/li&gt;
&lt;li&gt;Docker installed on all three of them.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This tutorial uses three Linux hosts that have Docker installed and can communicate over a network. They can be physical machines, virtual machines, Amazon EC2 instances, or hosted in some other way.&lt;br&gt;
One will be a manager and the other two will be workers.&lt;br&gt;
We are going to use three Linux machines hosted on AWS as EC2 instances.&lt;/p&gt;

&lt;p&gt;While creating the EC2 instances, add the following rules to the security group.&lt;br&gt;
The following ports must be open; on some systems, they are open by default.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;TCP port 2377 for cluster management communications&lt;/li&gt;
&lt;li&gt;TCP and UDP port 7946 for communication among nodes&lt;/li&gt;
&lt;li&gt;UDP port 4789 for overlay network traffic&lt;/li&gt;
&lt;/ul&gt;
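&lt;p&gt;On self-managed Linux hosts (rather than EC2 security groups), the same port list can drive your firewall tool. The loop below only prints the commands rather than running them; ufw is an assumed example, so substitute your distribution's firewall:&lt;/p&gt;

```shell
# Print (not run) the firewall rules every swarm node needs.
for rule in 2377/tcp 7946/tcp 7946/udp 4789/udp; do
  echo "sudo ufw allow $rule"
done
```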

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh6.googleusercontent.com%2F_dO-z3KAK_eJK8pvZRpa4ABb0eQShKWCu-XrTlt6FEZb1vAe9JHBTzlGr_hMth7iNuD8I7NCxhMPg42S_gIKcYIaQZK7HHLCo9apriyatb3AA2A7uqkmG5clAIUiI5TcRilRiOA" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh6.googleusercontent.com%2F_dO-z3KAK_eJK8pvZRpa4ABb0eQShKWCu-XrTlt6FEZb1vAe9JHBTzlGr_hMth7iNuD8I7NCxhMPg42S_gIKcYIaQZK7HHLCo9apriyatb3AA2A7uqkmG5clAIUiI5TcRilRiOA" alt="aws image-1"&gt;&lt;/a&gt;&lt;br&gt;
While creating the manager machine add these rules.&lt;br&gt;
Then while creating the worker nodes use the same security group created for the manager machine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2Fl_zk1msM6Do4-Vqxwu5-E8ae1WwH2cvlPhhmjUGuTC_uOQQhQEexoz99ihbSlL61ie6ER7a_YzX1Y-n8rzC0qGUu_hIESTz_M4aRv0cU" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2Fl_zk1msM6Do4-Vqxwu5-E8ae1WwH2cvlPhhmjUGuTC_uOQQhQEexoz99ihbSlL61ie6ER7a_YzX1Y-n8rzC0qGUu_hIESTz_M4aRv0cU" alt="aws image-2"&gt;&lt;/a&gt;&lt;br&gt;
Next, ssh into all the machines and install docker-engine.&lt;br&gt;
Use the following commands to install docker-engine on all three machines.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Update the apt package index and install packages that allow apt to use a repository over HTTPS.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt-get update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Add Docker’s official GPG key:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Use the following command to set up the stable repository
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ echo \ "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list &amp;gt; /dev/null
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Install docker engine
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt-get update
$ sudo apt-get install docker-ce docker-ce-cli containerd.io
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Repeat this on all three machines.&lt;/p&gt;

&lt;p&gt;After completing all the setup steps, you can create a swarm. Ensure that the Docker Engine daemon is running on the host machines. Open a terminal and ssh into the machine where the manager node should run.&lt;/p&gt;

&lt;p&gt;Run the following command to create a new swarm:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker swarm init --advertise-addr &amp;lt;MANAGER-IP&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We have a manager machine with IP 172.31.80.181, so the command is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker swarm init --advertise-addr 172.31.80.181
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh5.googleusercontent.com%2FbBObNBdbEYxnX9cqfpFSSegsWKvj8_7L2VC80xfUUAD6mZ6UkjmVpO-ygplavIYnwWIVlx_oEK8jeyojBo4PkTI0QO7utj47hXFijn7dW7gBpqNoOnKh0U3gRqUYAjJywf4WhDc" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh5.googleusercontent.com%2FbBObNBdbEYxnX9cqfpFSSegsWKvj8_7L2VC80xfUUAD6mZ6UkjmVpO-ygplavIYnwWIVlx_oEK8jeyojBo4PkTI0QO7utj47hXFijn7dW7gBpqNoOnKh0U3gRqUYAjJywf4WhDc" alt="image-1"&gt;&lt;/a&gt;&lt;br&gt;
The --advertise-addr flag configures the manager node to publish its address as 172.31.80.181. The other nodes in the swarm must be able to reach the manager at this IP address.&lt;br&gt;
The output includes the commands to join new nodes to the swarm; nodes will join as managers or workers depending on the flag value.&lt;/p&gt;

&lt;p&gt;To see the current state you can use docker info:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo docker info
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2FKOYu_m2HaHpa8THEDQJ4To4HShgp0hCZWgTOZYvb8igh_kKtLP8oIO1-DG1hRC5NfIKV5V0eZXUPBBRZzmSIeM0fAF7IO6EKfGN7j6d6" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2FKOYu_m2HaHpa8THEDQJ4To4HShgp0hCZWgTOZYvb8igh_kKtLP8oIO1-DG1hRC5NfIKV5V0eZXUPBBRZzmSIeM0fAF7IO6EKfGN7j6d6" alt="image-2"&gt;&lt;/a&gt;&lt;br&gt;
In the above image, we can see that there are no containers running on this Docker server and that the Swarm flag is active. It also prints the cluster ID, the number of managers and nodes, and so on.&lt;/p&gt;

&lt;p&gt;To view information about the nodes, use the command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker node ls
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh4.googleusercontent.com%2FAFe1gqFm_LdpMicIK-ie3ii41KAwuL1QTaBkRwEV5wcHHHF3h4kbOPSqqgT0OSq3vWrnad0rGT7hhinjQvBt5hoMQGBY9wy13kDJiGqYRSm_tGFHM7wmZMQstWHuAbq2IoBvW20" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh4.googleusercontent.com%2FAFe1gqFm_LdpMicIK-ie3ii41KAwuL1QTaBkRwEV5wcHHHF3h4kbOPSqqgT0OSq3vWrnad0rGT7hhinjQvBt5hoMQGBY9wy13kDJiGqYRSm_tGFHM7wmZMQstWHuAbq2IoBvW20" alt="image-3"&gt;&lt;/a&gt;&lt;br&gt;
The * next to the ID indicates the node we are currently connected to.&lt;br&gt;
In swarm mode, the Docker Engine automatically names the node after the machine's hostname.&lt;/p&gt;
&lt;h2&gt;
  
  
  Adding Worker Nodes
&lt;/h2&gt;

&lt;p&gt;It's time to add worker nodes to the swarm cluster created above.&lt;br&gt;
ssh into the machine where you want to run your worker.&lt;br&gt;
Now, run the join command from the output of docker swarm init in this worker's terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker swarm join --token SWMTKN-1-05ikz9ituzi3uhn1dq1r68bywhfzczg260b9zkhigj9bubomwb-a003dujcz7zu93rlb48wd0o87 172.31.80.181:2377 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh5.googleusercontent.com%2FZhQ5C7fxOSzq6sV4BBMG-AFhxFjsPokbTfkJeRo74Tn0dziXWUkPwUNXAW9kjyiJRYub1Vx-mUzoiC3tZKh0AZUonLYP9ydUq8VZ7o9CB4P1dLg56T-3DDJv-HpZ-JR6IW8E2GM" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh5.googleusercontent.com%2FZhQ5C7fxOSzq6sV4BBMG-AFhxFjsPokbTfkJeRo74Tn0dziXWUkPwUNXAW9kjyiJRYub1Vx-mUzoiC3tZKh0AZUonLYP9ydUq8VZ7o9CB4P1dLg56T-3DDJv-HpZ-JR6IW8E2GM" alt="image-4"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you no longer have the join command available, you can execute the following command on a manager node to retrieve a worker's join command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker swarm join-token worker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2FHEZYxK-EUszv77JLLWilxNWdGkV633JXATrNJ7N0M1SfataG62KcyuVtSfDvLjzUwgJiXFpGbya6V_4qr8ReCBgeqvPWrEbsKKEzS54bbv4Dl3P_OGjuxclXgY4hRpEOcJNa_D8" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2FHEZYxK-EUszv77JLLWilxNWdGkV633JXATrNJ7N0M1SfataG62KcyuVtSfDvLjzUwgJiXFpGbya6V_4qr8ReCBgeqvPWrEbsKKEzS54bbv4Dl3P_OGjuxclXgY4hRpEOcJNa_D8" alt="image-5"&gt;&lt;/a&gt;&lt;br&gt;
Do the same with the other worker as well. SSH into another machine and run the join command.&lt;/p&gt;
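&lt;p&gt;A note on the token itself: join tokens all start with the prefix SWMTKN-1-, followed by two cluster-specific secrets, and the worker and manager tokens differ; that is how the swarm decides which role a joining node gets. A quick sanity check on a pasted token (using the token from the screenshot above):&lt;/p&gt;

```shell
# A mangled copy-paste of the token is a common join failure,
# so check the prefix before running the join command.
token="SWMTKN-1-05ikz9ituzi3uhn1dq1r68bywhfzczg260b9zkhigj9bubomwb-a003dujcz7zu93rlb48wd0o87"
case "$token" in
  SWMTKN-1-*) echo "token looks well formed" ;;
  *)          echo "token looks malformed: re-copy it from the manager" ;;
esac
```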

&lt;p&gt;To view the worker nodes, open a terminal, ssh into the machine that runs the manager node, and execute the docker node ls command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker node ls
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh6.googleusercontent.com%2FRI-b7Fx5rA3e-8PPWpZskO-OCQSSl-AnUfdbL_1UJBW5KbKGYjYeEIKgV6Y8IAkJNduJHdz0bFVXumx_9ZwPNLXM95nicPFTiUKs2ckgT-hTyG1Qo7vlgWSQFg2qtHeG_nlQvxQ" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh6.googleusercontent.com%2FRI-b7Fx5rA3e-8PPWpZskO-OCQSSl-AnUfdbL_1UJBW5KbKGYjYeEIKgV6Y8IAkJNduJHdz0bFVXumx_9ZwPNLXM95nicPFTiUKs2ckgT-hTyG1Qo7vlgWSQFg2qtHeG_nlQvxQ" alt="image-6"&gt;&lt;/a&gt;&lt;br&gt;
The MANAGER STATUS column identifies the manager nodes in the swarm; the empty status for worker1 and worker2 identifies them as worker nodes.&lt;/p&gt;
&lt;h2&gt;
  
  
  Deploy service to the swarm
&lt;/h2&gt;

&lt;p&gt;Now we have a cluster with a manager and two workers. We can now deploy services to the swarm cluster.&lt;/p&gt;

&lt;p&gt;ssh into the manager node, open a terminal, and run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker service create --replicas 1 --name helloworld alpine ping docker.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's break down the above command:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;docker service create: to create a service&lt;/li&gt;
&lt;li&gt;--replicas: this flag indicates the desired state of 1 running instance.&lt;/li&gt;
&lt;li&gt;--name: used to name the service&lt;/li&gt;
&lt;li&gt;alpine ping docker.com: this indicates that the service will run the alpine Linux image, and the primary command to run inside each container is ping docker.com
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh5.googleusercontent.com%2FwoerrRuAFhnZOahzy1jmOERTxQZoZOfLzpnVlT_IbxSJwu54Py313sSJVMGVTRjt4oV7cwkxQ727tuF3--Bi-H4F9__6j1BQmBN2PQRzVYxgTK7JYnYOG3MgUvHgL7Wfreo0EZs" alt="image-7"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To see the list of running services, run the command below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker service ls
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2FY9FtdwlgrlqwJn_bYEnVX7KrXlNeRToeZ5tkk1edVceK0aohCDYwIezTCAZP8btEmd0dfWJvbMWJouEem4WrxOKVcLH5v0qqhKbN7yC9UUAtGPHF51lUVkC53XXJtZZVXCeO1ZI" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2FY9FtdwlgrlqwJn_bYEnVX7KrXlNeRToeZ5tkk1edVceK0aohCDYwIezTCAZP8btEmd0dfWJvbMWJouEem4WrxOKVcLH5v0qqhKbN7yC9UUAtGPHF51lUVkC53XXJtZZVXCeO1ZI" alt="image-8"&gt;&lt;/a&gt;&lt;br&gt;
This image lists the name of the service we just created and the number of replicas, along with the base image, which is alpine.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, we started with a description of Docker, then discussed the need for multiple Docker hosts. We described what Docker swarm is, its uses, and how it works through the Docker swarm architecture, and we covered swarm terminology such as manager nodes and worker nodes. Once we had thoroughly understood Docker swarm, we started to implement it by running services in a swarm cluster. We began by creating 3 Linux hosts on AWS as EC2 instances, along with the security group configuration (adding TCP and UDP rules).&lt;br&gt;
We looked at how to create and initialize a swarm cluster through a manager node, then added a couple of worker nodes to the same cluster. To conclude, we deployed a service running Alpine Linux executing a ping command.&lt;br&gt;
Also, this is my first article on dev.to, and I really enjoyed writing it.&lt;/p&gt;

&lt;p&gt;Liked the post?&lt;br&gt;
&lt;a href="https://ko-fi.com/I2I639WWJ" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fko-fi.com%2Fimg%2Fgithubbutton_sm.svg" alt="ko-fi"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>devops</category>
      <category>linux</category>
      <category>swarm</category>
    </item>
  </channel>
</rss>
