<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Yash Patil</title>
    <description>The latest articles on DEV Community by Yash Patil (@yash_patil16).</description>
    <link>https://dev.to/yash_patil16</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2646347%2F36ce7ed2-c03d-4358-8195-82a47f8defa7.jpg</url>
      <title>DEV Community: Yash Patil</title>
      <link>https://dev.to/yash_patil16</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/yash_patil16"/>
    <language>en</language>
    <item>
      <title>Managing Storage and Packages in Linux: A Complete Guide</title>
      <dc:creator>Yash Patil</dc:creator>
      <pubDate>Fri, 07 Mar 2025 07:11:29 +0000</pubDate>
      <link>https://dev.to/yash_patil16/managing-storage-and-packages-in-linux-a-complete-guide-462</link>
      <guid>https://dev.to/yash_patil16/managing-storage-and-packages-in-linux-a-complete-guide-462</guid>
<description>&lt;p&gt;Okay, so let's get cracking on what could be the last part of this Linux Fundamentals series, and it's an exciting one to say the least!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Storage Management&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;The Importance of High Throughput and Low Latency Storage&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In computing, the efficiency of storage systems significantly influences application performance. High throughput ensures that large volumes of data are read or written swiftly, which is crucial for tasks like data analysis and multimedia processing. Low latency guarantees immediate data retrieval, vital for real-time applications such as online gaming and financial transactions. Together, high throughput and low latency contribute to a seamless and efficient computing experience.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Understanding Block Storage&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Block storage is a method where data is stored in fixed-sized chunks called blocks. Each block is assigned a unique address, allowing the operating system to retrieve data efficiently. For instance, when storing a 10GB file, the system divides it into smaller blocks (commonly 4KB each). Each block is then stored separately, and an index keeps track of these blocks, facilitating quick access and management. This method enhances performance and allows for flexible data management.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
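&lt;p&gt;The 10GB example above can be made concrete with a little arithmetic; a quick sketch using binary units (1&amp;nbsp;GiB = 1024 × 1024 × 1024 bytes):&lt;/p&gt;

```shell
# How many 4 KiB blocks does a 10 GiB file occupy?
# 10 GiB = 10 * 1024 * 1024 * 1024 bytes; one block = 4096 bytes.
echo $(( 10 * 1024 * 1024 * 1024 / 4096 ))   # prints 2621440
```

&lt;p&gt;So the index has to track over 2.6 million blocks for a single 10GB file, which is why efficient block addressing matters.&lt;/p&gt;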

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7lpsrlqjabmn8l6ke1lg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7lpsrlqjabmn8l6ke1lg.png" alt="Image description" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS EBS (Elastic Block Store) is an example of a block storage service. Block storage is suitable for transactional databases and other workloads where multiple users access the data and parallel processing is needed.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Exploring Block Devices&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A block device is a hardware or virtual device that allows for reading and writing data in fixed-sized blocks. These devices include hard drives, SSDs, and virtual block devices. In Linux, block devices are represented as files and are typically found under the &lt;code&gt;/dev&lt;/code&gt; directory. For example, &lt;code&gt;/dev/sda&lt;/code&gt; might represent the first SATA drive in the system. These device files enable the operating system and applications to interact with the hardware seamlessly.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For our demo, I will be using an EC2 instance from AWS with the default 8GB storage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3q1jbb0bqhpg4dut0zte.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3q1jbb0bqhpg4dut0zte.png" alt="Image description" width="800" height="349"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have logged into our EC2 instance:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faygp3scfa3x2j7gibvj5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faygp3scfa3x2j7gibvj5.png" alt="Image description" width="800" height="96"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To list the available block devices in your system, use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;lsblk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fls58cd56uf0y9npl17a7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fls58cd56uf0y9npl17a7.png" alt="Image description" width="800" height="245"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What I can see is an &lt;code&gt;xvda&lt;/code&gt; block device, which has 4 partitions: &lt;code&gt;xvda1&lt;/code&gt;, &lt;code&gt;xvda14&lt;/code&gt;, &lt;code&gt;xvda15&lt;/code&gt; and &lt;code&gt;xvda16&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feodc24zon9sgqul9gair.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feodc24zon9sgqul9gair.png" alt="Image description" width="690" height="172"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As I said, all the block devices can be found under the &lt;code&gt;/dev&lt;/code&gt; directory.&lt;/p&gt;
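&lt;p&gt;A quick, read-only way to see that these are special device files rather than regular files: &lt;code&gt;stat&lt;/code&gt; reports the file type. A small sketch, using &lt;code&gt;/dev/null&lt;/code&gt; only because it exists on every system:&lt;/p&gt;

```shell
# 'stat -c %F' prints the file type; /dev/null is a character device.
stat -c '%F' /dev/null    # prints "character special file"
# A disk node such as /dev/xvda would print "block special file".
```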

&lt;ol start="4"&gt;
&lt;li&gt;&lt;strong&gt;Diving into Partitions and the GPT Scheme&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Partitions are subdivisions of a storage device, allowing users to segment their storage for various purposes, such as separating system files from user data. The GUID Partition Table (GPT) is a modern partitioning scheme that offers several advantages over the older Master Boot Record (MBR) system. GPT supports larger disk sizes, allows for a nearly unlimited number of partitions, and includes redundancy and error-checking features.&lt;/p&gt;

&lt;p&gt;Our &lt;code&gt;xvda&lt;/code&gt; block device has 4 partitions &lt;code&gt;xvda1&lt;/code&gt;, &lt;code&gt;xvda14&lt;/code&gt;, &lt;code&gt;xvda15&lt;/code&gt; and &lt;code&gt;xvda16&lt;/code&gt;, mounted at separate locations.&lt;/p&gt;

&lt;p&gt;But we can also use our disk or block storage as is without necessarily creating any partitions out of it.&lt;/p&gt;

&lt;ol start="5"&gt;
&lt;li&gt;&lt;strong&gt;Filesystems and Mounting: Making Partitions Usable&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now let's attach another block storage device to our instance in the form of an &lt;strong&gt;AWS EBS Volume.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fts1iibdhpxl9al730c1o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fts1iibdhpxl9al730c1o.png" alt="Image description" width="800" height="121"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Above is the default 8GB Volume (&lt;strong&gt;xvda&lt;/strong&gt;) that was attached to our EC2 Instance when we created it.&lt;/p&gt;

&lt;p&gt;I created a volume of 10GB:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhnn8zkv1aq0yjuxngwt4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhnn8zkv1aq0yjuxngwt4.png" alt="Image description" width="800" height="65"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we attach this volume to our EC2 instance:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkrsz3pvmxjwse0y5g68n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkrsz3pvmxjwse0y5g68n.png" alt="Image description" width="800" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnnntbbr3qq38ellz93wa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnnntbbr3qq38ellz93wa.png" alt="Image description" width="800" height="286"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, a new 10GB block device is available as &lt;code&gt;xvdf&lt;/code&gt;; it's the EBS volume we created just now.&lt;/p&gt;

&lt;p&gt;Now let's create partitions out of it using &lt;strong&gt;gdisk&lt;/strong&gt; and mount them at separate locations.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;```
sudo gdisk /dev/partion_or_blockdevice 
```
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fotx9m8i1qs0it1ouepcn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fotx9m8i1qs0it1ouepcn.png" alt="Image description" width="800" height="611"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose the &lt;strong&gt;n&lt;/strong&gt; option to create a new partition on &lt;strong&gt;/dev/xvdf&lt;/strong&gt;, which was our 10GB EBS block storage device.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5z8tclvq8ht4repw1k5x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5z8tclvq8ht4repw1k5x.png" alt="Image description" width="800" height="295"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Press&lt;/strong&gt; &lt;code&gt;n&lt;/code&gt; → Create a new partition.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Partition Number&lt;/strong&gt; → Press &lt;code&gt;Enter&lt;/code&gt; to accept the default.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;First sector&lt;/strong&gt; → Type &lt;code&gt;2048&lt;/code&gt; (or press Enter for the default, usually aligned).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Last sector&lt;/strong&gt; → Type &lt;code&gt;+1G&lt;/code&gt; (this tells &lt;code&gt;gdisk&lt;/code&gt; to create a 1GB partition).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Hex Code&lt;/strong&gt; → Press &lt;code&gt;Enter&lt;/code&gt; (default is &lt;code&gt;8300&lt;/code&gt; for Linux filesystems).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Press&lt;/strong&gt; &lt;code&gt;w&lt;/code&gt; → Write the changes to the disk.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Confirm with&lt;/strong&gt; &lt;code&gt;y&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Our 1GB partition is created with the name &lt;strong&gt;xvdf1&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F65am75qsb5bh2g7iwqmb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F65am75qsb5bh2g7iwqmb.png" alt="Image description" width="800" height="287"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Possible Actions for the Remaining 9GB:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Create More Partitions&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* You can create additional partitions using the unallocated space.

* Example: You could create a second 5GB partition and a third 4GB 
  partition.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol start="2"&gt;
&lt;li&gt;&lt;strong&gt;Expand an Existing Partition Later&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* If needed, you can extend the **1GB partition** to use more space 
  later using tools like `gdisk`, `parted`, or `resize2fs` (for 
  filesystems like ext4).
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol start="3"&gt;
&lt;li&gt;&lt;strong&gt;Leave It Unallocated&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* The remaining space will be available for future use but won’t be 
  accessible by the OS until partitioned and formatted.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After creating partitions, they remain raw spaces until formatted with a filesystem, which organizes data for storage and retrieval. A filesystem defines how data is stored and retrieved on a storage device. &lt;code&gt;ext4&lt;/code&gt; is a widely used filesystem in Linux, known for its robustness, support for large volumes, and journaling capabilities, which help in data recovery during unexpected shutdowns.&lt;/p&gt;

&lt;p&gt;To format a partition with the &lt;code&gt;ext4&lt;/code&gt; filesystem and mount it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Format the partition:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;```
sudo mkfs.ext4 /dev/xvdf1
```
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Replace &lt;code&gt;/dev/xvdf1&lt;/code&gt; with your specific partition identifier.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi7e571qv4pe8tucg1jli.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi7e571qv4pe8tucg1jli.png" alt="Image description" width="800" height="257"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Create a mount point:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;```
sudo mkdir /mnt/xvdf1Mount
```
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Mount the partition:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;```
sudo mount /dev/sdX1 /mnt/mydata
```
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmi8ug75y0d4407a2jirc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmi8ug75y0d4407a2jirc.png" alt="Image description" width="800" height="58"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F018tikwhze47xjdyeb0u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F018tikwhze47xjdyeb0u.png" alt="Image description" width="800" height="281"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, we have successfully mounted our partition at a mount path in the Ubuntu OS of our EC2 instance. Now you can create, read, and write data in that mount path.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feyg5y9zscx7jagiry823.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feyg5y9zscx7jagiry823.png" alt="Image description" width="800" height="39"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="6"&gt;
&lt;li&gt;
&lt;strong&gt;Persisting Mounts with&lt;/strong&gt; &lt;code&gt;/etc/fstab&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To ensure that partitions mount automatically at boot, you can add entries to the &lt;code&gt;/etc/fstab&lt;/code&gt; file specifying &lt;strong&gt;the device, mount point, filesystem type, and options&lt;/strong&gt; :&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Open&lt;/strong&gt; &lt;code&gt;/etc/fstab&lt;/code&gt; with a text editor:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;```
sudo vi /etc/fstab
```
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Add a line specifying the device, mount point, filesystem type, and options:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;```
/dev/xvdf1 /mnt/xvdf1Mount ext4 rw 0 0
```
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Save and close the file.&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This configuration ensures that the system mounts the specified partitions automatically during startup.&lt;/p&gt;
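&lt;p&gt;Before rebooting, it is worth sanity-checking the new entry, since a malformed &lt;code&gt;/etc/fstab&lt;/code&gt; line can prevent a clean boot. A minimal sketch counting the six expected fields of the line used above:&lt;/p&gt;

```shell
# The fstab entry from this post; the six fields are: device,
# mount point, filesystem type, options, dump flag, fsck pass.
line='/dev/xvdf1 /mnt/xvdf1Mount ext4 rw 0 0'
set -- $line
echo "fields: $#"    # prints "fields: 6"
# Tip: 'sudo mount -a' (not run here) applies /etc/fstab immediately,
# surfacing any errors without a reboot.
```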

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5kz6igoqxeagf6428c7h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5kz6igoqxeagf6428c7h.png" alt="Image description" width="800" height="155"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So I just rebooted my EC2 instance, but as you can see, the partition still exists and is mounted at that path:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fugssfjnfvu07gxl6cdd9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fugssfjnfvu07gxl6cdd9.png" alt="Image description" width="800" height="305"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And our file also exists:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flxm7v9h5zwk47zzi9bup.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flxm7v9h5zwk47zzi9bup.png" alt="Image description" width="800" height="120"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Package Management&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Understanding Package Management&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Package management is a fundamental aspect of Linux systems, facilitating the installation, upgrade, configuration, and removal of software packages. It ensures that software dependencies are handled automatically, maintaining system stability and consistency.&lt;/p&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Package Managers in Debian and RPM Families&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   * **Debian Family:**
     - **APT (Advanced Package Tool):** 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;A command-line tool that simplifies the process of managing software packages on Debian-based distributions like Ubuntu.                                               It handles package installation, updates, and                                                 removal, resolving                                                 dependencies                                                 automatically.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;     - **APT-GET:** An older command-line tool for handling packages, still widely used for script-based operations.

   * **RPM Family:**

     - **YUM (Yellowdog Updater Modified):** A package manager for RPM-based distributions like CentOS and Fedora. It manages package installations, updates, and removals, resolving dependencies automatically.

     -  **DNF (Dandified YUM):** The next-generation version of YUM, offering improved performance and better dependency resolution.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol start="3"&gt;
&lt;li&gt;&lt;strong&gt;Operations with APT and APT-GET&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* **Difference Between APT and APT-GET:**

    While both tools serve similar purposes, `apt` is designed to provide a more user-friendly experience, combining functionalities of `apt-get` and `apt-cache`. It's recommended for interactive use, whereas `apt-get` is preferred for scripting purposes.

* **Common Commands:**

    * **Update Package Lists:**
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;        ```
        sudo apt update
        ```
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    * **Upgrade Installed Packages:**
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;        ```
        sudo apt upgrade
        ```
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    We usually run the above commands when we launch a new VM to update and upgrade its packages.

    * **Install a Package:**
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;        ```bash
        sudo apt install package_name
        ```
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    * **Search for a Package:**
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;        ```bash
        apt search package_name
        ```
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    * **Remove a Package:**
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;        ```bash
        sudo apt remove package_name
        ```
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    * **Edit Sources List:**

        The package sources are listed in `/etc/apt/sources.list`. To edit:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;        ```bash
        sudo nano /etc/apt/sources.list
        ```
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fad5cptopwgi6vzn9v5wn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fad5cptopwgi6vzn9v5wn.png" alt="Image description" width="800" height="444"&gt;&lt;/a&gt;&lt;/p&gt;
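&lt;p&gt;For orientation, a single &lt;code&gt;sources.list&lt;/code&gt; entry has the shape below; the Ubuntu &lt;code&gt;noble&lt;/code&gt; archive line is shown purely as an illustration:&lt;/p&gt;

```text
deb http://archive.ubuntu.com/ubuntu noble main restricted universe multiverse
```

&lt;p&gt;The fields are the package type (&lt;code&gt;deb&lt;/code&gt; for binary packages), the repository URL, the distribution suite, and the enabled components.&lt;/p&gt;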

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;List Installed Packages:&lt;/strong&gt;&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    apt list --installed
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Famt9n7sqqnadqk6a6lpo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Famt9n7sqqnadqk6a6lpo.png" alt="Image description" width="800" height="131"&gt;&lt;/a&gt;&lt;/p&gt;
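&lt;p&gt;The output of &lt;code&gt;apt list --installed&lt;/code&gt; can be very long; piping it through &lt;code&gt;grep&lt;/code&gt; narrows it down. A sketch using made-up sample lines in the real output's &lt;code&gt;name/suite,now version arch [installed]&lt;/code&gt; shape:&lt;/p&gt;

```shell
# Sample lines shaped like 'apt list --installed' output (made up):
installed='bash/noble,now 5.2-2 amd64 [installed]
curl/noble,now 8.5.0-2 amd64 [installed]
vim/noble,now 9.1.0-1 amd64 [installed]'
# Anchor on the package name to filter for one package:
echo "$installed" | grep '^curl/'
```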

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;How Package Managers Work:&lt;/strong&gt;&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Package managers like APT work by connecting to remote repositories specified in the `/etc/apt/sources.list` file. They download package information and updates, ensuring that software installations and upgrades are consistent and reliable.
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm0gpv5wink5o6m5d2nxc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm0gpv5wink5o6m5d2nxc.png" alt="Image description" width="800" height="255"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;I hope this blog post helped you understand the &lt;strong&gt;standard partitioning scheme&lt;/strong&gt;, where storage is divided into &lt;strong&gt;fixed partitions&lt;/strong&gt; during disk setup and each partition is formatted with a filesystem and mounted for use, as well as &lt;strong&gt;Package Management&lt;/strong&gt; in Linux systems. If you have any more questions or need further clarification on any topic, feel free to ask!&lt;/p&gt;

&lt;p&gt;Next up is &lt;strong&gt;Traditional Partitioning vs. Logical Volume Management (LVM): Why LVM is the Better Choice.&lt;/strong&gt; I know I said this could be the last of the series, but it just got more interesting than I anticipated. Also, this post is getting too long! :)&lt;/p&gt;

</description>
      <category>linux</category>
      <category>aws</category>
      <category>opensource</category>
      <category>devops</category>
    </item>
    <item>
      <title>Linux User Management and File Permissions</title>
      <dc:creator>Yash Patil</dc:creator>
      <pubDate>Fri, 07 Mar 2025 06:36:24 +0000</pubDate>
      <link>https://dev.to/yash_patil16/linux-user-management-and-file-permissions-4oi2</link>
      <guid>https://dev.to/yash_patil16/linux-user-management-and-file-permissions-4oi2</guid>
      <description>&lt;p&gt;&lt;strong&gt;Linux Users, Groups, and File Permissions: A Deep Dive&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Do We Need User Accounts and File Permissions in Linux?
&lt;/h3&gt;

&lt;p&gt;Imagine a company office where multiple employees share a workspace. Each employee has their own desk (personal files) and shared meeting rooms (public directories). To maintain order, employees should only have access to their own desks and specific shared spaces. Similarly, in Linux, user accounts and file permissions ensure that only authorized users can access certain files and directories, maintaining security and privacy.&lt;/p&gt;

&lt;p&gt;Every Linux user is associated with an account that determines their identity and access levels.&lt;/p&gt;

&lt;p&gt;And &lt;strong&gt;What is a Group?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;group&lt;/strong&gt; is a collection of users, used to organize users based on common attributes such as roles and functions.&lt;/p&gt;

&lt;p&gt;Remember, if a user is not made part of any group at creation time, it gets a primary group with the &lt;strong&gt;same name as the username&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;User accounts come with several attributes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;User ID (UID):&lt;/strong&gt; A unique identifier for each user.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Group ID (GID):&lt;/strong&gt; The primary group associated with the user.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Home Directory:&lt;/strong&gt; The default location where user files and configurations are stored.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Default Shell:&lt;/strong&gt; The command-line interface the user interacts with upon logging in.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
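&lt;p&gt;A quick way to inspect these attributes is the &lt;code&gt;id&lt;/code&gt; command (the username below is just a placeholder; substitute one from your own system):&lt;/p&gt;

```shell
# UID, primary GID, and all group memberships of the current user
id

# Query a specific user by name (replace 'yashp' with a real username)
id yashp

# Or read the raw account record from /etc/passwd
getent passwd yashp
```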

&lt;h3&gt;
  
  
  Types of Accounts in Linux
&lt;/h3&gt;

&lt;p&gt;Linux accounts fall into different categories:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Regular Users:&lt;/strong&gt; Standard accounts with limited access, meant for everyday use.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Superuser (Root):&lt;/strong&gt; The administrator with unrestricted system access. &lt;strong&gt;UID = 0&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;System Accounts:&lt;/strong&gt; Used for system processes and background services. These typically have &lt;strong&gt;UIDs below 1000&lt;/strong&gt; (the exact range varies by distribution).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Service Accounts:&lt;/strong&gt; Created for applications that require specific permissions.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Understanding the &lt;code&gt;sudo&lt;/code&gt; Group
&lt;/h3&gt;

&lt;p&gt;In Linux, direct root access can be risky. Instead, users are given &lt;code&gt;sudo&lt;/code&gt; privileges to execute administrative commands temporarily. The &lt;code&gt;sudo&lt;/code&gt; group allows users to perform actions that typically require root access.&lt;/p&gt;

&lt;p&gt;To add an existing user to the &lt;code&gt;sudo&lt;/code&gt; group:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;usermod &lt;span class="nt"&gt;-aG&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;username
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;username&lt;/code&gt; with the actual username.&lt;/p&gt;
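&lt;p&gt;You can verify the change afterwards with either of these (again, &lt;code&gt;username&lt;/code&gt; is a placeholder):&lt;/p&gt;

```shell
# List the groups a user belongs to
groups username

# Same information, group names only, on one line
id -nG username
```

Note that the user must log out and back in for the new group membership to take effect.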

&lt;h3&gt;
  
  
  Managing Users and Groups
&lt;/h3&gt;

&lt;p&gt;Creating a new group:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;groupadd fsociety
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Adding a new user and assigning them to a specific group:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;useradd &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; /home/john &lt;span class="nt"&gt;-s&lt;/span&gt; /bin/bash &lt;span class="nt"&gt;-G&lt;/span&gt; fsociety elliot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a user &lt;code&gt;elliot&lt;/code&gt; with a home directory and a &lt;code&gt;/bin/bash&lt;/code&gt; default shell, and adds them to the &lt;code&gt;fsociety&lt;/code&gt; group.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key System Files for User Management
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;/etc/passwd&lt;/code&gt; - Stores user account information such as UID, GID, home directory, and shell.&lt;/li&gt;
&lt;/ol&gt;
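&lt;p&gt;Each line in &lt;code&gt;/etc/passwd&lt;/code&gt; holds seven colon-separated fields; for example, you can pull out root's entry and its UID like this:&lt;/p&gt;

```shell
# Fields: name:password:UID:GID:comment:home:shell
grep "^root:" /etc/passwd
# e.g. root:x:0:0:root:/root:/bin/bash

# Extract just the UID (third field)
grep "^root:" /etc/passwd | cut -d: -f3    # prints: 0
```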

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft4h85dkvw9ov6ocw9ntj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft4h85dkvw9ov6ocw9ntj.png" alt="Image description" width="800" height="235"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;And here i am :
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0vj0jiupxzlmnxl6daqm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0vj0jiupxzlmnxl6daqm.png" alt="Image description" width="622" height="61"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;/etc/shadow&lt;/code&gt; - Contains encrypted passwords and expiration details for user accounts.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fts5entf8ll77ryozs3y5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fts5entf8ll77ryozs3y5.png" alt="Image description" width="527" height="56"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Ohhh look, **PERMISSION DENIED!!**
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp0i6ypuvb4ovk1ez3d1r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp0i6ypuvb4ovk1ez3d1r.png" alt="Image description" width="582" height="461"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Good thing i have **sudo** privilages huh?!

Also if you see, my password is stored using a cryptographic hash using modern cryptographic hashing algorithms.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs0mw449azgl2lfkc76ok.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs0mw449azgl2lfkc76ok.png" alt="Image description" width="800" height="25"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;/etc/group&lt;/code&gt; - Lists all system groups and their associated users.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2pe17708a8f533rn3zr6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2pe17708a8f533rn3zr6.png" alt="Image description" width="645" height="310"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;You can see a **yashp** group with **GID=1000**, it is the primary group that was created with the same name as my account username.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  File Permissions: Why They Matter
&lt;/h3&gt;

&lt;p&gt;Restricting file access prevents unauthorized modifications and data leaks. For example, SSH private keys need strict permissions; otherwise, SSH will refuse to use them.&lt;/p&gt;
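&lt;p&gt;For example, a private key is commonly locked down so that only its owner can read it (the path below is the usual default; yours may differ):&lt;/p&gt;

```shell
# 600 = read and write for the owner, nothing for group or others
chmod 600 ~/.ssh/id_rsa

# Verify: the permission string should read -rw-------
ls -l ~/.ssh/id_rsa
```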
&lt;h3&gt;
  
  
  Understanding the Octal Permission System
&lt;/h3&gt;

&lt;p&gt;Each file in Linux has three types of permissions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Read (r) = 4&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Write (w) = 2&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Execute (x) = 1&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Permissions apply to three categories:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Owner&lt;/strong&gt; (the file creator)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Group&lt;/strong&gt; (users in the same group as the owner)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Others&lt;/strong&gt; (everyone else)&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let's see what I actually mean...&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fat4m8qaekowbv3trmxdg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fat4m8qaekowbv3trmxdg.png" alt="Image description" width="623" height="159"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I created these files using the &lt;code&gt;touch&lt;/code&gt; command with default permissions. You can see the files have read and write permissions for the owner, i.e. the &lt;strong&gt;yashp&lt;/strong&gt; user account, and only read permissions for members of the group &lt;strong&gt;yashp&lt;/strong&gt; and all other users.&lt;/p&gt;

&lt;p&gt;Setting permissions using &lt;code&gt;chmod&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;chmod &lt;/span&gt;777 filename
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This sets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Owner: &lt;strong&gt;rwx (7 = 4+2+1)&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Group: &lt;strong&gt;rwx (7 = 4+2+1)&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Others: &lt;strong&gt;rwx (7 = 4+2+1)&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhpqhl69hez3q2md27bkc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhpqhl69hez3q2md27bkc.png" alt="Image description" width="670" height="172"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, &lt;strong&gt;Read, Write and Execute&lt;/strong&gt; permissions are now granted to the &lt;strong&gt;Owner, members of the Group, and all other users&lt;/strong&gt;. Try experimenting with different combinations yourself to get a good feel for this.&lt;/p&gt;
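&lt;p&gt;Besides octal numbers, &lt;code&gt;chmod&lt;/code&gt; also accepts symbolic modes, which add or remove individual permission bits; a small sketch:&lt;/p&gt;

```shell
touch demo.txt
chmod 640 demo.txt     # owner rw-, group r--, others ---

chmod u+x demo.txt     # add execute for the owner:        -rwxr-----
chmod go-r demo.txt    # drop read from group and others:  -rwx------
ls -l demo.txt
```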

&lt;h3&gt;
  
  
  File Ownership and the &lt;code&gt;chown&lt;/code&gt; Command
&lt;/h3&gt;

&lt;p&gt;Each file has an owner and an associated group. Changing ownership:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;chown &lt;/span&gt;newowner:newgroup filename
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;chown &lt;/span&gt;alice:developers project.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This makes &lt;code&gt;alice&lt;/code&gt; the owner and assigns the &lt;code&gt;developers&lt;/code&gt; group.&lt;/p&gt;
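&lt;p&gt;Note that changing ownership generally requires root privileges, and &lt;code&gt;chown -R&lt;/code&gt; applies the change to a whole directory tree (the path and names below are illustrative):&lt;/p&gt;

```shell
# Recursively hand a project directory to alice and the developers group
sudo chown -R alice:developers /srv/project

# Confirm the new owner and group
stat -c '%U %G' /srv/project
```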

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;User and file management in Linux ensures security, privacy, and system integrity. Understanding these concepts helps prevent unauthorized access and system misuse, making Linux both powerful and secure.&lt;/p&gt;

</description>
      <category>linux</category>
      <category>opensource</category>
      <category>devops</category>
      <category>operatingsystems</category>
    </item>
    <item>
      <title>Introduction to Linux: The Open-Source Revolution</title>
      <dc:creator>Yash Patil</dc:creator>
      <pubDate>Fri, 07 Mar 2025 06:30:31 +0000</pubDate>
      <link>https://dev.to/yash_patil16/introduction-to-linux-the-open-source-revolution-1494</link>
      <guid>https://dev.to/yash_patil16/introduction-to-linux-the-open-source-revolution-1494</guid>
      <description>&lt;p&gt;Let’s explore the &lt;strong&gt;history, architecture, and core concepts&lt;/strong&gt; of Linux in this deep dive.&lt;/p&gt;

&lt;p&gt;Would 100% recommend you to watch &lt;a href="https://www.youtube.com/watch?v=o8NPllzkFhE&amp;amp;pp=ygUTbGludXMgdG9ydmFsZHMgdGVkeA%3D%3D" rel="noopener noreferrer"&gt;Linus’s TED talk&lt;/a&gt;, where he has talked about his journey in creating Linux and changing the open-source community completely!&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;1. The History of Linux: From UNIX to Today&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;The Birth of UNIX (1960s-1970s)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The journey of Linux began in &lt;strong&gt;1964&lt;/strong&gt; when Bell Labs, MIT, and General Electric collaborated on the &lt;strong&gt;Multics&lt;/strong&gt; project—a powerful, multi-user OS. However, Multics was overly complex, leading Bell Labs to withdraw from the project.&lt;/p&gt;

&lt;p&gt;In &lt;strong&gt;1969&lt;/strong&gt;, two brilliant minds at Bell Labs—&lt;strong&gt;Dennis Ritchie&lt;/strong&gt; and &lt;strong&gt;Ken Thompson&lt;/strong&gt;—decided to build a simpler alternative. They created &lt;strong&gt;UNIX&lt;/strong&gt;, originally called UNICS (Uniplexed Information and Computing Service), which was designed for multitasking and multi-user functionality. UNIX was small, efficient, and written in &lt;strong&gt;assembly language&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;By &lt;strong&gt;1973&lt;/strong&gt;, Ritchie rewrote UNIX in &lt;strong&gt;C&lt;/strong&gt;, making it portable across different hardware. This was a game-changer, as it allowed UNIX to spread rapidly in universities, research institutions, and corporations.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;MINIX and the Rise of Linux (1980s-1991)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In the 1980s, UNIX became proprietary, leading to limited access. However, &lt;strong&gt;Andrew Tanenbaum&lt;/strong&gt;, a professor, developed &lt;strong&gt;MINIX&lt;/strong&gt;, a lightweight, Unix-like OS for educational purposes.&lt;/p&gt;

&lt;p&gt;In &lt;strong&gt;1991&lt;/strong&gt;, &lt;strong&gt;Linus Torvalds&lt;/strong&gt;, a Finnish computer science student, wanted a free and open OS like UNIX but with more flexibility than MINIX. So, he &lt;strong&gt;built his own kernel&lt;/strong&gt; and announced it in a Usenet post:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;“I'm doing a (free) operating system (just a hobby, won't be big and professional like GNU) for 386(486) AT clones.”&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This quote holds massive AURA!&lt;/p&gt;

&lt;p&gt;Thus, &lt;strong&gt;Linux was born&lt;/strong&gt;. With contributions from developers worldwide, what started as a personal project by Linus Torvalds quickly evolved into a &lt;strong&gt;powerful, open-source OS&lt;/strong&gt; used in personal computers, servers, smartphones, and embedded systems.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Why Did We Need Linux?&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;UNIX was expensive and closed-source&lt;/strong&gt; – Linux became the &lt;strong&gt;free&lt;/strong&gt; alternative.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Flexibility and customization&lt;/strong&gt; – Linux allowed users to modify and tailor the OS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security and stability&lt;/strong&gt; – Linux provided a more secure and stable computing environment.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Key Features of Linux&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Open-source&lt;/strong&gt;: Anyone can modify and improve it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Multi-user &amp;amp; multitasking&lt;/strong&gt;: Supports multiple users and processes simultaneously.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security&lt;/strong&gt;: Built-in permissions, encryption, and firewall capabilities.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Portability&lt;/strong&gt;: Runs on various hardware platforms.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt;: Powers everything from small devices to supercomputers.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;2. Understanding the Linux Kernel&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;What is the Kernel?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The &lt;strong&gt;Linux kernel&lt;/strong&gt; is the &lt;strong&gt;core of the operating system&lt;/strong&gt;—the bridge between hardware and software. It controls everything, from &lt;strong&gt;memory management&lt;/strong&gt; to &lt;strong&gt;process execution&lt;/strong&gt; and &lt;strong&gt;device handling&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Why is the Kernel Necessary?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Without a kernel, your computer wouldn't know how to talk to the hardware.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Memory Management&lt;/strong&gt; : Keeps track of how much memory is used to store what and where.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Process Management&lt;/strong&gt; : Determines which process can use the CPU and for how long.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Device Drivers&lt;/strong&gt; : Acts as a mediator between hardware and processes&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;System calls and Security&lt;/strong&gt; : Receives requests from processes for services&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Components of an Operating System&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;A &lt;strong&gt;complete OS&lt;/strong&gt; consists of:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Kernel&lt;/strong&gt; – Handles core system functions (process scheduling, hardware interaction).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;GNU Utilities&lt;/strong&gt; – Provides essential tools (shell, compilers, libraries).&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Together, &lt;strong&gt;Linux Kernel + GNU&lt;/strong&gt; create a fully functional OS.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Kernel Space vs. User Space&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Kernel Space&lt;/strong&gt; – It is the portion of memory where the kernel operates and provides its services, with full access to system resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;User Space&lt;/strong&gt; – Where applications and user processes run, with limited access for security reasons.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Conceptual Diagram of a UNIX system&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzltjue3cybgqgkmfr5st.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzltjue3cybgqgkmfr5st.png" alt="Image description" width="652" height="652"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Utilities and applications interact with the kernel via the shell (itself written in C) to perform tasks. In other words, users reach the kernel through the shell or through system calls, which then grant applications access to hardware and software resources.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;uname&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use this command to get basic Kernel information.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdt651whryh66wnkc9ie.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdt651whryh66wnkc9ie.png" alt="Image description" width="534" height="97"&gt;&lt;/a&gt;&lt;/p&gt;
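&lt;p&gt;A couple of related commands give a fuller picture of the running kernel:&lt;/p&gt;

```shell
uname -r            # kernel release only
uname -a            # kernel name, release, version, architecture, hostname
cat /proc/version   # the same information as recorded by the kernel itself
```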

&lt;h2&gt;
  
  
  &lt;strong&gt;3. The Linux Boot Sequence&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The Linux boot process is &lt;strong&gt;how the system starts&lt;/strong&gt; when you power it on:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1. BIOS/UEFI Initialization&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The BIOS (Basic Input/Output System) performs hardware checks via the POST (Power-On Self-Test).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It looks for a bootable disk and hands over control to the &lt;strong&gt;boot loader&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2. Boot Loader (GRUB 2)&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;After a successful POST, the BIOS loads and executes the boot code from the boot device, located in the first sector of the hard drive.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;GRUB (Grand Unified Bootloader)&lt;/strong&gt; loads the Linux kernel into memory.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It allows you to select the OS (if dual-booting).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;3. Kernel Initialization&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The kernel decompresses itself, loads into memory, performs tasks such as initializing hardware and setting up memory management, and mounts the &lt;strong&gt;root filesystem&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It then starts an &lt;strong&gt;init process&lt;/strong&gt; to initialize user space and set up the processes needed for the user environment.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;4. Init Process (systemd)&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;On most modern distributions, the &lt;strong&gt;init&lt;/strong&gt; role is filled by the &lt;strong&gt;systemd&lt;/strong&gt; daemon.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;systemd&lt;/strong&gt; starts system services and targets, bringing the OS to a working state.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Essential services (networking, logging, display) are initialized.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In fact, systemd is the first process, running with process ID (PID) 1.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8du17tjtq8xvb16cz0yd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8du17tjtq8xvb16cz0yd.png" alt="Image description" width="800" height="105"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;4. Shells and Environment Variables&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Types of Shells in Linux&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Bash (Bourne Again Shell)&lt;/strong&gt; – Most widely used, powerful scripting features.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Zsh (Z Shell)&lt;/strong&gt; – Extended features over Bash (auto-completion, themes).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Fish (Friendly Interactive Shell)&lt;/strong&gt; – User-friendly, with syntax highlighting.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Features of Bash Shell&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Command history&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Aliases &amp;amp; functions&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Script execution (&lt;code&gt;.sh&lt;/code&gt; files)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Environment Variables&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Environment variables store system-wide settings. To list all the environment variables in the system, use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;env&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F06gwtil5tdw7z6rnhfak.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F06gwtil5tdw7z6rnhfak.png" alt="Image description" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do you create a custom environment variable and make it persistent across logins?&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;VARAIBLE_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;value
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2q118h5jgpw33eb3ur2i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2q118h5jgpw33eb3ur2i.png" alt="Image description" width="474" height="118"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To make it persistent, add the line &lt;code&gt;export VARIABLE_NAME=value&lt;/code&gt; at the end of your &lt;code&gt;.bashrc&lt;/code&gt; file (using any editor, such as Vim), then start a new shell or run &lt;code&gt;source ~/.bashrc&lt;/code&gt;.&lt;/p&gt;
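&lt;p&gt;Putting the pieces together (the variable name and value here are just examples):&lt;/p&gt;

```shell
# Set the variable for the current shell session only
export MY_VAR=hello

# Append it to ~/.bashrc so future interactive shells pick it up
echo 'export MY_VAR=hello' >> ~/.bashrc

# Apply it to the current shell without logging out or rebooting
source ~/.bashrc

echo "$MY_VAR"    # prints: hello
```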

&lt;h2&gt;
  
  
  &lt;strong&gt;5. Linux File Types and Filesystem Structure&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In Linux, nearly every object can be treated as a type of file. This sounds like a sweeping statement, but in principle it is actually true.&lt;/p&gt;

&lt;p&gt;How to check file types?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-ltr&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5getj6pp2fjcay7dmtx8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5getj6pp2fjcay7dmtx8.png" alt="Image description" width="790" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first character of each entry, such as &lt;code&gt;d&lt;/code&gt; or &lt;code&gt;-&lt;/code&gt;, specifies its file type.&lt;/p&gt;
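&lt;p&gt;You can see several of these type characters side by side by listing a few well-known paths:&lt;/p&gt;

```shell
ls -ld /etc/passwd /tmp /dev/null
# -  regular file      (/etc/passwd)
# d  directory         (/tmp)
# c  character device  (/dev/null)
```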

&lt;h3&gt;
  
  
  &lt;strong&gt;Linux File Types&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Regular Files (&lt;/strong&gt;&lt;code&gt;-&lt;/code&gt;) – Documents, programs, scripts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Directories (&lt;/strong&gt;&lt;code&gt;d&lt;/code&gt;) – Folders containing files. &lt;code&gt;/home/user&lt;/code&gt; is a directory every Linux OS has.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Character Files (&lt;/strong&gt;&lt;code&gt;c&lt;/code&gt;) – Represent devices under the &lt;code&gt;/dev&lt;/code&gt; filesystem that allow the OS to communicate with I/O devices like keyboards.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Block Files (&lt;/strong&gt;&lt;code&gt;b&lt;/code&gt;) – Represent storage devices like hard drives.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Symbolic Links (&lt;/strong&gt;&lt;code&gt;l&lt;/code&gt;) – Shortcuts to other files.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Sockets (&lt;/strong&gt;&lt;code&gt;s&lt;/code&gt;) – Allow inter-process communication.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pipes (&lt;/strong&gt;&lt;code&gt;p&lt;/code&gt;) – Enable data transfer between processes, connecting one process's output to another process's input; the flow is unidirectional.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
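&lt;p&gt;Both kinds of pipe are easy to try out: the shell's &lt;code&gt;|&lt;/code&gt; operator creates an anonymous pipe, while &lt;code&gt;mkfifo&lt;/code&gt; creates a named pipe you can see on disk:&lt;/p&gt;

```shell
# Anonymous pipe: one process's output becomes the next one's input
echo "hello pipe" | tr a-z A-Z    # prints: HELLO PIPE

# Named pipe: a file whose type character is 'p'
mkfifo /tmp/mypipe
ls -l /tmp/mypipe                 # first character of the listing is 'p'
rm /tmp/mypipe
```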

&lt;h3&gt;
  
  
  &lt;strong&gt;Linux Filesystem Architecture&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The &lt;strong&gt;Linux filesystem&lt;/strong&gt; is a hierarchical structure starting from &lt;strong&gt;root (&lt;/strong&gt;&lt;code&gt;/&lt;/code&gt;):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqumkwmkat09797yebfam.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqumkwmkat09797yebfam.png" alt="Image description" width="800" height="75"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fps25pnts9pooeiabf062.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fps25pnts9pooeiabf062.png" alt="Image description" width="800" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Final Thoughts&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Linux has come a long way from being a &lt;strong&gt;hobby project&lt;/strong&gt; to &lt;strong&gt;powering the internet&lt;/strong&gt;. Whether you're a beginner or an advanced user, understanding its &lt;strong&gt;history, architecture, and core components&lt;/strong&gt; is key to mastering it.&lt;/p&gt;

&lt;p&gt;In the next blog, we’ll dive deeper into &lt;strong&gt;user and file access management, package management, storage, networking and much more.&lt;/strong&gt; Stay tuned! 💙🐧&lt;/p&gt;

</description>
      <category>linux</category>
      <category>opensource</category>
      <category>devops</category>
      <category>operatingsystems</category>
    </item>
    <item>
      <title>Leveraging efficient workflows of GitHub Actions for Seamless Automation</title>
      <dc:creator>Yash Patil</dc:creator>
      <pubDate>Sat, 25 Jan 2025 06:50:27 +0000</pubDate>
      <link>https://dev.to/yash_patil16/leveraging-efficient-workflows-of-github-actions-for-seamless-automation-1ef9</link>
      <guid>https://dev.to/yash_patil16/leveraging-efficient-workflows-of-github-actions-for-seamless-automation-1ef9</guid>
      <description>&lt;h3&gt;
  
  
  &lt;strong&gt;Introduction&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Hello again&lt;/strong&gt;!! In this blog, let's explore GitHub Actions and their role in automating CI/CD workflows. We'll walk through a detailed explanation of a GitHub Actions YAML file I created to automate the CI/CD pipeline of a MERN stack application. Additionally, we'll compare GitHub Actions with Jenkins to weigh the pros and cons of each.&lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;What Are GitHub Actions?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;GitHub Actions is a CI/CD and automation platform directly integrated into GitHub. It allows developers to automate their workflows—such as testing, building, and deploying—by writing simple YAML configuration files. These workflows are triggered by events such as pushes, pull requests, or even manual intervention.&lt;/p&gt;

&lt;p&gt;GitHub Actions goes beyond just DevOps and lets you run workflows when other events happen in your repository. For example, you can run a workflow to automatically add the appropriate labels whenever someone creates a new issue in your repository.&lt;/p&gt;

&lt;h4&gt;
  
  
  Why Use GitHub Actions?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Seamless Integration:&lt;/strong&gt; It’s built into GitHub, so there’s no need for separate configuration. If an organization uses GitHub for its codebase, it just makes sense to use GitHub Actions for its automation and SDLC workflows.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automation:&lt;/strong&gt; Automate repetitive tasks like testing, building, and deploying code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Flexibility:&lt;/strong&gt; Trigger workflows based on a variety of events, such as commits to certain branches, issues opened in the repository, PRs being created, manual triggering with input parameters, and many more.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cost-Effectiveness:&lt;/strong&gt; It’s free for public repositories and has generous limits for private repositories.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Key Components:
&lt;/h4&gt;

&lt;p&gt;You can configure a GitHub Actions &lt;strong&gt;workflow&lt;/strong&gt; to be triggered when an &lt;strong&gt;event&lt;/strong&gt; occurs in your repository, such as a pull request being opened or an issue being created. Your workflow contains one or more &lt;strong&gt;jobs&lt;/strong&gt; which can run in sequential order or in parallel. Each job will run inside its own virtual machine &lt;strong&gt;runner&lt;/strong&gt;, or inside a container, and has one or more &lt;strong&gt;steps&lt;/strong&gt; that either run a script that you define or run an &lt;strong&gt;action&lt;/strong&gt;, which is a reusable extension that can simplify your workflow.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Workflows:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;Workflow&lt;/strong&gt; is a configurable automated process that will run one or more jobs. Workflows are defined by a YAML file checked in to your repository and will run when triggered by an event in your repository, or they can be triggered manually, or at a defined schedule.&lt;/p&gt;

&lt;p&gt;Workflows are defined in the &lt;code&gt;.github/workflows&lt;/code&gt; directory in a repository. A repository can have multiple workflows, each of which can perform a different set of tasks such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Building and testing pull requests&lt;/li&gt;
&lt;li&gt;Deploying your application every time a release is created&lt;/li&gt;
&lt;li&gt;Adding a label whenever a new issue is opened&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Events:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;An &lt;strong&gt;event&lt;/strong&gt; is a specific activity in a repository that triggers a &lt;strong&gt;workflow&lt;/strong&gt; run. For example, an activity can originate from GitHub when someone creates a pull request, opens an issue, or pushes a commit to a repository. You can also trigger a workflow to run on a schedule, by posting to a REST API, or manually.&lt;/p&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Jobs:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;job&lt;/strong&gt; is a set of &lt;strong&gt;steps&lt;/strong&gt; in a workflow that is executed on the same &lt;strong&gt;runner&lt;/strong&gt;. Each step is either a shell script that will be executed, or an &lt;strong&gt;action&lt;/strong&gt; that will be run. Steps are executed in order and are dependent on each other. Since each step is executed on the same runner, you can share data from one step to another. For example, you can have a step that builds your application followed by a step that tests the application that was built.&lt;/p&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;strong&gt;Steps:&lt;/strong&gt; Individual tasks in a job, like running a script or executing an action.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;strong&gt;Runners:&lt;/strong&gt; A &lt;strong&gt;runner&lt;/strong&gt; is a server that runs your workflows when they're triggered. Each runner can run a single &lt;strong&gt;job&lt;/strong&gt; at a time. GitHub provides Ubuntu Linux, Microsoft Windows, and macOS runners to run your &lt;strong&gt;workflows&lt;/strong&gt;. Each workflow run executes in a fresh, newly-provisioned virtual machine.&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;
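&lt;p&gt;To see how these pieces fit together, here is a minimal, hypothetical workflow (not part of this project) with one event, one job, a runner, and two steps, one of which uses a marketplace action:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# .github/workflows/hello.yml (illustrative example)
name: Hello Workflow

on: [push]                       # event that triggers the workflow

jobs:
  greet:                         # a single job
    runs-on: ubuntu-latest       # runner
    steps:
    - name: Checkout code
      uses: actions/checkout@v2  # an action (reusable extension)
    - name: Say hello
      run: echo "Hello from GitHub Actions"   # a script step
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;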




&lt;p&gt;Before going forward, check out the MERN stack application we will be working with in this &lt;a href="https://dev.to/yash_patil16/containerizing-a-mern-stack-application-1pn1"&gt;blog&lt;/a&gt;, and check out this project’s &lt;a href="https://github.com/YashPatil1609/MERN-CICD-ActionsWorkflow.git" rel="noopener noreferrer"&gt;repository&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  CI/CD Workflow for the MERN Stack Application
&lt;/h3&gt;

&lt;p&gt;Below is the complete GitHub Actions YAML file, which I wrote for this project from scratch. Let’s break it down step by step.&lt;/p&gt;

&lt;h4&gt;
  
  
  Full YAML File:
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build and Deploy a MERN Stack Application&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;workflow_dispatch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;

&lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;DOCKERHUB_USER&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.DOCKERHUB_USER }}&lt;/span&gt;
    &lt;span class="na"&gt;DOCKERHUB_PASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.DOCKERHUB_PASSWORD }}&lt;/span&gt;
    &lt;span class="na"&gt;VERSION&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ github.sha }}&lt;/span&gt;
    &lt;span class="na"&gt;INSTANCE_IP&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.INSTANCE_IP }}&lt;/span&gt;
    &lt;span class="na"&gt;SSH_KEY&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.PRIVATE_KEY }}&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Build-Push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build and Push Docker Images&lt;/span&gt;
        &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;

        &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout code&lt;/span&gt;
          &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v2&lt;/span&gt;

        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Login to DockerHub&lt;/span&gt;
          &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo $DOCKERHUB_PASSWORD | docker login -u $DOCKERHUB_USER --password-stdin&lt;/span&gt;

        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build and push Frontend Image&lt;/span&gt;
          &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
            &lt;span class="s"&gt;cd ./mern/frontend&lt;/span&gt;
            &lt;span class="s"&gt;docker build -t $DOCKERHUB_USER/mern-frontend:$VERSION .&lt;/span&gt;
            &lt;span class="s"&gt;docker push $DOCKERHUB_USER/mern-frontend:$VERSION&lt;/span&gt;

        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build and push Backend Image&lt;/span&gt;
          &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
            &lt;span class="s"&gt;cd ./mern/backend&lt;/span&gt;
            &lt;span class="s"&gt;docker build -t $DOCKERHUB_USER/mern-backend:$VERSION .&lt;/span&gt;
            &lt;span class="s"&gt;docker push $DOCKERHUB_USER/mern-backend:$VERSION&lt;/span&gt;

    &lt;span class="na"&gt;Deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy the application&lt;/span&gt;
        &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
        &lt;span class="na"&gt;needs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build-Push&lt;/span&gt;

        &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout code&lt;/span&gt;
          &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v2&lt;/span&gt;

        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Add SSH Key&lt;/span&gt;
          &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
            &lt;span class="s"&gt;mkdir -p ~/.ssh&lt;/span&gt;
            &lt;span class="s"&gt;echo "${{ secrets.PRIVATE_KEY }}" &amp;gt; ~/.ssh/id_rsa&lt;/span&gt;
            &lt;span class="s"&gt;chmod 600 ~/.ssh/id_rsa&lt;/span&gt;

        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Add EC2 to known hosts&lt;/span&gt;
          &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
            &lt;span class="s"&gt;ssh-keyscan -H $INSTANCE_IP &amp;gt;&amp;gt; ~/.ssh/known_hosts&lt;/span&gt;

        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Copy startup-script and docker-compose to EC2&lt;/span&gt;
          &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
            &lt;span class="s"&gt;scp -i ~/.ssh/id_rsa startup-script.sh ubuntu@$INSTANCE_IP:/home/ubuntu&lt;/span&gt;
            &lt;span class="s"&gt;scp -i ~/.ssh/id_rsa docker-compose.yml ubuntu@$INSTANCE_IP:/home/ubuntu&lt;/span&gt;

        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Run deployment script on EC2&lt;/span&gt;
          &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
            &lt;span class="s"&gt;ssh -i ~/.ssh/id_rsa ubuntu@$INSTANCE_IP "bash /home/ubuntu/startup-script.sh $VERSION"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  Step-by-Step Explanation
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Workflow Trigger: &lt;code&gt;on: workflow_dispatch&lt;/code&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Purpose:&lt;/strong&gt; This allows manual triggering of the workflow from the GitHub Actions interface.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Why:&lt;/strong&gt; Useful for deployments where you want to ensure readiness before running the CI/CD pipeline.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We can also configure these triggers to execute our workflows on events such as commits to certain branches, issues opened in the repository, PRs being created, manual triggering with input parameters, and many more.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
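&lt;p&gt;As an illustration (this is not the trigger block used in this project), a workflow could combine several of these triggers like so:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# Illustrative trigger block, not the one from this pipeline
on:
  push:
    branches: [main]          # run on commits to main
  pull_request:               # run on pull requests
  schedule:
    - cron: '0 0 * * *'       # run daily at midnight UTC
  workflow_dispatch:          # manual trigger, optionally with inputs
    inputs:
      environment:
        description: 'Target environment'
        required: false
        default: 'staging'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;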




&lt;h3&gt;
  
  
  &lt;code&gt;env&lt;/code&gt; Section:
&lt;/h3&gt;

&lt;p&gt;This defines environment variables used across jobs. We have to configure these secrets in the repository under &lt;strong&gt;Settings &amp;gt; Secrets and variables &amp;gt; Actions &amp;gt; Repository secrets&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqf9tuqbliqjgndjj4g1q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqf9tuqbliqjgndjj4g1q.png" alt="Image description" width="800" height="284"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I have created these variables and secrets to avoid hardcoding repetitive values and sensitive information in the pipeline script.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;DOCKERHUB_USER&lt;/code&gt; and &lt;code&gt;DOCKERHUB_PASSWORD&lt;/code&gt; are pulled from GitHub Secrets for secure access to my Dockerhub account while pushing and pulling built images.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;VERSION&lt;/code&gt; uses &lt;code&gt;github.sha&lt;/code&gt; to dynamically tag Docker images with the commit SHA. This avoids the need to hardcode image tags.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;INSTANCE_IP&lt;/code&gt; specifies the target EC2 instance IP for deployment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;SSH_KEY&lt;/code&gt; contains the private key for SSH access to the EC2 instance.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Job 1: &lt;code&gt;Build-Push&lt;/code&gt;
&lt;/h3&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Runs-on: ubuntu-latest&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Purpose:&lt;/strong&gt; Specifies the environment where the job will run. GitHub-hosted runners (like &lt;code&gt;ubuntu-latest&lt;/code&gt;) provide a pre-configured environment. When a workflow starts that is meant to &lt;strong&gt;run on&lt;/strong&gt; one of these GitHub-hosted runners, GitHub spins up a fresh virtual machine (or a container) based on the specified environment (here &lt;code&gt;ubuntu-latest&lt;/code&gt;). After the job executes, the VM is destroyed, which ensures a clean environment for every new job or workflow.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This runner comes with a pre-configured environment, which is why I can run Docker commands without explicitly installing Docker.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;These runners are managed and maintained entirely by GitHub; our project has no ownership over them.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Steps in Build-Push Job:
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Checkout Code:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout code&lt;/span&gt;
  &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* **Purpose:** `actions/checkout@v2` is a pre-built action from GitHub’s marketplace. Retrieves the code from the repository to the runner.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Login to DockerHub:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Login to DockerHub&lt;/span&gt;
  &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo $DOCKERHUB_PASSWORD | docker login -u $DOCKERHUB_USER --password-stdin&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* **Purpose:** Logs into DockerHub to allow pushing images.

* 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwhmofisa9rt2iu5wc2w5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwhmofisa9rt2iu5wc2w5.png" alt="Image description" width="634" height="159"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    In this step I used the Environment Variables created at the start of the script.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Build and Push Frontend Image:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build and push Frontend Image&lt;/span&gt;
  &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;cd ./mern/frontend&lt;/span&gt;
    &lt;span class="s"&gt;docker build -t $DOCKERHUB_USER/mern-frontend:$VERSION .&lt;/span&gt;
    &lt;span class="s"&gt;docker push $DOCKERHUB_USER/mern-frontend:$VERSION&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* **Purpose:** Builds and tags the frontend image, then pushes it to DockerHub. The folder already contains the custom DockerFile for the frontend service.

* Thing to notice is how i use `VERSION` variable that uses `github.sha` to dynamically tag Docker images with the commit SHA. This voids the need to hardcode image tags.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Build and Push Backend Image:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build and push Backend Image&lt;/span&gt;
  &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;cd ./mern/backend&lt;/span&gt;
    &lt;span class="s"&gt;docker build -t $DOCKERHUB_USER/mern-backend:$VERSION .&lt;/span&gt;
    &lt;span class="s"&gt;docker push $DOCKERHUB_USER/mern-backend:$VERSION&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* **Purpose:** Similar to the frontend step but for the backend application.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With this, our first job is complete: we checked out the repository codebase, built the Docker images for the application services, and pushed them to DockerHub.&lt;/p&gt;

&lt;p&gt;After the job finishes executing, the GitHub-hosted runner (&lt;code&gt;ubuntu-latest&lt;/code&gt;) is terminated.&lt;/p&gt;




&lt;h3&gt;
  
  
  Job 2: &lt;code&gt;Deploy&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;To deploy our application, I will be using an AWS EC2 instance. So stick with me: I will also show the execution of the entire pipeline later; for now, this is just the explanation of the pipeline script.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Needs: Build-Push&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Ensures this job starts only after the &lt;code&gt;Build-Push&lt;/code&gt; job completes successfully.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Steps in Deploy Job:
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Checkout Code:&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* Same as the previous step; ensures code is available on the runner.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Add SSH Key:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Add SSH Key&lt;/span&gt;
  &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;mkdir -p ~/.ssh&lt;/span&gt;
    &lt;span class="s"&gt;echo "${{ secrets.PRIVATE_KEY }}" &amp;gt; ~/.ssh/id_rsa&lt;/span&gt;
    &lt;span class="s"&gt;chmod 600 ~/.ssh/id_rsa&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* **Purpose:** Configures the private SSH key for accessing the EC2 instance securely. I am using the `${{ secrets.PRIVATE_KEY }}` which stores the private key to the deployment server (EC2 Instance).
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Add EC2 to Known Hosts:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Add EC2 to known hosts&lt;/span&gt;
  &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;ssh-keyscan -H $INSTANCE_IP &amp;gt;&amp;gt; ~/.ssh/known_hosts&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* **Purpose:** Prevents host authenticity issues during SSH commands.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Copy Deployment Files to EC2:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Copy startup-script and docker-compose to EC2&lt;/span&gt;
  &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;scp -i ~/.ssh/id_rsa startup-script.sh ubuntu@$INSTANCE_IP:/home/ubuntu&lt;/span&gt;
    &lt;span class="s"&gt;scp -i ~/.ssh/id_rsa docker-compose.yml ubuntu@$INSTANCE_IP:/home/ubuntu&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* **Purpose:** This transfers my custom startup shell script and the docker-compose file for deployment to the target EC2 instance.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Run Deployment Script on EC2:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Run deployment script on EC2&lt;/span&gt;
  &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;ssh -i ~/.ssh/id_rsa ubuntu@$INSTANCE_IP "bash /home/ubuntu/startup-script.sh $VERSION"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxr1yazzo4wjk3z3b8u0t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxr1yazzo4wjk3z3b8u0t.png" alt="Image description" width="429" height="215"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    **Purpose:** Executes the deployment script remotely on the EC2 instance to start the application using Docker Compose. I export the `$VERSION` variable to the EC2 Instance using the Shell Script which i have referenced in the Docker-Compose file to pull and run the correct versions of application service containers.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
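&lt;p&gt;The startup script itself isn’t shown in this post; as a rough sketch, a script like this (the file name and exact compose commands here are assumptions, not the actual script) would take the image tag as an argument and restart the stack:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;#!/bin/bash
# startup-script.sh (hypothetical sketch; the real script lives in the repo)
# Takes the image tag as its first argument and (re)starts the stack.
set -e

export VERSION="$1"     # commit SHA passed in from the workflow

cd /home/ubuntu
docker-compose pull     # pull the freshly pushed images for this tag
docker-compose up -d    # recreate the containers in the background
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;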




&lt;h3&gt;
  
  
  &lt;strong&gt;Pipeline Execution&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Let’s execute this pipeline, shall we?!&lt;/p&gt;

&lt;p&gt;We have configured our workflow to be manually triggered. Click on &lt;strong&gt;Run workflow&lt;/strong&gt; to execute the pipeline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9sd0x3362hv5k503pg8d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9sd0x3362hv5k503pg8d.png" alt="Image description" width="800" height="213"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The workflow for our first job, &lt;strong&gt;Build and Push Docker Images&lt;/strong&gt;, has started. Let’s check out the logs of each step.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fobzxsujdtb6z2488byqb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fobzxsujdtb6z2488byqb.png" alt="Image description" width="800" height="297"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;This sets up the GitHub hosted Runner for our job :&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu3url5vq67q699bge2jf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu3url5vq67q699bge2jf.png" alt="Image description" width="800" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Checks out the codebase of our Repository:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx3w5zavb3lzcpw8hubyd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx3w5zavb3lzcpw8hubyd.png" alt="Image description" width="800" height="283"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Logs in to my DockerHub account inside the runner, builds the images, and pushes them to DockerHub:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqswej0xjlvun8c84ia43.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqswej0xjlvun8c84ia43.png" alt="Image description" width="800" height="128"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr9tzdpwyq8upgiyt1535.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr9tzdpwyq8upgiyt1535.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjbtspjn1xszd7u0riekm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjbtspjn1xszd7u0riekm.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;As you can see our First Job is successfully executed and our application service’s docker images are built and pushed to Docker hub with dynamic tags.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As soon as our first job, &lt;strong&gt;Build-Push&lt;/strong&gt;, has completed its execution, our deployment job (which I configured as a separate job) starts executing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7fl20ve7f2bhklhanqd4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7fl20ve7f2bhklhanqd4.png" alt="Image description" width="800" height="265"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the deployment of our application I have created an AWS EC2 instance. &lt;strong&gt;NOTE:&lt;/strong&gt; Make sure Docker and Docker Compose are installed on the instance, and that your user has permission to access the Docker daemon. Also add an inbound rule for port 5173 on the instance, because this is where we will access our application.&lt;/p&gt;
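&lt;p&gt;On a fresh Ubuntu instance, that one-time setup might look roughly like this (an illustrative sketch using Ubuntu’s default packages):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# One-time setup on the Ubuntu EC2 instance (illustrative)
sudo apt-get update
sudo apt-get install -y docker.io docker-compose
sudo usermod -aG docker ubuntu   # let the ubuntu user access the Docker daemon
# log out and back in for the group change to take effect
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;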

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmy4keg1t3u9pextc8ib2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmy4keg1t3u9pextc8ib2.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The job starts its execution, and we can check out its logs:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fge6z0srud5ao9etie5ji.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fge6z0srud5ao9etie5ji.png" alt="Image description" width="800" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This copies the bash Startup-script and the docker-compose file to the EC2 Instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpwmdtm6miyktwo1ogdkr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpwmdtm6miyktwo1ogdkr.png" alt="Image description" width="630" height="109"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And with this command :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frrs4klog5fjlaxjx3ue8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frrs4klog5fjlaxjx3ue8.png" alt="Image description" width="800" height="74"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It SSHes into the server and executes the custom shell script that deploys the application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpaovx1pu0o7je9876egu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpaovx1pu0o7je9876egu.png" alt="Image description" width="800" height="93"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's access it using the instance IP and the mapped port number:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5nw5schtz4tillxloijq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5nw5schtz4tillxloijq.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And there you go! We have successfully deployed our 3-tier MERN stack application to a remote EC2 instance, which acts as a deployment server, using GitHub Actions.&lt;/p&gt;




&lt;h3&gt;
  
  
  Jenkins vs GitHub Actions
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Having built CI/CD pipelines with both, I have felt quite a difference between them. GitHub Actions uses its compute resources more efficiently and is faster and easier to start with as a beginner. It is perfect for projects hosted on GitHub, as it requires minimal setup and no external tools to start automating workflows.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Jenkins requires separate installation and setup, either on-premises or on cloud infrastructure. Pipelines are defined using Groovy scripts, which can have a steeper learning curve for beginners compared to YAML. But it is best for advanced users who need fine-grained control over their CI/CD pipelines and want a highly extensible UI with a rich plugin ecosystem (over 1,800 plugins), allowing integration with nearly any tool or platform.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For my Jenkins pipeline setup for this same application, check out &lt;a href="https://dev.to/yash_patil16/building-a-robust-cicd-pipeline-using-jenkins-for-a-mern-stack-application-28jp"&gt;this blog&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Connect with me on &lt;a href="https://www.linkedin.com/in/yash-patil-24112a258/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;




</description>
      <category>githubactions</category>
      <category>automation</category>
      <category>cicd</category>
      <category>aws</category>
    </item>
    <item>
      <title>Building a Robust CI/CD Pipeline using Jenkins for a MERN Stack Application</title>
      <dc:creator>Yash Patil</dc:creator>
      <pubDate>Sat, 25 Jan 2025 06:33:45 +0000</pubDate>
      <link>https://dev.to/yash_patil16/building-a-robust-cicd-pipeline-using-jenkins-for-a-mern-stack-application-592c</link>
      <guid>https://dev.to/yash_patil16/building-a-robust-cicd-pipeline-using-jenkins-for-a-mern-stack-application-592c</guid>
      <description>&lt;p&gt;As modern software development emphasizes agility and reliability, CI/CD (Continuous Integration/Continuous Deployment) pipelines have become indispensable. In this blog, we’ll delve into the implementation of a CI/CD pipeline for a MERN stack application (React.js frontend, Node.js/Express backend, and MongoDB database) which we have already worked on (&lt;a href="https://dev.to/yash_patil16/containerizing-a-mern-stack-application-1pn1"&gt;check out this blog to know about the application&lt;/a&gt;), highlighting semantic versioning, why versioning is needed in modern-day applications, Docker image versioning, automated deployment, and more. Let’s walk through each stage of the pipeline with detailed explanations and examples.&lt;/p&gt;

&lt;p&gt;First, let's talk about what CI/CD pipelines are and how the pipeline I built fits into the picture.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;What is a CI/CD Pipeline?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;A CI/CD pipeline automates the integration and deployment of code, ensuring that changes are validated, built, tested, and deployed efficiently. It minimizes manual intervention, reduces errors, and accelerates delivery. In our project, the pipeline not only automates these processes but also increments the application version for each build.&lt;/p&gt;

&lt;p&gt;Our pipeline embodies a real-world DevOps best practice: &lt;strong&gt;Continuous Integration and Continuous Deployment (CI/CD)&lt;/strong&gt;. Let's break this down in the context of industry-standard workflows and explain why each step is necessary.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Versioning in Real-World Practices&lt;/strong&gt;
&lt;/h3&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Semantic Versioning&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Semantic Versioning is the most common standard for versioning applications in the industry. It follows the format:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;MAJOR.MINOR.PATCH
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;MAJOR&lt;/strong&gt;: Incremented for changes that are backward-incompatible (e.g., &lt;code&gt;1.0.0&lt;/code&gt; → &lt;code&gt;2.0.0&lt;/code&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;MINOR&lt;/strong&gt;: Incremented for new features that are backward-compatible (e.g., &lt;code&gt;1.0.0&lt;/code&gt; → &lt;code&gt;1.1.0&lt;/code&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;PATCH&lt;/strong&gt;: Incremented for bug fixes or minor improvements (e.g., &lt;code&gt;1.0.1&lt;/code&gt; → &lt;code&gt;1.0.2&lt;/code&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
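&lt;p&gt;The three bump rules can be sketched in a few lines of shell. Note that the &lt;code&gt;bump&lt;/code&gt; helper below is ours for illustration only; it is not part of npm or any standard tool.&lt;/p&gt;

```shell
# Minimal sketch of the three semantic-version bumps
# (the "bump" helper is illustrative, not a standard tool)
bump() {
  local part=$1 version=$2 major minor patch
  major=${version%%.*}                     # text before the first dot
  patch=${version##*.}                     # text after the last dot
  minor=${version#*.}; minor=${minor%.*}   # the middle component
  case "$part" in
    patch) echo "$major.$minor.$((patch + 1))" ;;
    minor) echo "$major.$((minor + 1)).0" ;;
    major) echo "$((major + 1)).0.0" ;;
  esac
}

bump patch 1.0.1   # 1.0.2
bump minor 1.0.0   # 1.1.0
bump major 1.0.0   # 2.0.0
```

&lt;p&gt;In practice, tools such as &lt;code&gt;npm version patch&lt;/code&gt; apply exactly this logic to the version field in &lt;code&gt;package.json&lt;/code&gt;.&lt;/p&gt;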

&lt;h4&gt;
  
  
  &lt;strong&gt;Is It Necessary to Increment the Version for Every Commit or Pipeline Build?&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No&lt;/strong&gt;, in many cases, a version bump only happens when a release is being prepared for &lt;strong&gt;deployment to production&lt;/strong&gt; or for &lt;strong&gt;public consumption&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Yes&lt;/strong&gt;, if every commit directly impacts a deliverable (e.g., in a Continuous Deployment (CD) environment where every commit leads to production changes).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;When to Increment Versions in the Real World&lt;/strong&gt;
&lt;/h3&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Version Bumps Are Typically Tied to Releases&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Teams often work in branches (e.g., &lt;code&gt;feature&lt;/code&gt;, &lt;code&gt;development&lt;/code&gt;, or &lt;code&gt;staging&lt;/code&gt;) and only merge into &lt;code&gt;main&lt;/code&gt; when a feature or fix is complete.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A version bump occurs when a deployable release is ready, not for every commit.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Scenarios Where Version Increments Happen&lt;/strong&gt;:
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Feature Releases&lt;/strong&gt;: A feature is complete, tested, and merged into &lt;code&gt;main&lt;/code&gt;. The version is incremented (e.g., &lt;code&gt;1.1.0&lt;/code&gt;) before releasing it to production.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Bug Fixes&lt;/strong&gt;: A critical bug is fixed, and the version's &lt;strong&gt;PATCH&lt;/strong&gt; number is incremented before deployment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Hotfixes&lt;/strong&gt;: Emergency fixes often lead to quick &lt;strong&gt;PATCH&lt;/strong&gt; bumps (e.g., &lt;code&gt;1.2.1&lt;/code&gt; → &lt;code&gt;1.2.2&lt;/code&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Performance Improvements&lt;/strong&gt;: Minor performance updates might warrant a &lt;strong&gt;PATCH&lt;/strong&gt; bump, or a &lt;strong&gt;MINOR&lt;/strong&gt; bump if they involve significant new optimizations.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;Commits That Do Not Trigger Version Increments&lt;/strong&gt;:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Work-in-progress changes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Internal refactoring without user-facing impacts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Development or experimental changes not yet merged to &lt;code&gt;main&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In practice, &lt;strong&gt;incrementing versions&lt;/strong&gt; should be done automatically. Our pipeline simulates a continuous deployment workflow in which every build increments the &lt;strong&gt;patch version&lt;/strong&gt;, signaling minor updates. For builds that are &lt;strong&gt;directly deliverable to production environments&lt;/strong&gt;, a version increment is necessary every time.&lt;/p&gt;

&lt;p&gt;For this pipeline, we are using &lt;strong&gt;Jenkins&lt;/strong&gt;, a widely popular automation server known for its flexibility and extensive plugin ecosystem. Jenkins excels in orchestrating tasks like building, testing, and deploying applications. Its robust community support, ease of configuration, and compatibility with a wide range of tools make it a go-to choice for CI/CD workflows.&lt;/p&gt;

&lt;p&gt;To set up Jenkins, I have created a Jenkins container using the following Docker command:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-p&lt;/span&gt; 8080:8080 &lt;span class="nt"&gt;-p&lt;/span&gt; 50000:50000 &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-v&lt;/span&gt; jenkins_home:/var/jenkins_home &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-v&lt;/span&gt; /var/run/docker.sock:/var/run/docker.sock &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;which docker&lt;span class="si"&gt;)&lt;/span&gt;:/usr/bin/docker &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--group-add&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;stat&lt;/span&gt; &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'%g'&lt;/span&gt; /var/run/docker.sock&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    jenkins/jenkins:lts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Explanation of the Arguments:
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;-p 8080:8080&lt;/code&gt;: Maps Jenkins’ web interface to port 8080 on the host machine.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;-p 50000:50000&lt;/code&gt;: Maps the Jenkins agent communication port to the host machine.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;-v jenkins_home:/var/jenkins_home&lt;/code&gt;: Persists Jenkins data on the host for durability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;-v /var/run/docker.sock:/var/run/docker.sock&lt;/code&gt;: Grants Jenkins access to the Docker daemon, enabling it to manage Docker containers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;-v $(which docker):/usr/bin/docker&lt;/code&gt;: Provides Jenkins access to the Docker CLI.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;--group-add $(stat -c '%g' /var/run/docker.sock)&lt;/code&gt;: Adds the Jenkins user to the Docker group for permission to run Docker commands.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;jenkins/jenkins:lts&lt;/code&gt;: Specifies the Jenkins Long-Term Support (LTS) image.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This setup ensures Jenkins is fully equipped to handle Docker-based workflows, making it an integral part of our CI/CD process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Also make sure that the Jenkins container has Node and NPM installed on it.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxqohxloi6v36u1cgm6o3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxqohxloi6v36u1cgm6o3.png" alt="Image description" width="800" height="78"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, our Jenkins container is up and running and accessible at &lt;code&gt;localhost:8080&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fed0ypvv3svch2kndh11o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fed0ypvv3svch2kndh11o.png" alt="Image description" width="800" height="340"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I have opted to use a simple pipeline job for our project and written the pipeline script in a Jenkinsfile.&lt;br&gt;&lt;br&gt;
Check out the project &lt;a href="https://github.com/YashPatil1609/MERN-CI-CD-Workflow.git" rel="noopener noreferrer"&gt;repository&lt;/a&gt;, which contains the codebase of the application, the pipeline script in the Jenkinsfile, our docker-compose file, and a bash script used for deploying the application (we'll get to that later).&lt;/p&gt;

&lt;h3&gt;
  
  
  STAGE ONE : Increment application versions
&lt;/h3&gt;

&lt;p&gt;Our frontend and backend services use NPM as their package manager and dependency handler. Every package manager keeps track of a version in its main build file. The &lt;code&gt;package.json&lt;/code&gt; file we write for our application has a &lt;code&gt;version&lt;/code&gt; field which denotes the current application version. This is also where build information, dependencies, and startup scripts are listed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgk0qfruf7sumtivmucta.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgk0qfruf7sumtivmucta.png" alt="Image description" width="407" height="239"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxzjb0grmialms51ton5d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxzjb0grmialms51ton5d.png" alt="Image description" width="743" height="276"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Build tools have commands to increment the versions of the applications. In this pipeline, every build automatically increments the &lt;strong&gt;patch version&lt;/strong&gt;, simulating the scenario of minor updates.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr56bsqp51occphv7pf3q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr56bsqp51occphv7pf3q.png" alt="Image description" width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;npm version patch --no-git-tag-version&lt;/code&gt;: Updates the patch version in &lt;code&gt;package.json&lt;/code&gt; without creating a Git tag.&lt;/p&gt;

&lt;p&gt;I have used &lt;strong&gt;jq&lt;/strong&gt;, a lightweight and flexible command-line utility, &lt;strong&gt;to parse, filter, transform, and process JSON data&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In our case it parses the updated versions from the &lt;code&gt;package.json&lt;/code&gt; files of both the frontend and backend services and stores them in the environment variables &lt;code&gt;env.FRONTEND_VERSION&lt;/code&gt; &lt;strong&gt;&amp;amp;&lt;/strong&gt; &lt;code&gt;env.BACKEND_VERSION&lt;/code&gt; respectively, followed by &lt;code&gt;BUILD_NUMBER&lt;/code&gt;, which Jenkins provides as an environment variable out of the box.&lt;/p&gt;
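&lt;p&gt;As a rough sketch of that parsing step, assuming jq is installed (the JSON below is a stand-in, not the project's real &lt;code&gt;package.json&lt;/code&gt;, and the tag format mirrors the one described above):&lt;/p&gt;

```shell
# Illustrative only: parse the "version" field the way the pipeline does with jq
FRONTEND_VERSION=$(echo '{ "name": "frontend", "version": "1.0.3" }' | jq -r .version)
echo "$FRONTEND_VERSION"   # 1.0.3

# The Jenkinsfile then appends Jenkins' built-in BUILD_NUMBER to form the image tag
IMAGE_TAG="$FRONTEND_VERSION-${BUILD_NUMBER:-42}"
echo "$IMAGE_TAG"
```

&lt;p&gt;The &lt;code&gt;-r&lt;/code&gt; flag makes jq print the raw string rather than a JSON-quoted value, which is what you want when assigning to a shell variable.&lt;/p&gt;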

&lt;h3&gt;
  
  
  STAGE TWO : &lt;strong&gt;Building Docker Images and pushing them to Dockerhub.&lt;/strong&gt;
&lt;/h3&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Building Docker Images for Every New Version&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;When changes are committed, &lt;strong&gt;rebuilding the Docker images&lt;/strong&gt; ensures that the application is packaged with the latest code and dependencies.&lt;/p&gt;

&lt;h4&gt;
  
  
  Why is this important in industry?
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Immutable Deployments&lt;/strong&gt;: Docker images are snapshots of your application and its environment. By creating a new image for every version, you guarantee the environment for that version is consistent, regardless of where or when it is deployed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Eliminates "Works on My Machine" Issues&lt;/strong&gt;: All developers, testers, and production environments use the same image, avoiding discrepancies between local setups and production.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Supports Rollbacks&lt;/strong&gt;: If something breaks, you can redeploy an older image/version without rebuilding the application.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Pushing Images to a Centralized Repository&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;By pushing images to &lt;strong&gt;Docker Hub&lt;/strong&gt; (or another registry):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Centralized Access&lt;/strong&gt;: Teams and systems can pull the latest images without needing access to the source code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Supports Distributed Teams&lt;/strong&gt;: Developers across different locations can pull the same image, ensuring consistency.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Versioned History&lt;/strong&gt;: The registry acts as a timeline of all your application versions, making it easy to trace or roll back.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In industry, container registries like Docker Hub, AWS ECR, or Azure Container Registry are used to store and manage these images.&lt;/p&gt;

&lt;p&gt;We will tag our image with &lt;code&gt;${env.FRONTEND_VERSION}&lt;/code&gt; and &lt;code&gt;${env.BACKEND_VERSION}&lt;/code&gt; for precise versioning.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxrt4xoo0opfezcmmo0hs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxrt4xoo0opfezcmmo0hs.png" alt="Image description" width="800" height="222"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Sorry for the small image.&lt;/p&gt;

&lt;p&gt;We have already created Dockerfiles for the frontend and backend services at paths &lt;code&gt;./mern/frontend&lt;/code&gt; and &lt;code&gt;./mern/backend&lt;/code&gt; respectively. We build the images and tag them using our dockerhub username, the image repository for the individual service and the updated versions stored in &lt;code&gt;${env.FRONTEND_VERSION}&lt;/code&gt; and &lt;code&gt;${env.BACKEND_VERSION}&lt;/code&gt;.&lt;/p&gt;
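&lt;p&gt;A minimal sketch of how such a tag is assembled; the username and repository names below are placeholders, not the project's real values:&lt;/p&gt;

```shell
# Placeholder values: DOCKER_USER mirrors the Jenkins environment variable,
# and "mern-frontend" is an assumed repository name
DOCKER_USER="myuser"
FRONTEND_VERSION="1.0.3-42"   # npm version plus Jenkins BUILD_NUMBER

FRONTEND_IMAGE="$DOCKER_USER/mern-frontend:$FRONTEND_VERSION"
echo "$FRONTEND_IMAGE"   # myuser/mern-frontend:1.0.3-42

# The stage then runs, in effect:
#   docker build -t "$FRONTEND_IMAGE" ./mern/frontend
#   docker push  "$FRONTEND_IMAGE"
```

&lt;p&gt;Because the version is part of the tag, every successful build leaves a distinct, pullable image behind.&lt;/p&gt;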

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp7s8hnuzzupv98ipc2fv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp7s8hnuzzupv98ipc2fv.png" alt="Image description" width="433" height="75"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have created an environment variable to store the Docker Hub username so as not to hardcode it.&lt;/p&gt;

&lt;p&gt;After the images are built we need to push them to Docker Hub, but to do so we need to log in to the Docker Hub account from our Jenkins environment. I created usernamePassword-type credentials in Jenkins.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdxatky8k7chir6bg4m15.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdxatky8k7chir6bg4m15.png" alt="Image description" width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I then accessed them in the pipeline script using the &lt;strong&gt;withCredentials()&lt;/strong&gt; function. The &lt;code&gt;--password-stdin&lt;/code&gt; flag avoids exposing sensitive data in logs during &lt;code&gt;docker login&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;STAGE THREE : Committing Version Updates to Git Repository&lt;/strong&gt;
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Why Commit Version Changes?
&lt;/h4&gt;

&lt;p&gt;Now that we have built and pushed our images with the correct and latest version tags, committing the version bump ensures the repository reflects the current state of the application, keeping the development history consistent and traceable. This is crucial because:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Maintains Accurate State in the Repository&lt;/strong&gt;: The &lt;code&gt;package.json&lt;/code&gt; or equivalent file reflects the actual version of the deployed application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Avoids Conflicts&lt;/strong&gt;: Without this step, multiple contributors could unknowingly use the same old version number, leading to conflicts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Collaboration&lt;/strong&gt;: Other contributors always pull the latest version with updated dependencies, reducing confusion about which version is in production.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw37v4qepv0ezgf2w4lhn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw37v4qepv0ezgf2w4lhn.png" alt="Image description" width="800" height="281"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Again, for this stage I have set up GitHub access credentials so that the Jenkins user can commit the version bump and push the changes to the main branch.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Git Config&lt;/strong&gt;: Sets up Jenkins as the Git user.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Remote URL Update&lt;/strong&gt;: Includes the GitHub PAT (Personal Access Token) for secure authentication.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Commit and Push&lt;/strong&gt;: Tracks version changes and pushes them to the main branch.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
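&lt;p&gt;The three steps above can be sketched in shell. The author identity, file contents, and commit message here are assumptions for illustration (run in a throwaway repository), not the project's real values:&lt;/p&gt;

```shell
# Hedged sketch of the version-bump commit stage, in a throwaway repo
repo_dir=$(mktemp -d)
cd "$repo_dir"
git init -q .
git config user.name  "jenkins"
git config user.email "jenkins@example.com"

# Stand-in for the bumped package.json written by "npm version patch"
echo '{ "version": "1.0.4" }' | tee package.json
git add package.json
git commit -q -m "ci: bump version [skip ci]"
git log -1 --pretty=%s   # prints the commit message

# The real stage embeds the GitHub PAT in the remote URL before pushing:
#   git remote set-url origin "https://USER:TOKEN@github.com/YashPatil1609/MERN-CI-CD-Workflow.git"
#   git push origin HEAD:main
```

&lt;p&gt;A message marker like &lt;code&gt;[skip ci]&lt;/code&gt; (where the CI system supports it) keeps the version-bump commit from retriggering the pipeline in a loop.&lt;/p&gt;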

&lt;h3&gt;
  
  
  &lt;strong&gt;STAGE FOUR : Deploying the Application with the Latest Version to the Server&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;For deploying the application I have used an AWS EC2 instance, which acts as a deployment server in our pipeline workflow.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;There are a few prerequisites&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Add an SSH rule for the Jenkins server to the EC2 instance's security group.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add an inbound rule for port 5173, which is where we will access our application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Docker and Docker Compose must be installed on the instance before the pipeline runs.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I have used the sshagent plugin to SSH into the EC2 instance using &lt;strong&gt;“SSH Username with private key”&lt;/strong&gt; type credentials.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzwnwoe5fmqm9tmra5d8b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzwnwoe5fmqm9tmra5d8b.png" alt="Image description" width="800" height="39"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I have also used a startup script which is executed once the Jenkins user SSHes into the EC2 instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftboqa8tpjcbgs4gbd15c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftboqa8tpjcbgs4gbd15c.png" alt="Image description" width="521" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzb6w9p1fw70yjd1mfewz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzb6w9p1fw70yjd1mfewz.png" alt="Image description" width="800" height="287"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I am passing the &lt;code&gt;${env.FRONTEND_VERSION}&lt;/code&gt; and &lt;code&gt;${env.BACKEND_VERSION}&lt;/code&gt; variables as arguments to the script, which stores them as environment variables once we SSH into the EC2 instance; the docker-compose file references these variables for its image tags.&lt;/p&gt;
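&lt;p&gt;As a sketch, a compose file can pick up those variables in its image tags like this. The service and image names below are placeholders, not the project's real ones:&lt;/p&gt;

```yaml
# Sketch of a compose file that takes its image tags from the environment
# (service and image names are placeholders)
services:
  frontend:
    image: myuser/mern-frontend:${FRONTEND_VERSION}
    ports:
      - "5173:5173"
  backend:
    image: myuser/mern-backend:${BACKEND_VERSION}
```

&lt;p&gt;Docker Compose substitutes &lt;code&gt;${FRONTEND_VERSION}&lt;/code&gt; and &lt;code&gt;${BACKEND_VERSION}&lt;/code&gt; from the shell environment, so exporting them in the startup script is enough for &lt;code&gt;docker-compose up&lt;/code&gt; to pull the freshly pushed tags.&lt;/p&gt;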

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;First, we copy the script and the docker-compose file to the deployment server (the EC2 instance).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Then we instruct the pipeline to execute this script once it SSHes into the EC2 instance.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We have opted for a Pipeline job, to which we provide the Git repository to use, the credentials to access it, and the path to the pipeline script written in the Jenkinsfile.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqcoix7qhbftjlyfm01vt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqcoix7qhbftjlyfm01vt.png" alt="Image description" width="800" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj55bulp4vuveblj0k584.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj55bulp4vuveblj0k584.png" alt="Image description" width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  PIPELINE EXECUTION
&lt;/h3&gt;

&lt;p&gt;Now let’s execute the pipeline and follow the console output for each stage.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It checks out the latest codebase from the repository and branch configured for the build.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjivlug4oi6xrfggvd2oj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjivlug4oi6xrfggvd2oj.png" alt="Image description" width="800" height="459"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;It executes the first stage of our script and increments the versions of the services.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyh5g9tsctd1e6ni36clc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyh5g9tsctd1e6ni36clc.png" alt="Image description" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;It builds the images based on the latest version and code changes and pushes them to Docker Hub.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3v5xd6rm6lrbd3ewvbvq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3v5xd6rm6lrbd3ewvbvq.png" alt="Image description" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhsqskt44gmolu9nemxkc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhsqskt44gmolu9nemxkc.png" alt="Image description" width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2For9qy80r5934dus9xq07.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2For9qy80r5934dus9xq07.png" alt="Image description" width="800" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvyuu4r2369r2t06o4ebm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvyuu4r2369r2t06o4ebm.png" alt="Image description" width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;It commits the version updates to the Git repository.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhavgn8h7xblsbe5o6wyi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhavgn8h7xblsbe5o6wyi.png" alt="Image description" width="800" height="333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdxuxydnbqm7z32cswasm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdxuxydnbqm7z32cswasm.png" alt="Image description" width="800" height="175"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="5"&gt;
&lt;li&gt;As the final stage of the pipeline begins, it starts an SSH agent for the Jenkins user.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqvvo26xzk45fs3kqm8x9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqvvo26xzk45fs3kqm8x9.png" alt="Image description" width="800" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;6. Copies the startup-script and the docker-compose file to the deployment server.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwxxleraairy1akz5cim7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwxxleraairy1akz5cim7.png" alt="Image description" width="800" height="140"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flk43s6j0l06hppxoxat2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flk43s6j0l06hppxoxat2.png" alt="Image description" width="743" height="79"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;7. SSH into the server and executes the startup-script which starts up our application using docker-compose.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhthchn4vg5y6olnhmv3m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhthchn4vg5y6olnhmv3m.png" alt="Image description" width="800" height="361"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbwg7qang10cf5ltv99e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbwg7qang10cf5ltv99e.png" alt="Image description" width="800" height="98"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, our application is up and running on the deployment server, and the pipeline has executed successfully.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ox1dikdy21im7t64n23.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ox1dikdy21im7t64n23.png" alt="Image description" width="561" height="554"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, let’s access our application using the server’s public IP at port 5173.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdz674q77ir5z5w6lac0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdz674q77ir5z5w6lac0.png" alt="Image description" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0xkk35ujiar7kuw2lu6z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0xkk35ujiar7kuw2lu6z.png" alt="Image description" width="800" height="291"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4bey50bwwg0ntfdkdnz4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4bey50bwwg0ntfdkdnz4.png" alt="Image description" width="800" height="185"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And that’s it! This CI/CD pipeline demonstrates a practical approach to automating the development and deployment lifecycle of a MERN stack application. From incrementing application versions and building Docker images to committing changes and deploying on an EC2 instance, every stage reflects a real-world workflow followed in the industry. While this example focuses on a specific setup, using Docker for containerization and Jenkins for pipeline orchestration, the foundational concepts remain the same. Depending on project requirements, additional stages such as automated testing, security scans, or performance monitoring might be included, and deployment environments may vary, ranging from Kubernetes clusters to other cloud platforms. This flexibility and modularity make pipelines like this a cornerstone of modern DevOps practices.&lt;/p&gt;

</description>
      <category>jenkins</category>
      <category>docker</category>
      <category>cicd</category>
      <category>mern</category>
    </item>
    <item>
      <title>Setting Up an NGINX Reverse Proxy with a Node.js Cluster Using Docker</title>
      <dc:creator>Yash Patil</dc:creator>
      <pubDate>Fri, 10 Jan 2025 20:14:54 +0000</pubDate>
      <link>https://dev.to/yash_patil16/setting-up-an-nginx-reverse-proxy-with-a-nodejs-cluster-using-docker-4bln</link>
      <guid>https://dev.to/yash_patil16/setting-up-an-nginx-reverse-proxy-with-a-nodejs-cluster-using-docker-4bln</guid>
      <description>&lt;p&gt;In this blog, I’ll walk you through a project where I set up an NGINX server as a reverse proxy to handle requests for a Node.js application cluster. The setup uses Docker to containerize both the NGINX server and the Node.js applications, enabling seamless scaling and management. By the end of this, you'll understand why NGINX is an essential tool for modern web development and how to configure it for such use cases.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is NGINX?
&lt;/h3&gt;

&lt;p&gt;NGINX (pronounced "engine-x") is a high-performance web server and reverse proxy server. It is widely used for its speed, reliability, and ability to handle concurrent connections. Here are some of its key functionalities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Web Server:&lt;/strong&gt; NGINX can serve static files like HTML, CSS, and JavaScript with exceptional speed.&lt;/p&gt;

&lt;p&gt;The Apache web server also provides this functionality, but NGINX is favored for its high performance, low resource consumption, and ability to handle a large number of concurrent connections.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reverse Proxy:&lt;/strong&gt; It can forward incoming client requests to backend (upstream) servers and return the responses to the client, which improves scalability and security. How? The end user doesn’t send requests directly to the backend servers; instead, NGINX acts as a mediator and handles that exchange.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Load Balancing:&lt;/strong&gt; NGINX can distribute incoming traffic across multiple backend (upstream) servers using algorithms like round-robin or least connections, round-robin being the default.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6yu9129oijwemky57gvw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6yu9129oijwemky57gvw.png" alt="Image description" width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Kubernetes Ingress Controller:&lt;/strong&gt; NGINX is often used as an ingress controller in Kubernetes clusters. In this role, NGINX receives requests from a cloud load balancer and routes them to services inside the cluster. This ensures that the cluster remains secure and only the load balancer is exposed to the public.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Bonus: NGINX in Kubernetes
&lt;/h3&gt;

&lt;p&gt;When used as a Kubernetes ingress controller, NGINX takes on a similar role but within a cluster. Here’s how it works:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;A cloud load balancer forwards requests to the NGINX ingress controller.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The ingress controller routes the requests to the appropriate Kubernetes service.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The service forwards the requests to the pods (application instances).&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy092b4vhbrhqgg7y6kvm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy092b4vhbrhqgg7y6kvm.png" alt="Image description" width="741" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This setup ensures that the Kubernetes cluster remains secure, with only the cloud load balancer exposed to external traffic.&lt;/p&gt;

&lt;p&gt;In this project, we use NGINX as a reverse proxy and load balancer for a Node.js cluster serving a web page.&lt;/p&gt;




&lt;p&gt;Here is the &lt;a href="https://github.com/YashPatil1609/Docker-Projects/tree/main/Nginx-NodeJs-Application" rel="noopener noreferrer"&gt;&lt;strong&gt;GitHub&lt;/strong&gt;&lt;/a&gt; link for the project, which contains the source code, my custom NGINX configuration, and the Docker Compose file used to containerize the whole setup.&lt;/p&gt;

&lt;h3&gt;
  
  
  Project Overview
&lt;/h3&gt;

&lt;p&gt;The project consists of the following components:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;NGINX Server:&lt;/strong&gt; Listens on port &lt;code&gt;8080&lt;/code&gt; and forwards incoming HTTP requests to a Node.js cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Node.js Cluster:&lt;/strong&gt; Comprises three Docker containers, each running a Node.js application on port &lt;code&gt;3000&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Docker Compose:&lt;/strong&gt; Orchestrates the deployment of all containers.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here’s how the setup works:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;A client sends an HTTP request to the NGINX server on port &lt;code&gt;8080&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;NGINX, acting as a reverse proxy, forwards the request to one of the Node.js containers using a round-robin load-balancing strategy.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Node.js container processes the request and returns the response via NGINX.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
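&lt;p&gt;The round-robin choice in step 2 can be illustrated with a toy shell loop. This only mimics the selection order; it is not how NGINX itself is implemented:&lt;/p&gt;

```shell
# Simulate round-robin selection across the three app containers:
# each incoming request goes to the next server in the list, wrapping around.
servers=(app1 app2 app3)
for req in 1 2 3 4 5 6; do
  target=${servers[$(( (req - 1) % 3 ))]}
  echo "request $req -- handled by $target"
done
# request 1 -- handled by app1
# request 2 -- handled by app2
# request 3 -- handled by app3
# request 4 -- handled by app1 (and so on)
```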




&lt;h3&gt;
  
  
  Custom NGINX Configuration
&lt;/h3&gt;

&lt;p&gt;Below is the custom NGINX configuration file I wrote for this project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;worker_processes auto&lt;span class="p"&gt;;&lt;/span&gt;

events &lt;span class="o"&gt;{&lt;/span&gt;
    worker_connections 1024&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

http &lt;span class="o"&gt;{&lt;/span&gt;
    include mime.types&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="c"&gt;# Upstream block to define the Node.js backend servers&lt;/span&gt;
    upstream nodejs_cluster &lt;span class="o"&gt;{&lt;/span&gt;
        server  app1:3000&lt;span class="p"&gt;;&lt;/span&gt;
        server  app2:3000&lt;span class="p"&gt;;&lt;/span&gt;
        server  app3:3000&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

    server &lt;span class="o"&gt;{&lt;/span&gt;
        listen 8080&lt;span class="p"&gt;;&lt;/span&gt;  &lt;span class="c"&gt;# Listen on port 8080 for HTTP&lt;/span&gt;
        server_name localhost&lt;span class="p"&gt;;&lt;/span&gt;

        &lt;span class="c"&gt;# Proxying requests to the Node.js cluster&lt;/span&gt;
        location / &lt;span class="o"&gt;{&lt;/span&gt;
            proxy_pass http://nodejs_cluster&lt;span class="p"&gt;;&lt;/span&gt;
            proxy_set_header Host &lt;span class="nv"&gt;$host&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            proxy_set_header X-Real-IP &lt;span class="nv"&gt;$remote_addr&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;At the top level we have blocks like &lt;strong&gt;server, http, events,&lt;/strong&gt; etc., and inside these blocks we have directives that decide the behavior of the server.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;NGINX spawns &lt;code&gt;worker_processes&lt;/code&gt;, which do the work of receiving and processing requests from browsers. Each worker handles many concurrent requests in a single-threaded event loop, so the number of worker processes influences how well NGINX handles traffic and should be tuned to the server’s hardware and expected load. In production it is generally advised to set the number of worker processes equal to the number of CPU cores.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Then we set the &lt;code&gt;worker_connections&lt;/code&gt; directive in the events block, which configures how many concurrent connections each worker process can handle.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The main logic of the server is defined in the &lt;code&gt;http&lt;/code&gt; block. We could also configure NGINX to listen for requests over &lt;code&gt;HTTPS&lt;/code&gt; with &lt;code&gt;SSL&lt;/code&gt; encryption, but for this simple setup I have not done so. The http block defines the port on which NGINX handles user requests and where to forward them for particular domains or IP addresses.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;server&lt;/code&gt; block listens on port &lt;code&gt;8080&lt;/code&gt; and forwards requests to the upstream servers defined.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;upstream&lt;/code&gt; block defines the Node.js backend servers (three containers in our case). When NGINX acts as a reverse proxy, requests reaching the backend servers originate from NGINX, not directly from the client. As a result, backend servers see the IP address of the NGINX server as the source of the request.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Why have I written the upstream servers like this:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz9bzwp3ownp8ktcuglpa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz9bzwp3ownp8ktcuglpa.png" alt="Image description" width="260" height="140"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;well docker’s internal **DNS** service resolves app1, app2 and app3 to the nodejs containers services we created in the docker compose file.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;We also want to forward information from the original client requests. This provides useful info for logging purposes. The &lt;code&gt;proxy_pass&lt;/code&gt; directive sends requests to the upstream cluster, while headers like &lt;code&gt;Host&lt;/code&gt; and &lt;code&gt;X-Real-IP&lt;/code&gt; preserve client information.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Another important directive is &lt;code&gt;include mime.types;&lt;/code&gt;. When NGINX returns a response from the upstream servers, it can include the type of the file it is serving, which helps the browser process and render it.  &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
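&lt;p&gt;On the worker-process tuning mentioned above: &lt;code&gt;worker_processes auto&lt;/code&gt; already matches the core count for you, but if you prefer to set it by hand, you can check the number of cores first:&lt;/p&gt;

```shell
# Number of CPU cores currently online; getconf works on both Linux and macOS.
cores=$(getconf _NPROCESSORS_ONLN)
echo "CPU cores: $cores"
# In nginx.conf you would then set, e.g.:  worker_processes 4;
```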

&lt;p&gt;So that’s pretty much it about Nginx configuration and this will be foundational while setting up other projects as the logic pretty much remains the same.&lt;/p&gt;




&lt;h3&gt;
  
  
  Docker Compose File
&lt;/h3&gt;

&lt;p&gt;Here’s the &lt;code&gt;docker-compose.yml&lt;/code&gt; file that defines the entire setup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;version: &lt;span class="s1"&gt;'3'&lt;/span&gt;
services:
  app1:
    build: ./app
    environment:
      - &lt;span class="nv"&gt;APP_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;App1
    image: yashpatil16/nginx-app1:latest
    ports:
      - &lt;span class="s2"&gt;"3001:3000"&lt;/span&gt;

  app2:
    build: ./app
    environment:
      - &lt;span class="nv"&gt;APP_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;App2
    image: yashpatil16/nginx-app2:latest
    ports:
      - &lt;span class="s2"&gt;"3002:3000"&lt;/span&gt;

  app3:
    build: ./app
    environment:
      - &lt;span class="nv"&gt;APP_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;App3
    image: yashpatil16/nginx-app3:latest
    ports:
      - &lt;span class="s2"&gt;"3003:3000"&lt;/span&gt;

  nginx:
    build: ./app/nginx
    image: yashpatil16/nginx:nodejs-app
    ports:
      - &lt;span class="s2"&gt;"8080:8080"&lt;/span&gt;
    depends_on:
      - app1
      - app2
      - app3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;app1&lt;/code&gt;, &lt;code&gt;app2&lt;/code&gt;, and &lt;code&gt;app3&lt;/code&gt; services each build and run a Node.js application, exposing port &lt;code&gt;3000&lt;/code&gt; internally.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;nginx&lt;/code&gt; service builds the NGINX image, exposing port &lt;code&gt;8080&lt;/code&gt; to the host.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;depends_on&lt;/code&gt; directive ensures that the Node.js containers start before NGINX.  &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I have also written a custom &lt;strong&gt;DockerFile&lt;/strong&gt; for the Nginx server container to instruct it to use my configuration instead of the default configuration file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpopt85tq00rdl4pk84u4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpopt85tq00rdl4pk84u4.png" alt="Image description" width="530" height="397"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Running the Project
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Build and start the containers:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose up &lt;span class="nt"&gt;--build&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Access the application in your browser:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;http://localhost:8080
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;NGINX will forward the request to one of the Node.js containers and return the response.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl7j53h2pnq1u4dp2jnwr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl7j53h2pnq1u4dp2jnwr.png" alt="Image description" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;To verify the Round-Robin loadbalancing approach check the logs :
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5sgklz9ljt5270tns4ip.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5sgklz9ljt5270tns4ip.png" alt="Image description" width="800" height="415"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;As you can requests are served by different containers, in our case App1,2,3 as mentioned in the Docker Compose file.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Well, that’s pretty much it. We covered what NGINX is, what functionality it offers, and how to set up an NGINX server as a reverse proxy for our upstream servers. This was a simple setup, but for other projects the logic remains the same, with a few more configurations here and there.&lt;/p&gt;

&lt;p&gt;Connect with me on LinkedIn: &lt;a href="https://www.linkedin.com/in/yash-patil-24112a258/" rel="noopener noreferrer"&gt;&lt;strong&gt;https://www.linkedin.com/in/yash-patil-24112a258/&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>devops</category>
      <category>nginx</category>
      <category>containers</category>
    </item>
    <item>
      <title>Containerizing a MERN Stack Application!</title>
      <dc:creator>Yash Patil</dc:creator>
      <pubDate>Fri, 03 Jan 2025 20:06:30 +0000</pubDate>
      <link>https://dev.to/yash_patil16/containerizing-a-mern-stack-application-1pn1</link>
      <guid>https://dev.to/yash_patil16/containerizing-a-mern-stack-application-1pn1</guid>
      <description>&lt;p&gt;In this blog, I’ll walk you through a simple yet powerful project that demonstrates the MERN stack in action. We will go through what MERN is but also how it works together to create a robust application. Plus, we’ll explore how Docker helps containerize and orchestrate everything seamlessly.&lt;/p&gt;

&lt;p&gt;Here is the &lt;a href="https://github.com/YashPatil1609/Docker-Projects/tree/main/MERN-Dockerised" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; link for the project, which contains the source code and the Docker Compose file used to containerize the application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy1zljbfeh75966v4bb9i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy1zljbfeh75966v4bb9i.png" alt="Image description" width="800" height="231"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE: This application is an open-source project to showcase the process of containerizing existing applications.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before containerizing the application, let’s first understand what the MERN framework is, how it is structured, and how it works.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is MERN?
&lt;/h2&gt;

&lt;p&gt;MERN stands for &lt;strong&gt;MongoDB, Express.js, React, and Node.js&lt;/strong&gt;—a popular stack for building full-stack JavaScript applications. Here’s how each component contributes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;MongoDB&lt;/strong&gt;: A NoSQL database that stores data in JSON-like documents, and acts as the database layer.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Express.js&lt;/strong&gt;: A lightweight web framework for Node.js, used to build the application’s business logic, manage HTTP requests, and define API endpoints.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;React&lt;/strong&gt;: A frontend library for building dynamic and responsive user interfaces and client side logic. It fetches data from the backend via REST API or graphQL.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Node.js&lt;/strong&gt;: A JavaScript runtime environment that powers the backend, enabling JavaScript to run on the server. It acts a virtual server on top of our servers or host system on which the application might be hosted. Hence it hosts the application, allowing it to run and be accessed over the internet.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Together, MERN allows developers to write the entire application in JavaScript, streamlining the development process.&lt;/p&gt;

&lt;p&gt;For this basic MERN stack application, which lets us create, view, edit, and delete employee records, we have a frontend and a backend folder.&lt;br&gt;&lt;br&gt;
The backend of this MERN stack application is designed to handle data operations for managing employee records. Here’s a breakdown of how the &lt;code&gt;connection.js&lt;/code&gt;, &lt;code&gt;records.js&lt;/code&gt;, and &lt;code&gt;server.js&lt;/code&gt; files interact to make everything function seamlessly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxqw2aoo3owvqcg7qk1e7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxqw2aoo3owvqcg7qk1e7.png" alt="Image description" width="445" height="426"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;1. Database Connection:&lt;/strong&gt; &lt;code&gt;/db/connection.js&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;This file establishes a connection to the MongoDB database.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Client:&lt;/strong&gt; In this case, the database server is named &lt;code&gt;mongodb&lt;/code&gt; (as defined in the &lt;code&gt;docker-compose.yml&lt;/code&gt; file), and it listens on port 27017. This is the database connection string; it can also be read from an environment variable or a config file.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fia75y0j6tyi4dmw500b4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fia75y0j6tyi4dmw500b4.png" alt="Image description" width="402" height="196"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Database Reference&lt;/strong&gt;: The &lt;code&gt;employees&lt;/code&gt; database is referenced using &lt;code&gt;client.db("employees")&lt;/code&gt;, and this reference is exported as &lt;code&gt;db&lt;/code&gt;. This enables other parts of the application, such as API routes, to interact with the database.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz97mx98nuai5qlhod0nc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz97mx98nuai5qlhod0nc.png" alt="Image description" width="331" height="78"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  2. &lt;strong&gt;API Routes:&lt;/strong&gt; &lt;code&gt;records.js&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;records.js&lt;/code&gt; file defines a set of RESTful API endpoints for CRUD operations (Create, Read, Update, Delete). These routes use &lt;code&gt;express.Router&lt;/code&gt; to group and manage endpoints under the &lt;code&gt;/record&lt;/code&gt; path, handling requests for employee data through the &lt;code&gt;db&lt;/code&gt; object.&lt;/p&gt;
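&lt;p&gt;A minimal sketch of what such a router can look like (hypothetical handlers for illustration; the actual file in the repository defines more endpoints):&lt;/p&gt;

```javascript
// Sketch of routes/record.js: an Express Router grouping CRUD endpoints.
import express from "express";
import { ObjectId } from "mongodb";
import db from "../db/connection.js"; // the exported "employees" db reference

const router = express.Router();

// GET /record — list all employee records
router.get("/", async (req, res) => {
  const results = await db.collection("records").find({}).toArray();
  res.status(200).send(results);
});

// POST /record — create a new record from the JSON request body
router.post("/", async (req, res) => {
  const doc = {
    name: req.body.name,
    position: req.body.position,
    level: req.body.level,
  };
  const result = await db.collection("records").insertOne(doc);
  res.status(201).send(result);
});

// DELETE /record/:id — remove a record by its ObjectId
router.delete("/:id", async (req, res) => {
  const query = { _id: new ObjectId(req.params.id) };
  const result = await db.collection("records").deleteOne(query);
  res.status(200).send(result);
});

export default router;
```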
&lt;h3&gt;
  
  
  3. &lt;strong&gt;Server Initialization:&lt;/strong&gt; &lt;code&gt;server.js&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;server.js&lt;/code&gt; file is the entry point for the backend application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwkuo5e6ibybx02llsukn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwkuo5e6ibybx02llsukn.png" alt="Image description" width="656" height="464"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Express App&lt;/strong&gt;: An Express application is created to handle HTTP requests and responses.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Middleware&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;cors&lt;/code&gt;: Enables Cross-Origin Resource Sharing, allowing the frontend (on a different port) to communicate with the backend.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;express.json&lt;/code&gt;: Parses incoming JSON payloads in request bodies.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Route Mounting&lt;/strong&gt;: The &lt;code&gt;records&lt;/code&gt; routes are mounted at the &lt;code&gt;/record&lt;/code&gt; path. For example, a request to &lt;code&gt;/record&lt;/code&gt; is handled by the logic defined in &lt;code&gt;records.js&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Server Startup&lt;/strong&gt;: The app listens on port 5050 (or another port if specified in the &lt;code&gt;PORT&lt;/code&gt; environment variable). A message is logged to confirm the server is running.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
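&lt;p&gt;In code, &lt;code&gt;server.js&lt;/code&gt; boils down to something like this (a sketch matching the screenshot above):&lt;/p&gt;

```javascript
// Sketch of server.js: the backend entry point.
import express from "express";
import cors from "cors";
import records from "./routes/record.js";

const PORT = process.env.PORT || 5050; // default to 5050 if PORT is unset
const app = express();

app.use(cors());             // let the React frontend (port 5173) call the API
app.use(express.json());     // parse JSON payloads in request bodies
app.use("/record", records); // mount the CRUD routes under /record

app.listen(PORT, () => {
  console.log(`Server listening on port ${PORT}`);
});
```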

&lt;p&gt;So basically that sums up the backend logic of the application.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;When a request is made from the browser by the React.js frontend application, Node.js listens for the incoming HTTP request on the specified port.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It passes the request to Express.js, which matches the request to a defined route.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Express executes the corresponding logic, such as querying the database (CRUD operations) and preparing a response.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Node.js then takes the response produced by Express.js and sends it back to the browser, where the React.js frontend displays it to the user.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And that’s the actual workflow of a MERN stack application.&lt;/p&gt;
&lt;h2&gt;
  
  
  Containerizing the Application
&lt;/h2&gt;
&lt;h3&gt;
  
  
  What is Docker Compose?
&lt;/h3&gt;

&lt;p&gt;Docker Compose is a tool that simplifies the management of multi-container applications. Using a &lt;code&gt;docker-compose.yml&lt;/code&gt; file, you can define and configure services, networks, and volumes needed for your application. With a single command, Docker Compose can build, start, and orchestrate all the containers defined in the file.&lt;/p&gt;

&lt;p&gt;Now let’s containerize the application. The &lt;code&gt;docker-compose.yml&lt;/code&gt; file orchestrates the three services (frontend, backend, and MongoDB) so they work together seamlessly. I have created Dockerfiles for both the frontend and the backend, which we will run as separate containers, and we’ll create a MongoDB container using an image from Docker Hub.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;frontend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./mern/frontend&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;5173:5173"&lt;/span&gt;
    &lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;mern&lt;/span&gt;

  &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./mern/backend&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;5050:5050"&lt;/span&gt;
    &lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;mern&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;mongodb&lt;/span&gt;

  &lt;span class="na"&gt;mongodb&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mongo&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;27017:27017"&lt;/span&gt;
    &lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;mern&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;mongo-data:/data/db&lt;/span&gt;

&lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;mern&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;driver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bridge&lt;/span&gt;

&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;mongo-data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;driver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Frontend Service:
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- build: ./mern/frontend`: Specifies the Dockerfile for the frontend 
   application located in the `mern/frontend` directory.

- ports: "5173:5173"`: Maps port 5173 on the host to port 5173 in the container, making the React app accessible.

- networks: `mern`: Connects the service to the `mern` network for inter-service communication.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Backend Service:
&lt;/h2&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* `build: ./mern/backend`: Specifies the Dockerfile for the backend application located in the `mern/backend` directory.

* `ports: "5050:5050"`: Maps port 5050 on the host to port 5050 in the container for backend access.

* `depends_on: mongodb`: Ensures MongoDB starts before the backend.

* `networks: mern`: Connects the service to the `mern` network.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  MongoDB Service:
&lt;/h2&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* `image: mongo`: Uses the official MongoDB image to run the database.

* `ports: "27017:27017"`: Maps port 27017 on the host to port 27017 in the container for database access.

* `volumes: mongo-data:/data/db`: Mounts a Docker volume named `mongo-data` to `/data/db` inside the container for persistent storage. The volume is mounted to `/data/db` inside the MongoDB container because `/data/db` is the default directory used by MongoDB to store its database files.

* `networks: mern`: Connects the service to the `mern` network.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Networks:
&lt;/h2&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* `mern`: A custom bridge network allowing the frontend, backend, and MongoDB services to communicate.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Volumes:
&lt;/h2&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* `mongo-data`: A named volume to persist MongoDB data between container restarts. This volume is created under `/var/lib/Docker/volumes`
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now make sure you are in the directory where the &lt;code&gt;docker-compose.yml&lt;/code&gt; is present and use the following command to spin up all the containers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; For the first time it may take some time to build the images and run the containers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F760n8n2diqe4fxz4x9i0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F760n8n2diqe4fxz4x9i0.png" alt="Image description" width="800" height="76"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now access the frontend application at &lt;code&gt;localhost:5173&lt;/code&gt;, where our React frontend application is running.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyd864t0szmivxo0bsc3u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyd864t0szmivxo0bsc3u.png" alt="Image description" width="800" height="291"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fczx6ausz6jgvjbrbxlj6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fczx6ausz6jgvjbrbxlj6.png" alt="Image description" width="800" height="185"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now even if I stop and recreate all the containers, the records will still be there, since we mounted a named Docker volume into the MongoDB container. This gives us data persistence.&lt;/p&gt;

&lt;p&gt;Stop all the containers using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose down
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpvbux0vdj9i7tt41skgb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpvbux0vdj9i7tt41skgb.png" alt="Image description" width="800" height="157"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And that’s how easy it is to work with Docker Compose in a multi-container environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Frontend&lt;/strong&gt;: A React app running on port 5173, providing a user-friendly interface for managing employee records.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Backend&lt;/strong&gt;: A Node.js API on port 5050, processing business logic and interacting with MongoDB.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Database&lt;/strong&gt;: MongoDB, configured with a connection URL and persisting data via a Docker volume.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Docker Compose&lt;/strong&gt;: Orchestrates the entire stack, making it easy to spin up the application.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Going forward, the workflow for any MERN stack application is much the same, with a few tweaks here and there; the underlying logic remains unchanged.&lt;/p&gt;

&lt;p&gt;Always make sure to go through the application codebase and understand how the application works before proceeding; this makes it much easier to use tools like Docker and Docker Compose.&lt;/p&gt;

&lt;p&gt;Feel free to checkout my blog : &lt;a href="http://yashpatilofficial.hashnode.dev" rel="noopener noreferrer"&gt;yashpatilofficial.hashnode.dev&lt;/a&gt; &amp;amp; &lt;a href="https://dev.to/yash_patil16"&gt;dev.to/yash_patil16&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And connect with me at LinkedIn : &lt;a href="https://www.linkedin.com/in/yash-patil-24112a258/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/yash-patil-24112a258/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>devops</category>
      <category>node</category>
      <category>mongodb</category>
    </item>
    <item>
      <title>Containerizing a Django Web Application: Serving Static Pages with Docker</title>
      <dc:creator>Yash Patil</dc:creator>
      <pubDate>Fri, 03 Jan 2025 16:51:54 +0000</pubDate>
      <link>https://dev.to/yash_patil16/containerizing-a-django-web-application-serving-static-pages-with-docker-3fbp</link>
      <guid>https://dev.to/yash_patil16/containerizing-a-django-web-application-serving-static-pages-with-docker-3fbp</guid>
      <description>&lt;p&gt;In this blog post, we'll walk through the process of containerizing a Python Django web application that serves a simple static HTML page. We will cover the basics of the Django project structure, how a Django application works, and how we can use Docker to create an isolated environment for the application.&lt;/p&gt;

&lt;p&gt;Here is the &lt;a href="https://github.com/YashPatil1609/Docker-Projects/tree/main/DjangoApp-Containerised/python-web-app" rel="noopener noreferrer"&gt;Github&lt;/a&gt; link for the project which consists of the source code and the Dockerfile used to containerize the application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr4cjea4siylgmnmxj2sh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr4cjea4siylgmnmxj2sh.png" alt="Image description" width="800" height="254"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE: This application is an open-source project to showcase the process of containerizing existing applications.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before containerizing the application, let’s first understand what the Django framework is, how it is structured, and how it works.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Django?
&lt;/h3&gt;

&lt;p&gt;Django is a high-level Python web framework that promotes rapid development and clean, pragmatic design. It simplifies the creation of web applications by providing tools for common tasks like handling URLs, interacting with databases, and managing authentication.&lt;/p&gt;

&lt;h3&gt;
  
  
  Django Project Structure
&lt;/h3&gt;

&lt;p&gt;A typical Django project consists of several important components. Let's break down the structure of the project we will be working with.&lt;/p&gt;

&lt;p&gt;We use the &lt;code&gt;django-admin&lt;/code&gt; utility, provided by Django for administrative tasks, to create a new project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;django-admin startproject devops 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using the above command a folder structure for a new Django project is created. This command creates the main project folder (&lt;code&gt;devops&lt;/code&gt;), which contains the core settings and configurations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F03phfp0hvq06s1ago3n8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F03phfp0hvq06s1ago3n8.png" alt="Image description" width="343" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="http://manage.py" rel="noopener noreferrer"&gt;&lt;code&gt;manage.py&lt;/code&gt;&lt;/a&gt; is a command-line utility that helps you interact with your Django project. For example, you can run the development server with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python manage.py runserver
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command starts the Django development server, allowing you to preview your web app on &lt;code&gt;http://127.0.0.1:8000/&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Inside devops you will find :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuqriilauk3o3y162t06k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuqriilauk3o3y162t06k.png" alt="Image description" width="469" height="437"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;devops/&lt;/code&gt;&lt;a href="http://settings.py" rel="noopener noreferrer"&gt;&lt;code&gt;settings.py&lt;/code&gt;&lt;/a&gt;: Contains all project settings such as project configuration, middleware,database configuration, static files, etc.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;devops/&lt;/code&gt;&lt;a href="http://urls.py" rel="noopener noreferrer"&gt;&lt;code&gt;urls.py&lt;/code&gt;&lt;/a&gt;: The URL configuration file that maps URLs to views. It is responsible for serving the context route. So basically whoever tries to hit the context path mentioned in the file, it will be configured to serve the rendered frontend application.For our project, here's a simple &lt;a href="http://urls.py" rel="noopener noreferrer"&gt;&lt;code&gt;urls.py&lt;/code&gt;&lt;/a&gt; file that includes routes for the demo app and the Django admin interface:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F07akstkjlqz2o4j3adib.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F07akstkjlqz2o4j3adib.png" alt="Image description" width="430" height="116"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, in Django, an application is a modular component of your project; each application handles specific functionality. In our case, it serves a static web page.&lt;/p&gt;

&lt;p&gt;To create an app, which serves as the core of your project, use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python manage.py startapp demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates the &lt;code&gt;demo&lt;/code&gt; app folder. Inside this folder, you will find:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxczln101x9jucwruf5cx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxczln101x9jucwruf5cx.png" alt="Image description" width="598" height="634"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;demo/urls.py&lt;/code&gt;: Manages URLs for the &lt;code&gt;demo&lt;/code&gt; app, mapping them to views. We referenced this file in our base &lt;strong&gt;devops&lt;/strong&gt; project’s &lt;strong&gt;urls.py&lt;/strong&gt; file.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;demo/views.py&lt;/code&gt;: Contains the core logic of your application, including views that handle requests and return responses. In our case, it renders and serves our static web page.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzx4smp6s1ypomixa37pz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzx4smp6s1ypomixa37pz.png" alt="Image description" width="541" height="251"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, it renders and serves &lt;strong&gt;demo_site.html&lt;/strong&gt;, present under &lt;code&gt;demo/templates&lt;/code&gt;.&lt;/p&gt;
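&lt;p&gt;The view in the screenshot boils down to something like this (a sketch; the actual view name may differ):&lt;/p&gt;

```python
# Sketch of demo/views.py, based on the screenshot above.
from django.shortcuts import render

def index(request):
    # render() locates demo_site.html under demo/templates/
    # and returns the resulting HTML as an HttpResponse.
    return render(request, "demo_site.html")
```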

&lt;h3&gt;
  
  
  What Happens in the Browser?
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The user accesses localhost:8000/demo&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Django matches the URL to the view.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The view uses &lt;code&gt;render()&lt;/code&gt; to combine the &lt;code&gt;demo_site.html&lt;/code&gt; template and the context data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Django sends the final HTML page to the user's browser.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Dockerizing the Django Web App
&lt;/h3&gt;

&lt;p&gt;Now, let's move on to containerizing the Django project. We'll use Docker to encapsulate the application and its dependencies into a single container.&lt;/p&gt;

&lt;p&gt;Here’s a simple &lt;code&gt;Dockerfile&lt;/code&gt; I have written to build the container:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F12s49tlj77gme3ddmi1a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F12s49tlj77gme3ddmi1a.png" alt="Image description" width="757" height="458"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Breaking Down the Dockerfile
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;FROM python:3.14.0a3-alpine3.21&lt;/code&gt;: Specifies the base image for the container. We're using an Alpine-based Python image to keep the image small.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;WORKDIR /app&lt;/code&gt;: Sets the working directory in the container. So going forward every &lt;strong&gt;ENTRYPOINT&lt;/strong&gt; or &lt;strong&gt;CMD&lt;/strong&gt; instruction will be executed inside /app directory.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;COPY requirements.txt /app&lt;/code&gt;: Copies the &lt;code&gt;requirements.txt&lt;/code&gt; file, which lists the necessary Python dependencies, into the container.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;COPY devops /app&lt;/code&gt;: Copies the entire &lt;code&gt;devops&lt;/code&gt; directory (which includes the Django project) into the container.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;RUN pip install -r requirements.txt&lt;/code&gt; : Installs the required Python packages specified in &lt;code&gt;requirements.txt&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;RUN cd /devops&lt;/code&gt;: Intended to move into the main directory where our demo application and configuration files are present. (Note that a &lt;code&gt;cd&lt;/code&gt; in a &lt;code&gt;RUN&lt;/code&gt; instruction does not persist to later instructions or to the running container; &lt;code&gt;WORKDIR&lt;/code&gt; is the reliable way to change directories in a Dockerfile.)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;ENTRYPOINT ["python3"]&lt;/code&gt;: Specifies the default command to run when the container starts. We're using Python 3 to run the application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;CMD ["&lt;/code&gt;&lt;a href="http://manage.py" rel="noopener noreferrer"&gt;&lt;code&gt;manage.py&lt;/code&gt;&lt;/a&gt;&lt;code&gt;", "runserver", "0.0.0.0:8000"]&lt;/code&gt;: The default command that will run when the container starts. This runs the Django development server on all available network interfaces (&lt;code&gt;0.0.0.0&lt;/code&gt;) at port &lt;code&gt;8000&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Running the Container
&lt;/h3&gt;

&lt;p&gt;Once the &lt;code&gt;Dockerfile&lt;/code&gt; is ready, you can build and run the Docker container with the following commands:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Build the Docker image&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t web-app .
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I have already built the image:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fni984jhyejj5est7t24a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fni984jhyejj5est7t24a.png" alt="Image description" width="800" height="47"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Run the Docker container&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d -p 8000:8000 web-app:python
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates and runs a container from the &lt;code&gt;web-app&lt;/code&gt; image we built from the Dockerfile. The command maps port 8000 on our host to port 8000 in the container, where the Django development server is configured to run.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fesnfb6aef8y4x9eg6ph0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fesnfb6aef8y4x9eg6ph0.png" alt="Image description" width="800" height="73"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, the container is running in the background, which we specified using the &lt;strong&gt;-d flag.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And now let’s access our application in the browser.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftfzlx6jotzfc07kcw86f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftfzlx6jotzfc07kcw86f.png" alt="Image description" width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, we accessed &lt;a href="http://localhost:8000/demo/" rel="noopener noreferrer"&gt;http://localhost:8000/demo/&lt;/a&gt;, which was configured to serve our static web page.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;And there we go: we’ve containerized a simple Django web application that serves a static HTML page. We covered the basic project structure, how Django works, and how Docker helps us package and deploy the application in a self-contained environment. Next time, even for a complex three-tier Django project, the workflow and logic will be the same.&lt;/p&gt;

&lt;p&gt;Connect with me at LinkedIn : &lt;a href="https://www.linkedin.com/in/yash-patil-24112a258/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/yash-patil-24112a258/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>django</category>
      <category>containers</category>
      <category>devops</category>
    </item>
    <item>
      <title>Docker 101: A Guide to Docker Commands, Terminologies &amp; Dockerfile</title>
      <dc:creator>Yash Patil</dc:creator>
      <pubDate>Fri, 03 Jan 2025 12:27:34 +0000</pubDate>
      <link>https://dev.to/yash_patil16/docker-101-a-guide-to-docker-commands-terminologies-dockerfile-3502</link>
      <guid>https://dev.to/yash_patil16/docker-101-a-guide-to-docker-commands-terminologies-dockerfile-3502</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Docker is a powerful tool for creating, deploying, and running applications in lightweight, portable containers. Containers allow developers to package an application along with its dependencies, making it easy to move and run across different environments. If you're just getting started with Docker, this blog will guide you through the basics, essential commands and how to create a Dockerfile to build your first containerized application.&lt;/p&gt;

&lt;p&gt;To get a deeper understanding of Docker and its evolution, check out my &lt;a href="https://yashpatilofficial.hashnode.dev/exploring-docker-the-revolutionary-tool-for-modern-application-development" rel="noopener noreferrer"&gt;first article on Docker.&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Installation Guide
&lt;/h2&gt;

&lt;p&gt;Before diving into the basics of Docker, let's first ensure Docker is installed on your machine. Docker can be installed on Windows, macOS, and Linux. For the most up-to-date instructions and installation files, visit the official &lt;a href="https://docs.docker.com/engine/install/" rel="noopener noreferrer"&gt;Docker documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If we are using an EC2 instance, the user doesn’t have permission to execute Docker binaries by default. We have to add the user to the &lt;code&gt;docker&lt;/code&gt; group using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo usermod -aG docker &amp;lt;name of user on EC2&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Creating a Docker Image and a Docker Container
&lt;/h2&gt;

&lt;p&gt;First, check whether the Docker daemon is running on your system using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;systemctl status docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fag91abuyrh43f5v011lr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fag91abuyrh43f5v011lr.png" alt="Image description" width="800" height="203"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Make sure the status is active (running); otherwise there might be an issue with the installation.&lt;/p&gt;

&lt;p&gt;Now let’s go over the basic commands before diving deeper.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Check Docker version
docker --version

# Pull an image from Docker Hub
docker pull &amp;lt;image_name&amp;gt;

# List all containers (running or stopped)
docker ps -a

# Run a container from an image
docker run &amp;lt;image_name&amp;gt;

# Build an image from a Dockerfile
docker build -t &amp;lt;image_name&amp;gt; &amp;lt;path_to_dockerfile&amp;gt;

# List all images
docker images

# Stop a running container
docker stop &amp;lt;container_id&amp;gt;

# Remove a stopped container
docker rm &amp;lt;container_id&amp;gt;

# Remove an image
docker rmi &amp;lt;image_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Dockerfile
&lt;/h2&gt;

&lt;p&gt;Now let’s create our first image using a very basic Dockerfile for a simple Python script.&lt;/p&gt;

&lt;p&gt;The Python script simply prints a “Hello World!” message.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6yz4sy0ciy4t4jiz8tyx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6yz4sy0ciy4t4jiz8tyx.png" alt="Image description" width="800" height="51"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Make sure the Dockerfile and the application (in this case, the Python script) are in the same directory.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4sr9zuav7773i06072mw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4sr9zuav7773i06072mw.png" alt="Image description" width="800" height="52"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now let’s go over the Dockerfile. A Dockerfile is a template for creating images of our applications; it consists of instructions that define the application’s environment, dependencies, and how to run the application.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM python:3.14.0a3-alpine3.21
WORKDIR /app
COPY . /app
CMD ["python3", "app.py"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s go through each instruction in the Dockerfile:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;u&gt;FROM&lt;/u&gt;&lt;/strong&gt;: Sets the base image for your container. Here, we're using the &lt;code&gt;python:3.14.0a3-alpine3.21&lt;/code&gt; image, a lightweight Alpine-based image with Python 3.14.0a3 installed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;u&gt;WORKDIR&lt;/u&gt;&lt;/strong&gt;: Sets the working directory inside the container to &lt;code&gt;/app&lt;/code&gt;. All subsequent instructions are executed in this directory.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;u&gt;COPY&lt;/u&gt;&lt;/strong&gt;: Copies all the files from your local directory (where your Dockerfile resides) into the &lt;code&gt;/app&lt;/code&gt; directory inside the container.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;u&gt;CMD&lt;/u&gt;&lt;/strong&gt;: Specifies the default command to run when the container starts. Here, it tells Docker to run the &lt;code&gt;app.py&lt;/code&gt; script using Python 3.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
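&lt;p&gt;For reference, the &lt;code&gt;app.py&lt;/code&gt; that this Dockerfile runs could be as minimal as the sketch below; the article only states that the script prints a “Hello World!” message, so anything beyond that is an assumption:&lt;/p&gt;

```python
# app.py - minimal script executed by the Dockerfile's CMD
def main():
    # The message the container prints when it starts
    message = "Hello World!"
    print(message)
    return message

if __name__ == "__main__":
    main()
```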

&lt;h2&gt;
  
  
  Build the Docker image:
&lt;/h2&gt;

&lt;p&gt;With the Dockerfile and app.py in the same directory, open a terminal and run the following command to build the image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t python-hello-world .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will create a Docker image tagged as python-hello-world.&lt;/p&gt;

&lt;p&gt;Check for the built image using the “&lt;strong&gt;docker images&lt;/strong&gt;” command.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpbkx87u4bv9i8uwk0fnu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpbkx87u4bv9i8uwk0fnu.png" alt="Image description" width="800" height="35"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As we can see from the output an image tagged “&lt;strong&gt;python-hello-world&lt;/strong&gt;” has been created.&lt;/p&gt;

&lt;h2&gt;
  
  
  Run the container:
&lt;/h2&gt;

&lt;p&gt;After building the image, you can run the container with this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run python-hello-world
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy4o45tb8nz6uez7mlfce.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy4o45tb8nz6uez7mlfce.png" alt="Image description" width="800" height="36"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This confirms that your Python script is running inside a Docker container.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this blog, we've covered the basics of Docker: essential commands and how to write a Dockerfile. We also demonstrated how to use a simple Dockerfile to run a Python "Hello World" script inside a container. Docker makes it easier to manage and deploy applications by encapsulating everything needed to run them in a container.&lt;/p&gt;

&lt;p&gt;As you continue learning Docker, you’ll discover more advanced techniques and tools. Happy containerizing!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>docker</category>
      <category>containers</category>
      <category>dockerfile</category>
    </item>
    <item>
      <title>Exploring Docker: The Revolutionary Tool for Modern Application Development</title>
      <dc:creator>Yash Patil</dc:creator>
      <pubDate>Fri, 03 Jan 2025 11:02:27 +0000</pubDate>
      <link>https://dev.to/yash_patil16/exploring-docker-the-revolutionary-tool-for-modern-application-development-10l0</link>
      <guid>https://dev.to/yash_patil16/exploring-docker-the-revolutionary-tool-for-modern-application-development-10l0</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In today’s fast-paced tech landscape, developers and organizations are on a constant quest to make applications faster, more efficient, and easier to manage. One tool that has been instrumental in transforming the way we build, ship, and deploy software is Docker. Let’s delve into Docker’s journey, its architecture, and the reasons why it’s a game-changer for modern development.&lt;/p&gt;

&lt;h2&gt;
  
  
  Before Microservices: The Era of Monolithic Architecture
&lt;/h2&gt;

&lt;p&gt;Before microservices entered the picture, applications were often built as monolithic entities. In a monolithic architecture, every component of an application—from the user interface to the backend logic and database—was bundled together in a single codebase. While this approach was straightforward and easy to deploy initially, it came with significant challenges:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;u&gt;Scalability Issues&lt;/u&gt;&lt;/strong&gt;: Scaling specific features independently was impossible since the entire application had to scale as a whole.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;u&gt;Maintenance Challenges&lt;/u&gt;&lt;/strong&gt;: Updating or fixing a bug in one part of the application risked breaking the entire system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;u&gt;Long Deployment Cycles&lt;/u&gt;&lt;/strong&gt;: Even minor updates required thorough testing and redeployment of the entire application.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhmki1z5zriins7awcur.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhmki1z5zriins7awcur.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These limitations highlighted the need for a more modular approach to application design, paving the way for microservices.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Shift to Microservices and Docker’s Role
&lt;/h2&gt;

&lt;p&gt;Microservices architecture emerged as a solution to the challenges posed by monolithic systems. In this approach, an application is broken down into smaller, independently deployable services. Each service is responsible for a specific business function and communicates with others through APIs.&lt;/p&gt;

&lt;p&gt;While microservices solved many issues, they introduced complexities in deployment, communication, and scaling. This is where Docker revolutionized the game. Docker provides a containerization platform that packages an application and its dependencies into a lightweight, portable container. These containers run consistently across different environments, making them ideal for microservices.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5pl5cywh7uy58xaceu3k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5pl5cywh7uy58xaceu3k.png" alt="Image description" width="800" height="458"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem Docker Solves
&lt;/h2&gt;

&lt;p&gt;Traditionally, applications were hosted and deployed directly on servers, often using hardware-level virtualization through virtual machines (VMs). A VM claims hardware resources from the bare-metal server, allocated to it by the hypervisor.&lt;/p&gt;

&lt;p&gt;A virtual machine holds these resources regardless of usage, and once allocated, they cannot be used by any other virtual machine.&lt;/p&gt;

&lt;p&gt;Docker introduced OS-level virtualization, where containers share the host OS kernel but operate in isolated user spaces. This approach lets the host and the Docker Engine allocate hardware resources to containers dynamically, so no container permanently holds the host system’s compute resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fewjk65sg5gkpo7hqscay.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fewjk65sg5gkpo7hqscay.png" alt="Image description" width="800" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How Docker Came Into Being
&lt;/h2&gt;

&lt;p&gt;Docker’s journey began with a company called dotCloud, a platform-as-a-service (PaaS) provider. In 2013, Solomon Hykes and Sebastian Pahl unveiled Docker as an open-source project. The concept of packaging applications in containers wasn’t new, but Docker simplified it to a level that made it accessible and powerful. Its initial release gained rapid adoption, and Docker Inc. soon pivoted to focus solely on container technology.&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker Architecture
&lt;/h2&gt;

&lt;p&gt;Docker’s architecture is designed to simplify containerization. Here’s an overview of its core components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;u&gt;Images&lt;/u&gt;:&lt;/strong&gt; Immutable snapshots of an application, its dependencies and its environment, created from Dockerfiles. These are templates used to create containers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;u&gt;Containers&lt;/u&gt;:&lt;/strong&gt; Lightweight, standalone executables built from Docker images. Containers include everything needed to run an application (the application code, system dependencies, libraries, etc.).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;u&gt;Dockerfile&lt;/u&gt;:&lt;/strong&gt; A text file that contains instructions to build a Docker image. It defines the application’s environment, dependencies, and how to access the application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;u&gt;Docker Engine/Daemon&lt;/u&gt;:&lt;/strong&gt; The runtime that builds, runs, and manages containers. It includes a server (the daemon), a REST API, and a command-line interface (CLI). It runs on the host OS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;u&gt;Docker Client&lt;/u&gt;:&lt;/strong&gt; Users interact with the Docker engine/daemon through the CLI. The client uses commands and the REST API to communicate with the Docker Engine/daemon.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;u&gt;Docker Hub&lt;/u&gt;:&lt;/strong&gt; A public registry for sharing and storing Docker images. Developers can pull official or community images or push their own.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;u&gt;Ecosystem&lt;/u&gt;:&lt;/strong&gt; Includes tools like Docker Compose (for managing multi-container applications) and Docker Swarm (for orchestration).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fupm8be0m24zc10mfjh3y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fupm8be0m24zc10mfjh3y.png" alt="Image description" width="800" height="305"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Advantages of Docker
&lt;/h2&gt;

&lt;p&gt;Docker has transformed application development and deployment by offering several key advantages:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Consistency Across Environments&lt;/u&gt;:&lt;/strong&gt; “It works on my machine” is no longer a problem, since Docker packages an application’s dependencies along with it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Efficiency&lt;/u&gt;:&lt;/strong&gt; Containers are lightweight compared to VMs, leading to better resource utilization.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Speed&lt;/u&gt;:&lt;/strong&gt; Faster builds, startups, and deployments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Portability&lt;/u&gt;:&lt;/strong&gt; Containers can run on any system with Docker installed, from local machines to cloud environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Scalability&lt;/u&gt;:&lt;/strong&gt; Containers make it easy to scale applications horizontally by spinning up additional instances.&lt;/p&gt;

&lt;h2&gt;
  
  
  Disadvantages of Docker
&lt;/h2&gt;

&lt;p&gt;While Docker is a powerful tool, it’s not without its drawbacks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;It is difficult to manage large numbers of containers without an orchestration tool (e.g., Kubernetes).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Containers share the host OS kernel, which can lead to vulnerabilities if not managed properly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Docker does not provide cross-platform compatibility: because containers share the host OS kernel, a Windows container can’t run on a Linux kernel and vice versa, since the required system calls, libraries, and APIs differ between operating systems.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;While lightweight, Docker containers still introduce some overhead compared to bare-metal systems.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Docker has undeniably transformed the way we develop, deploy, and scale applications. By addressing the limitations of traditional deployment methods and simplifying the complexities of microservices, Docker has become an indispensable tool for modern developers. Whether you’re building your first containerized app or managing a fleet of microservices, Docker offers a robust foundation to achieve your goals.&lt;/p&gt;

&lt;p&gt;So, as you embark on your journey with Docker, remember: it’s not just about containers; it’s about redefining possibilities in application development.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>containers</category>
      <category>devops</category>
      <category>microservices</category>
    </item>
  </channel>
</rss>
