<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Atharva Shinde</title>
    <description>The latest articles on DEV Community by Atharva Shinde (@atharvaa).</description>
    <link>https://dev.to/atharvaa</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F759053%2F0f4b92fa-0692-40df-9839-ce071c5601bd.jpeg</url>
      <title>DEV Community: Atharva Shinde</title>
      <link>https://dev.to/atharvaa</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/atharvaa"/>
    <language>en</language>
    <item>
      <title>Chapter: Jenkins</title>
      <dc:creator>Atharva Shinde</dc:creator>
      <pubDate>Tue, 08 Feb 2022 11:59:05 +0000</pubDate>
      <link>https://dev.to/atharvaa/chapter-jenkins-3hdp</link>
      <guid>https://dev.to/atharvaa/chapter-jenkins-3hdp</guid>
      <description>&lt;p&gt;Want to get notifications instantly when your code breaks? Need some tool to integrate automation for your application or software? Need a wide range of plugins to support your build, test and deployment process? Easy to configure and Free? And have an amazing community around itself? Solution: Jenkins.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why is Jenkins needed?
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dsIaA_eo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1644314049515/mQ0-KP8TZ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dsIaA_eo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1644314049515/mQ0-KP8TZ.png" alt="Traditional Software Delivery Process" width="880" height="901"&gt;&lt;/a&gt;&lt;br&gt;
The diagram above depicts, roughly, how the development cycle worked traditionally. Consider your team working on new functionality for an application that you want deployed in the upcoming release. First you push your code to a version control system like GitHub, Bitbucket or your company's private VCS. All the manual work ends here. &lt;/p&gt;

&lt;p&gt;Now the integration and delivery process starts. The problem is that building the software takes a considerable amount of time, so development teams adopted a nightly-build convention, scheduling all builds at night. But "nightly" is relative to time zones: if a company schedules its build for 12:30 AM EST, that is 11:00 AM IST and 3:30 PM AEST, so developers in those time zones had to commit their code for the build during their working hours, which hurt their productivity. The test cases were also limited and did not have wide coverage. You had to wait until your code was built and pushed to the test server to learn whether it passed the predefined tests, and if the build or tests failed, the source code was handed back to your team.&lt;/p&gt;

&lt;p&gt;And when the code broke and the tests failed, there was no reliable way to pinpoint which team, let alone which developer, wrote the offending piece of code. &lt;/p&gt;

&lt;p&gt;Clearly, this whole process is not an optimal way to deliver software and make it production-ready.&lt;/p&gt;

&lt;h3&gt;
  
  
  Jenkins
&lt;/h3&gt;

&lt;p&gt;Jenkins is a versatile open-source automation server, written entirely in Java, for automating the workflow of building, testing and deploying software. Jenkins turned the traditional software delivery cycle into a fast, extensible and customisable CI/CD process. With the help of Jenkins it's possible to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Get instant build notifications every time your pushed code is picked up for a build.&lt;/li&gt;
&lt;li&gt;Run more than one build in parallel to reduce your build time.&lt;/li&gt;
&lt;li&gt;Test the built code against a wide range of test combinations for maximum coverage, with statistical data on how the code performed for each test input.&lt;/li&gt;
&lt;li&gt;Get instant result notifications after every test session.&lt;/li&gt;
&lt;li&gt;Create roles for each responsibility and control what each role can read and/or write.&lt;/li&gt;
&lt;li&gt;Build your own set of integration tools using the wide range of plugins available at &lt;a href="https://plugins.jenkins.io"&gt;https://plugins.jenkins.io&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Fun Fact: We can integrate Jenkins with popular cloud platforms including AWS, VMware, Google Cloud Platform, Azure, IBM Cloud, DigitalOcean and more.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is how Jenkins turned software delivery into a frictionless CI/CD process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ctfn9k0F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1644314092880/IqeJpCUg7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ctfn9k0F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1644314092880/IqeJpCUg7.png" alt="Software Delivery Process using Jenkins" width="880" height="1013"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Installing Jenkins
&lt;/h3&gt;

&lt;p&gt;To install Jenkins on your local setup, read the &lt;a href="https://www.jenkins.io/download/"&gt;Jenkins official download guide&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Installing Jenkins as a Docker container
&lt;/h4&gt;

&lt;p&gt;To pull the official Jenkins LTS image and install it as a container, execute:&lt;br&gt;
&lt;code&gt;docker pull jenkins/jenkins:lts-jdk11&lt;/code&gt;&lt;/p&gt;
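&lt;p&gt;Once the image is pulled, you can start a container from it. Here is a minimal sketch; the container name, port mappings and volume name are illustrative defaults, not prescribed by this article (8080 is the Jenkins web UI port, 50000 the inbound-agent port):&lt;/p&gt;

```shell
# Compose the docker run command: a detached container named "jenkins",
# web UI on 8080, inbound agents on 50000, and a named volume so that
# Jenkins configuration survives container restarts.
JENKINS_IMAGE="jenkins/jenkins:lts-jdk11"
RUN_CMD="docker run -d --name jenkins -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home $JENKINS_IMAGE"
# Preview the command; drop the echo (and run $RUN_CMD) to actually start it.
echo "$RUN_CMD"
```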

&lt;h3&gt;
  
  
  Architecture of Jenkins Model
&lt;/h3&gt;

&lt;p&gt;Here's a high-level view of the Jenkins application:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aTrkar8A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1643804112483/zvV-b8Ma4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aTrkar8A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1643804112483/zvV-b8Ma4.png" alt="Jenkins Model" width="880" height="591"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Components of Jenkins
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Plugins&lt;/li&gt;
&lt;li&gt;Jenkins Pipeline&lt;/li&gt;
&lt;li&gt;JenkinsFile&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Plugins
&lt;/h3&gt;

&lt;p&gt;A plugin is an extension that helps streamline the process of building, testing or deploying software. There are 1800+ plugins available, built for purposes such as analysing or testing the codebase, sending custom notifications of test or build results to developers, integrating Git with Jenkins, and many more. We can install multiple plugins on top of our local Jenkins environment to add these different functionalities and increase the efficiency of the CI/CD process. These plugins are built by the developers of the Jenkins open-source community, and you can browse all of them and their documentation on the &lt;a href="https://plugins.jenkins.io/"&gt;Jenkins Plugins website&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;A few examples of plugins:&lt;br&gt;
GitHub Groovy Library, GitHub Branch Source Plugin, SSH Build Agents, LDAP, Credentials Binding Plugin, OWASP Markup Formatter, PAM Authentication.&lt;/p&gt;

&lt;p&gt;Plugins can be installed in two ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;** "Plugin Manager" in the web User Interface. **&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6Q7zIqUP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oagvx6ggyuq380gb1zlk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6Q7zIqUP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oagvx6ggyuq380gb1zlk.png" alt="Manage Plugin through Web UI" width="880" height="452"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Jenkins Command Line Interface.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;java -jar jenkins-cli.jar -s http://localhost:8080/ install-plugin SOURCE ... [-deploy] [-name VAL] [-restart]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command installs a plugin from a file, a URL, or the update center.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;SOURCE&lt;/code&gt;: If this points to a local file, that file will be installed. If this is a URL, Jenkins downloads it and installs it as a plugin.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-deploy&lt;/code&gt;: Deploy plugins right away without postponing them until the reboot.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-name VAL&lt;/code&gt; : If specified, the plugin will be installed as this short name (whereas normally the name is inferred from the source name automatically).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-restart&lt;/code&gt;  : Restart Jenkins upon successful installation.&lt;/li&gt;
&lt;/ul&gt;
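&lt;p&gt;As a concrete sketch, here is a plausible invocation that installs the Git plugin by its short name from the update centre and restarts Jenkins afterwards (the localhost URL assumes a default local Jenkins instance):&lt;/p&gt;

```shell
# Illustrative only: "git" is the Git plugin's short name; -restart reboots
# Jenkins once the installation succeeds.
CLI_CMD="java -jar jenkins-cli.jar -s http://localhost:8080/ install-plugin git -restart"
# Preview the command; run it against a live Jenkins to actually install.
echo "$CLI_CMD"
```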

&lt;h3&gt;
  
  
  Jenkins Pipeline
&lt;/h3&gt;

&lt;p&gt;Jenkins Pipeline is a suite of plugins for defining your software's integration and delivery workflow as code, declared inside a file called a JenkinsFile.&lt;/p&gt;

&lt;h3&gt;
  
  
  JenkinsFile
&lt;/h3&gt;

&lt;p&gt;To define a Jenkins Pipeline we write a set of steps inside a text file called a &lt;code&gt;JenkinsFile&lt;/code&gt;. The JenkinsFile thus implements "Pipeline as Code". A project using Jenkins Pipeline defines its JenkinsFile inside the project repository. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Dz1ii8A8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tiqyrk40m7o6skykucnz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Dz1ii8A8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tiqyrk40m7o6skykucnz.png" alt="JenkinsFile" width="880" height="684"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can build our Pipeline's JenkinsFile through any of the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Jenkins User Interface&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--r-fRE4Jr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1643806568720/qyFHQa9rj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--r-fRE4Jr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1643806568720/qyFHQa9rj.png" alt="Jenkins User Interface" width="880" height="536"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;or&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;BlueOcean User Interface &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XHh-QiSC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1643806377667/DCLcMwY6i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XHh-QiSC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1643806377667/DCLcMwY6i.png" alt="BlueOcean User Interface " width="880" height="295"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;or&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Using Source Code Management (SCM)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jNd9tces--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1643807322190/JXZBsLRTXw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jNd9tces--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1643807322190/JXZBsLRTXw.png" alt="Using Source Code Management (SCM)" width="880" height="590"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To write the JenkinsFile we can use either of two syntaxes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Scripted Pipeline (written in &lt;a href="http://groovy-lang.org/semantics.html"&gt;Groovy&lt;/a&gt;; flexible, but with a steep learning curve)&lt;/li&gt;
&lt;li&gt;Declarative Pipeline (a more recent addition; easier to learn, but less powerful/flexible)&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Scripted Pipeline:&lt;/strong&gt;
These pipelines are initiated with the directive &lt;code&gt;node&lt;/code&gt;.
Here's an example structure of a Scripted Pipeline:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;node{
    stage('Build'){
        try{
        }
        catch(e){
        }
    }
    stage('Test'){
        if(condition){
        }
        else{
        }
    }
    stage('Deploy'){
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Declarative Pipeline:&lt;/strong&gt;
These pipelines are initiated with the &lt;code&gt;pipeline&lt;/code&gt; block.
Here's an example structure combining parallel and sequential stages in a Declarative Pipeline:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipeline{
    agent any
    stages{
        stage('Build'){
            steps{
            }
        }
        stage('TestAndDeploy'){
            parallel{
                stage('Test'){
                    steps{
                    }
                }
                stage('Deploy'){
                    steps{
                    }
                }
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the pipeline above, stage('Build') executes first, and then stage('Test') and stage('Deploy') run in parallel because of the &lt;code&gt;parallel&lt;/code&gt; directive.&lt;/p&gt;

&lt;p&gt;Just like &lt;code&gt;parallel&lt;/code&gt; there are many other keywords available, each with its own functionality; these keywords are called Directives. A few of them:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;when&lt;/code&gt; - Lets the Pipeline decide whether a stage should be executed, depending on the given condition.
Example: &lt;code&gt;when { branch 'main' }&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;stage&lt;/code&gt; - Groups one or more steps and directives that together define a particular phase of the development cycle.
Example: in &lt;code&gt;stage('Test'){...}&lt;/code&gt; the stage 'Test' would contain the set of instructions that run every time code is sent for testing.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;tools&lt;/code&gt; - Defines tools for the Jenkins server to auto-install.&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Wrap Up
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Alternative to Jenkins
&lt;/h4&gt;

&lt;p&gt;Here are a few alternative CI/CD tools to Jenkins, each with its own advantages and disadvantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CircleCI&lt;/li&gt;
&lt;li&gt;GitLab&lt;/li&gt;
&lt;li&gt;Travis CI &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;Reference&lt;/code&gt;: Jenkins Documentation&lt;/p&gt;

&lt;h4&gt;
  
  
  What next?
&lt;/h4&gt;

&lt;p&gt;Now that you are aware of Jenkins, as a next step you might want to containerise your application. Containerising your application helps you ship it to different systems without worrying about dependencies, scale it, and even deploy it faster. I have written a detailed "&lt;a href="https://dev.to/atharvaa/chapter-a-guide-to-docker-36mj"&gt;Guide to Docker&lt;/a&gt;", so check that out!&lt;/p&gt;




&lt;p&gt;&lt;code&gt;Thank you for taking time to read my article :)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Do connect with me on- &lt;br&gt;
&lt;a href="https://twitter.com/atharvashinde_"&gt;Twitter&lt;/a&gt;, &lt;br&gt;
&lt;a href="https://github.com/Atharva-Shinde"&gt;GitHub&lt;/a&gt;, and&lt;br&gt;
&lt;a href="https://www.linkedin.com/in/atharva-shinde-6468b4205/"&gt;LinkedIn&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>jenkins</category>
      <category>java</category>
      <category>docker</category>
    </item>
    <item>
      <title>Chapter: A guide to Docker</title>
      <dc:creator>Atharva Shinde</dc:creator>
      <pubDate>Wed, 01 Dec 2021 19:49:53 +0000</pubDate>
      <link>https://dev.to/atharvaa/chapter-a-guide-to-docker-36mj</link>
      <guid>https://dev.to/atharvaa/chapter-a-guide-to-docker-36mj</guid>
      <description>&lt;h4&gt;
  
  
  What is Docker?
&lt;/h4&gt;

&lt;p&gt;Docker is a platform that allows developers to containerise, build, test, ship and deploy applications much faster across multiple environments, and to deliver production-ready applications. Docker wraps your application inside an abstraction called a container, which makes your development workflow quicker and easier.&lt;/p&gt;

&lt;h4&gt;
  
  
  Why Docker?
&lt;/h4&gt;

&lt;p&gt;If you need a streamlined, fast, lightweight, efficient and orchestrated way to run, scale, deploy or test your application on one or more virtual or physical systems, Docker is your solution.&lt;br&gt;
Docker being lightweight and portable makes it a cost-effective alternative to hypervisor virtual machines.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.docker.com%2Fsites%2Fdefault%2Ffiles%2Fd8%2F2018-11%2Fdocker-containerized-and-vm-transparent-bg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.docker.com%2Fsites%2Fdefault%2Ffiles%2Fd8%2F2018-11%2Fdocker-containerized-and-vm-transparent-bg.png" alt="Docker and hypervisors"&gt;&lt;/a&gt;&lt;br&gt;
With its simple CLI commands we can easily build, delete, deploy and manage container images on our machines. Moreover, containers are highly portable: they can run on local systems, virtual machines, cloud providers, or even hybrid environments without any worry about setting up libraries and other configuration files.&lt;/p&gt;
&lt;h4&gt;
  
  
  Docker Architecture
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdocs.docker.com%2Fengine%2Fimages%2Farchitecture.svg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdocs.docker.com%2Fengine%2Fimages%2Farchitecture.svg" alt="Docker Architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Docker implements a client-server architecture. So what is a client-server architecture? Consider online shopping: you (the client) order a product from the shopping app, and the service provider (the server) processes the order, takes the necessary actions and ships it to your address. Similarly, in a client-server architecture the client requests a service (here, through the Docker CLI) and the service provider (here, the Docker daemon) works on the request and delivers the desired result.&lt;br&gt;
Referring to the diagram above: when the user wants to create a container from an installed image and enters the &lt;code&gt;docker run&lt;/code&gt; command, the Docker daemon responds to the request and gives the user a configured container along with its metadata.&lt;/p&gt;
&lt;h3&gt;
  
  
  What is Docker Daemon?
&lt;/h3&gt;

&lt;p&gt;A daemon is a program that runs continuously in the background, responding to particular requests or events without any direct user control, meaning the user cannot change the nature of the program. Daemon process names are usually suffixed with 'd', for example: &lt;em&gt;sshd, mysqld, dockerd.&lt;/em&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fexternal-content.duckduckgo.com%2Fiu%2F%3Fu%3Dhttps%253A%252F%252Fwww.oreilly.com%252Flibrary%252Fview%252Fcontinuous-delivery-with%252F9781787125230%252Fassets%252Fcadc3363-6814-489b-a770-58dd9ead6f56.png%26f%3D1%26nofb%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fexternal-content.duckduckgo.com%2Fiu%2F%3Fu%3Dhttps%253A%252F%252Fwww.oreilly.com%252Flibrary%252Fview%252Fcontinuous-delivery-with%252F9781787125230%252Fassets%252Fcadc3363-6814-489b-a770-58dd9ead6f56.png%26f%3D1%26nofb%3D1" alt="Docker Workflow"&gt;&lt;/a&gt;&lt;br&gt;
The Docker daemon listens for API requests from the Docker client and manages Docker objects like images, containers, volumes and networks.&lt;/p&gt;
&lt;h4&gt;
  
  
  Docker Client
&lt;/h4&gt;

&lt;p&gt;The Docker client is the primary way to interact with Docker. When we run a Docker command in the terminal, the client sends the request to the Docker daemon, which carries it out.&lt;/p&gt;
&lt;h3&gt;
  
  
  Docker Image
&lt;/h3&gt;

&lt;p&gt;A Docker image is built of stacked read-only layers generated from the instructions inside the image's Dockerfile; each layer represents an instruction from that file. Pre-built images are available in container registries, which are places to store and download images. One popular public registry is Docker Hub; some companies also run private registries for their own images.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# to install an image in your system
docker pull &amp;lt;image-name:version&amp;gt;
# example docker pull ubuntu:20.04

# to see if the image is installed
docker image ls
# or
docker images

# to remove an image
docker image rm &amp;lt;image-name or image-id&amp;gt;
# if you get an error like "daemon: conflict: unable to delete", a container
# still uses the image; delete that container first (more on containers below)

# to check the layers of an image
docker image inspect &amp;lt;image-name or image-id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Dockerfile
&lt;/h3&gt;

&lt;p&gt;To build your customised Docker image, create a file named &lt;code&gt;Dockerfile&lt;/code&gt; (by convention it has no extension). A new image layer is stacked for each instruction you define inside the Dockerfile, although not every instruction creates a layer.&lt;/p&gt;

&lt;p&gt;Following is an example Dockerfile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# this is a comment
FROM ubuntu:18.04
LABEL org.opencontainers.image.authors="org@example.com"
COPY . /app
RUN make /app
RUN rm -r $HOME/.cache
CMD python /app/app.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  .dockerignore
&lt;/h3&gt;

&lt;p&gt;If you have worked with Git before, you may have used a .gitignore file to exclude .env files, dependency directories, temporary files, etc. Similarly, Docker has a &lt;code&gt;.dockerignore&lt;/code&gt; file to exclude files that are not relevant to the build.&lt;br&gt;
Before the build context from the Docker client reaches the Docker daemon, it is filtered through the &lt;code&gt;.dockerignore&lt;/code&gt; file, which drops any matching files, thus preventing large or sensitive files from reaching the daemon.&lt;/p&gt;
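&lt;p&gt;As a sketch, a .dockerignore might look like this (the entries are illustrative, mirroring what a typical .gitignore excludes):&lt;/p&gt;

```shell
# Write an illustrative .dockerignore: each pattern keeps matching files
# out of the build context that is sent to the Docker daemon.
cat > .dockerignore <<'EOF'
node_modules/
.env
*.log
.git
EOF
```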
&lt;h3&gt;
  
  
  Docker Containers
&lt;/h3&gt;

&lt;p&gt;Containers are isolated processes that run on a single host machine. A container consists of the packages and dependencies your application needs to run. We can create, start, stop, delete, move and modify containers; all changes happen inside a thin writable layer known as the 'container layer', built on top of the immutable, read-only image layers. Containers are therefore running instances of an image. Each container has its own binaries, dependencies and container layer, and being an isolated process makes it fast, lightweight and efficient to work with.&lt;br&gt;
Below is an example of the image-layer structure:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdocs.docker.com%2Fstorage%2Fstoragedriver%2Fimages%2Fcontainer-layers.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdocs.docker.com%2Fstorage%2Fstoragedriver%2Fimages%2Fcontainer-layers.jpg" alt="Image structure"&gt;&lt;/a&gt;&lt;br&gt;
P.S.: If you download two or more versions of the same image, Docker only pulls the layers that are new; the layers shared with an already-installed version are reused rather than installed again.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# to build a container from installed image
docker run &amp;lt;image-name:version&amp;gt;

# to get list of all running containers metadata
docker ps 

# to get list of all containers metadata
docker ps -a

# to stop a container
docker stop &amp;lt;container-name or container-id&amp;gt;

# to restart a container
docker start &amp;lt;container-name or container-id&amp;gt;

# to remove a container 
docker container rm &amp;lt;container-name or container-id&amp;gt;

# to rename a container
docker rename &amp;lt;current container-name&amp;gt; &amp;lt;new container-name&amp;gt;

# to navigate inside a container's terminal
docker exec -it &amp;lt;container id or container name&amp;gt; /bin/bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Containers help you set up an environment on any OS without worrying about configuration and dependencies, so you can focus on actually building applications.&lt;/p&gt;

&lt;p&gt;So what's the catch? &lt;br&gt;
Containers are not persistent. When a container is deleted, it loses all the data in its container layer; a new container created from the same image starts again from a fresh state, with no history of the operations we performed inside the previous one. How do we solve this? &lt;/p&gt;
&lt;h3&gt;
  
  
  Persistent data
&lt;/h3&gt;

&lt;p&gt;Persisting the data is the solution to the conflict above. The idea is to store the data in the host's filesystem, outside Docker's virtual filesystem.&lt;br&gt;
There are basically three ways to persist data:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Volume &lt;/li&gt;
&lt;li&gt;Bind mount&lt;/li&gt;
&lt;li&gt;tmpfs mount (Linux)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdocs.docker.com%2Fstorage%2Fimages%2Ftypes-of-mounts-volume.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdocs.docker.com%2Fstorage%2Fimages%2Ftypes-of-mounts-volume.png" alt="Ways to persist data"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, Docker recommends volumes as the primary choice for persisting data because of their advantages over the other options: easy and safe migration and backup between containers and systems, support for both Linux and Windows, and easy management through the Docker client CLI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# to create a volume 
docker volume create &amp;lt;volume-name&amp;gt;

# to list all volumes
docker volume ls

# to delete a volume
docker volume rm &amp;lt;volume-name&amp;gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
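&lt;p&gt;Putting it together, a container can mount a volume so its data outlives the container itself. A minimal sketch (the volume name and mount path here are illustrative):&lt;/p&gt;

```shell
# Mount the named volume "app-data" at /var/lib/app inside the container:
# anything written under /var/lib/app persists even after the container
# is removed, and a new container can mount the same volume to reuse it.
VOLUME_NAME="app-data"
MOUNT_CMD="docker run -d -v $VOLUME_NAME:/var/lib/app ubuntu:20.04"
# Preview the command; drop the echo to actually run the container.
echo "$MOUNT_CMD"
```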




&lt;h4&gt;
  
  
  What next?
&lt;/h4&gt;

&lt;p&gt;Let's say you have 4-5 containers on your machine and you want to scale them, configure their networks, and test, deploy and manage them. Some organisations have thousands of containers, and managing containers becomes difficult as they scale up, so we need a tool that takes care of them for us.&lt;br&gt;
That's where container orchestration comes into the picture. A container orchestrator deploys, scales and removes containers, checks their health, load-balances traffic, and manages them all. The most widely used container orchestration tool is Kubernetes.&lt;br&gt;
So now you know what to explore next!&lt;/p&gt;




&lt;p&gt;&lt;code&gt;Thank you for taking time to read my article :)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Do connect with me on- &lt;br&gt;
&lt;a href="https://twitter.com/atharvashinde_" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;, &lt;br&gt;
&lt;a href="https://github.com/Atharva-Shinde" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;, and&lt;br&gt;
&lt;a href="https://www.linkedin.com/in/atharva-shinde-6468b4205/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>devops</category>
      <category>kubernetes</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
