<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: KodeKloud</title>
    <description>The latest articles on DEV Community by KodeKloud (@kodekloud).</description>
    <link>https://dev.to/kodekloud</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F287899%2F000b41db-819a-4b9d-81d5-bce3c94cae34.png</url>
      <title>DEV Community: KodeKloud</title>
      <link>https://dev.to/kodekloud</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kodekloud"/>
    <language>en</language>
    <item>
      <title>CKS Challenges</title>
      <dc:creator>KodeKloud</dc:creator>
      <pubDate>Mon, 09 May 2022 08:13:12 +0000</pubDate>
      <link>https://dev.to/kodekloud/cks-challenges-57dg</link>
      <guid>https://dev.to/kodekloud/cks-challenges-57dg</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9VTy7KMX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sf84un013ybbntk1l4ig.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9VTy7KMX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sf84un013ybbntk1l4ig.png" alt="Image description" width="880" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have launched a special series to help you prepare for the Kubernetes CKS certification. Check out the &lt;strong&gt;Certified Kubernetes Security Specialist Challenge Series&lt;/strong&gt;, where you can put all your hardcore Kubernetes skills to the test.&lt;/p&gt;

&lt;p&gt;This series consists of a set of complex challenges that will assist you in mastering Kubernetes Security concepts and getting ready for the coveted Certified Kubernetes Security Specialist Certification.&lt;/p&gt;

&lt;p&gt;These challenges will test you on Kubernetes security concepts such as network policies, RBAC, seccomp, and AppArmor. To solve some of the tasks, you will also need to make use of third-party security tools such as Aquasec Trivy, Kubesec, CIS Benchmarks, and Falco from Sysdig open source. &lt;/p&gt;

&lt;p&gt;The interface of these challenges is divided into two parts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The top half contains the Quiz portal, where the details of the challenge are displayed, along with an interactive Architecture diagram.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Click on the icons and the arrow connectors in the architecture diagram and an associated task (if available) will be displayed on the quiz portal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HIvmteUy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mbq5p7vaxwgyy1u669u1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HIvmteUy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mbq5p7vaxwgyy1u669u1.png" alt="Image description" width="880" height="303"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2. The bottom half of the interface contains the terminal to the Kubernetes control plane, which you will use to complete the tasks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VAxuyyrL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jdsuo9bnyk1qyxr8j18z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VAxuyyrL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jdsuo9bnyk1qyxr8j18z.png" alt="Image description" width="880" height="206"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To check whether a task is complete, click the “Check” button. Completed items in the architecture diagram are highlighted in green; anything yet to be completed is highlighted in red.&lt;/p&gt;

&lt;p&gt;To complete the challenge, you must complete all the tasks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to submit your solution&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Record your solution and share it on social media, on your blog, or as a video on YouTube.&lt;br&gt;
These challenges are absolutely free for anyone to attempt.&lt;/p&gt;

&lt;p&gt;Just follow the steps below to submit your solution and win exciting prizes.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Document your solution with a good explanation on your blog or GitHub&lt;/li&gt;
&lt;li&gt;Or, even better, record a demo and upload it to YouTube&lt;/li&gt;
&lt;li&gt;Add a link to the CKS challenges on KodeKloud as a reference: &lt;a href="https://bit.ly/3spNmyj"&gt;https://kodekloud.com/courses/cks-challenges/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Submit the details through the form: &lt;a href="https://g0u7lypetlq.typeform.com/cks-challenge"&gt;https://g0u7lypetlq.typeform.com/cks-challenge&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Prizes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The best solution with the best explanation wins the contest.&lt;/p&gt;

&lt;p&gt;We give away 2 &lt;strong&gt;Exam vouchers&lt;/strong&gt; each month for top submissions.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;So what are you waiting for? Get started right now.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
    <item>
      <title>Docker Certified Associate Exam Series (Part -2): Container Orchestration</title>
      <dc:creator>KodeKloud</dc:creator>
      <pubDate>Wed, 13 Jan 2021 07:38:44 +0000</pubDate>
      <link>https://dev.to/kodekloud/docker-certified-associate-exam-series-part-2-container-orchestration-3e3f</link>
      <guid>https://dev.to/kodekloud/docker-certified-associate-exam-series-part-2-container-orchestration-3e3f</guid>
      <description>&lt;h2&gt;
  
  
  Introduction to Container Orchestration
&lt;/h2&gt;

&lt;p&gt;An essential part of preparing for the Docker Certified Associate (DCA) exam is familiarizing yourself with container orchestration: the set of tools and scripts used to host, configure, and manage containers in a production environment. &lt;/p&gt;

&lt;p&gt;Deploying in Docker typically involves running various applications on different hosts. Container orchestration will help you set up a large number of application instances using a single command. Container orchestration tools also help scale your application’s instances up or down in response to fluctuations in demand. With container orchestration tools, you can also provide advanced networking between various containers.&lt;/p&gt;

&lt;h4&gt;
  
  
  Three of the most popular container orchestration tools are Docker Swarm, Kubernetes, and Mesos.
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Docker Swarm is hugely popular and easy to set up, yet has a few drawbacks when it comes to autoscaling and customizations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Mesos is challenging to use and is only recommended for advanced cloud developers. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kubernetes is a popular container orchestration solution that offers plenty of customization options and unmatched auto-scaling capabilities. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  For this part of the study guide series, we shall cover Docker Swarm.
&lt;/h4&gt;

&lt;h3&gt;
  
  
  Docker Swarm
&lt;/h3&gt;

&lt;p&gt;Docker Swarm helps you run applications on the Docker Engine seamlessly across multiple nodes that reside in the same cluster. With Docker Swarm, you can always monitor the state, health, and performance of your containers and the hosts that run your applications. &lt;/p&gt;

&lt;p&gt;As you prepare for the DCA exam, some of the topics of Docker Swarm that you’ll need an in-depth understanding of include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Swarm Architecture&lt;/li&gt;
&lt;li&gt;Setting up a 2-node cluster in Swarm&lt;/li&gt;
&lt;li&gt;Creating a demo swarm cluster setup&lt;/li&gt;
&lt;li&gt;Basic Swarm Operations&lt;/li&gt;
&lt;li&gt;Swarm High Availability and the Importance of Quorum&lt;/li&gt;
&lt;li&gt;Swarm in High Availability Mode&lt;/li&gt;
&lt;li&gt;Auto-lock and a classroom demo &lt;/li&gt;
&lt;li&gt;Swarm Services&lt;/li&gt;
&lt;li&gt;Rolling Updates, Rollbacks, and Scaling&lt;/li&gt;
&lt;li&gt;Swarm Service Types&lt;/li&gt;
&lt;li&gt;Placement in Swarm&lt;/li&gt;
&lt;li&gt;Service in Swarm- Basic Operations&lt;/li&gt;
&lt;li&gt;Service in Swarm- placements, global, parallelism, and replicated&lt;/li&gt;
&lt;li&gt;Docker Config Objects&lt;/li&gt;
&lt;li&gt;The Docker Overlay Network&lt;/li&gt;
&lt;li&gt;MACVLan Networks&lt;/li&gt;
&lt;li&gt;Swarm Service Discovery&lt;/li&gt;
&lt;li&gt;Docker Stack&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Let’s explore some of these areas in detail:
&lt;/h4&gt;

&lt;h3&gt;
  
  
  Swarm Architecture
&lt;/h3&gt;

&lt;p&gt;As you study for your exam, you should develop a high level of familiarity with the structure and architecture of Docker Swarm.&lt;/p&gt;

&lt;p&gt;Docker Swarm lets you integrate different Docker machines onto a single cluster. This helps with your application’s load balancing and also improves its availability. A Docker Cluster is made up of different instances called Nodes. Nodes can be categorized into two types: Manager and Worker nodes.&lt;/p&gt;

&lt;p&gt;A Manager Node receives instructions from a user and turns them into service tasks, which are then assigned to one or more worker nodes. Manager nodes also help maintain the desired state of the cluster to which they belong. Managers can be configured to run production workloads too, when needed.&lt;/p&gt;

&lt;p&gt;On the other hand, a Worker Node receives instructions from the manager nodes, and uses these instructions to deploy and run the necessary containers. &lt;/p&gt;

&lt;h4&gt;
  
  
  Some features of Docker Swarm Architecture include:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Swarm is easy to set up and maintain since all the features of Docker Swarm are embedded in the Docker Engine.&lt;/li&gt;
&lt;li&gt;Docker Swarm deploys applications in a Declarative format.&lt;/li&gt;
&lt;li&gt;The Swarm manager automatically scales and distributes application instances across worker nodes depending on demand.&lt;/li&gt;
&lt;li&gt;Rolling updates reconfigure your application instances one at a time for easier change management.&lt;/li&gt;
&lt;li&gt;Docker Swarm performs desired state reconciliation for self-healing applications.&lt;/li&gt;
&lt;li&gt;SSL/TLS certificates secure communication between nodes via authentication and encryption&lt;/li&gt;
&lt;li&gt;Uses an external load balancer to distribute requests between nodes
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F13hvff046cbeszaeecv4.png" alt="Alt Text"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Setting up a 2-Node Swarm Cluster
&lt;/h3&gt;

&lt;p&gt;This lesson is a demonstration of how you can create a Docker Swarm cluster with two worker nodes and one manager.&lt;br&gt;
The prerequisites needed for this session will include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Machines (Nodes) deployed and designated as Manager, Worker-1, and Worker-2.&lt;/li&gt;
&lt;li&gt;The machines should have the Docker Engine installed.&lt;/li&gt;
&lt;li&gt;Each node should be assigned a static IP address.&lt;/li&gt;
&lt;li&gt;The ports TCP 2377, TCP &amp;amp; UDP 7946 and UDP 4789 should be opened.
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F8vygbzn0p1woo0uaqpq7.png" alt="Alt Text"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To initialize Docker Swarm, run the following command on the manager node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker swarm init

Swarm initialized: current node (whds9866c56gtgq3uf5jmfsip) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-19nlqoifkry03y8l5242zl6e2te2k9dvzebf5b70ihhpn7r4qh-aqtxt2sd0sh0hj2f8ceupj53g 172.17.0.27:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command initializes Swarm on the selected node, which is now a manager. It also prints the command you will use to add a worker to this swarm, as shown on the Command Line Interface. &lt;/p&gt;

&lt;p&gt;To retrieve the worker join command again later, run the following command on the manager:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker swarm join-token worker

To add a worker to this swarm, run the following command:

docker swarm join --token SWMTKN-1-19nlqoifkry03y8l5242zl6e2te2k9dvzebf5b70ihhpn7r4qh-aqtxt2sd0sh0hj2f8ceupj53g 172.17.0.27:2377
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To display a list of your nodes with their names and status, type the following command into the CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker node ls
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Swarm Operations
&lt;/h3&gt;

&lt;p&gt;You will learn some of the common Swarm operations that involve promoting, draining, and deleting nodes. &lt;/p&gt;

&lt;p&gt;To promote a node to manager, you will run the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker node promote worker1
Node worker1 promoted to a manager in the swarm.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To demote a manager node to Worker, you will run the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker node demote worker1
Manager worker1 demoted in the swarm.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you want to perform upgrades and maintenance on your cluster, you might need to drain each node one at a time. Let us assume that the current state of the cluster has the following nodes, as shown below: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnhrueb91zsj6e37t14td.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnhrueb91zsj6e37t14td.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To drain your node, use the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker node update --availability drain worker1
worker1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command brings down containers on worker1 and runs replica instances on another worker until it gets back up.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fihnueof6dhkf18h13p6p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fihnueof6dhkf18h13p6p.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you are done patching or maintaining your node, run the update command again with availability set to active to bring the node back up. The command to use is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker node update --availability active worker1
worker1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To delete a node from a cluster, drain it so that its workload is redistributed to other nodes, then run the following command on the node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker swarm leave
Node left the swarm.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Swarm High Availability- Quorum
&lt;/h3&gt;

&lt;p&gt;Docker Swarm uses the Raft consensus algorithm to achieve distributed consensus when more than one manager node is running in a cluster. Having multiple managers in a cluster helps with fault tolerance. &lt;/p&gt;

&lt;p&gt;The Raft algorithm initiates leader elections after randomized timeouts. The first manager to time out requests the votes of the other managers in the cluster to make it the Leader. If a majority of managers respond positively, that manager assumes the leader role, sending notifications and updating a shared database on the state of the cluster. &lt;br&gt;
This database is available to all managers in the cluster. Before the leader makes any changes to the cluster, it sends the instructions to the other managers; the managers must reach a Quorum and agree before the changes are applied. If the leader loses connectivity, the remaining managers initiate a process to elect a new leader. &lt;/p&gt;
&lt;h4&gt;
  
  
  The best practices for high availability in Swarm include:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Each cluster should have an odd number of managers, so that a majority can still be formed if the network is segmented.&lt;/li&gt;
&lt;li&gt;Every decision must be agreed upon by a Quorum of the managers present. The quorum for a cluster with N managers is floor(N/2) + 1.&lt;/li&gt;
&lt;li&gt;The number of manager failures a cluster can withstand is its Fault Tolerance, calculated as (N-1)/2.&lt;/li&gt;
&lt;/ul&gt;
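&lt;p&gt;The quorum arithmetic can be sketched in a few lines of Python (an illustrative snippet based on the standard Raft majority rule, not part of Docker itself):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Quorum and fault tolerance for a Swarm cluster with n manager nodes
def quorum(n):
    return n // 2 + 1        # majority of managers: floor(N/2) + 1

def fault_tolerance(n):
    return (n - 1) // 2      # managers that can fail while a quorum remains

for n in (1, 3, 5, 7):
    print(n, "managers: quorum", quorum(n), "- tolerates", fault_tolerance(n), "failures")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Note that with one or two managers the cluster tolerates no manager failures at all, which is why three is the practical minimum for high availability.&lt;/p&gt;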

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F43exns7ssg8s0d2o8j2a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F43exns7ssg8s0d2o8j2a.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Distribute manager nodes equally over different data centres/availability zones so the cluster can withstand site-wide disruptions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If more than the allowed number of managers fail, you cannot perform managerial duties on your cluster. The worker nodes will, however, continue to run normally with all the services and configuration settings still active.&lt;/p&gt;

&lt;p&gt;To recover a failed cluster, you could first attempt to bring the failed managers back online. If this fails, you can re-initialize the cluster with the --force-new-cluster flag, which creates a healthy, single-manager cluster. The command is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker swarm init --force-new-cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once this cluster has been created, you can promote other workers into manager nodes using the promote command.&lt;/p&gt;

&lt;h3&gt;
  
  
  Swarm Services
&lt;/h3&gt;

&lt;p&gt;As you begin to deploy your clusters, you’ll need a way to run multiple instances of your application across several worker nodes to help with automation and load balancing. The Docker Service allows you to launch containers in a coordinated manner across several nodes. &lt;/p&gt;

&lt;p&gt;To create 3 replicas of your application using the Docker service, run the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker service create --replicas=3 App1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5qifwmysvargp3o3oqsi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5qifwmysvargp3o3oqsi.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
When you deploy your applications, an API server creates a service which is then divided into tasks by the orchestrator. The allocator then assigns each task an IP address, and a dispatcher assigns these tasks to individual workers. The scheduler then manages task handling by the workers.&lt;/p&gt;

&lt;p&gt;Here are a few common service tasks and their Docker Commands.&lt;/p&gt;

&lt;p&gt;Create an overlay network&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker network create --driver overlay my-overlay-network
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a network with a specific subnet&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker network create --driver overlay --subnet 10.15.0.0/16 my-overlay-network 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make a network attachable to external containers&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker network create --driver overlay --attachable my-overlay-network
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Enable IPsec encryption&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker network create --driver overlay --opt encrypted my-overlay-network
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Attach a service to a network&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker service create --network my-overlay-network my-web-service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Delete a newly created network&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker service create --network my-overlay-network my-web-service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Delete all unused networks&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker network prune
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fv0t0stuekx2hcbbg9me0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fv0t0stuekx2hcbbg9me0.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Here are some network ports and their purposes.
&lt;/h3&gt;

&lt;p&gt;TCP 2377: Cluster Management Communications&lt;br&gt;
TCP/UDP 7946: Container Network Discovery/Communication Among Nodes&lt;br&gt;
UDP 4789: Overlay Network Traffic&lt;/p&gt;

&lt;p&gt;To publish a service on host port 80 mapped to container port 5000, use the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker service create -p 80:5000 my-web-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker service create --publish published=80, target=5000 my-web-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To include UDP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker service create -p 80:5000/UDP my-web-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker service create --publish published=80, target=5000, protocol=UDP my-web-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Swarm Service Discovery
&lt;/h3&gt;

&lt;p&gt;Containers and services in a cluster can communicate with each other directly using their names. To make sure that these containers can ‘see’ each other, create an overlay network and place the application and naming services on it. For instance:&lt;/p&gt;

&lt;p&gt;Create an overlay network:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker network create --driver=overlay app-network
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then create an API server within this network:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker service create --name=api-server --replicas=2 --network=app-network api server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the web service task:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker service create --name=web --network=app-network web
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The services can now reach each other using their service names. The web server can now reach the api-server using the service name api-server.&lt;/p&gt;

&lt;h3&gt;
  
  
  Docker Stack
&lt;/h3&gt;

&lt;p&gt;In Docker, a stack is a group of interrelated services that together form the functionality of an application. All of your application’s configuration settings and changes are stored in a configuration file known as a Docker Compose file. Docker Compose lets you define stacks in YAML, which makes your application easier to manage, highly distributed, and scalable. The stack file also lets you define health checks for your containers and set a grace period during which the health check stays inactive. &lt;/p&gt;
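&lt;p&gt;As an illustrative sketch (the service name, image, and health endpoint here are hypothetical), a compose file with a health check and a start-up grace period might look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: "3.8"
services:
  web:
    image: my-web-server          # hypothetical image name
    deploy:
      replicas: 3
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5000/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 40s           # grace period before failed checks count
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Deploying this file with docker stack deploy creates the stack and applies the health-check settings to every replica of the service.&lt;/p&gt;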

&lt;p&gt;To deploy a stack from a Docker Compose file, run the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker stack deploy --compose-file docker-compose.yml
Other Docker Stack Commands include:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Task &amp;amp; Command
&lt;/h4&gt;

&lt;p&gt;Create a Stack&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker stack deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;List active stacks&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker stack ls
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;List services created by a stack&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker stack ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;List tasks running in a stack&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker stack ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Delete a Stack&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker stack rm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Docker Storage
&lt;/h3&gt;

&lt;p&gt;To understand how container orchestration tools manage storage, it is important to know how Docker manages storage in containers. Getting to know storage in Docker will go a long way in helping you manage storage with Kubernetes. Storage in Docker is managed by two mechanisms: Storage Drivers and Volume Drivers. &lt;/p&gt;

&lt;p&gt;Docker uses Storage Drivers to enable its layered architecture. These are attached to containers by default and store data under the default path /var/lib/docker, in subfolders such as aufs, containers, image, and volumes.&lt;/p&gt;

&lt;p&gt;Popular storage drivers include: AUFS, ZFS, Btrfs, Device Mapper, Overlay, and Overlay2.&lt;/p&gt;

&lt;h3&gt;
  
  
  Volume Driver Plugins in Docker
&lt;/h3&gt;

&lt;p&gt;Volume Drivers help create persistent volumes in Docker. By default, volumes are assigned a local driver that stores data on the host's volume directory.&lt;/p&gt;

&lt;p&gt;There are third-party volume driver plugins that help with storage on various public cloud platforms. These include: Azure File Storage, Convoy, DigitalOcean BlockStorage, Flocker, gce-docker, GlusterFS, NetApp, RexRay, Portworx, VMware vSphere Storage, among others. Docker automatically assigns the appropriate volume driver depending on the operating system and application needs. &lt;/p&gt;

&lt;p&gt;To create a volume on Amazon Elastic Block Store (EBS), run the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker run -it \
--name app1
    --volume driver rexray/ebs
    --mount src=ebs -vol,target=/var/lib/app1
    app1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command creates a persistent volume storage on Amazon EBS at app1’s default file location.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sample Questions:
&lt;/h3&gt;

&lt;p&gt;Here is a quick quiz to help you assess your knowledge. Leave your answers in the comments below and tag us back. &lt;/p&gt;

&lt;p&gt;Quick Tip - Questions below may include a mix of DOMC and MCQ types.&lt;/p&gt;

&lt;h4&gt;
  
  
  Which command can be used to remove a &lt;code&gt;kubeapp&lt;/code&gt; stack?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;[A] &lt;code&gt;docker stack deploy kubeapp&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;[B] &lt;code&gt;docker stack ls kubeapp&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;[C] &lt;code&gt;docker stack services kubeapp&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;[D] &lt;code&gt;docker stack rm kubeapp&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Which command can be used to promote worker2 to a manager node? Select the right answer.
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;[A] &lt;code&gt;docker promote node worker2&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;[B] &lt;code&gt;docker node promote worker2&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;[C] &lt;code&gt;docker swarm node promote worker2&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;[D] &lt;code&gt;docker swarm promote node worker2&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  What is the command to list the stacks in the Docker host?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;[A] &lt;code&gt;docker stack deploy&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;[B] &lt;code&gt;docker stack ls&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;[C] &lt;code&gt;docker stack services&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;[D] &lt;code&gt;docker stack ps&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  What is the maximum number of managers possible in a swarm cluster?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;[A] 3&lt;/li&gt;
&lt;li&gt;[B] 5&lt;/li&gt;
&lt;li&gt;[C] 7&lt;/li&gt;
&lt;li&gt;[D] No limit&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  ...  are one or more instances of a single application that runs across the Swarm Cluster.
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;[A] docker stack&lt;/li&gt;
&lt;li&gt;[B] services&lt;/li&gt;
&lt;li&gt;[C] pods&lt;/li&gt;
&lt;li&gt;[D] None of the above&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Once you have understood the Swarm architecture, set up a cluster, and configured it for high availability, you will have enough familiarity to tackle real-world projects. Swarm services automate the interaction between nodes and provide load balancing for distributed containers. By following this guide, you should now understand why container orchestration matters and how Docker Swarm offers a simple, no-fuss framework that helps keep your containers healthy.&lt;/p&gt;

&lt;p&gt;To test where you stand in your Docker certification journey, take the DCA Readiness Test at &lt;a href="https://kodekloud.com/p/docker-certified-associate-exam-course" rel="noopener noreferrer"&gt;dca.kodekloud.com&lt;/a&gt;. On KodeKloud, you also get a learning path with recommendations, sample questions, and tips for clearing the DCA exam. &lt;/p&gt;

</description>
      <category>docker</category>
      <category>devops</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Docker Certified Associate (DCA) - The Ultimate Certification Guide for 2021</title>
      <dc:creator>KodeKloud</dc:creator>
      <pubDate>Wed, 23 Dec 2020 09:22:08 +0000</pubDate>
      <link>https://dev.to/kodekloud/docker-certified-associate-dca-the-ultimate-certification-guide-for-2021-1l59</link>
      <guid>https://dev.to/kodekloud/docker-certified-associate-dca-the-ultimate-certification-guide-for-2021-1l59</guid>
      <description>&lt;h2&gt;
  
  
  Why Get a Docker Certification?
&lt;/h2&gt;

&lt;p&gt;As more organizations adopt cloud services, Docker continues to gain popularity. At its core, Docker aids in containerization: packaging applications into modules that can easily be replicated and scaled independently. As more applications move to the cloud, expertise in Docker makes you a sought-after candidate in the modern IT world. &lt;/p&gt;

&lt;p&gt;In this article, we run through the Docker Certified Associate (DCA) exam curriculum and point you to helpful resources for cracking the certification test. &lt;/p&gt;

&lt;h2&gt;
  
  
  Curriculum Covered in Docker Certification
&lt;/h2&gt;

&lt;p&gt;The DCA certification is awarded by &lt;a href="https://www.docker.com/" rel="noopener noreferrer"&gt;Docker&lt;/a&gt; (the exam is proctored by Examity) and highlights your familiarity and expertise with application deployment using Docker. Apart from knowledge of container orchestration, the exam requires working knowledge of Docker Enterprise Edition and Docker Swarm. On passing it, you’ll have proven that you can install and configure containerized applications using Docker, making it a useful benchmark of your IT skills. &lt;/p&gt;

&lt;p&gt;Below are a few of the key concepts, with their exam weightage, that you should get hands-on experience with as you prepare for the DCA certification:&lt;/p&gt;

&lt;h3&gt;
  
  
  Container Orchestration
&lt;/h3&gt;

&lt;p&gt;This part covers the basics of Docker and container orchestration and carries about 25% of the total marks in the DCA exam. You are also required to learn the various container orchestration tools that help automate the process of managing containers. &lt;/p&gt;

&lt;h4&gt;
  
  
  Content break-up on container orchestration includes:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Setting up a Swarm Mode Cluster&lt;/li&gt;
&lt;li&gt;Locking a Swarm Cluster&lt;/li&gt;
&lt;li&gt;Deploying Applications into Stack Files&lt;/li&gt;
&lt;li&gt;Running a Service vs. Running a Container&lt;/li&gt;
&lt;li&gt;Manage a stack of running services&lt;/li&gt;
&lt;li&gt;Replication of Services&lt;/li&gt;
&lt;li&gt;Replicated and Global Services&lt;/li&gt;
&lt;li&gt;Troubleshoot a non-deploying Service&lt;/li&gt;
&lt;li&gt;Communication among Docker Applications and Legacy Systems&lt;/li&gt;
&lt;li&gt;Service Templates&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;An essential part of this content ensures that you learn the basics of container orchestration, including tools that automate the deployment of containerized applications, manage release updates, and recover failed containers. On completing this part, you should be able to create your first orchestrated, containerized application. &lt;/p&gt;
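&lt;p&gt;As a rough illustration of the "Deploying Applications into Stack Files" topic above, here is a minimal sketch. The stack file contents, the service name &lt;code&gt;web&lt;/code&gt;, and the stack name &lt;code&gt;demoapp&lt;/code&gt; are hypothetical, and the &lt;code&gt;docker stack&lt;/code&gt; commands are only printed, since they need a live Swarm manager to run:&lt;/p&gt;

```shell
# Write a minimal (hypothetical) stack file describing one replicated service.
cat > demo-stack.yml <<'EOF'
version: "3.8"
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 3
    ports:
      - "8080:80"
EOF

# On a Swarm manager node you would then deploy and inspect the stack:
echo "docker stack deploy -c demo-stack.yml demoapp"
echo "docker stack services demoapp"
```

&lt;p&gt;The &lt;code&gt;deploy&lt;/code&gt; section is what distinguishes a stack file from a plain Compose file: it is honored only by &lt;code&gt;docker stack deploy&lt;/code&gt; on a Swarm cluster.&lt;/p&gt;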

&lt;h3&gt;
  
  
  Docker Swarm/Kubernetes
&lt;/h3&gt;

&lt;p&gt;Both Kubernetes and Docker Swarm are popular choices for container orchestration. In essence, Kubernetes focuses on open-source, modular orchestration, offering an efficient solution for high-demand applications with complex configurations. Docker Swarm, on the other hand, emphasizes ease of use, making it most suitable for simple applications that are quick to deploy and easy to manage. For the DCA exam, you are required to have working knowledge of both tools and should be aware of their most appropriate use-cases in different scenarios.&lt;/p&gt;

&lt;h3&gt;
  
  
  Image Creation, Registry and Management
&lt;/h3&gt;

&lt;p&gt;Image Creation, Management and Registry carries about 20% of your overall mark in the DCA test. All Docker containers are based on images, the building blocks of containerized applications. An image is, in fact, an executable package containing all the components you need to run your application. &lt;/p&gt;

&lt;h4&gt;
  
  
  For the Docker Certified Associate Exam, the content will include:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Dockerfile Options&lt;/li&gt;
&lt;li&gt;Creating Images using Dockerfile&lt;/li&gt;
&lt;li&gt;Image Management using CLI commands&lt;/li&gt;
&lt;li&gt;Docker Image Layers&lt;/li&gt;
&lt;li&gt;Deploying, Configuring and Logging into Registry&lt;/li&gt;
&lt;li&gt;Pushing, Signing and Pulling Images from the Registry&lt;/li&gt;
&lt;li&gt;Image Deletion&lt;/li&gt;
&lt;li&gt;Tagging Images&lt;/li&gt;
&lt;/ul&gt;
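&lt;p&gt;The "Creating Images using Dockerfile" and "Tagging Images" topics above can be sketched as follows. The Dockerfile contents, image name &lt;code&gt;myapp&lt;/code&gt;, and registry address are all illustrative, and the build/tag/push commands are only printed, since they require a Docker daemon and a registry:&lt;/p&gt;

```shell
# A minimal (illustrative) Dockerfile; each instruction adds an image layer.
cat > Dockerfile.demo <<'EOF'
FROM alpine:3.19
LABEL maintainer="demo@example.com"
COPY app.sh /usr/local/bin/app.sh
ENTRYPOINT ["/usr/local/bin/app.sh"]
EOF

# Typical build / tag / push workflow against a registry (commands shown, not run):
echo "docker build -t myapp:1.0 -f Dockerfile.demo ."
echo "docker tag myapp:1.0 registry.example.com/myapp:1.0"
echo "docker push registry.example.com/myapp:1.0"
```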

&lt;h3&gt;
  
  
  Installation and Configuration
&lt;/h3&gt;

&lt;p&gt;This is considered the most crucial part of the entire DCA curriculum. Though Installation and Configuration accounts for only 15% of your total score, in the real world a thorough knowledge of these concepts comes in handy almost daily. &lt;/p&gt;

&lt;h4&gt;
  
  
  Content covered in Installation and Configuration includes:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Upgrading the Docker Engine&lt;/li&gt;
&lt;li&gt;Installing the Docker Engine on Various Platforms&lt;/li&gt;
&lt;li&gt;Logging Drivers&lt;/li&gt;
&lt;li&gt;User and Team Creation, User Management&lt;/li&gt;
&lt;li&gt;Sizing Requirements&lt;/li&gt;
&lt;li&gt;Client-Server Authentication for Image Registry Access&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As an essential part, you will also familiarize yourself with the &lt;strong&gt;Docker Universal Control Plane (UCP), Docker Daemon and the Docker Trusted Registry (DTR).&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Security &amp;amp; Networking
&lt;/h3&gt;

&lt;p&gt;Networking and Security each carry 15% of the total score weightage. Networking in Docker involves connecting containers using Network Drivers. To fully grasp networking for the DCA exam, you’ll have to understand concepts such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Building Docker Bridge Networks for developer use&lt;/li&gt;
&lt;li&gt;Troubleshooting logs&lt;/li&gt;
&lt;li&gt;Publishing application ports&lt;/li&gt;
&lt;li&gt;Identifying container ports and IP addresses&lt;/li&gt;
&lt;li&gt;Describing the various types of network drivers&lt;/li&gt;
&lt;li&gt;Configuring the Docker engine to use an external DNS&lt;/li&gt;
&lt;li&gt;Performing HTTP/HTTPS load balancing&lt;/li&gt;
&lt;li&gt;Types of traffic on Docker Networks&lt;/li&gt;
&lt;li&gt;Deploying services on Docker Networks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The DCA security chapter explores all content relating to authentication, encryption and transport layer security. This chapter will include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ensuring images pass security scans&lt;/li&gt;
&lt;li&gt;The process of signing images&lt;/li&gt;
&lt;li&gt;Docker Content Trust&lt;/li&gt;
&lt;li&gt;Docker Engine Security&lt;/li&gt;
&lt;li&gt;Swarm Security&lt;/li&gt;
&lt;li&gt;Distinguishing UCP workers from managers&lt;/li&gt;
&lt;li&gt;Mutual Transport Layer Security (MTLS)&lt;/li&gt;
&lt;li&gt;Using External Certificates with the Docker Universal Control Plane&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Storage and Volumes
&lt;/h3&gt;

&lt;p&gt;This chapter carries about 10% of your total exam score. Volumes offer a way to persist data in Docker. For the DCA exam, you are expected to develop an understanding of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How to quickly create volumes&lt;/li&gt;
&lt;li&gt;The differences between volumes and bind mounts&lt;/li&gt;
&lt;li&gt;Volume drivers and their most suitable use-cases&lt;/li&gt;
&lt;li&gt;Use of the devicemapper&lt;/li&gt;
&lt;li&gt;Object storage vs. block storage&lt;/li&gt;
&lt;li&gt;Filesystem layers&lt;/li&gt;
&lt;li&gt;Persistent storage in Docker&lt;/li&gt;
&lt;li&gt;Cleanup of unused images&lt;/li&gt;
&lt;li&gt;Storage in cluster nodes. &lt;/li&gt;
&lt;/ul&gt;
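&lt;p&gt;The difference between volumes and bind mounts in the list above comes down to who owns the storage location. A minimal sketch using &lt;code&gt;--mount&lt;/code&gt; syntax; the volume name &lt;code&gt;app-data&lt;/code&gt; and paths are illustrative, and the commands are only printed since they need a Docker daemon:&lt;/p&gt;

```shell
# Named volume: Docker manages the storage location (under /var/lib/docker/volumes).
echo "docker run --mount type=volume,source=app-data,target=/var/lib/app nginx"

# Bind mount: you choose an existing host directory to mount into the container.
echo "docker run --mount type=bind,source=/opt/app-data,target=/var/lib/app nginx"
```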

&lt;h3&gt;
  
  
  Docker Enterprise Edition
&lt;/h3&gt;

&lt;p&gt;The Docker Enterprise Edition (EE) is created for applications with mission-critical deployments. This gives you a managed solution, complete with advanced container management, security scanning and application logging &amp;amp; monitoring. This version can be deployed on all major Server operating systems, including Red Hat Enterprise Linux (RHEL), Ubuntu, Oracle Linux, Windows Server 2016 and SUSE Linux Enterprise Server (SLES). It is also available for major cloud providers, including Azure and AWS. &lt;/p&gt;

&lt;h2&gt;
  
  
  Exam Preparation
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Certification Format
&lt;/h3&gt;

&lt;p&gt;The Docker Certified Associate Exam lasts 90 minutes and consists of 55 questions: 44 Discrete Option Multiple-Choice (DOMC) and 11 Multiple-Choice (MCQ) questions. In DOMC questions, options are presented one at a time, in random order, and the examinee answers YES or NO to each. In an MCQ question, there may be multiple correct answers, all of which the examinee has to select. The exam is proctored by Examity, and you can register by clicking this &lt;a href="https://prod.examity.com/docker/" rel="noopener noreferrer"&gt;link&lt;/a&gt;. While there are no prerequisites, it is recommended that you have used Docker for 6-12 months before attempting the exam. The exam fee is $195, and there are no free retakes if you fail. You may, however, reschedule the exam prior to taking the test, so don’t feel pressured to take it until you are completely ready.&lt;/p&gt;

&lt;h3&gt;
  
  
  Docker Certified Associate (DCA) Study Plan
&lt;/h3&gt;

&lt;p&gt;As you prepare to study for your DCA exam, it is best to plan well to make sure that you do not miss any important topics, while ensuring that you do not get overwhelmed with the amount of knowledge flowing in. &lt;/p&gt;

&lt;p&gt;To plan well, you may divide the entire study into three parts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The first is a lab setup that you can use for practice demos. This could be a local Docker Command Line Interface (CLI), a cloud platform like AWS (if you have a subscription), or an online playground that emulates the Docker CLI. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The second part of your plan should be a set of practice exams that acquaint you with the Discrete Option Multiple Choice (DOMC) and Multiple Choice Question (MCQ) formats used in the exam. To help with this, KodeKloud provides research questions, practice tests and mock exams in both MCQ and DOMC formats that help you get familiar with the certification exam.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Lastly, but most importantly, you should plan for helpful study resources. Through KodeKloud’s practice lectures, you can get an in-depth understanding of the DCA curriculum on a structured schedule. These lectures are also great resources for learning the various Docker commands, options and tips that make for a thorough understanding of Docker. Keep your approach to learning more practical than theoretical, geared toward solving real-world problems; some questions in the DCA exam will check your knowledge of commands, command options, and shortcuts. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;As the ultimate focus, your plan should be to gather knowledge that helps you gain working knowledge of Docker in a practical world. &lt;/p&gt;

&lt;h3&gt;
  
  
  Study Schedule
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fmn6k5x2wyh6y4u9lt5yf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fmn6k5x2wyh6y4u9lt5yf.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Study Schedule above indicates various subject areas you’ll need to cover, categorized into sections with estimated study times based on studying speeds. &lt;/p&gt;

&lt;p&gt;The topics Docker Architecture, Docker Swarm, Kubernetes, and Images carry the bulk of the work, each taking over 20 hours to study, while Security, Networking, and Disaster Recovery each require roughly eight hours for certification-level expertise. As part of the Study Schedule, you will also take several mock exams covering each subject, requiring up to twenty-eight hours of your time. &lt;/p&gt;

&lt;p&gt;In total, to gain the expert-level knowledge needed to clear the DCA certification, we estimate you’ll need to study for three months at two hours a day, a month and a half at four hours a day, or one month at six hours a day. &lt;/p&gt;
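&lt;p&gt;A quick sanity check on the schedules above: assuming 30-day months, all three pacing options work out to the same total number of study hours.&lt;/p&gt;

```shell
# Each schedule yields the same total study time (assuming 30-day months).
echo $(( 3 * 30 * 2 ))   # 3 months at 2 hours/day   → 180
echo $(( 45 * 4 ))       # 1.5 months at 4 hours/day → 180
echo $(( 1 * 30 * 6 ))   # 1 month at 6 hours/day    → 180
```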

&lt;h3&gt;
  
  
  Sample Questions
&lt;/h3&gt;

&lt;p&gt;Here is a quick quiz to help you assess your knowledge. Leave your answers in the comments below and tag us back. &lt;/p&gt;

&lt;p&gt;Quick Tip - Questions below include a mix of DOMC and MCQ types.&lt;/p&gt;

&lt;h4&gt;
  
  
  Which statement best describes Quorum?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;[A] Quorum is the minimum number of nodes that must be available for the cluster to function properly.&lt;/li&gt;
&lt;li&gt;[B] In the case of 3 manager nodes, the quorum is 3&lt;/li&gt;
&lt;li&gt;[C] As one of the best practices, maintain an odd number of managers in the swarm to support manager node failures.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Which of the below is a recommended best practice while taking backups of a swarm cluster?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;[A] Perform the backup operations from a swarm manager node that is a leader&lt;/li&gt;
&lt;li&gt;[B] Perform the backup operations from a swarm worker node &lt;/li&gt;
&lt;li&gt;[C] Perform the backup operations from a swarm manager node that is not a leader&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Which of the following steps are required to add a worker node in the UCP cluster?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;[A] Provision a node and Install Docker enterprise engine on it.&lt;/li&gt;
&lt;li&gt;[B] Run the &lt;code&gt;docker swarm join&lt;/code&gt; command to join the new node to the cluster. &lt;/li&gt;
&lt;li&gt;[C] Deploy an instance of the ucp-agent on the new node.&lt;/li&gt;
&lt;li&gt;[D] ucp-agent then installs the necessary components on the worker node&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  What will happen if the container consumes more memory than its limit?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;[A] the container will not be killed&lt;/li&gt;
&lt;li&gt;[B] the container will be killed with an Out of Memory exception&lt;/li&gt;
&lt;li&gt;[C] the container’s memory usage will be throttled&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  How does Docker map a port on a container to a port on the host?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;[A] Using an internal load balancer&lt;/li&gt;
&lt;li&gt;[B] FirewallD Rules&lt;/li&gt;
&lt;li&gt;[C] Using an external load balancer&lt;/li&gt;
&lt;li&gt;[D] IPTables Rules&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Which of the following solutions support network policies?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;[A] kube-router&lt;/li&gt;
&lt;li&gt;[B] Calico&lt;/li&gt;
&lt;li&gt;[C] Flannel&lt;/li&gt;
&lt;li&gt;[D] Weave-Net&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Which command can be used to stop (only and not delete) the whole stack of containers created by compose file?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;[A] &lt;code&gt;docker-compose down&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;[B] &lt;code&gt;docker-compose stop&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;[C] &lt;code&gt;docker-compose destroy&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;[D] &lt;code&gt;docker-compose halt&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  What is the command to run 3 instances of httpd on a swarm cluster?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;[A] &lt;code&gt;docker swarm  service create --instances=3 httpd&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;[B] &lt;code&gt;docker swarm  service create --replicas=3 httpd&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;[C] &lt;code&gt;docker service create --instances=3 httpd&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;[D] &lt;code&gt;docker service create --replicas=3 httpd&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  In which service does the DTR image scanning occur?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;[A] A service known as the dtr-jobrunner container&lt;/li&gt;
&lt;li&gt;[B] A service known as the dtr-registry container&lt;/li&gt;
&lt;li&gt;[C] A service known as the dtr-api container&lt;/li&gt;
&lt;li&gt;[D] A service known as the dtr-runner container&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Assume that you have 3 managers in your cluster. What will happen if 2 managers fail at the same time? Select all the right answers.
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;[A] The services hosted on the available worker nodes will continue to run.&lt;/li&gt;
&lt;li&gt;[B] The services hosted on the available worker nodes will stop running.&lt;/li&gt;
&lt;li&gt;[C] New services/workers can be created or added.&lt;/li&gt;
&lt;li&gt;[D] New services/workers can’t be created or added.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Study Resource
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://kodekloud.com/p/docker-certified-associate-exam-course" rel="noopener noreferrer"&gt;KodeKloud's Docker Certified Associate Exam Course&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Docker Certification Readiness Test
&lt;/h3&gt;

&lt;p&gt;To assess where you stand in your certification journey, take this &lt;a href="https://kodekloud.com/p/docker-certification-readiness-test" rel="noopener noreferrer"&gt;Docker Certification Readiness Test&lt;/a&gt; &lt;/p&gt;

</description>
      <category>docker</category>
      <category>devops</category>
    </item>
    <item>
      <title>PODS in Kubernetes for Developers</title>
      <dc:creator>KodeKloud</dc:creator>
      <pubDate>Wed, 23 Sep 2020 06:08:56 +0000</pubDate>
      <link>https://dev.to/kodekloud/pods-in-kubernetes-for-developers-3f0n</link>
      <guid>https://dev.to/kodekloud/pods-in-kubernetes-for-developers-3f0n</guid>
      <description>&lt;h4&gt;
  
  
  Here, we will take a look at PODS.
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;POD introduction&lt;/li&gt;
&lt;li&gt;How to deploy a pod?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Kubernetes doesn't deploy containers directly on the worker node.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The containers are encapsulated into a Kubernetes object called POD.&lt;/li&gt;
&lt;li&gt;A POD is a single instance of an application.&lt;/li&gt;
&lt;li&gt;A POD is the smallest object that you can create in Kubernetes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here is a single-node Kubernetes cluster with a single instance of your application running in a single docker container encapsulated in the pod.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YplhCUOq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/a333nupsjhvsjmfhe5nh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YplhCUOq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/a333nupsjhvsjmfhe5nh.png" alt="Alt Text" width="875" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A Pod usually has a one-to-one relationship with containers running your application.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To scale up, you create a pod, and to scale down, you delete a pod.&lt;/li&gt;
&lt;li&gt;You do not add additional containers to an existing POD to scale your application.
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oq0YfjFC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/n0e5f8n6j6zfkvbpopnx.png" alt="Alt Text" width="869" height="447"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Multi-Container PODs
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;A single pod can have multiple containers, although they are usually not multiple containers of the same kind.&lt;/li&gt;
&lt;li&gt;Sometimes you might have helper containers performing supporting tasks for a web application, such as processing user-entered data or a file uploaded by the user, and you want these helper containers to live alongside your application container. In that case, both containers can be part of the same POD.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cMhcrI4M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/tlqvw439h6sje1xlyyr0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cMhcrI4M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/tlqvw439h6sje1xlyyr0.png" alt="Alt Text" width="816" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  How to deploy pods?
&lt;/h4&gt;

&lt;p&gt;Let’s now create an Nginx pod using kubectl.&lt;/p&gt;

&lt;p&gt;To deploy a Docker container by creating a POD:&lt;br&gt;
&lt;code&gt;$ kubectl run nginx --image nginx&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To get the list of pods:&lt;br&gt;
&lt;code&gt;$ kubectl get pods&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vshsFyXF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/gq6o6yo38lid6fxtfws9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vshsFyXF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/gq6o6yo38lid6fxtfws9.png" alt="Alt Text" width="880" height="413"&gt;&lt;/a&gt;&lt;/p&gt;
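&lt;p&gt;The same pod can also be created declaratively. A minimal manifest sketch equivalent to &lt;code&gt;kubectl run nginx --image nginx&lt;/code&gt;; the file name and label are illustrative, and the &lt;code&gt;kubectl apply&lt;/code&gt; command is only printed since it needs a live cluster:&lt;/p&gt;

```shell
# A minimal (illustrative) pod manifest for a single nginx container.
cat > nginx-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
EOF

# Against a live cluster you would create the pod with:
echo "kubectl apply -f nginx-pod.yaml"
```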

&lt;h4&gt;
  
  
  K8s Reference Docs:
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/"&gt;https://kubernetes.io/docs/concepts/workloads/pods/pod/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/"&gt;https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/explore/explore-intro/"&gt;https://kubernetes.io/docs/tutorials/kubernetes-basics/explore/explore-intro/&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Access &lt;a href="https://kodekloud.com/p/certified-kubernetes-administrator-with-practice-tests?utm_source=pbel&amp;amp;utm_medium=DevTo&amp;amp;utm_campaign=%26src%3Dpbel_devto"&gt;certified Kubernetes administrator course with practice tests&lt;/a&gt;
&lt;/h3&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
    <item>
      <title>Top DevOps Queries Answered</title>
      <dc:creator>KodeKloud</dc:creator>
      <pubDate>Sun, 13 Sep 2020 06:00:34 +0000</pubDate>
      <link>https://dev.to/kodekloud/top-devops-queries-answered-2jb2</link>
      <guid>https://dev.to/kodekloud/top-devops-queries-answered-2jb2</guid>
      <description>&lt;p&gt;In this blog, we are going to be answering some of your most pressing questions about the fast growing trend known as DevOps! &lt;/p&gt;

&lt;h3&gt;
  
  
  What is DevOps?
&lt;/h3&gt;

&lt;p&gt;Many people have varying opinions on what DevOps is defined as, and the answer you receive will depend on who you are asking. However, at a high level DevOps is simply the act of syncing the development and operation processes into a more collaborative process. It’s better to not think of DevOps as a single task or a tool, but as a culture that involves the use of a specific set of tools to ensure a fast change in a particular system while tracking and maintaining quality. These processes came into existence to bridge the divide between Dev and Ops and to smoothen the software delivery workflow. &lt;/p&gt;

&lt;h3&gt;
  
  
  Does DevOps require coding?
&lt;/h3&gt;

&lt;p&gt;While not every project will require heavy work on the development side, it is important for every engineer to have a healthy balance of development skills and operational knowledge. Practical ability in a few programming languages is valuable, and scripting knowledge is an added advantage. Most developers also have a basic understanding of Linux, since it is the operating system most widely used by programmers.&lt;/p&gt;

&lt;h3&gt;
  
  
  What are the skills required to become a great DevOps engineer?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Proper knowledge of different types of programming and scripting languages&lt;/li&gt;
&lt;li&gt;Familiarity with the various open-source tools needed in day-to-day work&lt;/li&gt;
&lt;li&gt;Knowledge of IT operations&lt;/li&gt;
&lt;li&gt;Testing and deployment of software code&lt;/li&gt;
&lt;li&gt;Understanding of VMs, containers and microservices&lt;/li&gt;
&lt;li&gt;Knowledge of infrastructure as code and related tools&lt;/li&gt;
&lt;li&gt;The ability to work in a collaborative environment&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Is DevOps easy to learn?
&lt;/h3&gt;

&lt;p&gt;This is an open-ended question with no single right answer; it really depends on a person's passion and learning consistency. Since schools and colleges rarely offer courses on the tools and processes surrounding DevOps, that knowledge gap can be hard to fill. On the other hand, there are plenty of external course providers training DevOps aspirants and equipping them with the skills to be successful. Take &lt;a href="https://kodekloud.com/"&gt;KodeKloud&lt;/a&gt; for example: we have helped equip thousands of people with the tools and skills they need to succeed in the DevOps industry and helped many earn a range of certifications. &lt;/p&gt;

&lt;h3&gt;
  
  
  How long does it take to learn DevOps?
&lt;/h3&gt;

&lt;p&gt;There is no universal benchmark for becoming a proficient DevOps practitioner. How quickly you get up to speed depends on your ability to learn new concepts with passion and consistency. For some, it may take a decent amount of time; for others, it may come naturally. Here at KodeKloud, we have a complete &lt;a href="https://kodekloud.com/p/learning-path"&gt;DevOps learning path&lt;/a&gt; for learners interested in taking a deep dive into DevOps.&lt;/p&gt;

&lt;h3&gt;
  
  
  Who can learn DevOps?
&lt;/h3&gt;

&lt;p&gt;The short answer is - anybody with a desire to learn about the culture and processes while obtaining a high earning potential can learn these skills. This can include: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Self taught practitioners looking to enter a new field&lt;/li&gt;
&lt;li&gt;Computer science students&lt;/li&gt;
&lt;li&gt;Operations Engineers&lt;/li&gt;
&lt;li&gt;Software Developers&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What are the stages in DevOps?
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--I8s2yQOG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xlmw4br1fn41bym2h6co.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--I8s2yQOG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xlmw4br1fn41bym2h6co.png" alt="Alt Text" width="880" height="453"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Plan&lt;/li&gt;
&lt;li&gt;Build&lt;/li&gt;
&lt;li&gt;Continuous integration&lt;/li&gt;
&lt;li&gt;Release&lt;/li&gt;
&lt;li&gt;Deploy&lt;/li&gt;
&lt;li&gt;Operate&lt;/li&gt;
&lt;li&gt;Monitor&lt;/li&gt;
&lt;li&gt;Continuous Feedback&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  What is Continuous Integration?
&lt;/h3&gt;

&lt;p&gt;Continuous integration is a software development method in which the developers' code gets integrated several times a day. Whenever a developer pushes changes to a repo, the changes are verified by an automated pipeline and checked for any errors or bugs based on a given test suite.&lt;/p&gt;
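&lt;p&gt;The verify-on-every-push idea above can be sketched as a tiny script a CI server might run for each commit. The &lt;code&gt;build&lt;/code&gt; and &lt;code&gt;run_tests&lt;/code&gt; functions are placeholders for a real build step and test suite:&lt;/p&gt;

```shell
# Minimal sketch of what a CI server does on every push:
# build the code, run the test suite, and report the result.
set -e  # abort immediately if any step fails

build() { echo "building..."; }          # placeholder for a real build step
run_tests() { echo "running tests..."; } # placeholder for a real test suite

build
run_tests
echo "CI: all checks passed"
```

&lt;p&gt;Because of &lt;code&gt;set -e&lt;/code&gt;, a failing build or test stops the pipeline before the success message, which is exactly how a CI run reports a broken commit.&lt;/p&gt;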

&lt;h3&gt;
  
  
  What is Continuous Delivery?
&lt;/h3&gt;

&lt;p&gt;Continuous delivery comes after continuous integration and is the ability to introduce changes into the environment with every commit, keeping the code production-ready so that it can be deployed on demand as a routine activity. Code changes can be anything from new features and bug fixes to updates and configuration changes. One important thing to note is that before any changes are delivered, safety checks are run by the CI pipeline so that bugs are detected before any issues arise. &lt;/p&gt;

&lt;h3&gt;
  
  
  What is Continuous Deployment?
&lt;/h3&gt;

&lt;p&gt;Continuous Deployment goes one step beyond Continuous Delivery: once the code is delivered and the safety checks pass, the changes are deployed into production automatically, without manual approval from a developer. &lt;/p&gt;

&lt;h3&gt;
  
  
  What are the most common DevOps tools used?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Plan: JIRA&lt;/li&gt;
&lt;li&gt;Build: Maven, Gradle, Docker, GitHub, GitLab&lt;/li&gt;
&lt;li&gt;Continuous integration: Jenkins, CircleCI, Travis CI&lt;/li&gt;
&lt;li&gt;Release: Jenkins, Bamboo&lt;/li&gt;
&lt;li&gt;Deploy: Ansible, Kubernetes, Heroku, Amazon Web Services, Azure, Google Cloud Platform&lt;/li&gt;
&lt;li&gt;Operate: Botmetric, Docker, Ansible, Puppet, Chef, Terraform&lt;/li&gt;
&lt;li&gt;Monitor: Nagios, Splunk&lt;/li&gt;
&lt;li&gt;Continuous Feedback: Slack&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What are the most popular DevOps tools that a beginner should know?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://jenkins.io/"&gt;Jenkins&lt;/a&gt;&lt;br&gt;
Jenkins is still considered as the most popular CI tool in the DevOps space. With Jenkins, it is effortless to achieve visual ops. To convert a CLI into a GUI button, click, wrap up the script as a Jenkins job, and it is done.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.docker.com/"&gt;Docker&lt;/a&gt;&lt;br&gt;
Docker is a tool for packaging and running containerized applications.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.ansible.com/"&gt;Ansible&lt;/a&gt; &lt;br&gt;
Ansible is an open-source software automation tool that automates software provisioning, configuration management, and application deployment. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://kubernetes.io/"&gt;Kubernetes&lt;/a&gt;&lt;br&gt;
Kubernetes is a powerful open-source platform for container orchestration that automates the deployment and management of containerized applications. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.openshift.com/"&gt;OpenShift&lt;/a&gt;&lt;br&gt;
OpenShift is Red Hat's open-source cloud development Platform as a Service (PaaS), which allows developers to create, test, and run their applications and deploy them to the cloud without any hassle.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What is a pipeline in DevOps?
&lt;/h3&gt;

&lt;p&gt;A pipeline consists of code (usually YAML) written by engineering teams to define the steps that tools such as Jenkins should take during the CI/CD process. A pipeline often goes through a flow such as: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build the code&lt;/li&gt;
&lt;li&gt;Test the code&lt;/li&gt;
&lt;li&gt;If the tests pass, deploy the application to the various environments, such as development, test, or production&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In other words, a pipeline is a series of events or jobs that run in a flow through the software delivery process, from start to end.&lt;/p&gt;
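&lt;p&gt;The stages above can be sketched as a plain shell script. This is a minimal illustration, not a real pipeline definition; the build, test, and deploy commands are placeholders to be replaced with your project's actual tooling.&lt;/p&gt;

```shell
#!/bin/sh
# Minimal sketch of a CI/CD pipeline's stages as a shell script.
# The commands inside each stage are placeholders.
set -e  # abort the pipeline as soon as any stage fails

build()      { echo "building the code"; }
test_suite() { echo "running tests"; }
deploy()     { echo "deploying to $1"; }

build
test_suite
# Reached only if the tests passed, thanks to `set -e`
for env in development test production; do
  deploy "$env"
done
```

&lt;p&gt;Tools like Jenkins express the same idea declaratively, with each stage defined in the pipeline file rather than as a shell function.&lt;/p&gt;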

&lt;h3&gt;
  
  
  Where can I start my DevOps learning?
&lt;/h3&gt;

&lt;p&gt;You can start your DevOps career at &lt;a href="https://kodekloud.com/"&gt;KodeKloud&lt;/a&gt;, where you will find best-in-class DevOps and cloud experts. With the practical sessions and labs provided, you will get hands-on clarity on the concepts and much more. &lt;/p&gt;

&lt;h3&gt;
  
  
  How can I get DevOps experience prior to the actual job?
&lt;/h3&gt;

&lt;p&gt;Here at KodeKloud, we have kept exactly this in mind. We’ve built a program called ‘KodeKloud Engineer’, where people can work on real-world challenges and solve them. You will be assigned a set of challenges from time to time, and you solve them just as you would solve problems on the job. This way, you gain experience with real-life scenarios and become prepared for the real job. Come join for free today: &lt;a href="https://kodekloud.com/p/kodekloud-engineer"&gt;KodeKloud Engineer&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What is the average DevOps Engineer salary?
&lt;/h3&gt;

&lt;p&gt;As per the &lt;a href="https://enterprisersproject.com/article/2018/2/devops-jobs-salaries-9-statistics-see"&gt;Enterprisers Project&lt;/a&gt;’s 2018 report on DevOps salaries: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;$133,378: The average &lt;a href="https://www.glassdoor.com/Salaries/devops-engineer-salary-SRCH_KO0,15.htm"&gt;salary&lt;/a&gt; in the U.S. for people with a DevOps Engineer title, according to the jobs site &lt;a href="https://www.glassdoor.com/index.htm"&gt;Glassdoor&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;$122,969: The average salary in the U.S. for people with a DevOps Engineer title, according to the jobs site &lt;a href="https://www.indeed.com/"&gt;Indeed&lt;/a&gt;. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;According to &lt;a href="https://www.payscale.com/research/IN/Job=Development_Operations_(DevOps)_Engineer/Salary"&gt;PayScale&lt;/a&gt;, the average salary for a DevOps engineer in India is approx. 7 lakhs per annum. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;According to &lt;a href="https://6figr.com/in/salary/senior-devops-engineer--t"&gt;6figr&lt;/a&gt;, employees as Senior DevOps Engineer earn an average of ₹19.9lakhs, mostly ranging from ₹12.0lakhs per year to ₹35.7lakhs per year based on 31 profiles. The top 10% of employees earn more than ₹30.5lakhs per year.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Start your DevOps career from the best in class experts at &lt;a href="https://kodekloud.com/"&gt;KodeKloud&lt;/a&gt; today!
&lt;/h4&gt;

</description>
      <category>devops</category>
      <category>beginners</category>
    </item>
    <item>
      <title>DevOps: Git for Beginners</title>
      <dc:creator>KodeKloud</dc:creator>
      <pubDate>Wed, 19 Aug 2020 11:25:58 +0000</pubDate>
      <link>https://dev.to/kodekloud/devops-git-for-beginners-33bc</link>
      <guid>https://dev.to/kodekloud/devops-git-for-beginners-33bc</guid>
      <description>&lt;p&gt;Software development and related methodologies have come a long way with the advent of Agile, Lean, and DevOps. Now, it is all about Automation - frequent and faster releases in small chunks, so the features and software reach the target audience in a much quicker and more efficient way. DevOps has become a center stage, and every software developer wants to learn more about modern software development approaches and tools. One such tool is 'Git,' and is a must-know for every developer out there.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://git-scm.com/"&gt;Git&lt;/a&gt; is a must for anyone who writes code or is involved in a DevOps Project. In this article, we will discuss what Git is and other concepts related to it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Git?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MFlIsI5m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nq9mmm2rs8zb3zpbjhbw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MFlIsI5m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nq9mmm2rs8zb3zpbjhbw.png" alt="Alt Text" width="880" height="495"&gt;&lt;/a&gt;&lt;br&gt;
Git was developed by Linus Torvalds, the creator of the Linux operating system. Git is a Version Control System (VCS), and by far the most commonly used one. On a fundamental level, there are two remarkable things a VCS allows you to do:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Track the changes you make to your files&lt;/li&gt;
&lt;li&gt;Collaborate more easily, by simplifying work on projects shared across multiple people and teams &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Git is software that runs locally on the developer's machine; the files and history are stored on the developer's computer. Developers can also use online hosts such as GitHub and Bitbucket to save a copy of their files and revision history. A central place to upload changes to and download changes from enables easy collaboration between teams.&lt;/p&gt;

&lt;p&gt;Git can automatically merge the changes; that way, two developers can work on different parts of the same file and later merge the changes without waiting for each other and losing each other's work.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why GIT?
&lt;/h3&gt;

&lt;p&gt;Software development involves a team of developers working together on the same code base. To avoid code conflicts between these developers, we need a version control system like Git. Git helps developers go back to older versions of the code base to fix bugs or revert changes. With branches, Git lets developers work on multiple feature implementations or bug fixes in parallel and merge those changes in when ready.&lt;/p&gt;

&lt;p&gt;With Git, you will be able to see what others are working on, review their code, view your previous changes, roll back to previous code, and do much more.&lt;/p&gt;

&lt;h4&gt;
  
  
  How to get Git?
&lt;/h4&gt;

&lt;p&gt;Git is usually installed by default on many systems.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You can download git for any operating system &lt;a href="https://git-scm.com/downloads"&gt;here&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Try &lt;a href="https://desktop.github.com/"&gt;GitHub Desktop&lt;/a&gt; (for Windows and Mac) if you like to use a graphical user interface (GUI).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you want to install it from scratch, this &lt;a href="https://git-scm.com/book/en/v2/Getting-Started-Installing-Git"&gt;link&lt;/a&gt; has details on installing Git on multiple operating systems.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Git key concepts
&lt;/h3&gt;

&lt;p&gt;Git key terminologies include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Version Control System: Git helps maintain various versions of the code base at different stages of the development lifecycle. It is also called a source code manager.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Commit: When a developer makes code changes, the changes are saved in the local repository. Every commit saves a copy of the changed/added files within Git.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Push: This sends the recent commits from the developer's local repository to a remote server like GitHub, GitLab, or Bitbucket.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pull: This downloads any changes made on the remote Git repository and merges them into the developer's local repository.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;SHA (Secure Hash Algorithm): A unique ID given to each commit.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Branch: A diverging line of development that lets you continue to work and code without disturbing the main/master line of development.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Role of GIT in DevOps
&lt;/h3&gt;

&lt;p&gt;The DevOps approach needs a version control system that tracks all changes. Git is a distributed version control system that allows developers to keep a local copy of the commits they make. Git as a DevOps tool empowers collaboration and faster release cycles, which is exactly what the DevOps concept is built upon.&lt;/p&gt;

&lt;p&gt;Version control is one of the best practices of DevOps. With version control, developers working on a project have the ability to version the software, share, collaborate, merge, and have backups.&lt;/p&gt;

&lt;p&gt;When working in large organizations, where multiple teams work together on the same project, Git comes handy and makes it easy to track changes made by each team. It helps in tracking code, version control, and effective management of code.&lt;/p&gt;

&lt;p&gt;Anyone willing to start or approach DevOps as a career should start from the basics, and Git is the fundamental tool that underpins everything else.&lt;/p&gt;

&lt;h3&gt;
  
  
  Most popular Git solutions
&lt;/h3&gt;

&lt;p&gt;Every company is now powered by software in one way or another. There are multiple projects handled by many developers in an organization, and they all need a means to track, upload, and receive changes to the code base. Effective repository management services are key to fast and efficient software development. The most popular ones based on Git are GitHub, Bitbucket, and GitLab.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ENefpgQe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xi51d6jqloy6wnrgtzzp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ENefpgQe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xi51d6jqloy6wnrgtzzp.png" alt="Alt Text" width="880" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/"&gt;GitHub&lt;/a&gt; is a git-based repository host launched initially in 2008 by PJ Hyatt, Tom Preston-Werner, and Chris Wanstrath. As of now, GitHub is the largest repository hosting platform with more than 38 million projects. It authorizes to host and review code, manage projects, and build software.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://bitbucket.org/"&gt;Bitbucket&lt;/a&gt; was also launched in 2008 by an Australian startup and it initially supported only for Mercurial projects. In 2010 Bitbucket was smartly bought by Atlassian, and from the next year, it started supporting Git hosting, which is now its primary focus. Bitbucket has become a household name and provides free unlimited private repos, many powerful integrations like Jira and Trello, and has built-in continuous delivery.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://about.gitlab.com/"&gt;GitLab&lt;/a&gt; started as a small project in 2011, aiming to provide an alternative to the available repository management solutions. The company was only incorporated in 2014. Gitlab now provides a complete DevOps setup for the organizations from continuous integration and delivery, agile development, security etc.&lt;/p&gt;

&lt;p&gt;Here is a basic &lt;a href="https://education.github.com/git-cheat-sheet-education.pdf"&gt;Git cheat sheet&lt;/a&gt; you would love to have.&lt;/p&gt;

&lt;h3&gt;
  
  
  Basic Git commands
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Install Git (on RHEL-based systems; use &lt;code&gt;apt install git&lt;/code&gt; on Debian/Ubuntu)&lt;br&gt;
&lt;code&gt;yum install git&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To see the version of git installed&lt;br&gt;
&lt;code&gt;git version&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To initialize a git repository&lt;br&gt;
&lt;code&gt;git init&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To check the status of the git repository&lt;br&gt;
&lt;code&gt;git status&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To stage several files at once&lt;br&gt;
&lt;code&gt;git add LICENSE README.md main.py ...&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To stage changes&lt;br&gt;
&lt;code&gt;git add main.py&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To commit changes&lt;br&gt;
&lt;code&gt;git commit -m "initial commit"&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To copy a repository&lt;br&gt;
&lt;code&gt;git clone username@host:/path/to/repository&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To set user-specific configuration values like email, username, file format, and so on&lt;br&gt;
&lt;code&gt;git config --global user.email youremail@example.com&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To view all remote repositories&lt;br&gt;
&lt;code&gt;git remote -v&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To list, create, or delete branches&lt;br&gt;
&lt;code&gt;git branch&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
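&lt;p&gt;Putting the commands above together, a first session might look like the following. The repository and file names are made up for illustration.&lt;/p&gt;

```shell
#!/bin/sh
# Illustrative first Git session; run in an empty scratch directory.
set -e
mkdir -p demo-repo && cd demo-repo
git init -q                               # initialize a repository
git config user.email "you@example.com"   # identity used for commits
git config user.name  "Your Name"
echo "# Demo" > README.md
git add README.md                         # stage the new file
git commit -q -m "initial commit"         # commit it to the local repository
git branch feature-x                      # create a branch for new work
git branch                                # list branches
```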

&lt;p&gt;Git is the most basic requirement in the software development and DevOps field.&lt;/p&gt;


&lt;h4&gt;
  
  
  Difference between Git and GitHub
&lt;/h4&gt;

&lt;p&gt;One of the most common questions we get asked is the difference between Git and GitHub. Git is the tool/technology that allows versioning of code, while GitHub is a centrally hosted Git server where the Git repository lives. Developers use the remote repository hosted on GitHub to share their work among themselves. GitHub is just one of many such hosting services, alongside GitLab, Bitbucket, etc.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rpAyTCG_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1e63pmby2dnnbbmu8xri.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rpAyTCG_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1e63pmby2dnnbbmu8xri.png" alt="Alt Text" width="512" height="282"&gt;&lt;/a&gt;&lt;br&gt;
Image source credits: &lt;a href="https://www.theserverside.com/video/Git-vs-GitHub-What-is-the-difference-between-them"&gt;TheServerSide&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Myth: Git is only for Software Developers
&lt;/h4&gt;

&lt;p&gt;Git is not just for developers. Git is for anyone working in IT: systems administrators, solutions architects, software engineers, team leads, and project managers should all learn and understand Git workflows.&lt;/p&gt;

&lt;p&gt;With the advent of Infrastructure as Code, system administrators and operations teams use Git to store infrastructure code such as Terraform configuration files, Ansible playbooks, Vagrantfiles, and supporting shell scripts.&lt;/p&gt;

&lt;h4&gt;
  
  
  Myth: GitHub is just a code base/repository
&lt;/h4&gt;

&lt;p&gt;GitHub is not just a code repository. GitHub is where new software is developed. Today most of the popular projects are developed on GitHub, with Kubernetes, Ansible, Terraform, TensorFlow, and Helm Charts being a few of the top repositories. All of these projects have extensive documentation built on GitHub. The Git workflow was built with collaboration between developers in mind: code reviews and approvals happen on GitHub, and GitHub has a project dashboard that enables project management.&lt;/p&gt;

&lt;p&gt;Checkout our new course on Git for Beginners &lt;a href="https://kodekloud.com/p/git-for-beginners?src=devto"&gt;here&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>beginners</category>
      <category>git</category>
    </item>
    <item>
      <title>Learn Shell Scripting</title>
      <dc:creator>KodeKloud</dc:creator>
      <pubDate>Fri, 10 Jul 2020 07:14:15 +0000</pubDate>
      <link>https://dev.to/kodekloud/learn-shell-scripting-246a</link>
      <guid>https://dev.to/kodekloud/learn-shell-scripting-246a</guid>
      <description>&lt;p&gt;We just launched our latest course on Shell Scripting for Beginners. If you want to increase productivity by automating daily repetitive tasks in Linux, then this course is for you. This is for those who have always wanted to learn shell scripting but didn't have sufficient coding or programming experience to get through it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZB4v9ib7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/womg3te2f03pfmb2cd7b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZB4v9ib7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/womg3te2f03pfmb2cd7b.png" alt="Alt Text" width="581" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This course is a beginner level course and is for those who are absolute beginners to shell scripting or programming. System Administrators, Developers or IT engineers who do not have any prior programming experience can go through this course to gain basic knowledge of shell scripting. As part of this course we will explain the necessary programming concepts required for shell scripting.&lt;/p&gt;

&lt;h3&gt;
  
  
  Learn by doing!
&lt;/h3&gt;

&lt;p&gt;We make learning fun, using examples of a space station launching missions to explore the universe, along with embedded hands-on labs that make sure you gain enough practice right after you learn each concept. We will test your scripts to make sure you have written them correctly and also provide feedback.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mVbTDsDG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/octf1okuhrxlhs91z7ze.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mVbTDsDG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/octf1okuhrxlhs91z7ze.gif" alt="Alt Text" width="880" height="385"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h4&gt;
  
  
  This course is divided into different sections where we discuss:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;What shell scripts are&lt;/li&gt;
&lt;li&gt;Getting started with your first shell script.&lt;/li&gt;
&lt;li&gt;Making your scripts executable.&lt;/li&gt;
&lt;li&gt;Variables&lt;/li&gt;
&lt;li&gt;Conditional statements&lt;/li&gt;
&lt;li&gt;For loops&lt;/li&gt;
&lt;li&gt;While loops&lt;/li&gt;
&lt;li&gt;Arithmetic operations&lt;/li&gt;
&lt;li&gt;Best practices&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Project
&lt;/h4&gt;

&lt;p&gt;Automate the deployment of a two-tier application using only shell scripts. Watch me do it following best practices.&lt;/p&gt;

&lt;p&gt;Throughout the course we will also discuss scripting best practices, such as what to do and what not to do and how to develop a script that’s reusable. We will also see some tips and tricks, such as IDEs and utilities that can help you improve your scripting skills.&lt;br&gt;
Check it out and let me know your thoughts.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;a href="https://bit.ly/ShellScriptsonDevCommunity"&gt;Shell Scripts for Beginners&lt;/a&gt;
&lt;/h4&gt;

</description>
      <category>career</category>
    </item>
    <item>
      <title>Scripting for Beginners</title>
      <dc:creator>KodeKloud</dc:creator>
      <pubDate>Tue, 07 Jul 2020 09:19:57 +0000</pubDate>
      <link>https://dev.to/kodekloud/introduction-to-scripting-19e8</link>
      <guid>https://dev.to/kodekloud/introduction-to-scripting-19e8</guid>
      <description>&lt;p&gt;The scripting language is a language where instructions are provided for a run time environment. Scripting languages are an integral part of the engineering teams in enterprises. They are often used in many areas, other than the server-side and client-side applications; scripting languages are well suited in system administration. Some notable examples of scripts used in system administration include Shell, Perl, and Python.&lt;/p&gt;

&lt;p&gt;With shell scripting, you can automate tasks that consume a lot of time. Shell scripting is a way to save time and focus on the things that matter rather than on repetitive, time-consuming tasks.&lt;/p&gt;

&lt;h2&gt;
  
  
  History of the Shell
&lt;/h2&gt;

&lt;p&gt;Unix in the 1970s shipped with the V6 shell, developed by Ken Thompson. It lacked scripting ability and was followed by the Bourne shell in 1977, which added scripting abilities that proved extremely useful for developers and is still in use as the default shell for the root account on some systems. &lt;/p&gt;

&lt;p&gt;Many improvements in the 1980s gave rise to popular shell variants, the most important being the C shell and the Korn shell. Each of these shells had its own syntax, which was, in some instances, completely different from the original shell's. The most popular shell today is Bash, which stands for Bourne-Again Shell and is a much-improved variant of the original Bourne shell.&lt;/p&gt;

&lt;h3&gt;
  
  
  What are Shell scripts?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Shells are interactive: they accept commands as input from users and execute them. A shell can also take commands from a file; we can write the desired commands in a file and execute them in the shell to avoid repetitive work. Such files are called shell scripts or shell programs. Shell scripts are more or less similar to &lt;a href="https://en.wikipedia.org/wiki/Batch_file"&gt;batch files&lt;/a&gt; in MS-DOS. Shell scripts are conventionally saved with the .sh file extension, for example myscript.sh&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Just like any other programming language, a shell script has syntax. It would be very easy and effortless to get started if you have prior experience with programming languages like C/C++, Python, etc.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A shell script involves the following basic elements –&lt;br&gt;
Shell Keywords – if, else, break, etc.&lt;br&gt;
Shell commands – cd, ls, echo, pwd, touch etc.&lt;br&gt;
Functions&lt;br&gt;
Control flow – if..then..else, case and shell loops, etc.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When it comes to the server side, scripting languages include JavaScript, Perl, PHP, etc., while client-side scripting languages include JavaScript, jQuery, AJAX, etc. Scripting languages are also heavily used in system administration and by developers to automate their day-to-day repetitive tasks; in such cases, languages like Shell, Python, and Perl help a lot.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
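&lt;p&gt;The elements above can be seen working together in a small example script. The names and messages are made up for illustration.&lt;/p&gt;

```shell
#!/bin/sh
# greet.sh -- shows shell commands, a function, a loop, and if..then..else

greet() {                        # a function
  echo "Hello, $1"
}

for name in Alice Bob; do        # a shell loop
  greet "$name"                  # calling the function
done

count=$(ls | wc -l)              # shell commands with command substitution
if [ "$count" -gt 0 ]; then      # if..then..else control flow
  echo "current directory is not empty"
else
  echo "current directory is empty"
fi
```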

&lt;h4&gt;
  
  
  Here are the few reasons why you should use the scripts:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;To avoid manual work, such as initializing something at boot time, you can write a script, and it saves a lot of time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To install prerequisites and build code with user input to enable/disable some features, you can write a script that does all the work for you.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For anything from stopping to starting multiple applications together, scripts come in handy.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Even when you have to scan a large collection of files and analyze them to find patterns, you can use scripts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To automate any mundane task in your day-to-day activities, you can write scripts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;According to the recent &lt;a href="https://insights.stackoverflow.com/survey/2020#technology-what-languages-are-associated-with-the-highest-salaries-worldwide"&gt;Stack Overflow survey&lt;/a&gt;, Bash/Shell/PowerShell is one of the languages that is associated with the highest salaries around the world.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ttX7Kf9j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/do5vnelkvlpm3kesc16l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ttX7Kf9j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/do5vnelkvlpm3kesc16l.png" alt="Alt Text" width="880" height="584"&gt;&lt;/a&gt;&lt;br&gt;
Image credits: Stack Overflow survey&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Bash/Shell/PowerShell is among one of the top 10 most popular technologies as per the &lt;a href="https://insights.stackoverflow.com/survey/2019#technology"&gt;Stackoverflow insight data&lt;/a&gt; derived from over 80K responses in the recent survey.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vg5E3_fu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4ah3zencodh1v8dpavvb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vg5E3_fu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4ah3zencodh1v8dpavvb.png" alt="Alt Text" width="771" height="613"&gt;&lt;/a&gt;&lt;br&gt;
Image credits: Stack Overflow insights data&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Scripting language use cases and examples
&lt;/h4&gt;

&lt;p&gt;Scripting is widely used by developers to automate tasks within an operating system, to enhance web pages within browser software, etc.&lt;/p&gt;

&lt;p&gt;JavaScript is a scripting language that can be used to inject logic into web pages, and it doesn’t need to be compiled the way Java or C does.&lt;/p&gt;

&lt;p&gt;Python is a scripting language that can be used to run almost any kind of automation in a few lines of code, which is one reason why DevOps practitioners think Python fits really well into the DevOps space.&lt;/p&gt;

&lt;p&gt;These days, scripting languages are generally associated with web development, where they are extensively used to make dynamic web applications that result in smooth user experience.&lt;/p&gt;

&lt;p&gt;WordPress sites are very good examples of where scripting languages come into action. A PHP script makes it possible to have your three or four latest blog posts automatically appear on a website’s homepage.&lt;/p&gt;

&lt;p&gt;The visitor viewing the site doesn’t see the script or its backend process; they just see the end result: a smooth user experience. Rather than hand-coding every single instance and outcome of a dynamic function, the site’s developer can implement such features with a one-time set of instructions.&lt;/p&gt;

&lt;p&gt;For another, more modern example, say there is an automated deploy task you want to run on a GitHub repository: a Bash script can be used inside a GitHub Actions workflow. When a commit happens, it can SSH into a machine, fetch the latest code, and serve it.&lt;/p&gt;
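&lt;p&gt;As a sketch, such a workflow could look like the following. The workflow name, secrets, host, and paths are all hypothetical; consult the GitHub Actions documentation for the exact syntax your setup needs.&lt;/p&gt;

```yaml
# .github/workflows/deploy.yml -- hypothetical sketch, not a drop-in file
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy over SSH
        run: |
          # HOST and DEPLOY_KEY are assumed to be repository secrets
          echo "${{ secrets.DEPLOY_KEY }}" > key && chmod 600 key
          ssh -i key -o StrictHostKeyChecking=no "user@${{ secrets.HOST }}" \
            'cd /srv/app && git pull && ./restart.sh'
```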

&lt;h4&gt;
  
  
  Conclusion
&lt;/h4&gt;

&lt;p&gt;Scripting languages are more crucial than ever. DevOps practitioners are expected to know a scripting language to save themselves valuable time. The automation world relies mostly on scripting languages, which are heavily used in software development to make things fast. &lt;/p&gt;

&lt;p&gt;Learning a scripting language is the best way to get your hands dirty in the coding world as well as in the DevOps industry. There is a scarcity of talented DevOps professionals who know scripting languages. Just remember: scripting is a must and will remain so for as long as software runs the world.&lt;/p&gt;

&lt;p&gt;People might assume that they need in-depth knowledge of languages such as Python, C, or even Java for higher functionality, but that’s not fundamentally true. The Bash scripting language is often more than enough, and it is compelling. There is a lot to learn to maximize its worth and usefulness when it comes to system administration and DevOps.&lt;/p&gt;

&lt;p&gt;Learn more about Shell Scripting in our ‘&lt;a href="https://kodekloud.com/p/shell-scripts-for-beginners"&gt;Shell Scripts for Beginners&lt;/a&gt;’ course!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mVgQ-lEH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/os27ai9823a8kdnl84dh.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mVgQ-lEH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/os27ai9823a8kdnl84dh.gif" alt="Alt Text" width="880" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>career</category>
      <category>bash</category>
    </item>
    <item>
      <title>CI/CD with Docker for Beginners</title>
      <dc:creator>KodeKloud</dc:creator>
      <pubDate>Tue, 23 Jun 2020 10:01:33 +0000</pubDate>
      <link>https://dev.to/kodekloud/ci-cd-with-docker-for-beginners-48e6</link>
      <guid>https://dev.to/kodekloud/ci-cd-with-docker-for-beginners-48e6</guid>
      <description>&lt;p&gt;Docker is a DevOps platform that is basically used to create, deploy, and run applications using the concept of containerization. With Docker, developers can pack all the dependencies and libraries of an application easily and ship it out as a single package. &lt;/p&gt;

&lt;p&gt;This helps developers and operations teams mitigate the environment-mismatch issues that used to occur. Developers can now focus more on features and deliverables than on the infrastructure compatibility and configuration aspects of the platform. Further, this promotes the microservices architecture, helping teams build highly scalable applications.&lt;/p&gt;
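&lt;p&gt;As a small illustration, packaging an application and its dependencies might look like the following Dockerfile. The base image, file names, and start command are placeholders for your own application.&lt;/p&gt;

```dockerfile
# Hypothetical example: package a small Python app and its dependencies
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt   # bake dependencies into the image
COPY . .
CMD ["python", "app.py"]                             # placeholder start command
```

&lt;p&gt;Building the image (&lt;code&gt;docker build -t myapp .&lt;/code&gt;) and running it (&lt;code&gt;docker run myapp&lt;/code&gt;) reproduces the same environment on any machine with Docker installed.&lt;/p&gt;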

&lt;h2&gt;
  
  
  Why Docker?
&lt;/h2&gt;

&lt;p&gt;Docker is an open-source project that has changed how software is built and shipped by providing a feasible way to containerize applications. Over the last few years, this has generated a lot of enthusiasm around containers at all stages of the software delivery lifecycle, from development to testing to production. Docker has become a mainstream platform in a short time since its debut in 2013. Industry giants like Amazon, Cisco, Google, Microsoft, Red Hat, VMware, and others have created the Open Container Initiative to develop common standards around containers.&lt;/p&gt;

&lt;h4&gt;
  
  
  Here's an overview of a few commonly used commands.
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--E5vUQ5-m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/3f15dq1vsv03k04tcprv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--E5vUQ5-m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/3f15dq1vsv03k04tcprv.png" alt="Docker Commands" width="880" height="545"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Image source: &lt;a href="https://www.taniarascia.com/continuous-integration-pipeline-docker/"&gt;Tania Rascia&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Here are some key benefits of using Docker
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;You get a high level of control over all changes because they are made using Docker containers and images, so you can roll back to a previous version whenever you want.&lt;/li&gt;
&lt;li&gt;With Docker, if a feature works in one environment, you can be confident it will work in others as well.&lt;/li&gt;
&lt;li&gt;Docker, when used with DevOps practices, simplifies the creation of an application topology embodying various interconnected components.&lt;/li&gt;
&lt;li&gt;Used with an orchestrator such as Kubernetes, it makes load-balancing configuration easier through Ingress and built-in service concepts.&lt;/li&gt;
&lt;li&gt;It integrates well with CI/CD tooling, making automated pipelines easier to build and run.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Read our full article on the same '&lt;a href="https://dev.to/kodekloud/the-role-of-docker-in-devops-1con"&gt;The role of Docker in DevOps&lt;/a&gt;'&lt;/p&gt;

&lt;p&gt;Docker can serve as a common interface between developers and operations personnel, as stated in DevOps principles, eliminating a source of friction between the two teams. It also lets the same image/binaries be saved and used at every step of the pipeline. Moreover, being able to deploy a thoroughly tested container without environment differences is the most significant advantage, ensuring that no errors are introduced in the build process.&lt;/p&gt;

&lt;p&gt;You can simply and seamlessly migrate applications into production, eliminating all the friction in between. Something that was once a tedious task can now be as simple as:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker stop container-id; docker run new-image&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;And if something goes wrong when deploying a new version of the application, you can always roll back quickly or switch to another container:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker stop container-id; docker start other-container-id&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Let us now look at how Docker plays a key role in the CI/CD pipeline. To begin with, Docker is supported by a majority of build systems like Jenkins, Bamboo, Travis, etc. Typically, each project has a Dockerfile checked into its code repository along with the rest of the application code. The Dockerfile, as we learned before, has instructions for building the Docker image. Once the code is checked in to GitHub, Jenkins pulls it and uses the Dockerfile in the repository to build the Docker image.&lt;/p&gt;
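&lt;p&gt;As a concrete illustration, a minimal Dockerfile for a Node.js application might look like the sketch below. The base image, port, and file names are assumptions for the example, not taken from any particular project:&lt;/p&gt;

```dockerfile
# Minimal example Dockerfile for a Node.js app (illustrative names)
FROM node:14-alpine

WORKDIR /app

# Copy the dependency manifests first so this layer is cached between builds
COPY package*.json ./
RUN npm install --production

# Copy the rest of the application source
COPY . .

EXPOSE 3000
CMD ["node", "server.js"]
```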

&lt;p&gt;You may use a supported Docker plugin for this purpose. After building the new Docker image, Jenkins tags it with a new build number, in this case, 1.0. On a successful build, this image can then be used to run tests.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IHVNEstA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/0fm7b4tbwf00itytev26.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IHVNEstA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/0fm7b4tbwf00itytev26.png" alt="Alt Text" width="880" height="670"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the tests are successful, it can then be pushed to the image repositories known as Docker registries, either to a repository internal to the company or external on Docker Hub. The image repository can then be integrated into a container hosting platform like Amazon ECS to host our application. This entire cycle of automated actions from making a change in the application to building, testing, releasing and finally deploying in production, completes a CI/CD pipeline.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--B5-CjRr3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/jj6oce95oc8e9c5e3rkl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--B5-CjRr3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/jj6oce95oc8e9c5e3rkl.png" alt="Alt Text" width="880" height="486"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next step is to deploy this image in production. Major cloud service providers like Amazon, Google, Azure, all support containers. Google Container Engine supports running containers in production on Kubernetes clusters.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RrjZvvCF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4vy6qwgf1h5q743f5mc1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RrjZvvCF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4vy6qwgf1h5q743f5mc1.png" alt="Alt Text" width="880" height="301"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Kubernetes is a container orchestration technology and an alternative to the Docker Swarm solution we learned about earlier. AWS has ECS, which stands for EC2 Container Service; it provides another mechanism to run containers in production. &lt;/p&gt;

&lt;p&gt;On-prem solutions like Pivotal Cloud Foundry have PKS, which stands for Pivotal Container Service and, again, uses Kubernetes underneath. Finally, Docker’s own container hosting platform, Docker Cloud, uses Docker Swarm underneath to orchestrate containers. As you can see, containers and Docker are supported everywhere, and there are many options to host containers online.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Because of the efficiency of virtualizing at the OS level, containerization allows for a much larger scale of applications in virtualized environments. In development and testing, applications can be built, tested, and deployed much more quickly. Container adoption is booming, and many companies have already embraced it.&lt;/p&gt;

&lt;p&gt;With Docker containers, once an application is containerized, developers can deploy the same container in different environments. As the container remains the same, the application runs identically in all environments without causing any dependency confusion.&lt;/p&gt;

&lt;p&gt;Containers and Docker give developers the freedom they want, as well as ways to build scalable apps that respond and adapt quickly to ever-changing business conditions. It is evident from its adoption by companies large and small that Docker will continue to gain fans, grow, and become of greater importance in the DevOps space.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://kodekloud.com/p/docker-for-the-absolute-beginner-hands-on"&gt;Get Your Free Docker Course Here&lt;/a&gt;
&lt;/h2&gt;

</description>
      <category>docker</category>
      <category>devops</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Kubernetes Concepts Explained for Developers</title>
      <dc:creator>KodeKloud</dc:creator>
      <pubDate>Wed, 10 Jun 2020 07:06:19 +0000</pubDate>
      <link>https://dev.to/kodekloud/kubernetes-concepts-explained-for-developers-22p</link>
      <guid>https://dev.to/kodekloud/kubernetes-concepts-explained-for-developers-22p</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fcz6t25j28bzclce6by58.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fcz6t25j28bzclce6by58.png" alt="Kubernetes architecture 2020"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Image source: &lt;a href="https://platform9.com/blog/kubernetes-enterprise-chapter-2-kubernetes-architecture-concepts/" rel="noopener noreferrer"&gt;Platform9&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With Docker, you can run a single instance of an application with a simple docker run command. In this case, to run a Node.js-based application, you run the docker run nodejs command. But that's just one instance of your application on one Docker host. What happens when the number of users increases and that instance is no longer able to handle the load? You deploy additional instances of your application by running the docker run command multiple times. That's something you have to do yourself: you have to keep a close watch on the load and performance of your application and deploy additional instances yourself. And not just that, you also have to keep a close watch on the health of these applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Container Orchestration
&lt;/h2&gt;

&lt;p&gt;And if a container were to fail, you should be able to detect that and run the docker run command again to deploy another instance of the application. But what about the health of the Docker host itself? What if the host crashes and becomes inaccessible?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Flyl3oex7vc0ib56phddw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Flyl3oex7vc0ib56phddw.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The containers hosted on that host become inaccessible too. So what do you do to solve these issues? You would need a dedicated engineer to sit and monitor the state, performance, and health of the containers and take the necessary actions to remediate any situation. But when you have large applications deployed across tens of thousands of containers, that's not a practical approach. You could build your own scripts to tackle these issues to some extent, but container orchestration is a purpose-built solution for exactly that. It consists of a set of tools and scripts that can help host containers in your production environment.&lt;/p&gt;

&lt;p&gt;Typically, a container orchestration solution consists of multiple Docker hosts that can host containers. That way even if one fails, the application is still accessible through the others. The container orchestration solution easily allows you to deploy hundreds or thousands of instances of your application with a single command.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0t741howmgif1876f5i5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0t741howmgif1876f5i5.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Some orchestration solutions can help you automatically scale up the number of instances when users increase and scale down when demand decreases. Some can even automatically add additional hosts to support the user load. And, not just clustering and scaling, container orchestration solutions also provide support for advanced networking between containers across different hosts, as well as load balancing user requests across the hosts. They also provide support for sharing storage between the hosts, as well as configuration management and security within the cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  Orchestration solutions
&lt;/h3&gt;

&lt;p&gt;There are multiple container orchestration solutions available today: Docker has Docker Swarm, Kubernetes comes from Google, and Mesos from Apache. Docker Swarm is really easy to set up and get started with, but it lacks some of the advanced auto-scaling features required for complex production-grade applications. Mesos, on the other hand, is quite difficult to set up and get started with but supports many advanced features.&lt;/p&gt;

&lt;p&gt;Kubernetes, arguably the most popular of them all, is a bit difficult to set up and get started with but provides a lot of options to customize deployments and has support for many different vendors. Kubernetes is now supported on all major public cloud service providers like GCP, Azure, and AWS, and the Kubernetes project is one of the top-ranked projects on GitHub. With Docker, you were able to run a single instance of an application using the Docker CLI by running the docker run command, which is great; running an application has never been so easy. With Kubernetes, using the Kubernetes CLI, known as kubectl, you can run a thousand instances of the same application with a single command.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fsg10ea8s89fnnpo4ycrw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fsg10ea8s89fnnpo4ycrw.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Kubernetes can scale up to 2000 instances with another command, and it can even be configured to do this automatically so that both the instances and the infrastructure itself scale up and down based on user load. Kubernetes can upgrade these 2000 instances of the application in a rolling-upgrade fashion, one at a time, with a single command. If something goes wrong, it can help you roll back these changes with a single command. Kubernetes can also help you test new features of your application by upgrading only a percentage of the instances through A/B testing methods.&lt;/p&gt;
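&lt;p&gt;The scaling, rolling upgrade, and rollback operations described above map onto kubectl commands like the following. The deployment name and image tag are placeholders, and a running cluster is assumed:&lt;/p&gt;

```shell
# Scale the application to 2000 instances
kubectl scale deployment my-app --replicas=2000

# Start a rolling upgrade to a new image version
kubectl set image deployment/my-app my-app=my-app:2.0

# Watch the rolling upgrade progress
kubectl rollout status deployment/my-app

# Roll back to the previous version if something goes wrong
kubectl rollout undo deployment/my-app
```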

&lt;p&gt;The Kubernetes open architecture provides support for many different network and storage vendors. Any network or storage brand that you can think of has a plugin for Kubernetes. Kubernetes supports a variety of authentication and authorization mechanisms. All major cloud service providers have native support for Kubernetes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F1pimainog2h6c7nkd8ru.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F1pimainog2h6c7nkd8ru.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The relation between Docker and Kubernetes
&lt;/h3&gt;

&lt;p&gt;Kubernetes uses Docker hosts to host applications in the form of Docker containers. Well, it need not be Docker all the time: Kubernetes supports alternatives to Docker as well, such as rkt or CRI-O. Let's take a quick look at the Kubernetes architecture. A Kubernetes cluster consists of a set of nodes. &lt;br&gt;
Let us start with nodes. A node is a machine, physical or virtual, on which the Kubernetes software is installed. A node is a worker machine, and that is where containers will be launched by Kubernetes. But what if the node on which the application is running fails? Well, obviously, our application goes down. So you need to have more than one node. &lt;/p&gt;

&lt;p&gt;A cluster is a set of nodes grouped together. This way, even if one node fails, your application is still accessible from the other nodes. Now, we have a cluster, but who is responsible for managing it? Where is information about the members of the cluster stored? How are the nodes monitored? When a node fails, how do you move its workload to another worker node? That's where the master comes in. The master is a node with the Kubernetes control plane components installed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fljj74g9eb90r3ykysxdr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fljj74g9eb90r3ykysxdr.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The master watches over the nodes in the cluster and is responsible for the actual orchestration of containers on the worker nodes.&lt;/p&gt;

&lt;p&gt;When you install Kubernetes on a system, you're actually installing the following components: an API server, an etcd server, a kubelet service, a container runtime engine like Docker, and a set of controllers and the scheduler.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F6szhxqi5jrvj33qn4b4v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F6szhxqi5jrvj33qn4b4v.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The API server acts as the front end for Kubernetes. Users, management devices, and command-line interfaces all talk to the API server to interact with the Kubernetes cluster. Next is the etcd key-value store. etcd is a distributed, reliable key-value store used by Kubernetes to store all the data needed to manage the cluster. Think of it this way: when you have multiple nodes and multiple masters in your cluster, etcd stores that information across the nodes in a distributed manner. etcd is also responsible for implementing locks within the cluster to ensure there are no conflicts between the masters. &lt;/p&gt;

&lt;p&gt;The scheduler is responsible for distributing work, or containers, across multiple nodes. It looks for newly created containers and assigns them to nodes. The controllers are the brains behind orchestration: they are responsible for noticing and responding when nodes, containers, or endpoints go down, and they make decisions to bring up new containers in such cases.&lt;br&gt;
The container runtime is the underlying software used to run containers; in our case, it happens to be Docker. And finally, the kubelet is the agent that runs on each node in the cluster. The agent is responsible for making sure that the containers are running on the nodes as expected. &lt;/p&gt;

&lt;p&gt;Finally, we also need to learn a little bit about the Kubernetes command-line utility, kubectl, pronounced "kube control" or sometimes "kube cuddle". kubectl is the Kubernetes CLI, used to deploy and manage applications on a Kubernetes cluster, get cluster-related information, get the status of the nodes in the cluster, and much more. The kubectl run command is used to deploy an application on the cluster, the kubectl cluster-info command is used to view information about the cluster, and the kubectl get nodes command is used to list all the nodes that are part of the cluster. So, to run hundreds of instances of your application across hundreds of nodes, all you need is a single Kubernetes command like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fuweabmaoh2g8zlifjwz6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fuweabmaoh2g8zlifjwz6.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Well, that's all we have for now: a quick introduction to Kubernetes and its architecture. We currently have four Kubernetes courses on KodeKloud that will take you from absolute beginner to certified expert, so have a look at them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kodekloud.com/p/kubernetes-for-the-absolute-beginners-hands-on" rel="noopener noreferrer"&gt;Kubernetes for the absolute beginners&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kodekloud.com/p/certified-kubernetes-administrator-with-practice-tests" rel="noopener noreferrer"&gt;Certified Kubernetes Administrator(CKA)&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kodekloud.com/p/kubernetes-beginner-to-expert" rel="noopener noreferrer"&gt;Kubernetes - Absolute beginner to expert&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kodekloud.com/p/kubernetes-certification-course-labs" rel="noopener noreferrer"&gt;Certified Kubernetes application developer(CKAD)&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>career</category>
      <category>devops</category>
    </item>
    <item>
      <title>Docker For Absolute Beginners</title>
      <dc:creator>KodeKloud</dc:creator>
      <pubDate>Sun, 31 May 2020 07:02:42 +0000</pubDate>
      <link>https://dev.to/kodekloud/docker-for-absolute-beginners-3pb9</link>
      <guid>https://dev.to/kodekloud/docker-for-absolute-beginners-3pb9</guid>
      <description>&lt;p&gt;Every company depends on software for innovation, and one of the biggest innovations in the software development field was the invention of containers. Containers have changed the way software is built and shipped these days. The company Docker took this route and made containers available to everyone. It has made developers worry-free thanks to its amazing features. Docker also enables the adoption of DevOps in enterprises by eliminating the gap between Dev and Ops teams, which often used to get into conflicts.&lt;br&gt;
Today, we will go through some fundamentals of Docker and more.&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker Containers
&lt;/h2&gt;

&lt;p&gt;We're now going to look at a high-level overview of why you need Docker and what it can do for you. Let me start by sharing how I got introduced to Docker. In one of my previous projects, I had a requirement to set up an end-to-end stack including various different technologies: a web server using Node.js, a database such as MongoDB, a messaging system like Redis, and an orchestration tool like Ansible. We had a lot of issues developing this application with all these different components. First, we had to take care of their compatibility with the underlying operating system: we had to ensure that all these different services were compatible with the version of the operating system we were planning to use.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxvh044py3v1uh1qp60sa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxvh044py3v1uh1qp60sa.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There have been times when certain versions of these services were not compatible with the OS. And we've had to go back and look for another OS that was compatible with all these different services. Secondly, we had to check the compatibility between the services and the libraries and dependencies on the OS. We've had issues where one service requires one version of a dependent library, whereas another service requires another version.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F1ao16z2r8cuo8ujfmy21.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F1ao16z2r8cuo8ujfmy21.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The architecture of our application changed over time: we had to upgrade to newer versions of these components, change the database, and so on. And every time something changed, we had to go through the same process of checking compatibility between these various components and the underlying infrastructure. This compatibility-matrix issue is usually referred to as the matrix from hell.&lt;/p&gt;

&lt;p&gt;Next, every time we had a new developer on board, we found it really difficult to set up a new environment. The new developers had to follow a large set of instructions and run hundreds of commands to finally set up their environments. They had to make sure they were using the right operating system and the right versions of each of these components, and each developer had to set all that up themselves each time. We also had different development, test, and production environments. &lt;/p&gt;

&lt;p&gt;One developer may be comfortable using one OS while others use another one, and so we couldn't guarantee that the application we were building would run the same way in different environments. All of this made our lives really difficult.&lt;br&gt;
So I needed something that could help us with the compatibility issue, something that would allow us to modify or change these components without affecting the other components, and even modify the underlying operating system as required. That search landed me on Docker.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fc3u9w9nj4rzy565ezctl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fc3u9w9nj4rzy565ezctl.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With Docker, I was able to run each component in a separate container, with its own libraries and its own dependencies, all on the same VM and OS but within separate environments, or containers. We just had to build the Docker configuration once, and all our developers could then get started with a simple docker run command, irrespective of the underlying operating system they ran. All they needed to do was make sure they had Docker installed on their systems.&lt;/p&gt;
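&lt;p&gt;As a sketch of what that looks like, each component of the stack can be started in its own container with a docker run command. The mongo and redis images are the official ones on Docker Hub; the application image name and port are hypothetical:&lt;/p&gt;

```shell
# Each service runs in its own isolated container on the same host
docker run -d --name database mongo
docker run -d --name messaging redis
# Hypothetical application image built on top of the official node image
docker run -d --name webserver -p 3000:3000 my-node-app
```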

&lt;h3&gt;
  
  
  So what are containers?
&lt;/h3&gt;

&lt;p&gt;Containers are completely isolated environments: they can have their own processes or services, their own network interfaces, and their own mounts, just like virtual machines, except that they all share the same operating system kernel. We will look at what that means in a bit. But it's also important to note that containers are not new with Docker.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fi4yfo8lcivvntqn546dd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fi4yfo8lcivvntqn546dd.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Containers have existed for about 10 years now, and some of the different container technologies are LXC, LXD, LXCFS, etc. Docker utilizes LXC containers. Setting up these container environments is hard, as they are very low level, and that is where Docker comes in: it offers a high-level tool with several powerful functions, making it really easy for end users like us.&lt;/p&gt;

&lt;h3&gt;
  
  
  How does Docker work?
&lt;/h3&gt;

&lt;p&gt;To understand how Docker works, let us first revisit some basic operating system concepts. If you look at operating systems like Ubuntu, Fedora, and CentOS, they all consist of two things: an OS kernel and a set of software. The operating system kernel is responsible for interacting with the underlying hardware. While the OS kernel remains the same (Linux, in this case), it's the software above it that makes these operating systems different.&lt;/p&gt;

&lt;p&gt;The software may consist of a different user interface, drivers, compilers, file managers, developer tools, etc. So you have a common Linux kernel shared across all these operating systems and some custom software that differentiates them from each other.&lt;br&gt;
We said earlier that Docker containers share the underlying kernel. What does sharing the kernel actually mean? Let's say we have a system with Ubuntu and Docker installed on it. Docker can run any flavor of OS on top of it, as long as they're all based on the same kernel, in this case Linux. If the underlying operating system is Ubuntu, Docker can run a container based on another distribution like Debian, Fedora, SUSE, or CentOS. Each Docker container only has the additional software, which we just talked about, that makes these operating systems different.&lt;/p&gt;

&lt;p&gt;Docker utilizes the underlying kernel of the Docker host, which works with all the operating systems above. So what is an OS that does not share the same kernel as these? Windows. So you won't be able to run a Windows-based container on a Docker host with a Linux OS on it; for that, you would require Docker on a Windows server.&lt;/p&gt;

&lt;p&gt;You might ask, isn't that a disadvantage then, not being able to run another kernel on the OS? The answer is no. &lt;br&gt;
Unlike hypervisors, Docker is not meant to virtualize and run different operating systems and kernels on the same hardware. The main purpose of Docker is to containerize applications, ship them, and run them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Differences between virtual machines and containers
&lt;/h3&gt;

&lt;p&gt;That brings us to the differences between virtual machines and containers, a comparison we tend to make, especially those of us from a virtualization background. As you can see on the right, in the case of Docker, we have the underlying hardware infrastructure, then the operating system, and Docker installed on the OS. Docker can then manage the containers, which run with libraries and dependencies alone.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F6modlz6aj09whss7rozh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F6modlz6aj09whss7rozh.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the case of a virtual machine, we have the OS on the underlying hardware, then a hypervisor like ESX or virtualization of some kind, and then the virtual machines. As you can see, each virtual machine has its own operating system inside it, then the dependencies, and then the application. This overhead causes higher utilization of the underlying resources, as there are multiple virtual operating systems and kernels running. Virtual machines also consume more disk space: each VM is heavy, usually gigabytes in size, whereas Docker containers are lightweight, usually megabytes in size. This allows Docker containers to boot up faster, usually in a matter of seconds, whereas virtual machines take minutes to boot up, as they need to boot an entire operating system.&lt;/p&gt;

&lt;p&gt;It is also important to note that Docker has less isolation, as more resources, like the kernel, are shared between containers, whereas VMs are completely isolated from each other. Since VMs don't rely on the underlying operating system or kernel, you can have different types of operating systems, such as Linux-based or Windows-based, on the same hypervisor, whereas this is not possible on a single Docker host. So these are some of the differences between the two.&lt;/p&gt;

&lt;p&gt;So how is it done? There are a lot of containerized versions of applications readily available today. Most organizations have their products containerized and available in a public Docker registry, such as Docker Hub or the Docker Store. For example, you can find images of the most common operating systems, databases, and other services and tools. Once you identify the images you need and install Docker on your host, bringing up an application stack is as easy as running a docker run command with the name of the image.&lt;/p&gt;

&lt;p&gt;In this case, running a docker run ansible command will run an instance of Ansible on the Docker host. Similarly, run an instance of MongoDB, Redis, or Node.js using the docker run command. When you run Node.js, just point to the location of the code repository on the host. If you need to run multiple instances of the web service, simply add as many instances as you need and configure a load balancer of some kind in front. In case one of the instances were to fail, simply destroy that instance and launch a new one. There are other solutions available for handling such cases, which we will look at later. We've been talking about images and containers.&lt;/p&gt;
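As a sketch, such a stack could be brought up with a few docker run commands. The image names here are the official Docker Hub images (mongo, redis, node); the port and the server.js entry point are illustrative assumptions:

```shell
docker run -d --name db mongo        # a MongoDB instance
docker run -d --name cache redis     # a Redis instance

# Node.js, pointing at the code repository on the host:
docker run -d --name web -p 3000:3000 \
  -v "$PWD":/app -w /app node:18 node server.js
```

If an instance fails, docker rm -f web followed by the same docker run command launches a fresh one.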

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fjeq26pvs2cto6e901ciy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fjeq26pvs2cto6e901ciy.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's understand the difference between the two. An image is a package or a template, just like a VM template you might have worked with in the virtualization world. It is used to create one or more containers. Containers are running instances of images that are isolated and have their own environments and sets of processes.&lt;br&gt;
As we have seen, a lot of products have been Dockerized already. In case you cannot find what you're looking for, you can create an image yourself and push it to the Docker Hub repository, making it available to the public.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0p19k2e6gq9c4why54nn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0p19k2e6gq9c4why54nn.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Traditionally, developers developed applications and then handed them over to the Ops team to deploy and manage in production environments. They did that by providing a set of instructions, such as information about how the host must be set up, what prerequisites are to be installed on the host, how the dependencies are to be configured, etc.&lt;br&gt;
The Ops team uses this guide to set up the application. Since the Ops team did not develop the application themselves, they struggle with setting it up. When they hit an issue, they work with the developers to resolve it. With Docker, a major portion of the work involved in setting up the infrastructure is now in the hands of the developers, in the form of a Dockerfile. The guide that the developers previously wrote to set up the infrastructure can now easily be put together into a Dockerfile to create an image for the application.&lt;/p&gt;
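As a sketch of what such a Dockerfile might look like for a hypothetical Node.js application (the base image, file names, and port are assumptions for illustration, not taken from the article):

```dockerfile
FROM node:18-alpine          # base image: OS layer plus the runtime prerequisite
WORKDIR /app
COPY package*.json ./
RUN npm install              # install the declared dependencies
COPY . .
EXPOSE 3000                  # port the application listens on
CMD ["node", "server.js"]    # how the application is started
```

Running docker build -t my-app . against this file bakes the whole setup guide into an image the Ops team can deploy unchanged.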

&lt;p&gt;This image can now run on any container platform, and it's guaranteed to run the same way everywhere. The Ops team can now simply use the image to deploy the application. Since the image was already working when the developer built it, and operations is not modifying it, it continues to work the same way when deployed in production. To learn more about containers, check out my other course, ‘&lt;a href="https://kodekloud.com/p/docker-for-the-absolute-beginner-hands-on" rel="noopener noreferrer"&gt;Docker for the Absolute Beginner&lt;/a&gt;’.&lt;/p&gt;

&lt;h4&gt;
  
  
  Watch the video on '&lt;a href="https://youtu.be/zJ6WbK9zFpI" rel="noopener noreferrer"&gt;Docker for Beginners: Full Course&lt;/a&gt;' on YouTube.
&lt;/h4&gt;

</description>
      <category>docker</category>
      <category>career</category>
      <category>devops</category>
    </item>
    <item>
      <title>What Makes Linux So Popular Among Developers</title>
      <dc:creator>KodeKloud</dc:creator>
      <pubDate>Wed, 27 May 2020 07:18:05 +0000</pubDate>
      <link>https://dev.to/kodekloud/what-makes-linux-so-popular-among-developers-144e</link>
      <guid>https://dev.to/kodekloud/what-makes-linux-so-popular-among-developers-144e</guid>
      <description>&lt;p&gt;Linux makes automation easy; hence, it has become an integral skill for DevOps professionals. The best advice anybody can get while starting their journey in DevOps is to learn and understand the basics of Linux thoroughly. This makes the DevOps career path easier in the future. Linux is going to be there no matter what; you have to face it and work with it to become a great DevOps engineer. Today, we will see why Linux is famous and the reasons that make it so popular among developers and DevOps engineers.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Linux?
&lt;/h3&gt;

&lt;p&gt;Linux is a high-performing, completely free operating system that closely resembles UNIX.&lt;br&gt;
Linus Torvalds, a student at the University of Helsinki in Finland, started Linux in 1991, inspired by his dissatisfaction with MS-DOS and his strong desire for a free version of UNIX for his new computer. Linux soon became a global project, with many developers supporting and contributing to it via the internet. At first, individual developers used and experimented with Linux, and then the world saw heavy usage of Linux by corporations, educational institutions, and governments.&lt;/p&gt;

&lt;h4&gt;
  
  
  Linux stats
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;In 2019, 100% of the world’s &lt;a href="https://itsfoss.com/linux-runs-top-supercomputers/"&gt;supercomputers ran on Linux&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Linux is once again the most loved platform for development, according to the 2019 &lt;a href="https://insights.stackoverflow.com/survey/2019#technology-_-platforms"&gt;StackOverflow Developer Survey&lt;/a&gt; results.&lt;/li&gt;
&lt;li&gt;Linux runs on all of the &lt;a href="https://itsfoss.com/linux-runs-top-supercomputers/"&gt;top 500 supercomputers&lt;/a&gt;, again.&lt;/li&gt;
&lt;li&gt;96.3% of the world’s top 1 million web servers run on Linux.&lt;/li&gt;
&lt;li&gt;Every Facebook post you make, every YouTube video you watch, and every Google search you run is &lt;a href="https://www.zdnet.com/article/can-the-internet-exist-without-linux/"&gt;done on Linux&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;95% of the servers that run the world’s top 1 million domains are powered by Linux.&lt;/li&gt;
&lt;li&gt;In 2018, Android dominated the mobile OS market with a 75.16% share.&lt;/li&gt;
&lt;li&gt;85% of all smartphones are based on Linux.&lt;/li&gt;
&lt;li&gt;Even famous DevOps tools like Docker, Ansible, and Kubernetes run with the help of Linux. Initially, Docker was only available on Linux-based systems. A Linux system is required to act as the Ansible controller, and master nodes in Kubernetes can only be Linux systems.
[Source: &lt;a href="https://hostingtribunal.com/blog/linux-statistics/#gref"&gt;Hostingtribunal.com&lt;/a&gt;]&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  5 Reasons you should love Linux
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Open source&lt;/strong&gt;&lt;br&gt;
The source code of Linux falls under the FOSS (Free and Open Source Software) category, and developers can always view and modify the source however they want. Around the world, several countries are developing their own versions of Linux, which helps them strategically build their own OSs for specialized areas such as defence, communications, and government.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Customization&lt;/strong&gt;&lt;br&gt;
With Linux, users get great flexibility in customizing the system to their desires and requirements. The philosophy of Linux is built on many small base programs, each of which does one task very well.&lt;br&gt;
Linux provides a powerful command-line interface that helps system administrators write shell scripts and automate routine, repetitive tasks.&lt;/p&gt;
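As a small example of the kind of routine task such a script can automate, the following sketch compresses and date-stamps log files; the directory and file names here are illustrative stand-ins:

```shell
# Rotate any *.log files in a directory into date-stamped .gz archives.
workdir=$(mktemp -d)                      # stand-in for a real log directory
printf 'example log line\n' > "$workdir/app.log"

for f in "$workdir"/*.log; do
  gzip "$f"                               # compress the log
  mv "$f.gz" "$f.$(date +%Y%m%d).gz"      # stamp it with today's date
done

ls "$workdir"
```

Dropped into cron, a script like this runs the chore unattended every night.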

&lt;p&gt;&lt;strong&gt;3. Free and easy to use&lt;/strong&gt;&lt;br&gt;
Linux is open source and simple to understand. This leads curious developers to look at how things work under the hood and experiment with them, which increases developer adoption. Businesses can also use the software free of cost and reduce their IT budgets considerably.&lt;br&gt;
The GUI has developed and improved to such an extent that most of what typical users want can be done on Linux.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Automation&lt;/strong&gt;&lt;br&gt;
Automation is key for modern enterprises, and that is one reason why companies hiring DevOps professionals ask for Linux knowledge.&lt;br&gt;
Linux makes it much easier to carry out server-side work efficiently. Once scripts are written to install server software (MySQL, Apache, SSH, FTP, etc.), configure it, and maintain it, everything is taken care of automatically.&lt;/p&gt;
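A minimal sketch of the idempotent core such an install script might have; the function name is made up for illustration, and the actual package-manager call is left as a comment so the sketch runs anywhere:

```shell
# Check for a tool before "installing" it, so the script can run repeatedly
# without redoing work that is already done.
ensure_installed() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1 already present"
  else
    echo "would install $1"   # a real script would invoke apt-get/yum here
  fi
}

ensure_installed sh
ensure_installed mysql-server-example-not-installed
```

Wrapping each setup step in a check like this is what lets the same script both install and maintain a server.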

&lt;p&gt;&lt;strong&gt;5. High endurance&lt;/strong&gt;&lt;br&gt;
The uptime and availability of Linux servers are very high. Linux has the highest number of servers running on the Internet. According to an article on the ZDNet website, 96.3 percent of the top 1 million web servers run on Linux. Twenty-three of the top twenty-five websites run on Linux; the two remaining websites in the top twenty-five are live.com and bing.com, which belong to Microsoft! [Source: &lt;a href="https://opensourceforu.com/2020/03/reasons-to-use-linux/"&gt;Opensourceforu&lt;/a&gt;]&lt;/p&gt;

&lt;h3&gt;
  
  
  Why are companies eager to hire Linux people?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;blockquote&gt;
&lt;p&gt;Linux continues to dominate employers’ needs. Today, &lt;a href="https://insights.dice.com/2020/03/20/demand-skills-february-march-2020-python-sql/"&gt;Linux is the highest-ranked skill&lt;/a&gt; in software development and the job market. According to dice.com, the demand for Linux experts is so heavy that some companies are even allowing employees to write their own cheques.&lt;/p&gt;
&lt;/blockquote&gt;&lt;/li&gt;
&lt;li&gt;&lt;blockquote&gt;
&lt;p&gt;Operating system capability is one of the big reasons why companies hire Linux experts. With advancements in the cloud portfolio, firms are always in search of people skilled in automation, private cloud, containers, orchestration, and server virtualization. As we all know, these are the areas in which Linux is leading today's cloud industry.&lt;/p&gt;
&lt;/blockquote&gt;&lt;/li&gt;
&lt;li&gt;&lt;blockquote&gt;
&lt;p&gt;DevOps is booming, and in DevOps, Linux knowledge is essential because it helps in automation. Firms look for such a combination of skills that fit well in the DevOps team.&lt;/p&gt;
&lt;/blockquote&gt;&lt;/li&gt;
&lt;li&gt;&lt;blockquote&gt;
&lt;p&gt;For IT professionals seeking job security, good pay, career advancement, or raises, Linux is leading the industry. Linux is a versatile, robust, and scalable solution for IT companies of all shapes and sizes.&lt;br&gt;
Getting trained in Linux basics, tools, and implementation can ease your entry into a DevOps career, and Linux has been listed under the best-paying and most exciting jobs in the IT field right now.&lt;br&gt;
Top corporates and governments use Linux.&lt;br&gt;
Over the past few years, there has been massive growth in the number of Linux-based products that have had a significant impact on the cloud IT field:&lt;/p&gt;
&lt;/blockquote&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Tools
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes: A container orchestration tool from Google, now part of the CNCF&lt;/li&gt;
&lt;li&gt;OpenStack: A software platform for infrastructure-as-a-service cloud platforms&lt;/li&gt;
&lt;li&gt;OpenDaylight: A Java-based Linux Foundation project to help accelerate the adoption of SDN and Network Functions Virtualization (NFV)&lt;/li&gt;
&lt;li&gt;Docker: A software containerization solution&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Companies
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Amazon Web Services - a leading cloud platform that helps millions of enterprises power their infrastructure.&lt;/li&gt;
&lt;li&gt;The Sony PlayStation 4 &lt;a href="https://www.extremetech.com/gaming/159476-ps4-runs-orbis-os-a-modified-version-of-freebsd-thats-similar-to-linux"&gt;runs&lt;/a&gt; on Orbis OS, a modified version of FreeBSD, a UNIX-like OS similar to Linux.&lt;/li&gt;
&lt;li&gt;In 2019, IBM &lt;a href="https://www.zdnet.com/article/ibms-red-hat-acquisition-moves-forward/"&gt;acquired&lt;/a&gt; Red Hat, the company behind Red Hat Enterprise Linux. An IDC report states that Red Hat is expected to contribute $10 trillion to the global economy.
[Source: Hostingtribunal.com]&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Linux is here to stay, and it has become a must-know skill for everyone working in DevOps. It is one of the most tried and trusted solutions in the history of computer software. Longevity, maturity, and high security make Linux one of the most advanced and trusted OSes available today. Linux is an ideal solution for enterprises that want to use it and its peripherals to customize their own network and data center infrastructure. Having Linux knowledge is a boon that makes it easy for people to enter a DevOps career and get paid well.&lt;/p&gt;

&lt;p&gt;Our &lt;a href="https://kodekloud.com/p/the-linux-basics-course"&gt;Linux Basics Course&lt;/a&gt; is now live.&lt;/p&gt;

</description>
      <category>linux</category>
      <category>career</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
