<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Deepak Sabhrawal</title>
    <description>The latest articles on DEV Community by Deepak Sabhrawal (@devdpk).</description>
    <link>https://dev.to/devdpk</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F299575%2F82da9d6a-760a-417f-ac35-99609d209638.jpeg</url>
      <title>DEV Community: Deepak Sabhrawal</title>
      <link>https://dev.to/devdpk</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/devdpk"/>
    <language>en</language>
    <item>
      <title>Latest CKAD Tips 2023</title>
      <dc:creator>Deepak Sabhrawal</dc:creator>
      <pubDate>Wed, 08 Nov 2023 17:52:12 +0000</pubDate>
      <link>https://dev.to/devdpk/ckad-tips-2023-3n13</link>
      <guid>https://dev.to/devdpk/ckad-tips-2023-3n13</guid>
      <description>&lt;p&gt;Finally, I was able to clear my CKAD certification with good 92% marks. Although passing is just 66% but these marks shows how seriously I have prepared for this certification because of it's practical nature as compared to the other certification exams.&lt;/p&gt;

&lt;p&gt;Courses: &lt;br&gt;
The Udemy course by Mumshad is very good and, I think, sufficient on its own as it covers all the topics: &lt;a href="https://www.udemy.com/course/certified-kubernetes-application-developer/"&gt;https://www.udemy.com/course/certified-kubernetes-application-developer/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Mock Exam Series (a must-have):&lt;br&gt;
I purchased the mock exam series from KodeKloud: &lt;a href="https://kodekloud.com/lessons/certified-kubernetes-application-developer-mock-exam-series/"&gt;https://kodekloud.com/lessons/certified-kubernetes-application-developer-mock-exam-series/&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;It is a bit costly compared to the course, but it is a must-have to boost your confidence for the exam.&lt;/p&gt;

&lt;p&gt;A nice free resource as well: &lt;a href="https://killercoda.com/killer-shell-ckad"&gt;https://killercoda.com/killer-shell-ckad&lt;/a&gt;&lt;br&gt;
You can also use their free one-hour sandbox environment for practice and reset it an unlimited number of times.&lt;/p&gt;

&lt;p&gt;TIPS:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Practice thoroughly after completing the theory.&lt;/li&gt;
&lt;li&gt;It's confirmed that you get partial marks as well.
Knowing this was a huge relief for me, because the
KodeKloud labs only consider a question complete
if all of its sub-tasks are completed.&lt;/li&gt;
&lt;li&gt;I spent a good amount of time over one month and completed
the theory twice before starting the mock exam series.&lt;/li&gt;
&lt;li&gt;Repeat the labs until you start to feel comfortable
with the questions. The mock series will give you the
much-needed confidence to schedule the exam.&lt;/li&gt;
&lt;li&gt;The approach I usually follow: go through one round of theory
at a decent pace with proper discipline. It will give you
an idea of where you stand and how much time the
certification may take you.
&lt;/li&gt;
&lt;li&gt;Then set a target and start working on the topics where
you feel weak. Refer to different resources for those
topics and prepare well. &lt;/li&gt;
&lt;li&gt;Start with the mock test series. It has 10 tests; by about
halfway through you will start feeling comfortable, and you
can then schedule your exam and go for a final sprint of a
few days.&lt;/li&gt;
&lt;li&gt;I went through the theory twice and the mock test series twice.
This was enough to give me the confidence to face the
real exam. &lt;/li&gt;
&lt;li&gt;Make good use of the free killer.sh mock
exam sessions. You get 2 sessions, and each session has similar
questions. My suggestion is to use only one and keep the
other session for later, in case you are unlucky in
the first attempt.&lt;/li&gt;
&lt;li&gt;Always use your VIM settings in the practice exams so that
you can easily memorise them and set them up in the real exam.
These VIM settings and aliases are great time savers,
and time is the key in this exam.&lt;/li&gt;
&lt;li&gt;Don't get stuck on a question; flag it and keep moving.
Yes, you get a flagging option, and it clearly shows
how many questions you have flagged to revisit. &lt;/li&gt;
&lt;li&gt;Vim settings I think are enough for the exam: (~/.vimrc)

&lt;ul&gt;
&lt;li&gt;set ts=2 (preloaded)&lt;/li&gt;
&lt;li&gt;set sw=2 (preloaded)&lt;/li&gt;
&lt;li&gt;set expandtab (preloaded)&lt;/li&gt;
&lt;li&gt;set ai (Auto Indentation)&lt;/li&gt;
&lt;li&gt;set si (Smart Indentation)&lt;/li&gt;
&lt;li&gt;set ic (Ignore case in search)&lt;/li&gt;
&lt;li&gt;set nu (Line numbers)&lt;/li&gt;
&lt;li&gt;set ru (Ruler)&lt;/li&gt;
&lt;li&gt;syntax on&lt;/li&gt;
&lt;li&gt;set cursorline (I didn't find it that useful)&lt;/li&gt;
&lt;li&gt;set cursorcolumn (I didn't find it that useful either,
              but give it a try)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Alias options you can set in your bash profile: (~/.bashrc)

&lt;ul&gt;
&lt;li&gt;alias kn="kubectl config set-context --current -- 
      namespace"&lt;/li&gt;
&lt;li&gt;alias kf="kubectl create file -f"&lt;/li&gt;
&lt;li&gt;alias kr="kubectl replace file -f"&lt;/li&gt;
&lt;li&gt;alias kgp="kubectl get pod"&lt;/li&gt;
&lt;li&gt;alias  kga="kubectl get all"&lt;/li&gt;
&lt;li&gt;export dr="--dry-run=client -oyaml"&lt;/li&gt;
&lt;li&gt;export now="--force --grace-period=0"&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
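&lt;p&gt;Put together, the settings above map to two small copy-paste-ready files (a sketch; the exact option set is a matter of taste, and the alias names are simply the ones I used):&lt;/p&gt;

```text
# ~/.vimrc
set ts=2 sw=2 expandtab
set ai si ic
set nu ru
syntax on

# ~/.bashrc
alias kn="kubectl config set-context --current --namespace "
alias kf="kubectl create -f"
alias kr="kubectl replace -f"
alias kgp="kubectl get pod"
alias kga="kubectl get all"
export dr="--dry-run=client -o yaml"
export now="--force --grace-period=0"
```

&lt;p&gt;With these in place, generating a manifest during the exam becomes a one-liner, e.g. kubectl run nginx --image=nginx $dr&lt;/p&gt;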

&lt;p&gt;Let me know in the comments if you have any specific questions or want to discuss anything. &lt;/p&gt;

&lt;p&gt;Happy Learning...&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>ckad</category>
      <category>certification</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>Create Multinode Kubernetes Cluster Using Kind</title>
      <dc:creator>Deepak Sabhrawal</dc:creator>
      <pubDate>Sun, 17 Sep 2023 15:25:35 +0000</pubDate>
      <link>https://dev.to/devdpk/create-multinode-kubernetes-cluster-using-kind-23a8</link>
      <guid>https://dev.to/devdpk/create-multinode-kubernetes-cluster-using-kind-23a8</guid>
      <description>&lt;p&gt;When it comes to running a Kubernetes cluster on a local system there are multiple options but &lt;a href="https://kind.sigs.k8s.io/"&gt;Kind&lt;/a&gt; provides simplicity with a near-to-real Kubernetes cluster experience as we can create a multi-node cluster as well. &lt;a href="https://minikube.sigs.k8s.io/docs/"&gt;Minikube&lt;/a&gt; is another super simple option but I found it a little bit resource-heavy and slow in comparison and abstracts to only a single master node cluster. &lt;/p&gt;

&lt;p&gt;To install Kind, follow the instructions &lt;a href="https://kind.sigs.k8s.io/docs/user/quick-start/#installation"&gt;here&lt;/a&gt;. &lt;br&gt;
We can also use Kind to create a super simple single-node Kubernetes cluster, which rivals Minikube's simplicity. &lt;/p&gt;

&lt;p&gt;&lt;code&gt;kind create cluster --name demo&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(base) ~/code/devops/kubernetes/CKAD (master ✗) kind create cluster --name demo 
Creating cluster "demo" ...
 ✓ Ensuring node image (kindest/node:v1.25.3) 🖼 
 ✓ Preparing nodes 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing CNI 🔌 
 ✓ Installing StorageClass 💾 
Set kubectl context to "kind-demo"
You can now use your cluster with:

kubectl cluster-info --context kind-demo

Have a nice day! 👋
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(base) ~/code/devops/kubernetes/CKAD (master ✗) kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:53789
CoreDNS is running at https://127.0.0.1:53789/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
(base) ~/code/devops/kubernetes/CKAD (master ✗)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Kind uses Docker to host the Kubernetes cluster. Check the Docker containers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(base) ~/code/devops/kubernetes/CKAD (master ✗) docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED         STATUS         PORTS                       NAMES
9e4a6223159d   kindest/node:v1.25.3   "/usr/local/bin/entr…"   6 minutes ago   Up 6 minutes   127.0.0.1:53789-&amp;gt;6443/tcp   demo-control-plane
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, to create a multi-node cluster, we have to use a config file. Save the following content as kind-cluster.yaml:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
- role: control-plane
- role: worker
- role: worker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now create a cluster with the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kind create cluster --config kind-cluster.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/code/devops/kubernetes/CKAD (master ✗) kind create cluster --config kind-cluster.yaml 
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.25.3) 🖼 
 ✓ Preparing nodes 📦 📦 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing CNI 🔌 
 ✓ Installing StorageClass 💾 
 ✓ Joining worker nodes 🚜 
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can notice, this time the cluster name is kind, the default name when we do not provide one. &lt;br&gt;
You can explore this cluster, which has one master node and two worker nodes. You can provide many other configurations as well, based on your needs. Check &lt;a href="https://kind.sigs.k8s.io/docs/user/configuration/"&gt;here&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;
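&lt;p&gt;As one example of those configuration options, here is a sketch of a config that additionally maps a host port onto a worker node; the port numbers are arbitrary, see the linked configuration docs for the full field reference:&lt;/p&gt;

```yaml
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
- role: control-plane
- role: worker
  # expose container port 30080 of this node as host port 8080
  extraPortMappings:
  - containerPort: 30080
    hostPort: 8080
- role: worker
```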

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/code/devops/kubernetes/CKAD (master ✗) kubectl get nodes -o wide
NAME                 STATUS   ROLES           AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
kind-control-plane   Ready    control-plane   5m48s   v1.25.3   172.18.0.4    &amp;lt;none&amp;gt;        Ubuntu 22.04.1 LTS   5.10.124-linuxkit   containerd://1.6.9
kind-worker          Ready    &amp;lt;none&amp;gt;          5m28s   v1.25.3   172.18.0.3    &amp;lt;none&amp;gt;        Ubuntu 22.04.1 LTS   5.10.124-linuxkit   containerd://1.6.9
kind-worker2         Ready    &amp;lt;none&amp;gt;          5m29s   v1.25.3   172.18.0.2    &amp;lt;none&amp;gt;        Ubuntu 22.04.1 LTS   5.10.124-linuxkit   containerd://1.6.9
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/code/devops/kubernetes/CKAD (master ✗) docker ps 
CONTAINER ID   IMAGE                  COMMAND                  CREATED         STATUS         PORTS                       NAMES
32dc5ba3b1b5   kindest/node:v1.25.3   "/usr/local/bin/entr…"   6 minutes ago   Up 6 minutes   127.0.0.1:53879-&amp;gt;6443/tcp   kind-control-plane
68b9e9732f8d   kindest/node:v1.25.3   "/usr/local/bin/entr…"   6 minutes ago   Up 6 minutes                               kind-worker2
616c6893f374   kindest/node:v1.25.3   "/usr/local/bin/entr…"   6 minutes ago   Up 6 minutes                               kind-worker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Happy learning...&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>kind</category>
      <category>devops</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Running Custom Docker Images Inside Kind Cluster</title>
      <dc:creator>Deepak Sabhrawal</dc:creator>
      <pubDate>Sun, 17 Sep 2023 14:56:33 +0000</pubDate>
      <link>https://dev.to/devdpk/running-custom-docker-images-inside-kind-cluster-o37</link>
      <guid>https://dev.to/devdpk/running-custom-docker-images-inside-kind-cluster-o37</guid>
      <description>&lt;p&gt;We are going to build a docker image locally and then run a Pod inside the Kubernetes cluster with that image.&lt;/p&gt;

&lt;p&gt;Build a docker image locally using dockerfile. &lt;/p&gt;

&lt;p&gt;Save the following docker file ubuntu-sleep.dockerfile&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; FROM ubuntu

 ENTRYPOINT ["sleep"]
 CMD ["5"] 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Build the image: &lt;code&gt;docker build -t ubuntu-sleep -f ubuntu-sleep.dockerfile .&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(base) ~/code/devops/kubernetes/CKAD (master ✗) docker build -t ubuntu-sleep -f ubuntu-sleep.dockerfile .
[+] Building 0.1s (5/5) FINISHED                                                                                                  
 =&amp;gt; [internal] load build definition from ubuntu-sleeper.dockerfile                                                          0.0s
 =&amp;gt; =&amp;gt; transferring dockerfile: 51B                                                                                          0.0s
 =&amp;gt; [internal] load .dockerignore                                                                                            0.0s
 =&amp;gt; =&amp;gt; transferring context: 2B                                                                                              0.0s
 =&amp;gt; [internal] load metadata for docker.io/library/ubuntu:latest                                                             0.0s
 =&amp;gt; CACHED [1/1] FROM docker.io/library/ubuntu                                                                               0.0s
 =&amp;gt; exporting to image                                                                                                       0.0s
 =&amp;gt; =&amp;gt; exporting layers                                                                                                      0.0s
 =&amp;gt; =&amp;gt; writing image sha256:b739783c0472bbc0008474a0e9f1c5a55dc0419fd4d7851bed8b08fec7596332                                 0.0s
 =&amp;gt; =&amp;gt; naming to docker.io/library/ubuntu-sleep    
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a &lt;a href="https://kind.sigs.k8s.io/docs/user/configuration/#getting-started"&gt;Kind&lt;/a&gt; Cluster with the name demo.&lt;br&gt;&lt;br&gt;
&lt;code&gt;kind create cluster --name=demo&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(base) ~/code/devops/kubernetes/CKAD (master ✗) kind create cluster --name=demo 
Creating cluster "demo" ...
 ✓ Ensuring node image (kindest/node:v1.25.3) 🖼 
 ✓ Preparing nodes 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing CNI 🔌 
 ✓ Installing StorageClass 💾 
Set kubectl context to "kind-demo"
You can now use your cluster with:

kubectl cluster-info --context kind-demo

Have a nice day! 👋

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Load the Docker image into the kind cluster demo:&lt;br&gt;
&lt;code&gt;kind load docker-image ubuntu-sleep --name demo&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(base) ~/code/devops/kubernetes/CKAD (master ✗) kind load docker-image ubuntu-sleep --name demo
Image: "" with ID "sha256:b739783c0472bbc0008474a0e9f1c5a55dc0419fd4d7851bed8b08fec7596332" not yet present on node "demo-control-plane", loading...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;kind load&lt;/code&gt; is a very useful command and provides multiple options. To get more info about it, use the -h option. &lt;/p&gt;
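&lt;p&gt;For instance, besides loading an image straight from the local Docker daemon, kind load can also load an image archive produced by docker save. A sketch, assuming the demo cluster from above and a local kind installation:&lt;/p&gt;

```shell
# load a locally built image into every node of the demo cluster
kind load docker-image ubuntu-sleep --name demo

# or export the image first and load the archive instead
docker save -o ubuntu-sleep.tar ubuntu-sleep
kind load image-archive ubuntu-sleep.tar --name demo
```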

&lt;p&gt;Run a pod inside the demo cluster with the following command: &lt;br&gt;
&lt;code&gt;kubectl run ubuntu-sleep --image=ubuntu-sleep&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(base) ~/code/devops/kubernetes/CKAD (master ✗) kubectl run ubuntu-sleep --image=ubuntu-sleep
pod/ubuntu-sleep created
(base) ~/code/devops/kubernetes/CKAD (master ✗) kubectl get pod ubuntu-sleep
NAME           READY   STATUS             RESTARTS   AGE
ubuntu-sleep   0/1     ImagePullBackOff   0          14s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When we check the pod status, it is ImagePullBackOff, which means it is failing to run the container with the given image. Wait, what did we do wrong?&lt;br&gt;
Let's try to debug the problem. As we all know, Kind runs a Kubernetes cluster inside Docker, right? KIND (Kubernetes IN Docker)&lt;/p&gt;

&lt;p&gt;Run the following command to check whether the image is already available on the node.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(base) ~/code/devops/kubernetes/CKAD (master ✗) docker exec -it demo-control-plane crictl images
IMAGE                                      TAG                  IMAGE ID            SIZE
docker.io/kindest/kindnetd                 v20221004-44d545d1   7ba9b35cf55e6       23.7MB
docker.io/kindest/local-path-helper        v20220607-9a4d8d2a   304f67e47fc80       2.75MB
docker.io/kindest/local-path-provisioner   v0.0.22-kind.0       7902f9a1c54fa       15.6MB
docker.io/library/ubuntu-sleep             latest               b739783c0472b       71.8MB
registry.k8s.io/coredns/coredns            v1.9.3               b19406328e70d       13.4MB
registry.k8s.io/etcd                       3.5.4-0              8e041a3b0ba8b       81.1MB
registry.k8s.io/kube-apiserver             v1.25.3              feafd6a91eb52       74.2MB
registry.k8s.io/kube-controller-manager    v1.25.3              05b17bba8656e       62.3MB
registry.k8s.io/kube-proxy                 v1.25.3              aa31a9b19ccdf       59.6MB
registry.k8s.io/kube-scheduler             v1.25.3              253d0aeea8c69       50.6MB
registry.k8s.io/pause                      3.7                  e5a475a038057       268kB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Yeah! It's there. Then what is the problem?&lt;br&gt;
If we sit back, relax, and look again at the pod events, we see it is still trying to pull the image. But why? We already have it on our local system. &lt;br&gt;
Ahh! We didn't specify the pull policy, and the default is imagePullPolicy: Always&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  12m                   default-scheduler  Successfully assigned default/ubuntu-sleep to demo-control-plane
  Normal   Pulling    10m (x4 over 12m)     kubelet            Pulling image "ubuntu-sleep"
  Warning  Failed     10m (x4 over 12m)     kubelet            Failed to pull image "ubuntu-sleep": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/ubuntu-sleep:latest": failed to resolve reference "docker.io/library/ubuntu-sleep:latest": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
  Warning  Failed     10m (x4 over 12m)     kubelet            Error: ErrImagePull
  Warning  Failed     10m (x6 over 12m)     kubelet            Error: ImagePullBackOff
  Normal   BackOff    2m19s (x43 over 12m)  kubelet            Back-off pulling image "ubuntu-sleep"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the pod again, this time with the --image-pull-policy option:&lt;br&gt;
&lt;code&gt;kubectl run ubuntu-sleep-1 --image=ubuntu-sleep --image-pull-policy=Never&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(base) ~/code/devops/kubernetes/CKAD (master ✗) kubectl run ubuntu-sleep-1 --image=ubuntu-sleep --image-pull-policy=Never
pod/ubuntu-sleep-1 created
(base) ~/code/devops/kubernetes/CKAD (master ✗) kubectl get pod 
NAME             READY   STATUS             RESTARTS   AGE
ubuntu-sleep     0/1     ImagePullBackOff   0          8m16s
ubuntu-sleep-1   1/1     Running            0          5s
(base) ~/code/devops/kubernetes/CKAD (master ✗) 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
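&lt;p&gt;The same fix can also be expressed declaratively in a manifest instead of the kubectl run flag. A sketch; IfNotPresent would work here too, since the image is already loaded on the node:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-sleep-1
spec:
  containers:
  - name: ubuntu-sleep-1
    image: ubuntu-sleep
    # never contact a registry; use the image preloaded by kind load
    imagePullPolicy: Never
```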



&lt;p&gt;Voila! It is working and the container is running!&lt;/p&gt;

&lt;p&gt;Happy Learning...&lt;/p&gt;

</description>
      <category>kind</category>
      <category>kubernetes</category>
      <category>docker</category>
      <category>devops</category>
    </item>
    <item>
      <title>Kubernetes - Basics</title>
      <dc:creator>Deepak Sabhrawal</dc:creator>
      <pubDate>Sat, 09 Sep 2023 11:06:29 +0000</pubDate>
      <link>https://dev.to/devdpk/kubernetes-basic-3jkc</link>
      <guid>https://dev.to/devdpk/kubernetes-basic-3jkc</guid>
      <description>&lt;p&gt;Features:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Container management - Can manage Docker containers as well as containers from other vendors, making it a container-native orchestration tool.&lt;/li&gt;
&lt;li&gt;HA - Creates a container on a healthy node if any container goes down.&lt;/li&gt;
&lt;li&gt;Scaling &amp;amp; Load Balancing - Scales containers based on load and various other parameters, thereby balancing the load.&lt;/li&gt;
&lt;li&gt;Rolling Update - Updates containers without impact and supports many other update strategies.
&lt;/li&gt;
&lt;li&gt;Rollback - Rolls back updates without impact if something goes wrong.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Architecture:&lt;/p&gt;

&lt;p&gt;A K8s cluster has two main types of nodes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Master node (Control Plane): Manages the nodes and the whole cluster.&lt;/li&gt;
&lt;li&gt;K8s nodes/minions (Worker nodes): The actual worker nodes where Pods are scheduled to run containers.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Cluster Components:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;API Server - The API server facilitates communication with the Kubernetes cluster through the &lt;code&gt;kubectl&lt;/code&gt; command. We provide the &lt;code&gt;manifest yaml&lt;/code&gt; file through kubectl, and the API server acts accordingly. It interacts with the &lt;code&gt;scheduler&lt;/code&gt;, the &lt;code&gt;controller&lt;/code&gt;, and the &lt;code&gt;key-value store&lt;/code&gt; (etcd).&lt;/li&gt;
&lt;li&gt;Controller - Manages the health of the cluster and controls everything. Suppose 100 replicas are defined in the manifest file; the controller then keeps track to ensure that 100 replicas are running. &lt;/li&gt;
&lt;li&gt;Scheduler - Schedules the pods onto nodes in the cluster according to the manifest file.&lt;/li&gt;
&lt;li&gt;Key-value store (etcd) - This is the single source of truth and stores cluster information as key-value pairs. Any node's information can be fetched from this store.&lt;/li&gt;
&lt;li&gt;Kubelet - The kubelet is like a manager on a minion node. The API server interacts with the kubelet to deploy containers. The kubelet interacts with the controller and reports everything about the minion node. All the decisions are then taken by the controller.&lt;/li&gt;
&lt;li&gt;Pod - A Pod is the wrapping around a container and the basic unit in a Kubernetes cluster. &lt;code&gt;We do not deploy a Container directly; a Pod is always deployed and the Container runs inside the Pod. A Pod can have multiple Containers, but as a best practice a single Pod generally contains a single Container.&lt;/code&gt; A Pod is assigned an IP that is shared among the Containers running inside it. The Pod is also the scaling unit in Kubernetes.&lt;/li&gt;
&lt;li&gt;Kube-proxy - Kube-proxy acts like the network brain of the cluster and manages network communication within the cluster.&lt;/li&gt;
&lt;li&gt;Runtime - Every node has its own container runtime, which might differ from other nodes. The most commonly used runtime is Docker. This runtime is used to run the containers.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Interaction with a K8s Cluster:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;API: Everything happens through API requests. &lt;/li&gt;
&lt;li&gt;Native libraries: If you are a developer and have a use case for managing the cluster from a programming language, there are native client libraries for interacting with the cluster.&lt;/li&gt;
&lt;li&gt;Kube control (kubectl): This is the main and most used way to connect to a K8s cluster and get the job done.&lt;/li&gt;
&lt;/ol&gt;
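&lt;p&gt;As a quick illustration of the API route, kubectl proxy can expose the API server locally so that plain HTTP works against it. A sketch; it requires a running cluster and an authorized kubeconfig, and the port is arbitrary:&lt;/p&gt;

```shell
# open an authenticated local proxy to the API server
kubectl proxy --port=8001 &

# the REST API is now reachable over plain HTTP
curl http://127.0.0.1:8001/api/v1/namespaces/default/pods
```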

</description>
      <category>kubernetes</category>
      <category>k8s</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Kubernetes Config Map</title>
      <dc:creator>Deepak Sabhrawal</dc:creator>
      <pubDate>Sun, 18 Apr 2021 05:14:20 +0000</pubDate>
      <link>https://dev.to/devdpk/kubernetes-config-map-170m</link>
      <guid>https://dev.to/devdpk/kubernetes-config-map-170m</guid>
      <description>&lt;p&gt;Kubernetes ConfigMap resource is a piece of simple key-value pair information that can be passed to any application running on K8S.&lt;/p&gt;

&lt;p&gt;A sample file to create a ConfigMap on a running cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config-map #name of the config map
data:
  myKey1: myValue1
  myKey2: myValue2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save this file as my-config-map.yaml and create the ConfigMap on the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create -f my-config-map.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, reference the values from the ConfigMap inside a pod. &lt;br&gt;
Let's create a sample busybox pod.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-pod
    image: busybox
    command: ["sh", "-c", "echo $MY_VAR"]
    env:
    - name: MY_VAR
      valueFrom:
        configMapKeyRef:
          name: my-config-map 
          key: myKey1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save this as my-pod.yaml and create the pod.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create -f my-pod.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check the logs to see what is printed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl logs my-pod.   #myValue1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
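&lt;p&gt;If the Pod should receive all keys of the ConfigMap at once, envFrom can be used instead of referencing each key individually. A sketch building on the my-config-map above:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod-envfrom
spec:
  containers:
  - name: my-pod-envfrom
    image: busybox
    command: ["sh", "-c", "env"]
    # inject every key of the ConfigMap as an environment variable
    envFrom:
    - configMapRef:
        name: my-config-map
```

&lt;p&gt;Each key becomes an environment variable of the same name, so the container environment would contain myKey1=myValue1 and myKey2=myValue2.&lt;/p&gt;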



</description>
      <category>kubernetes</category>
      <category>containers</category>
      <category>configmap</category>
    </item>
    <item>
      <title>docker swarm cluster using docker-machine</title>
      <dc:creator>Deepak Sabhrawal</dc:creator>
      <pubDate>Thu, 30 Apr 2020 19:15:24 +0000</pubDate>
      <link>https://dev.to/devdpk/docker-swarm-cluster-using-docker-machine-4gh7</link>
      <guid>https://dev.to/devdpk/docker-swarm-cluster-using-docker-machine-4gh7</guid>
      <description>&lt;p&gt;The &lt;code&gt;docker swarm&lt;/code&gt; tool is used for managing docker host clusters or &lt;code&gt;orchestration&lt;/code&gt; of docker hosts. Today, in this post we will see how we can create a &lt;code&gt;docker swarm cluster&lt;/code&gt; locally using &lt;code&gt;Virtualbox&lt;/code&gt; and &lt;code&gt;docker-machine&lt;/code&gt;. The docker-machine creates docker hosted virtual nodes that are way faster than running virtual machines natively on Virtualbox.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why docker swarm?
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;High availability&lt;/li&gt;
&lt;li&gt;Container scaling&lt;/li&gt;
&lt;li&gt;Load Balancing&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We can have many nodes in a cluster, but at least one manager node is required to manage the worker nodes. From version 1.12 onwards, &lt;code&gt;docker swarm&lt;/code&gt; comes natively with Docker and no separate installation is required.&lt;/p&gt;

&lt;p&gt;The manager node is responsible for all the operations like high availability, scaling, load balancing, etc., and can also act as a worker node to run workloads if required.&lt;/p&gt;

&lt;p&gt;We will be using docker-machine here, &lt;a href="https://dev.to/dsabhrawal/create-docker-hosted-nodes-using-docker-machine-55ml"&gt;click to learn more about docker-machine and installation guide&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Create docker hosted nodes and attach to current shell&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Create one manager node using docker-machine
&lt;span class="nv"&gt;$docker&lt;/span&gt;&lt;span class="nt"&gt;-machine&lt;/span&gt; create &lt;span class="nt"&gt;--driver&lt;/span&gt; virtualbox manager
&lt;span class="c"&gt;#Get the IP address of the manager node&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;docker-machine ip manager
192.168.99.105
&lt;span class="c"&gt;#attach current shell with the worker shell&lt;/span&gt;
&lt;span class="nv"&gt;$eval&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;docker-machine &lt;span class="nb"&gt;env &lt;/span&gt;manager&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="c"&gt;#activate the manager shell&lt;/span&gt;
&lt;span class="nv"&gt;$docker&lt;/span&gt;&lt;span class="nt"&gt;-machine&lt;/span&gt; active
manager
&lt;span class="c"&gt;#The current shell is now attached to the manager node&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;em&gt;Initialize the docker swarm cluster on the manager node&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker swarm init &lt;span class="nt"&gt;--advertise-addr&lt;/span&gt; 192.168.99.105
Swarm initialized: current node &lt;span class="o"&gt;(&lt;/span&gt;jrivcbnx4jh6opbrm1qed84ue&lt;span class="o"&gt;)&lt;/span&gt; is now a manager.

To add a worker to this swarm, run the following &lt;span class="nb"&gt;command&lt;/span&gt;:

   docker swarm &lt;span class="nb"&gt;join&lt;/span&gt; &lt;span class="nt"&gt;--token&lt;/span&gt; SWMTKN-1-2wb0sivbsxsx0t3va8wv0qywa303q1mcc2syplrwj0q1301kfs-8qfc9yhwst06y8c1gusftrjkl 192.168.99.105:2377

To add a manager to this swarm, run &lt;span class="s1"&gt;'docker swarm join-token manager'&lt;/span&gt; and follow the instructions.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;em&gt;Check Manager Node&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$docker&lt;/span&gt; node &lt;span class="nb"&gt;ls
&lt;/span&gt;ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
jrivcbnx4jh6opbrm1qed84ue &lt;span class="k"&gt;*&lt;/span&gt;   manager             Ready               Active              Leader              19.03.5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So far, we have created one manager node using docker-machine and initialized the docker swarm cluster on it.&lt;br&gt;
Now we will create the worker nodes. You can open another shell to run these commands, or run the command below to detach the current shell from the docker-machine manager node and switch between docker-machine environments.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#You will be back to the main shell, unset the environment&lt;/span&gt;
&lt;span class="nv"&gt;$eval&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;docker-machine &lt;span class="nb"&gt;env&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
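&lt;p&gt;Under the hood, &lt;code&gt;docker-machine env&lt;/code&gt; simply prints &lt;code&gt;export&lt;/code&gt; statements that point the docker CLI at a machine's daemon, and &lt;code&gt;eval&lt;/code&gt; applies them to the current shell (&lt;code&gt;env -u&lt;/code&gt; prints the matching &lt;code&gt;unset&lt;/code&gt; statements). A minimal sketch of that mechanism, with an illustrative address rather than real output:&lt;/p&gt;

```shell
# docker-machine env <name> emits export lines like these; eval applies them.
# The address below is an assumed example, not real output from your machine.
env_output='export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.105:2376"'
eval "$env_output"
# The docker CLI now reads DOCKER_HOST and talks to that daemon.
echo "$DOCKER_HOST"
```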



&lt;p&gt;Now, let's create a worker node and switch the working environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$docker&lt;/span&gt;&lt;span class="nt"&gt;-machine&lt;/span&gt; create &lt;span class="nt"&gt;--driver&lt;/span&gt; virtualbox worker1
Running pre-create checks...
Creating machine...
&lt;span class="o"&gt;(&lt;/span&gt;worker1&lt;span class="o"&gt;)&lt;/span&gt; Copying /home/deepak/.docker/machine/cache/boot2docker.iso to /home/deepak/.docker/machine/machines/worker1/boot2docker.iso...
&lt;span class="o"&gt;(&lt;/span&gt;worker1&lt;span class="o"&gt;)&lt;/span&gt; Creating VirtualBox VM...
&lt;span class="o"&gt;(&lt;/span&gt;worker1&lt;span class="o"&gt;)&lt;/span&gt; Creating SSH key...
&lt;span class="o"&gt;(&lt;/span&gt;worker1&lt;span class="o"&gt;)&lt;/span&gt; Starting the VM...
&lt;span class="o"&gt;(&lt;/span&gt;worker1&lt;span class="o"&gt;)&lt;/span&gt; Check network to re-create &lt;span class="k"&gt;if &lt;/span&gt;needed...
&lt;span class="o"&gt;(&lt;/span&gt;worker1&lt;span class="o"&gt;)&lt;/span&gt; Waiting &lt;span class="k"&gt;for &lt;/span&gt;an IP...
Waiting &lt;span class="k"&gt;for &lt;/span&gt;machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting &lt;span class="k"&gt;for &lt;/span&gt;SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the &lt;span class="nb"&gt;local &lt;/span&gt;machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine &lt;span class="nb"&gt;env &lt;/span&gt;worker1

&lt;span class="c"&gt;# Run this command to configure your shell to attach to worker1 &lt;/span&gt;
&lt;span class="nv"&gt;$eval&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;docker-machine &lt;span class="nb"&gt;env &lt;/span&gt;worker1&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;$docker&lt;/span&gt;&lt;span class="nt"&gt;-machine&lt;/span&gt; active
worker1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Register this worker node with the swarm cluster using the join command we got in the output of &lt;code&gt;docker swarm init&lt;/code&gt; on the manager node.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$docker&lt;/span&gt; swarm &lt;span class="nb"&gt;join&lt;/span&gt; &lt;span class="nt"&gt;--token&lt;/span&gt; SWMTKN-1-2wb0sivbsxsx0t3va8wv0qywa303q1mcc2syplrwj0q1301kfs-8qfc9yhwst06y8c1gusftrjkl 192.168.99.105:2377
&lt;span class="c"&gt;#output: This node joined a swarm as a worker.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, switch the docker-machine environment and check the registration on the manager node.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#switch to the manager node &amp;amp; check&lt;/span&gt;
&lt;span class="nv"&gt;$eval&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;docker-machine &lt;span class="nb"&gt;env &lt;/span&gt;manager&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;$docker&lt;/span&gt;&lt;span class="nt"&gt;-machine&lt;/span&gt; active
manager
&lt;span class="nv"&gt;$docker&lt;/span&gt; node &lt;span class="nb"&gt;ls
&lt;/span&gt;ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
jrivcbnx4jh6opbrm1qed84ue &lt;span class="k"&gt;*&lt;/span&gt;   manager             Ready               Active              Leader              19.03.5
ymq8yt76ogpywk2eb6rzt9au1     worker1             Ready               Active                                  19.03.5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Repeat the same steps to add another worker node to the swarm cluster.&lt;br&gt;
The &lt;code&gt;$docker node ls&lt;/code&gt; command on the manager node should then return:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
jrivcbnx4jh6opbrm1qed84ue &lt;span class="k"&gt;*&lt;/span&gt;   manager             Ready               Active              Leader              19.03.5
ymq8yt76ogpywk2eb6rzt9au1     worker1             Ready               Active                                  19.03.5
ovr9a15sv0gw2lc68k756qth2     worker2             Ready               Active                                  19.03.5

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
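&lt;p&gt;Creating and joining workers one at a time can be scripted. The sketch below is a dry run that only prints the commands it would execute; the token is a placeholder for the one printed by &lt;code&gt;docker swarm init&lt;/code&gt;, and &lt;code&gt;docker-machine ssh&lt;/code&gt; is used to run the join command on each new VM:&lt;/p&gt;

```shell
# Dry-run sketch: print the commands needed to create and join N workers.
# JOIN_TOKEN and MANAGER_IP are placeholders; substitute your cluster's values.
JOIN_TOKEN="SWMTKN-1-<token>"
MANAGER_IP="192.168.99.105"
N=2
cmds=""
i=1
while [ "$i" -le "$N" ]; do
  cmds="$cmds
docker-machine create --driver virtualbox worker$i
docker-machine ssh worker$i docker swarm join --token $JOIN_TOKEN $MANAGER_IP:2377"
  i=$((i + 1))
done
# Inspect the plan before running it for real.
printf '%s\n' "$cmds"
```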



&lt;p&gt;Now we have one manager node and two worker nodes active in our swarm cluster. We can add as many managers or workers as we need. Run the commands below to print the join token for either role.&lt;br&gt;
Use the join token to register the new node with the cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$docker&lt;/span&gt; swarm join-token manager
To add a manager to this swarm, run the following &lt;span class="nb"&gt;command&lt;/span&gt;:
    docker swarm &lt;span class="nb"&gt;join&lt;/span&gt; &lt;span class="nt"&gt;--token&lt;/span&gt; SWMTKN-1-2wb0sivbsxsx0t3va8wv0qywa303q1mcc2syplrwj0q1301kfs-0kpbj4agn8ptyzcmegg6c4hnk 192.168.99.105:2377

&lt;span class="nv"&gt;$docker&lt;/span&gt; swarm join-token worker
To add a worker to this swarm, run the following &lt;span class="nb"&gt;command&lt;/span&gt;:
    docker swarm &lt;span class="nb"&gt;join&lt;/span&gt; &lt;span class="nt"&gt;--token&lt;/span&gt; SWMTKN-1-2wb0sivbsxsx0t3va8wv0qywa303q1mcc2syplrwj0q1301kfs-8qfc9yhwst06y8c1gusftrjkl 192.168.99.105:2377

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, we will create services on our nodes. To create a service, run the command below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker service create --name webservice -p 8001:80 nginx:latest
#Check the service
$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
05ggmfg0n3o4        webservice          replicated          1/1                 nginx:latest        *:8001-&amp;gt;80/tcp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;This service can run anywhere in the cluster and is published on port 8001, hence 8001 is the cluster-wide port&lt;/code&gt;. Let's check where the service is running; if it is running on the manager node, we can &lt;code&gt;set the manager node's availability to drain&lt;/code&gt; so that the service runs on the worker nodes only.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$docker&lt;/span&gt; service ps webservice
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE                ERROR               PORTS
9tt1y854e55p        webservice.1        nginx:latest        manager             Running             Running about a minute ago                       
&lt;span class="c"&gt;#It is running on manager node, set the manager availability to drain &lt;/span&gt;
&lt;span class="nv"&gt;$docker&lt;/span&gt; node update &lt;span class="nt"&gt;--availability&lt;/span&gt; drain manager
manager
&lt;span class="nv"&gt;$docker&lt;/span&gt; service ps webservice
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE             ERROR               PORTS
ns6xellg2hgc        webservice.1        nginx:latest        worker2             Running             Preparing 7 seconds ago                       
9tt1y854e55p         &lt;span class="se"&gt;\_&lt;/span&gt; webservice.1    nginx:latest        manager             Shutdown            Shutdown 4 seconds ago  
&lt;span class="c"&gt;#The service automatically moved to worker2 node and load removed from manager&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Irrespective of where the service is running, if we hit any node in our cluster on port 8001, we should see the nginx welcome page.&lt;/p&gt;
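&lt;p&gt;This works because of the swarm routing mesh: every node listens on the published port and forwards traffic to a running task. The sketch below only builds the URLs to probe; the node IPs are assumed examples (get yours with &lt;code&gt;docker-machine ip&lt;/code&gt;), and the commented &lt;code&gt;curl&lt;/code&gt; line shows the real check:&lt;/p&gt;

```shell
# Sketch: any node in the cluster should answer on the published port 8001.
# Node IPs below are assumed examples (use docker-machine ip <name> for yours).
port=8001
urls=""
for ip in 192.168.99.105 192.168.99.106 192.168.99.107; do
  urls="$urls http://$ip:$port/"
  # curl -s "http://$ip:$port/" | grep -q "Welcome to nginx"   # real check
done
echo "$urls"
```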

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Auto Scaling &amp;amp; Load Balancing&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
Now we will see how &lt;code&gt;docker swarm manages scaling and load balancing&lt;/code&gt;. To scale the service, run the command below; the replicas will be spread across all available active nodes in the cluster. Our manager node is in the drain state, so no tasks will run there.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$docker&lt;/span&gt; service scale &lt;span class="nv"&gt;webservice&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10
webservice scaled to 10
overall progress: 10 out of 10 tasks 
1/10: running   &lt;span class="o"&gt;[==================================================&amp;gt;]&lt;/span&gt; 
2/10: running   &lt;span class="o"&gt;[==================================================&amp;gt;]&lt;/span&gt; 
3/10: running   &lt;span class="o"&gt;[==================================================&amp;gt;]&lt;/span&gt; 
4/10: running   &lt;span class="o"&gt;[==================================================&amp;gt;]&lt;/span&gt; 
5/10: running   &lt;span class="o"&gt;[==================================================&amp;gt;]&lt;/span&gt; 
6/10: running   &lt;span class="o"&gt;[==================================================&amp;gt;]&lt;/span&gt; 
7/10: running   &lt;span class="o"&gt;[==================================================&amp;gt;]&lt;/span&gt; 
8/10: running   &lt;span class="o"&gt;[==================================================&amp;gt;]&lt;/span&gt; 
9/10: running   &lt;span class="o"&gt;[==================================================&amp;gt;]&lt;/span&gt; 
10/10: running   &lt;span class="o"&gt;[==================================================&amp;gt;]&lt;/span&gt; 
verify: Service converged 

&lt;span class="c"&gt;#Verify where these are running&lt;/span&gt;
&lt;span class="nv"&gt;$docker&lt;/span&gt; service ps webservice
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE                ERROR               PORTS
ns6xellg2hgc        webservice.1        nginx:latest        worker2             Running             Running 16 minutes ago                           
9tt1y854e55p         &lt;span class="se"&gt;\_&lt;/span&gt; webservice.1    nginx:latest        manager             Shutdown            Shutdown 16 minutes ago                          
nza4dxb4gq5t        webservice.2        nginx:latest        worker2             Running             Running about a minute ago                       
tddgybed5mon        webservice.3        nginx:latest        worker2             Running             Running about a minute ago                       
lqgjvagrmscc        webservice.4        nginx:latest        worker1             Running             Running 57 seconds ago                           
0vt8ou31sxds        webservice.5        nginx:latest        worker1             Running             Running 57 seconds ago                           
xrmvbrbir68e        webservice.6        nginx:latest        worker1             Running             Running 57 seconds ago                           
k0f1agcqz11u        webservice.7        nginx:latest        worker1             Running             Running 57 seconds ago                           
y8oa9b9pug0u        webservice.8        nginx:latest        worker2             Running             Running about a minute ago                       
266sik5ude24        webservice.9        nginx:latest        worker1             Running             Running 57 seconds ago                           
mb4jpa4fcigk        webservice.10       nginx:latest        worker2             Running             Running about a minute ago  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, the service is automatically load-balanced among the active nodes; this is the beauty of the docker swarm automatic load balancer. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;High Availability&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$docker&lt;/span&gt; node update &lt;span class="nt"&gt;--availability&lt;/span&gt; drain worker1
worker1
&lt;span class="nv"&gt;$docker&lt;/span&gt; service ps webservice
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE             ERROR               PORTS
ns6xellg2hgc        webservice.1        nginx:latest        worker2             Running             Running 20 minutes ago                        
9tt1y854e55p         &lt;span class="se"&gt;\_&lt;/span&gt; webservice.1    nginx:latest        manager             Shutdown            Shutdown 20 minutes ago                       
nza4dxb4gq5t        webservice.2        nginx:latest        worker2             Running             Running 5 minutes ago                         
tddgybed5mon        webservice.3        nginx:latest        worker2             Running             Running 5 minutes ago                         
fl1649e0h1vj        webservice.4        nginx:latest        worker2             Running             Running 5 seconds ago                         
lqgjvagrmscc         &lt;span class="se"&gt;\_&lt;/span&gt; webservice.4    nginx:latest        worker1             Shutdown            Shutdown 7 seconds ago                        
z8bjrweqq676        webservice.5        nginx:latest        worker2             Running             Running 5 seconds ago                         
0vt8ou31sxds         &lt;span class="se"&gt;\_&lt;/span&gt; webservice.5    nginx:latest        worker1             Shutdown            Shutdown 7 seconds ago                        
r4q56ukiyz93        webservice.6        nginx:latest        worker2             Running             Running 5 seconds ago                         
xrmvbrbir68e         &lt;span class="se"&gt;\_&lt;/span&gt; webservice.6    nginx:latest        worker1             Shutdown            Shutdown 7 seconds ago                        
00vbb5b7dk7s        webservice.7        nginx:latest        worker2             Running             Running 5 seconds ago                         
k0f1agcqz11u         &lt;span class="se"&gt;\_&lt;/span&gt; webservice.7    nginx:latest        worker1             Shutdown            Shutdown 7 seconds ago                        
y8oa9b9pug0u        webservice.8        nginx:latest        worker2             Running             Running 5 minutes ago                         
0gaahurh7d81        webservice.9        nginx:latest        worker2             Running             Running 5 seconds ago                         
266sik5ude24         &lt;span class="se"&gt;\_&lt;/span&gt; webservice.9    nginx:latest        worker1             Shutdown            Shutdown 7 seconds ago                        
mb4jpa4fcigk        webservice.10       nginx:latest        worker2             Running             Running 5 minutes ago 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice how &lt;code&gt;high availability&lt;/code&gt; is handled when we set one of our worker nodes to the drain state: all the load from the worker1 node automatically shifted to the worker2 node to maintain HA.&lt;/p&gt;
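&lt;p&gt;After maintenance you would normally return the drained node to scheduling. The sketch below just prints the command as a dry run; note that swarm does not rebalance existing tasks automatically, so the node is only used for new or rescheduled tasks:&lt;/p&gt;

```shell
# Dry-run sketch: print the command that returns a drained node to service.
node="worker1"
restore_cmd="docker node update --availability active $node"
echo "$restore_cmd"
```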

&lt;p&gt;And that's a wrap! We have learned to create docker hosts using docker-machine and to manage nodes and services using docker swarm.&lt;/p&gt;

&lt;p&gt;Happy dockering! &amp;amp; Keep learning!&lt;/p&gt;

</description>
      <category>docker</category>
      <category>dockerswarm</category>
      <category>dockermachine</category>
      <category>linux</category>
    </item>
    <item>
      <title>create docker hosted nodes using docker-machine</title>
      <dc:creator>Deepak Sabhrawal</dc:creator>
      <pubDate>Sun, 26 Apr 2020 13:05:21 +0000</pubDate>
      <link>https://dev.to/devdpk/create-docker-hosted-nodes-using-docker-machine-55ml</link>
      <guid>https://dev.to/devdpk/create-docker-hosted-nodes-using-docker-machine-55ml</guid>
      <description>&lt;p&gt;&lt;strong&gt;&lt;em&gt;docker-machine&lt;/em&gt;&lt;/strong&gt; is a component of &lt;em&gt;docker&lt;/em&gt;, used to create docker hosted virtual nodes using different drivers like GCP, AWS, Azure, or Virtualbox. Docker hosted nodes are the machines running docker daemon and used to run the docker containers. The Docker machine tool is useful when you want to create a docker cluster on any of the drivers mentioned above using &lt;code&gt;docker swarm&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Today, we will learn to create a virtual machine using &lt;strong&gt;&lt;em&gt;docker-machine&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Step 1:&lt;/em&gt;&lt;/strong&gt; Check the latest release version available &lt;a href="https://github.com/docker/machine/releases/"&gt;here&lt;/a&gt;.&lt;br&gt;
&lt;strong&gt;&lt;em&gt;Step 2:&lt;/em&gt;&lt;/strong&gt; Run the command below to install docker-machine (replace the version with the latest one you found in step 1)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$base&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://github.com/docker/machine/releases/download/v0.16.2 &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt;
curl &lt;span class="nt"&gt;-L&lt;/span&gt; &lt;span class="nv"&gt;$base&lt;/span&gt;/docker-machine-&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;uname&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;-&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;uname&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/tmp/docker-machine &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; 
&lt;span class="nb"&gt;sudo mv&lt;/span&gt; /tmp/docker-machine /usr/local/bin/docker-machine 
&lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;chmod&lt;/span&gt; +x /usr/local/bin/docker-machine
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Click &lt;a href="https://docs.docker.com/machine/install-machine/"&gt;here&lt;/a&gt; for installation on Windows or Mac.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Step 3:&lt;/em&gt;&lt;/strong&gt; Check successful installation&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$docker&lt;/span&gt;&lt;span class="nt"&gt;-machine&lt;/span&gt; &lt;span class="nb"&gt;ls
&lt;/span&gt;NAME   ACTIVE   DRIVER       STATE     URL     SWARM     DOCKER     ERRORS
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;*Re-run the last command, &lt;code&gt;sudo chmod +x /usr/local/bin/docker-machine&lt;/code&gt;, if you see a permission issue&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Step 4:&lt;/em&gt;&lt;/strong&gt; Install Virtualbox driver for local installation of virtual machines&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$sudo&lt;/span&gt; apt update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt upgrade &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt &lt;span class="nb"&gt;install &lt;/span&gt;virtualbox
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;em&gt;Step 5:&lt;/em&gt;&lt;/strong&gt; Create a virtual machine on VirtualBox with the name &lt;code&gt;dev&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$docker&lt;/span&gt;&lt;span class="nt"&gt;-machine&lt;/span&gt; create &lt;span class="nt"&gt;--driver&lt;/span&gt; virtualbox dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;em&gt;Step 6:&lt;/em&gt;&lt;/strong&gt; Now, the &lt;code&gt;$docker-machine ls&lt;/code&gt; command should list the running machine&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$docker&lt;/span&gt;&lt;span class="nt"&gt;-machine&lt;/span&gt; &lt;span class="nb"&gt;ls
&lt;/span&gt;NAME   ACTIVE   DRIVER       STATE     URL                       SWARM   DOCKER  ERRORS
dev    -        virtualbox   Running   tcp://192.168.99.100:2376           v19.03.5   
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You have successfully created a &lt;code&gt;docker-hosted virtual machine locally using the VirtualBox driver&lt;/code&gt;; this node can be used as a &lt;strong&gt;&lt;em&gt;docker swarm&lt;/em&gt;&lt;/strong&gt; worker node.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Happy dockering! Keep Learning!&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>dockermachine</category>
      <category>linux</category>
      <category>virtualbox</category>
    </item>
    <item>
      <title>Install docker on Linux</title>
      <dc:creator>Deepak Sabhrawal</dc:creator>
      <pubDate>Sat, 25 Apr 2020 02:26:50 +0000</pubDate>
      <link>https://dev.to/devdpk/install-docker-on-linux-48ck</link>
      <guid>https://dev.to/devdpk/install-docker-on-linux-48ck</guid>
      <description>&lt;p&gt;Run the following command to install docker on Linux.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;em&gt;Ubuntu&lt;/em&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;$sudo apt install docker.io&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;em&gt;Arch Linux&lt;/em&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;$sudo pacman -S docker&lt;/em&gt;&lt;/strong&gt; &lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;em&gt;Fedora&lt;/em&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;$sudo dnf install docker&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;em&gt;CentOS&lt;/em&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;$sudo yum install docker&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Check whether docker is running with the following command &lt;br&gt;
&lt;strong&gt;&lt;em&gt;$sudo docker ps&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;But if you run it without &lt;em&gt;sudo&lt;/em&gt;, you will get the following error:&lt;/p&gt;

&lt;h5&gt;
  
  
  Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.40/containers/json: dial unix /var/run/docker.sock: connect: permission denied
&lt;/h5&gt;

&lt;p&gt;This happens because the current user does not have permission to access the docker daemon socket. &lt;/p&gt;

&lt;p&gt;Run the command below to add the current user to the &lt;strong&gt;&lt;em&gt;docker&lt;/em&gt;&lt;/strong&gt; group&lt;br&gt;
&lt;strong&gt;&lt;em&gt;$sudo usermod -a -G docker $USER&lt;/em&gt;&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;You &lt;strong&gt;&lt;em&gt;must&lt;/em&gt;&lt;/strong&gt; log out and log back in (or restart) so that the new group membership takes effect.&lt;br&gt;
Run the command below after re-login&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;$docker ps -a&lt;/em&gt;&lt;/strong&gt; (to list all containers - running and exited)&lt;/p&gt;
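&lt;p&gt;If &lt;code&gt;docker ps&lt;/code&gt; still fails after re-login, the sketch below checks whether the group change actually took effect (it only inspects group membership, assuming a POSIX shell):&lt;/p&gt;

```shell
# Sketch: check whether the current user is in the docker group.
user=${USER:-$(id -un)}
if id -nG "$user" | grep -qw docker; then
  msg="$user is in the docker group"
else
  msg="$user is NOT in the docker group yet; log out and back in"
fi
echo "$msg"
```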

&lt;p&gt;To verify the docker installation is working, run&lt;br&gt;
&lt;strong&gt;&lt;em&gt;$docker run hello-world&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Output
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Hello from Docker!&lt;br&gt;
This message shows that your installation appears to be working correctly.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Congratulations! You have successfully installed docker on your system&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Happy dockering! &amp;amp; keep learning!
&lt;/h3&gt;

</description>
      <category>docker</category>
      <category>linux</category>
      <category>ubuntu</category>
      <category>archlinux</category>
    </item>
  </channel>
</rss>
