<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mohamed El Eraky</title>
    <description>The latest articles on DEV Community by Mohamed El Eraky (@mohamedeleraki).</description>
    <link>https://dev.to/mohamedeleraki</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1104573%2Feb9c0856-64be-4c29-a6cb-a27a8f9d5870.jpeg</url>
      <title>DEV Community: Mohamed El Eraky</title>
      <link>https://dev.to/mohamedeleraki</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mohamedeleraki"/>
    <language>en</language>
    <item>
      <title>Docker Swarm Series: #8th Publishing Modes</title>
      <dc:creator>Mohamed El Eraky</dc:creator>
      <pubDate>Tue, 25 Jul 2023 20:12:13 +0000</pubDate>
      <link>https://dev.to/mohamedeleraki/docker-swarm-series-8th-publishing-modes-36nk</link>
      <guid>https://dev.to/mohamedeleraki/docker-swarm-series-8th-publishing-modes-36nk</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AWWHfTH9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/352bhjzlzu45s0r3p0y9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AWWHfTH9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/352bhjzlzu45s0r3p0y9.png" alt="Image description" width="741" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Inception
&lt;/h1&gt;

&lt;p&gt;Hello everyone! This article is part of the Swarm series. The knowledge in this series builds in sequence, so check out the Swarm series section below.&lt;/p&gt;

&lt;p&gt;In the &lt;a href="https://mohamed-eleraky.hashnode.dev/docker-swarm-series-7th-advanced-managing-config-and-secret-objects"&gt;last article&lt;/a&gt;, we covered advanced Config and Secret topics: how to use and manage config objects, and how to store secrets securely and use them in your deployments in an advanced and efficient way, using a Docker Compose YAML file, the Docker CLI, and the &lt;a href="https://labs.play-with-docker.com/"&gt;Play-with-docker&lt;/a&gt; lab. Today we are going to discuss more advanced topics.&lt;/p&gt;




&lt;h1&gt;
  
  
  Overview
&lt;/h1&gt;

&lt;p&gt;In this article, we will cover an interesting topic: Docker publishing modes. We will explain the publishing modes, view the differences between the ingress and host modes, see how to use the service endpoint modes (VIP, DNSRR), and discover the global and replicated deployment modes, along with placement constraints. In this lab we will also use the &lt;a href="https://labs.play-with-docker.com/"&gt;Play-with-docker&lt;/a&gt; lab.&lt;/p&gt;




&lt;h1&gt;
  
  
  Publishing modes Overview
&lt;/h1&gt;

&lt;p&gt;Simply put, in Docker Swarm, publishing a service means making it accessible from outside the swarm cluster. In native Docker we used the --publish or -p option when creating a container; it's almost the same concept in Swarm, but with more options to match exactly what you want.&lt;/p&gt;
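
&lt;p&gt;As a minimal sketch (the service name and ports here are illustrative), the same publishing options can be expressed in a stack file with the long port syntax:&lt;/p&gt;

&lt;pre class="highlight yaml"&gt;&lt;code&gt;services:
  web:
    image: nginx
    ports:
      - target: 80        # container port
        published: 8080   # port exposed on the swarm nodes
        protocol: tcp
        mode: ingress     # or "host" to bypass the routing mesh
&lt;/code&gt;&lt;/pre&gt;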




&lt;h1&gt;
  
  
  Differences between publishing modes: ingress vs host
&lt;/h1&gt;

&lt;p&gt;First of all, kindly build up the environment as described in the &lt;a href="https://mohamed-eleraky.hashnode.dev/docker-swarm-series-3rd-deploy-a-highly-available-container"&gt;Docker Swarm Series: #3rd Deploy a highly available Container&lt;/a&gt; article.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Ingress&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Docker's Ingress network is a built-in overlay network &lt;a href="https://poe.com/s/NZ92TslUZeSej6HcJ5fy"&gt;&lt;em&gt;-docker networks-&lt;/em&gt;&lt;/a&gt; that enables external access to services running in a Docker Swarm cluster. It accepts connections for published ports on every node in the Swarm cluster, and the &lt;a href="https://docs.docker.com/engine/swarm/ingress/#:~:text=All%20nodes%20participate%20in%20an,task%20running%20on%20the%20node."&gt;Ingress routing mesh&lt;/a&gt; takes care of forwarding the traffic to a service task, regardless of which node it is running on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hint:&lt;/strong&gt; &lt;em&gt;The service itself gets published as a DNS entry in Docker's built-in DNS server&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Create a service and publish a port via ingress:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker service create &lt;span class="nt"&gt;--detach&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; myservice &lt;span class="nt"&gt;--publish&lt;/span&gt; 8080:80 &lt;span class="nt"&gt;--replicas&lt;/span&gt; 3 &lt;span class="nt"&gt;--mount&lt;/span&gt; &lt;span class="nb"&gt;source&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/etc/hostname,target&lt;span class="o"&gt;=&lt;/span&gt;/usr/share/nginx/html/index.html,type&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;bind&lt;/span&gt;,ro nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This command publishes port 8080 on every swarm node and forwards it to port 80 of the &lt;code&gt;myservice&lt;/code&gt; tasks, allowing external access to the service through port 8080. The &lt;code&gt;--mount&lt;/code&gt; flag bind-mounts the node's &lt;code&gt;/etc/hostname&lt;/code&gt; as the nginx index page, which is useful for printing out the hostname of the node that hosts the task.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;list services:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker service &lt;span class="nb"&gt;ls&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Optional&lt;/strong&gt;: publish additional ports for an existing service&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker service update &lt;span class="nt"&gt;--publish-add&lt;/span&gt; 8080:80 myservice
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: if you publish additional ports for an existing service, Swarm updates the service tasks one by one to avoid downtime, and after each update it runs checks to ensure the update works fine.&lt;/p&gt;

&lt;p&gt;Once you have completed these steps, the &lt;code&gt;myservice&lt;/code&gt; service will be accessible from outside the swarm through the Ingress network. You can scale the service up or down as needed, and Docker will automatically handle load balancing and service discovery.&lt;/p&gt;




&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;To test out the previous steps, follow the instructions below:&lt;/p&gt;

&lt;p&gt;Here we have 3 replicas distributed across 3 nodes:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker ps service myservice
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;As mentioned above, the &lt;a href="https://docs.docker.com/engine/swarm/ingress/#:~:text=All%20nodes%20participate%20in%20an,task%20running%20on%20the%20node."&gt;Ingress routing mesh&lt;/a&gt; takes care of forwarding the traffic to a service task, regardless of which node it is running on. Therefore, to test the load balancing, run the following command on any node, even one that does not host the service.&lt;br&gt;
&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://localhost:8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice that the output of the &lt;code&gt;curl&lt;/code&gt; command prints out the name of the node that hosts the task. Rerun it a bunch of times and notice that the ingress routing mesh load-balances across the service tasks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5nYsjKMQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1690269927312/f0319a44-4ca5-41bd-ad12-e968041134f3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5nYsjKMQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1690269927312/f0319a44-4ca5-41bd-ad12-e968041134f3.png" alt="" width="800" height="337"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Host&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Host mode configures the service to bind directly to a specific port on the host, without using the ingress routing mesh. This can be useful when you want to run your own load balancer (e.g. Traefik, nginx), or need the source IP of the connection &lt;strong&gt;to be retained&lt;/strong&gt;. The published host port will only be bound on nodes where at least one task of the service is running.&lt;/p&gt;

&lt;p&gt;Follow the below steps to publish a port in host mode:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;To create a service whose tasks listen on port 8080 of the Docker host's network interface, you can use the following command:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker service create &lt;span class="nt"&gt;--name&lt;/span&gt; myservice &lt;span class="nt"&gt;--publish&lt;/span&gt; &lt;span class="nv"&gt;published&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;8080,target&lt;span class="o"&gt;=&lt;/span&gt;80,mode&lt;span class="o"&gt;=&lt;/span&gt;host nginx
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;This command creates a service called &lt;code&gt;myservice&lt;/code&gt; based on the &lt;code&gt;nginx&lt;/code&gt; image, and binds host port 8080 to container port 80. The container does not share the host's entire network namespace; only the single published port is forwarded to the container port.&lt;/p&gt;

&lt;p&gt;Host namespaces isolate containers &lt;strong&gt;from each other&lt;/strong&gt; and &lt;strong&gt;from the host system&lt;/strong&gt; by providing each container with its &lt;strong&gt;own namespace&lt;/strong&gt; for system resources such as processes, network interfaces, and file systems. This allows containers to have a &lt;strong&gt;high level of isolation&lt;/strong&gt; and control over their resources.&lt;/p&gt;

&lt;p&gt;On the other hand, host networking allows containers to use the host system's network stack directly, without being isolated from it. This means that containers &lt;strong&gt;using host networking&lt;/strong&gt; are effectively treated as if they were &lt;strong&gt;running directly on the host system&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
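
&lt;p&gt;For reference, host-mode publishing can also be sketched in a stack file (the service name is illustrative). Note that with host mode, at most one task per node can bind a given published port:&lt;/p&gt;

&lt;pre class="highlight yaml"&gt;&lt;code&gt;services:
  myservice:
    image: nginx
    ports:
      - target: 80
        published: 8080
        mode: host   # bind directly on the node, no routing mesh
&lt;/code&gt;&lt;/pre&gt;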




&lt;h1&gt;
  
  
  Service EndPoint Modes
&lt;/h1&gt;

&lt;p&gt;Endpoint modes determine how the service is exposed to clients. There are two endpoint modes: VIP (Virtual IP) and DNSRR (DNS round-robin).&lt;/p&gt;

&lt;p&gt;VIP endpoint mode is the default mode, and it assigns a single virtual IP to the service. The service name will be registered in the network DNS-based Service Discovery pointing to the virtual IP. The VIP provides a built-in load balancer, which distributes the connections in a round-robin manner to the tasks running the service.&lt;/p&gt;

&lt;p&gt;DNSRR endpoint mode registers the service name as a multi-value DNS record in the network's DNS-based Service Discovery, pointing to the IPs of all tasks running the service.&lt;/p&gt;

&lt;p&gt;Typically, DNS clients cache results until the TTL expires, so the load will not be distributed among the tasks; instead, requests will keep being directed to the same task. Some clients even cache the DNS result indefinitely (as nginx does).&lt;/p&gt;

&lt;p&gt;To set the endpoint mode for a service in a Docker Swarm stack file (in YAML format), you can use the &lt;code&gt;endpoint_mode&lt;/code&gt; parameter under the deploy section of the service definition.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3.9'&lt;/span&gt;

&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;web&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;endpoint_mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dnsrr&lt;/span&gt;
      &lt;span class="c1"&gt;#endpoint_mode: vip&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The equivalent CLI syntax:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker service create &lt;span class="nt"&gt;--name&lt;/span&gt; web &lt;span class="nt"&gt;--endpoint-mode&lt;/span&gt; dnsrr nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h1&gt;
  
  
  Deploy modes
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
  &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;global&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are two deployment modes, &lt;strong&gt;replicated&lt;/strong&gt; and &lt;strong&gt;global&lt;/strong&gt;. These modes define how the service is deployed on the nodes. With replicated, you define how many service instances you want to deploy; with global, one instance of the service runs on each node, so the number of instances equals the number of nodes, and when a new node joins the cluster an instance of the service is deployed on it as well.&lt;/p&gt;
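
&lt;p&gt;For comparison, the replicated counterpart looks like this in a stack file:&lt;/p&gt;

&lt;pre class="highlight yaml"&gt;&lt;code&gt;deploy:
  mode: replicated   # the default mode
  replicas: 3        # number of service instances to run
&lt;/code&gt;&lt;/pre&gt;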

&lt;h3&gt;
  
  
  Placement Constraints
&lt;/h3&gt;

&lt;p&gt;Placement constraints are rules that specify which nodes are eligible to run a particular service or task. Constraints can be based on node labels; use this approach when you want to run a particular service on particular nodes, by assigning your label to the node.&lt;br&gt;
Placement constraints work with either replicated or global mode.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# the lable called db&lt;/span&gt;
docker node update &lt;span class="nt"&gt;--label-add&lt;/span&gt; &lt;span class="nv"&gt;db&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true &lt;/span&gt;node-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Update the YAML file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="na"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt; &lt;span class="c1"&gt;# &amp;lt;-- Set number of replicas&lt;/span&gt;
      &lt;span class="na"&gt;placement&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;constraints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;node.labels.db == "true"&lt;/span&gt;  &lt;span class="c1"&gt;# deploy only on nodes that have this lable&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h1&gt;
  
  
  Resources
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;h4&gt;
  
  
  &lt;a href="https://forums.docker.com/u/meyay/summary"&gt;Metin guidance&lt;/a&gt;
&lt;/h4&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.docker.com/compose/compose-file/compose-file-v3/#endpoint_mode"&gt;EndPoint mode Docs&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.docker.com/engine/swarm/networking/#configure-service-discovery"&gt;configure service discovery&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.amazon.com/Learn-Docker-Month-Lunches-Stoneman/dp/1617297054/ref=sr_1_1?keywords=learn+docker+in+a+month+of+lunches&amp;amp;link_code=qs&amp;amp;qid=1690103529&amp;amp;sourceid=Mozilla-search&amp;amp;sr=8-1"&gt;Learn Docker in a Month of Lunches&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="http://www.poe.com"&gt;POE AI&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;That's it: very straightforward, very fast 🚀. I hope this article inspired you, and I'd appreciate your feedback. Thank you.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>swarm</category>
    </item>
    <item>
      <title>Docker Swarm Series: #7th Advanced Managing config and secret objects</title>
      <dc:creator>Mohamed El Eraky</dc:creator>
      <pubDate>Sat, 22 Jul 2023 05:05:58 +0000</pubDate>
      <link>https://dev.to/mohamedeleraki/docker-swarm-series-7th-advanced-managing-config-and-secret-objects-5fph</link>
      <guid>https://dev.to/mohamedeleraki/docker-swarm-series-7th-advanced-managing-config-and-secret-objects-5fph</guid>
      <description>&lt;h1&gt;
  
  
  Inception
&lt;/h1&gt;

&lt;p&gt;Hello everyone! This article is part of the Swarm series. The knowledge in this series builds in sequence, so check out the Swarm series section above.&lt;/p&gt;

&lt;p&gt;In the &lt;a href="https://mohamed-eleraky.hashnode.dev/docker-swarm-series-6th-managing-config-and-secret-objects" rel="noopener noreferrer"&gt;last article&lt;/a&gt;, we covered how to use and manage &lt;strong&gt;config objects&lt;/strong&gt; and how to &lt;strong&gt;store secrets&lt;/strong&gt; securely, using a Docker Compose YAML file, the Docker CLI, and the &lt;a href="https://labs.play-with-docker.com/" rel="noopener noreferrer"&gt;Play-with-docker&lt;/a&gt; lab. Today we are going to discuss more advanced topics.&lt;/p&gt;




&lt;h1&gt;
  
  
  Overview
&lt;/h1&gt;

&lt;p&gt;In this article, we will complete the Swarm tutorials by explaining advanced Config and Secret topics: how to use and manage &lt;strong&gt;config objects&lt;/strong&gt; and how to &lt;strong&gt;store secrets&lt;/strong&gt; securely and use them in your deployments, in an advanced and efficient way. In this lab we will also use the &lt;a href="https://labs.play-with-docker.com/" rel="noopener noreferrer"&gt;Play-with-docker&lt;/a&gt; lab.&lt;/p&gt;




&lt;h1&gt;
  
  
  Advanced Managing config objects
&lt;/h1&gt;

&lt;p&gt;In The &lt;a href="https://mohamed-eleraky.hashnode.dev/docker-swarm-series-6th-managing-config-and-secret-objects#heading-deployment-example" rel="noopener noreferrer"&gt;Last Deployment&lt;/a&gt;, we created a config file that contains the &lt;strong&gt;MongoDB username&lt;/strong&gt; and refers to the &lt;strong&gt;Mongo password&lt;/strong&gt; file. That is actually a nice way to manage config and store secrets without defining them as clear text in your docker-compose YAML file. However, &lt;strong&gt;what if we want to update our config file with new content?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In fact, Swarm doesn't have a way to manage config versioning; you have to do it on your own. That means you can't update the same config object with new content and have the service pick up the change automatically.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;However&lt;/strong&gt;, you can create another config object and update the service to use it. That way, you manage the config object versioning yourself.&lt;/p&gt;


&lt;h1&gt;
  
  
  Update The Config object example
&lt;/h1&gt;

&lt;p&gt;This example continues from where the &lt;a href="https://mohamed-eleraky.hashnode.dev/docker-swarm-series-6th-managing-config-and-secret-objects#heading-deployment-example" rel="noopener noreferrer"&gt;last article's&lt;/a&gt; example left off. Kindly deploy the example there and come back here to figure out how to update and version the config object file.&lt;/p&gt;

&lt;p&gt;Now we have our environment ready, and we have a service running using a config and secret stored in the Swarm database. Let's update the config and secret with the steps below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The config object cannot be deleted while a service is using it. You could create another config, update the service to use the new config object, and then delete the old one. However, it's preferable not to delete anything; instead, version your object files so you can roll back to an older one, then update the service YAML file to use the new version of the object file. That is exactly what we are going to do in the steps below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a new config file called &lt;code&gt;mongo_config-v2.txt&lt;/code&gt; with the updated content below:&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;MONGO_INITDB_ROOT_USERNAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;root
&lt;span class="nv"&gt;MONGO_INITDB_ROOT_PASSWORD_FILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/run/secrets/mongo_password.v2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the new config by running the following:&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker config create mongo_config.v2 mongo_config-v2.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;now you should have two config objects&lt;/li&gt;
&lt;/ul&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker config &lt;span class="nb"&gt;ls&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1689965324427%2Fcc900706-60e8-49d1-a6f8-e4bf90fae1e1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1689965324427%2Fcc900706-60e8-49d1-a6f8-e4bf90fae1e1.png"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h1&gt;
  
  
  Update The Secret object example
&lt;/h1&gt;

&lt;p&gt;Secrets are pretty much the same as configs: you cannot recreate one with the same name, and it cannot be deleted while a service is using it.&lt;/p&gt;

&lt;p&gt;Likewise, you need to create another secret with the new content and version, and update the service to use the newly created secret. Let's do this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a new secret file called &lt;code&gt;mongo_password-v2.txt&lt;/code&gt; with the content below&lt;/li&gt;
&lt;/ul&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;supersecretp@&lt;span class="nv"&gt;$$&lt;/span&gt;w0rd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the secret by running the following:&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker secret create mongo_password.v2 mongo_password-v2.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;now you should have two Secret objects&lt;/li&gt;
&lt;/ul&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker secret &lt;span class="nb"&gt;ls&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1689966551884%2F47c6e494-1209-4c8b-bf1c-62bd55eff5f3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1689966551884%2F47c6e494-1209-4c8b-bf1c-62bd55eff5f3.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Some of the listed secrets were created the &lt;strong&gt;long-syntax&lt;/strong&gt; way and others the &lt;strong&gt;short-syntax&lt;/strong&gt; way; we'll figure that out in a second.&lt;/p&gt;




&lt;h1&gt;
  
  
  Update The Service
&lt;/h1&gt;

&lt;p&gt;Now let's update the service YAML file to use the new config and secret we have just created. We will not delete or update anything in the original file; instead we will copy the YAML file to a new name, &lt;code&gt;docker-compose-v2.yml&lt;/code&gt;, so after a bunch of updates we will have the YAML files listed and versioned. This is a good way to roll back, keep history, and track updates.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Copy the YAML file with the new name &lt;code&gt;docker-compose-v2.yml&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cp &lt;/span&gt;docker-compose.yml docker-compose-v2.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open the version two compose file using &lt;code&gt;vim&lt;/code&gt; and submit the updates as the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Mongo service Config&lt;/li&gt;
&lt;/ul&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;configs:
  - &lt;span class="nb"&gt;source&lt;/span&gt;: mongo_config.v2
    target: /docker-entrypoint-initdb.d/mongo_config-v2.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Mongo service Secrets&lt;/li&gt;
&lt;/ul&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;secrets:
  - mongo_password.v2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Mongo-Express service Admin password environment variable&lt;/li&gt;
&lt;/ul&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ME_CONFIG_MONGODB_ADMINPASSWORD_FILE: /run/secrets/mongo_password.v2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Mongo-Express service Secrets&lt;/li&gt;
&lt;/ul&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;secrets:
  - mongo_password.v2
  - mongo_admin_username
  - me_username
  - me_password
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Main Secrets and Config definition&lt;/li&gt;
&lt;/ul&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;secrets:
  mongo_password.v2:
    external: &lt;span class="nb"&gt;true
&lt;/span&gt;mongo_admin_username:
  external: &lt;span class="nb"&gt;true
&lt;/span&gt;me_username:
  external: &lt;span class="nb"&gt;true
&lt;/span&gt;me_password:
  external: &lt;span class="nb"&gt;true

&lt;/span&gt;configs:
  mongo_config.v2:
    external: &lt;span class="nb"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Update the service by running the following:&lt;/li&gt;
&lt;/ul&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker stack deploy &lt;span class="nt"&gt;-c&lt;/span&gt; docker-compose-v2.yml myapp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1689968329179%2F7f857226-9306-4bf4-90fb-b53f1c994d01.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1689968329179%2F7f857226-9306-4bf4-90fb-b53f1c994d01.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ensure the service is running as expected by running the following&lt;/li&gt;
&lt;/ul&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker stack services myapp
docker stack ps myapp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1689968543438%2F1d74a05a-f1ca-4014-a5bc-1eff1bfad2a9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1689968543438%2F1d74a05a-f1ca-4014-a5bc-1eff1bfad2a9.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Yeah, the service's desired state matches the actual state. Also notice that Swarm has shut down the old tasks and created new ones with our updates.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can delete the old config object by running the following. Remember that you cannot delete a config or secret while a service is using it. We will delete the old config object, but keep the config files themselves for versioning reasons.&lt;/li&gt;
&lt;/ul&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker config &lt;span class="nb"&gt;rm &lt;/span&gt;mongo_config
&lt;span class="s2"&gt;"""
notice that you still have the config file after deleting
the config object from swarm storage.
"""&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Also, you can do the same with secrets.&lt;/p&gt;




&lt;h1&gt;
  
  
  Swarm secret long and short syntax
&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;"The main difference between the&lt;/em&gt; &lt;strong&gt;&lt;em&gt;long&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;and&lt;/em&gt; &lt;strong&gt;&lt;em&gt;short syntax&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;for creating Swarm secrets is the way in which the secret's value is specified."&lt;/em&gt; &lt;a href="http://WWW.POE.COM" rel="noopener noreferrer"&gt;&lt;em&gt;-POE-&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the &lt;strong&gt;long syntax&lt;/strong&gt;, the secret value is stored in a file and you run &lt;code&gt;docker secret create&lt;/code&gt;, passing the file path.&lt;/p&gt;

&lt;p&gt;This approach can be useful when creating secrets that hold certificate values, or any other complex values.&lt;/p&gt;

&lt;p&gt;"The long syntax allows mounting a secret in any path and even set the access permissions." &lt;a href="https://forums.docker.com/t/re-docker-swarm-series-6th-managing-config-and-secret-objects/136865" rel="noopener noreferrer"&gt;&lt;em&gt;-Metin-&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the other hand, the &lt;strong&gt;short syntax&lt;/strong&gt; provides a more convenient way of defining secrets: simply pipe the secret value to a &lt;code&gt;docker secret create&lt;/code&gt; command with the &lt;code&gt;-&lt;/code&gt; option at the end, like the one below:&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"mongo_pass"&lt;/span&gt; | docker secret create me_password -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The short syntax doesn't require a secret file.&lt;/p&gt;




&lt;h1&gt;
  
  
  How the Cluster manages stacks
&lt;/h1&gt;

&lt;p&gt;A stack is just a group of resources that the cluster orchestration tool manages. At this point you have an idea of how to build and manage your stack. However, Elton Stoneman, the author of &lt;a href="https://www.amazon.com/Learn-Docker-Month-Lunches-Stoneman/dp/1617297054/ref=sr_1_1?keywords=learn+docker+in+a+month+of+lunches&amp;amp;link_code=qs&amp;amp;qid=1690103529&amp;amp;sourceid=Mozilla-search&amp;amp;sr=8-1" rel="noopener noreferrer"&gt;Learn Docker in a Month of Lunches&lt;/a&gt;, has written down some points that I think will make this clearer. Let me summarize them:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Volumes can be created and removed by the Swarm. Stacks will create a default volume if the service image specifies one, and that volume will be removed when the stack is removed. If you specify &lt;strong&gt;a named volume&lt;/strong&gt; for the stack, it will be created when you deploy, but it &lt;strong&gt;won’t be removed&lt;/strong&gt; when you delete the stack.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Secrets and configs are created when an external file gets uploaded to the cluster. They’re stored in the cluster database and delivered to containers where the service definition requires them. They are effectively write-once read-many objects and can’t be updated.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Networks can be managed independently of applications, with admins explicitly creating networks for applications to use, or they can be managed by the Swarm, which will create and remove them when necessary. Every stack will be deployed with a &lt;strong&gt;network to attach services to&lt;/strong&gt;, even if &lt;strong&gt;one is not specified in the Compose file.&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Services are created or removed when stacks are deployed, and while they’re running, the Swarm monitors them constantly to ensure the desired service level is being met. Replicas that fail health checks get replaced, as do replicas that get lost when nodes go offline.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
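&lt;p&gt;To illustrate the first point with a minimal sketch (hypothetical names): &lt;code&gt;mongo-data&lt;/code&gt; below is a named volume, so the Swarm creates it on &lt;code&gt;docker stack deploy&lt;/code&gt; but leaves it in place after &lt;code&gt;docker stack rm&lt;/code&gt;:&lt;/p&gt;

```yaml
version: "3.7"
services:
  mongo:
    image: mongo:4.4
    volumes:
      - mongo-data:/data/db   # named volume: survives "docker stack rm"

volumes:
  mongo-data:                 # created at deploy time, never auto-removed
```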




&lt;h1&gt;
  
  
  References
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="http://www.poe.com" rel="noopener noreferrer"&gt;POE AI&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://forums.docker.com/t/re-docker-swarm-series-6th-managing-config-and-secret-objects/136865" rel="noopener noreferrer"&gt;Metin guidance&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;That's it: very straightforward, very fast 🚀. I hope this article inspired you, and I'd appreciate your feedback. Thank you.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>swarm</category>
    </item>
    <item>
      <title>Docker Swarm Series: #6th Managing config and secret objects</title>
      <dc:creator>Mohamed El Eraky</dc:creator>
      <pubDate>Tue, 18 Jul 2023 06:27:32 +0000</pubDate>
      <link>https://dev.to/mohamedeleraki/docker-swarm-series-6th-managing-config-and-secret-objects-1kg</link>
      <guid>https://dev.to/mohamedeleraki/docker-swarm-series-6th-managing-config-and-secret-objects-1kg</guid>
      <description>&lt;h1&gt;
  
  
  Inception
&lt;/h1&gt;

&lt;p&gt;Hello everyone! This article is part of the Swarm series. The knowledge in this series builds in sequence, so check out the &lt;a href="https://mohamed-eleraky.hashnode.dev/series/docker-swarm"&gt;Swarm series&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In the &lt;a href="https://mohamed-eleraky.hashnode.dev/docker-swarm-series-5th-troubleshooting"&gt;last article&lt;/a&gt;, we covered troubleshooting: how to pinpoint the exact issue and fix it, using the Docker CLI and the &lt;a href="https://labs.play-with-docker.com/"&gt;Play-with-docker&lt;/a&gt; lab.&lt;/p&gt;




&lt;h1&gt;
  
  
  Overview
&lt;/h1&gt;

&lt;p&gt;In this article, we will continue the Swarm tutorials by explaining how to use and manage &lt;strong&gt;config objects&lt;/strong&gt; and how to &lt;strong&gt;store secrets&lt;/strong&gt; securely and use them in your deployment. This lab also uses the &lt;a href="https://labs.play-with-docker.com/"&gt;Play-with-docker&lt;/a&gt; lab.&lt;/p&gt;




&lt;h1&gt;
  
  
  Docker Config Overview
&lt;/h1&gt;

&lt;p&gt;In fact, a config object is a file that stores configuration you want to share among multiple container services. This config file can hold &lt;a href="https://www.amazon.com/Learn-Docker-Month-Lunches-Stoneman/dp/1617297054/ref=sr_1_1?keywords=month+of+lunch+docker&amp;amp;link_code=qs&amp;amp;qid=1689585935&amp;amp;sourceid=Mozilla-search&amp;amp;sr=8-1"&gt;any type of data&lt;/a&gt; (e.g. JSON, key-value pairs, XML).&lt;/p&gt;

&lt;p&gt;The value of a config object is that it ensures consistency across all the services and containers that use the same configuration data. This helps avoid configuration errors and reduces the risk of downtime; see the &lt;a href="https://docs.docker.com/engine/reference/commandline/config/"&gt;docker config docs&lt;/a&gt;.&lt;/p&gt;




&lt;h1&gt;
  
  
  Docker Secret Overview
&lt;/h1&gt;

&lt;p&gt;Secrets are almost exactly like configs: "&lt;em&gt;Secrets are encrypted throughout their lifetime in the cluster. The data is stored encrypted in the database shared by the managers, and secrets are only delivered to nodes that are scheduled to run replicas that need the secret. Secrets are encrypted in transit from the manager node to the worker, and they are only unencrypted inside the container, where they appear with the original file contents."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;"The key difference with secrets is that you can only read them in plain text at one point in the workflow: inside the container when they are loaded from the Swarm."&lt;/em&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://www.amazon.com/Learn-Docker-Month-Lunches-Stoneman/dp/1617297054/ref=sr_1_1?keywords=month+of+lunch+docker&amp;amp;link_code=qs&amp;amp;qid=1689585935&amp;amp;sourceid=Mozilla-search&amp;amp;sr=8-1"&gt;&lt;em&gt;-Docker in a month of lunches-&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h1&gt;
  
  
  Deployment Example
&lt;/h1&gt;

&lt;p&gt;In this example we will deploy &lt;strong&gt;MongoDB&lt;/strong&gt; and &lt;strong&gt;Mongo_Express&lt;/strong&gt; container services, storing the &lt;strong&gt;configs&lt;/strong&gt; and &lt;strong&gt;secrets&lt;/strong&gt; in external files and loading them into the &lt;strong&gt;swarm database&lt;/strong&gt; to ensure the secret is secure and doesn't appear in &lt;strong&gt;clear text&lt;/strong&gt;. We'll use the &lt;a href="https://labs.play-with-docker.com/"&gt;Play-with-Docker&lt;/a&gt; labs, Docker Swarm mode, a Docker Compose file, config &amp;amp; secret files, and the Docker CLI.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Open the &lt;a href="https://labs.play-with-docker.com/"&gt;Play-with-Docker&lt;/a&gt; labs and create the environment below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FBdYZh0y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1689622709143/9b04a5fd-b89f-420e-9d83-3443d905ed9c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FBdYZh0y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1689622709143/9b04a5fd-b89f-420e-9d83-3443d905ed9c.png" alt="" width="800" height="374"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a config object file called &lt;code&gt;mongo_config.txt&lt;/code&gt; that holds the common configs, including the username and the path to the password file.&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;MONGO_INITDB_ROOT_USERNAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;admin
&lt;span class="nv"&gt;MONGO_INITDB_ROOT_PASSWORD_FILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/run/secrets/mongo_password
&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;
&lt;/ul&gt;



&lt;ul&gt;
&lt;li&gt;Create a secret object file with the MongoDB password, called &lt;code&gt;mongo_password.txt&lt;/code&gt;, with the following content:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;supersecretpassword
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7NqvSJX5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1689623063787/8c405dd9-dd39-448d-8bd8-18f5b9f89bcd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7NqvSJX5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1689623063787/8c405dd9-dd39-448d-8bd8-18f5b9f89bcd.png" alt="" width="800" height="401"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Create a secret object from the secret file created:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker secret create mongo_password mongo_password.txt
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Qp9GFNb1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1689623213344/a3d67e9d-c769-4cac-96ba-711fbbb92f11.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Qp9GFNb1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1689623213344/a3d67e9d-c769-4cac-96ba-711fbbb92f11.png" alt="" width="800" height="298"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a config object from the config file created:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker config create mongo_config mongo_config.txt
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DiM8tGoU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1689623329299/df709655-9dbe-4165-a66a-f513dff5449d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DiM8tGoU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1689623329299/df709655-9dbe-4165-a66a-f513dff5449d.png" alt="" width="506" height="58"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check the secrets in the cluster:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker secret &lt;span class="nb"&gt;ls&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kuYWuZ1T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1689623461461/d83e4433-04c7-4e55-ba39-6777350d6fb4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kuYWuZ1T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1689623461461/d83e4433-04c7-4e55-ba39-6777350d6fb4.png" alt="" width="780" height="79"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check the configs in the cluster:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker config &lt;span class="nb"&gt;ls&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7lxx23Qz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1689623507856/6e3e36cd-f2c3-4136-9d94-5a35b6ebb32a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7lxx23Qz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1689623507856/6e3e36cd-f2c3-4136-9d94-5a35b6ebb32a.png" alt="" width="688" height="75"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Inspect the secret with the &lt;code&gt;--pretty&lt;/code&gt; flag:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker secret inspect &lt;span class="nt"&gt;--pretty&lt;/span&gt; mongo_password
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9LqK3Hjg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1689623674044/3537a50e-8e06-4192-8508-2efb5454c8e5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9LqK3Hjg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1689623674044/3537a50e-8e06-4192-8508-2efb5454c8e5.png" alt="" width="560" height="163"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note that the inspection doesn't print out the secret file's content.&lt;br&gt;&lt;br&gt;&lt;/p&gt;
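&lt;p&gt;For context: inside a replica, each secret is delivered as a plain file under &lt;code&gt;/run/secrets/&lt;/code&gt;, which is exactly what the &lt;code&gt;MONGO_INITDB_ROOT_PASSWORD_FILE&lt;/code&gt; setting in our config points at. A small simulation of that startup step, using a temp directory in place of the real mount:&lt;/p&gt;

```shell
# Simulate the /run/secrets mount with a temp dir (inside the real container
# the path would be /run/secrets/mongo_password)
secrets_dir=$(mktemp -d)
printf '%s' 'supersecretpassword' > "$secrets_dir/mongo_password"

# This is what the entrypoint does when a *_PASSWORD_FILE variable is set:
MONGO_INITDB_ROOT_PASSWORD=$(cat "$secrets_dir/mongo_password")
echo "$MONGO_INITDB_ROOT_PASSWORD"   # prints: supersecretpassword
```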

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Inspect the config with the &lt;code&gt;--pretty&lt;/code&gt; flag:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker config inspect &lt;span class="nt"&gt;--pretty&lt;/span&gt; mongo_config
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PGUayUtI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1689623841096/825d5f50-e7f2-4b39-af0c-6cdc113c018e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PGUayUtI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1689623841096/825d5f50-e7f2-4b39-af0c-6cdc113c018e.png" alt="" width="635" height="176"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note that the inspection prints out the content of the config file, because configs aren't encrypted the way secrets are.&lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a Docker Compose file called &lt;em&gt;docker-compose.yml&lt;/em&gt; with the &lt;strong&gt;MongoDB&lt;/strong&gt; and &lt;strong&gt;Mongo_Express&lt;/strong&gt; services, and reference the &lt;strong&gt;config&lt;/strong&gt; and &lt;strong&gt;secret&lt;/strong&gt; objects.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3.7"&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;mongo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mongo:4.4&lt;/span&gt;
    &lt;span class="na"&gt;configs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mongo_config&lt;/span&gt;
        &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/docker-entrypoint-initdb.d/mongo_config.txt&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;MONGO_INITDB_DATABASE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test&lt;/span&gt;
    &lt;span class="na"&gt;secrets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;mongo_password&lt;/span&gt;
    &lt;span class="na"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
      &lt;span class="na"&gt;placement&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;constraints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;node.role == manager&lt;/span&gt;

  &lt;span class="na"&gt;mongo-express&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mongo-express:0.54&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;8081:8081"&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;ME_CONFIG_MONGODB_ADMINUSERNAME_FILE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/run/secrets/mongo_admin_username&lt;/span&gt;
      &lt;span class="na"&gt;ME_CONFIG_MONGODB_ADMINPASSWORD_FILE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/run/secrets/mongo_password&lt;/span&gt;
      &lt;span class="na"&gt;ME_CONFIG_MONGODB_SERVER&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mongo&lt;/span&gt;
      &lt;span class="na"&gt;ME_CONFIG_BASICAUTH_USERNAME_FILE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/run/secrets/me_username&lt;/span&gt;
      &lt;span class="na"&gt;ME_CONFIG_BASICAUTH_PASSWORD_FILE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/run/secrets/me_password&lt;/span&gt;
    &lt;span class="na"&gt;secrets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;mongo_password&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;mongo_admin_username&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;me_username&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;me_password&lt;/span&gt;
    &lt;span class="na"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
      &lt;span class="na"&gt;placement&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;constraints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;node.role == manager&lt;/span&gt;

&lt;span class="na"&gt;secrets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;mongo_password&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;external&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;mongo_admin_username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;external&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;me_username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;external&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;me_password&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;external&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;

&lt;span class="na"&gt;configs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;mongo_config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;external&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;An important hint: once you create a secret, you cannot update it. So we'll create the remaining secrets referenced in the YAML file as below:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"mongo_pass"&lt;/span&gt; | docker secret create me_password -
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"mongo_user"&lt;/span&gt; | docker secret create me_username -
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"admin"&lt;/span&gt; | docker secret create mongo_admin_username -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YV8DN-9f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1689625172417/0939cc66-6709-435c-8c09-ee30dc448480.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YV8DN-9f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1689625172417/0939cc66-6709-435c-8c09-ee30dc448480.png" alt="" width="800" height="223"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deploy the stack to the Swarm cluster using the following:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker stack deploy &lt;span class="nt"&gt;-c&lt;/span&gt; docker-compose.yml myapp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--K6tsvJi5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1689624877272/6b7ee774-1bbb-4e8e-9321-1c5f3c8c9a8b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--K6tsvJi5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1689624877272/6b7ee774-1bbb-4e8e-9321-1c5f3c8c9a8b.png" alt="" width="800" height="410"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;List the deployed stacks:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker stack &lt;span class="nb"&gt;ls&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Q5fGh3Z6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1689625280085/b81198f4-eb63-4f64-a7be-ed0e159710c5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Q5fGh3Z6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1689625280085/b81198f4-eb63-4f64-a7be-ed0e159710c5.png" alt="" width="379" height="78"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Get the stack's services:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker stack services myapp
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uaXLtw0W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1689625364989/e8345b58-cd4e-4944-9d8e-a2746c830ee1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uaXLtw0W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1689625364989/e8345b58-cd4e-4944-9d8e-a2746c830ee1.png" alt="" width="800" height="76"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Get more info about the service tasks:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker stack ps myapp
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ak7HRZfk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1689625456568/3a290873-ca57-484f-baef-423b8b5686d6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ak7HRZfk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1689625456568/3a290873-ca57-484f-baef-423b8b5686d6.png" alt="" width="800" height="119"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As shown, the container services are up and running.&lt;/p&gt;




&lt;h1&gt;
  
  
  Steps summarization
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create a config object file.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a secret object file.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a secret object from the file content.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a config object from the file content.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;List and inspect the config and secret, and ensure the secret is secured and does not appear as clear text.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a Docker-compose YAML file for the stack deployment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Manually create the missing secrets, since a secret cannot be updated once created.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deploy the stack using the docker-compose YAML file and Docker CLI.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Print out the deployed services and ensure they are up and running.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;References&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.amazon.com/Learn-Docker-Month-Lunches-Stoneman/dp/1617297054/ref=sr_1_1?keywords=learn+docker+in+a+month+of+lunches&amp;amp;link_code=qs&amp;amp;qid=1689766789&amp;amp;sourceid=Mozilla-search&amp;amp;sr=8-1"&gt;Learn Docker in a Month of Lunches&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;That's it: very straightforward, very fast 🚀. I hope this article inspired you, and I'd appreciate your feedback. Thank you.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>swarm</category>
    </item>
    <item>
      <title>Docker Swarm Series: #5th Troubleshooting</title>
      <dc:creator>Mohamed El Eraky</dc:creator>
      <pubDate>Wed, 12 Jul 2023 20:09:36 +0000</pubDate>
      <link>https://dev.to/mohamedeleraki/docker-swarm-series-5th-troubleshooting-422l</link>
      <guid>https://dev.to/mohamedeleraki/docker-swarm-series-5th-troubleshooting-422l</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--u7BZrCXp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5dhscpvf2hb3mx334z13.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--u7BZrCXp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5dhscpvf2hb3mx334z13.png" alt="Image description" width="741" height="400"&gt;&lt;/a&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  Inception
&lt;/h1&gt;

&lt;p&gt;Hello everyone! This article is part of the Swarm series. The knowledge in this series builds in sequence, so check out the Swarm series section above.&lt;/p&gt;

&lt;p&gt;In the &lt;a href="https://dev.to/mohamedeleraki/docker-swarm-series-4th-deploy-a-stack-to-a-swarm-cluster-36eg"&gt;last article&lt;/a&gt;, we covered how to deploy a stack to a Swarm cluster and the value behind stacks, deploying a simple Nginx web app and a MySQL database using a Docker Compose YAML file, the stack deploy command, and the &lt;a href="https://labs.play-with-docker.com/"&gt;Play-with-docker&lt;/a&gt; lab.&lt;/p&gt;




&lt;h1&gt;
  
  
  Overview
&lt;/h1&gt;

&lt;p&gt;In this article we continue the Swarm tutorials from where we left off: the container stack ran into errors and cannot start. We'll go through troubleshooting step by step and find out how to pinpoint the exact issue. This lab also uses the &lt;a href="https://labs.play-with-docker.com/"&gt;Play-with-docker&lt;/a&gt; lab.&lt;/p&gt;




&lt;h1&gt;
  
  
  Set up the environment
&lt;/h1&gt;

&lt;p&gt;To start troubleshooting, we first need to redeploy the &lt;a href="https://dev.to/mohamedeleraki/docker-swarm-series-4th-deploy-a-stack-to-a-swarm-cluster-36eg"&gt;last stack&lt;/a&gt;, or simply deploy the YAML file below in the &lt;a href="https://labs.play-with-docker.com/#"&gt;play-with-docker&lt;/a&gt; lab:&lt;br&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;version: &lt;span class="s1"&gt;'3'&lt;/span&gt;

services:  &lt;span class="c"&gt;# services list&lt;/span&gt;
  nginx:  &lt;span class="c"&gt;# service name&lt;/span&gt;
    image: nginx:latest  &lt;span class="c"&gt;# specify an image with it's tag&lt;/span&gt;
    ports:  &lt;span class="c"&gt;# defining ports&lt;/span&gt;
      - &lt;span class="s2"&gt;"8080:80"&lt;/span&gt;
    volumes:  &lt;span class="c"&gt;# mount volume disk | mount nginx.conf stored on local device to nginx container&lt;/span&gt;
      - ./nginx.conf:/etc/nginx/nginx.conf

    &lt;span class="c"&gt;# establish connection to mysql container by mention the defined variables at mysql environment below&lt;/span&gt;
    environment:
      MYSQL_HOST: mysql
      MYSQL_PORT: 3306
      MYSQL_DATABASE: myapp
      MYSQL_USER: root
      MYSQL_PASSWORD: password

    &lt;span class="c"&gt;# The build of this container in depends on mysql&lt;/span&gt;
    depends_on:
      - mysql

  mysql:
    image: mysql:latest
    volumes:
      - ./data:/var/lib/mysql

   &lt;span class="c"&gt;# Define mysql variables&lt;/span&gt;
    environment:
      MYSQL_DATABASE: myapp
      MYSQL_USER: root
      MYSQL_PASSWORD: password
      MYSQL_ROOT_PASSWORD: password
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Deploy&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker stack deploy &lt;span class="nt"&gt;-c&lt;/span&gt; docker-stack-file.yaml myapp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Print out the deployed services' status&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker stack services myapp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OcgIq-SB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mkalvp30ih9z05p82gt2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OcgIq-SB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mkalvp30ih9z05p82gt2.png" alt="Image description" width="800" height="335"&gt;&lt;/a&gt;&lt;/p&gt;






&lt;h1&gt;
  
  
  Troubleshooting
&lt;/h1&gt;

&lt;p&gt;When you print out the services of the &lt;strong&gt;myapp&lt;/strong&gt; stack, you'll find the REPLICAS column shows 0/1: the desired replica count for this container service is 1, but the actual count is 0. That means the container service isn't running yet.&lt;/p&gt;

&lt;p&gt;The Swarm will keep retrying to deploy these services, but with the same result; let's figure out where the issue is.&lt;/p&gt;




&lt;h2&gt;
  
  
  Down we go
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2ioMnQ3c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i9xdqyrsilvkjcad0dgo.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2ioMnQ3c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i9xdqyrsilvkjcad0dgo.jpg" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run the following to list the deployed stacks:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker stack &lt;span class="nb"&gt;ls&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ezprzU4F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mwkk5w0p7zwterai80rq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ezprzU4F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mwkk5w0p7zwterai80rq.png" alt="Image description" width="404" height="128"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;We have one stack called &lt;strong&gt;myapp&lt;/strong&gt; with two services.&lt;/em&gt;&lt;/p&gt;





&lt;ul&gt;
&lt;li&gt;Run the following to list the services of that stack:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker stack services myapp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--48416T1J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pmpgwwokayaab8hxkrl7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--48416T1J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pmpgwwokayaab8hxkrl7.png" alt="Image description" width="700" height="103"&gt;&lt;/a&gt;&lt;br&gt;&lt;/p&gt;

&lt;p&gt;As mentioned above, the actual state is 0; let's find out &lt;strong&gt;why&lt;/strong&gt;.&lt;/p&gt;





&lt;ul&gt;
&lt;li&gt;Run the following to list the stack's tasks with more detail:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker stack ps myapp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fLMX4v-r--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/20gydrmvkb1sxaafxxn3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fLMX4v-r--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/20gydrmvkb1sxaafxxn3.png" alt="Image description" width="800" height="160"&gt;&lt;/a&gt;&lt;/p&gt;





&lt;p&gt;&lt;br&gt;&lt;br&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sgQU02_V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/725vcv1y362u3ayx0jm1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sgQU02_V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/725vcv1y362u3ayx0jm1.jpg" alt="Image description" width="474" height="474"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Okay, now let's explain what we're seeing here.&lt;br&gt;&lt;/p&gt;

&lt;p&gt;Let's focus on the area highlighted in yellow inside the red square. The first column is the &lt;strong&gt;&lt;em&gt;container service ID&lt;/em&gt;&lt;/strong&gt;, the second is the &lt;strong&gt;&lt;em&gt;container service name&lt;/em&gt;&lt;/strong&gt;, and the fourth is the &lt;strong&gt;&lt;em&gt;Node name&lt;/em&gt;&lt;/strong&gt; of the host for this service. Next to it is the &lt;strong&gt;&lt;em&gt;Desired state&lt;/em&gt;&lt;/strong&gt; column with a &lt;strong&gt;Ready&lt;/strong&gt; status, meaning Swarm was trying to deploy on this host, while the next column, &lt;strong&gt;&lt;em&gt;Current state&lt;/em&gt;&lt;/strong&gt;, shows &lt;em&gt;&lt;strong&gt;rejected&lt;/strong&gt;&lt;/em&gt;. To see why that happened, let's look at the last column, &lt;strong&gt;&lt;em&gt;Error&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;
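To isolate just the columns discussed above, `docker stack ps` also accepts a Go-template `--format` flag. A small sketch (it needs a running Swarm with the stack deployed, so take it as illustrative):

```shell
# show only name, node, desired vs. current state, and the error column
docker stack ps --format "table {{.Name}}\t{{.Node}}\t{{.DesiredState}}\t{{.CurrentState}}\t{{.Error}}" myapp
```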





&lt;p&gt;However, the Error column &lt;strong&gt;isn't wide enough&lt;/strong&gt;, so let's expand it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker stack ps &lt;span class="nt"&gt;--no-trunc&lt;/span&gt; myapp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cA-pw-nB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ce26f4se91svhxt1prng.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cA-pw-nB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ce26f4se91svhxt1prng.png" alt="Image description" width="800" height="228"&gt;&lt;/a&gt;&lt;/p&gt;






&lt;p&gt;&lt;br&gt;&lt;br&gt;&lt;br&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KKixBldy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nl5wmt9j44j6wjl64ly1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KKixBldy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nl5wmt9j44j6wjl64ly1.png" alt="Image description" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Yes, but&lt;/strong&gt; there are too many services listed, so let's focus on the Nginx service by running the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker service ps &lt;span class="nt"&gt;--no-trunc&lt;/span&gt; myapp_nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1Tta0K8S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4uwwmaf05cmks37m8t5f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1Tta0K8S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4uwwmaf05cmks37m8t5f.png" alt="Image description" width="800" height="132"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now the error is clear enough: &lt;strong&gt;&lt;em&gt;"bind source path does not exist: /root/nginx.conf"&lt;/em&gt;&lt;/strong&gt; &lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;p&gt;We never created the &lt;strong&gt;&lt;em&gt;nginx.conf&lt;/em&gt; file&lt;/strong&gt; that is mounted inside the container. Let's fix that.&lt;br&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a very simple &lt;strong&gt;&lt;em&gt;nginx.conf&lt;/em&gt;&lt;/strong&gt; file in the same directory as the YAML file:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vim nginx.conf

&lt;span class="c"&gt;# past the below in it&lt;/span&gt;
worker_processes 1&lt;span class="p"&gt;;&lt;/span&gt;

events &lt;span class="o"&gt;{&lt;/span&gt;
  worker_connections 1024&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

http &lt;span class="o"&gt;{&lt;/span&gt;
  server &lt;span class="o"&gt;{&lt;/span&gt;
    listen 80&lt;span class="p"&gt;;&lt;/span&gt;
    server_name example.com&lt;span class="p"&gt;;&lt;/span&gt;
    root /var/www/html&lt;span class="p"&gt;;&lt;/span&gt;

    location / &lt;span class="o"&gt;{&lt;/span&gt;
      try_files &lt;span class="nv"&gt;$uri&lt;/span&gt; &lt;span class="nv"&gt;$uri&lt;/span&gt;/ &lt;span class="o"&gt;=&lt;/span&gt;404&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
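If you prefer not to open an editor, the same file can be written non-interactively with a heredoc; this is a plain shell sketch that produces exactly the configuration shown above:

```shell
# write the same minimal nginx.conf without opening an editor
cat > nginx.conf <<'EOF'
worker_processes 1;

events {
  worker_connections 1024;
}

http {
  server {
    listen 80;
    server_name example.com;
    root /var/www/html;

    location / {
      try_files $uri $uri/ =404;
    }
  }
}
EOF

# quick sanity check that the file landed where the stack expects it
grep -q 'worker_connections 1024;' nginx.conf && echo "nginx.conf written"
```

The quoted `'EOF'` delimiter keeps the shell from expanding `$uri` inside the heredoc.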



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VRLn-_F0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qdpk8krxlguidnpfrh1p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VRLn-_F0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qdpk8krxlguidnpfrh1p.png" alt="Image description" width="800" height="263"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Update the stack by running the following:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker stack deploy &lt;span class="nt"&gt;-c&lt;/span&gt; docker-stack.yaml myapp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--b_mXsiij--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mt8zkvbq12px0q855m3o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--b_mXsiij--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mt8zkvbq12px0q855m3o.png" alt="Image description" width="452" height="73"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Now the container service is running on the manager2 node:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qek5d27p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ibmq4w51pof20qhf9p43.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qek5d27p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ibmq4w51pof20qhf9p43.png" alt="Image description" width="800" height="177"&gt;&lt;/a&gt;&lt;br&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--orSpMw95--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i36sz3kjqjdrtyat9zm0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--orSpMw95--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i36sz3kjqjdrtyat9zm0.png" alt="Image description" width="597" height="77"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Optional&lt;/em&gt;: SSH into manager2 and run the following:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# ssh remote&lt;/span&gt;
ssh root@manager2

&lt;span class="c"&gt;# print containers list&lt;/span&gt;
docker container &lt;span class="nb"&gt;ls&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9gmtI1wl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3uu7gqhjkv650kn9y5g4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9gmtI1wl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3uu7gqhjkv650kn9y5g4.png" alt="Image description" width="800" height="63"&gt;&lt;/a&gt;&lt;br&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Let's run a simple smoke test with curl:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl localhost:8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IkwiIQAV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8n2eaocfd10hdal0ztdw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IkwiIQAV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8n2eaocfd10hdal0ztdw.png" alt="Image description" width="379" height="122"&gt;&lt;/a&gt;&lt;br&gt;&lt;/p&gt;






&lt;h1&gt;
  
  
  Spotlight&lt;br&gt;&lt;br&gt;
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0uTPExoX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qotan1bu1wfmtibxppw8.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0uTPExoX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qotan1bu1wfmtibxppw8.jpeg" alt="Image description" width="474" height="351"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;p&gt;Here we troubleshot the &lt;strong&gt;myapp_nginx&lt;/strong&gt; service; if you troubleshoot the &lt;strong&gt;myapp_mysql&lt;/strong&gt; service, you will find it fails for the same kind of reason. However, it is more &lt;em&gt;complex&lt;/em&gt;, and there's no room to discuss it here.&lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;p&gt;The issue we faced here was a failure to &lt;strong&gt;start up&lt;/strong&gt; the service, and we spotted it while &lt;strong&gt;printing out&lt;/strong&gt; the service status with the &lt;code&gt;docker service ps --no-trunc myapp_nginx&lt;/code&gt; command. &lt;strong&gt;What if&lt;/strong&gt; the container service starts successfully, but the app hosted inside it &lt;em&gt;-the Nginx web app, in our case-&lt;/em&gt; hits an issue that blocks its progress? Then you should move to the next &lt;strong&gt;layer of troubleshooting&lt;/strong&gt;: the &lt;strong&gt;container service logs&lt;/strong&gt;. Fetch them by running the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker service logs &amp;lt;service-name&amp;gt;

&lt;span class="c"&gt;# for a specific task&lt;/span&gt;
docker service logs &amp;lt;service-name&amp;gt; &lt;span class="nt"&gt;--task&lt;/span&gt; &amp;lt;task-id&amp;gt;

&lt;span class="c"&gt;# for interactive session&lt;/span&gt;
docker service logs &lt;span class="nt"&gt;--follow&lt;/span&gt; &lt;span class="nt"&gt;--tail&lt;/span&gt; 100 &amp;lt;service-name&amp;gt;

&lt;span class="c"&gt;# or land on the hosted node and run:&lt;/span&gt;
docker container logs &amp;lt;container-id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alternatively, you can log in to the hosting node, attach to the container with the &lt;code&gt;docker exec&lt;/code&gt; command, and troubleshoot from inside it.&lt;/p&gt;
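That exec-based route can look like the sketch below; the container ID is a placeholder to substitute, and it assumes the image ships a shell (the official nginx image does):

```shell
# on the node that hosts the task, find the container
docker container ls --filter name=myapp_nginx

# open an interactive shell inside it (substitute the real container ID)
docker exec -it <container-id> sh

# inside the container you can then inspect, for example:
#   cat /etc/nginx/nginx.conf
#   ls /var/log/nginx/
```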






&lt;p&gt;That's it: very straightforward, very fast🚀. &lt;br&gt;
Hope this article inspired you; I'd appreciate your feedback. Thank you.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>swarm</category>
      <category>stack</category>
    </item>
    <item>
      <title>Docker Swarm Series: #4th Deploy a Stack to a swarm Cluster</title>
      <dc:creator>Mohamed El Eraky</dc:creator>
      <pubDate>Mon, 03 Jul 2023 05:44:48 +0000</pubDate>
      <link>https://dev.to/mohamedeleraki/docker-swarm-series-4th-deploy-a-stack-to-a-swarm-cluster-36eg</link>
      <guid>https://dev.to/mohamedeleraki/docker-swarm-series-4th-deploy-a-stack-to-a-swarm-cluster-36eg</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--u7BZrCXp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5dhscpvf2hb3mx334z13.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--u7BZrCXp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5dhscpvf2hb3mx334z13.png" alt="Image description" width="741" height="400"&gt;&lt;/a&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Inception &lt;br&gt;&lt;br&gt;
&lt;/h1&gt;

&lt;p&gt;Hello everyone, this article is part of The Swarm series. The knowledge in this series is built in sequence; check out The Swarm series section above.&lt;/p&gt;

&lt;p&gt;In the &lt;a href="https://dev.to/mohamedeleraki/docker-swarm-series-3th-create-a-highly-available-container-faa"&gt;last article&lt;/a&gt; we covered how to deploy a highly available container, deploying a simple Nginx web app using the Docker CLI and the &lt;a href="https://labs.play-with-docker.com/"&gt;Play-with-docker&lt;/a&gt; lab.&lt;/p&gt;






&lt;h1&gt;
  
  
  Overview &lt;br&gt;&lt;br&gt;
&lt;/h1&gt;

&lt;p&gt;In this article we will continue the Swarm tutorials: we will deploy a stack consisting of Nginx and MySQL containers, and explain the value of using a stack for deployment. In this lab we will use the &lt;a href="https://labs.play-with-docker.com/"&gt;Play-with-docker&lt;/a&gt; lab, Docker stack commands, and a Compose YAML file.&lt;br&gt;&lt;/p&gt;






&lt;h1&gt;
  
  
  What is a Stack?&lt;br&gt;&lt;br&gt;
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://docs.docker.com/engine/reference/commandline/stack/"&gt;Docker Stack command&lt;/a&gt; is used for Managing Swarm stacks, The Stack use to run multi-container Docker applications. With Stack you use a YAML file to define your Containers configurations, Then you use a single command to start and run The Stack from this YAML file.&lt;/p&gt;

&lt;p&gt;It's the same concept as the docker-compose command in normal Docker mode; however, the docker stack command works with Swarm mode.&lt;br&gt;
&lt;br&gt;&lt;br&gt;&lt;/p&gt;
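The parallel shows in the commands themselves; both read the same Compose file, with myapp as an example stack name:

```shell
# normal Docker mode (Compose):
docker-compose -f docker-stack.yaml up -d

# Swarm mode (stack):
docker stack deploy -c docker-stack.yaml myapp
```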

&lt;h2&gt;
  
  
  The value behind the Stack:&lt;br&gt;
&lt;/h2&gt;

&lt;p&gt;Defining all of a stack's containers and configuration in one YAML file &lt;em&gt;-a Docker Compose file-&lt;/em&gt; has the same value as using a blueprint. The blueprint &lt;em&gt;-the YAML file, in our case-&lt;/em&gt; is also very useful as history: instead of saving the commands for your running containers and their configuration in files scattered around, you keep all of the stack's containers and configuration in one file and simply publish it with one single command.&lt;br&gt;
And when you want to update your stack's configuration, instead of saving the updated commands somewhere and dealing with the hassle of remembering which one to use in the future, you simply update the same YAML file and redeploy it with the stack command. So you always have one single file holding the up-to-date configuration.&lt;/p&gt;






&lt;h1&gt;
  
  
  Example Docker Stack YAML file&lt;br&gt;&lt;br&gt;
&lt;/h1&gt;

&lt;p&gt;Let's start out by creating our stack YAML file:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Start by building the previous Swarm environment as shown &lt;a href="https://dev.to/mohamedeleraki/docker-swarm-series-3th-create-a-highly-available-container-faa"&gt;Here&lt;/a&gt;, without deploying any applications.&lt;/li&gt;
&lt;li&gt;On the manager node, create a file called &lt;code&gt;docker-stack.yaml&lt;/code&gt; with the command below:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vim docker-stack.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Paste the content below into the YAML file.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;version: &lt;span class="s1"&gt;'3'&lt;/span&gt;

services:  &lt;span class="c"&gt;# services list&lt;/span&gt;
  nginx:  &lt;span class="c"&gt;# service name&lt;/span&gt;
    image: nginx:latest  &lt;span class="c"&gt;# specify an image with it's tag&lt;/span&gt;
    ports:  &lt;span class="c"&gt;# defining ports&lt;/span&gt;
      - &lt;span class="s2"&gt;"8080:80"&lt;/span&gt;
    volumes:  &lt;span class="c"&gt;# mount volume disk | mount nginx.conf stored on local device to nginx container&lt;/span&gt;
      - ./nginx.conf:/etc/nginx/nginx.conf

    &lt;span class="c"&gt;# establish connection to mysql container by mention the defined variables at mysql environment below&lt;/span&gt;
    environment:
      MYSQL_HOST: mysql
      MYSQL_PORT: 3306
      MYSQL_DATABASE: myapp
      MYSQL_USER: root
      MYSQL_PASSWORD: password

    &lt;span class="c"&gt;# The build of this container in depends on mysql&lt;/span&gt;
    depends_on:
      - mysql

  mysql:
    image: mysql:latest
    volumes:
      - ./data:/var/lib/mysql

   &lt;span class="c"&gt;# Define mysql variables&lt;/span&gt;
    environment:
      MYSQL_DATABASE: myapp
      MYSQL_USER: root
      MYSQL_PASSWORD: password
      MYSQL_ROOT_PASSWORD: password
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Now you're ready to deploy your stack&lt;/li&gt;
&lt;/ul&gt;






&lt;h1&gt;
  
  
  Stack deploy&lt;br&gt;&lt;br&gt;
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Run the following command to deploy the &lt;code&gt;myapp&lt;/code&gt; stack:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker stack deploy &lt;span class="nt"&gt;-c&lt;/span&gt; docker-stack.yaml myapp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;This command creates a network for the stack, the nginx service, and the mysql service.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wH_7t537--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2z15fo9qzpts91trl6fs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wH_7t537--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2z15fo9qzpts91trl6fs.png" alt="Image description" width="800" height="352"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;List the deployed stacks with the following:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker stack &lt;span class="nb"&gt;ls&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--s_5H9_px--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gfg233nlcf5lslqke777.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--s_5H9_px--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gfg233nlcf5lslqke777.png" alt="Image description" width="800" height="353"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;List the container services with the following:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# stack services&lt;/span&gt;
docker stack services myapp

&lt;span class="c"&gt;# all swarm services&lt;/span&gt;
docker service &lt;span class="nb"&gt;ls&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RZtZTT-Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bkyki9m6k8vpr2ayz96y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RZtZTT-Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bkyki9m6k8vpr2ayz96y.png" alt="Image description" width="800" height="357"&gt;&lt;/a&gt;&lt;br&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Get more info with &lt;code&gt;docker stack ps myapp&lt;/code&gt; 
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8ySJRaCS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ddqs0sz26vsaw2e4z1g1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8ySJRaCS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ddqs0sz26vsaw2e4z1g1.png" alt="Image description" width="800" height="357"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt;






&lt;h1&gt;
  
  
  Stack Delete&lt;br&gt;&lt;br&gt;
&lt;/h1&gt;

&lt;p&gt;Now let's delete our environment with a single &amp;amp; simple command:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Delete it with the following:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker stack &lt;span class="nb"&gt;rm &lt;/span&gt;myapp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hNtO4Yld--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/haynwbyi2vf66mikxthx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hNtO4Yld--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/haynwbyi2vf66mikxthx.png" alt="Image description" width="800" height="357"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Finally, note that the deployment ran into an error and the container cannot start; we will discover how to troubleshoot this in the next article.&lt;/strong&gt;&lt;/p&gt;






&lt;p&gt;That's it: very straightforward, very fast🚀. &lt;br&gt;
Hope this article inspired you; I'd appreciate your feedback. Thank you.&lt;/p&gt;






&lt;h1&gt;
  
  
  References&lt;br&gt;&lt;br&gt;
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/engine/swarm/stack-deploy/"&gt;Deploy a stack to a swarm&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/compose/"&gt;Docker Compose overview&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/engine/reference/commandline/stack/"&gt;Docker stack&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://labs.play-with-docker.com"&gt;Play with docker&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>docker</category>
      <category>swarm</category>
    </item>
    <item>
      <title>Docker Swarm Series: #3rd Deploy a highly available Container</title>
      <dc:creator>Mohamed El Eraky</dc:creator>
      <pubDate>Tue, 27 Jun 2023 18:12:37 +0000</pubDate>
      <link>https://dev.to/mohamedeleraki/docker-swarm-series-3th-create-a-highly-available-container-faa</link>
      <guid>https://dev.to/mohamedeleraki/docker-swarm-series-3th-create-a-highly-available-container-faa</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--u7BZrCXp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5dhscpvf2hb3mx334z13.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--u7BZrCXp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5dhscpvf2hb3mx334z13.png" alt="Image description" width="741" height="400"&gt;&lt;/a&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  Inception &lt;br&gt;&lt;br&gt;
&lt;/h1&gt;

&lt;p&gt;Hello everyone, this article is part of The Swarm series. The knowledge in this series is built in sequence; check it out at my profile.&lt;/p&gt;

&lt;p&gt;In the &lt;a href="https://dev.to/mohamedeleraki/docker-swarm-series-2nd-create-a-highly-available-environment-4p1k"&gt;last article&lt;/a&gt; we covered how to create a highly available environment, and deployed a simple Nginx web app using the Docker CLI on the &lt;a href="https://labs.play-with-docker.com/"&gt;Play-with-docker&lt;/a&gt; lab.&lt;/p&gt;

&lt;p&gt;In this article we will continue from that point: we will scale up our container, update it and roll it back again, and walk through some production scenarios for a deeper understanding of how Swarm handles them. In this lab we will use the &lt;a href="https://labs.play-with-docker.com/"&gt;Play-with-docker&lt;/a&gt; lab and Docker CLI commands.&lt;br&gt;&lt;/p&gt;




&lt;h1&gt;
  
  
  Lab Overview &lt;br&gt;&lt;br&gt;
&lt;/h1&gt;

&lt;p&gt;After finalizing the creation of the infrastructure and collecting the required info, today's article will focus on the container application side: how to scale up and scale down, and the limits of the routing mesh &lt;em&gt;-the load balancer of Swarm-&lt;/em&gt;, plus rolling updates and rollbacks. After all that, we will test the high availability of our environment by removing the node that hosts the container from our cluster and seeing how Swarm handles it.&lt;br&gt;&lt;/p&gt;




&lt;h1&gt;
  
  
  Build the infrastructure &lt;br&gt;&lt;br&gt;
&lt;/h1&gt;

&lt;p&gt;In the &lt;a href="https://dev.to/mohamedeleraki/docker-swarm-series-2nd-create-a-highly-available-environment-4p1k"&gt;last article&lt;/a&gt; we went through the hardest way of creating our environment; however, it's the appropriate way to build your environment on any platform &lt;em&gt;(i.e. on-prem, cloud, anywhere)&lt;/em&gt;. Today we will go through the easiest way to build our environment on the &lt;a href="https://labs.play-with-docker.com/"&gt;Play-with-docker&lt;/a&gt; platform.&lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open up &lt;a href="https://labs.play-with-docker.com/"&gt;Play-with-docker&lt;/a&gt; platform.&lt;/li&gt;
&lt;li&gt;Click the template button as shown below:
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sfUWPLNh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9oxiapnzwmqnfff1sobu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sfUWPLNh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9oxiapnzwmqnfff1sobu.png" alt="Image description" width="800" height="354"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Let's pick the template that is similar to what we used in the past articles.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--E_2sjwPD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/98tfmxpp1siis3n2u23h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--E_2sjwPD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/98tfmxpp1siis3n2u23h.png" alt="Image description" width="800" height="354"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This will build the environment as below:
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ts1sKl2y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/re0k3hx95tlbrsy5msub.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ts1sKl2y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/re0k3hx95tlbrsy5msub.png" alt="Image description" width="800" height="356"&gt;&lt;/a&gt;&lt;br&gt;&lt;/p&gt;




&lt;h1&gt;
  
  
  Rebuild the Previous Container&lt;br&gt;&lt;br&gt;
&lt;/h1&gt;

&lt;p&gt;In the &lt;a href="https://dev.to/mohamedeleraki/docker-swarm-series-2nd-create-a-highly-available-environment-4p1k"&gt;second article&lt;/a&gt; we built a service that runs an Nginx container by running the following command on any one of the manager nodes:&lt;br&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker service create &lt;span class="nt"&gt;--detach&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; nginx1 &lt;span class="nt"&gt;--publish&lt;/span&gt; 80:80  &lt;span class="nt"&gt;--mount&lt;/span&gt; &lt;span class="nb"&gt;source&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/etc/hostname,target&lt;span class="o"&gt;=&lt;/span&gt;/usr/share/nginx/html/index.html,type&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;bind&lt;/span&gt;,ro nginx:1.12
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can list the running services with the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker service &lt;span class="nb"&gt;ls&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aE70t92x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/elqcvkfue5i13nl9o8s7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aE70t92x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/elqcvkfue5i13nl9o8s7.png" alt="Image description" width="800" height="357"&gt;&lt;/a&gt;&lt;br&gt;&lt;/p&gt;




&lt;h1&gt;
  
  
  Scale-up and Down &lt;br&gt;&lt;br&gt;
&lt;/h1&gt;

&lt;p&gt;Until now we haven't deployed a highly available container. Let's scale our service up to 6 replicas by running the following:&lt;br&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker service update &lt;span class="nt"&gt;--replicas&lt;/span&gt; 6 &lt;span class="nt"&gt;--detach&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true &lt;/span&gt;nginx1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--40vtit4M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wkrtq49rwx1ko4lgzm6l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--40vtit4M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wkrtq49rwx1ko4lgzm6l.png" alt="Image description" width="800" height="357"&gt;&lt;/a&gt;&lt;br&gt;
&lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;p&gt;When this command is run the following events occur:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The service's desired state is updated to 6 replicas, which is stored in Swarm's internal storage.&lt;/li&gt;
&lt;li&gt;Swarm recognizes that the number of replicas currently scheduled does not equal the desired state.&lt;/li&gt;
&lt;li&gt;Swarm schedules new tasks until 6 replicas are running, matching the desired state.&lt;/li&gt;
&lt;/ul&gt;
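
&lt;p&gt;As a side note, the dedicated &lt;code&gt;docker service scale&lt;/code&gt; shorthand triggers the same desired-state change:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# equivalent to: docker service update --replicas 6 nginx1
docker service scale nginx1=6
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;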

&lt;p&gt;&lt;br&gt;&lt;br&gt;&lt;br&gt;
Use the following commands to discover the deployed service:&lt;br&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker service &lt;span class="nb"&gt;ls&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker service ps nginx1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;br&gt;&lt;br&gt;&lt;br&gt;
The update command works for &lt;strong&gt;scaling down&lt;/strong&gt; as well. Running the following scales the replicas down from 6 to 5:&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker service update &lt;span class="nt"&gt;--replicas&lt;/span&gt; 5 &lt;span class="nt"&gt;--detach&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true &lt;/span&gt;nginx1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;br&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Zpjb0HbM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w42dld4zwrkn8qbdhud3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Zpjb0HbM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w42dld4zwrkn8qbdhud3.png" alt="Image description" width="800" height="358"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h1&gt;
  
  
  Routing mesh limits &lt;br&gt;&lt;br&gt;
&lt;/h1&gt;

&lt;p&gt;Now, in our scenario, when you send a request on port 80, the routing mesh has multiple containers &lt;em&gt;-listening on this port-&lt;/em&gt; to which it can route requests. The routing mesh acts as a load balancer for these containers, distributing incoming requests across them.&lt;/p&gt;

&lt;p&gt;&lt;br&gt;&lt;br&gt;
Let's test this out by running the following multiple times on any node:&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl localhost:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;br&gt;&lt;br&gt;
You should see which node is serving each request, thanks to the &lt;code&gt;--mount&lt;/code&gt; option you used earlier (it bind-mounts each node's hostname file as the Nginx index page).&lt;/p&gt;

&lt;p&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WSsvJMql--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xzqnpz8crs32ost8l8fe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WSsvJMql--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xzqnpz8crs32ost8l8fe.png" alt="Image description" width="800" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;br&gt;&lt;br&gt;&lt;br&gt;
Another easy way to check which node is serving the requests is to check the &lt;strong&gt;aggregated&lt;/strong&gt; logs using the &lt;code&gt;docker service logs [service-name]&lt;/code&gt; command. This aggregates the output from every running container, the same output you would get from &lt;code&gt;docker container logs [container-name]&lt;/code&gt; on each node.&lt;/p&gt;
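
&lt;p&gt;For example, you can stream the aggregated logs of our service as new entries arrive:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# --follow keeps streaming new log entries; press ctrl+c to stop
docker service logs --follow nginx1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;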

&lt;p&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---t5sD1K1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/572pcx31xpwot09e0q95.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---t5sD1K1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/572pcx31xpwot09e0q95.png" alt="Image description" width="800" height="355"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;br&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Limits of the routing mesh:&lt;/strong&gt; the routing mesh can publish a given port (such as 80) for only one service. If you want multiple services exposed on port 80, you can use an external application load balancer outside of the swarm to accomplish this.&lt;/p&gt;
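
&lt;p&gt;One common workaround is to publish each additional service on a different host port, and let the external load balancer map port 80 to the appropriate backend port. As a sketch &lt;em&gt;(the service name &lt;code&gt;web2&lt;/code&gt; and port 8080 here are just example values)&lt;/em&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# a second Nginx service, published on host port 8080 instead of 80
docker service create --detach=true --name web2 --publish 8080:80 nginx:1.12
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;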




&lt;h1&gt;
  
  
  Rolling update and Rolling back&lt;br&gt;&lt;br&gt;
&lt;/h1&gt;

&lt;p&gt;Now you have your service deployed and running successfully, and you want to update the version of the Nginx container. You want to update both the pulled image and the deployed containers that use this image version; with the &lt;code&gt;docker service update&lt;/code&gt; command you can update the image and the service in one step.&lt;/p&gt;

&lt;p&gt;&lt;br&gt;&lt;br&gt;
Let's update our Nginx service to version 1.13 by using the following:&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker service update &lt;span class="nt"&gt;--image&lt;/span&gt; nginx:1.13 &lt;span class="nt"&gt;--detach&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true &lt;/span&gt;nginx1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;br&gt;&lt;br&gt;
This triggers a rolling update across the swarm. Run the following command to view the update in real time:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# using watch command&lt;/span&gt;
watch &lt;span class="nt"&gt;-n&lt;/span&gt; 1 docker service ps nginx1

&lt;span class="c"&gt;# Or you can run this command over and over again&lt;/span&gt;
docker service ps nginx1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;br&gt;&lt;br&gt;
You can fine-tune the rolling update by using these options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;--update-parallelism&lt;/code&gt;: specifies the number of containers to update simultaneously (defaults to 1).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--update-delay&lt;/code&gt;: specifies the delay between finishing one set of containers and starting on the next set.&lt;/li&gt;
&lt;/ul&gt;
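
&lt;p&gt;For example, to update two containers at a time and wait 10 seconds between each batch &lt;em&gt;(the values here are just an illustration)&lt;/em&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker service update --image nginx:1.13 --update-parallelism 2 --update-delay 10s --detach=true nginx1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;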

&lt;p&gt;&lt;br&gt;&lt;br&gt;
After a few seconds you will see that your Nginx application has updated successfully and the old version has shut down.&lt;/p&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LshhII5l--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8mbuoa5d2rbfti9g5uc7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LshhII5l--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8mbuoa5d2rbfti9g5uc7.png" alt="Image description" width="624" height="158"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  Roll Back
&lt;/h2&gt;

&lt;p&gt;What if you find out that the latest update has an issue, your application doesn't work as expected, and you want to roll back? You can achieve that by running the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# roll back&lt;/span&gt;
docker service rollback nginx1

&lt;span class="c"&gt;# remove the in useful services&lt;/span&gt;
docker service &lt;span class="nb"&gt;rm&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
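
&lt;p&gt;To confirm the rollback took effect, you can inspect which image the service tasks are now created from:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# prints the image in the service's current task template
docker service inspect --format '{{.Spec.TaskTemplate.ContainerSpec.Image}}' nginx1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;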








&lt;h1&gt;
  
  
  Test the Swarm environment's high availability &lt;br&gt;&lt;br&gt;
&lt;/h1&gt;

&lt;p&gt;Now we have our services running on the distributed nodes. What if a node goes down along with the containers it hosts? In this section we'll test Swarm's high availability by making node 4 leave the Swarm cluster.&lt;/p&gt;



&lt;ul&gt;
&lt;li&gt;Go to node 1 for example and run the following command to watch the cluster change in real time:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;watch &lt;span class="nt"&gt;-n&lt;/span&gt; 1 docker service ps nginx2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Now, land on node 4 and run the following to leave the cluster:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker swarm leave
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Get back to node 1 and note that Swarm rebuilds the containers on another node to match the desired state.&lt;/li&gt;
&lt;/ul&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BtyXU5AT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/30n2jzmbukdy3292c4w2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BtyXU5AT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/30n2jzmbukdy3292c4w2.png" alt="Image description" width="624" height="137"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;br&gt;&lt;br&gt;&lt;br&gt;
That's it, very straightforward, very fast🚀. I hope this article inspired you, and I'd appreciate your feedback. Thank you.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>swarm</category>
    </item>
    <item>
      <title>Docker Swarm Series: #2nd Create a highly available environment</title>
      <dc:creator>Mohamed El Eraky</dc:creator>
      <pubDate>Sat, 24 Jun 2023 13:42:31 +0000</pubDate>
      <link>https://dev.to/mohamedeleraki/docker-swarm-series-2nd-create-a-highly-available-environment-4p1k</link>
      <guid>https://dev.to/mohamedeleraki/docker-swarm-series-2nd-create-a-highly-available-environment-4p1k</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--u7BZrCXp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5dhscpvf2hb3mx334z13.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--u7BZrCXp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5dhscpvf2hb3mx334z13.png" alt="Image description" width="741" height="400"&gt;&lt;/a&gt;&lt;br&gt;
&lt;br&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Inception &lt;br&gt;&lt;br&gt;
&lt;/h1&gt;

&lt;p&gt;Hello everyone, this article is part of the Swarm series. The knowledge in this series builds in sequence; check out my profile.&lt;/p&gt;

&lt;p&gt;In the last article we gave a high-level overview of orchestration tools and Swarm, covered which ports should be opened between nodes, and set up the environment using &lt;a href="https://labs.play-with-docker.com/"&gt;Play-with-docker&lt;/a&gt; labs.&lt;/p&gt;

&lt;p&gt;In this article we'll pick up from that point: we'll create a highly available Swarm environment using &lt;a href="https://labs.play-with-docker.com/"&gt;Play-with-docker&lt;/a&gt; labs and deploy a simple webapp using Docker CLI commands.&lt;br&gt;&lt;/p&gt;






&lt;h1&gt;
  
  
  Lab Overview &lt;br&gt;&lt;br&gt;
&lt;/h1&gt;

&lt;p&gt;The MAIN point of having an &lt;strong&gt;Orchestration tool&lt;/strong&gt; is to have a &lt;strong&gt;highly available environment&lt;/strong&gt; and &lt;strong&gt;highly available applications&lt;/strong&gt;, besides the other features they provide &lt;em&gt;(e.g. load balancing, monitoring, etc.)&lt;/em&gt;. Because of that, &lt;strong&gt;today's article&lt;/strong&gt; will &lt;strong&gt;focus&lt;/strong&gt; on how to &lt;strong&gt;create a highly available environment&lt;/strong&gt; and &lt;strong&gt;deploy&lt;/strong&gt; a simple webapp using the Docker CLI. &lt;br&gt;&lt;br&gt;
&lt;em&gt;Enough talking, let's get started.&lt;/em&gt;&lt;/p&gt;






&lt;h1&gt;
  
  
  Highly available environment overview &lt;br&gt;&lt;br&gt;
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;We need our application to be highly available to avoid a &lt;strong&gt;single point of failure&lt;/strong&gt; if the application goes down. The same idea applies at the &lt;strong&gt;environment level&lt;/strong&gt;: if we create a highly available application and load balance traffic between its replicas, but all the replicas live on the &lt;strong&gt;same node server&lt;/strong&gt;, then we have a highly available application &lt;em&gt;(from the traffic aspect)&lt;/em&gt; but &lt;strong&gt;not&lt;/strong&gt; at the environment level; if the node server goes down, the application goes down too. To avoid this we should have a &lt;strong&gt;highly available environment&lt;/strong&gt; where the application &lt;strong&gt;lives on more than one node&lt;/strong&gt;. This is the &lt;strong&gt;idea behind&lt;/strong&gt; &lt;strong&gt;Orchestration tools&lt;/strong&gt;: delivering the &lt;strong&gt;simplicity&lt;/strong&gt; to achieve this.&lt;br&gt;&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To create a highly available environment on Swarm we should have at least three manager nodes, but typically no more than seven. Manager nodes contain the necessary information to manage the cluster; if too many manager nodes go down, the cluster loses quorum and cannot function. In general, a cluster of N managers tolerates the loss of (N-1)/2 of them. &lt;strong&gt;How to determine the number of manager nodes:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Three manager nodes tolerate one node failure.&lt;/li&gt;
&lt;li&gt;Five manager nodes tolerate two node failures.&lt;/li&gt;
&lt;li&gt;Seven manager nodes tolerate three node failures.
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Regarding worker nodes: use at least two for redundancy and fault tolerance, and add more nodes as needed to handle the workload.&lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Actually, Swarm manager nodes can host application containers just like worker nodes.&lt;br&gt;&lt;br&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Therefore, in this lab we'll create three manager nodes and two worker nodes.&lt;/p&gt;






&lt;h1&gt;
  
  
  Create a highly available environment &lt;br&gt;&lt;br&gt;
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Open &lt;a href="https://labs.play-with-docker.com"&gt;Play-with-docker&lt;/a&gt; labs.&lt;/li&gt;
&lt;li&gt;Press &lt;strong&gt;ADD NEW INSTANCE&lt;/strong&gt;, and &lt;strong&gt;initiate Docker swarm mode&lt;/strong&gt; using:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker swarm init &lt;span class="nt"&gt;--advertise-addr&lt;/span&gt; eth0

&lt;span class="c"&gt;# to get the interface name use&lt;/span&gt;
ip a s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8xbYiU_r--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ksshy6vorpj69e3zo7xa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8xbYiU_r--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ksshy6vorpj69e3zo7xa.png" alt="Image description" width="800" height="406"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Copy and paste &lt;em&gt;-ctrl+shift+v-&lt;/em&gt; the highlighted command from the screenshot to generate a token &lt;em&gt;-on the same node-&lt;/em&gt; that will be used to join the other managers to the cluster.
&lt;/li&gt;
&lt;/ul&gt;
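
&lt;p&gt;If you lose the highlighted command, Swarm can reprint the join command for either role at any time from a manager node:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# print the join command (including the token) for new managers
docker swarm join-token manager

# print the join command for new workers
docker swarm join-token worker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;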

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YNp5_JE6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d69kffjufnprbv0b2hsj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YNp5_JE6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d69kffjufnprbv0b2hsj.png" alt="Image description" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;ul&gt;
&lt;li&gt;Press &lt;strong&gt;ADD NEW INSTANCE&lt;/strong&gt; to create, and join, another manager node to the cluster.&lt;/li&gt;
&lt;li&gt;Copy and paste the highlighted command from the last screenshot to join the cluster as a manager node.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KmdkoiD0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wlumh3dniu195jhes6zq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KmdkoiD0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wlumh3dniu195jhes6zq.png" alt="Image description" width="800" height="405"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;ul&gt;
&lt;li&gt;Repeat the last step to join another manager node at the same cluster.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--L__aO-NO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6iidtwmlxx097oluwoy2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--L__aO-NO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6iidtwmlxx097oluwoy2.png" alt="Image description" width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Let's create the worker nodes. Go back to the 1st manager node created and copy the command that joins worker nodes to the swarm cluster.&lt;br&gt;&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Press &lt;strong&gt;ADD NEW INSTANCE&lt;/strong&gt;, then paste the command to join the cluster as a worker node:&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wPoprqwS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/synzvinugk9qsx1wyzzz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wPoprqwS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/synzvinugk9qsx1wyzzz.png" alt="Image description" width="800" height="405"&gt;&lt;/a&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Qho8pFIN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s9ejyu18klh6pp4vq13p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Qho8pFIN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s9ejyu18klh6pp4vq13p.png" alt="Image description" width="800" height="407"&gt;&lt;/a&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Repeat the last step to join another worker node to the cluster.&lt;br&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Let's check out our environment. Go to any one of the manager nodes and type the command below:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker node &lt;span class="nb"&gt;ls&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OOrW_jXs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k98ie4foisqbayr5p8nl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OOrW_jXs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k98ie4foisqbayr5p8nl.png" alt="Image description" width="800" height="404"&gt;&lt;/a&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;p&gt;As listed, there are three manager nodes besides two worker nodes. The asterisk marks the node (here, node number two) that handled the command you ran:&lt;br&gt;
&lt;br&gt;
 &lt;code&gt;docker node ls&lt;/code&gt;&lt;br&gt;
&lt;br&gt;
 &lt;br&gt;&lt;br&gt;
Also, in the manager status column there are &lt;strong&gt;Leader&lt;/strong&gt; and &lt;strong&gt;Reachable&lt;/strong&gt; values: the Leader is the manager node currently elected to lead the cluster, and Reachable means the node is available and can communicate with the other nodes in the Swarm cluster.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;






&lt;h1&gt;
  
  
  Deploy a webapp application overview &lt;br&gt;&lt;br&gt;
&lt;/h1&gt;

&lt;p&gt;Now we have our cluster running and ready to deploy containers. To deploy a container we need to create a service; a service is an abstraction that represents multiple containers of the same image deployed across the cluster. You can think of a service as roughly analogous to a Deployment (a managed set of Pods) in K8s.&lt;br&gt;&lt;/p&gt;

&lt;p&gt;Run the service using the &lt;code&gt;docker service&lt;/code&gt; command instead of &lt;code&gt;docker run&lt;/code&gt; or &lt;code&gt;start&lt;/code&gt; as in normal docker mode; try to differentiate between &lt;strong&gt;normal docker mode&lt;/strong&gt; and &lt;strong&gt;docker swarm&lt;/strong&gt; mode with these commands. &lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy a webapp application &lt;br&gt;&lt;br&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Copy and paste the following command to run your first app:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker service create &lt;span class="nt"&gt;--detach&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; nginx1 &lt;span class="nt"&gt;--publish&lt;/span&gt; 80:80 &lt;span class="nt"&gt;--mount&lt;/span&gt; &lt;span class="nb"&gt;source&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/etc/hostname,target&lt;span class="o"&gt;=&lt;/span&gt;/usr/share/nginx/html/index.html,type&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;bind&lt;/span&gt;,ro nginx:1.12

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;blockquote&gt;
&lt;p&gt;This command statement is &lt;strong&gt;declarative,&lt;/strong&gt; and Swarm will try to maintain the declared state: Swarm compares the &lt;strong&gt;desired state&lt;/strong&gt; of the application with &lt;strong&gt;the actual state.&lt;/strong&gt; Since this is the first run of this application, the desired state was declared in the command and Swarm creates the actual state to match it.&lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;p&gt;The --mount flag is useful for having Nginx print out the hostname of the node it's running on; we'll try this out in the next article.&lt;br&gt;&lt;br&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;After running the application, check its status using:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker service &lt;span class="nb"&gt;ls&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SrDm3Eur--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jjwzbnow3osj4pw48h4n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SrDm3Eur--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jjwzbnow3osj4pw48h4n.png" alt="Image description" width="800" height="402"&gt;&lt;/a&gt;&lt;br&gt;
&lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To take a deeper look at the running tasks, use:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker service ps nginx1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;p&gt;This will print out more info about this application, including the node that hosts this app.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note that you can land on this node and simply run the normal Docker command: &lt;code&gt;docker container ls&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ozWLGcQ5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qwnbdeq9vv6rafuld2y1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ozWLGcQ5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qwnbdeq9vv6rafuld2y1.png" alt="Image description" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Let's test the app service. Go to another node &lt;em&gt;-one that doesn't host the service-&lt;/em&gt; and run:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl localhost:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PMQgOSfA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eiv3ymxsqwo71enwd0od.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PMQgOSfA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eiv3ymxsqwo71enwd0od.png" alt="Image description" width="800" height="405"&gt;&lt;/a&gt;&lt;/p&gt;






&lt;p&gt;That's it! I hope this article inspired you, and I'd appreciate your feedback. Thank you.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Docker Swarm Series: #1st Setup the Environment</title>
      <dc:creator>Mohamed El Eraky</dc:creator>
      <pubDate>Wed, 21 Jun 2023 09:37:19 +0000</pubDate>
      <link>https://dev.to/mohamedeleraki/docker-swarm-series-1st-setup-the-environment-3ao7</link>
      <guid>https://dev.to/mohamedeleraki/docker-swarm-series-1st-setup-the-environment-3ao7</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tQsXpy_c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6mx78zob35c9xvfag11h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tQsXpy_c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6mx78zob35c9xvfag11h.png" alt="Image description" width="741" height="400"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Introduction to Container Orchestration&lt;br&gt;&lt;br&gt;
&lt;/h1&gt;

&lt;p&gt;Assume we have a containerized Python application and we want to &lt;strong&gt;deploy&lt;/strong&gt; it to &lt;strong&gt;production&lt;/strong&gt;. However, to deploy to production we have some &lt;strong&gt;prerequisites&lt;/strong&gt; to handle first:&lt;br&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Service discovery &lt;em&gt;(i.e. connections between containers)&lt;/em&gt;.&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Auto Scaling.&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;High availability and fault tolerance.&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With traditional servers &lt;em&gt;(i.e. virtual machines)&lt;/em&gt;, all of these prerequisites were handled by a bunch of tools (e.g. VMware). But &lt;strong&gt;how do we achieve that while using containers!?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here lies the value of &lt;strong&gt;container orchestration tools&lt;/strong&gt;. Container orchestration is the process of &lt;strong&gt;managing&lt;/strong&gt; and &lt;strong&gt;deploying containers&lt;/strong&gt;; orchestration helps automate the deployment, scaling, and management of containerized applications.&lt;/p&gt;

&lt;p&gt;Containers provide a lightweight and portable way to package and deploy applications, but managing them at scale can be challenging. Container orchestration tools help to &lt;strong&gt;simplify this process&lt;/strong&gt; by providing a platform for deploying and managing containers across &lt;strong&gt;multiple hosts&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;&lt;/p&gt;




&lt;h1&gt;
  
  
  Orchestration tools typically provide the following features:&lt;br&gt;
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Service discovery: Containers need to communicate with each other, and orchestration tools provide a way to automatically discover other containers in the cluster.&lt;br&gt;&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Load balancing: Orchestration tools can automatically distribute incoming traffic across multiple containers, ensuring that the workload is evenly distributed.&lt;br&gt;&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scaling: Orchestration tools can automatically scale the number of containers up or down based on the demand for the application.&lt;br&gt;&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Health monitoring: Orchestration tools can monitor the health of containers and automatically restart them if they fail.&lt;br&gt;&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Desired-state declaration: Orchestration tools compare the desired state &lt;em&gt;(the state you provided)&lt;/em&gt; with the actual state &lt;em&gt;(the real state of the running applications)&lt;/em&gt; and work to ensure that the actual state matches the desired state. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Some popular container orchestration tools include &lt;strong&gt;Docker Swarm&lt;/strong&gt;, &lt;strong&gt;Kubernetes&lt;/strong&gt;, and &lt;strong&gt;Mesos&lt;/strong&gt;. Each tool has its own strengths and weaknesses, and the choice of tool will depend on your specific requirements.&lt;br&gt;&lt;br&gt;&lt;/p&gt;
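&lt;p&gt;As a quick preview of how these features surface in practice, the sketch below uses Docker Swarm service commands (covered properly later in this series); the service name &lt;code&gt;web&lt;/code&gt; and the nginx image are just placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# create a service with 3 replicas; Swarm schedules them across nodes
# and load-balances traffic arriving on the published port
docker service create --name web --replicas 3 -p 8080:80 nginx

# scaling is a one-liner; Swarm reconciles the actual state with this
# new desired state, starting or stopping containers as needed
docker service scale web=5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;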




&lt;h2&gt;
  
  
  Container ecosystem layers:&lt;br&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6Lp8UzxI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x8ymxvz3vva7gyw0szl5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6Lp8UzxI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x8ymxvz3vva7gyw0szl5.png" alt="Image description" width="624" height="353"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Image provided by the IBM Docker Essentials course&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In this series we will be using Docker Swarm. Docker Swarm is a powerful and &lt;strong&gt;easy-to-use&lt;/strong&gt; tool for managing containers at scale, and it has become a popular choice for organizations looking to deploy containerized applications in production environments.&lt;br&gt;&lt;br&gt;&lt;/p&gt;


&lt;h1&gt;
  
  
  Docker swarm Overview&lt;br&gt;&lt;br&gt;
&lt;/h1&gt;

&lt;p&gt;Docker Swarm is a container orchestration tool that allows you to create, deploy, scale, and manage a cluster of Docker hosts using a declarative configuration file (Docker Compose) or the Docker CLI.&lt;br&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker Swarm&lt;/strong&gt;, as defined by the &lt;a href="https://docs.docker.com/engine/swarm/key-concepts/#what-is-a-swarm"&gt;Docker Docs&lt;/a&gt;: &lt;em&gt;"The cluster management and orchestration features embedded in the Docker Engine are built using SwarmKit. SwarmKit is a separate project which implements Docker’s orchestration layer and is used directly within Docker."&lt;/em&gt; Docker Swarm is the Docker-native container orchestration platform that uses SwarmKit as its core library; &lt;strong&gt;that means you don't need any extra installation.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker Swarm&lt;/strong&gt; consists of two main &lt;strong&gt;components&lt;/strong&gt;: manager nodes and worker nodes. The manager node is responsible for managing the entire cluster and scheduling tasks, while the worker nodes are the hosts that run the containers. That means we need a couple of servers to initiate the Swarm cluster, and that is how high availability is delivered to your applications.&lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;p&gt;Swarm provides very powerful features that are self-managed and easy to use; see &lt;a href="https://docs.docker.com/engine/swarm/#feature-highlights"&gt;Feature highlights&lt;/a&gt; for more.&lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;p&gt;Kindly consider assigning a static IP to each node, and also consider the &lt;a href="https://docs.docker.com/engine/swarm/swarm-tutorial/#open-protocols-and-ports-between-the-hosts"&gt;open protocols and ports between the hosts&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The following ports must be available. On some systems, these ports are open by default.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Port 2377 TCP for communication with and between manager nodes&lt;/li&gt;
&lt;li&gt;Port 7946 TCP/UDP for overlay network node discovery&lt;/li&gt;
&lt;li&gt;Port 4789 UDP (configurable) for overlay network traffic
&lt;/li&gt;
&lt;/ul&gt;
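&lt;p&gt;On hosts where a firewall is enabled, these ports can be opened explicitly. The below is a minimal sketch assuming &lt;code&gt;ufw&lt;/code&gt; (Ubuntu's default firewall frontend); adapt it to firewalld or your cloud security-group rules as needed:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# cluster management traffic (manager nodes)
sudo ufw allow 2377/tcp

# overlay network node discovery
sudo ufw allow 7946/tcp
sudo ufw allow 7946/udp

# overlay network (VXLAN) data traffic
sudo ufw allow 4789/udp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;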




&lt;h1&gt;
  
  
  Setup the environment&lt;br&gt;&lt;br&gt;
&lt;/h1&gt;

&lt;p&gt;To initiate the Swarm cluster we need a couple of servers, which is hard to achieve for learning purposes. In this lab we are going to use &lt;a href="https://labs.play-with-docker.com/#"&gt;Play with Docker&lt;/a&gt;. Play with Docker is provided by Docker Inc. to give you the ability to initiate nodes that have Docker preinstalled and ready to use.&lt;br&gt;&lt;/p&gt;

&lt;p&gt;Enough talking, Down we go!!&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Setup the Environment:&lt;/strong&gt;&lt;br&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open &lt;a href="https://labs.play-with-docker.com/#"&gt;Play with Docker&lt;/a&gt;.
&lt;/li&gt;
&lt;li&gt;Sign in with your Docker account.&lt;/li&gt;
&lt;li&gt;Press Start to create your workspace.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2qkpkjhc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dtbh5r001thw2ma3cdox.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2qkpkjhc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dtbh5r001thw2ma3cdox.jpg" alt="Image description" width="800" height="545"&gt;&lt;/a&gt;&lt;br&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This will open a workspace like the one below. Press Add new instance to initiate a server.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HlFRtBJ1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3w8t0q2bii94yms0n0q2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HlFRtBJ1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3w8t0q2bii94yms0n0q2.png" alt="Image description" width="800" height="333"&gt;&lt;/a&gt;&lt;br&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Initiate Swarm Mode:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# get the NIC name "network interface card"&lt;/span&gt;
&lt;span class="c"&gt;# in our case its eth0&lt;/span&gt;
ip a s

&lt;span class="c"&gt;# initiate docker swarm&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;docker swarm init &lt;span class="nt"&gt;--advertise-addr&lt;/span&gt; eth0

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;This will print out the below:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Swarm initialized: current node &lt;span class="o"&gt;(&lt;/span&gt;yz3vrc2w1hwnrwkr5dfsctxkj&lt;span class="o"&gt;)&lt;/span&gt; is now a manager.

To add a worker to this swarm, run the following &lt;span class="nb"&gt;command&lt;/span&gt;:

    docker swarm &lt;span class="nb"&gt;join&lt;/span&gt; &lt;span class="nt"&gt;--token&lt;/span&gt; SWMTKN-1-4xj2egkkxq8ofqkeg0s3zblrdpzcpqokgjyl5zpc1pja100641-3eqtbf6doialoa2spbr1o4dp0 192.168.0.28:2377

To add a manager to this swarm, run &lt;span class="s1"&gt;'docker swarm join-token manager'&lt;/span&gt; and follow the instructions.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;em&gt;I think the output explains itself well.&lt;/em&gt;&lt;/p&gt;
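&lt;p&gt;If you lose this output later, you don't need to re-initialize the swarm; a manager node can reprint the full join command for either role at any time:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# print the join command (with token) for worker nodes
docker swarm join-token worker

# print the join command for additional manager nodes
docker swarm join-token manager
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;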

&lt;ul&gt;
&lt;li&gt;Now, let's create two instances and join them to our Swarm cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--96iEiy_9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vjk0b5b9t8cyc4y6mgrg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--96iEiy_9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vjk0b5b9t8cyc4y6mgrg.png" alt="Image description" width="800" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Press Node2, then copy and paste the join command provided by the Swarm initiation output:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker swarm &lt;span class="nb"&gt;join&lt;/span&gt; &lt;span class="nt"&gt;--token&lt;/span&gt; SWMTKN-1-4xj2egkkxq8ofqkeg0s3zblrdpzcpqokgjyl5zpc1pja100641-3eqtbf6doialoa2spbr1o4dp0 192.168.0.28:2377
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lj28F8J8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vkwpstv0e1hu8vr9asrj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lj28F8J8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vkwpstv0e1hu8vr9asrj.png" alt="Image description" width="800" height="333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Do the same on Node3.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5mZ6Y903--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v5k8uolvh05rtkbpytgv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5mZ6Y903--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v5k8uolvh05rtkbpytgv.png" alt="Image description" width="800" height="335"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Here we go; let's list the nodes:

&lt;ul&gt;
&lt;li&gt;Press node1, which is the manager node, and type the command below.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; docker node &lt;span class="nb"&gt;ls&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kkJNsh8c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q3dikx6xvyb0lt9boyw6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kkJNsh8c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q3dikx6xvyb0lt9boyw6.png" alt="Image description" width="800" height="334"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;p&gt;Here we have three nodes with Ready status and Active availability, one of which acts as the leader (manager).&lt;br&gt;
The asterisk means that node1 is the node that handled the command&lt;br&gt;
&lt;br&gt;
&lt;code&gt;docker node ls&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;
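&lt;p&gt;To dig into a single node from the manager, &lt;code&gt;docker node inspect&lt;/code&gt; shows its role, availability, and resources, and &lt;code&gt;docker node update&lt;/code&gt; changes its availability; replace &lt;code&gt;node1&lt;/code&gt; with your node's hostname or ID:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# human-readable summary of a node
docker node inspect --pretty node1

# drain a node before maintenance, then bring it back
docker node update --availability drain node1
docker node update --availability active node1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;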






&lt;h1&gt;
  
  
  The Underlying Information &lt;br&gt;&lt;br&gt;
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Container orchestration tools help to automate the deployment, scaling, high availability, and management of containerized applications.&lt;br&gt;&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Orchestration tools typically provide service discovery, load balancing, scaling, health monitoring, and desired-state declaration.&lt;br&gt;&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Docker Swarm is a powerful and easy-to-use tool for managing containers.&lt;br&gt;&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Initiating Swarm mode does not need any extra installation.&lt;br&gt;&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Docker Swarm consists of two main components: manager nodes and worker nodes. The manager node is responsible for managing the entire cluster and scheduling tasks, while the worker nodes are the hosts that run the containers.&lt;br&gt;&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use this command to initiate Swarm mode:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker swarm init &lt;span class="nt"&gt;--advertise-addr&lt;/span&gt; eth0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Use this command on the manager node to list the nodes:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker node &lt;span class="nb"&gt;ls&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;








&lt;p&gt;That's it: very straightforward, very fast 🚀. &lt;br&gt;
I hope this article inspired you, and I would appreciate your feedback. Thank you.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>swarm</category>
    </item>
  </channel>
</rss>
