<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: bhargavirengarajan21</title>
    <description>The latest articles on DEV Community by bhargavirengarajan21 (@bhargavirengarajan21).</description>
    <link>https://dev.to/bhargavirengarajan21</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F298689%2Febaa8850-d0b9-4419-8dd9-c77b58935752.png</url>
      <title>DEV Community: bhargavirengarajan21</title>
      <link>https://dev.to/bhargavirengarajan21</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bhargavirengarajan21"/>
    <language>en</language>
    <item>
      <title>Cloud Computing Project Idea</title>
      <dc:creator>bhargavirengarajan21</dc:creator>
      <pubDate>Wed, 25 Feb 2026 18:08:07 +0000</pubDate>
      <link>https://dev.to/bhargavirengarajan21/-3lbo</link>
      <guid>https://dev.to/bhargavirengarajan21/-3lbo</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/bhargavirengarajan21" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F298689%2Febaa8850-d0b9-4419-8dd9-c77b58935752.png" alt="bhargavirengarajan21"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/bhargavirengarajan21/can-blockchain-and-cloud-work-together-for-real-time-logging-help-me-c6l" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Blockchain in Real-Time Cloud Logging&lt;/h2&gt;
      &lt;h3&gt;bhargavirengarajan21 ・ Mar 24 '25&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#web3&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#cloudcomputing&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#kubernetes&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#docker&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>web3</category>
      <category>cloudcomputing</category>
      <category>kubernetes</category>
      <category>docker</category>
    </item>
    <item>
      <title>Blockchain in Real-Time Cloud Logging</title>
      <dc:creator>bhargavirengarajan21</dc:creator>
      <pubDate>Mon, 24 Mar 2025 06:19:12 +0000</pubDate>
      <link>https://dev.to/bhargavirengarajan21/can-blockchain-and-cloud-work-together-for-real-time-logging-help-me-c6l</link>
      <guid>https://dev.to/bhargavirengarajan21/can-blockchain-and-cloud-work-together-for-real-time-logging-help-me-c6l</guid>
      <description>&lt;h2&gt;
  
  
  Can Blockchain and Cloud Work Together for Real-Time Logging?
&lt;/h2&gt;

&lt;p&gt;The number of likes will decide my project idea. On an interesting note about cloud logging systems: my teammate and I proposed real-time logging using Hyperledger. &lt;/p&gt;

&lt;p&gt;The idea was simple: How can we make cloud logging real-time and secure without compromising performance or cost? &lt;/p&gt;

&lt;p&gt;So we asked ourselves, "What if we could combine the speed of real-time processing with the security of blockchain?"&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture &amp;amp; Workflow:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F928g8o3053mpihlpju69.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F928g8o3053mpihlpju69.png" alt=" " width="761" height="521"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Working:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Log Generation: Cloud server generates logs → Sent to Pub/Sub for processing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Log Routing: Pub/Sub sends logs to Redis for caching and fast access.&lt;br&gt;
Pub/Sub also sends logs for classification (critical vs non-critical).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Critical Log Handling: If it’s a critical log → Log is hashed and sent to Hyperledger for tamper-proof storage. Redis stores the log for quick future access.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Non-Critical Log Handling: If it’s a non-critical log → Sent to traditional logging for storage. Redis stores frequently accessed logs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Retrieval: If a log is requested → Redis provides a fast response if the log is available. If Redis misses → Hyperledger provides the log → Redis updates its cache.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Monitoring:&lt;br&gt;
Grafana monitors: Redis (real-time performance), Hyperledger (security and consistency), Traditional logging (success/failure feedback)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Failure Handling:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;If Redis fails → Pub/Sub retries the request or falls back to 
Hyperledger.&lt;/li&gt;
&lt;li&gt;If Hyperledger write fails → Log is stored in Redis as "pending" 
and retried later.&lt;/li&gt;
&lt;li&gt;If traditional logging fails → Grafana generates an alert.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;/ol&gt;
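The routing in steps 2-5 above can be sketched in Python. This is a minimal stand-in, not the real system: plain dicts play the roles of Redis, Hyperledger, and the traditional store, and a bare SHA-256 of the log entry stands in for the on-chain hash; none of the actual service APIs are shown.

```python
import hashlib

# Stand-ins for the real services (assumptions, not actual APIs):
redis_cache = {}         # Redis: fast cache of recent logs
ledger = {}              # Hyperledger: tamper-evident store of log hashes
traditional_store = {}   # traditional logging backend

def classify(log: dict) -> bool:
    """Step 2: decide whether a log is critical."""
    return log["level"] in ("ERROR", "CRITICAL")

def ingest(log_id: str, log: dict) -> None:
    """Steps 3-4: cache the log, then route it by criticality."""
    redis_cache[log_id] = log
    if classify(log):
        # Critical: hash the log and record the hash on the ledger.
        payload = repr(sorted(log.items())).encode()
        ledger[log_id] = hashlib.sha256(payload).hexdigest()
    else:
        # Non-critical: plain storage only.
        traditional_store[log_id] = log

def verify(log_id: str, log: dict) -> bool:
    """Check a retrieved log against its recorded hash."""
    payload = repr(sorted(log.items())).encode()
    return ledger.get(log_id) == hashlib.sha256(payload).hexdigest()
```

A retrieval path (step 5) would first consult `redis_cache` and fall back to the ledger plus the traditional store on a cache miss.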

&lt;h2&gt;
  
  
  Advantages:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Speed and Real-Time Performance:&lt;/strong&gt;&lt;br&gt;
Redis + Pub/Sub = High-speed log processing (low latency).&lt;br&gt;
Logs are available instantly for search and monitoring.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Security and Data Integrity:&lt;/strong&gt;&lt;br&gt;
Hyperledger ensures logs are tamper-proof and cryptographically secured.&lt;br&gt;
Blockchain-based hashing guarantees that logs cannot be altered post-storage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Fault Tolerance and High Availability:&lt;/strong&gt;&lt;br&gt;
Pub/Sub allows retry and message buffering → Prevents data loss.&lt;br&gt;
Redis replication and Hyperledger peer-based consensus ensure high availability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Scalability:&lt;/strong&gt;&lt;br&gt;
Redis + Pub/Sub = Horizontal scaling for high-volume log handling.&lt;br&gt;
Hyperledger can scale for increased load by adding more peers.&lt;br&gt;
Decoupling of Processing and Storage:&lt;br&gt;
Redis (for speed) and Hyperledger (for integrity) work independently → No bottlenecks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Unified Monitoring:&lt;/strong&gt;&lt;br&gt;
Grafana provides centralized monitoring across all components (Redis, Hyperledger, traditional logs).&lt;br&gt;
Alerts and health checks ensure system health and proactive failure handling.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Compliance and Regulatory Value:&lt;/strong&gt;&lt;br&gt;
Financial and healthcare industries require immutable logs for audits → Blockchain provides proof of integrity.&lt;br&gt;
Perfect for industries needing regulatory-compliant logging.&lt;/p&gt;

&lt;h2&gt;
  
  
  Commercial Value:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Target Market:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Cloud-native applications (SaaS, web apps, microservices).&lt;/p&gt;

&lt;p&gt;Kubernetes deployments needing scalable log pipelines.&lt;/p&gt;

&lt;p&gt;DevOps teams seeking real-time insights + security.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Competitive Edge:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Datadog and Splunk provide real-time logging but lack blockchain-based integrity. Our solution offers both speed and verifiability → Strong selling point. Multi-cloud support (via Kubernetes) = Portable across AWS, GCP, Azure → Reduces vendor lock-in.&lt;/p&gt;

&lt;h2&gt;
  
  
  Drawbacks:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. High Complexity:&lt;/strong&gt;&lt;br&gt;
More components (Redis + Pub/Sub + Hyperledger) = More maintenance and complexity.&lt;br&gt;
Datadog and Splunk are easier to set up; our system requires skilled DevOps knowledge.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Blockchain Latency:&lt;/strong&gt;&lt;br&gt;
Hyperledger consensus adds latency (~100ms–500ms) → Could affect real-time responsiveness.&lt;br&gt;
Solution → Cache critical logs in Redis before blockchain commit.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Cost of Hyperledger Storage:&lt;/strong&gt;&lt;br&gt;
Storing full logs in Hyperledger could get expensive at scale.&lt;br&gt;
Solution → Store only log hashes in Hyperledger and full logs in Redis or traditional storage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Limited Hyperledger Scalability:&lt;/strong&gt;&lt;br&gt;
Hyperledger scales linearly → Adding more peers increases consensus time.&lt;br&gt;
Solution → Use a batching model to reduce the number of consensus events.&lt;/p&gt;
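The hash-only and batching mitigations above can be sketched together: instead of one ledger write per log, hash each batch of logs into a single digest and commit only that. This is a minimal illustration under stated assumptions: the digest is a plain SHA-256 over the batch, and the actual Hyperledger transaction API is not shown.

```python
import hashlib

def batch_digest(logs: list[str]) -> str:
    """Hash a whole batch of log lines into one digest, so only one
    consensus event (one ledger write) is needed per batch."""
    h = hashlib.sha256()
    for line in logs:
        # Length-prefix each entry so the batch encoding is unambiguous.
        h.update(len(line.encode()).to_bytes(4, "big"))
        h.update(line.encode())
    return h.hexdigest()

def commit_in_batches(logs: list[str], batch_size: int = 100) -> list[str]:
    """Return one digest per batch; each digest would be the payload of
    a single (hypothetical) ledger transaction."""
    return [batch_digest(logs[i:i + batch_size])
            for i in range(0, len(logs), batch_size)]
```

With `batch_size=100`, a burst of 250 logs needs only 3 ledger writes instead of 250, while the full log bodies stay in Redis or traditional storage.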

&lt;p&gt;&lt;strong&gt;5. Pub/Sub Complexity:&lt;/strong&gt;&lt;br&gt;
Pub/Sub introduces message handling complexity (ordering, retries).&lt;br&gt;
Solution → Use dead-letter queues (DLQ)&lt;/p&gt;

&lt;h2&gt;
  
  
  Poll &amp;amp; Feedbacks
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Do you think it's an overkill project?&lt;/li&gt;
&lt;li&gt;Is it overly complex without solving any real purpose? Please explain.&lt;/li&gt;
&lt;li&gt;If it's good, what are the points to improve?&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>web3</category>
      <category>cloudcomputing</category>
      <category>kubernetes</category>
      <category>docker</category>
    </item>
    <item>
      <title>Docker storage commands?</title>
      <dc:creator>bhargavirengarajan21</dc:creator>
      <pubDate>Wed, 17 Aug 2022 05:22:57 +0000</pubDate>
      <link>https://dev.to/bhargavirengarajan21/docker-storage-commands-2ee6</link>
      <guid>https://dev.to/bhargavirengarajan21/docker-storage-commands-2ee6</guid>
      <description>&lt;h2&gt;
  
  
  Create volume:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$sudo docker volume create  volume-b1
volume-b1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Inspect a Volume&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo docker inspect volume-b1
[
    {
        "CreatedAt": "2022-08-17T09:52:47+05:30",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/volume-b1/_data",
        "Name": "volume-b1",
        "Options": {},
        "Scope": "local"
    }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;List volumes:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$sudo docker volume ls

DRIVER    VOLUME NAME
local     vol-busybox
local     volume-b1

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Delete a volume:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; $sudo docker volume rm volume-b1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Mount a volume to a container
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;-v or --volume: combines all fields in a single value, and their order must be maintained:

&lt;ul&gt;
&lt;li&gt;name of the volume&lt;/li&gt;
&lt;li&gt;mount path inside the container&lt;/li&gt;
&lt;li&gt;options such as &lt;em&gt;ro&lt;/em&gt; (read-only)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo docker run -d --volume vol-ubuntu:/tmp ubuntu
14efcc03cc75c98877f1074bc30d3570b4f062c122cccb83272409d677c9ae4c

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;--mount: separates the fields into key-value pairs; their order is not important, and it is easier to read.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker service create \
    --mount 'type=volume/bind/tmpfs,src=&amp;lt;VOLUME-NAME&amp;gt;,dst=&amp;lt;CONTAINER-PATH&amp;gt;,volume-driver=local,volume-opt=type=nfs,volume-opt=device=&amp;lt;nfs-server&amp;gt;:&amp;lt;nfs-path&amp;gt;,"volume-opt=o=addr=&amp;lt;nfs-address&amp;gt;,vers=4,soft,timeo=180,bg,tcp,rw"' \
    --name myservice \
    &amp;lt;IMAGE&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo docker run -d   --name devtest   --mount source=myvol2,target=/app   nginx:latest
8639e7cc80f422fdbc00b7209a3f976368af7692d38f75b4310b81961c27fc11
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Inspect the volume:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo docker inspect myvol2
[
    {
        "CreatedAt": "2022-08-17T10:37:28+05:30",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/myvol2/_data",
        "Name": "myvol2",
        "Options": null,
        "Scope": "local"
    }
]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>docker</category>
      <category>devops</category>
      <category>webdev</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>How to Handle Storage in Docker?</title>
      <dc:creator>bhargavirengarajan21</dc:creator>
      <pubDate>Wed, 17 Aug 2022 03:07:00 +0000</pubDate>
      <link>https://dev.to/bhargavirengarajan21/docker-storage-19i2</link>
      <guid>https://dev.to/bhargavirengarajan21/docker-storage-19i2</guid>
      <description>&lt;h2&gt;
  
  
  Why do I need to store data?
&lt;/h2&gt;

&lt;p&gt;Let's take a scenario: we have a MySQL container running and an application fetching data from it. While fetching the data, the container stops abruptly. If you start the container again and request the data, ALL YOUR DATA IS GONE! &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Bc3oYUq2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gkh1l5h36qdtsebxvtdr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Bc3oYUq2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gkh1l5h36qdtsebxvtdr.png" alt="Image description" width="377" height="344"&gt;&lt;/a&gt;  &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The life of the data depends on the container: once the container is gone, the data is gone too. Even if another process needs it, bringing it back could be a herculean task.&lt;/li&gt;
&lt;li&gt;The container's writable layer is tightly coupled with the host, so you can't move the data elsewhere.&lt;/li&gt;
&lt;li&gt;If we need to store data, we can use a storage driver to manage the file system, but this extra abstraction reduces performance.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Which data should I back up?
&lt;/h2&gt;

&lt;p&gt;We need to back up data to permanent storage. We have two layers of data: a read-only layer (permanently stores data) and read/write data (volatile). Obviously, we need to back up the R/W data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ETHXPDxa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qtndrk71l4aj7a6y7pe1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ETHXPDxa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qtndrk71l4aj7a6y7pe1.png" alt="Image description" width="752" height="494"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Where Should I Back Up?
&lt;/h2&gt;

&lt;p&gt;Docker provides storage objects:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Volume&lt;/li&gt;
&lt;li&gt;Bind mount&lt;/li&gt;
&lt;li&gt;tmpfs&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--I9Z0QDN3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o771vmvt5kf6f9th797g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--I9Z0QDN3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o771vmvt5kf6f9th797g.png" alt="Image description" width="553" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Volume:&lt;/strong&gt; &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Managed by Docker: a dedicated directory in the host's file system which is mounted into containers.&lt;/li&gt;
&lt;li&gt;A volume can be used by multiple containers simultaneously. There is no automatic deletion of volumes; we need to delete them when they are no longer required.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A volume may be named or anonymous. Anonymous volumes are not given an explicit name when they are first mounted, so Docker provides a unique random name within the Docker host. Named volumes can persist data after we restart or remove a container, and they are accessible by other containers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Volumes support volume drivers, which allow you to store your data on remote hosts or cloud providers.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Volume:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vkufhJiz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ci24v84ngi27221hk70x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vkufhJiz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ci24v84ngi27221hk70x.png" alt="Image description" width="880" height="482"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The container provides data, and the user provides commands to the Docker engine to store and manage that data. But all the container knows is the name of the volume, not its path on the host. Even an external application with access to the container won't be able to access the data stored in the volume, providing isolation and security for both the host and containers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Where can we use it?&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Sharing data among multiple running containers.&lt;/li&gt;
&lt;li&gt;When the Docker host is not guaranteed to have a given directory or file structure.&lt;/li&gt;
&lt;li&gt;When you want to store your container’s data on a remote host or a cloud provider, rather than locally.&lt;/li&gt;
&lt;li&gt;For backup, restore, or migration from one host to another.&lt;/li&gt;
&lt;li&gt;When the application requires high-performance I/O on Docker Desktop, along with fully native file system behavior.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Bind mount:&lt;/strong&gt; &lt;br&gt;
It is very similar to a volume mount, but with limited benefits. A file or directory on the host system is mounted into the container, referenced by its absolute path on the host machine.&lt;/p&gt;

&lt;p&gt;It is created on demand if it does not exist. Bind mounts are useful, but they expect the host system to contain a specific directory structure; a developer might not always have that structure on their own host.&lt;/p&gt;

&lt;p&gt;A bind mount also exposes a storage location of the host to the container, which can dent the overall security of the application or host.&lt;/p&gt;

&lt;p&gt;For these reasons, consider using named volumes instead. Note also that we can't use CLI commands to directly manage bind mounts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;where can we use ?&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Sharing configuration files from the host to containers. This is how Docker provides DNS resolution to containers, by mounting /etc/resolv.conf into each container.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Sharing source code or build artifacts between a development host and a container. We can mount an app project directory from the host, and every time we rebuild the same project, the container uses the freshly built artifacts.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you use Docker for development this way, your production Dockerfile would copy the production-ready artifacts directly into the image, rather than relying on a bind mount.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;When the file or directory structure of the Docker host is guaranteed to be consistent.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;tmpfs&lt;/strong&gt;&lt;br&gt;
tmpfs is a temporary file system. &lt;/p&gt;

&lt;p&gt;Volumes and bind mounts allow you to share files between the host and the container, and the data persists even after the container is stopped. A tmpfs mount, on the other hand, persists only in the host's memory, not in storage. When the container stops, the tmpfs mount is removed.&lt;/p&gt;

&lt;p&gt;tmpfs mounts are available only on Linux. When you create a container with a tmpfs mount, the container can create files outside the container's writable layer; those files can be created but are never shipped as part of the image.&lt;/p&gt;

&lt;p&gt;Examples where we can use this type of storage: user sessions, or browser history in incognito mode.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;where to use ?&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;When we don't want to persist data, and only need it while the container is running.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Named pipes:&lt;/strong&gt;&lt;br&gt;
An npipe mount is used for communication between the host and a container. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;where to use ?&lt;/strong&gt;&lt;br&gt;
This is used to run a third-party tool inside a container and connect to the Docker Engine API using a named pipe.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>beginners</category>
      <category>docker</category>
      <category>devops</category>
    </item>
    <item>
      <title>Docker network Commands (Docker Series - IV)</title>
      <dc:creator>bhargavirengarajan21</dc:creator>
      <pubDate>Tue, 16 Aug 2022 13:53:00 +0000</pubDate>
      <link>https://dev.to/bhargavirengarajan21/docker-network-part-2-docker-series-iv-4h6c</link>
      <guid>https://dev.to/bhargavirengarajan21/docker-network-part-2-docker-series-iv-4h6c</guid>
      <description>&lt;p&gt;Some network commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo docker network create --driver bridge my-bridge
3973cea88d3f345c60ade7414dee4ff0a09f2863d667e713acef61dcf4badfa6
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above command creates a network of type bridge with a default IP address range.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo docker network create --driver bridge --subnet=192.168.0.0/16 --ip-range=192.168.5.0/24 my-bridge-1

eb05210ca8a0c722684271373dbdb7ea2da57dd82754324c59ffac363be22cd0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above command creates a bridge network with the specified IP range and subnet.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo docker network ls

NETWORK ID     NAME          DRIVER    SCOPE
a2884fc4f337   bridge        bridge    local
6bffa8afea2f   host          host      local
3973cea88d3f   my-bridge     bridge    local
eb05210ca8a0   my-bridge-1   bridge    local
4278b3734e55   none          null      local

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apart from user-created networks, Docker provides us three default networks: host, bridge, and none. The none network is a special case: it is completely isolated and lacks connectivity.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo docker network ls --filter driver=bridge
NETWORK ID     NAME          DRIVER    SCOPE
a2884fc4f337   bridge        bridge    local
3973cea88d3f   my-bridge     bridge    local
eb05210ca8a0   my-bridge-1   bridge    local

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above command filters the networks whose driver is bridge.&lt;/p&gt;

&lt;h2&gt;
  
  
  Connect Network:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo docker network connect my-bridge-1 flamboyant_wing
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above command we connected a running container to the bridge network. This won't return any ID; to check it, we need to run &lt;br&gt;
&lt;code&gt;$ sudo docker inspect flamboyant_wing&lt;/code&gt; (container name)&lt;/p&gt;

&lt;p&gt;result &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nFBzhqm5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pcep7kshd4zi7rmjlwt2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nFBzhqm5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pcep7kshd4zi7rmjlwt2.png" alt="Image description" width="373" height="351"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Connect using network flag in run command:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo docker container run -itd --network host --name cont_ngnix nginx:latest
9227518678e384232b6e34f31674e5618f17851d0f49a317c33294faecec06e5

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we provided the host network. When we try to find the port with &lt;br&gt;
&lt;code&gt;sudo docker container port cont_ngnix&lt;/code&gt;&lt;br&gt;
it returns nothing, since there is no port mapping: the container uses the &lt;strong&gt;host&lt;/strong&gt; IP address directly. Instead, try hitting port 80 on localhost, which is the host port.&lt;/p&gt;

&lt;p&gt;Result on &lt;code&gt;$ sudo docker inspect cont_ngnix&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--j0gB9iyu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d2zgrk88ybtsivoatjus.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--j0gB9iyu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d2zgrk88ybtsivoatjus.png" alt="Image description" width="560" height="344"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Inspect Network
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; $ sudo docker network inspect my-bridge-1

[
    {
        "Name": "my-bridge-1",
        "Id": "eb05210ca8a0c722684271373dbdb7ea2da57dd82754324c59ffac363be22cd0",
        "Created": "2022-08-15T21:57:03.886178549+05:30",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.0.0/16",
                    "IPRange": "192.168.5.0/24",
                    "Gateway": "192.168.5.0"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The above command inspects a user-created network and displays the related details.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;format inspected details:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$sudo docker network inspect --format "{{.Scope}}" bridge
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above command lists the scope of the bridge network.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$sudo docker network inspect --format "{{.Id}}:{{.Name}}" bridge

f46032790f9dcb26f99afec265b1eea44fc96563836a85316c4346f36aa2c6fe:bridge

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command lists the ID and name of the bridge network. &lt;/p&gt;

&lt;h2&gt;
  
  
  Disconnect Network
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$sudo docker network disconnect my-bridge-1 flamboyant_wing

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>docker</category>
      <category>devops</category>
      <category>webdev</category>
      <category>beginners</category>
    </item>
    <item>
      <title>How Docker Communicates ? (Docker Series - IV)</title>
      <dc:creator>bhargavirengarajan21</dc:creator>
      <pubDate>Mon, 15 Aug 2022 16:05:00 +0000</pubDate>
      <link>https://dev.to/bhargavirengarajan21/docker-networking-docker-series-iv-3hpc</link>
      <guid>https://dev.to/bhargavirengarajan21/docker-networking-docker-series-iv-3hpc</guid>
      <description>&lt;p&gt;This is a learner post so if anything needs to be corrected or updated, kindly comment. will be helpful to know !.&lt;/p&gt;

&lt;p&gt;Our application may contain more than one container. For instance, a food application might need frontend, backend, and database containers. &lt;br&gt;
We need to exchange information between containers, so we need a network to carry crucial data between them. Communication can be one-to-one or many-to-one.&lt;/p&gt;

&lt;h2&gt;
  
  
  How do they manage ?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Network Drivers&lt;/strong&gt;!!&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A piece of software that handles container networking, created by the network command.&lt;/li&gt;
&lt;li&gt;It provides networking inside the cluster or host.&lt;/li&gt;
&lt;li&gt;Docker itself uses these network drivers for container communication, so it provides native network drivers. We can also use third-party drivers for specific use cases.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Container Networking model:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mgJqBXuX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u7b8wakli6w013oloj4v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mgJqBXuX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u7b8wakli6w013oloj4v.png" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Host N/W infrastructure:&lt;/strong&gt; includes the software and hardware infrastructure: Wi-Fi, Ethernet, and the host kernel's network stack.&lt;br&gt;
&lt;strong&gt;IPAM&lt;/strong&gt; manages IP addresses within a context.&lt;br&gt;
&lt;strong&gt;Docker Engine&lt;/strong&gt; creates the individual network objects: user-created and default container networks.&lt;/p&gt;

&lt;p&gt;If a container is attached to more than one network, it has more than one endpoint and a different IP address on each network.&lt;br&gt;
In a single-host deployment, network scopes are limited to that host.&lt;br&gt;
Within the same scope, two containers connected to the same network can communicate via DNS, so container names can be used instead of IPs. This information is provided by IPAM. The driver and IPAM then translate the request into network packets the host supports. We also need to make sure containers can reach the outside world; otherwise even a simple apt-get command won't execute.&lt;/p&gt;
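
&lt;p&gt;The name-based DNS behaviour is easy to try on a user-defined network (the network and container names here are made up; note that this does not work on the default bridge network):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a user-defined network - embedded DNS works here
docker network create app-net

# Start two containers on it
docker run -d --name backend --network app-net nginx
docker run --rm --network app-net busybox ping -c 1 backend
# "backend" resolves to the container's IP via Docker's embedded DNS
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
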

&lt;h2&gt;
  
  
  Types of Native Network ?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Host Network:&lt;/strong&gt;&lt;br&gt;
The container is not isolated from the Docker host. If you run a container that binds to port 80 with host networking, the container is reachable on port 80 of the host's IP address.&lt;/p&gt;

&lt;p&gt;This is useful when handling a large number of ports, since it avoids NAT and proxying.&lt;br&gt;
But this type of network is available only on Linux, not on Windows or Mac.&lt;/p&gt;
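
&lt;p&gt;A minimal sketch of host networking (Linux only, as noted above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# No -p port mapping needed: the container shares the host's network stack
docker run -d --network host nginx

# nginx is now reachable directly on port 80 of the host IP
curl http://localhost:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
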

&lt;p&gt;&lt;strong&gt;Bridge Network:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tvmGpseD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1s5v8a5riz9sggu3xhyc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tvmGpseD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1s5v8a5riz9sggu3xhyc.png" alt="Image description" width="526" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Default for Docker containers. If we don't specify any network, the container is attached to the default bridge network.&lt;/li&gt;
&lt;li&gt;The bridge network connects containers through virtual Ethernet pairs and communicates with the host through them. Containers get their own IPs, since the host is isolated from the container. Containers and the host can communicate only within the same bridge scope.&lt;/li&gt;
&lt;li&gt;We can provide an IP range and subnet mask for the bridge network, or IPAM will manage the IP addresses for us; we can ping the addresses assigned on the virtual bridge.&lt;/li&gt;
&lt;/ol&gt;
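
&lt;p&gt;Providing our own IP range to a bridge network, as in point 3, looks like this (the subnet, network and container names are just example values):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a bridge network with an explicit subnet and gateway
docker network create \
  --driver bridge \
  --subnet 172.25.0.0/16 \
  --gateway 172.25.0.1 \
  my-bridge

# Attach a container with a fixed address from that range
docker run -d --name web --network my-bridge --ip 172.25.0.10 nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
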

&lt;p&gt;&lt;strong&gt;Overlay Network:&lt;/strong&gt;&lt;br&gt;
  This does not apply to single-host infrastructure. It applies where there is more than one Docker host daemon. Keeping track of container IPs alone is not enough, since we need to reach both container and host, so we maintain two layers of information.&lt;br&gt;
   &lt;strong&gt;Overlay information - ingress:&lt;/strong&gt;&lt;br&gt;
            Connects the data and control traffic between source and destination container IPs. Docker Swarm service data and network traffic are handled by the ingress network, which is used by default.&lt;br&gt;
   &lt;strong&gt;Underlay information - docker_gwbridge:&lt;/strong&gt;&lt;br&gt;
            Carries data addressed by source and destination host IP. In Docker Swarm, it is the bridge network that connects an individual Docker daemon to the other daemons.&lt;/p&gt;
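
&lt;p&gt;Overlay networks require a Swarm, i.e. more than one cooperating daemon. A sketch of the setup (service and network names are examples):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# On the manager node
docker swarm init

# Create an overlay network and a replicated service on it
docker network create --driver overlay my-overlay
docker service create --name web --network my-overlay --replicas 2 nginx

# ingress carries service traffic between nodes;
# docker_gwbridge connects each local daemon to the others
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
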

&lt;p&gt;&lt;strong&gt;Macvlan Network (virtual local area network):&lt;/strong&gt; a virtualisation technique where a single physical interface is given multiple MAC addresses, each with its own IP address. This type can be used for applications that monitor network traffic and need to be directly connected to the physical interface.&lt;br&gt;
The advantages of macvlan are that performance is better than overlay and it does not need a Linux bridge*. &lt;br&gt;
Things to note before using macvlan:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Unintentional damage to the physical network is possible, since the container is directly connected to it.&lt;/li&gt;
&lt;li&gt;The NIC must be able to handle "promiscuous mode", where one physical interface is assigned multiple MAC addresses.&lt;/li&gt;
&lt;li&gt;If your application works on an overlay or bridge network, those are usually the better long-term choice.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Ipvlan Network:&lt;/strong&gt; same as macvlan, except that the endpoints share the same MAC address. It has two modes, L2 and L3: in L2 mode endpoints have the same MAC address but different IP addresses; in L3 mode packets are routed between endpoints.&lt;br&gt;
  Uses of ipvlan: &lt;br&gt;
     ipvlan should be used where switches restrict the maximum number of MAC addresses per physical port due to port-security configuration.&lt;/p&gt;
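
&lt;p&gt;Creating a macvlan or ipvlan network binds it to a physical parent interface; the interface name and subnets below are assumptions for the host in question:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# macvlan: each container gets its own MAC on the parent interface
docker network create -d macvlan \
  --subnet 192.168.1.0/24 \
  --gateway 192.168.1.1 \
  -o parent=eth0 \
  macvlan-net

# ipvlan: containers share the parent's MAC; mode l2 or l3
docker network create -d ipvlan \
  --subnet 192.168.2.0/24 \
  -o parent=eth0 -o ipvlan_mode=l2 \
  ipvlan-net
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
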

&lt;p&gt;Linux bridge* - it behaves like a regular hardware switch with MAC learning, and also supports protocols like STP for loop prevention. VMs or containers connect to the bridge, and the bridge connects to the outside world. For external connectivity, we need to use NAT.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>webdev</category>
      <category>network</category>
    </item>
    <item>
      <title>What is Dockerfile ? (Docker Series - III)</title>
      <dc:creator>bhargavirengarajan21</dc:creator>
      <pubDate>Mon, 15 Aug 2022 09:17:00 +0000</pubDate>
      <link>https://dev.to/bhargavirengarajan21/what-is-dockerfile-docker-series-iii-35ai</link>
      <guid>https://dev.to/bhargavirengarajan21/what-is-dockerfile-docker-series-iii-35ai</guid>
      <description>&lt;p&gt;&lt;strong&gt;If anything missed kindly add in comment&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A sequential set of instructions for the Docker engine - a set of commands to assemble an image.&lt;/li&gt;
&lt;li&gt;The primary way of interacting with Docker and migrating containers.&lt;/li&gt;
&lt;li&gt;Each instruction is processed individually and its result is stored as a layer in a stack. This stack of layers, managed by the file system, becomes a Docker image.&lt;/li&gt;
&lt;li&gt;Sequential layers help with caching and troubleshooting. Pre-created layers are reused by the Docker daemon when two Dockerfiles share the same layer at some stage.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Format:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ARG
FROM
RUN
ADD|COPY
ENV
CMD
ENTRYPOINT
EXPOSE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No extension is required and the file can be created in any text editor; by convention the file is named 'Dockerfile', with a capital 'D'.&lt;/p&gt;

&lt;h2&gt;
  
  
  EXAMPLE:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ARG VERSION=16.04
FROM ubuntu:${VERSION}
RUN apt-get update -y
CMD ["bash"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
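
&lt;p&gt;Building the example above, with the ARG overridden at build time (the image tag is just an example name):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t my-ubuntu .                    # uses the default VERSION=16.04
docker build --build-arg VERSION=20.04 -t my-ubuntu:20.04 .
docker run -it my-ubuntu:20.04                 # drops into bash, per CMD
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
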



&lt;p&gt;Comments in a Dockerfile are written with '#'. But not every line starting with a hash is a comment - it may also be a parser directive.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Parser directive&lt;/strong&gt;&lt;br&gt;
  A parser directive tells Docker how the Dockerfile should be handled or read. It must appear at the very top of the file. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Types of parser:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;syntax: only honoured by BuildKit; it is used &lt;br&gt;
          to specify which Dockerfile syntax (frontend image) to build with.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;escape: available everywhere; it is used to specify the &lt;br&gt;
         escape character. The default is '\'.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
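
&lt;p&gt;Both directives in place look like this; they must come before any other line, including ordinary comments. Here escape is switched to the backtick, so RUN commands can wrap lines with it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# syntax=docker/dockerfile:1
# escape=`

FROM ubuntu:22.04
RUN echo hello `
    world
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
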

&lt;p&gt;&lt;strong&gt;COMMANDS:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;ARG&lt;/strong&gt; is the only instruction that may precede FROM&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;FROM&lt;/strong&gt; can appear multiple times within a single Dockerfile to create multiple images or use one build stage as a dependency for another. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;WORKDIR&lt;/strong&gt; Sets the working directory for RUN, CMD, COPY, ADD and ENTRYPOINT instructions. The directory is created if it does not already exist.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RUN&lt;/strong&gt; executes a command in a new layer on top of the current layer. 
RUN ["executable", "param1", "param2"] (exec form) executes the command without invoking a shell, e.g. RUN ["/bin/bash", "-c", "echo hello"].&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ADD&lt;/strong&gt;  Adds new files from src to dest in the image; it can also fetch remote URLs and auto-extract local archives, so it should be used with caution.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;COPY&lt;/strong&gt; Copies new files from src to dest in the image; prefer it when the extra features of ADD are not needed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ENV&lt;/strong&gt;  Provides a value for an environment variable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CMD&lt;/strong&gt;  Provides the default command to execute when the container starts.
A Dockerfile can contain only one effective CMD; if more than one is specified, only the last one is executed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ENTRYPOINT&lt;/strong&gt; Configures the command to execute when the container is initiated.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EXPOSE&lt;/strong&gt; Documents that the container listens on a particular port with a specific protocol.&lt;/li&gt;
&lt;/ol&gt;
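
&lt;p&gt;Putting the commands above together - a minimal sketch for a hypothetical Node.js app (the file names, port and base-image version are assumptions, not from any particular project):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ARG NODE_VERSION=18
FROM node:${NODE_VERSION}
WORKDIR /app                 # created automatically if missing
COPY package.json .
RUN npm install              # cached as its own layer
COPY . .
ENV PORT=3000
EXPOSE 3000
ENTRYPOINT ["node"]
CMD ["server.js"]            # default argument; can be overridden at run time
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
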

</description>
      <category>webdev</category>
      <category>docker</category>
      <category>beginners</category>
      <category>series</category>
    </item>
    <item>
      <title>How Docker Works ? (Docker Series - II)</title>
      <dc:creator>bhargavirengarajan21</dc:creator>
      <pubDate>Sat, 13 Aug 2022 17:04:00 +0000</pubDate>
      <link>https://dev.to/bhargavirengarajan21/how-docker-works-docker-series-ii-1ml4</link>
      <guid>https://dev.to/bhargavirengarajan21/how-docker-works-docker-series-ii-1ml4</guid>
      <description>&lt;p&gt;Docker is a &lt;strong&gt;Client- Server&lt;/strong&gt; architecture.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LU_lsp-J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m7wt1sfpmljzouivmlfb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LU_lsp-J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m7wt1sfpmljzouivmlfb.png" alt="Image description" width="712" height="393"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What does the docker client do ?
&lt;/h2&gt;

&lt;p&gt;The client is a tool for users to communicate/ interact  with Docker. We can use commands and also Docker provided API to communicate.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is docker host and what does docker host contain ?
&lt;/h2&gt;

&lt;p&gt;The Docker host is the environment that runs or executes your application. It runs a program called the Daemon, which communicates bi-directionally: it receives user input and returns the response.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;contains ?&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Docker Daemon:&lt;/strong&gt; This manages docker objects such as images and containers. Daemon builds the docker file and processes it as an image.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Docker Image:&lt;/strong&gt; A read-only template containing instructions to build a container. This can be either our own image or an existing image pulled from a Docker registry. Docker images are built from a Dockerfile, and we can publish our own images. When redeploying, only the layers where the image has changed are rebuilt, so it is light-weight and faster than VM technology.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Docker Container:&lt;/strong&gt; A runnable instance of a Docker image that we can create, stop and run using the CLI or API. Containers can be attached to networks and storage. By default a container runs isolated from its host machine.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Docker Registry ?
&lt;/h2&gt;

&lt;p&gt;Docker registry is a repository to store the images. We can also maintain a private registry. Docker will look for the images from the registry by default.&lt;/p&gt;

&lt;p&gt;We can pull or run images that are available in the registry as well as push images.&lt;/p&gt;
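
&lt;p&gt;A typical round-trip with a registry looks like this (the account and tag names are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker pull nginx                      # fetched from Docker Hub by default
docker tag nginx myaccount/nginx:v1    # re-tag for our own repository
docker push myaccount/nginx:v1         # requires docker login first
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
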

</description>
      <category>beginners</category>
      <category>docker</category>
      <category>tutorial</category>
      <category>series</category>
    </item>
    <item>
      <title>What is Docker ? (Docker Series - I)</title>
      <dc:creator>bhargavirengarajan21</dc:creator>
      <pubDate>Sun, 10 Jul 2022 10:35:00 +0000</pubDate>
      <link>https://dev.to/bhargavirengarajan21/what-is-docker-docker-series-i-1kb8</link>
      <guid>https://dev.to/bhargavirengarajan21/what-is-docker-docker-series-i-1kb8</guid>
      <description>&lt;p&gt;Docker is a open platform to package your code ,run, deploy, ship anywhere. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why docker ?&lt;/strong&gt;&lt;br&gt;
Let's say my application stack contains MongoDB, Redis and NodeJS. Are we sure these libraries and dependencies are compatible with the OS we are using? (the "matrix from hell"). Here comes Docker: it separates the running application from the infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How does docker do that ?&lt;/strong&gt;&lt;br&gt;
Containers !! Docker does it with the help of containers.&lt;br&gt;
A container is a loosely isolated, light-weight, runnable environment. Isolation and security let us run many containers on the same host while sharing the OS kernel.&lt;/p&gt;

&lt;p&gt;The above issue can also be handled by virtualisation, but we would need to create a separate VM instance for each set of dependencies, which is an overhead to maintain. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker Container vs Virtualisation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hbx_zlAv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l66b02sltdmb5n1eurt7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hbx_zlAv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l66b02sltdmb5n1eurt7.png" alt="Image description" width="724" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What can I use Docker for?&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Fast and consistent delivery of applications&lt;/li&gt;
&lt;li&gt;Responsive deployment, portability and scaling &lt;/li&gt;
&lt;li&gt;Running more workloads on the same hardware &lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>webdev</category>
      <category>beginners</category>
      <category>docker</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
