<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Waji</title>
    <description>The latest articles on DEV Community by Waji (@waji97).</description>
    <link>https://dev.to/waji97</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1021152%2F4dea5108-26f8-4189-a8c9-46696a48d908.jpeg</url>
      <title>DEV Community: Waji</title>
      <link>https://dev.to/waji97</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/waji97"/>
    <language>en</language>
    <item>
      <title>Kubernetes Introduction for Starters</title>
      <dc:creator>Waji</dc:creator>
      <pubDate>Thu, 27 Apr 2023 05:45:13 +0000</pubDate>
      <link>https://dev.to/waji97/kubernetes-introduction-for-starters-22lj</link>
      <guid>https://dev.to/waji97/kubernetes-introduction-for-starters-22lj</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Kubernetes is a container orchestration tool that has revolutionized the way we deploy and manage containerized applications. It is an open-source platform that automates container deployment, scaling, and management. Kubernetes architecture is designed to make container orchestration easier, faster, and more efficient&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Container Orchestration
&lt;/h3&gt;

&lt;p&gt;Container orchestration refers to the process of managing complex configurations of various containers using server administration and management code. It supports automation of tasks such as clustering server resources, container deployment and management, service discovery and access, load handling, and fault recovery&lt;/p&gt;

&lt;p&gt;Docker Swarm is another popular container orchestration tool, though Kubernetes has become the de facto standard&lt;/p&gt;

&lt;h3&gt;
  
  
  Why use Container Orchestration?
&lt;/h3&gt;

&lt;p&gt;In typical container virtualization, deployment and operations management consume significant resources, making efficient container management difficult at scale.&lt;/p&gt;

&lt;p&gt;By using container orchestration tools, clustered servers can be managed in a centralized manner, greatly reducing management resources.&lt;/p&gt;

&lt;p&gt;With container orchestration tools, automated deployment and scaling, rollouts/rollbacks, and container recovery operations can be implemented.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;▣ Example:&lt;/strong&gt; Building container images using only container virtualization and deploying them to multiple servers&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0giyatnit1c563yxaasx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0giyatnit1c563yxaasx.png" alt="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0giyatnit1c563yxaasx.png" width="800" height="161"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Kubernetes Architecture
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5yen6r9v6opaq3cxldof.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5yen6r9v6opaq3cxldof.png" alt="K8s Architecture" width="779" height="531"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes is a de facto standard container orchestration tool that was open-sourced by &lt;strong&gt;Google in 2014&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Kubernetes supports most of the functions required for container-based service operation, such as container deployment in a microservices (MSA) architecture and service fault recovery.&lt;/li&gt;
&lt;li&gt;Kubernetes has high scalability as it supports most of the functions and components required for operation in a cloud environment and can be easily integrated with other cloud operating tools.&lt;/li&gt;
&lt;li&gt;It has high reliability and stability as it is developed and maintained by an open-source project involving companies such as Google and Red Hat.&lt;/li&gt;
&lt;li&gt;Most of the various components that make up the container orchestration ecosystem are developed and updated with Kubernetes as their base.&lt;/li&gt;
&lt;li&gt;Currently, Kubernetes is an open-source project managed by the Cloud Native Computing Foundation (CNCF) (URL: &lt;a href="https://landscape.cncf.io/"&gt;https://landscape.cncf.io/&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So what are the components included in the Kubernetes Cluster? &lt;/p&gt;

&lt;p&gt;💡 A cluster refers to a logical binding of several servers configured to be used as if they were one server&lt;/p&gt;

&lt;p&gt;A Kubernetes cluster is composed of a &lt;code&gt;Master&lt;/code&gt; node and its &lt;code&gt;Worker&lt;/code&gt; nodes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Master Node
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The master node hosts the core components that make up the “&lt;strong&gt;Kubernetes Control Plane&lt;/strong&gt;”&lt;/li&gt;
&lt;li&gt;The core components include the API Server, Controller Manager, Scheduler, and etcd&lt;/li&gt;
&lt;li&gt;The master node is used for administrative tasks only&lt;/li&gt;
&lt;li&gt;It manages the entire Kubernetes cluster: assigning scheduled tasks to the worker nodes, monitoring the health of the system, and scaling applications up and down&lt;/li&gt;
&lt;li&gt;Essentially, it’s the brain of the Kubernetes cluster&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s look into the Components included in the Master Node&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;→ kube-apiserver:&lt;/strong&gt; The API server provides the APIs used to control internal resources within the Kubernetes cluster (the API server must also be reachable from outside the cluster)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;→ kube-scheduler:&lt;/strong&gt; The scheduler is responsible for assigning Kubernetes resources to nodes. It examines the status of the worker nodes that make up the cluster and selects the optimal node for each resource request that requires node allocation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;→ kube-controller-manager:&lt;/strong&gt; It watches the state of the Kubernetes cluster and makes changes to bring the cluster back to the desired state.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;→ etcd:&lt;/strong&gt; &lt;code&gt;etcd&lt;/code&gt; is a distributed key-value store used to store configuration data for the Kubernetes cluster. It is used to store the state of the Kubernetes cluster, such as node status and service status. &lt;/p&gt;

&lt;p&gt;💡 It also keeps both the cluster’s current state and its desired state, so if Kubernetes finds any discrepancy between the two, it works to bring the cluster back to the desired state&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Several add-ons can be installed in the control plane to enable extra features. These add-on components include:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Metrics-server —&lt;/strong&gt; Collects resource usage metrics, such as CPU and memory usage, from the nodes inside the K8s cluster&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Core-DNS —&lt;/strong&gt; A DNS server used within the cluster&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dashboard —&lt;/strong&gt; Provides a GUI web-based dashboard for managing the cluster&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Worker Node
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;It was previously known as the &lt;code&gt;minion&lt;/code&gt; node&lt;/li&gt;
&lt;li&gt;Worker nodes run the applications and workloads of the Kubernetes cluster.&lt;/li&gt;
&lt;li&gt;It is responsible for running containers and handling the container runtime environment.&lt;/li&gt;
&lt;li&gt;The core components include the kubelet, the container runtime, and kube-proxy&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Looking into the main components of the worker node:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;→ kubelet:&lt;/strong&gt; It is the main Kubernetes component that communicates with the API server of the Master node to register nodes in a Kubernetes cluster. It manages the lifecycle of pods and also monitors the status of nodes and pods.&lt;/p&gt;

&lt;p&gt;💡 The &lt;strong&gt;kubelet&lt;/strong&gt; communicates with the Docker (or other container runtime) daemon via its API to create and manage containers. After any change to a Pod on a node, it reports the new state to the API server, which in turn saves it to the &lt;code&gt;etcd&lt;/code&gt; database&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;→ kube-proxy:&lt;/strong&gt; A network proxy that runs on every worker node and generates and manages the network rules for that node. It acts like a reverse proxy, forwarding requests to the appropriate service or application inside the K8s private network&lt;/p&gt;

&lt;p&gt;→ &lt;strong&gt;Container Runtime:&lt;/strong&gt; It runs on all worker nodes and is responsible for running and managing containers (commonly Docker or containerd)&lt;/p&gt;

&lt;h3&gt;
  
  
  The K8s Workflow
&lt;/h3&gt;

&lt;p&gt;So basically the workflow looks like the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5nz52o0nnv3ym63iqqfz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5nz52o0nnv3ym63iqqfz.png" alt="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5nz52o0nnv3ym63iqqfz.png" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;The engineer/developer creates a &lt;strong&gt;manifest&lt;/strong&gt; file that describes the desired state of the application or workload that has to be deployed in the cluster. It contains details about the containers, volumes, networking and other resources that are required for the application&lt;/p&gt;


💡 The manifest file uses the YAML format as it is human-readable and easy to understand

&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The manifest file is applied using the &lt;code&gt;kubectl&lt;/code&gt; command-line tool that communicates with the K8s API-server&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;strong&gt;API server&lt;/strong&gt; receives the manifest file and validates it for correctness and compliance with the Kubernetes API schema. If the manifest file is valid, it stores the desired state of the application in the &lt;code&gt;etcd&lt;/code&gt; database&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;strong&gt;scheduler&lt;/strong&gt; component of the Master node continuously monitors the state of the Kubernetes cluster and the available resources on each worker node. When a new pod needs to be scheduled, the scheduler queries the API server to obtain the current state of the cluster along with the current state of the worker nodes and the desired state.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;strong&gt;scheduler&lt;/strong&gt; selects a suitable worker node according to the scheduling policy and then instructs the API server to create the pod on that node. The API server then updates the desired state of the cluster in &lt;code&gt;etcd&lt;/code&gt; to include the new pod and its current status.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;strong&gt;kubelet&lt;/strong&gt; component of the worker node receives the instructions to run the new pod from the API server. It communicates with the &lt;strong&gt;container runtime&lt;/strong&gt; to create the container for the pod, based on the specifications provided in the manifest file&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After the container is created, the &lt;strong&gt;kubelet&lt;/strong&gt; communicates with &lt;strong&gt;kube-proxy&lt;/strong&gt; to set up network routing rules and load balancing for the pod, so it can communicate with other pods and services inside the cluster&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
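&lt;p&gt;A minimal sketch of step 1, writing a Pod manifest in YAML (the resource names here are illustrative, not from this post):&lt;/p&gt;

```shell
# Write a minimal Pod manifest; one printf argument per YAML line.
printf '%s\n' \
  'apiVersion: v1' \
  'kind: Pod' \
  'metadata:' \
  '  name: mynginx-pod' \
  'spec:' \
  '  containers:' \
  '  - name: nginx' \
  '    image: nginx:latest' \
  '    ports:' \
  '    - containerPort: 80' > pod.yaml

# Steps 2-7 are then triggered by handing the manifest to the API server:
# kubectl apply -f pod.yaml     # kubectl sends the manifest to the API server
# kubectl get pod mynginx-pod -o wide   # shows which node the scheduler picked
```

The `kubectl` commands are left commented since they require a running cluster.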

&lt;h3&gt;
  
  
  Kubernetes Features
&lt;/h3&gt;

&lt;p&gt;Let’s look into some of the core features that Kubernetes offers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Self-healing:&lt;/strong&gt; Kubernetes can automatically restart or replace containers that fail to run properly&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated Rollouts &amp;amp; Rollbacks:&lt;/strong&gt; Kubernetes rolls out changes toward the desired deployment state at a controlled rate, and can roll back to a previous state if something goes wrong&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Service Discovery &amp;amp; Load Balancing:&lt;/strong&gt; Kubernetes can expose containers within the cluster to the outside world using DNS names or their own IP addresses. For services under heavy network load, traffic can be load balanced across containers to keep the service stable&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Storage Orchestration:&lt;/strong&gt; Kubernetes can mount local storage as well as storage services provided by public cloud providers, making external storage resources easy to use and ensuring data persistence&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secret and Configuration Management:&lt;/strong&gt; Kubernetes can safely store and manage sensitive information such as passwords, SSH keys, and OAuth tokens. When container configuration changes, Kubernetes can roll out the updated configuration without rebuilding the container image&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automatic Bin Packing:&lt;/strong&gt; Given the resource requirements of each container, Kubernetes places it on the most appropriate cluster node&lt;/li&gt;
&lt;/ul&gt;
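&lt;p&gt;Several of these features map directly to fields in a Deployment manifest. A minimal sketch (the resource names and image tag are illustrative, not from this post):&lt;/p&gt;

```shell
# replicas drives self-healing and scaling; strategy drives rollouts/rollbacks.
printf '%s\n' \
  'apiVersion: apps/v1' \
  'kind: Deployment' \
  'metadata:' \
  '  name: web' \
  'spec:' \
  '  replicas: 3' \
  '  strategy:' \
  '    type: RollingUpdate' \
  '  selector:' \
  '    matchLabels:' \
  '      app: web' \
  '  template:' \
  '    metadata:' \
  '      labels:' \
  '        app: web' \
  '    spec:' \
  '      containers:' \
  '      - name: nginx' \
  '        image: nginx:1.23' > deployment.yaml

# kubectl apply -f deployment.yaml      # 3 replicas, replaced if any dies
# kubectl rollout undo deployment/web   # automated rollback to the prior state
```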

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwvs5e755yy8maamkzue1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwvs5e755yy8maamkzue1.png" alt="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wvs5e755yy8maamkzue1.png" width="800" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Today we looked at Kubernetes, the most popular container orchestration tool. We also looked at the components inside a Kubernetes cluster and how they work together. Finally, we discussed the important features that Kubernetes brings to the table. To see how to set up a working Kubernetes cluster without using any managed cloud services (often referred to as "vanilla" Kubernetes), follow this &lt;a href="https://dev.to/waji97/docker-kubernetes-setup-5bf6"&gt;link&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://dev.to/setevoy/kubernetes-part-1-architecture-and-main-components-overview-22g6"&gt;K8s Architecture and main components overview&lt;/a&gt;&lt;br&gt;
&lt;a href="https://dev.to/zephinzer/kubernetes-in-five-minutes-31m6"&gt;K8s in 5 minutes&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Docker Registry Management</title>
      <dc:creator>Waji</dc:creator>
      <pubDate>Wed, 08 Mar 2023 03:11:03 +0000</pubDate>
      <link>https://dev.to/waji97/docker-registry-management-3ag2</link>
      <guid>https://dev.to/waji97/docker-registry-management-3ag2</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;A Docker registry is a place to store Docker images. There are two types of Docker registries:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Private registry &lt;/li&gt;
&lt;li&gt;Public registry&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Docker Hub is a popular public registry, but organizations often prefer to use their own private registry to store and manage their images as it provides more control over image access and security.&lt;/p&gt;

&lt;p&gt;Docker registry management involves pushing and pulling images from a registry, as well as managing access control, security, and versioning. To facilitate these tasks, various Docker registry management tools are available, such as Docker Trusted Registry, Harbor, and JFrog Artifactory&lt;/p&gt;

&lt;p&gt;👉 In this post, I will be testing a Docker Hub repository and a Nexus repository, and then applying GitHub Actions to build an image automatically (CI)&lt;/p&gt;




&lt;h2&gt;
  
  
  Hands on Docker-Hub
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 If you don't have a Docker Hub ID, you can refer to the steps mentioned over &lt;a href="https://docs.docker.com/docker-id/" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;After logging in to our account in Docker hub, we need to head to 'repositories' and select 'create repository'&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnh9ftvnso5r0ii60wvbe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnh9ftvnso5r0ii60wvbe.png" alt="Creat"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Creating a private repository&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu8kz9kao7iio3nm0ozjz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu8kz9kao7iio3nm0ozjz.png" alt="Repo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the repository is created, we can confirm the repository name &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuupzipvhox2p02tmsqmw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuupzipvhox2p02tmsqmw.png" alt="Repo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now back to our Linux terminal. I will be working on a new directory&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;mkdir&lt;/span&gt; /DockerFile_Root
&lt;span class="nb"&gt;cd&lt;/span&gt; /DockerFile_Root


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Creating a simple test &lt;code&gt;index.html&lt;/code&gt; file&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="s2"&gt;"Test"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; index.html


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Creating a Dockerfile&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

vi Dockerfile

FROM nginx:latest
LABEL maintainer &lt;span class="s2"&gt;"Author &amp;lt;Author@localhost.com&amp;gt;"&lt;/span&gt;
ADD index.html /usr/share/nginx/html


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Building this dockerfile&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker build &lt;span class="nt"&gt;-t&lt;/span&gt; mynginx:v1 ./

Successfully built a367f58de111
Successfully tagged mynginx:v1


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Logging in to my docker hub&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker login
Username:
Password:
Login Succeeded


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Pushing the image to our docker hub repository&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker tag mynginx:v1 waji97/myrepo:v1
docker push waji97/myrepo:v1


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Deleting the local images and tags&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker rmi mynginx:v1
docker rmi waji97/myrepo:v1


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Checking the repository from dockerhub&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fttyny2v8d19tg0mj5cmw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fttyny2v8d19tg0mj5cmw.png" alt="Image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we can start a docker container on our local machine using this image from our dockerhub&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; Mynginx_1 &lt;span class="nt"&gt;-p&lt;/span&gt; 80:80 waji97/myrepo:v1
Unable to find image &lt;span class="s1"&gt;'waji97/myrepo:v1'&lt;/span&gt; locally
v1: Pulling from waji97/myrepo
&lt;span class="nb"&gt;.&lt;/span&gt;
&lt;span class="nb"&gt;.&lt;/span&gt;
Status: Downloaded newer image &lt;span class="k"&gt;for &lt;/span&gt;waji97/myrepo:v1
c3eeab8e250c18e472f105356790368253eb60991532d0b9b786829f94bf6bbe


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;👉 We can see that it pulled from my repository&lt;/p&gt;

&lt;p&gt;Checking the docker process and images&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS              PORTS                NAMES
c3eeab8e250c        waji97/myrepo:v1    &lt;span class="s2"&gt;"/docker-entrypoint.…"&lt;/span&gt;   About a minute ago   Up About a minute   0.0.0.0:80-&amp;gt;80/tcp   Mynginx_1

docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
waji97/myrepo       v1                  a367f58de111        10 minutes ago      142MB


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We can also visit our nginx server &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2dwqopt65t3u5bs9sx1o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2dwqopt65t3u5bs9sx1o.png" alt="Test"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Hands on Nexus Repository
&lt;/h2&gt;

&lt;p&gt;Nexus is a popular open-source repository manager developed by Sonatype. It provides storage and management of software components, access control, and integration with tools like Jenkins and Docker.&lt;/p&gt;

&lt;p&gt;It was originally used mainly with Java projects, and its repositories are accessible through a web UI. We will be using a Nexus repository to build a private image registry in Docker format.&lt;/p&gt;

&lt;p&gt;There are three types of Nexus repositories:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Hosted: a private repository within a company (uploads are only possible locally)&lt;/li&gt;
&lt;li&gt;Proxy: mirrors remote repositories (cache)&lt;/li&gt;
&lt;li&gt;Group: groups various types of repositories together&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd15nslmb7u16jmvh4ln8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd15nslmb7u16jmvh4ln8.png" alt="Nexus"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👉 Before starting the hands on, I recommend turning off swap memory using &lt;code&gt;swapoff -a&lt;/code&gt;&lt;/p&gt;
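&lt;p&gt;To make this persistent across reboots, the swap entries in &lt;code&gt;/etc/fstab&lt;/code&gt; can be commented out as well. A sketch, demonstrated on a sample file so it is safe to run anywhere (on a real host you would target &lt;code&gt;/etc/fstab&lt;/code&gt; as root):&lt;/p&gt;

```shell
# Build a sample fstab with one swap entry (contents are illustrative).
printf '%s\n' \
  'UUID=1234-abcd /         ext4 defaults 0 1' \
  '/swapfile      none      swap sw       0 0' > fstab.sample

# Comment out any line whose filesystem type is swap.
sed -i '/ swap / s/^/#/' fstab.sample

# On a real host (requires root):
# sudo swapoff -a                              # disable swap immediately
# sudo sed -i '/ swap / s/^/#/' /etc/fstab     # keep it off across reboots
```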

&lt;p&gt;We will start by creating a volume&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker volume create nexus_volume
nexus_volume


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next we will change the ownership for the data directory for this volume&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;chown &lt;/span&gt;200:200 /var/lib/docker/volumes/nexus_volume/_data/


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;👉 We need to change the ownership here because the Nexus process inside the container runs as UID 200; leaving it unchanged can cause push/pull errors later&lt;/p&gt;

&lt;p&gt;Pulling the nexus image&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker pull sonatype/nexus3:latest


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, we will create a new container for our nexus repo&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; Nexus &lt;span class="nt"&gt;-p&lt;/span&gt; 8081:8081 &lt;span class="nt"&gt;-p&lt;/span&gt; 5000-5001:5000-5001 &lt;span class="nt"&gt;-v&lt;/span&gt; nexus_volume:/nexus-data sonatype/nexus3


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;👉 I would recommend having at least 3 GB of RAM on your host system for the above container to actually run&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                                      NAMES
9c68a0f95437        sonatype/nexus3     &lt;span class="s2"&gt;"/opt/sonatype/nexus…"&lt;/span&gt;   3 seconds ago       Up 1 second         0.0.0.0:5000-5001-&amp;gt;5000-5001/tcp, 0.0.0.0:8081-&amp;gt;8081/tcp   Nexus


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now we need to copy the initial admin password from the data volume&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;cat&lt;/span&gt; /var/lib/docker/volumes/nexus_volume/_data/admin.password 


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;👉 Copy the password that is displayed&lt;/p&gt;

&lt;p&gt;Navigating to the Nexus Repo Manager via our browser and logging in as the 'admin' user&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu5jlb33gjaourwdqckr1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu5jlb33gjaourwdqckr1.png" alt="Admin"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This will trigger a new 'change password' wizard. We can change our admin account password using this wizard&lt;/p&gt;

&lt;p&gt;From settings, we will navigate to 'Blob Stores'&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F12szhsv12okz66vngjea.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F12szhsv12okz66vngjea.png" alt="Blobs"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Creating a new blob store to use for the Docker registry&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff207xxcht6q3vyjpem7u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff207xxcht6q3vyjpem7u.png" alt="Registry"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now heading to "Repositories" and clicking on "Create repository"&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvj20uelx1quuyuuoewkx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvj20uelx1quuyuuoewkx.png" alt="Create"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We need to select 'docker (hosted)' from here and set it up as follows&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4wh347oi3o66i231n3wi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4wh347oi3o66i231n3wi.png" alt="Pull"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After creating the above repository, we need to create another one of type 'docker (proxy)'&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fplbn1czhiju0rbrnzsbb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fplbn1czhiju0rbrnzsbb.png" alt="P"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faoe70e5w42ba6zw92ztb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faoe70e5w42ba6zw92ztb.png" alt="Pu"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally we will create our 'docker(group)' repository as well&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcti2d602ppnw84i5nza9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcti2d602ppnw84i5nza9.png" alt="Group"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Adding all the members to our group&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fasy8zc604l3c1t9vilcw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fasy8zc604l3c1t9vilcw.png" alt="Group"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now from the "Security" section, we will navigate to Realms&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftxww7q68r6qo3urjpics.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftxww7q68r6qo3urjpics.png" alt="Realm"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Adding the "Docker Bearer Token Realm"&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4oa7bf440mknp4pcd1kj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4oa7bf440mknp4pcd1kj.png" alt="Realms"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👉 Going back to our Linux terminal&lt;/p&gt;

&lt;p&gt;We will add the following to our daemon JSON file&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

vi /etc/docker/daemon.json

&lt;span class="s2"&gt;"insecure-registries"&lt;/span&gt; : &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"192.168.1.10:5000"&lt;/span&gt;, &lt;span class="s2"&gt;"192.168.1.10:5001"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 We had to add this because the Docker client requires HTTPS for registries by default, while our Nexus registry is served over plain HTTP&lt;/p&gt;
&lt;/blockquote&gt;
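&lt;p&gt;Since a malformed &lt;code&gt;daemon.json&lt;/code&gt; will prevent the Docker daemon from starting, it is worth validating the JSON before restarting. A minimal sketch (a temporary path is used here so the example is self-contained; the real file lives at &lt;code&gt;/etc/docker/daemon.json&lt;/code&gt;):&lt;/p&gt;

```shell
# Write the registry configuration (temporary copy for illustration)
cat > /tmp/daemon.json <<'EOF'
{
  "insecure-registries" : ["192.168.1.10:5000", "192.168.1.10:5001"]
}
EOF

# Validate the JSON syntax before reloading the daemon;
# json.tool exits non-zero on invalid JSON
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json OK"
```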

&lt;p&gt;Reloading the daemon and restarting docker&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

systemctl daemon-reload
systemctl restart docker

docker start Nexus
Nexus


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now we can pull a test image through our Nexus proxy repository&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker pull 192.168.1.10:5000/hello-world
Using default tag: latest
latest: Pulling from hello-world
&lt;span class="nb"&gt;.&lt;/span&gt;
Status: Downloaded newer image &lt;span class="k"&gt;for &lt;/span&gt;192.168.1.10:5000/hello-world:latest


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To confirm that this image was downloaded, we can check the Nexus web UI&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fimriuqbspon3uejeubwk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fimriuqbspon3uejeubwk.png" alt="Nexus"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will perform another test by tagging our existing 'alpine' image for our Nexus repository&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker tag alpine 192.168.1.10:5001/myalpine:v1


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Logging in to our repository&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker login 192.168.1.10:5001
Username:
Password:
Login Succeeded


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Pushing the tagged image to our repository&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker push 192.168.1.10:5001/myalpine:v1
The push refers to repository &lt;span class="o"&gt;[&lt;/span&gt;192.168.1.10:5001/myalpine]
7cd52847ad77: Pushed 

&lt;span class="c"&gt;# Logging out&lt;/span&gt;
docker &lt;span class="nb"&gt;logout &lt;/span&gt;192.168.1.10:5001/myalpine:v1


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Confirming this push from the WEB UI&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs5akocydfhw9k41m5zm4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs5akocydfhw9k41m5zm4.png" alt="UI"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Hands-on GitHub Actions (CI)
&lt;/h2&gt;

&lt;p&gt;GitHub Actions is a powerful CI/CD tool for automating software development workflows. It allows developers to automate tasks, build, test, and deploy code directly from GitHub.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Workflows are defined in YAML format and consist of Events and Jobs.&lt;/li&gt;
&lt;li&gt;Events are triggers that execute a workflow.&lt;/li&gt;
&lt;li&gt;Events can be configured to trigger on actions such as a push to a repository.&lt;/li&gt;
&lt;li&gt;Workflows can contain multiple Jobs.&lt;/li&gt;
&lt;li&gt;When multiple Jobs are present, parallel execution is the default behavior.&lt;/li&gt;
&lt;li&gt;A Runner is required to execute Jobs and is typically a container or virtual machine.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✨ Workflow components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Workflow: The entire workspace for building, deploying, and testing.&lt;/li&gt;
&lt;li&gt;Event: The trigger for a Workflow to run (when it executes).&lt;/li&gt;
&lt;li&gt;Job: The unit of work in a Workflow.&lt;/li&gt;
&lt;li&gt;Step: Defines the sequence of tasks that a Job should perform.&lt;/li&gt;
&lt;li&gt;Action: A pre-defined function (library) that performs a specific task.&lt;/li&gt;
&lt;li&gt;Runner: A containerized instance used to execute a Workflow.&lt;/li&gt;
&lt;/ul&gt;
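&lt;p&gt;The pieces above map directly onto workflow syntax. A minimal sketch (names here are illustrative, not taken from this article):&lt;/p&gt;

```yaml
# Minimal workflow showing how the terms map to YAML keys
name: example-workflow

on: push                       # Event: runs the workflow on every push

jobs:
  build:                       # Job: a unit of work in the workflow
    runs-on: ubuntu-latest     # Runner: instance that executes the job
    steps:                     # Steps: sequence of tasks for the job
      - uses: actions/checkout@v3          # Action: pre-defined function
      - run: echo "Hello from GitHub Actions"
```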

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl18vm55osrx2zh80fihe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl18vm55osrx2zh80fihe.png" alt="Github Actions"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👉 I will be doing a short hands-on exercise in which we will use GitHub Actions to automatically build a Docker image and push it to a Docker Hub repository&lt;/p&gt;

&lt;p&gt;From GitHub, we need to create a new repository for testing GitHub Actions&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0beb797jmgu570kubhn4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0beb797jmgu570kubhn4.png" alt="Github"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now from DockerHub&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7gx5clz8viyc8cnt51o5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7gx5clz8viyc8cnt51o5.png" alt="Dockerhub"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we will navigate to "My profile" =&amp;gt; "Edit Profile" =&amp;gt; "Security"&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcnjiygjoxxzs2rrfwxoh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcnjiygjoxxzs2rrfwxoh.png" alt="KDK"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From this section, we will create a new access token with Read, Write and Delete permissions&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flh1448sm6bib0an1w3h9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flh1448sm6bib0an1w3h9.png" alt="Token"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👉 Going back to GitHub&lt;/p&gt;

&lt;p&gt;From our repository, we will head to Settings and open the Actions secrets section&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F49sv7ps1q498v4qrn6bm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F49sv7ps1q498v4qrn6bm.png" alt="Actions"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will create two new secrets: one containing our Docker Hub username and the other containing the access token we created on Docker Hub&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuqn1vni0m7852tw0dxwd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuqn1vni0m7852tw0dxwd.png" alt="Token"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now from our GitHub repository, we will create a new Dockerfile and also add a test HTML file&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2s6soo77wz0xoca0k7ao.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2s6soo77wz0xoca0k7ao.png" alt="File"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will head to "Actions" and setup our custom workflow&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ggltbopcmrdglegkevn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ggltbopcmrdglegkevn.png" alt="Configure"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My workflow YAML file looks like this&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbpq7q8bfzj00prij84g2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbpq7q8bfzj00prij84g2.png" alt="YAML"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👉 This workflow will build an image and push it automatically to Docker Hub using the Dockerfile in the repository's main branch&lt;/p&gt;
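&lt;p&gt;For reference, a workflow along these lines would do the job — the exact file is shown in the screenshot, so the secret names and action versions below are assumptions:&lt;/p&gt;

```yaml
# Sketch of a build-and-push workflow; DOCKER_USERNAME and
# DOCKER_TOKEN are assumed secret names
name: docker-build-push

on:
  push:
    branches: [ main ]

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_TOKEN }}
      - uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          tags: waji97/testrepo:latest
```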

&lt;p&gt;Upon committing this file, we will have a new &lt;code&gt;.github/workflows&lt;/code&gt; folder in our repository&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5po51btxg8y2m4eu4oj4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5po51btxg8y2m4eu4oj4.png" alt="Workflow"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can also check the "Actions" tab&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn6yhp59e15g0l7n5crv2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn6yhp59e15g0l7n5crv2.png" alt="Dockerpush"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To verify that this actually worked, we will go to our Docker Hub repository&lt;/p&gt;

&lt;p&gt;👉 Dockerhub test repository&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj03hd43mm2d1fmsc6feg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj03hd43mm2d1fmsc6feg.png" alt="Tag"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will now test this image from our Linux terminal&lt;/p&gt;

&lt;p&gt;Creating a container using our image from Dockerhub&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker login

docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; Mynginx_1 &lt;span class="nt"&gt;-p&lt;/span&gt; 80:80 waji97/testrepo:latest
latest: Pulling from waji97/testrepo
&lt;span class="nb"&gt;.&lt;/span&gt;
&lt;span class="nb"&gt;.&lt;/span&gt;
Status: Downloaded newer image &lt;span class="k"&gt;for &lt;/span&gt;waji97/testrepo:latest
6bd35189a8261f80ed6763bd75f78cb0172bd21a0abcdb871ffb047c0d9e9729


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Testing from the browser&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fee1jyfrx912patozgo8f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fee1jyfrx912patozgo8f.png" alt="WEB UI"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👉 We will now try changing our &lt;code&gt;index.html&lt;/code&gt; file on GitHub and commit it so that GitHub Actions picks up the change automatically&lt;/p&gt;

&lt;p&gt;Upon committing the changes, we will be able to see a new workflow&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnfru79j3ua5noifsrc3q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnfru79j3ua5noifsrc3q.png" alt="Workflow2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now from the Linux System, we will delete the container and the image. &lt;/p&gt;

&lt;p&gt;After deleting the current image, we will pull the latest image from our Dockerhub repository and create a new container using the image to check our results&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkzwbep5jerdys69g0tle.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkzwbep5jerdys69g0tle.png" alt="Final"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;✨ This is Continuous Integration &lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Managing a Docker registry is an essential aspect of containerization. Docker Hub, Nexus Repository, and GitHub Actions are three popular options for registry management and CI, each with its own advantages and limitations ✔ &lt;/p&gt;

</description>
      <category>linux</category>
      <category>docker</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Docker-Compose Management</title>
      <dc:creator>Waji</dc:creator>
      <pubDate>Tue, 07 Mar 2023 04:52:43 +0000</pubDate>
      <link>https://dev.to/waji97/docker-compose-management-1d84</link>
      <guid>https://dev.to/waji97/docker-compose-management-1d84</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Docker Compose is a tool for defining and running multi-container Docker applications. It allows you to define the services that make up your application and how they interact with each other. Docker Compose uses a &lt;strong&gt;YAML&lt;/strong&gt; file to configure the application services, networks, and volumes.&lt;/p&gt;

&lt;p&gt;Here are some of the key elements we can find in a Docker Compose YAML file:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;version:&lt;/strong&gt; Specifies the version of the Docker Compose file format being used.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;services:&lt;/strong&gt; Defines the various services that make up the application. Each service is given a name and specifies its image, ports, environment variables, and any other necessary configuration options.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;networks:&lt;/strong&gt; Specifies the networks that the application's services will use to communicate with each other.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;volumes:&lt;/strong&gt; Defines the volumes that will be mounted to the containers in the application.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;configs:&lt;/strong&gt; Specifies configuration files that will be injected into the containers as Docker Configs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;secrets:&lt;/strong&gt; Specifies secrets that will be injected into the containers as Docker Secrets.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;deploy:&lt;/strong&gt; Specifies options for deploying the application as a stack, including replicas, update policies, and placement constraints.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 To read more regarding docker compose, you can look at the official documentation over &lt;a href="https://docs.docker.com/compose/" rel="noopener noreferrer"&gt;here&lt;/a&gt; &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo7r2e68nujcivbr4umrf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo7r2e68nujcivbr4umrf.png" alt="compose!"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Some of the important terminologies
&lt;/h3&gt;

&lt;p&gt;✨ Infrastructure as Code (IaC)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Technology for deploying and managing IT system infrastructure in the form of software, rather than by people.&lt;/li&gt;
&lt;li&gt;As the software is written in source code, management quality can be improved.&lt;/li&gt;
&lt;li&gt;Decreased work time for infrastructure managers due to parallel processing of multiple systems.&lt;/li&gt;
&lt;li&gt;Cost savings due to efficient processing possible by reducing work time.&lt;/li&gt;
&lt;li&gt;Lower error rate because only defined operations are performed by the software.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✨ Container Scaling Out (Scale Out)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scale out refers to increasing the number of servers operating the service in response to increasing loads to distribute the load.&lt;/li&gt;
&lt;li&gt;Horizontal scaling of containers: one of the core technologies of microservices, it prevents unnecessary system expansion by horizontally scaling only the containers responsible for a specific service.&lt;/li&gt;
&lt;li&gt;Vertical scaling (Scale Up): The concept of expanding a server vertically by adding resources such as CPU and RAM when they run short.&lt;/li&gt;
&lt;li&gt;This is the traditional scaling method for monolithic systems: since servers cannot simply be added for the entire system, hardware is added to the existing server instead, which has inherent limits.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✨ Service Dependency and Discovery&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Services running in each container of a project often have interdependent relationships (e.g., WEB-DB-Kafka).&lt;/li&gt;
&lt;li&gt;In a cloud environment, each service runs on an instance, and this instance information (IP, port, etc.) can change easily depending on the situation.&lt;/li&gt;
&lt;li&gt;Services with interdependencies are highly sensitive to such changes, so service discovery must be configured to reflect the changed information quickly.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Docker Compose Wordpress
&lt;/h2&gt;

&lt;p&gt;For starters, I will be creating an empty directory&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;mkdir&lt;/span&gt; /Compose
&lt;span class="nb"&gt;cd&lt;/span&gt; /Compose


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Creating a new &lt;code&gt;.yml&lt;/code&gt; file&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

vi docker-compose.yml

&lt;span class="c"&gt;# Compose File Format Version&lt;/span&gt;

version: &lt;span class="s1"&gt;'3.7'&lt;/span&gt;    
services:
  wordpress_db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
    networks:
      - wordpress_net
    volumes:
      - wordpress_data:/var/lib/mysql

  wordpress:
    depends_on:
      - wordpress_db
    image: wordpress:latest
    restart: always
    ports:
      - &lt;span class="s2"&gt;"80:80"&lt;/span&gt;
    environment:
      WORDPRESS_DB_HOST: wordpress_db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
    networks:
      - wordpress_net
    volumes:
      - wordpress_web:/var/www/html

volumes:
  wordpress_data: &lt;span class="o"&gt;{}&lt;/span&gt;
  wordpress_web: &lt;span class="o"&gt;{}&lt;/span&gt;

networks:
  wordpress_net: &lt;span class="o"&gt;{}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Checking the docker-compose version&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker-compose &lt;span class="nt"&gt;-v&lt;/span&gt;
Docker Compose version v2.2.3


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If we use the following,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker-compose config


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;It will show us the resolved configuration from the Compose file in the current directory, which is a handy way to validate the YAML&lt;/p&gt;

&lt;p&gt;Now composing the YAML file&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker-compose up &lt;span class="nt"&gt;-d&lt;/span&gt;

 ⠿ Network compose_wordpress_net     Created                                        0.4s
 ⠿ Volume &lt;span class="s2"&gt;"compose_wordpress_data"&lt;/span&gt;   Created                                        0.0s
 ⠿ Volume &lt;span class="s2"&gt;"compose_wordpress_web"&lt;/span&gt;    Created                                        0.0s
 ⠿ Container compose-wordpress_db-1  Created                                        0.7s
 ⠿ Container compose-wordpress-1     Created                                        0.0s


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;👉 The &lt;code&gt;-d&lt;/code&gt; option declares the container to run in the background&lt;/p&gt;

&lt;p&gt;After the compose build is done, we can check that the Docker volumes and network were created as defined in the Compose file&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker volume &lt;span class="nb"&gt;ls
local               &lt;/span&gt;compose_wordpress_data
&lt;span class="nb"&gt;local               &lt;/span&gt;compose_wordpress_web

docker network &lt;span class="nb"&gt;ls
&lt;/span&gt;755a8af21193        compose_wordpress_net   bridge              &lt;span class="nb"&gt;local&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We can also check the Compose status&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker-compose &lt;span class="nb"&gt;ls
&lt;/span&gt;docker-compose ps


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now if we open the website using the IP address of our host system&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3t8kwbs9v1su04c5bhoy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3t8kwbs9v1su04c5bhoy.png" alt="wordpress"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 When deleting composed containers, we can use &lt;code&gt;docker-compose down&lt;/code&gt;; however, this won't delete the Docker volumes. We need to include the &lt;code&gt;-v&lt;/code&gt; option to delete the volumes as well&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Docker Compose building Nginx
&lt;/h2&gt;

&lt;p&gt;I will now demonstrate using Docker Compose to build an image from a Dockerfile&lt;/p&gt;

&lt;p&gt;Creating a new directory&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;mkdir&lt;/span&gt; /Compose/build
&lt;span class="nb"&gt;cd&lt;/span&gt; /Compose/build


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Creating an &lt;code&gt;index.html&lt;/code&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Hello My Nginx"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; index.html


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Creating the Dockerfile&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

vi Dockerfile

FROM nginx:latest
LABEL maintainer &lt;span class="s2"&gt;"Author &amp;lt;Author@localhost.com&amp;gt;“
ADD index.html /usr/share/nginx/html


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Creating the docker-compose YAML file&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

vi docker-compose.yml

version: &lt;span class="s1"&gt;'3.7'&lt;/span&gt;
services:
  web:
    image: myweb/nginx:v1
    build: &lt;span class="nb"&gt;.&lt;/span&gt;
    restart: always


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now using docker compose to build our Dockerfile&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker-compose &lt;span class="nt"&gt;-p&lt;/span&gt; myweb up &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--build&lt;/span&gt;

&lt;span class="o"&gt;[&lt;/span&gt;+] Running 2/2
 ⠿ Network myweb_default  Created                                                   0.1s
 ⠿ Container myweb-web-1  Started                                                   0.2s


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;👉 Here the "-p" option specifies a project name, and the "--build" option skips the image search and pull, performing only the build operation. If the "--build" option is omitted, the pull operation is automatically performed, but using it is recommended during build operations to avoid errors. &lt;/p&gt;

&lt;p&gt;We can confirm the process&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker-compose &lt;span class="nt"&gt;-p&lt;/span&gt; myweb ps &lt;span class="nt"&gt;-a&lt;/span&gt;
NAME                COMMAND                  SERVICE             STATUS              PORTS
myweb-web-1         &lt;span class="s2"&gt;"/docker-entrypoint.…"&lt;/span&gt;   web                 running             80/tcp


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Another thing we can try is,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker-compose &lt;span class="nt"&gt;-p&lt;/span&gt; myweb up &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--scale&lt;/span&gt; &lt;span class="nv"&gt;web&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3

&lt;span class="o"&gt;[&lt;/span&gt;+] Running 3/3
 ⠿ Container myweb-web-3  Started                                                   0.9s
 ⠿ Container myweb-web-2  Started                                                   0.9s
 ⠿ Container myweb-web-1  Started                                                   0.9s


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Checking the compose process again&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker-compose &lt;span class="nt"&gt;-p&lt;/span&gt; myweb ps

NAME                COMMAND                  SERVICE             STATUS              PORTS
myweb-web-1         &lt;span class="s2"&gt;"/docker-entrypoint.…"&lt;/span&gt;   web                 running             80/tcp
myweb-web-2         &lt;span class="s2"&gt;"/docker-entrypoint.…"&lt;/span&gt;   web                 running             80/tcp
myweb-web-3         &lt;span class="s2"&gt;"/docker-entrypoint.…"&lt;/span&gt;   web                 running             80/tcp


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;👉 With the "--scale" option, it's possible to horizontally scale a specific service; explicitly declaring a scale of 1 later shrinks it back to a single container. However, this option cannot be combined with a fixed host port mapping, since multiple containers cannot all be exposed on the same host port. To serve multiple instances of the same web container behind one address, a proxy server must be used&lt;/p&gt;
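&lt;p&gt;One common workaround, sketched below, is to publish a host port range instead of a single port, so each scaled replica can claim its own host port (the range 8081-8083 here is an arbitrary choice, not from the original setup):&lt;/p&gt;

```yaml
# Hypothetical compose fragment: with a port range, "--scale web=3" can
# assign each replica one host port (8081, 8082, 8083) without conflicts
services:
  web:
    image: nginx:latest
    ports:
      - "8081-8083:80"
```

&lt;p&gt;A reverse proxy is still needed to present the replicas behind a single address, which is exactly what the HAProxy setup in the next section provides.&lt;/p&gt;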




&lt;h2&gt;
  
  
  Docker Compose HAproxy with Nginx LB
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy03m0j35a18nxl3fqomo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy03m0j35a18nxl3fqomo.png" alt="Proxy"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Starting with an empty directory&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;mkdir&lt;/span&gt; /Compose/prod
&lt;span class="nb"&gt;cd&lt;/span&gt; /Compose/prod


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Creating an &lt;code&gt;index.html&lt;/code&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Hello My Nginx"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; index.html


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Creating a Dockerfile for Nginx and HAproxy&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

vi Dockerfile_nginx

FROM nginx:latest
LABEL maintainer &lt;span class="s2"&gt;"Author &amp;lt;Author@localhost.com&amp;gt;"&lt;/span&gt;
ADD index.html /usr/share/nginx/html
WORKDIR /usr/share/nginx/html

vi Dockerfile_haproxy

FROM haproxy:2.3
LABEL maintainer &lt;span class="s2"&gt;"Author &amp;lt;Author@localhost.com&amp;gt;"&lt;/span&gt;
ADD haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Configuring the &lt;code&gt;haproxy.cfg&lt;/code&gt; file&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

vi haproxy.cfg
global
        log /dev/log  local0
        log /dev/log  local1 notice
        &lt;span class="nb"&gt;chroot&lt;/span&gt; /var/lib/haproxy
        stats &lt;span class="nb"&gt;timeout &lt;/span&gt;30s
        user haproxy
        group haproxy
        daemon

defaults
        log global
        mode http
        option httplog
        option dontlognull
        option dontlog-normal
        option http-server-close
        maxconn 3000
        &lt;span class="nb"&gt;timeout &lt;/span&gt;connect 10s
        &lt;span class="nb"&gt;timeout &lt;/span&gt;http-request 10s
        &lt;span class="nb"&gt;timeout &lt;/span&gt;http-keep-alive 10s
        &lt;span class="nb"&gt;timeout &lt;/span&gt;client 1m
        &lt;span class="nb"&gt;timeout &lt;/span&gt;server 1m
        &lt;span class="nb"&gt;timeout &lt;/span&gt;queue 1m

listen stats
        &lt;span class="nb"&gt;bind&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt;:9000
        stats &lt;span class="nb"&gt;enable
        &lt;/span&gt;stats realm Haproxy Stats Page
        stats uri /
        stats auth admin:haproxy1

frontend proxy
        &lt;span class="nb"&gt;bind&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt;:80
        default_backend WEB_SRV_list

backend WEB_SRV_list
        balance roundrobin
        option httpchk HEAD /
        server prod-web-1 prod-web-1:80 check inter 3000 fall 5 rise 3
        server prod-web-2 prod-web-2:80 check inter 3000 fall 5 rise 3
        server prod-web-3 prod-web-3:80 check inter 3000 fall 5 rise 3


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;👉 In the backend section of the configuration file, each web server's status is checked with an HTTP HEAD request every 3 seconds. If the check fails 5 times in a row, the server is removed from the load-balancing pool; once the server becomes available again and passes 3 consecutive checks, it is added back.&lt;/p&gt;
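&lt;p&gt;As a quick sanity check on those numbers, the worst-case detection and recovery windows can be computed directly (a minimal sketch mirroring the &lt;code&gt;check inter 3000 fall 5 rise 3&lt;/code&gt; settings):&lt;/p&gt;

```shell
# Worst-case timings implied by "check inter 3000 fall 5 rise 3"
inter_ms=3000   # interval between health checks, in milliseconds
fall=5          # consecutive failures before a server is removed
rise=3          # consecutive successes before it is re-added
echo "removed after ~$(( inter_ms * fall / 1000 ))s of failures"
echo "re-added after ~$(( inter_ms * rise / 1000 ))s of recovery"
```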

&lt;p&gt;Finally creating the docker compose YAML file&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

vi docker-compose.yml

version: &lt;span class="s1"&gt;'3.7'&lt;/span&gt;
services:
  proxy:
    depends_on:
      - web
    image: prod/haproxy:v1
    build:
      context: ./
      dockerfile: ./Dockerfile_haproxy

    restart: always
    ports:
      - &lt;span class="s2"&gt;"80:80"&lt;/span&gt;
      - &lt;span class="s2"&gt;"9000:9000"&lt;/span&gt;
    networks:
      - myweb_net

  web:
    image: prod/nginx:v1
    build:
      context: ./
      dockerfile: ./Dockerfile_nginx
    restart: always
    deploy:
      mode: replicated
      replicas: 3
    networks:
      - myweb_net

networks:
  myweb_net: &lt;span class="o"&gt;{}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Using Docker Compose to build the images from our Dockerfiles and start the stack&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker-compose up &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--build&lt;/span&gt;

Successfully built 5deb0e3b5821
Successfully tagged prod/haproxy:v1
&lt;span class="o"&gt;[&lt;/span&gt;+] Running 5/5
 :: Network prod_myweb_net  Created                                                                  0.2s
 :: Container prod-web-3    Started                                                                  0.7s
 :: Container prod-web-1    Started                                                                  0.7s
 :: Container prod-web-2    Started                                                                  0.6s
 :: Container prod-proxy-1  Started                                                                  1.5s


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once the job is done, we can check the processes&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker-compose ps
NAME                COMMAND                  SERVICE             STATUS              PORTS
prod-proxy-1        &lt;span class="s2"&gt;"docker-entrypoint.s…"&lt;/span&gt;   proxy               running             0.0.0.0:80-&amp;gt;80/tcp, 0.0.0.0:9000-&amp;gt;9000/tcp
prod-web-1          &lt;span class="s2"&gt;"/docker-entrypoint.…"&lt;/span&gt;   web                 running             80/tcp
prod-web-2          &lt;span class="s2"&gt;"/docker-entrypoint.…"&lt;/span&gt;   web                 running             80/tcp
prod-web-3          &lt;span class="s2"&gt;"/docker-entrypoint.…"&lt;/span&gt;   web                 running             80/tcp


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To test whether load balancing is working across these 3 web servers, we can add a distinct entry to each server's index page&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; prod-web-1 /bin/bash
root@c6f1c535ce6c:/usr/share/nginx/html# &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"prod-web-1 Server Main Page"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; index.html
root@c6f1c535ce6c:/usr/share/nginx/html# &lt;span class="nb"&gt;cat &lt;/span&gt;index.html
Hello My Nginx
prod-web-1 Server Main Page

docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; prod-web-2 /bin/bash
root@e86233007a76:/usr/share/nginx/html# &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"prod-web-2 Server Main Page"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; index.html
root@e86233007a76:/usr/share/nginx/html# &lt;span class="nb"&gt;cat &lt;/span&gt;index.html
Hello My Nginx
prod-web-2 Server Main Page

docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; prod-web-3 /bin/bash
root@11aa89e08619:/usr/share/nginx/html# &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"prod-web-3 Server Main Page"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; index.html
root@11aa89e08619:/usr/share/nginx/html# &lt;span class="nb"&gt;cat &lt;/span&gt;index.html
Hello My Nginx
prod-web-3 Server Main Page


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Testing our setup&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgasqzxzyfplkwp5izw8j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgasqzxzyfplkwp5izw8j.png" alt="1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqkew4c9ecq7hxsaoc5qb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqkew4c9ecq7hxsaoc5qb.png" alt="2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe17w1my193n72tjd15sk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe17w1my193n72tjd15sk.png" alt="3"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👉 We can confirm that round robin load balancing is working&lt;/p&gt;
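&lt;p&gt;The rotation we just observed can be illustrated without Docker at all: round robin is simply a modulo walk over the backend list. A toy sketch (the server names match the compose project above):&lt;/p&gt;

```shell
# Toy round-robin dispatcher: request i is sent to backend (i mod 3)
pick_server() {
  case $(( $1 % 3 )) in
    0) echo prod-web-1 ;;
    1) echo prod-web-2 ;;
    2) echo prod-web-3 ;;
  esac
}
for i in 0 1 2 3 4 5; do
  echo "request $i -> $(pick_server "$i")"
done
```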




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In conclusion, Docker Compose is a powerful tool that simplifies the process of managing and deploying complex multi-container applications. Through hands-on experience building an Nginx server with HAProxy and a WordPress application, we have seen how Docker Compose streamlines the management of container orchestration, network configuration, and service scaling. &lt;/p&gt;

</description>
      <category>linux</category>
      <category>docker</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Docker Logs &amp; Monitoring</title>
      <dc:creator>Waji</dc:creator>
      <pubDate>Mon, 06 Mar 2023 05:46:55 +0000</pubDate>
      <link>https://dev.to/waji97/docker-log-monitoring-24gn</link>
      <guid>https://dev.to/waji97/docker-log-monitoring-24gn</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Effective log and monitoring management is essential for ensuring the health and performance of Docker containers. By leveraging Docker's built-in log and monitoring tools, as well as third-party tools, we can gain insights into our containers' behavior and troubleshoot any issues that arise&lt;/p&gt;

&lt;p&gt;I will be utilizing the &lt;code&gt;docker logs&lt;/code&gt; and &lt;code&gt;docker events&lt;/code&gt; commands to manage logs. Furthermore, I will use cAdvisor to monitor container resource usage&lt;/p&gt;




&lt;h2&gt;
  
  
  Utilizing Docker Logs
&lt;/h2&gt;

&lt;p&gt;Starting a mysql container&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; log_con1 mysql:5.7


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Checking the status&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker ps &lt;span class="nt"&gt;-a&lt;/span&gt;
8f2640092ab8        mysql:5.7           &lt;span class="s2"&gt;"docker-entrypoint.s…"&lt;/span&gt;   17 minutes ago      Exited &lt;span class="o"&gt;(&lt;/span&gt;1&lt;span class="o"&gt;)&lt;/span&gt; 10 seconds ago                       log_con1


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To check logs for this container&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker logs log_con1
&lt;span class="nb"&gt;.&lt;/span&gt;
&lt;span class="nb"&gt;.&lt;/span&gt;
2023-03-06 05:04:29+00:00 &lt;span class="o"&gt;[&lt;/span&gt;ERROR] &lt;span class="o"&gt;[&lt;/span&gt;Entrypoint]: Database is uninitialized and password option is not specified
    You need to specify one of the following as an environment variable:
    - MYSQL_ROOT_PASSWORD
    - MYSQL_ALLOW_EMPTY_PASSWORD
    - MYSQL_RANDOM_ROOT_PASSWORD


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, to test live logs, we will start another container&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker run &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; log_con2 ubuntu:bionic


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Opening another terminal and entering the following&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker logs &lt;span class="nt"&gt;-f&lt;/span&gt; log_con2



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;👉 It will stand by, waiting for any activity in the ubuntu container&lt;/p&gt;

&lt;p&gt;From the container&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Log Test"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We will be able to see the logs from the host system on the other terminal&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

root@6246287978ac:/# &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Log Test"&lt;/span&gt;
Log Test


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We can inspect the container to confirm how logs are saved&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker container inspect log_con2

&lt;span class="s2"&gt;"LogConfig"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
 &lt;span class="s2"&gt;"Type"&lt;/span&gt;: &lt;span class="s2"&gt;"json-file"&lt;/span&gt;,
 &lt;span class="s2"&gt;"Config"&lt;/span&gt;: &lt;span class="o"&gt;{}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;,


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;👉 We can see that log files are saved in JSON format on the host system&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 The log files are present under &lt;code&gt;/var/lib/docker/containers/&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;
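&lt;p&gt;The path follows a fixed pattern built from the container's full 64-character ID (sketched below with a made-up ID; on a real host, &lt;code&gt;docker inspect --format '{{.LogPath}}' &amp;lt;container&amp;gt;&lt;/code&gt; prints the actual path):&lt;/p&gt;

```shell
# Hypothetical 64-character container ID (real ones come from `docker ps --no-trunc`)
cid=6246287978ac0000000000000000000000000000000000000000000000000000
# Each container's stdout/stderr is appended to one JSON log file:
log_path="/var/lib/docker/containers/${cid}/${cid}-json.log"
echo "$log_path"
```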

&lt;p&gt;We can also edit the Docker daemon configuration file to rotate logs once they reach a certain size, keeping only a fixed number of files&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

vi /etc/docker/daemon.json

&lt;span class="o"&gt;{&lt;/span&gt;
 &lt;span class="s2"&gt;"log-driver"&lt;/span&gt;: &lt;span class="s2"&gt;"json-file"&lt;/span&gt;,
 &lt;span class="s2"&gt;"log-opts"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
 &lt;span class="s2"&gt;"max-size"&lt;/span&gt;: &lt;span class="s2"&gt;"10k"&lt;/span&gt;,
 &lt;span class="s2"&gt;"max-file"&lt;/span&gt;: &lt;span class="s2"&gt;"5"&lt;/span&gt;
 &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 If there is no &lt;code&gt;daemon.json&lt;/code&gt; file, we can create one&lt;/p&gt;
&lt;/blockquote&gt;
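&lt;p&gt;Because a malformed &lt;code&gt;daemon.json&lt;/code&gt; will prevent the Docker daemon from starting, it is worth validating the file before restarting. A minimal sketch (assumes &lt;code&gt;python3&lt;/code&gt; is available; &lt;code&gt;/tmp/daemon.json&lt;/code&gt; stands in for &lt;code&gt;/etc/docker/daemon.json&lt;/code&gt;):&lt;/p&gt;

```shell
# Write the snippet to a scratch file and check that it parses as JSON
cat > /tmp/daemon.json <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10k",
    "max-file": "5"
  }
}
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json: valid JSON"
# With these options, a container keeps at most 5 files x 10 kB = ~50 kB of logs
echo "max log footprint per container: $(( 5 * 10 )) kB"
```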

&lt;p&gt;Restarting the daemon and docker service&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

systemctl daemon-reload
systemctl restart docker


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Starting another container&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; log_con3 ubuntu:bionic
docker container inspect log_con3

&lt;span class="s2"&gt;"LogConfig"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
 &lt;span class="s2"&gt;"Type"&lt;/span&gt;: &lt;span class="s2"&gt;"json-file"&lt;/span&gt;,
 &lt;span class="s2"&gt;"Config"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
 &lt;span class="s2"&gt;"max-file"&lt;/span&gt;: &lt;span class="s2"&gt;"5"&lt;/span&gt;,
 &lt;span class="s2"&gt;"max-size"&lt;/span&gt;: &lt;span class="s2"&gt;"10k"&lt;/span&gt;
 &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;,


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;👉 We can confirm that the container started with the log rotate configurations&lt;/p&gt;




&lt;h2&gt;
  
  
  Utilizing Docker Events
&lt;/h2&gt;

&lt;p&gt;With another terminal connected, we will enter&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker events



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;👉 It stays on standby, streaming events live&lt;/p&gt;

&lt;p&gt;From the other terminal,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; events_con nginx:latest
docker &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; events_con


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now going back to the first terminal to check logs,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker events

2023-03-06T14:35:09.394975329+09:00 container create 6ec4910cc606a135ca6fe1d10b8f9fdaf88c5dc008e53c00f28c7d1e8325f51b &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx:latest, &lt;span class="nv"&gt;maintainer&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;NGINX Docker Maintainers &amp;lt;docker-maint@nginx.com&amp;gt;, &lt;span class="nv"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;events_con&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;.&lt;/span&gt;
&lt;span class="nb"&gt;.&lt;/span&gt;
&lt;span class="nb"&gt;.&lt;/span&gt;
2023-03-06T14:35:15.131094800+09:00 container destroy 6ec4910cc606a135ca6fe1d10b8f9fdaf88c5dc008e53c00f28c7d1e8325f51b &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx:latest, &lt;span class="nv"&gt;maintainer&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;NGINX Docker Maintainers &amp;lt;docker-maint@nginx.com&amp;gt;, &lt;span class="nv"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;events_con&lt;span class="o"&gt;)&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;h2&gt;
  
  
  Utilizing Container Advisor
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;cAdvisor (Container Advisor)&lt;/strong&gt; is an open-source container monitoring tool that is integrated with Docker. It is designed to provide detailed information about the resource usage and performance of running containers, including CPU, memory, disk, and network utilization.&lt;/p&gt;

&lt;p&gt;👉 The best part about this tool is that we can examine container resource metrics via a web interface &lt;/p&gt;

&lt;p&gt;Starting a cAdvisor container&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker run &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--volume&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/:/rootfs:ro &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--volume&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/var/run:/var/run:rw &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--volume&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/sys/fs/cgroup:/sys/fs/cgroup:ro &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--volume&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/dev/disk/:/dev/disk:ro &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--privileged&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--publish&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;8080:8080 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--detach&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cAdvisor google/cadvisor:latest


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now if we open our browser and go to port 8080 on the address of our host Linux system&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fev36a652k7baatk1votl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fev36a652k7baatk1votl.png" alt="cAdvisor"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F89bi0e29lr3bwv7xzhxy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F89bi0e29lr3bwv7xzhxy.png" alt="CPU"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I will use a stress container to generate CPU load and watch it via cAdvisor&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; progrium/stress &lt;span class="nt"&gt;--cpu&lt;/span&gt; 2 &lt;span class="nt"&gt;--vm&lt;/span&gt; 2 &lt;span class="nt"&gt;--vm-bytes&lt;/span&gt; 128M &lt;span class="nt"&gt;--timeout&lt;/span&gt; 30s


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmshrb017loo1nsfu6xku.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmshrb017loo1nsfu6xku.png" alt="CPU Stress"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In conclusion, managing Docker logs is essential for monitoring the health and performance of Docker containers. I utilized several docker commands and an open-source tool to keep tabs with our container logs ✔ &lt;/p&gt;

</description>
      <category>linux</category>
      <category>docker</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Docker Image Management</title>
      <dc:creator>Waji</dc:creator>
      <pubDate>Mon, 06 Mar 2023 04:57:41 +0000</pubDate>
      <link>https://dev.to/waji97/docker-image-management-3558</link>
      <guid>https://dev.to/waji97/docker-image-management-3558</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Docker image management involves creating, managing, and distributing Docker images. Docker images are the building blocks of Docker containers, which are lightweight and portable virtualized environments that can run anywhere. Docker uses a layered architecture to build and manage images: each layer in an image represents a specific set of changes or additions on top of the previous layer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkkoopxexhloipwbvq6d6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkkoopxexhloipwbvq6d6.png" alt="Docker Image"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Docker Image Archive
&lt;/h2&gt;

&lt;p&gt;Docker provides two different methods to save and load Docker images: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;docker save/docker load&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6w7ecghr2nresu35o0bo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6w7ecghr2nresu35o0bo.png" alt="Save/Load"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;docker export/docker import&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdqxz8rvh6050hbl0c2c4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdqxz8rvh6050hbl0c2c4.png" alt="Export/Import"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👉 &lt;code&gt;docker save&lt;/code&gt; and &lt;code&gt;docker load&lt;/code&gt; are used to save and load entire Docker images along with all of their layers and metadata. &lt;/p&gt;

&lt;p&gt;👉 &lt;code&gt;docker export&lt;/code&gt; and &lt;code&gt;docker import&lt;/code&gt; are used to export and import a container's filesystem as a tar file. The result is flattened into a single layer, so image metadata and layer history are not preserved.&lt;/p&gt;




&lt;h2&gt;
  
  
  Short hands on
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Save/Load
&lt;/h3&gt;

&lt;p&gt;Creating an empty directory&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;mkdir&lt;/span&gt; /image_backup


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Checking current images detail&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
nginx               latest              904b8cb13b93        4 days ago          142MB
ubuntu              bionic              b89fba62bc15        4 days ago          63.1MB
mysql               latest              4f06b49211c0        10 days ago         530MB
mysql               5.7                 be16cf2d832a        4 weeks ago         455MB


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To save a backup for a specific image&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker save &lt;span class="nt"&gt;-o&lt;/span&gt; /image_backup/ubuntu.tar ubuntu:bionic


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;So now if we delete the image&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker rmi ubuntu:bionic

&lt;span class="c"&gt;# Checking for the image&lt;/span&gt;
docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
nginx               latest              904b8cb13b93        4 days ago          142MB
mysql               latest              4f06b49211c0        10 days ago         530MB
mysql               5.7                 be16cf2d832a        4 weeks ago         455MB


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We can load the &lt;code&gt;ubuntu:bionic&lt;/code&gt; image from the backup directory&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker load &lt;span class="nt"&gt;-i&lt;/span&gt; /image_backup/ubuntu.tar


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Checking the results&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
nginx               latest              904b8cb13b93        4 days ago          142MB
ubuntu              bionic              b89fba62bc15        4 days ago          63.1MB
mysql               latest              4f06b49211c0        10 days ago         530MB
mysql               5.7                 be16cf2d832a        4 weeks ago         455MB


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Export/Import
&lt;/h3&gt;

&lt;p&gt;Creating a new container using the ubuntu image&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; image_con ubuntu:bionic


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Exporting this container&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker &lt;span class="nb"&gt;export &lt;/span&gt;image_con &lt;span class="nt"&gt;-o&lt;/span&gt; /image_backup/image_con.tar

&lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; /image_backup/
total 127984
&lt;span class="nt"&gt;-rw-------&lt;/span&gt; 1 root root 65521664 Mar  6 09:34 image_con.tar
&lt;span class="nt"&gt;-rw-------&lt;/span&gt; 1 root root 65529856 Mar  6 09:30 ubuntu.tar


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now we can import this image with a different tag&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker import /image_backup/image_con.tar myubuntu:v1


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Checking the results&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker images

REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
myubuntu            v1                  3e3a5217f941        3 seconds ago       63.1MB
nginx               latest              904b8cb13b93        4 days ago          142MB
ubuntu              bionic              b89fba62bc15        4 days ago          63.1MB


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
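
&lt;p&gt;👉 Worth noting: &lt;code&gt;docker export&lt;/code&gt; captures only the container's filesystem, so the imported image is flattened into a single layer and loses the original image's build history and metadata (such as &lt;code&gt;CMD&lt;/code&gt;). A quick way to see this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker history myubuntu:v1   # shows a single filesystem layer and no build history

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;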




&lt;h2&gt;
  
  
  Docker Image Commit &amp;amp; Build
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;docker commit&lt;/code&gt; and &lt;code&gt;docker build&lt;/code&gt; are both Docker commands used to create Docker images&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0n1vbfh1888uerz2gffo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0n1vbfh1888uerz2gffo.png" alt="Commit"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker commit&lt;/code&gt; command is used to create a new Docker image from an existing container. This command is useful when you have made changes to a running container, and you want to save those changes as a new image.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7i944a3k7unp3kepd98c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7i944a3k7unp3kepd98c.png" alt="Build"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker build&lt;/code&gt; command is used to create a new Docker image from a Dockerfile. A Dockerfile is a script that contains instructions on how to build a Docker image&lt;/p&gt;

&lt;p&gt;Some of the commonly used Dockerfile instructions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;FROM:&lt;/strong&gt; This instruction is used to specify the base image that the new image will be built on top of&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LABEL:&lt;/strong&gt; This instruction is used to add metadata to the image&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RUN:&lt;/strong&gt; This instruction is used to execute commands within the container&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ADD:&lt;/strong&gt; This instruction is used to copy files from the host system into the container&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;WORKDIR:&lt;/strong&gt; This instruction is used to set the working directory for any subsequent commands in the Dockerfile&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EXPOSE:&lt;/strong&gt; This instruction is used to specify which port(s) the container will listen on at runtime&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CMD:&lt;/strong&gt; This instruction is used to specify the default command to be executed when the container starts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ENTRYPOINT:&lt;/strong&gt; This instruction is used to specify the command that always runs when the container starts; unlike CMD, arguments passed to &lt;code&gt;docker run&lt;/code&gt; are appended to it rather than replacing it (though it can still be explicitly overridden with the &lt;code&gt;--entrypoint&lt;/code&gt; flag)&lt;/li&gt;
&lt;/ul&gt;
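
&lt;p&gt;The CMD/ENTRYPOINT difference is easiest to see with a small sketch (the image name &lt;code&gt;entry_test&lt;/code&gt; here is hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

FROM ubuntu:bionic
ENTRYPOINT ["echo", "Hello"]
CMD ["World"]

# docker run entry_test         -&amp;gt; prints "Hello World"
# docker run entry_test Docker  -&amp;gt; prints "Hello Docker" (the argument replaces CMD, not ENTRYPOINT)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;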




&lt;h2&gt;
  
  
  Short Hands-on
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Docker Commit
&lt;/h3&gt;

&lt;p&gt;To check how many layers there are in the ubuntu image&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker inspect ubuntu:bionic

&lt;span class="s2"&gt;"Layers"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
                &lt;span class="s2"&gt;"sha256:52c5ca3e9f3bf4c13613fb3269982734b189e1e09563b65b670fc8be0e223e03"&lt;/span&gt;
            &lt;span class="o"&gt;]&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Creating the container using the &lt;code&gt;-it&lt;/code&gt; option&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker run &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; commit_con ubuntu:bionic


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;From inside the container,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; TestFile
TestFile

&lt;span class="nb"&gt;ls
&lt;/span&gt;TestFile  boot  etc   lib    media  opt   root  sbin  sys  usr
bin       dev   home  lib64  mnt    proc  run   srv   tmp  var


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now from the host machine&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker commit &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="s2"&gt;"MyName"&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"Image Comment"&lt;/span&gt; commit_con myubuntu:v1.0.0


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We can confirm this commit using&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
myubuntu            v1.0.0              2e214faf96f7        32 seconds ago      63.1MB


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Inspecting our v1.0.0 ubuntu image&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

            &lt;span class="s2"&gt;"Layers"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
                &lt;span class="s2"&gt;"sha256:52c5ca3e9f3bf4c13613fb3269982734b189e1e09563b65b670fc8be0e223e03"&lt;/span&gt;,
                &lt;span class="s2"&gt;"sha256:cea6ad35f448cdba9f2bb5c32b245c497e90cafa36c8f856706c8257bb666e34"&lt;/span&gt;
            &lt;span class="o"&gt;]&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;👉 We can see that a new image was created with an extra layer when we committed the changes for our ubuntu container &lt;/p&gt;

&lt;h3&gt;
  
  
  Docker Build
&lt;/h3&gt;

&lt;p&gt;Creating an empty directory for our dockerfile&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;mkdir&lt;/span&gt; /DockerFile_root
&lt;span class="nb"&gt;cd&lt;/span&gt; /DockerFile_root


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Creating an empty dockerfile&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

vi Dockerfile &lt;span class="c"&gt;# Using the default name lets us omit the -f option at build time&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Adding the following inside this file&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

FROM ubuntu:bionic  
LABEL maintainer &lt;span class="s2"&gt;"Author &amp;lt;Author@localhost.com&amp;gt;"&lt;/span&gt; &lt;span class="c"&gt;# (Key : Value) Format&lt;/span&gt;
RUN apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get &lt;span class="nb"&gt;install &lt;/span&gt;apache2 &lt;span class="nt"&gt;-y&lt;/span&gt; 
ADD index.html /var/www/html
WORKDIR /var/www/html &lt;span class="c"&gt;# works just like 'cd' &lt;/span&gt;
RUN &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"/bin/bash"&lt;/span&gt;, &lt;span class="s2"&gt;"-c"&lt;/span&gt;, &lt;span class="s2"&gt;"echo RunTest &amp;gt; Test.html"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
EXPOSE 80
CMD &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"apachectl"&lt;/span&gt;, &lt;span class="s2"&gt;"-DFOREGROUND"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="c"&gt;# Either have to use 'CMD' or 'ENTRYPOINT' to run the service &lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Creating an &lt;code&gt;index.html&lt;/code&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;echo &lt;/span&gt;Test &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; index.html


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Our working directory, &lt;code&gt;/DockerFile_root&lt;/code&gt;, is also known as the &lt;strong&gt;build context directory&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is where &lt;code&gt;docker build&lt;/code&gt; should be run&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker build &lt;span class="nt"&gt;-t&lt;/span&gt; my:v1.0.0 ./
Successfully built e5581d044d20
Successfully tagged my:v1.0.0


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj3982hetel2t8nx63pqt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj3982hetel2t8nx63pqt.png" alt="Build"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Checking our new image&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
my                  v1.0.0              e5581d044d20        28 seconds ago      204MB


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Creating and starting the container &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; apache2_con &lt;span class="nt"&gt;-p&lt;/span&gt; 80:80 my:v1.0.0
891473bcd83ed3f94830e5595aa698f3e40652183183a205bc3aa60fe8eef843


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If we check inside the container&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; apache2_con /bin/bash
root@891473bcd83e:/var/www/html# &lt;span class="nb"&gt;cat&lt;/span&gt; ./Test.html 
RunTest


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Returning to the host system and checking port 80&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

curl localhost:80
Test

curl localhost:80/Test.html
RunTest


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;h2&gt;
  
  
  Docker Build Cache &amp;amp; Image Sizes
&lt;/h2&gt;

&lt;p&gt;Finally, I want to discuss Docker build sizes and how to manage them&lt;/p&gt;

&lt;p&gt;Creating a new Dockerfile&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

vi Dockerfile_L

FROM ubuntu:bionic
LABEL maintainer &lt;span class="s2"&gt;"Author &amp;lt;Author@localhost.com&amp;gt;"&lt;/span&gt;
RUN &lt;span class="nb"&gt;mkdir&lt;/span&gt; /dummy
RUN fallocate &lt;span class="nt"&gt;-l&lt;/span&gt; 100m /dummy/A
RUN &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; /dummy/A


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To build this image&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker build &lt;span class="nt"&gt;-t&lt;/span&gt; dummy:v1.0.0 ./ &lt;span class="nt"&gt;-f&lt;/span&gt; Dockerfile_L


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;👉 The &lt;code&gt;-f&lt;/code&gt; option in the docker build command specifies the Dockerfile name to use during the build process, when the Dockerfile is not named "Dockerfile"&lt;/p&gt;

&lt;p&gt;This will build the image as follows&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
dummy               v1.0.0              4ef94c4001fe        2 minutes ago       168MB


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8aawwlki7lv4m193dfn8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8aawwlki7lv4m193dfn8.png" alt="3 Run"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👉 As we can see, the image is 168 MB, while ubuntu:bionic is only about 63 MB. Each RUN instruction creates a new layer, so even though the last step deleted the 100 MB file, it still takes up space in the layer where it was created&lt;/p&gt;
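
&lt;p&gt;We can make the per-layer cost visible with &lt;code&gt;docker history&lt;/code&gt;; each RUN instruction shows up as its own layer, and the fallocate step should account for roughly 100 MB (sizes are approximate):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker history dummy:v1.0.0   # the fallocate RUN appears as a ~105MB layer; the rm RUN adds an almost empty layer on top

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;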

&lt;p&gt;To decrease the size we can try to edit our Dockerfile&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

FROM ubuntu:bionic
LABEL maintainer &lt;span class="s2"&gt;"Author &amp;lt;Author@localhost.com&amp;gt;"&lt;/span&gt;
RUN &lt;span class="nb"&gt;mkdir&lt;/span&gt; /dummy &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
fallocate &lt;span class="nt"&gt;-l&lt;/span&gt; 100m /dummy/A &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; /dummy/A


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now if we check the image size after building this new image &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker build &lt;span class="nt"&gt;-t&lt;/span&gt; dummy:v1.0.1 ./ &lt;span class="nt"&gt;-f&lt;/span&gt; Dockerfile_L

docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
dummy               v1.0.1              6ced40e23a77        11 seconds ago      63.1MB
dummy               v1.0.0              4ef94c4001fe        8 minutes ago       168MB


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffqerzp3qk4tax0bespsg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffqerzp3qk4tax0bespsg.png" alt="1 RUN"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;✨ Another important thing is &lt;strong&gt;Docker Cache&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 Docker caches build layers to speed up subsequent builds of a Dockerfile. However, when a build step fetches external content (for example, cloning source code from GitHub), upstream changes may not be picked up because the old layer is reused from the cache. To force a rebuild of all layers and ensure changes are properly incorporated, use the &lt;code&gt;--no-cache&lt;/code&gt; option with the docker build command&lt;/p&gt;
&lt;/blockquote&gt;
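
&lt;p&gt;For example, to rebuild the dummy image while ignoring every cached layer:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker build --no-cache -t dummy:v1.0.1 ./ -f Dockerfile_L

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;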

&lt;p&gt;We can try to compile a &lt;code&gt;C&lt;/code&gt; code as well&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

vi ./app.c
&lt;span class="c"&gt;#include &amp;lt;stdio.h&amp;gt;&lt;/span&gt;
void main&lt;span class="o"&gt;()&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt;
 &lt;span class="nb"&gt;printf&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"Hello World&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To compile our &lt;code&gt;C&lt;/code&gt; code, we need the &lt;code&gt;gcc&lt;/code&gt; compiler&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

vi Dockerfile_M

FROM gcc:latest
LABEL maintainer &lt;span class="s2"&gt;"Author &amp;lt;Author@localhost.com&amp;gt;"&lt;/span&gt;
ADD app.c /root
WORKDIR /root
RUN gcc &lt;span class="nt"&gt;-o&lt;/span&gt; ./app ./app.c
CMD &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"./app"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now building this Dockerfile&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker build &lt;span class="nt"&gt;-t&lt;/span&gt; multi:v1.0.0 ./ &lt;span class="nt"&gt;-f&lt;/span&gt; Dockerfile_M


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;👉 This will first pull the gcc:latest image from Docker Hub&lt;/p&gt;

&lt;p&gt;If we check the image size &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
multi               v1.0.0              8b00f89eb22c        23 seconds ago      1.27GB
gcc                 latest              c6aa7ca27d67        4 days ago          1.27GB


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Testing the image by running it in a container&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker run &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; print_c multi:v1.0.0
Hello World


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Well, this works, but the images are obnoxiously large for such a simple task 😥&lt;/p&gt;

&lt;p&gt;To solve this, we can edit the Dockerfile as follows&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# GCC Compile Block&lt;/span&gt;
FROM gcc:latest as compile_base
LABEL maintainer &lt;span class="s2"&gt;"Author &amp;lt;Author@localhost.com&amp;gt;"&lt;/span&gt;
ADD app.c /root
WORKDIR /root
RUN gcc &lt;span class="nt"&gt;-o&lt;/span&gt; ./app ./app.c

&lt;span class="c"&gt;# APP Running Block&lt;/span&gt;
FROM alpine:latest
RUN apk add &lt;span class="nt"&gt;--no-cache&lt;/span&gt; gcompat
WORKDIR /root
COPY &lt;span class="nt"&gt;--from&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;compile_base /root/app ./ &lt;span class="c"&gt;# we set an alias in the compile block as "compile_base" which is being used here&lt;/span&gt;
CMD &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"./app"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This is called a &lt;strong&gt;multi-stage&lt;/strong&gt; build. Multi-stage builds in Docker allow you to use multiple FROM statements in a single Dockerfile to create multiple intermediate images, each with its own set of instructions and layers.&lt;/p&gt;

&lt;p&gt;✨ The advantage of using multi-stage builds is that it allows you to create smaller and more efficient Docker images.&lt;/p&gt;

&lt;p&gt;Now we will build this image &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker build &lt;span class="nt"&gt;-t&lt;/span&gt; multi:v1.0.1 ./ &lt;span class="nt"&gt;-f&lt;/span&gt; Dockerfile_M


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Checking this image size&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
multi               v1.0.1              1cc2b8149d7d        6 seconds ago       7.23MB


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Testing the image by running it in a container&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker run &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; print_c multi:v1.0.1
Hello World


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We can try to reduce the image size even further by splitting the package-install step into its own stage&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

vi Dockerfile_M2

&lt;span class="c"&gt;# GCC Compile Block&lt;/span&gt;
FROM gcc:latest as compile_base
LABEL maintainer &lt;span class="s2"&gt;"Author &amp;lt;Author@localhost.com&amp;gt;"&lt;/span&gt;
ADD app.c /root
WORKDIR /root
RUN gcc &lt;span class="nt"&gt;-o&lt;/span&gt; ./app ./app.c

&lt;span class="c"&gt;# Package Install Block&lt;/span&gt;
FROM alpine:latest as package_install
RUN apk add &lt;span class="nt"&gt;--no-cache&lt;/span&gt; gcompat
WORKDIR /root
COPY &lt;span class="nt"&gt;--from&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;compile_base /root/app ./

&lt;span class="c"&gt;# App running Block&lt;/span&gt;
FROM package_install as run
WORKDIR /root
CMD &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"./app"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Building the image&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker build &lt;span class="nt"&gt;-t&lt;/span&gt; multi:v1.0.2 ./ &lt;span class="nt"&gt;-f&lt;/span&gt; Dockerfile_M2


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Checking the size&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
multi               v1.0.2              60500955b71d        4 seconds ago       7.23MB


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Testing the image by running it in a container&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker run &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; print_c multi:v1.0.2
Hello World


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 The size difference can't be seen in the above case because the compiled binary is tiny compared to a real program&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In conclusion, managing Docker images is an important part of working with Docker containers. By understanding the various Docker image management commands and best practices I discussed above, users can effectively manage their Docker images to optimize their container environment and workflow ✔&lt;/p&gt;

</description>
      <category>docker</category>
      <category>linux</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Docker Volume Management</title>
      <dc:creator>Waji</dc:creator>
      <pubDate>Sun, 05 Mar 2023 12:24:34 +0000</pubDate>
      <link>https://dev.to/waji97/docker-volume-management-m53</link>
      <guid>https://dev.to/waji97/docker-volume-management-m53</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;We are aware of the fact that Docker images are &lt;strong&gt;read-only&lt;/strong&gt;. When you run a container, Docker creates a writable layer on top of the image file system. This layer stores any changes made to the container during its lifetime, such as installed software or modified files. When the container is deleted, the writable layer is removed along with any data or settings stored in it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1r0i0pkgt5daafrlhne.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1r0i0pkgt5daafrlhne.png" alt="Read - only" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To solve this, we can use &lt;strong&gt;Docker volumes&lt;/strong&gt;. They allow us to mount directories from the host machine or other containers into the container. By storing data and settings in a volume, they can be easily accessed and reused between container runs. There are three types of volume management:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Host Volume&lt;/li&gt;
&lt;li&gt;Container Volume&lt;/li&gt;
&lt;li&gt;Docker Volume&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;👉 I will be going through each of them along with a short hands on examples&lt;/p&gt;




&lt;h2&gt;
  
  
  Host Volumes
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxdu4w8q9ja09d5p8ajc7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxdu4w8q9ja09d5p8ajc7.png" alt="Host" width="800" height="147"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Host volumes are directories on the host machine that are mounted into containers, allowing the container to read and write files directly to the host file system&lt;/p&gt;

&lt;p&gt;✨ Useful for scenarios where we need to persist data outside of the container, or share data between the host and container&lt;/p&gt;

&lt;h3&gt;
  
  
  Simple Test
&lt;/h3&gt;

&lt;p&gt;Creating a mysql_DB container&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; mysql_db &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;MYSQL_ROOT_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;root &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;MYSQL_DATABASE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;testDB &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; /docker_dir/con_volume_1:/var/lib/mysql &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; mysql:5.7
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 The &lt;code&gt;-e&lt;/code&gt; option lets us set the root user's password and create a test database&lt;/p&gt;

&lt;p&gt;👉 The line containing &lt;code&gt;-v&lt;/code&gt; option means that any data written to the &lt;code&gt;/var/lib/mysql&lt;/code&gt; directory within the container will be stored in the &lt;code&gt;/docker_dir/con_volume_1&lt;/code&gt; directory on the host, allowing the data to persist even if the container is deleted&lt;/p&gt;
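
&lt;p&gt;👉 A bind mount can also be made read-only by appending &lt;code&gt;:ro&lt;/code&gt;, which is useful when the container should only read the host data. A small sketch (the &lt;code&gt;/docker_dir/static_content&lt;/code&gt; path is hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker run -d --name web_ro \
  -v /docker_dir/static_content:/usr/share/nginx/html:ro \
  nginx:latest

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;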

&lt;blockquote&gt;
&lt;p&gt;💡 In Kubernetes, credentials are handled through a secret file&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Checking docker process&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 We will be able to see the mysql:5.7 container that we just created and started&lt;/p&gt;

&lt;p&gt;In our host system,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; /docker_dir/con_volume_1/
drwxr-x--- 2 polkitd input     4096 Mar  5 19:34 mysql
drwxr-x--- 2 polkitd input       20 Mar  5 19:34 testDB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The data is saved in our local host&lt;/p&gt;

&lt;p&gt;We can also find the same data inside our container&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; mysql_db /bin/bash
bash-4.2# &lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; /var/lib/mysql
drwxr-x--- 2 mysql mysql     4096 Mar  5 10:34 mysql
drwxr-x--- 2 mysql mysql       20 Mar  5 10:34 testDB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Returning to the host and after deleting the container, we can still see the database data still remains&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker stop mysql_db
docker &lt;span class="nb"&gt;rm &lt;/span&gt;mysql_db
&lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; /docker_dir/con_volume_1
drwxr-x--- 2 polkitd input     4096 Mar  5 19:34 mysql
drwxr-x--- 2 polkitd input       20 Mar  5 19:34 testDB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;😥 But using the &lt;code&gt;-v&lt;/code&gt; option has a slight issue&lt;/p&gt;

&lt;p&gt;Creating a new container with a different DB name&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; mysql_db &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;MYSQL_ROOT_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;root &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;MYSQL_DATABASE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;MYSQLDB &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; /docker_dir/con_volume_1:/var/lib/mysql &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; mysql:5.7
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 The above should create a database named &lt;code&gt;MYSQLDB&lt;/code&gt;, but instead the container shows the previous database's data&lt;/p&gt;

&lt;p&gt;From the host&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; /docker_dir/con_volume_1
drwxr-x--- 2 polkitd input     4096 Mar  5 19:34 mysql
drwxr-x--- 2 polkitd input       20 Mar  5 19:34 testDB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Inside the container&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; /var/lib/mysql
drwxr-x--- 2 mysql mysql     4096 Mar  5 10:34 mysql
drwxr-x--- 2 mysql mysql       20 Mar  5 10:34 testDB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 We can confirm that the container sees exactly the same data as the host directory. The &lt;code&gt;-v&lt;/code&gt; option mounts the host directory into the container, and because that directory already contains MySQL data, the entrypoint skips initialization, so &lt;code&gt;MYSQLDB&lt;/code&gt; is never created.&lt;/p&gt;




&lt;h2&gt;
  
  
  Container Volume
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faj5h4dcnevzz91kq0ddw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faj5h4dcnevzz91kq0ddw.png" alt="Container" width="800" height="145"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Container volumes are volumes created and managed by Docker, and are associated with a specific container&lt;/p&gt;

&lt;p&gt;✨ Useful when we need to isolate and share data between containers&lt;/p&gt;

&lt;h3&gt;
  
  
  Simple Test
&lt;/h3&gt;

&lt;p&gt;Creating an &lt;code&gt;index.html&lt;/code&gt; file inside the directory&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /docker_dir/con_volume_1/index.html
Nginx Main Page &lt;span class="o"&gt;!!&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Starting a container on port 8001&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; Nginx_1 &lt;span class="nt"&gt;-p&lt;/span&gt; 8001:80 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; /docker_dir/con_volume_1:/usr/share/nginx/html &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; nginx:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On port 8002&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; Nginx_2 &lt;span class="nt"&gt;-p&lt;/span&gt; 8002:80 &lt;span class="se"&gt;\ &lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nt"&gt;--volumes-from&lt;/span&gt; Nginx_1 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; nginx:latest    
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On port 8003&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; Nginx_3 &lt;span class="nt"&gt;-p&lt;/span&gt; 8003:80 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nt"&gt;--volumes-from&lt;/span&gt; Nginx_1 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; nginx:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 Here &lt;code&gt;--volumes-from&lt;/code&gt; mounts all the volumes of the specified container into the new container, so they end up sharing the same host directory&lt;/p&gt;

&lt;p&gt;Now from the host system&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl localhost:8001
Nginx Main Page &lt;span class="o"&gt;!!&lt;/span&gt;

curl localhost:8002
Nginx Main Page &lt;span class="o"&gt;!!&lt;/span&gt;

curl localhost:8003
Nginx Main Page &lt;span class="o"&gt;!!&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we add something to the existing file,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; /docker_dir/con_volume_1/index.html
Volumes From Test &lt;span class="o"&gt;!!&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl localhost:8001
Nginx Main Page &lt;span class="o"&gt;!!&lt;/span&gt;
Volumes From Test &lt;span class="o"&gt;!!&lt;/span&gt;

curl localhost:8002
Nginx Main Page &lt;span class="o"&gt;!!&lt;/span&gt;
Volumes From Test &lt;span class="o"&gt;!!&lt;/span&gt;

curl localhost:8003
Nginx Main Page &lt;span class="o"&gt;!!&lt;/span&gt;
Volumes From Test &lt;span class="o"&gt;!!&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 We can see that one volume directory is shared across several containers&lt;/p&gt;




&lt;h2&gt;
  
  
  Docker Volume
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fid9ikk9dpy85u2o3m9va.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fid9ikk9dpy85u2o3m9va.png" alt="Docker Volume" width="800" height="146"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Docker volumes are volumes created and managed by Docker, and can be used by multiple containers, not just a single container. &lt;/p&gt;

&lt;p&gt;✨ Useful for scenarios where we need to share data between multiple containers, or when we need to back up or restore volumes&lt;/p&gt;
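&lt;p&gt;The backup use-case is commonly handled by mounting the volume read-only into a throwaway container and archiving it to a bind-mounted host directory. The sketch below only prints the command rather than running it, since it assumes a running Docker daemon and the &lt;code&gt;myVolume_1&lt;/code&gt; volume created below:&lt;/p&gt;

```shell
# Sketch: archive myVolume_1 to ./backup/myVolume_1.tar.gz via a busybox container.
backup_cmd=$(cat <<'EOF'
docker run --rm \
  -v myVolume_1:/volume:ro \
  -v "$(pwd)/backup":/backup \
  busybox tar czf /backup/myVolume_1.tar.gz -C /volume .
EOF
)
echo "$backup_cmd"
```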

&lt;h3&gt;
  
  
  Simple Test
&lt;/h3&gt;

&lt;p&gt;Creating a docker volume&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker volume create myVolume_1
myVolume_1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To check current volumes&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker volume &lt;span class="nb"&gt;ls

&lt;/span&gt;DRIVER              VOLUME NAME
&lt;span class="nb"&gt;local               &lt;/span&gt;97e467c79d320138ef408f661ea6ce37d8294ce7e58524ae018d4225ad5ba914
&lt;span class="nb"&gt;local               &lt;/span&gt;db9eb4a8b3eb12c36056d398b65c6385315a28e3d3a43b7a7913afc94efc86aa
&lt;span class="nb"&gt;local               &lt;/span&gt;myVolume_1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 We can also use &lt;code&gt;docker volume inspect myVolume_1&lt;/code&gt; to inspect further details such as mountpoint of the volume&lt;/p&gt;
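&lt;p&gt;For scripting, a single field such as the mountpoint can be pulled out with a Go-template &lt;code&gt;--format&lt;/code&gt;. A small sketch (guarded: on machines without docker it falls back to the conventional local-driver path, which is an assumption rather than a queried value):&lt;/p&gt;

```shell
# Print just the host-side mountpoint of a named volume.
volume_mountpoint() {
  if command -v docker >/dev/null 2>&1 && docker volume inspect "$1" >/dev/null 2>&1; then
    docker volume inspect --format '{{ .Mountpoint }}' "$1"
  else
    # Fallback: the usual path under the local driver (assumed, not queried).
    echo "/var/lib/docker/volumes/$1/_data"
  fi
}

mountpoint=$(volume_mountpoint myVolume_1)
echo "$mountpoint"
```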

&lt;p&gt;Now we can start a new container connected to the docker volume&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; mysql_DB &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;MYSQL_ROOT_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;root &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;MYSQL_DATABASE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;testDB &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; myVolume_1:/var/lib/mysql &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; mysql:5.7
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Accessing the CLI inside the container&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; mysql_DB /bin/bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Inside the container&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;mysql &lt;span class="nt"&gt;-u&lt;/span&gt; root &lt;span class="nt"&gt;-p&lt;/span&gt; testDB

mysql&amp;gt; create database testDB2&lt;span class="p"&gt;;&lt;/span&gt;
Query OK, 1 row affected &lt;span class="o"&gt;(&lt;/span&gt;0.00 sec&lt;span class="o"&gt;)&lt;/span&gt;

mysql&amp;gt; show databases&lt;span class="p"&gt;;&lt;/span&gt;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| testDB             |
| testDB2            |
+--------------------+
6 rows &lt;span class="k"&gt;in &lt;/span&gt;&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;0.00 sec&lt;span class="o"&gt;)&lt;/span&gt;

mysql&amp;gt; &lt;span class="nb"&gt;exit&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 I created a &lt;code&gt;testDB2&lt;/code&gt; database to check results&lt;/p&gt;

&lt;p&gt;From the host system&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; /var/lib/docker/volumes/myVolume_1/_data/
drwxr-x--- 2 polkitd input     4096  6월 23 11:53 mysql
drwxr-x--- 2 polkitd input       20  6월 23 11:53 testDB
drwxr-x--- 2 polkitd input       20  6월 23 11:55 testDB2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 We can confirm the database that we created inside the container can be seen from the host system&lt;/p&gt;

&lt;h3&gt;
  
  
  Read-only Docker Volume
&lt;/h3&gt;

&lt;p&gt;Creating another docker volume&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker volume create myVolume_2
myVolume_2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Creating a new &lt;code&gt;index.html&lt;/code&gt; inside this volume&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /var/lib/docker/volumes/myVolume_2/_data/index.html
My Nginx Web Site &lt;span class="o"&gt;!!&lt;/span&gt;

&lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; /var/lib/docker/volumes/myVolume_2/_data/
&lt;span class="nt"&gt;-rw-r--r--&lt;/span&gt; 1 root root 41  6월 23 12:06 index.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Creating a new nginx container and checking&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; Nginx_1 &lt;span class="nt"&gt;-p&lt;/span&gt; 80:80 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; myVolume_2:/usr/share/nginx/html:ro &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; nginx:latest

curl localhost:80
My Nginx Web Site &lt;span class="o"&gt;!!&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 The &lt;code&gt;:ro&lt;/code&gt; suffix mounts the volume as read-only&lt;/p&gt;

&lt;p&gt;Now from inside this container if we try to edit the file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; /usr/share/nginx/html/index.html
bash: /usr/share/nginx/html/index.html: Read-only file system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 As we mounted as &lt;strong&gt;read-only&lt;/strong&gt;, we cannot write from inside the container&lt;/p&gt;

&lt;p&gt;However, we can add contents from the host system&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; /var/lib/docker/volumes/myVolume_2/_data/index.html
Change Content &lt;span class="o"&gt;!!&lt;/span&gt;

curl localhost:80
My Nginx Web Site &lt;span class="o"&gt;!!&lt;/span&gt;
Change Content &lt;span class="o"&gt;!!&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In conclusion, Docker volumes provide a way to manage and persist data within containers. The three types of Docker volumes - host volumes, container volumes, and Docker volumes - offer different approaches to managing data, each with its own strengths and limitations ✔&lt;/p&gt;

</description>
      <category>devops</category>
      <category>automation</category>
      <category>cicd</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>Docker Network Management</title>
      <dc:creator>Waji</dc:creator>
      <pubDate>Sun, 05 Mar 2023 10:24:22 +0000</pubDate>
      <link>https://dev.to/waji97/docker-network-management-4cc9</link>
      <guid>https://dev.to/waji97/docker-network-management-4cc9</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Docker provides various options for managing networking between containers and between containers and the host system&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F54m2d5jckhkef9yr5891.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F54m2d5jckhkef9yr5891.png" alt="Networking"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Docker's network management features allow developers to easily create and manage network connections between containers and between containers and the host system&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fadmb65lrleaiezn2hr91.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fadmb65lrleaiezn2hr91.png" alt="netowkring"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Bridge network driver:&lt;/strong&gt; Creates a virtual bridge network that allows containers to communicate with each other and with the host system&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Host network driver:&lt;/strong&gt; Removes the network isolation between the container and the host system, and allows the container to use the host system's networking stack&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;None network driver:&lt;/strong&gt; Disables networking for the container, which means that the container cannot connect to the network or access the Internet&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Overlay network driver:&lt;/strong&gt; Creates a multi-host network that spans multiple Docker hosts, allowing containers to communicate with each other across hosts&lt;/li&gt;
&lt;/ul&gt;
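&lt;p&gt;We can see which driver backs each network on a machine straight from the CLI. A small sketch (guarded, with illustrative fallback output for machines where docker is unavailable):&lt;/p&gt;

```shell
# List networks with their drivers; bridge, host and none exist by default.
if command -v docker >/dev/null 2>&1; then
  networks=$(docker network ls --format '{{ .Name }}: {{ .Driver }}' 2>/dev/null)
fi
if [ -z "${networks:-}" ]; then
  # Illustrative output for a default installation (the "none" network
  # uses the null driver internally).
  networks='bridge: bridge
host: host
none: null'
fi
echo "$networks"
```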




&lt;h2&gt;
  
  
  Short hands on
&lt;/h2&gt;

&lt;p&gt;Creating 2 containers&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; myWEB nginx

&lt;span class="c"&gt;# The second container&lt;/span&gt;
docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; myWEB2 nginx


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now if we check our network interfaces&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

ifconfig
veth14ce8a6
veth5e588d5


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;👉 We will be able to see 2 new interfaces that are created automatically when the containers are created&lt;/p&gt;

&lt;p&gt;Using the following command, we can check the bridge interface details&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

brctl show docker0
bridge name bridge &lt;span class="nb"&gt;id       &lt;/span&gt;STP enabled interfaces
docker0     8000.0242cc3ec3d9   no      veth14ce8a6
                            veth5e588d5


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;👉 Since we didn't use the &lt;code&gt;-it&lt;/code&gt; options when running the container, we can use&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; myWEB /bin/bash


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;✨ This will let us connect to the bash shell of our nginx container&lt;/p&gt;

&lt;p&gt;Now we need to check the interface and IP address of this container&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get &lt;span class="nb"&gt;install &lt;/span&gt;net-tools

ifconfig
eth0: &lt;span class="nv"&gt;flags&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4163&amp;lt;UP,BROADCAST,RUNNING,MULTICAST&amp;gt;  mtu 1500
        inet 172.17.0.2  netmask 255.255.0.0  broadcast 172.17.255.255


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If we check on our second container&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

eth0: &lt;span class="nv"&gt;flags&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4163&amp;lt;UP,BROADCAST,RUNNING,MULTICAST&amp;gt;  mtu 1500
        inet 172.17.0.3  netmask 255.255.0.0  broadcast 172.17.255.255


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We are able to confirm that &lt;/p&gt;

&lt;p&gt;myWEB =&amp;gt; eth0: 172.17.0.2&lt;br&gt;
myWEB2 =&amp;gt; eth0: 172.17.0.3&lt;/p&gt;

&lt;p&gt;Another thing we can try is creating a &lt;strong&gt;custom bridge&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker network create &lt;span class="nt"&gt;--driver&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;bridge &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--subnet&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10.1.1.0/24 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--ip-range&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10.1.1.0/24 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--gateway&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10.1.1.1 myNet


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To confirm &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker network &lt;span class="nb"&gt;ls

&lt;/span&gt;dd6eaf504ee5        myNet               bridge              &lt;span class="nb"&gt;local&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If we check the local host network interfaces&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

ifconfig

br-dd6eaf504ee5: &lt;span class="nv"&gt;flags&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4099&amp;lt;UP,BROADCAST,MULTICAST&amp;gt;  mtu 1500
        inet 10.1.1.1  netmask 255.255.255.0  broadcast 10.1.1.255


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We can also inspect using&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker network inspect myNet


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;So what if we want to run new containers using our custom network?&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--net&lt;/span&gt; myNet &lt;span class="nt"&gt;-p&lt;/span&gt; 80:80 &lt;span class="nt"&gt;--name&lt;/span&gt; myNet_nginx1 nginx

docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--net&lt;/span&gt; myNet &lt;span class="nt"&gt;-p&lt;/span&gt; 8080:80 &lt;span class="nt"&gt;--name&lt;/span&gt; myNet_nginx2 nginx


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd3sxo0w5uriufiq82au6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd3sxo0w5uriufiq82au6.png" alt="Port forwarding"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are actually performing port forwarding here&lt;/p&gt;

&lt;p&gt;If we use the host Linux system's base interface IP address,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpl4cakrvzg1etvd25yyc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpl4cakrvzg1etvd25yyc.png" alt="Result"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This would mean telling each client to connect on either port 80 or port 8080 to reach the website, which is not very efficient. This is where we could use a proxy server (the HAProxy Docker image) along with a network alias&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1cd50z7kj8oyiiw7nxnu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1cd50z7kj8oyiiw7nxnu.png" alt="Net alias"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;In this post, I shared some of the basic network management tools used in Docker. In the future posts, I will be sharing how we can actually utilize Proxy with the net-aliases ✔&lt;/p&gt;

</description>
      <category>linux</category>
      <category>docker</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Process Scheduling in Linux</title>
      <dc:creator>Waji</dc:creator>
      <pubDate>Sun, 05 Mar 2023 04:12:03 +0000</pubDate>
      <link>https://dev.to/waji97/process-scheduling-in-linux-262k</link>
      <guid>https://dev.to/waji97/process-scheduling-in-linux-262k</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;cron&lt;/strong&gt; is a process scheduler for Linux. The &lt;strong&gt;&lt;code&gt;crontab&lt;/code&gt;&lt;/strong&gt; is a list of commands that you want to run on a regular schedule, and also the name of the command used to manage that list.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;👉 &lt;code&gt;cron&lt;/code&gt; is a time-based job scheduler in Unix-like operating systems. Users can schedule jobs (commands or scripts) to run automatically at a specified time and date.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;crontab&lt;/code&gt; is the program used to install, deinstall or list the tables used to drive the cron daemon. Each user can have their own crontab, and though these are files in /var, they are not intended to be edited directly.&lt;/p&gt;

&lt;p&gt;💡 &lt;strong&gt;Format&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;MIN HOUR DOM MON DOW CMD&lt;/code&gt;&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Field    Description    Allowed Value

MIN      Minute field    0 to 59
HOUR     Hour field      0 to 23
DOM      Day of Month    1-31
MON      Month field     1-12
DOW      Day Of Week     0-6
CMD      Command         Any command to be executed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;&lt;code&gt;“ * ” means every value                 “ * ” : every day ( field 3 )&lt;br&gt;
“ - ” specifies a range                 “ 1-12 ” : January through December ( field 4 )&lt;br&gt;
“ , ” specifies multiple values         “ 10,15 ” : at 10:00 and at 15:00 ( field 2 )&lt;br&gt;
“ / ” specifies a step interval         “ */10 ” : every 10 minutes ( field 1 )&lt;/code&gt;&lt;/p&gt;
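&lt;p&gt;Putting those four operators together, a few illustrative entries (the script paths are hypothetical):&lt;/p&gt;

```
*/10 *     * *    *  /root/script/every_10_minutes.sh
0    10,15 * *    *  /root/script/at_10_and_15.sh
30   6     * 1-12 *  /root/script/daily_report.sh
```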

&lt;ul&gt;
&lt;li&gt;Mainly used on Linux systems for handling recurring work ( i.e., scheduled jobs )&lt;/li&gt;
&lt;li&gt;cron is the process-scheduling daemon; it runs the designated jobs at the specified times&lt;/li&gt;
&lt;li&gt;crontab refers to the file that defines the list of scheduled jobs cron will run ( the cron table )&lt;/li&gt;
&lt;li&gt;cron can keep a separate set of scheduled jobs for each user&lt;/li&gt;
&lt;li&gt;Logs for cron jobs are saved to /var/log/cron
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;rpm &lt;span class="nt"&gt;-qa&lt;/span&gt; | &lt;span class="nb"&gt;grep &lt;/span&gt;cronie
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ps &lt;span class="nt"&gt;-ef&lt;/span&gt; | &lt;span class="nb"&gt;grep &lt;/span&gt;cron
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; /var/log/cron
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; /var/spool/cron
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; /etc/cron
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Setting up cron
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;crontab -l&lt;/code&gt; : list the scheduled jobs&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;crontab -e&lt;/code&gt; : edit the scheduled jobs&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;crontab -r&lt;/code&gt; : remove the scheduled jobs&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;crontab -u [UserName]&lt;/code&gt; : view and edit a specific user's scheduled jobs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;💡 We can schedule a task or a process using the &lt;code&gt;crontab -e&lt;/code&gt; command, which opens the vi editor. Also, only the ‘root’ user can use the &lt;code&gt;crontab -u [UserName]&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;We can also see what the &lt;code&gt;crontab&lt;/code&gt; file looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;➜  ~ &lt;span class="nb"&gt;cat&lt;/span&gt; /etc/crontab
&lt;span class="nv"&gt;SHELL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/bin/bash
&lt;span class="nv"&gt;PATH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/sbin:/bin:/usr/sbin:/usr/bin
&lt;span class="nv"&gt;MAILTO&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;root

&lt;span class="c"&gt;# For details see man 4 crontabs&lt;/span&gt;

&lt;span class="c"&gt;# Example of job definition:&lt;/span&gt;
&lt;span class="c"&gt;# .---------------- minute (0 - 59)&lt;/span&gt;
&lt;span class="c"&gt;# |  .------------- hour (0 - 23)&lt;/span&gt;
&lt;span class="c"&gt;# |  |  .---------- day of month (1 - 31)&lt;/span&gt;
&lt;span class="c"&gt;# |  |  |  .------- month (1 - 12) OR jan,feb,mar,apr ...&lt;/span&gt;
&lt;span class="c"&gt;# |  |  |  |  .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat&lt;/span&gt;
&lt;span class="c"&gt;# |  |  |  |  |&lt;/span&gt;
&lt;span class="c"&gt;# *  *  *  *  * user-name  command to be executed&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Examples
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Example1:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Making an empty directory with a script file in it. Providing ‘execute’ permission to the file as well.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;➜  ~ &lt;span class="nb"&gt;mkdir&lt;/span&gt; ./script
➜  ~ &lt;span class="nb"&gt;cd&lt;/span&gt; ./script

➜  script vi ./cron_test1.sh

&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Cron Test"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /root/cron.txt

➜  script &lt;span class="nb"&gt;chmod&lt;/span&gt; +x ./cron_test1.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Setting up the schedule for the script to run and confirming it.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;➜  script crontab &lt;span class="nt"&gt;-e&lt;/span&gt;
0 19 &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; /root/script/cron_test1.sh

➜  script crontab &lt;span class="nt"&gt;-l&lt;/span&gt;
0 19 &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; /root/script/cron_test1.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;After the script runs at 19:00&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;➜  script &lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; /root
total 12
&lt;span class="nt"&gt;-rw-r--r--&lt;/span&gt;   1 root root    0 Jan 12 15:12 풀이
&lt;span class="nt"&gt;-rw-------&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;  1 root root 1483 Jan 10 11:41 anaconda-ks.cfg
&lt;span class="k"&gt;**&lt;/span&gt;&lt;span class="nt"&gt;-rw-r--r--&lt;/span&gt;   1 root root   10 Jan 26 23:24 cron.txt

&lt;span class="c"&gt;# Checking the logs for cron**&lt;/span&gt;
➜  script &lt;span class="nb"&gt;tail&lt;/span&gt; /var/log/cron
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Example2:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Creating a new script that saves logs as &lt;code&gt;tar&lt;/code&gt; backup file and deletes it after 10 days.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;➜  script vi ./cron_test2.sh

&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;

&lt;span class="nv"&gt;DATE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; +%Y-%m-%d&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;BACKUP_DIR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/backup“
tar -cvzpf &lt;/span&gt;&lt;span class="nv"&gt;$BACKUP_DIR&lt;/span&gt;&lt;span class="s2"&gt;/test-&lt;/span&gt;&lt;span class="nv"&gt;$DATE&lt;/span&gt;&lt;span class="s2"&gt;.tar.gz /var/log 
find &lt;/span&gt;&lt;span class="nv"&gt;$BACKUP_DIR&lt;/span&gt;&lt;span class="s2"&gt;/* -mtime +10 -exec rm {} &lt;/span&gt;&lt;span class="se"&gt;\;&lt;/span&gt;&lt;span class="s2"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Giving the file executable permissions and also adding new crontab settings.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;➜  script &lt;span class="nb"&gt;chmod&lt;/span&gt; +x ./cron_test2.sh
➜  script crontab &lt;span class="nt"&gt;-e&lt;/span&gt;
crontab: installing new crontab

0 20 &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; /root/script/cron_test2.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;After the script runs at 20:00&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;➜  script &lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; /backup
test-2023-01-26.tar.gz

&lt;span class="k"&gt;**&lt;/span&gt;&lt;span class="c"&gt;# Checking the logs for cron**&lt;/span&gt;
➜  script &lt;span class="nb"&gt;tail&lt;/span&gt; /var/log/cron
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;
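&lt;p&gt;The retention line in the script above can be tried out safely with throwaway files, using &lt;code&gt;touch -d&lt;/code&gt; (a GNU coreutils option) to fake an old timestamp:&lt;/p&gt;

```shell
# Create one fresh and one 15-day-old file, then apply the same find expression.
tmp=$(mktemp -d)
touch "$tmp/fresh.tar.gz"
touch -d "15 days ago" "$tmp/old.tar.gz"

# Same expression as the backup script: delete anything older than 10 days.
find "$tmp"/* -mtime +10 -exec rm {} \;

remaining=$(ls "$tmp")
echo "$remaining"
rm -rf "$tmp"
```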

&lt;h3&gt;
  
  
  Deletion, Backup and Control
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Before deleting our crontab settings,&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;➜  ~ crontab &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /root/cron_back.txt
➜  ~ &lt;span class="nb"&gt;cat&lt;/span&gt; /root/cron_back.txt 
0 19 &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; /root/script/cron_test1.sh
0 20 &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; /root/script/cron_test2.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Deleting the crontab settings&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;➜  ~ crontab &lt;span class="nt"&gt;-r&lt;/span&gt;
➜  ~ crontab &lt;span class="nt"&gt;-l&lt;/span&gt;
no crontab &lt;span class="k"&gt;for &lt;/span&gt;root
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;We can easily restore our crontab settings using the backup .txt file that we created.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;➜  ~ crontab /root/cron_back.txt
➜  ~ crontab &lt;span class="nt"&gt;-l&lt;/span&gt;
0 19 &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; /root/script/cron_test1.sh
0 20 &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; /root/script/cron_test2.sh
&lt;/code&gt;&lt;/pre&gt;



👉 Just a note that there is no option to delete a single `crontab` entry; `crontab -r` removes the whole table, so individual entries must be edited out with `crontab -e`

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;We can ‘control’ which users are allowed to set up &lt;code&gt;crontab&lt;/code&gt; configs&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;➜  ~ vi /etc/cron.deny
itbank

&lt;span class="c"&gt;# If we try to use the following command as 'itbank'&lt;/span&gt;
➜ itbank ~ crontab &lt;span class="nt"&gt;-e&lt;/span&gt;
You &lt;span class="o"&gt;(&lt;/span&gt;itbank&lt;span class="o"&gt;)&lt;/span&gt; are not allowed to use this program &lt;span class="o"&gt;(&lt;/span&gt;crontab&lt;span class="o"&gt;)&lt;/span&gt;
See crontab&lt;span class="o"&gt;(&lt;/span&gt;1&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="k"&gt;for &lt;/span&gt;more information
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;
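As a side note, if an `/etc/cron.allow` file exists, cron consults it first and ignores `/etc/cron.deny` entirely. The decision logic can be simulated against scratch files (the directory and user name here are stand-ins, not the real system paths):

```shell
# Simulate cron's allow/deny lookup with scratch files.
# Real cron checks /etc/cron.allow first; only if it is absent
# does /etc/cron.deny come into play.
dir=$(mktemp -d)
echo itbank > "$dir/cron.deny"   # same deny entry as above
user=itbank

if [ -f "$dir/cron.allow" ]; then
    grep -qx "$user" "$dir/cron.allow" && verdict=allowed || verdict=denied
elif [ -f "$dir/cron.deny" ] && grep -qx "$user" "$dir/cron.deny"; then
    verdict=denied
else
    verdict=allowed
fi
echo "$verdict"   # denied
```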

</description>
      <category>linux</category>
      <category>bash</category>
      <category>beginners</category>
    </item>
    <item>
      <title>LVM &amp; VG in Linux</title>
      <dc:creator>Waji</dc:creator>
      <pubDate>Sun, 05 Mar 2023 04:08:45 +0000</pubDate>
      <link>https://dev.to/waji97/lvm-vg-in-linux-5bc2</link>
      <guid>https://dev.to/waji97/lvm-vg-in-linux-5bc2</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;We can think of LVM as a device mapper that provides logical volume management for Linux. It is used for creating single logical volumes from multiple physical volumes or entire hard disks.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;👉 For example, consider the &lt;code&gt;sda1&lt;/code&gt; partition of the &lt;code&gt;sda&lt;/code&gt; disk, mounted at the &lt;code&gt;/&lt;/code&gt; directory. If there is no space left inside this partition, normally we would have to delete some files to make room for new files or applications. With LVM, however, we create a ‘Volume Group’ and divide it into ‘Logical Volumes’, one of which is mounted at ‘/’. Additional hard disks (&lt;code&gt;sdb&lt;/code&gt;, &lt;code&gt;sdc&lt;/code&gt;, etc.) can then be added to this ‘Volume Group’, ultimately providing more space for ‘/’.&lt;/p&gt;

&lt;p&gt;💡 Just a note that while adding hard disks to the Volume Group is easy, a disk can only be &lt;strong&gt;removed&lt;/strong&gt; safely after its extents have been migrated elsewhere (with &lt;code&gt;pvmove&lt;/code&gt;, then &lt;code&gt;vgreduce&lt;/code&gt;)!&lt;/p&gt;




&lt;h1&gt;
  
  
  LVM Hands on
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Preparation for the hands on
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Creating new disks in our Linux-1 system.

&lt;ul&gt;
&lt;li&gt;One 2GB hard disk and three 1GB hard disks&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;Our &lt;code&gt;/etc/fstab&lt;/code&gt; file currently contains only the default entries:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# /etc/fstab&lt;/span&gt;
&lt;span class="c"&gt;# Created by anaconda on Tue Jan 10 10:45:44 2023&lt;/span&gt;
&lt;span class="c"&gt;#&lt;/span&gt;
&lt;span class="c"&gt;# Accessible filesystems, by reference, are maintained under '/dev/disk'&lt;/span&gt;
&lt;span class="c"&gt;# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info&lt;/span&gt;
&lt;span class="c"&gt;#&lt;/span&gt;
/dev/mapper/centos_linux--1-root /                       xfs     defaults        0 0
&lt;span class="nv"&gt;UUID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2d2f3276-dc8a-403c-bb04-53e472b9184c /boot                   xfs     defaults        0 0
/dev/mapper/centos_linux--1-swap swap                    swap    defaults        0 0
&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;We can check the mount status as well using the &lt;code&gt;df -h&lt;/code&gt; command:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# &lt;span class="nb"&gt;df&lt;/span&gt; &lt;span class="nt"&gt;-h&lt;/span&gt;
Filesystem                        Size  Used Avail Use% Mounted on
devtmpfs                          475M     0  475M   0% /dev
tmpfs                             487M     0  487M   0% /dev/shm
tmpfs                             487M  7.6M  479M   2% /run
tmpfs                             487M     0  487M   0% /sys/fs/cgroup
/dev/mapper/centos_linux--1-root   17G  1.6G   16G  10% /
/dev/sda1                        1014M  168M  847M  17% /boot
tmpfs                              98M     0   98M   0% /run/user/0
&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;We can also use the &lt;code&gt;lsblk&lt;/code&gt; command to see the current hard disks:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# lsblk
NAME                     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                        8:0    0   20G  0 disk 
├─sda1                     8:1    0    1G  0 part /boot
└─sda2                     8:2    0   19G  0 part 
  ├─centos_linux--1-root 253:0    0   17G  0 lvm  /
  └─centos_linux--1-swap 253:1    0    2G  0 lvm  &lt;span class="o"&gt;[&lt;/span&gt;SWAP]
sdb                        8:16   0    1G  0 disk 
sdc                        8:32   0    1G  0 disk 
sdd                        8:48   0    2G  0 disk 
sde                        8:64   0    1G  0 disk 
sr0                       11:0    1 1024M  0 rom
&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Hands on
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;We will create 1 partition on each disk with the default configuration.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048     2097151     1047552   83  Linux

Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048     2097151     1047552   83  Linux

Device Boot      Start         End      Blocks   Id  System
/dev/sdd1            2048     4194303     2096128   83  Linux

Device Boot      Start         End      Blocks   Id  System
/dev/sde1            2048     2097151     1047552   83  Linux

&lt;span class="c"&gt;#Checking the results using lsblk command.&lt;/span&gt;

&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# lsblk
NAME                     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                        8:0    0   20G  0 disk 
├─sda1                     8:1    0    1G  0 part /boot
└─sda2                     8:2    0   19G  0 part 
  ├─centos_linux--1-root 253:0    0   17G  0 lvm  /
  └─centos_linux--1-swap 253:1    0    2G  0 lvm  &lt;span class="o"&gt;[&lt;/span&gt;SWAP]
sdb                        8:16   0    1G  0 disk 
└─sdb1                     8:17   0 1023M  0 part /sdb1
sdc                        8:32   0    1G  0 disk 
└─sdc1                     8:33   0 1023M  0 part 
sdd                        8:48   0    2G  0 disk 
└─sdd1                     8:49   0    2G  0 part 
sde                        8:64   0    1G  0 disk 
└─sde1                     8:65   0 1023M  0 part 
sr0                       11:0    1 1024M  0 rom
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In &lt;code&gt;/dev/sdb&lt;/code&gt;, the new partition looks like this:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048     2097151     1047552   83  Linux
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Inside &lt;code&gt;fdisk&lt;/code&gt;, we can press ‘t’ to change the partition type and ‘L’ to list the available type codes:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Command &lt;span class="o"&gt;(&lt;/span&gt;m &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="nb"&gt;help&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;: t
Selected partition 1
Hex code &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;type &lt;/span&gt;L to list all codes&lt;span class="o"&gt;)&lt;/span&gt;: L

 0  Empty           24  NEC DOS         81  Minix / old Lin bf  Solaris        
 1  FAT12           27  Hidden NTFS Win 82  Linux swap / So c1  DRDOS/sec &lt;span class="o"&gt;(&lt;/span&gt;FAT-
 2  XENIX root      39  Plan 9          83  Linux           c4  DRDOS/sec &lt;span class="o"&gt;(&lt;/span&gt;FAT-
 3  XENIX usr       3c  PartitionMagic  84  OS/2 hidden C:  c6  DRDOS/sec &lt;span class="o"&gt;(&lt;/span&gt;FAT-
 4  FAT16 &amp;lt;32M      40  Venix 80286     85  Linux extended  c7  Syrinx         
 5  Extended        41  PPC PReP Boot   86  NTFS volume &lt;span class="nb"&gt;set &lt;/span&gt;da  Non-FS data    
 6  FAT16           42  SFS             87  NTFS volume &lt;span class="nb"&gt;set &lt;/span&gt;db  CP/M / CTOS / &lt;span class="nb"&gt;.&lt;/span&gt;
 7  HPFS/NTFS/exFAT 4d  QNX4.x          88  Linux plaintext de  Dell Utility   
 8  AIX             4e  QNX4.x 2nd part 8e  Linux LVM       &lt;span class="nb"&gt;df  &lt;/span&gt;BootIt
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Here, we need to find “Linux LVM”. Its code is &lt;code&gt;8e&lt;/code&gt;, so we type it and check the result&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048     2097151     1047552   8e  Linux LVM
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;
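Incidentally, the Blocks column in the fdisk listings above follows directly from the sector range: fdisk counts 512-byte sectors but reports 1 KiB blocks, so the sector count is halved. Checking with the values from `/dev/sdb1`:

```shell
# Blocks = (End - Start + 1) / 2, since fdisk counts 512-byte sectors
# but reports 1 KiB blocks. Values taken from /dev/sdb1 above.
START=2048
END=2097151
SECTORS=$((END - START + 1))
BLOCKS=$((SECTORS / 2))
echo "$BLOCKS"   # 1047552, matching the fdisk output
```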




&lt;h1&gt;
  
  
  VG Hands on
&lt;/h1&gt;

&lt;p&gt;To create a physical volume on each disk, &lt;/p&gt;

&lt;p&gt;👉 pvcreate /dev/sdb&lt;/p&gt;

&lt;p&gt;👉 pvcreate /dev/sdc&lt;/p&gt;

&lt;p&gt;👉 pvcreate /dev/sdd&lt;/p&gt;

&lt;p&gt;👉 pvcreate /dev/sde&lt;/p&gt;

&lt;p&gt;To create the Volume Group,&lt;/p&gt;

&lt;p&gt;vgcreate VG /dev/sdb /dev/sdc /dev/sdd&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# vgcreate VG /dev/sdb /dev/sdc /dev/sdd
Volume group &lt;span class="s2"&gt;"VG"&lt;/span&gt; successfully created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can use the &lt;code&gt;vgdisplay&lt;/code&gt; command to see our VGs,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# vgdisplay
  &lt;span class="nt"&gt;---&lt;/span&gt; Volume group &lt;span class="nt"&gt;---&lt;/span&gt;
  VG Name               centos_linux-1
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             &lt;span class="nb"&gt;read&lt;/span&gt;/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               &amp;lt;19.00 GiB
  PE Size               4.00 MiB
  Total PE              4863
  Alloc PE / Size       4863 / &amp;lt;19.00 GiB
  Free  PE / Size       0 / 0   
  VG UUID               7hBnc9-6dXE-9f1q-rpmW-SuBd-TZv0-TVBa6X

  &lt;span class="nt"&gt;---&lt;/span&gt; Volume group &lt;span class="nt"&gt;---&lt;/span&gt;
  VG Name               VG
  System ID             
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  1
  VG Access             &lt;span class="nb"&gt;read&lt;/span&gt;/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               &amp;lt;3.99 GiB
  PE Size               4.00 MiB
  Total PE              1021
  Alloc PE / Size       0 / 0   
  Free  PE / Size       1021 / &amp;lt;3.99 GiB
  VG UUID               DdSDJQ-EpWo-uy8S-Pb3m-9AWP-tzBw-vaLSov
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
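The sizes reported by vgdisplay are just extent arithmetic: VG size = Total PE × PE Size. For the new VG (1021 extents of 4 MiB):

```shell
# VG size from extent arithmetic, using the vgdisplay figures above.
PE_SIZE_MIB=4   # PE Size
TOTAL_PE=1021   # Total PE for the new VG
TOTAL_MIB=$((TOTAL_PE * PE_SIZE_MIB))
echo "$TOTAL_MIB MiB"   # 4084 MiB, i.e. just under 3.99 GiB
```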



&lt;p&gt;Now we will create the logical volumes for this VG,&lt;/p&gt;

&lt;p&gt;This will create a logical volume ‘LV-1’ using 1GB of the VG’s space.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# lvcreate &lt;span class="nt"&gt;-L&lt;/span&gt; 1GB &lt;span class="nt"&gt;-n&lt;/span&gt; LV-1 VG
  Logical volume &lt;span class="s2"&gt;"LV-1"&lt;/span&gt; created.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will create a logical volume ‘LV-2’ using 50% of the Volume Group’s total space.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# lvcreate &lt;span class="nt"&gt;-l&lt;/span&gt; +50%VG &lt;span class="nt"&gt;-n&lt;/span&gt; LV-2 VG
WARNING: xfs signature detected on /dev/VG/LV-2 at offset 0. Wipe it? &lt;span class="o"&gt;[&lt;/span&gt;y/n]: y
  Wiping xfs signature on /dev/VG/LV-2.
  Logical volume &lt;span class="s2"&gt;"LV-2"&lt;/span&gt; created.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will create a logical volume ‘LV-3’ using 100% of the remaining space in the Volume Group.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# lvcreate &lt;span class="nt"&gt;-l&lt;/span&gt; +100%FREE &lt;span class="nt"&gt;-n&lt;/span&gt; LV-3 VG
  Logical volume &lt;span class="s2"&gt;"LV-3"&lt;/span&gt; created.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can use ‘lvscan’ here,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# lvscan
  ACTIVE            &lt;span class="s1"&gt;'/dev/centos_linux-1/swap'&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;2.00 GiB] inherit
  ACTIVE            &lt;span class="s1"&gt;'/dev/centos_linux-1/root'&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&amp;lt;17.00 GiB] inherit
  ACTIVE            &lt;span class="s1"&gt;'/dev/VG/LV-1'&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;1.00 GiB] inherit
  ACTIVE            &lt;span class="s1"&gt;'/dev/VG/LV-2'&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;1.99 GiB] inherit
  ACTIVE            &lt;span class="s1"&gt;'/dev/VG/LV-3'&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;1020.00 MiB] inherit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
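The lvscan sizes line up with extent arithmetic (the VG has 1021 extents of 4 MiB): `-L 1GB` takes 256 extents, `-l +50%VG` takes half of the total, and `-l +100%FREE` takes whatever is left:

```shell
# Extent counts behind each lvcreate call above.
PE_MIB=4
TOTAL=1021
LV1=$((1024 / PE_MIB))            # -L 1GB       -> 256 extents (1.00 GiB)
LV2=$((TOTAL * 50 / 100))         # -l +50%VG    -> 510 extents (1.99 GiB)
LV3=$((TOTAL - LV1 - LV2))        # -l +100%FREE -> 255 extents (1020 MiB)
echo "$LV1 $LV2 $LV3"
```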



&lt;p&gt;We have logical volumes created inside the VG. Now we just need to give each of them a filesystem,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# mkfs.xfs /dev/VG/LV-1
meta-data&lt;span class="o"&gt;=&lt;/span&gt;/dev/VG/LV-1           &lt;span class="nv"&gt;isize&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;512    &lt;span class="nv"&gt;agcount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4, &lt;span class="nv"&gt;agsize&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;65536 blks
         &lt;span class="o"&gt;=&lt;/span&gt;                       &lt;span class="nv"&gt;sectsz&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;512   &lt;span class="nv"&gt;attr&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2, &lt;span class="nv"&gt;projid32bit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1
         &lt;span class="o"&gt;=&lt;/span&gt;                       &lt;span class="nv"&gt;crc&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1        &lt;span class="nv"&gt;finobt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0, &lt;span class="nv"&gt;sparse&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0
data     &lt;span class="o"&gt;=&lt;/span&gt;                       &lt;span class="nv"&gt;bsize&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4096   &lt;span class="nv"&gt;blocks&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;262144, &lt;span class="nv"&gt;imaxpct&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;25
         &lt;span class="o"&gt;=&lt;/span&gt;                       &lt;span class="nv"&gt;sunit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0      &lt;span class="nv"&gt;swidth&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 blks
naming   &lt;span class="o"&gt;=&lt;/span&gt;version 2              &lt;span class="nv"&gt;bsize&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4096   ascii-ci&lt;span class="o"&gt;=&lt;/span&gt;0 &lt;span class="nv"&gt;ftype&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1
log      &lt;span class="o"&gt;=&lt;/span&gt;internal log           &lt;span class="nv"&gt;bsize&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4096   &lt;span class="nv"&gt;blocks&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2560, &lt;span class="nv"&gt;version&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2
         &lt;span class="o"&gt;=&lt;/span&gt;                       &lt;span class="nv"&gt;sectsz&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;512   &lt;span class="nv"&gt;sunit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 blks, lazy-count&lt;span class="o"&gt;=&lt;/span&gt;1
realtime &lt;span class="o"&gt;=&lt;/span&gt;none                   &lt;span class="nv"&gt;extsz&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4096   &lt;span class="nv"&gt;blocks&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0, &lt;span class="nv"&gt;rtextents&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0

&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# mkfs.xfs /dev/VG/LV-2
meta-data&lt;span class="o"&gt;=&lt;/span&gt;/dev/VG/LV-2           &lt;span class="nv"&gt;isize&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;512    &lt;span class="nv"&gt;agcount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4, &lt;span class="nv"&gt;agsize&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;130560 blks
         &lt;span class="o"&gt;=&lt;/span&gt;                       &lt;span class="nv"&gt;sectsz&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;512   &lt;span class="nv"&gt;attr&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2, &lt;span class="nv"&gt;projid32bit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1
         &lt;span class="o"&gt;=&lt;/span&gt;                       &lt;span class="nv"&gt;crc&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1        &lt;span class="nv"&gt;finobt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0, &lt;span class="nv"&gt;sparse&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0
data     &lt;span class="o"&gt;=&lt;/span&gt;                       &lt;span class="nv"&gt;bsize&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4096   &lt;span class="nv"&gt;blocks&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;522240, &lt;span class="nv"&gt;imaxpct&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;25
         &lt;span class="o"&gt;=&lt;/span&gt;                       &lt;span class="nv"&gt;sunit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0      &lt;span class="nv"&gt;swidth&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 blks
naming   &lt;span class="o"&gt;=&lt;/span&gt;version 2              &lt;span class="nv"&gt;bsize&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4096   ascii-ci&lt;span class="o"&gt;=&lt;/span&gt;0 &lt;span class="nv"&gt;ftype&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1
log      &lt;span class="o"&gt;=&lt;/span&gt;internal log           &lt;span class="nv"&gt;bsize&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4096   &lt;span class="nv"&gt;blocks&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2560, &lt;span class="nv"&gt;version&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2
         &lt;span class="o"&gt;=&lt;/span&gt;                       &lt;span class="nv"&gt;sectsz&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;512   &lt;span class="nv"&gt;sunit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 blks, lazy-count&lt;span class="o"&gt;=&lt;/span&gt;1
realtime &lt;span class="o"&gt;=&lt;/span&gt;none                   &lt;span class="nv"&gt;extsz&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4096   &lt;span class="nv"&gt;blocks&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0, &lt;span class="nv"&gt;rtextents&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0

&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# mkfs.xfs /dev/VG/LV-3
meta-data&lt;span class="o"&gt;=&lt;/span&gt;/dev/VG/LV-3           &lt;span class="nv"&gt;isize&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;512    &lt;span class="nv"&gt;agcount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4, &lt;span class="nv"&gt;agsize&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;65280 blks
         &lt;span class="o"&gt;=&lt;/span&gt;                       &lt;span class="nv"&gt;sectsz&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;512   &lt;span class="nv"&gt;attr&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2, &lt;span class="nv"&gt;projid32bit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1
         &lt;span class="o"&gt;=&lt;/span&gt;                       &lt;span class="nv"&gt;crc&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1        &lt;span class="nv"&gt;finobt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0, &lt;span class="nv"&gt;sparse&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0
data     &lt;span class="o"&gt;=&lt;/span&gt;                       &lt;span class="nv"&gt;bsize&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4096   &lt;span class="nv"&gt;blocks&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;261120, &lt;span class="nv"&gt;imaxpct&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;25
         &lt;span class="o"&gt;=&lt;/span&gt;                       &lt;span class="nv"&gt;sunit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0      &lt;span class="nv"&gt;swidth&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 blks
naming   &lt;span class="o"&gt;=&lt;/span&gt;version 2              &lt;span class="nv"&gt;bsize&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4096   ascii-ci&lt;span class="o"&gt;=&lt;/span&gt;0 &lt;span class="nv"&gt;ftype&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1
log      &lt;span class="o"&gt;=&lt;/span&gt;internal log           &lt;span class="nv"&gt;bsize&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4096   &lt;span class="nv"&gt;blocks&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;855, &lt;span class="nv"&gt;version&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2
         &lt;span class="o"&gt;=&lt;/span&gt;                       &lt;span class="nv"&gt;sectsz&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;512   &lt;span class="nv"&gt;sunit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 blks, lazy-count&lt;span class="o"&gt;=&lt;/span&gt;1
realtime &lt;span class="o"&gt;=&lt;/span&gt;none                   &lt;span class="nv"&gt;extsz&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4096   &lt;span class="nv"&gt;blocks&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0, &lt;span class="nv"&gt;rtextents&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can use &lt;code&gt;blkid&lt;/code&gt; to see the filesystem type and a short summary of our partitions&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# blkid
/dev/sda1: &lt;span class="nv"&gt;UUID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"2d2f3276-dc8a-403c-bb04-53e472b9184c"&lt;/span&gt; &lt;span class="nv"&gt;TYPE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"xfs"&lt;/span&gt; 
/dev/sda2: &lt;span class="nv"&gt;UUID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"UJrgq8-Rei9-528e-Hg0W-2QHD-n7Jp-OL4L13"&lt;/span&gt; &lt;span class="nv"&gt;TYPE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"LVM2_member"&lt;/span&gt; 
/dev/sdb: &lt;span class="nv"&gt;UUID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"gsms5M-xl9S-DxUo-BO4O-Qf3V-35ch-LXUOaB"&lt;/span&gt; &lt;span class="nv"&gt;TYPE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"LVM2_member"&lt;/span&gt; 
/dev/sdc: &lt;span class="nv"&gt;UUID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"l59wcF-iURW-suWZ-dfC6-Gf2z-5T0A-qb8xaK"&lt;/span&gt; &lt;span class="nv"&gt;TYPE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"LVM2_member"&lt;/span&gt; 
/dev/sdd: &lt;span class="nv"&gt;UUID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"BV4Gjx-jMBs-4K3V-sinB-lc7R-7DEM-bYYGXS"&lt;/span&gt; &lt;span class="nv"&gt;TYPE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"LVM2_member"&lt;/span&gt; 
/dev/sde: &lt;span class="nv"&gt;UUID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"UdVQLK-TlsI-cDsh-isfT-tuAT-yURe-np6RZW"&lt;/span&gt; &lt;span class="nv"&gt;TYPE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"LVM2_member"&lt;/span&gt; 
/dev/mapper/centos_linux--1-root: &lt;span class="nv"&gt;UUID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"8f826410-dc1d-4aba-bc6d-36d25621e5cf"&lt;/span&gt; &lt;span class="nv"&gt;TYPE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"xfs"&lt;/span&gt; 
/dev/mapper/centos_linux--1-swap: &lt;span class="nv"&gt;UUID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"ad01e948-9fdd-43ad-84e5-ce4ab14a995c"&lt;/span&gt; &lt;span class="nv"&gt;TYPE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"swap"&lt;/span&gt; 
/dev/mapper/VG-LV--1: &lt;span class="nv"&gt;UUID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"048f9bc6-f8f0-46b3-95f1-c7b1900adda6"&lt;/span&gt; &lt;span class="nv"&gt;TYPE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"xfs"&lt;/span&gt; 
/dev/mapper/VG-LV--2: &lt;span class="nv"&gt;UUID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"3074670a-13de-47ed-abc0-08b3319fb20e"&lt;/span&gt; &lt;span class="nv"&gt;TYPE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"xfs"&lt;/span&gt; 
/dev/mapper/VG-LV--3: &lt;span class="nv"&gt;UUID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"96062612-1a8a-4a21-bfc6-c16ccd3e96a6"&lt;/span&gt; &lt;span class="nv"&gt;TYPE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"xfs"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we can make 3 directories to serve as mount points for these logical volumes,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# &lt;span class="nb"&gt;mkdir&lt;/span&gt; /data1
&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# &lt;span class="nb"&gt;mkdir&lt;/span&gt; /data2
&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# &lt;span class="nb"&gt;mkdir&lt;/span&gt; /data3

&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# mount /dev/VG/LV-1 /data1
&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# mount /dev/VG/LV-2 /data2
&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# mount /dev/VG/LV-3 /data3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Confirming the mount,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# &lt;span class="nb"&gt;df&lt;/span&gt; &lt;span class="nt"&gt;-h&lt;/span&gt;
Filesystem                        Size  Used Avail Use% Mounted on
devtmpfs                          475M     0  475M   0% /dev
tmpfs                             487M     0  487M   0% /dev/shm
tmpfs                             487M  7.6M  479M   2% /run
tmpfs                             487M     0  487M   0% /sys/fs/cgroup
/dev/mapper/centos_linux--1-root   17G  1.6G   16G  10% /
/dev/sda1                        1014M  168M  847M  17% /boot
tmpfs                              98M     0   98M   0% /run/user/0
/dev/mapper/VG-LV--1             1014M   33M  982M   4% /data1
/dev/mapper/VG-LV--2              2.0G   33M  2.0G   2% /data2
/dev/mapper/VG-LV--3             1017M   33M  985M   4% /data3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we want these mounts to happen automatically at boot, we can just add entries for them in the /etc/fstab file&lt;/p&gt;
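For example, entries along these lines would remount the volumes at boot (a sketch; the mount options and trailing `0 0` fields mirror the existing fstab entries shown earlier):

```
# /etc/fstab additions (sketch)
/dev/VG/LV-1    /data1    xfs    defaults    0 0
/dev/VG/LV-2    /data2    xfs    defaults    0 0
/dev/VG/LV-3    /data3    xfs    defaults    0 0
```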

&lt;p&gt;For now, we will just umount these volumes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# umount /data1
&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# umount /data2
&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# umount /data3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We now want to add /dev/sde to this VG. &lt;/p&gt;

&lt;p&gt;First,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# pvcreate /dev/sde
  Physical volume &lt;span class="s2"&gt;"/dev/sde"&lt;/span&gt; successfully created.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we will use the ‘vgextend’ command,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# vgextend VG /dev/sde
  Volume group &lt;span class="s2"&gt;"VG"&lt;/span&gt; successfully extended
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can see the VG&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# vgdisplay VG
  &lt;span class="nt"&gt;---&lt;/span&gt; Volume group &lt;span class="nt"&gt;---&lt;/span&gt;
  VG Name               VG
  System ID             
  Format                lvm2
  Metadata Areas        4
  Metadata Sequence No  5
  VG Access             &lt;span class="nb"&gt;read&lt;/span&gt;/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               0
  Max PV                0
  Cur PV                4
  Act PV                4
  VG Size               4.98 GiB
  PE Size               4.00 MiB
  Total PE              1276
  Alloc PE / Size       1021 / &amp;lt;3.99 GiB
  Free  PE / Size       255 / 1020.00 MiB
  VG UUID               DdSDJQ-EpWo-uy8S-Pb3m-9AWP-tzBw-vaLSov
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, checking again with lvscan,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# lvscan
  ACTIVE            &lt;span class="s1"&gt;'/dev/centos_linux-1/swap'&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;2.00 GiB] inherit
  ACTIVE            &lt;span class="s1"&gt;'/dev/centos_linux-1/root'&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&amp;lt;17.00 GiB] inherit
  ACTIVE            &lt;span class="s1"&gt;'/dev/VG/LV-1'&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;1.00 GiB] inherit
  ACTIVE            &lt;span class="s1"&gt;'/dev/VG/LV-2'&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;1.99 GiB] inherit
  ACTIVE            &lt;span class="s1"&gt;'/dev/VG/LV-3'&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;1020.00 MiB] inherit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We want to allocate the newly added free space to LV-3 using ‘lvextend’,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# lvextend &lt;span class="nt"&gt;-l&lt;/span&gt; +100%FREE /dev/VG/LV-3
  Size of logical volume VG/LV-3 changed from 1020.00 MiB &lt;span class="o"&gt;(&lt;/span&gt;255 extents&lt;span class="o"&gt;)&lt;/span&gt; to 1.99 GiB &lt;span class="o"&gt;(&lt;/span&gt;510 extents&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
  Logical volume VG/LV-3 successfully resized.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, if we mount LV-3 on the /data3 directory, we can see that the filesystem still reports the old size,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# mount /dev/VG/LV-3 /data3

&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# &lt;span class="nb"&gt;df&lt;/span&gt; &lt;span class="nt"&gt;-h&lt;/span&gt;
Filesystem                        Size  Used Avail Use% Mounted on
devtmpfs                          475M     0  475M   0% /dev
tmpfs                             487M     0  487M   0% /dev/shm
tmpfs                             487M  7.6M  479M   2% /run
tmpfs                             487M     0  487M   0% /sys/fs/cgroup
/dev/mapper/centos_linux--1-root   17G  1.6G   16G  10% /
/dev/sda1                        1014M  168M  847M  17% /boot
tmpfs                              98M     0   98M   0% /run/user/0
/dev/mapper/VG-LV--3             1017M   33M  985M   4% /data3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The logical volume has grown, but the XFS filesystem has not. While it is still mounted, we can grow the filesystem online with ‘xfs_growfs’,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# xfs_growfs /dev/VG/LV-3
meta-data&lt;span class="o"&gt;=&lt;/span&gt;/dev/mapper/VG-LV--3   &lt;span class="nv"&gt;isize&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;512    &lt;span class="nv"&gt;agcount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4, &lt;span class="nv"&gt;agsize&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;65280 blks
         &lt;span class="o"&gt;=&lt;/span&gt;                       &lt;span class="nv"&gt;sectsz&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;512   &lt;span class="nv"&gt;attr&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2, &lt;span class="nv"&gt;projid32bit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1
         &lt;span class="o"&gt;=&lt;/span&gt;                       &lt;span class="nv"&gt;crc&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1        &lt;span class="nv"&gt;finobt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 &lt;span class="nv"&gt;spinodes&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0
data     &lt;span class="o"&gt;=&lt;/span&gt;                       &lt;span class="nv"&gt;bsize&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4096   &lt;span class="nv"&gt;blocks&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;261120, &lt;span class="nv"&gt;imaxpct&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;25
         &lt;span class="o"&gt;=&lt;/span&gt;                       &lt;span class="nv"&gt;sunit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0      &lt;span class="nv"&gt;swidth&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 blks
naming   &lt;span class="o"&gt;=&lt;/span&gt;version 2              &lt;span class="nv"&gt;bsize&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4096   ascii-ci&lt;span class="o"&gt;=&lt;/span&gt;0 &lt;span class="nv"&gt;ftype&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1
log      &lt;span class="o"&gt;=&lt;/span&gt;internal               &lt;span class="nv"&gt;bsize&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4096   &lt;span class="nv"&gt;blocks&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;855, &lt;span class="nv"&gt;version&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2
         &lt;span class="o"&gt;=&lt;/span&gt;                       &lt;span class="nv"&gt;sectsz&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;512   &lt;span class="nv"&gt;sunit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 blks, lazy-count&lt;span class="o"&gt;=&lt;/span&gt;1
realtime &lt;span class="o"&gt;=&lt;/span&gt;none                   &lt;span class="nv"&gt;extsz&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4096   &lt;span class="nv"&gt;blocks&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0, &lt;span class="nv"&gt;rtextents&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0
data blocks changed from 261120 to 522240
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
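&lt;p&gt;As a side note, ‘lvextend’ can also grow the filesystem in the same step: the &lt;code&gt;-r&lt;/code&gt; (&lt;code&gt;--resizefs&lt;/code&gt;) flag calls the appropriate filesystem resize tool for us. A minimal sketch (not run on this exact setup),&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Extend LV-3 by all remaining free space and resize the
# filesystem (XFS here) in one command instead of two
lvextend -r -l +100%FREE /dev/VG/LV-3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;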



&lt;p&gt;Now if we check again,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# &lt;span class="nb"&gt;df&lt;/span&gt; &lt;span class="nt"&gt;-h&lt;/span&gt;
Filesystem                        Size  Used Avail Use% Mounted on
devtmpfs                          475M     0  475M   0% /dev
tmpfs                             487M     0  487M   0% /dev/shm
tmpfs                             487M  7.6M  479M   2% /run
tmpfs                             487M     0  487M   0% /sys/fs/cgroup
/dev/mapper/centos_linux--1-root   17G  1.6G   16G  10% /
/dev/sda1                        1014M  168M  847M  17% /boot
tmpfs                              98M     0   98M   0% /run/user/0
/dev/mapper/VG-LV--3              2.0G   33M  2.0G   2% /data3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s add these volumes to /etc/fstab so they are mounted automatically at boot.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# vi /etc/fstab

&lt;span class="c"&gt;# /etc/fstab&lt;/span&gt;
&lt;span class="c"&gt;# Created by anaconda on Tue Jan 10 10:45:44 2023&lt;/span&gt;
&lt;span class="c"&gt;#&lt;/span&gt;
&lt;span class="c"&gt;# Accessible filesystems, by reference, are maintained under '/dev/disk'&lt;/span&gt;
&lt;span class="c"&gt;# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info&lt;/span&gt;
&lt;span class="c"&gt;#&lt;/span&gt;
/dev/mapper/centos_linux--1-root /                       xfs     defaults        0 0
&lt;span class="nv"&gt;UUID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2d2f3276-dc8a-403c-bb04-53e472b9184c /boot                   xfs     defaults        0 0
/dev/mapper/centos_linux--1-swap swap                    swap    defaults        0 0

/dev/VG/LV-1    /data1          xfs     defaults        0 0
/dev/VG/LV-2    /data2          xfs     defaults        0 0
/dev/VG/LV-3    /data3          xfs     defaults        0 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After the reboot,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# &lt;span class="nb"&gt;df&lt;/span&gt; &lt;span class="nt"&gt;-h&lt;/span&gt;
Filesystem                        Size  Used Avail Use% Mounted on
devtmpfs                          475M     0  475M   0% /dev
tmpfs                             487M     0  487M   0% /dev/shm
tmpfs                             487M  7.6M  479M   2% /run
tmpfs                             487M     0  487M   0% /sys/fs/cgroup
/dev/mapper/centos_linux--1-root   17G  1.6G   16G  10% /
/dev/mapper/VG-LV--3              2.0G   33M  2.0G   2% /data3
/dev/mapper/VG-LV--1             1014M   33M  982M   4% /data1
/dev/mapper/VG-LV--2              2.0G   33M  2.0G   2% /data2
/dev/sda1                        1014M  168M  847M  17% /boot
tmpfs                              98M     0   98M   0% /run/user/0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  Deleting the VG and Its Volumes
&lt;/h3&gt;

&lt;p&gt;Let’s unmount these three volumes and remove their auto-mount entries from /etc/fstab.&lt;/p&gt;
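&lt;p&gt;A minimal sketch of that cleanup (assuming the mount points and the fstab entries shown earlier),&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Unmount the three filesystems
umount /data1 /data2 /data3

# Delete the three /dev/VG/LV-* lines added to /etc/fstab earlier
sed -i '\|^/dev/VG/LV-|d' /etc/fstab
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;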

&lt;p&gt;Removing the logical volumes from the VG,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# lvremove /dev/VG/LV-1
Do you really want to remove active logical volume VG/LV-1? &lt;span class="o"&gt;[&lt;/span&gt;y/n]: y
  Logical volume &lt;span class="s2"&gt;"LV-1"&lt;/span&gt; successfully removed

&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# lvremove /dev/VG/LV-2
Do you really want to remove active logical volume VG/LV-2? &lt;span class="o"&gt;[&lt;/span&gt;y/n]: y
  Logical volume &lt;span class="s2"&gt;"LV-2"&lt;/span&gt; successfully removed

&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# lvremove /dev/VG/LV-3
Do you really want to remove active logical volume VG/LV-3? &lt;span class="o"&gt;[&lt;/span&gt;y/n]: y
  Logical volume &lt;span class="s2"&gt;"LV-3"&lt;/span&gt; successfully removed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s check now,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# lvscan
  ACTIVE            &lt;span class="s1"&gt;'/dev/centos_linux-1/swap'&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;2.00 GiB] inherit
  ACTIVE            &lt;span class="s1"&gt;'/dev/centos_linux-1/root'&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&amp;lt;17.00 GiB] inherit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Removing the VG,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# vgremove VG
  Volume group &lt;span class="s2"&gt;"VG"&lt;/span&gt; successfully removed

&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# vgdisplay VG
  Volume group &lt;span class="s2"&gt;"VG"&lt;/span&gt; not found
  Cannot process volume group VG
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Removing the physical volumes,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# pvremove /dev/sdb
  Labels on physical volume &lt;span class="s2"&gt;"/dev/sdb"&lt;/span&gt; successfully wiped.

&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# pvremove /dev/sdc
  Labels on physical volume &lt;span class="s2"&gt;"/dev/sdc"&lt;/span&gt; successfully wiped.

&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# pvremove /dev/sdd
  Labels on physical volume &lt;span class="s2"&gt;"/dev/sdd"&lt;/span&gt; successfully wiped.

&lt;span class="o"&gt;[&lt;/span&gt;root@Linux-1 ~]# pvremove /dev/sde
  Labels on physical volume &lt;span class="s2"&gt;"/dev/sde"&lt;/span&gt; successfully wiped.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>linux</category>
      <category>bash</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Docker Container Management</title>
      <dc:creator>Waji</dc:creator>
      <pubDate>Fri, 03 Mar 2023 01:41:59 +0000</pubDate>
      <link>https://dev.to/waji97/docker-container-management-2fnf</link>
      <guid>https://dev.to/waji97/docker-container-management-2fnf</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Docker container management involves tasks such as creating, starting, stopping, and removing containers, as well as monitoring and troubleshooting container performance and health&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh6ceemvd0i360uu7cswg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh6ceemvd0i360uu7cswg.png" alt="Docker container" width="800" height="461"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Some Basic Commands Hands-on
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 This will show us some options that we can use and also some of the management commands&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker search hello-world
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 This will search Docker Hub for 'hello-world' images&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker pull hello-world
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 This will pull the latest hello-world image from Docker Hub&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker images
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 This will list the images we have pulled&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;--name&lt;/span&gt; hello1 &lt;span class="nt"&gt;-it&lt;/span&gt; hello-world
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 This command creates and starts a new Docker container using the hello-world image &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;'hello1'&lt;/strong&gt; is the container name, and &lt;strong&gt;'-it'&lt;/strong&gt; enables interactive mode, letting us interact with the container's shell. Since it is meant for development and debugging, '-it' is best avoided in normal operation&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In the above case, since the 'hello-world' image has no shell to interact with, we are automatically returned to our Linux terminal. To check for running container processes,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 This will show nothing, as the 'hello1' container exited immediately&lt;/p&gt;

&lt;p&gt;But we can still see the exited container using&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker ps &lt;span class="nt"&gt;-a&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 This will show us the 'hello1' container and the time at which it exited&lt;/p&gt;

&lt;p&gt;Now let's use the &lt;code&gt;-d&lt;/code&gt; option to run a docker container&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;--name&lt;/span&gt; hello2 &lt;span class="nt"&gt;-d&lt;/span&gt; hello-world
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 This will print the container ID and exit immediately&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 The 'hello-world' image is designed to exit automatically once it finishes printing its message. The &lt;code&gt;-d&lt;/code&gt; option tells Docker to run the container in the background&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If we want to delete these exited containers,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;rm&lt;/span&gt; &amp;lt;first-four-digits-of-container-ID&amp;gt;

&lt;span class="c"&gt;# OR&lt;/span&gt;

docker &lt;span class="nb"&gt;rm&lt;/span&gt; &amp;lt;container-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To remove the &lt;code&gt;hello-world&lt;/code&gt; image&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker rmi hello-world
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can confirm this using the &lt;code&gt;docker images&lt;/code&gt; command&lt;/p&gt;

&lt;p&gt;Next, we can try creating a container&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker create &lt;span class="nt"&gt;--name&lt;/span&gt; myWEB nginx:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 The &lt;code&gt;create&lt;/code&gt; command creates the container without starting it, while the &lt;code&gt;run&lt;/code&gt; command creates and starts the container &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;code&gt;run&lt;/code&gt; is a shortcut for the &lt;code&gt;docker create&lt;/code&gt; and &lt;code&gt;docker start&lt;/code&gt; commands combined&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5931k93h0ttgtnhy3xz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5931k93h0ttgtnhy3xz.png" alt="Run and Create" width="800" height="188"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can also check the status using &lt;code&gt;docker ps -a&lt;/code&gt;, which will show the container status as 'Created'&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker start myWEB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 This will start the container and we can see the process using &lt;code&gt;docker ps&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;What if we want to delete this container?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;rm &lt;/span&gt;myWEB
Error response from daemon: You cannot remove a running container
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can either stop the container first and then delete it, or force-delete it using&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; myWEB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally removing the nginx image&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker rmi nginx:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Another test we can do using an ubuntu image&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; ubuntu_1 ubuntu:bionic
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 This will drop us into the Ubuntu container's shell automatically&lt;/p&gt;

&lt;p&gt;So what if we want to exit from the ubuntu image terminal while keeping the container running?&lt;/p&gt;

&lt;p&gt;✨ We will use &lt;code&gt;CTRL + P + Q&lt;/code&gt; &lt;/p&gt;

&lt;p&gt;We can confirm that our container is running even after returning to our own terminal using &lt;code&gt;docker ps&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To return to the container shell&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker attach &amp;lt;container-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can also pause and unpause the container&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker pause ubuntu_1

docker unpause ubuntu_1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Stopping a container&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker stop ubuntu_1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Killing a container&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;kill &lt;/span&gt;ubuntu_1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 This is a force stop; it terminates the container by sending it SIGKILL&lt;/p&gt;

&lt;p&gt;To remove all Docker containers that are currently stopped&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;docker ps &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="nt"&gt;-q&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
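&lt;p&gt;On Docker 1.13 and later, the same cleanup can also be done with a single built-in command,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Removes all stopped containers; -f skips the confirmation prompt
docker container prune -f
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;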



&lt;p&gt;Now, what if we don't assign a custom name to our container?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; ubuntu:bionic
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 The &lt;code&gt;--rm&lt;/code&gt; option automatically removes the container when we 'exit' from the container &lt;/p&gt;

&lt;p&gt;We can also change the container name&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker rename &amp;lt;current-container-name&amp;gt; &amp;lt;new-custom-container-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To inspect a container or an image&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker inspect &amp;lt;container-name&amp;gt; 

&lt;span class="c"&gt;# OR&lt;/span&gt;

docker inspect &amp;lt;image-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;✍ In this post, I walked through some basic, beginner-friendly Docker container commands&lt;/p&gt;

</description>
      <category>fintech</category>
      <category>productivity</category>
      <category>gemini</category>
    </item>
    <item>
      <title>Docker &amp; Kubernetes Setup</title>
      <dc:creator>Waji</dc:creator>
      <pubDate>Thu, 02 Mar 2023 02:06:39 +0000</pubDate>
      <link>https://dev.to/waji97/docker-kubernetes-setup-5bf6</link>
      <guid>https://dev.to/waji97/docker-kubernetes-setup-5bf6</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Docker&lt;/strong&gt; is a platform that allows developers to create, deploy, and run applications in containers. Docker Compose simplifies managing multi-container applications by defining and running multiple containers as a single application with dependencies and configurations. Compose plugins extend the functionality of Docker Compose, allowing developers to add new commands, modify behavior, or integrate with external services&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxnx6nueoygyapc2vbr38.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxnx6nueoygyapc2vbr38.png" alt="Docker Virtualization" width="800" height="222"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Kubernetes&lt;/strong&gt; is an open-source platform for container orchestration and management that automates deployment, scaling, and management of containerized applications. It is often used in conjunction with Docker to manage containerized applications. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;✨  Kubernetes provides a framework for automating deployment, scaling, and operations of application containers across clusters of hosts, while Docker provides a standardized way to package and distribute those containers&lt;/p&gt;

&lt;p&gt;👉 I will be installing Docker in 3 CentOS7 Virtual Machines in my VMWare workstation&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;192.168.1.10 👉 Master&lt;br&gt;
192.168.1.20 👉 Node-1&lt;br&gt;
192.168.1.30 👉 Node-2&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Before I begin, here is the official documentation on installing the Docker Engine and Docker Compose&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/engine/install/centos/"&gt;https://docs.docker.com/engine/install/centos/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/compose/install/linux/"&gt;https://docs.docker.com/compose/install/linux/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Installing Docker
&lt;/h2&gt;

&lt;p&gt;On all three systems,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;yum &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nb"&gt;install &lt;/span&gt;yum-utils

&lt;span class="c"&gt;# Saving the docker repository to install docker from it&lt;/span&gt;
yum-config-manager &lt;span class="nt"&gt;--add-repo&lt;/span&gt; https://download.docker.com/linux/centos/docker-ce.repo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Checking docker files&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;yum list docker-ce &lt;span class="nt"&gt;--showduplicates&lt;/span&gt; | &lt;span class="nb"&gt;sort&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 This should show us the different versions of Docker available. I will be proceeding with version 18.x&lt;/p&gt;

&lt;p&gt;Installing the Docker Engine&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;yum &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nb"&gt;install &lt;/span&gt;docker-ce-18.09.8 docker-ce-cli-18.09.8 containerd.io docker-compose-plugin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Checking the docker version&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;rpm &lt;span class="nt"&gt;-qa&lt;/span&gt; | &lt;span class="nb"&gt;grep &lt;/span&gt;docker
docker-ce-cli-18.09.8-3.el7.x86_64
docker-compose-plugin-2.6.0-3.el7.x86_64
docker-ce-18.09.8-3.el7.x86_64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Enabling and starting the docker service&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;systemctl start docker
systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can check the Docker version&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 The following steps are performed only on the 'Master' machine&lt;/p&gt;

&lt;p&gt;Installing Docker Compose&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-SL&lt;/span&gt; https://github.com/docker/compose/releases/download/v2.2.3/docker-compose-linux-x86_64 &lt;span class="nt"&gt;-o&lt;/span&gt; /usr/local/bin/docker-compose
&lt;span class="nb"&gt;ln&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; /usr/local/bin/docker-compose /usr/bin/docker-compose
&lt;span class="nb"&gt;chmod&lt;/span&gt; +x /usr/local/bin/docker-compose
docker-compose version
Docker Compose version v2.2.3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Installing Kubernetes
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;👉 I have VSCode and Kubernetes installed on my host PC to write manifest files for Kubernetes with ease&lt;/p&gt;

&lt;p&gt;✨ From the 'Master' VM&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Disabling swap memory (the kubelet requires swap to be off),&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;'/ swap / s/^\(.*\)$/#\1/g'&lt;/span&gt; /etc/fstab
swapoff &lt;span class="nt"&gt;-a&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
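&lt;p&gt;We can verify that swap is now off,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# The Swap line should show all zeros
free -h | grep -i swap
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;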



&lt;p&gt;Creating the &lt;code&gt;daemon.json&lt;/code&gt; file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vi /etc/docker/daemon.json

&lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"exec-opts"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"native.cgroupdriver=systemd"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Reloading the daemon and restarting docker&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;systemctl daemon-reload
systemctl restart docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 This will change Docker's cgroup driver to systemd&lt;/p&gt;
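&lt;p&gt;To confirm the change took effect,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Should now report: Cgroup Driver: systemd
docker info | grep -i cgroup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;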

&lt;p&gt;Adding the Kubernetes yum repository,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vi /etc/yum.repos.d/kubernetes.repo

&lt;span class="o"&gt;[&lt;/span&gt;kubernetes]
&lt;span class="nv"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Kubernetes
&lt;span class="nv"&gt;baseurl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
&lt;span class="nv"&gt;enabled&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1
&lt;span class="nv"&gt;gpgcheck&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0
&lt;span class="nv"&gt;repo_gpgcheck&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0
&lt;span class="nv"&gt;gpgkey&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

&lt;span class="c"&gt;# Installing Kuber&lt;/span&gt;
yum &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; kubelet-1.19.16-0.x86_64 kubectl-1.19.16-0.x86_64 kubeadm-1.19.16-0.x86_64 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Confirming kubernetes installation&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;rpm &lt;span class="nt"&gt;-qa&lt;/span&gt; | &lt;span class="nb"&gt;grep &lt;/span&gt;kube
kubelet-1.19.16-0.x86_64
kubectl-1.19.16-0.x86_64
kubernetes-cni-0.8.7-0.x86_64
kubeadm-1.19.16-0.x86_64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Opening the ports used by Kubernetes in the firewall,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;firewall-cmd &lt;span class="nt"&gt;--permanent&lt;/span&gt; &lt;span class="nt"&gt;--add-port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;80/tcp
firewall-cmd &lt;span class="nt"&gt;--permanent&lt;/span&gt; &lt;span class="nt"&gt;--add-port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;443/tcp
firewall-cmd &lt;span class="nt"&gt;--permanent&lt;/span&gt; &lt;span class="nt"&gt;--add-port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2376/tcp
firewall-cmd &lt;span class="nt"&gt;--permanent&lt;/span&gt; &lt;span class="nt"&gt;--add-port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2379/tcp
firewall-cmd &lt;span class="nt"&gt;--permanent&lt;/span&gt; &lt;span class="nt"&gt;--add-port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2380/tcp
firewall-cmd &lt;span class="nt"&gt;--permanent&lt;/span&gt; &lt;span class="nt"&gt;--add-port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;6443/tcp
firewall-cmd &lt;span class="nt"&gt;--permanent&lt;/span&gt; &lt;span class="nt"&gt;--add-port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;8472/udp
firewall-cmd &lt;span class="nt"&gt;--permanent&lt;/span&gt; &lt;span class="nt"&gt;--add-port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;9099/tcp
firewall-cmd &lt;span class="nt"&gt;--permanent&lt;/span&gt; &lt;span class="nt"&gt;--add-port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10250/tcp
firewall-cmd &lt;span class="nt"&gt;--permanent&lt;/span&gt; &lt;span class="nt"&gt;--add-port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10251/tcp
firewall-cmd &lt;span class="nt"&gt;--permanent&lt;/span&gt; &lt;span class="nt"&gt;--add-port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10252/tcp
firewall-cmd &lt;span class="nt"&gt;--permanent&lt;/span&gt; &lt;span class="nt"&gt;--add-port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10254/tcp
firewall-cmd &lt;span class="nt"&gt;--permanent&lt;/span&gt; &lt;span class="nt"&gt;--add-port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10255/tcp
firewall-cmd &lt;span class="nt"&gt;--permanent&lt;/span&gt; &lt;span class="nt"&gt;--add-port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;30000-32767/tcp
firewall-cmd &lt;span class="nt"&gt;--permanent&lt;/span&gt; &lt;span class="nt"&gt;--add-port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;30000-32767/udp
firewall-cmd &lt;span class="nt"&gt;--permanent&lt;/span&gt; &lt;span class="nt"&gt;--add-masquerade&lt;/span&gt;
firewall-cmd &lt;span class="nt"&gt;--reload&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Initializing the Kubernetes cluster on our Master node&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubeadm init &lt;span class="nt"&gt;--apiserver-advertise-address&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;192.168.1.10 &lt;span class="nt"&gt;--pod-network-cidr&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10.244.0.0/16

Then you can &lt;span class="nb"&gt;join &lt;/span&gt;any number of worker nodes by running the following on each as root:

kubeadm &lt;span class="nb"&gt;join &lt;/span&gt;192.168.1.10:6443 &lt;span class="nt"&gt;--token&lt;/span&gt; y20gfe.s5kx71a4nh0gzhsw &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--discovery-token-ca-cert-hash&lt;/span&gt; sha256:7c46fa0f4ce64ea4642183250afb3305ca17a89867ed877e2eacdf2a835095b3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 The last lines of the output give the &lt;code&gt;kubeadm join&lt;/code&gt; command to run when adding worker nodes to the cluster&lt;/p&gt;

&lt;p&gt;👉 I specified the Master node's IP address with the &lt;code&gt;--apiserver-advertise-address&lt;/code&gt; flag and the network range for Pod usage with the &lt;code&gt;--pod-network-cidr&lt;/code&gt; flag&lt;/p&gt;
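One thing worth knowing: the bootstrap token in that join command expires after 24 hours by default. If it expires before a worker joins, a fresh join command can be printed on the Master with a standard `kubeadm` subcommand (this of course needs the control plane initialized above):

```shell
# Print a fresh join command (with a new token) for worker nodes.
kubeadm token create --print-join-command
```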

&lt;p&gt;Copying the cluster's admin credentials into the &lt;code&gt;root&lt;/code&gt; user's &lt;code&gt;.kube&lt;/code&gt; directory so that &lt;code&gt;kubectl&lt;/code&gt; can authenticate&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube
&lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; /etc/kubernetes/admin.conf &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube/config
&lt;span class="nb"&gt;chown&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;:&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Installing the network plugin (Flannel) to be used in the Kubernetes cluster&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-O&lt;/span&gt; &lt;span class="nt"&gt;-L&lt;/span&gt; https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 As I am using a VM with a NAT connection, I needed to add the NIC device name to the &lt;code&gt;kube-flannel.yml&lt;/code&gt; file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vi kube-flannel.yml

args:
        - &lt;span class="nt"&gt;--ip-masq&lt;/span&gt;
        - &lt;span class="nt"&gt;--kube-subnet-mgr&lt;/span&gt;
        - &lt;span class="nt"&gt;--iface&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ens32
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, we just need to apply the flannel plugin&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; kube-flannel.yml
systemctl restart kubelet
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;✨ From Both Node-1 and Node-2 VMs&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;👉 Repeat the same steps as on the Master node, up to and including the firewall settings, on both nodes&lt;/p&gt;

&lt;p&gt;After firewall settings are done, we will use the &lt;code&gt;join&lt;/code&gt; command that we got from the Master node&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubeadm &lt;span class="nb"&gt;join &lt;/span&gt;192.168.1.10:6443 &lt;span class="nt"&gt;--token&lt;/span&gt; 172vji.r0u77jcmcnccm6no &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--discovery-token-ca-cert-hash&lt;/span&gt; sha256:72b9648c647f724ab52471847cb06c47b23097375f2e67633b745fc69db16e8d 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 This will add both nodes to the Kubernetes cluster created by the Master&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;This node has joined the cluster:
&lt;span class="k"&gt;*&lt;/span&gt; Certificate signing request was sent to apiserver and a response was received.
&lt;span class="k"&gt;*&lt;/span&gt; The Kubelet was informed of the new secure connection details.

Run &lt;span class="s1"&gt;'kubectl get nodes'&lt;/span&gt; on the control-plane to see this node &lt;span class="nb"&gt;join &lt;/span&gt;the cluster.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Upon successful joining,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   107m   v1.19.16
node-1   Ready    &amp;lt;none&amp;gt;   91s    v1.19.16
node-2   Ready    &amp;lt;none&amp;gt;   48s    v1.19.16
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can also check &lt;code&gt;pods&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods &lt;span class="nt"&gt;--all-namespaces&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 A pod represents a single instance of a running process in the cluster&lt;/p&gt;
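The original post doesn't show one, but for illustration a minimal Pod manifest looks like this (the names and image here are arbitrary):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-test            # illustrative name
spec:
  containers:
  - name: web
    image: nginx:latest     # illustrative image
    ports:
    - containerPort: 80
```

It could be applied with `kubectl apply -f web-test.yaml` and would then appear in the `kubectl get pods` output shown above.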




&lt;p&gt;✍ Today I walked through installing Docker and Kubernetes on Linux systems and joined 2 worker nodes to the cluster created on the Master&lt;/p&gt;

</description>
      <category>linux</category>
      <category>docker</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Proxy Server Setup (HAproxy)</title>
      <dc:creator>Waji</dc:creator>
      <pubDate>Tue, 28 Feb 2023 06:14:51 +0000</pubDate>
      <link>https://dev.to/waji97/proxy-server-setup-jk3</link>
      <guid>https://dev.to/waji97/proxy-server-setup-jk3</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Proxy servers&lt;/strong&gt; are computer servers that act as intermediaries between clients (such as web browsers) and servers. When a client requests a resource from a server, the request is first sent to the proxy server, which then forwards the request to the server on behalf of the client. The server then sends the response back to the proxy server, which in turn sends it back to the client&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;👉 Why do we use it?&lt;/p&gt;

&lt;p&gt;Mainly we would use a proxy server for load balancing, but proxies can also improve performance through caching, provide anonymity, filter and control content, enable access to blocked resources, enhance security, bypass geographic restrictions, aid in debugging and troubleshooting, and save bandwidth&lt;/p&gt;

&lt;p&gt;There are two modes of load balancing: L4, which works at the transport layer and routes on IP addresses and port numbers, and L7, which works at the application layer and routes on the content of the traffic, such as HTTP headers or cookies&lt;/p&gt;
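The difference shows up directly in HAProxy configuration. A minimal sketch of the two modes (the listener names, ports, and backend names here are made up for illustration):

```
# L4: route on IP/port only; HAProxy never parses the payload
frontend mysql_in
    mode tcp
    bind *:3306
    default_backend db_servers

# L7: HAProxy parses HTTP and can route on paths, headers, or cookies
frontend web_in
    mode http
    bind *:80
    acl is_api path_beg /api
    use_backend api_servers if is_api
    default_backend web_servers
```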

&lt;p&gt;✨ I will be using HAProxy, which supports both modes of load balancing&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Furthermore, I will be setting up 2 proxy servers (one as the backup server). I already have 2 web servers prepared for testing purposes&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Installing HAProxy
&lt;/h2&gt;

&lt;p&gt;I will install some initial required packages&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;yum &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nb"&gt;install &lt;/span&gt;gcc openssl openssl-devel systemd-devel wget
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 &lt;strong&gt;gcc&lt;/strong&gt; is the compiler, &lt;strong&gt;openssl&lt;/strong&gt; and &lt;strong&gt;openssl-devel&lt;/strong&gt; provide SSL/TLS support, &lt;strong&gt;systemd-devel&lt;/strong&gt; lets the build integrate with systemd, and &lt;strong&gt;wget&lt;/strong&gt; downloads the HAProxy source from the website&lt;/p&gt;

&lt;p&gt;Creating an empty directory&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; /HAproxy
&lt;span class="nb"&gt;cd&lt;/span&gt; /HAproxy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Downloading the HAproxy package,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wget http://www.haproxy.org/download/2.3/src/haproxy-2.3.10.tar.gz

&lt;span class="c"&gt;# Unzipping&lt;/span&gt;
&lt;span class="nb"&gt;tar &lt;/span&gt;xvfz haproxy-2.3.10.tar.gz
&lt;span class="nb"&gt;cd &lt;/span&gt;haproxy-2.3.10

&lt;span class="c"&gt;# Compiling &amp;amp; installing required files&lt;/span&gt;
make &lt;span class="nv"&gt;TARGET&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;linux-glibc &lt;span class="nv"&gt;USE_OPENSSL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 &lt;span class="nv"&gt;USE_SYSTEMD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1
make &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Downloading and saving the HAproxy service file as &lt;code&gt;/etc/systemd/system/haproxy.service&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="s2"&gt;"https://git.haproxy.org/?p=haproxy-2.3.git;a=blob_plain;f=contrib/systemd/haproxy.service.in;"&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; /etc/systemd/system/haproxy.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can confirm the file after this&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; /etc/systemd/system/haproxy.service
&lt;span class="nt"&gt;-rw-r--r--&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; 1 root root 1409 Feb 28 10:38 /etc/systemd/system/haproxy.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Editing the Exec lines in this file to point at the compiled binary in &lt;code&gt;/usr/local/sbin&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vi /etc/systemd/system/haproxy.service

&lt;span class="nv"&gt;ExecStartPre&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/usr/local/sbin/haproxy &lt;span class="nt"&gt;-Ws&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="nv"&gt;$CONFIG&lt;/span&gt; &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="nt"&gt;-q&lt;/span&gt; &lt;span class="nv"&gt;$EXTRAOPTS&lt;/span&gt;
&lt;span class="nv"&gt;ExecStart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/usr/local/sbin/haproxy &lt;span class="nt"&gt;-Ws&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="nv"&gt;$CONFIG&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="nv"&gt;$PIDFILE&lt;/span&gt; &lt;span class="nv"&gt;$EXTRAOPTS&lt;/span&gt;
&lt;span class="nv"&gt;ExecReload&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/usr/local/sbin/haproxy &lt;span class="nt"&gt;-Ws&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="nv"&gt;$CONFIG&lt;/span&gt; &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="nt"&gt;-q&lt;/span&gt; &lt;span class="nv"&gt;$EXTRAOPTS&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After this, we can try starting the haproxy service&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;systemctl start haproxy
Job &lt;span class="k"&gt;for &lt;/span&gt;haproxy.service failed because the control process exited with error code. See &lt;span class="s2"&gt;"systemctl status haproxy.service"&lt;/span&gt; and &lt;span class="s2"&gt;"journalctl -xe"&lt;/span&gt; &lt;span class="k"&gt;for &lt;/span&gt;details.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 This failure is expected: the unit file is in place and the control process actually tried to start the service, but we have not yet written the &lt;code&gt;/etc/haproxy/haproxy.cfg&lt;/code&gt; configuration file it needs&lt;/p&gt;

&lt;p&gt;Now, we need to make some directories for HAproxy&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; /etc/haproxy
&lt;span class="nb"&gt;mkdir&lt;/span&gt; /etc/haproxy/certs
&lt;span class="nb"&gt;mkdir&lt;/span&gt; /etc/haproxy/errors
&lt;span class="nb"&gt;mkdir&lt;/span&gt; /var/log/haproxy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Copying the error files into &lt;code&gt;/etc/haproxy/errors&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ./examples/errorfiles
&lt;span class="nb"&gt;cp&lt;/span&gt; ./&lt;span class="k"&gt;*&lt;/span&gt; .http /etc/haproxy/errors
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Adding an HAproxy service user&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;useradd &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"HAproxy Daemon User"&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; /sbin/nologin haproxy

&lt;span class="nb"&gt;tail&lt;/span&gt; &lt;span class="nt"&gt;-1&lt;/span&gt; /etc/passwd
haproxy:x:1001:1001:HAproxy Daemon User:/home/haproxy:/sbin/nologin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Setting up the log file configuration&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vi /etc/rsyslog.d/haproxy.conf

&lt;span class="nv"&gt;$ModLoad&lt;/span&gt; imudp
&lt;span class="nv"&gt;$UDPServerAddress&lt;/span&gt; 127.0.0.1
&lt;span class="nv"&gt;$UDPServerRun&lt;/span&gt; 514
local0.&lt;span class="k"&gt;*&lt;/span&gt; /var/log/haproxy/haproxy-traffic.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Setting up logrotate&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vi /etc/logrotate.d/haproxy

  /var/log/haproxy/&lt;span class="k"&gt;*&lt;/span&gt;.log &lt;span class="o"&gt;{&lt;/span&gt;
        daily
        rotate 30
        create 0600 root root
        compress
        notifempty
        missingok
        sharedscripts
        postrotate
                /bin/systemctl restart rsyslog.service &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /dev/null 2&amp;gt;/dev/null &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true
        &lt;/span&gt;endscript
   &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Main Configuration file
&lt;/h2&gt;

&lt;p&gt;We need the following lines inside the main config file for HAproxy&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vi /etc/haproxy/haproxy.cfg

global
    daemon
    maxconn 4000 
    user haproxy
    group haproxy
    log 127.0.0.1:514 local0

defaults
    mode http
    option redispatch 
    retries 3
        &lt;span class="c"&gt;# Redispatching to the another web server if retries are over 3&lt;/span&gt;
    log global
    option httplog
    option dontlognull
    option dontlog-normal
    option http-server-close
    option forwardfor
        &lt;span class="c"&gt;# These are the log formats&lt;/span&gt;

    maxconn 3000
    &lt;span class="nb"&gt;timeout &lt;/span&gt;connect 10s
    &lt;span class="nb"&gt;timeout &lt;/span&gt;http-request 10s
    &lt;span class="nb"&gt;timeout &lt;/span&gt;http-keep-alive 10s
    &lt;span class="nb"&gt;timeout &lt;/span&gt;client 1m
    &lt;span class="nb"&gt;timeout &lt;/span&gt;server 1m
    &lt;span class="nb"&gt;timeout &lt;/span&gt;queue 1m

    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

listen stats
    &lt;span class="nb"&gt;bind&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt;:9000
    stats &lt;span class="nb"&gt;enable
    &lt;/span&gt;stats realm Haproxy Stats Page
    stats uri /
    stats auth admin:haproxy1
        &lt;span class="c"&gt;# Authentication for the Admin page&lt;/span&gt;

frontend proxy
    &lt;span class="nb"&gt;bind&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt;:80
    default_backend WEB_SRV_list


backend WEB_SRV_list
    balance roundrobin
    option httpchk HEAD /
    http-request set-header X-Forwarded-Port %[dst_port]
    cookie SRVID insert indirect nocache maxlife 10m
    server WEB_01 192.168.1.128:80 cookie WEB_01 check inter 3000 fall 5 rise 3
    server WEB_02 192.168.1.129:80 cookie WEB_02 check inter 3000 fall 5 rise 3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 HAproxy configuration file setup reference:&lt;br&gt;
※ &lt;a href="https://www.haproxy.com/blog/the-four-essential-sections-of-an-haproxy-configuration/" rel="noopener noreferrer"&gt;https://www.haproxy.com/blog/the-four-essential-sections-of-an-haproxy-configuration/&lt;/a&gt;&lt;br&gt;
※ &lt;a href="https://cbonte.github.io/haproxy-dconv/2.3/configuration.html#3.1" rel="noopener noreferrer"&gt;https://cbonte.github.io/haproxy-dconv/2.3/configuration.html#3.1&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The above sets rules for load-balancing incoming traffic across two backend web servers using round-robin, health checks, and cookies, and includes settings for connection limits, timeouts, error files, logging, and authentication. The "listen stats" section enables access to HAProxy statistics with authentication&lt;/p&gt;
&lt;/blockquote&gt;
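Round-robin itself is easy to picture. This toy shell function (a model of the scheduling order only, not HAProxy internals) cycles through the two backends named in the config:

```shell
# pick: map a request number to a backend, round-robin style.
pick() {
  i=$1
  set -- WEB_01 WEB_02            # backend list, as in the config above
  n=$(( (i % $#) + 1 ))           # 1-based index that wraps around
  eval "echo \${$n}"
}

# Four consecutive requests alternate between the two servers.
for r in 0 1 2 3; do
  echo "request $r -> $(pick $r)"
done
```

With the cookie persistence configured above, a returning client that presents its SRVID cookie skips this rotation and stays pinned to the same server.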

&lt;p&gt;Verifying the config file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;haproxy &lt;span class="nt"&gt;-f&lt;/span&gt; /etc/haproxy/haproxy.cfg &lt;span class="nt"&gt;-c&lt;/span&gt;
Configuration file is valid
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Starting and enabling haproxy&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;systemctl start haproxy
systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;haproxy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Editing firewall settings&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;firewall-cmd &lt;span class="nt"&gt;--permanent&lt;/span&gt; &lt;span class="nt"&gt;--add-service&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http
firewall-cmd &lt;span class="nt"&gt;--permanent&lt;/span&gt; &lt;span class="nt"&gt;--add-port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;9000/tcp
firewall-cmd &lt;span class="nt"&gt;--reload&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;👉 From both Web Servers&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We will open the configuration file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vi /etc/httpd/conf/httpd.conf

&lt;span class="c"&gt;# Line 196&lt;/span&gt;
SetEnvIf Request_Method HEAD Health-Check        
LogFormat &lt;span class="s2"&gt;"%h %l %u %t &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;%r&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt; %&amp;gt;s %b &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;%{Referer}i&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt; &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;%{User-Agent}i&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; combined
LogFormat &lt;span class="s2"&gt;"%{x-forwarded-for}i %l %u %t &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;%r&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt; %&amp;gt;s %b"&lt;/span&gt; common

&lt;span class="c"&gt;# Line 219&lt;/span&gt;
CustomLog &lt;span class="s2"&gt;"logs/access_log"&lt;/span&gt; common &lt;span class="nb"&gt;env&lt;/span&gt;&lt;span class="o"&gt;=!&lt;/span&gt;Health-Check
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, we need to restart the web server service and set firewall&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;systemctl restart httpd
firewall-cmd &lt;span class="nt"&gt;--permanent&lt;/span&gt; &lt;span class="nt"&gt;--add-service&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http
firewall-cmd &lt;span class="nt"&gt;--reload&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 If the web servers aren't serving a page yet, we can create simple test pages&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; /var/www/html
&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; index.html
WEB 1

&lt;span class="c"&gt;# From Server 2&lt;/span&gt;
&lt;span class="nb"&gt;cd&lt;/span&gt; /var/www/html
&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; index.html
WEB 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To test if our proxy server is working, we can navigate to our proxy server's IP address,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7jaah6x237eikb8lemba.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7jaah6x237eikb8lemba.png" alt="Proxy" width="800" height="154"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After deleting the cookie data, we can reload the page to see that the proxy server forwards the client to the other server as well&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk51pw7w2xb1r9iv3w0m2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk51pw7w2xb1r9iv3w0m2.png" alt="Proxy2" width="743" height="188"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can also check the HAProxy stats report by connecting on port 9000&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffr0s0bd6f0zwz1jb6quq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffr0s0bd6f0zwz1jb6quq.png" alt="Stats" width="800" height="283"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Applying SSL/TLS
&lt;/h2&gt;

&lt;p&gt;👉 From the proxy server&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;rpm &lt;span class="nt"&gt;-qa&lt;/span&gt; | &lt;span class="nb"&gt;grep &lt;/span&gt;openssl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First, we confirm that the openssl package is available&lt;/p&gt;

&lt;p&gt;Creating the private key&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openssl genrsa &lt;span class="nt"&gt;-out&lt;/span&gt; /etc/haproxy/certs/ha01.key 2048
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Creating the certificate signing request (.csr) file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openssl req &lt;span class="nt"&gt;-new&lt;/span&gt; &lt;span class="nt"&gt;-key&lt;/span&gt; /etc/haproxy/certs/ha01.key &lt;span class="nt"&gt;-out&lt;/span&gt; /etc/haproxy/certs/ha01.csr
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Creating the self-signed certificate&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openssl x509 &lt;span class="nt"&gt;-req&lt;/span&gt; &lt;span class="nt"&gt;-days&lt;/span&gt; 365 &lt;span class="nt"&gt;-in&lt;/span&gt; /etc/haproxy/certs/ha01.csr &lt;span class="nt"&gt;-signkey&lt;/span&gt; /etc/haproxy/certs/ha01.key &lt;span class="nt"&gt;-out&lt;/span&gt; /etc/haproxy/certs/ha01.crt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
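We can sanity-check the result with `openssl x509`. The sketch below repeats the three steps on a throwaway key in the current directory (the filenames and the `/CN=ha01.example` subject are illustrative, and `-subj` keeps `openssl req` non-interactive), then reads the subject and validity window back out:

```shell
# Generate a scratch key, CSR, and one-year self-signed certificate,
# then inspect the certificate's subject and validity dates.
openssl genrsa -out ha01.key 2048
openssl req -new -key ha01.key -subj "/CN=ha01.example" -out ha01.csr
openssl x509 -req -days 365 -in ha01.csr -signkey ha01.key -out ha01.crt
openssl x509 -in ha01.crt -noout -subject -dates
```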



&lt;p&gt;HAProxy expects the certificate and private key concatenated in a single PEM file, so we combine them and move the originals to a backup location&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; /etc/haproxy/certs
&lt;span class="nb"&gt;cat &lt;/span&gt;ha01.crt ha01.key &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; ha01_ssl.crt
&lt;span class="nb"&gt;mv &lt;/span&gt;ha01.&lt;span class="k"&gt;*&lt;/span&gt; /backup &lt;span class="c"&gt;# Create /backup if it isn't available&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we need to edit the main haproxy config file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vi /etc/haproxy/haproxy.cfg

&lt;span class="c"&gt;# Adding these lines below the 'global' block&lt;/span&gt;
ssl-default-bind-ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256 &lt;span class="c"&gt;# The cipher suites allowed for SSL/TLS&lt;/span&gt;
 ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets &lt;span class="c"&gt;# Allowing only TLSv1.2 and newer versions&lt;/span&gt;

&lt;span class="c"&gt;# Adding these lines below the 'frontend' block&lt;/span&gt;
&lt;span class="nb"&gt;bind&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt;:443 ssl crt /etc/haproxy/certs/ha01_ssl.crt
 http-request redirect scheme https code 308 unless &lt;span class="o"&gt;{&lt;/span&gt; ssl_fc &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="c"&gt;# the `unless {ssl_fc}` prevents looping when redirecting to `https`&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we need to verify the configuration and add firewall&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;haproxy &lt;span class="nt"&gt;-f&lt;/span&gt; /etc/haproxy/haproxy.cfg &lt;span class="nt"&gt;-c&lt;/span&gt;

firewall-cmd &lt;span class="nt"&gt;--permanent&lt;/span&gt; &lt;span class="nt"&gt;--add-service&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https
firewall-cmd &lt;span class="nt"&gt;--reload&lt;/span&gt;

systemctl restart haproxy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can use &lt;code&gt;https&lt;/code&gt; to see the result&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcq25haocprvr2fpvz5ie.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcq25haocprvr2fpvz5ie.png" alt="Https" width="697" height="145"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Setting up a Backup Server
&lt;/h2&gt;

&lt;p&gt;👉 What if the proxy server goes down? We need a backup proxy server to take over when the main one fails. To set that up, we first need a duplicate of the proxy server we created.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I have already created a duplicate proxy server (identical except that the SSL key names are HA02 instead of HA01)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So on both Proxy servers,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo &lt;/span&gt;net.ipv4.ip_nonlocal_bind&lt;span class="o"&gt;=&lt;/span&gt;1 &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; /etc/sysctl.conf
sysctl &lt;span class="nt"&gt;-p&lt;/span&gt;
net.ipv4.ip_nonlocal_bind &lt;span class="o"&gt;=&lt;/span&gt; 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 This activates the &lt;code&gt;ip_nonlocal_bind&lt;/code&gt; parameter, which allows HAProxy to bind to the virtual IP even while that IP is not yet assigned to the local interface&lt;/p&gt;

&lt;p&gt;From both of the Proxy Servers&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;yum &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nb"&gt;install &lt;/span&gt;keepalived-&lt;span class="k"&gt;*&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we have to set up the config files for the &lt;code&gt;keepalived&lt;/code&gt; service&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Main Proxy Server&lt;/span&gt;
vi /etc/keepalived/keepalived.conf

global_defs &lt;span class="o"&gt;{&lt;/span&gt;
   router_id HA_01
&lt;span class="o"&gt;}&lt;/span&gt;

vrrp_script HA_Check &lt;span class="o"&gt;{&lt;/span&gt;
        script &lt;span class="s2"&gt;"killall -0 haproxy"&lt;/span&gt;
        interval 1
    rise 3
    fall 3
        weight 2
&lt;span class="o"&gt;}&lt;/span&gt;

vrrp_instance HAGroup_1 &lt;span class="o"&gt;{&lt;/span&gt;
    state MASTER
    interface ens32
    garp_master_delay 5
    virtual_router_id 51
    priority 110
    advert_int 1
    authentication &lt;span class="o"&gt;{&lt;/span&gt;
        auth_type PASS
        auth_pass test123
    &lt;span class="o"&gt;}&lt;/span&gt;
    virtual_ipaddress &lt;span class="o"&gt;{&lt;/span&gt;
        192.168.1.150
    &lt;span class="o"&gt;}&lt;/span&gt;
    track_script &lt;span class="o"&gt;{&lt;/span&gt;
        HA_Check
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Backup Proxy Server&lt;/span&gt;
vi /etc/keepalived/keepalived.conf

global_defs &lt;span class="o"&gt;{&lt;/span&gt;
   router_id HA_02
&lt;span class="o"&gt;}&lt;/span&gt;

vrrp_script HA_Check &lt;span class="o"&gt;{&lt;/span&gt;
        script &lt;span class="s2"&gt;"killall -0 haproxy"&lt;/span&gt;
        interval 1
    rise 3
    fall 3
        weight 2
&lt;span class="o"&gt;}&lt;/span&gt;

vrrp_instance HAGroup_1 &lt;span class="o"&gt;{&lt;/span&gt;
    state BACKUP
    interface ens32
    garp_master_delay 5
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication &lt;span class="o"&gt;{&lt;/span&gt;
        auth_type PASS
        auth_pass test123
    &lt;span class="o"&gt;}&lt;/span&gt;
    virtual_ipaddress &lt;span class="o"&gt;{&lt;/span&gt;
        192.168.1.150
    &lt;span class="o"&gt;}&lt;/span&gt;
    track_script &lt;span class="o"&gt;{&lt;/span&gt;
        HA_Check
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
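&lt;p&gt;The failover election in these configs is purely numeric: both nodes must share &lt;code&gt;virtual_router_id 51&lt;/code&gt;, and the node with the higher effective priority (the base &lt;code&gt;priority&lt;/code&gt; plus the &lt;code&gt;weight 2&lt;/code&gt; bonus while &lt;code&gt;HA_Check&lt;/code&gt; succeeds) holds the VIP. A sketch of the arithmetic, assuming both health checks pass:&lt;/p&gt;

```shell
# Effective priority = base priority + vrrp_script weight (while the check succeeds)
master=$((110 + 2))   # HA01
backup=$((100 + 2))   # HA02
echo "HA01=$master HA02=$backup"
[ "$master" -gt "$backup" ] && echo "HA01 keeps the VIP"
```

If haproxy dies on HA01, its check fails, the bonus is lost (110 vs 102), but HA01 still wins; the real failover trigger is HA01 stopping its VRRP advertisements entirely, as described below.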



&lt;blockquote&gt;
&lt;p&gt;✔ How &lt;code&gt;Keepalived&lt;/code&gt; works&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The master device sends VRRP messages according to a set schedule&lt;/li&gt;
&lt;li&gt;The backup device receives the VRRP messages sent by the master device&lt;/li&gt;
&lt;li&gt;If the backup device does not receive VRRP messages from the master device, it changes its state to become the master device&lt;/li&gt;
&lt;li&gt;When the backup device becomes the master device, it sends a Gratuitous ARP (GARP) message, which includes its MAC address, to inform other devices on the network of the change&lt;/li&gt;
&lt;li&gt;The switches connected to the servers running VRRP process this GARP message&lt;/li&gt;
&lt;li&gt;Devices on the network update their tables so that the VIP (Virtual IP) now resolves to the MAC address of the device that has become the new master&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
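&lt;p&gt;The health check above, &lt;code&gt;killall -0 haproxy&lt;/code&gt;, relies on signal 0: it only tests whether a matching process exists, without delivering any signal. A stand-in demonstration using the current shell's PID instead of haproxy:&lt;/p&gt;

```shell
# Signal 0 performs only an existence/permission check; nothing is sent
kill -0 $$ && echo "process alive"                         # the current shell certainly exists
kill -0 4194305 2>/dev/null || echo "process not found"    # PID beyond the default pid_max
```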

&lt;p&gt;👉 As VRRP is an L3 protocol (IP protocol 112, advertised to the multicast address 224.0.0.18), we need firewalld direct rules (raw iptables rules) to allow its traffic&lt;/p&gt;

&lt;p&gt;From both Proxy Servers&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;firewall-cmd &lt;span class="nt"&gt;--direct&lt;/span&gt; &lt;span class="nt"&gt;--add-rule&lt;/span&gt; ipv4 filter INPUT 1 &lt;span class="nt"&gt;-i&lt;/span&gt; ens32 &lt;span class="nt"&gt;-d&lt;/span&gt; 224.0.0.18 &lt;span class="nt"&gt;-p&lt;/span&gt; vrrp &lt;span class="nt"&gt;-j&lt;/span&gt; ACCEPT
firewall-cmd &lt;span class="nt"&gt;--direct&lt;/span&gt; &lt;span class="nt"&gt;--add-rule&lt;/span&gt; ipv4 filter OUTPUT 1 &lt;span class="nt"&gt;-o&lt;/span&gt; ens32 &lt;span class="nt"&gt;-d&lt;/span&gt; 224.0.0.18 &lt;span class="nt"&gt;-p&lt;/span&gt; vrrp &lt;span class="nt"&gt;-j&lt;/span&gt; ACCEPT
firewall-cmd &lt;span class="nt"&gt;--runtime-to-permanent&lt;/span&gt;
firewall-cmd &lt;span class="nt"&gt;--direct&lt;/span&gt; &lt;span class="nt"&gt;--get-all-rules&lt;/span&gt;
ipv4 filter OUTPUT 1 &lt;span class="nt"&gt;-o&lt;/span&gt; ens32 &lt;span class="nt"&gt;-d&lt;/span&gt; 224.0.0.18 &lt;span class="nt"&gt;-p&lt;/span&gt; vrrp &lt;span class="nt"&gt;-j&lt;/span&gt; ACCEPT
ipv4 filter INPUT 1 &lt;span class="nt"&gt;-i&lt;/span&gt; ens32 &lt;span class="nt"&gt;-d&lt;/span&gt; 224.0.0.18 &lt;span class="nt"&gt;-p&lt;/span&gt; vrrp &lt;span class="nt"&gt;-j&lt;/span&gt; ACCEPT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now start the service on both Proxy Servers and enable it at boot&lt;br&gt;
&lt;/p&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;systemctl start keepalived
systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;keepalived
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now to test if this works,&lt;/p&gt;

&lt;p&gt;From the Main Proxy Server&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ip address list | &lt;span class="nb"&gt;grep &lt;/span&gt;192.168.1.150
inet 192.168.1.150/32 scope global ens32
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From the Backup Proxy Server&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ip address list | &lt;span class="nb"&gt;grep &lt;/span&gt;192.168.1.150
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we will stop the keepalived service on the main Proxy Server to verify that the Backup Proxy server takes over automatically&lt;/p&gt;

&lt;p&gt;So now from Main Proxy Server&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;systemctl stop keepalived
ip address list | &lt;span class="nb"&gt;grep &lt;/span&gt;192.168.1.150
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From the Backup Proxy Server&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ip address list | &lt;span class="nb"&gt;grep &lt;/span&gt;192.168.1.150
inet 192.168.1.150/32 scope global ens32
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 With the Keepalived daemon stopped on the master device, we can confirm that the backup device has taken over the VIP (Virtual IP)&lt;/p&gt;

&lt;p&gt;When the Keepalived daemon is started again, confirm that the original master device regains the VIP: because its priority (110) is higher than the backup's (100), it preempts by default&lt;/p&gt;
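&lt;p&gt;If you would rather the VIP stay on whichever node currently holds it (avoiding a second interruption when the old master returns), keepalived supports a &lt;code&gt;nopreempt&lt;/code&gt; option. A sketch of the change, which requires the instance to start in state &lt;code&gt;BACKUP&lt;/code&gt; on both nodes:&lt;/p&gt;

```
vrrp_instance HAGroup_1 {
    state BACKUP        # both nodes must start as BACKUP when using nopreempt
    nopreempt           # do not take the VIP back when this node recovers
    interface ens32
    virtual_router_id 51
    priority 110        # still differentiates the nodes for the initial election
    ...
}
```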




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;✍ Today I covered how to set up a proxy server on CentOS 7, apply an SSL/TLS certificate to redirect traffic to HTTPS, and finally configure a backup proxy server for the main one.&lt;/p&gt;

</description>
      <category>gratitude</category>
    </item>
  </channel>
</rss>
