<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Vinoth Mohan</title>
    <description>The latest articles on DEV Community by Vinoth Mohan (@vinothmohan).</description>
    <link>https://dev.to/vinothmohan</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F645280%2F43289447-9652-43ae-880d-e737a972eaa7.jpg</url>
      <title>DEV Community: Vinoth Mohan</title>
      <link>https://dev.to/vinothmohan</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/vinothmohan"/>
    <language>en</language>
    <item>
      <title>Kubernetes - An Overview at High-Level</title>
      <dc:creator>Vinoth Mohan</dc:creator>
      <pubDate>Mon, 07 Mar 2022 08:32:36 +0000</pubDate>
      <link>https://dev.to/vinothmohan/kubernetes-an-overview-at-high-level-3n5b</link>
      <guid>https://dev.to/vinothmohan/kubernetes-an-overview-at-high-level-3n5b</guid>
      <description>&lt;h1&gt;
  
  
  What Is Kubernetes?
&lt;/h1&gt;

&lt;p&gt;Kubernetes, popularly known as K8s, is a production-grade, open-source container orchestration tool originally developed by Google. It helps you manage containerized/dockerized applications across multiple deployment environments such as on-premises, cloud, or virtual machines.&lt;/p&gt;

&lt;h1&gt;
  
  
  Why is it popular and powerful?
&lt;/h1&gt;

&lt;p&gt;Kubernetes can speed up the development process through easy, automated deployments and updates (rolling updates), and by managing our apps and services with almost zero downtime. It also provides self-healing: Kubernetes can detect and restart services when a process crashes inside the container. It is highly performant and scalable, and it provides a reliable infrastructure that supports data recovery with ease.&lt;/p&gt;

&lt;h1&gt;
  
  
  Architecture Of Kubernetes
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9iy7zu2japgj6acuv4yf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9iy7zu2japgj6acuv4yf.png" alt="Kubernetes Architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The most basic architecture of k8s has two major types of nodes.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Control plane&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Worker Nodes&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Control plane
&lt;/h1&gt;

&lt;p&gt;The master node, also known as the control plane, is responsible for managing the worker nodes efficiently. Users submit commands and configuration files to the control plane, and it controls the entire cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Master Node Processes:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;kube-apiserver :&lt;/strong&gt; It exposes the Kubernetes API. The API server is the front end for the Kubernetes control plane.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kube-controller-manager :&lt;/strong&gt; Control plane component that runs controller processes.&lt;br&gt;
Logically, each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process. Some types of these controllers are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Node controller&lt;/strong&gt;: Responsible for noticing and responding when nodes go down.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Job controller&lt;/strong&gt;: Watches for Job objects that represent one-off tasks, then creates Pods to run those tasks to completion.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Endpoints controller&lt;/strong&gt;: Populates the Endpoints object (that is, joins Services &amp;amp; Pods).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Service Account &amp;amp; Token controllers&lt;/strong&gt;: Create default accounts and API access tokens for new namespaces.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kube-scheduler:&lt;/strong&gt; Control plane component that watches for newly created Pods with no assigned node, and selects a node for them to run on.&lt;/p&gt;

&lt;p&gt;Factors taken into account for scheduling decisions include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;individual and collective resource requirements,&lt;/li&gt;
&lt;li&gt;hardware/software/policy constraints,&lt;/li&gt;
&lt;li&gt;affinity and anti-affinity specifications,&lt;/li&gt;
&lt;li&gt;data locality,&lt;/li&gt;
&lt;li&gt;inter-workload interference,&lt;/li&gt;
&lt;li&gt;deadlines.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;etcd:&lt;/strong&gt; Consistent and highly-available key value store used as Kubernetes' backing store for all cluster data (meta data, objects, etc.).&lt;/li&gt;
&lt;/ul&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cloud Controller Manager :&lt;/strong&gt; It embeds cloud-specific control logic. The cloud controller manager lets you link your cluster into your cloud provider's API, and separates out the components that interact with that cloud platform from components that only interact with your cluster. The cloud-controller-manager only runs controllers that are specific to your cloud provider.
The following controllers can have cloud provider dependencies:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Node controller&lt;/strong&gt;: For checking the cloud provider to determine if a node has been deleted in the cloud after it stops responding&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Route controller&lt;/strong&gt;: For setting up routes in the underlying cloud infrastructure&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Service controller&lt;/strong&gt;: For creating, updating and deleting cloud provider load balancers.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h1&gt;
  
  
  Worker Node
&lt;/h1&gt;

&lt;p&gt;Node components run on every node, maintaining running pods and providing the Kubernetes runtime environment.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Kubelet:&lt;/strong&gt; An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod. The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Kube-proxy:&lt;/strong&gt; It is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.&lt;/li&gt;
&lt;li&gt;It uses the operating system packet filtering layer if there is one and it's available. Otherwise, kube-proxy forwards the traffic itself.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Container Runtime:&lt;/strong&gt; The container runtime is the software that is responsible for running containers.

&lt;ul&gt;
&lt;li&gt;Kubernetes supports several container runtimes: Docker, containerd, CRI-O, and any implementation of the Kubernetes CRI (Container Runtime Interface).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h1&gt;
  
  
  Components of K8s:
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Pods:
&lt;/h2&gt;

&lt;p&gt;A Pod is the smallest and simplest unit that you create or deploy in k8s; it is an abstraction over the application container. A single Pod usually holds one container, but it can hold multiple containers and their shared resources.&lt;/p&gt;
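&lt;p&gt;A minimal Pod manifest might look like this (the name and image here are only illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: my-pod            # illustrative name
spec:
  containers:
  - name: web
    image: nginx:1.21     # any container image
    ports:
    - containerPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;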

&lt;h2&gt;
  
  
  Deployments
&lt;/h2&gt;

&lt;p&gt;Deployments are best used for stateless applications. Pods managed by deployment workload are treated as independent and disposable.&lt;/p&gt;

&lt;p&gt;If a pod encounters disruption, Kubernetes removes it and then recreates it.&lt;/p&gt;
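&lt;p&gt;As a sketch, a Deployment that keeps three identical Pods running could look like this (the names, labels, and image are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3                 # desired number of identical Pods
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.21
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;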

&lt;h2&gt;
  
  
  DaemonSets
&lt;/h2&gt;

&lt;p&gt;A DaemonSet ensures that every node in the cluster runs a copy of the pod.&lt;/p&gt;

&lt;p&gt;For use cases where you're collecting logs or monitoring node performance, this daemon-like workload works best.&lt;/p&gt;

&lt;h2&gt;
  
  
  ReplicaSets
&lt;/h2&gt;

&lt;p&gt;A ReplicaSet's purpose is to run a specified number of pods at any given time.&lt;/p&gt;

&lt;h2&gt;
  
  
  StatefulSets
&lt;/h2&gt;

&lt;p&gt;Like a Deployment, a StatefulSet manages Pods, but it is best used when your application needs to maintain its identity and store data.&lt;/p&gt;

&lt;p&gt;An example is ZooKeeper, an application that requires persistent storage.&lt;/p&gt;

&lt;h2&gt;
  
  
  Jobs
&lt;/h2&gt;

&lt;p&gt;Jobs launch one or more pods and ensure that a specified number of them successfully terminate.&lt;/p&gt;

&lt;p&gt;Jobs are best used to run a finite task to completion as opposed to managing an ongoing desired application state.&lt;/p&gt;
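&lt;p&gt;A simple Job running a one-off task to completion (this is essentially the classic pi-computation example from the Kubernetes documentation):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never    # do not restart a completed task
  backoffLimit: 4             # retries allowed on failure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;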

&lt;h2&gt;
  
  
  CronJobs
&lt;/h2&gt;

&lt;p&gt;CronJobs are similar to Jobs. CronJobs, however, run to completion on a cron-based schedule.&lt;/p&gt;
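&lt;p&gt;For example, a CronJob that prints a message every five minutes could be sketched as (the schedule and names are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/5 * * * *"     # standard cron syntax
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["echo", "Hello from CronJob"]
          restartPolicy: OnFailure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;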

&lt;h2&gt;
  
  
  Services
&lt;/h2&gt;

&lt;p&gt;An abstract way to expose an application running on a set of Pods as a network service.&lt;br&gt;
With Kubernetes you don't need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Type of services
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ClusterIP&lt;/strong&gt;. Exposes a service which is only accessible from within the cluster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;NodePort&lt;/strong&gt;. Exposes a service via a static port on each node’s IP.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LoadBalancer&lt;/strong&gt;. Exposes the service via the cloud provider’s load balancer.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ExternalName&lt;/strong&gt;. Maps a service to a predefined externalName field by returning a value for the CNAME record.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcaxjksxpbybu053hxr0c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcaxjksxpbybu053hxr0c.png" alt="Service"&gt;&lt;/a&gt;&lt;/p&gt;
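&lt;p&gt;As a sketch, a NodePort Service exposing Pods labelled app: web might look like this (the label and ports are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: web              # routes traffic to Pods with this label
  ports:
  - port: 80              # Service port inside the cluster
    targetPort: 80        # container port on the Pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;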



&lt;h2&gt;
  
  
  Kubernetes Ingress
&lt;/h2&gt;

&lt;p&gt;Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.&lt;/p&gt;
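&lt;p&gt;A minimal Ingress rule routing a hostname to a Service could be sketched as (the host and service name are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service   # an existing Service in the cluster
            port:
              number: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;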

&lt;h2&gt;
  
  
  Ingress Controller
&lt;/h2&gt;

&lt;p&gt;In order for the Ingress resource to work, the cluster must have an ingress controller running.&lt;/p&gt;

&lt;p&gt;Some third-party ingress controllers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Istio Ingress is an Istio based ingress controller.&lt;/li&gt;
&lt;li&gt;The NGINX Ingress Controller for Kubernetes works with the NGINX webserver (as a proxy).&lt;/li&gt;
&lt;li&gt;The Traefik Kubernetes Ingress provider is an ingress controller for the Traefik proxy.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Secrets
&lt;/h2&gt;

&lt;p&gt;A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. Such information might otherwise be put in a Pod specification or in a container image. Using a Secret means that you don't need to include confidential data in your application code.&lt;/p&gt;
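&lt;p&gt;For instance, you can create a Secret directly from literal values with kubectl (the names and values here are made up):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl create secret generic db-credentials \
    --from-literal=username=admin \
    --from-literal=password='S3cr3t!'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Note that Secret values are stored base64-encoded, not encrypted, so they should still be protected with RBAC and, ideally, encryption at rest.&lt;/p&gt;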

&lt;h2&gt;
  
  
  ConfigMap
&lt;/h2&gt;

&lt;p&gt;A ConfigMap is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.&lt;/p&gt;

&lt;p&gt;A ConfigMap separates your configurations from your Pod and components, which helps keep your workloads portable. This makes their configurations easier to change and manage, and prevents hardcoding configuration data to Pod specifications.&lt;/p&gt;
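&lt;p&gt;A small ConfigMap might look like this (the keys and values are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
  APP_MODE: "production"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;A Pod can then load all of these keys as environment variables by referencing the ConfigMap under envFrom in its container spec.&lt;/p&gt;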

&lt;h2&gt;
  
  
  Persistent Volumes
&lt;/h2&gt;

&lt;p&gt;Data stored in a container is ephemeral, so we need a persistent volume to preserve data across container restarts.&lt;/p&gt;

&lt;p&gt;A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Persistent Volume Claim
&lt;/h2&gt;

&lt;p&gt;A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod: Pods consume node resources, and PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory); claims can request a specific size and access modes.&lt;/p&gt;
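&lt;p&gt;A claim requesting 1Gi of storage with single-node read-write access could be sketched as (the claim name is illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
  - ReadWriteOnce         # mounted read-write by a single node
  resources:
    requests:
      storage: 1Gi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;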

&lt;h2&gt;
  
  
  Advanced
&lt;/h2&gt;

&lt;p&gt;Some advanced topics you can explore to become a better Kubernetes administrator:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Liveness and Readiness Probe&lt;/li&gt;
&lt;li&gt;Requests and Limits&lt;/li&gt;
&lt;li&gt;Resource Quotas&lt;/li&gt;
&lt;li&gt;Auto Scaling&lt;/li&gt;
&lt;li&gt;RBAC Authorization&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are some basic topics that help you get started with Kubernetes. To get hands-on with Kubernetes, you can easily set up a Minikube or Amazon EKS cluster using eksctl.&lt;/p&gt;

&lt;p&gt;That's it!! See you 😁&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Setting up Minikube in EC2 - The Easy Way</title>
      <dc:creator>Vinoth Mohan</dc:creator>
      <pubDate>Thu, 24 Feb 2022 18:31:49 +0000</pubDate>
      <link>https://dev.to/vinothmohan/setting-up-minikube-in-ec2-the-easy-way-22gi</link>
      <guid>https://dev.to/vinothmohan/setting-up-minikube-in-ec2-the-easy-way-22gi</guid>
      <description>&lt;p&gt;&lt;strong&gt;Minikube&lt;/strong&gt; is a tool that lets you run a single-node Kubernetes cluster on your personal computer or in an EC2 instance so that you can try out Kubernetes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1 : Launch an EC2 Instance using the following configuration.
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;AMI : Ubuntu Server 20.04 LTS (HVM), (64-bit x86)&lt;/li&gt;
&lt;li&gt;Instance Type : t2.medium&lt;/li&gt;
&lt;li&gt;Configure Instance Details :

&lt;ul&gt;
&lt;li&gt;In this section, scroll to the bottom; there you will find the &lt;strong&gt;user data&lt;/strong&gt; field. Copy the script below and paste it into that field.
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
sudo apt update
sudo apt upgrade -y
sudo hostnamectl set-hostname minikube
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo apt-get update -y &amp;amp;&amp;amp;  sudo apt-get install -y docker.io
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
snap install kubectl --classic
sudo apt install conntrack -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Add Storage : 10GiB&lt;/li&gt;
&lt;li&gt;Add Tags : 

&lt;ul&gt;
&lt;li&gt;Key : Name&lt;/li&gt;
&lt;li&gt;Value : minikube&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Security Group : Create a New Security Group

&lt;ul&gt;
&lt;li&gt;Type : All Traffic&lt;/li&gt;
&lt;li&gt;Source : My IP (or) Leave it as 0.0.0.0/0&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Review and Launch : Select or Create a new key pair

&lt;ul&gt;
&lt;li&gt;Launch&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 2 : SSH into your EC2 instance
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ssh -i [keypair] ubuntu@[Instance Public IP]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3 : Verify
&lt;/h3&gt;

&lt;p&gt;Since we already bootstrapped all the necessary installations, the only thing we need to do is verify the installed Minikube by checking its version.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ minikube version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 4 : Start running Minikube
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Become a root user
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo -i
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Start Minikube
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ minikube start --vm-driver=none
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Check Status
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ minikube status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it!!! Minikube is now running on your EC2 instance!&lt;br&gt;
You can play with Kubernetes using this single-node cluster. &lt;/p&gt;
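&lt;p&gt;As a quick smoke test, you could deploy something like nginx and expose it (the deployment name here is arbitrary):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl create deployment hello --image=nginx
$ kubectl expose deployment hello --type=NodePort --port=80
$ kubectl get pods
$ minikube service hello --url
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;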

</description>
      <category>minikube</category>
      <category>kubernetes</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Terraform - An Overview at High-Level</title>
      <dc:creator>Vinoth Mohan</dc:creator>
      <pubDate>Wed, 23 Feb 2022 16:00:36 +0000</pubDate>
      <link>https://dev.to/vinothmohan/terraform-basics-fo3</link>
      <guid>https://dev.to/vinothmohan/terraform-basics-fo3</guid>
      <description>&lt;h2&gt;
  
  
  What is Terraform? 🤔
&lt;/h2&gt;

&lt;p&gt;Terraform is an open-source, cloud-agnostic Infrastructure-as-Code (IaC) tool developed by HashiCorp, and one of the most popular. It is used by DevOps teams to automate infrastructure tasks such as provisioning your cloud resources.&lt;/p&gt;

&lt;p&gt;Terraform supports immutable infrastructure, a declarative language, and a masterless, agentless architecture, and it has a large community and a mature codebase.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits of using Terraform:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Does orchestration, not just configuration management&lt;/li&gt;
&lt;li&gt;Supports multiple providers such as AWS, Azure, Oracle, GCP, and many more&lt;/li&gt;
&lt;li&gt;Provides immutable infrastructure, where configuration changes are applied smoothly&lt;/li&gt;
&lt;li&gt;Uses an easy-to-understand language, HCL (HashiCorp Configuration Language)&lt;/li&gt;
&lt;li&gt;Easily portable to any other provider&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How Terraform Works?
&lt;/h2&gt;

&lt;p&gt;Terraform has two main components that make up its architecture:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Terraform Core&lt;/li&gt;
&lt;li&gt;Terraform Providers&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Terraform Configuration Files
&lt;/h2&gt;

&lt;p&gt;Configuration files are a set of files used to describe infrastructure in Terraform and have the file extensions .tf and .tf.json. Terraform uses a declarative model for defining infrastructure. Configuration files let you write a configuration that declares your desired state. Configuration files are made up of resources with settings and values representing the desired state of your infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F153713600-6cd87f78-0d94-4e1b-9951-7d28009a1b7b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F153713600-6cd87f78-0d94-4e1b-9951-7d28009a1b7b.png" alt="terraform-config-files-e1605834689106"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A Terraform configuration is made up of one or more files in a directory, provider binaries, plan files, and state files once Terraform has run the configuration.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Configuration file&lt;/strong&gt; (*.tf files): Here we declare the provider and the resources to be deployed, along with the resource type and all resource-specific settings&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Variable declaration file&lt;/strong&gt; (variables.tf or variables.tf.json): Here we declare the input variables required to provision resources&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Variable definition files&lt;/strong&gt; (terraform.tfvars): Here we assign values to the input variables&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;State file&lt;/strong&gt; (terraform.tfstate): a state file is created once Terraform has run. It stores the state of our managed infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
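&lt;p&gt;For example, the input variables declared in variables.tf can be assigned values in terraform.tfvars like this (the values below match the defaults used later in this article):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ami           = "ami-0c6615d1e95c98aca"
instance_type = "t2.micro"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;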

&lt;h2&gt;
  
  
  Terraform Providers
&lt;/h2&gt;

&lt;p&gt;A provider is responsible for understanding API interactions and exposing resources. It is an executable plug-in that contains code necessary to interact with the API of the service. Terraform configurations must declare which providers they require so that Terraform can install and use them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F153762048-55367d88-76ae-4d32-90da-c5171f91d73b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F153762048-55367d88-76ae-4d32-90da-c5171f91d73b.png" alt="Terraform-provider-api-call"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Terraform has over a hundred providers for different technologies, and each provider gives the Terraform user access to its resources. Through the AWS provider, for example, you have access to hundreds of AWS resources like EC2 instances, AWS users, etc.&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform Core Concepts
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Variables&lt;/strong&gt;: Terraform has input and output variables, it is a key-value pair. Input variables are used as parameters to input values at run time to customize our deployments. Output variables are return values of a terraform module that can be used by other configurations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Provider&lt;/strong&gt;: Terraform users provision their infrastructure on the major cloud providers such as AWS, Azure, OCI, and others. A provider is a plugin that interacts with the various APIs required to create, update, and delete various resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Module&lt;/strong&gt;: Any set of Terraform configuration files in a folder is a module. Every Terraform configuration has at least one module, known as its root module.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;State&lt;/strong&gt;: Terraform records information about what infrastructure is created in a Terraform state file. With the state file, Terraform is able to find the resources it created previously and is supposed to manage, and update them accordingly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Resources&lt;/strong&gt;: Cloud Providers provides various services in their offerings, they are referenced as Resources in Terraform. Terraform resources can be anything from compute instances, virtual networks to higher-level components such as DNS records. Each resource has its own attributes to define that resource.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data Source&lt;/strong&gt;: Data source performs a read-only operation. It allows data to be fetched or computed from resources/entities that are not defined or managed by Terraform or the current Terraform configuration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Plan&lt;/strong&gt;: It is one of the stages in the Terraform lifecycle where it determines what needs to be created, updated, or destroyed to move from the real/current state of the infrastructure to the desired state.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Apply&lt;/strong&gt;: It is one of the stages in the Terraform lifecycle where it applies the changes to the real/current state of the infrastructure in order to achieve the desired state.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Terraform Lifecycle
&lt;/h3&gt;

&lt;p&gt;Terraform lifecycle consists of – init, plan, apply, and destroy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F153762027-3da8e99b-2166-4ddf-b0b6-4614e97db1f3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F153762027-3da8e99b-2166-4ddf-b0b6-4614e97db1f3.png" alt="terraform-lifecycle"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;ol&gt;
&lt;li&gt;Terraform init initializes the (local) Terraform environment. Usually executed only once per session.&lt;/li&gt;
&lt;li&gt;Terraform plan compares the Terraform state with the as-is state in the cloud, then builds and displays an execution plan. This does not change the deployment (read-only).&lt;/li&gt;
&lt;li&gt;Terraform apply executes the plan. This potentially changes the deployment.&lt;/li&gt;
&lt;li&gt;Terraform destroy deletes all resources that are governed by this specific terraform environment.&lt;/li&gt;
&lt;/ol&gt;
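&lt;p&gt;In practice, the lifecycle maps directly onto four commands run from the configuration directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform init      # download providers, initialize the working directory
$ terraform plan      # preview the changes (read-only)
$ terraform apply     # create/update resources to match the configuration
$ terraform destroy   # tear down everything this configuration manages
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;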

&lt;h2&gt;
  
  
  Hands-on : Create an EC2 Instance using Terraform
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Prerequisite&lt;/strong&gt;: A laptop or machine with Terraform installed and the AWS CLI configured&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Step 1&lt;/strong&gt;: Create main.tf and variables.tf inside a folder&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;main.tf&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 3.27"
    }
  }

  required_version = "&amp;gt;= 0.14.9"
}

provider "aws" {
  profile = "default"
  region  = "ap-south-1"
}

resource "aws_instance" "app_server"{
  ami           = var.ami
  instance_type = var.instance_type
  tags = {
    Name = "TerraInstance"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;variables.tf&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "ami" {
    type = string
    default = "ami-0c6615d1e95c98aca"
}
variable "instance_type" {
    type = string
    default = "t2.micro"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Step 2&lt;/strong&gt; : Initialize the directory with &lt;em&gt;$ terraform init&lt;/em&gt; command&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Step 3&lt;/strong&gt; : Run &lt;em&gt;$ terraform plan&lt;/em&gt; command to print out the execution plan&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Step 4&lt;/strong&gt; : Apply the configuration now with the &lt;em&gt;$ terraform apply&lt;/em&gt; command.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Step 5&lt;/strong&gt; : Inspect the current state using &lt;em&gt;$ terraform show&lt;/em&gt; and check in the AWS console whether an instance was created.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Step 6&lt;/strong&gt; : Run &lt;em&gt;$ terraform destroy&lt;/em&gt; command to destroy the created instance.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's it!!! You have successfully created an EC2 instance with Terraform.&lt;/p&gt;

&lt;p&gt;Check out my GitHub repository for more 💁:&lt;br&gt;
&lt;a href="https://github.com/iamvinot/terraform-project" rel="noopener noreferrer"&gt;https://github.com/iamvinot/terraform-project&lt;/a&gt;&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Docker - An Overview at High-Level</title>
      <dc:creator>Vinoth Mohan</dc:creator>
      <pubDate>Sat, 19 Feb 2022 17:17:16 +0000</pubDate>
      <link>https://dev.to/vinothmohan/docker-an-overview-at-high-level-21jk</link>
      <guid>https://dev.to/vinothmohan/docker-an-overview-at-high-level-21jk</guid>
      <description>&lt;h1&gt;
  
  
  What is Docker ?
&lt;/h1&gt;

&lt;p&gt;Docker is an open-source platform for building distributed software using “containerization”.&lt;/p&gt;

&lt;p&gt;Docker allows you to decouple the application/software from the underlying infrastructure into standardized units called containers that have everything the software needs to run including libraries, system tools, code, and runtime.&lt;/p&gt;
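&lt;p&gt;As a small, hypothetical example, a Dockerfile packaging a Python script together with its runtime might look like this (app.py is an assumed file):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM python:3.10-slim          # base image providing the runtime
WORKDIR /app
COPY app.py .                  # your application code
CMD ["python", "app.py"]       # command run when the container starts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Building and running it is then a matter of docker build -t my-app . followed by docker run my-app.&lt;/p&gt;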

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F154561459-bf77a2b2-8fb6-45f0-b6ba-a36f330450d0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F154561459-bf77a2b2-8fb6-45f0-b6ba-a36f330450d0.png" alt="docker"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Docker ?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Docker containers are minimalistic and enable portability. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker lets applications and their environments be kept clean and minimal by isolating them, which allows for more granular control and greater portability.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Docker containers enable composability. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Containers make it easier for developers to compose the building blocks of an application into a modular unit with easily interchangeable parts, which can speed up development cycles, feature releases, and bug fixes.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Docker containers ease orchestration and scaling.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Because containers are lightweight, developers can launch lots of them for better scaling of services. These clusters of containers do then need to be orchestrated, which is where Kubernetes typically comes in.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  What is Virtualization?
&lt;/h2&gt;

&lt;p&gt;Virtualization is the process of creating a virtual environment or virtual machine by splitting one system into many different sections that act like separate, distinct individual systems. Software called a hypervisor makes this kind of splitting possible.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Containerization ?
&lt;/h2&gt;

&lt;p&gt;Containerization is a form of virtualization through which applications are run in containers (isolated user spaces) all using a shared OS. It packs or encapsulates software code and all its dependencies for it to run in a consistent and uniform manner on any infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Virtual Machine vs Docker ?
&lt;/h2&gt;

&lt;p&gt;Virtual Machines (VMs) virtualize the underlying hardware. They run on physical hardware via an intermediation layer known as a hypervisor, and scaling them up requires additional resources.&lt;/p&gt;

&lt;p&gt;VMs are more suitable for monolithic applications. Docker, on the other hand, is operating-system-level virtualization: containers run as isolated user spaces on top of the host kernel, making them lightweight and fast. Scaling up is simpler, too, since you just create another container from an image.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Virtual Machine&lt;/th&gt;
&lt;th&gt;Containers&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;A virtualization technique where each VM has an individual operating system.&lt;/td&gt;
&lt;td&gt;A virtualization technique where all containers share a host operating system.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Virtual machines are isolated at the hardware level&lt;/td&gt;
&lt;td&gt;Each container is isolated at the operating system level.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Virtual machines take time to create&lt;/td&gt;
&lt;td&gt;Containers are created fast&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Increased management overhead&lt;/td&gt;
&lt;td&gt;Decreased management overhead as only one host operating system needs to be cared for.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F154561595-2fa6b184-fa00-466b-8ade-5fa5e92b16d8.png" alt="vm"&gt;&lt;/th&gt;
&lt;th&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F154561657-effc4491-cac7-457c-8c74-789644c6e61e.png" alt="vm2"&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;VM&lt;/td&gt;
&lt;td&gt;Docker&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  What is Hypervisor?
&lt;/h2&gt;

&lt;p&gt;A hypervisor is software that makes virtualization possible. It is also called a Virtual Machine Monitor (VMM). It divides the host system and allocates resources to each virtual environment it creates.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are Docker Images?
&lt;/h2&gt;

&lt;p&gt;A Docker image is an executable package from which a Docker container is created. An image is built from an executable version of an application together with its dependencies and configuration. A running instance of an image is a container.&lt;/p&gt;

&lt;p&gt;A Docker image includes the system libraries, tools, and other files and dependencies the application needs. An image is made up of multiple layers.&lt;/p&gt;
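&lt;p&gt;Since an image is layered, the layers can be inspected directly. As a quick sketch (assuming Docker is installed and you can pull from Docker Hub), docker history lists the layer that each build instruction produced:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Fetch an image, then list the layers it is made of
docker pull nginx
docker history nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;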

&lt;h2&gt;
  
  
  What is Docker Hub?
&lt;/h2&gt;

&lt;p&gt;Docker images are used to create Docker containers, and there has to be a registry where these images live. That registry is Docker Hub. Users can pull images from Docker Hub and use them to build customized images and containers. Currently, Docker Hub is the world’s largest public repository of container images.&lt;/p&gt;

&lt;h2&gt;
  
  
  Components of Docker Architecture.
&lt;/h2&gt;

&lt;p&gt;The four major components of Docker are the daemon, the client, the host, and the registry.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Docker daemon&lt;/strong&gt;: Also referred to as ‘dockerd’, it accepts Docker API requests and manages Docker objects such as images, containers, networks, and volumes. It can also communicate with other daemons to manage Docker services.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker Client&lt;/strong&gt;: The predominant way Docker users interact with Docker. It sends commands to the daemon, which executes them via the Docker API. The Docker client can communicate with more than one daemon.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker Registry&lt;/strong&gt;: It hosts Docker images and is used to pull and push images from the configured registry. Docker Hub is the public registry that anyone can use, and Docker is configured to look for images on Docker Hub by default. However, it is recommended that organizations use their own private registry.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker Host&lt;/strong&gt;: The physical or virtual machine on which the Docker daemon runs and where images and containers are created.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F154563362-c459a9e8-8cdd-47ce-8bed-e0eb435d0500.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F154563362-c459a9e8-8cdd-47ce-8bed-e0eb435d0500.png" alt="docker_components"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Docker Engine?
&lt;/h2&gt;

&lt;p&gt;The Docker daemon represents the server side of the Docker Engine. The daemon and its clients can run on the same host or on remote hosts, communicating through the command-line client binary and a full RESTful API.&lt;/p&gt;
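&lt;p&gt;Because the daemon exposes a RESTful API, clients other than the docker CLI can talk to it. A minimal sketch, assuming dockerd is running and listening on the default Unix socket:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Query the Docker Engine REST API directly, bypassing the docker CLI
curl --unix-socket /var/run/docker.sock http://localhost/version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;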

&lt;h2&gt;
  
  
  What is Docker Image Registry?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A Docker image registry, in simple terms, is a place where Docker images are stored. Instead of building an application image each and every time, a developer can directly use the images stored in the registry.&lt;/li&gt;
&lt;li&gt;An image registry can be either public or private, and Docker Hub is the most popular public registry available.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What are Dockerfiles?
&lt;/h2&gt;

&lt;p&gt;A Dockerfile is a text file that contains the instructions to build a Docker image. Each instruction corresponds to a command that could also be run by hand to assemble the image step by step.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F154561888-5234ef20-d5ae-40f1-8e99-d188b0b0bcb4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F154561888-5234ef20-d5ae-40f1-8e99-d188b0b0bcb4.png" alt="Docker_File"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Sample Dockerfile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM ubuntu:16.04
COPY . /app
RUN make /app
CMD python /app/app.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each instruction in a Dockerfile creates one read-only layer:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F154561953-abdfda04-8d67-4b26-9bf0-75124586b9a5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F154561953-abdfda04-8d67-4b26-9bf0-75124586b9a5.png" alt="dfile"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Docker Commands
&lt;/h1&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Pull Command&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This command pulls an image from a public Docker registry (Docker Hub by default).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker pull docker/whalesay
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Build Command&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This command builds an image according to Dockerfile.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build [-t &amp;lt;name_of_image&amp;gt;] [-f &amp;lt;name_of_Dockerfile&amp;gt;] &amp;lt;path_to_Dockerfile&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F154562015-cba053d2-d28c-4deb-b59a-60e82ba88d35.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F154562015-cba053d2-d28c-4deb-b59a-60e82ba88d35.png" alt="build"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Run Command&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This command runs a container from an image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --name nginx-container nginx:1.16
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F154562064-41238206-504a-4cd6-b188-4bbcdcfee32c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F154562064-41238206-504a-4cd6-b188-4bbcdcfee32c.png" alt="run"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;ps Command&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This command lists the docker containers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F154562122-19820233-15f9-4ace-9c95-d6685a7e962a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F154562122-19820233-15f9-4ace-9c95-d6685a7e962a.png" alt="ps"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Stop Command&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This command stops one or more running containers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker stop nginx-container
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Remove Command&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This command removes one or more stopped containers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker rm nginx-container
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;List Image Command&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This command lists the docker images.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker images
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Remove Image Command&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This command removes one or more images.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker rmi nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Attach Command&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This command attaches the terminal to a container running in the background (detached mode).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker attach &amp;lt;container id or name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Inspect Command&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This command returns details of the container in JSON format.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker inspect &amp;lt;container id or name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Logs Command&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This command returns logs of the container running in the background (detached mode).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker logs &amp;lt;container id or name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Push Command&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This command pushes an image to your account on a public Docker registry (Docker Hub).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker push vinothmohan/pro-postgres
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Create a Docker Container&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The following command creates a Docker container from the specified image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker create --name &amp;lt;container-name&amp;gt; &amp;lt;image-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Pause Container&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This command pauses all the processes running inside the container.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker pause &amp;lt;container-id/name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A container cannot be removed while it is in a paused state.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Unpause Container&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Unpause moves the container back to the running state.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker unpause &amp;lt;container-id/name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Start Container&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;If the container is in a stopped state, this command starts it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker start &amp;lt;container-id/name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Stop Container&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The command below stops a container along with all its processes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker stop &amp;lt;container-id/name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To stop all running Docker containers, use the command below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker stop $(docker ps -a -q)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Restart Container&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This command restarts the container along with its processes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker restart &amp;lt;container-id/name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Kill Container&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;A container can be killed with the command below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker kill &amp;lt;container-id/name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Destroy Container&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The entire container is discarded. It is preferable to do this when the container is in a stopped state rather than removing it forcefully.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker rm &amp;lt;container-id/name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Docker Network
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F154562168-135fa9bb-41e1-4779-a7dd-683942008707.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F154562168-135fa9bb-41e1-4779-a7dd-683942008707.png" alt="network"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Bridge&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The Bridge network assigns IPs in the range of 172.17.x.x to the containers within it. To access these containers from outside you need to map the ports of these containers to the ports on the host.&lt;/p&gt;
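&lt;p&gt;For example, a container on the default bridge network can be published by mapping a container port to a host port with -p. A sketch, assuming Docker is installed (the host port 8080 here is arbitrary):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Map container port 80 to host port 8080
docker run -d --name web -p 8080:80 nginx

# nginx is now reachable through the published host port
curl http://localhost:8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;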

&lt;h3&gt;
  
  
  &lt;strong&gt;Host&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The Host network removes any network isolation between the docker host and the containers. For instance, if you run a container on port 5000, it will be accessible on the same port on the docker host without any explicit port mapping. The only downside of this approach is that the same host port cannot be used by more than one container. &lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;None&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The None network keeps the container in complete isolation, i.e. they are not connected to any network or container.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To create a network:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker network create --driver driver_name network_name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Types of Volume mounts in Docker.
&lt;/h2&gt;

&lt;p&gt;There are three mount types available in Docker:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F154562242-210ece9b-0b03-4db4-9e6c-db8b0cabcf35.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F154562242-210ece9b-0b03-4db4-9e6c-db8b0cabcf35.jpg" alt="dockervolume"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Volume mounts&lt;/strong&gt; are the best way to persist data in Docker. The data is stored in a part of the host filesystem that is managed by Docker (/var/lib/docker/volumes/ on Linux).&lt;/p&gt;

&lt;p&gt;The -v (or --volume) flag is used with standalone containers, while the --mount flag can be used with both docker swarm services and standalone containers.&lt;/p&gt;

&lt;p&gt;To create a Docker volume:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker volume create my-vol 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Inspect a volume&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker volume inspect my-vol
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To start a container that uses “my-vol”:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;With -v flag
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d  --name devtest -v my-vol:/app nginx:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here the nginx image with the latest tag is run with the volume “my-vol” mounted at /app.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;With --mount flag
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d --name devtest --mount \ source=my-vol,target=/app nginx:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Bind mounts&lt;/strong&gt; may be stored anywhere on the host system. A file or directory on the host machine is mounted into a container unlike volume mounts where a new directory is created within Docker’s storage directory on the host machine, and Docker manages that directory’s contents. Non-Docker processes on the Docker host or a Docker container can modify them at any time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;tmpfs mounts&lt;/strong&gt; are stored in the host system’s memory only and are never written to the host system’s file system. When the container stops, the tmpfs mount is removed, and files won’t persist.&lt;/p&gt;
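&lt;p&gt;Both of these mount types use the same run-time flags. A sketch, assuming Docker is installed (the host path below is hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Bind mount: a host directory appears inside the container at /app
docker run -d --name bindtest --mount type=bind,source=/home/user/app,target=/app nginx:latest

# tmpfs mount: /app lives only in memory and is lost when the container stops
docker run -d --name tmptest --tmpfs /app nginx:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;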

&lt;h2&gt;
  
  
  Docker Compose
&lt;/h2&gt;

&lt;p&gt;Docker Compose is a tool provided by Docker for defining and running multi-container applications together in an isolated environment. Either a YAML or a JSON file can be used to configure all the required services, such as a database and a messaging queue, along with the application server. Then, with a single command, we can create and start all the services from the configuration file.&lt;/p&gt;

&lt;p&gt;It comes in handy for reproducing the entire application along with its services in various environments such as development, testing, staging and, most importantly, CI.&lt;/p&gt;

&lt;p&gt;Typically the configuration file is named docker-compose.yml. Below is a sample file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3'
services:
  app:
    image: appName:latest
    build: .
    ports:          
    - "8080"   
    depends_on:
      - oracledb
    restart: on-failure:10    
  oracledb:
    image: db:latest 
    volumes:
      - /opt/oracle/oradata
    ports:       
      - "1521"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;p&gt;To create and start all the services defined in the file, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-compose up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F154562314-e9ed5bae-0654-4043-8d32-1ba4862ceaf7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F154562314-e9ed5bae-0654-4043-8d32-1ba4862ceaf7.png" alt="dcompoe"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>beginners</category>
      <category>devops</category>
    </item>
    <item>
      <title>Git - An Overview at High-Level</title>
      <dc:creator>Vinoth Mohan</dc:creator>
      <pubDate>Tue, 15 Feb 2022 08:04:04 +0000</pubDate>
      <link>https://dev.to/vinothmohan/git-an-overview-at-high-level-2ckk</link>
      <guid>https://dev.to/vinothmohan/git-an-overview-at-high-level-2ckk</guid>
      <description>&lt;h1&gt;
  
  
  What is GIT?
&lt;/h1&gt;

&lt;p&gt;Git is a free and open-source distributed version control system for tracking changes in computer files. It is used to coordinate work among several people on a project while tracking progress over time. In other words, it’s a tool that facilitates source code management in software development.&lt;/p&gt;

&lt;p&gt;Git favors both programmers and non-technical users by keeping track of their project files. It enables multiple users to work together and handles large projects efficiently.&lt;/p&gt;

&lt;h1&gt;
  
  
  What do you understand by the term ‘Version Control System’?
&lt;/h1&gt;

&lt;p&gt;A version control system (VCS) records all the changes made to a file or set of data, so that a specific version may be recalled later if needed.&lt;/p&gt;

&lt;p&gt;This helps ensure that all team members are working on the latest version of the file.&lt;/p&gt;

&lt;h1&gt;
  
  
  Uses of Git
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Git is used to track changes in the source code.&lt;/li&gt;
&lt;li&gt;It is a distributed version control tool used for source code management.&lt;/li&gt;
&lt;li&gt;It allows multiple developers to work together.&lt;/li&gt;
&lt;li&gt;It supports non-linear development through its thousands of parallel branches.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Features of Git
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Free and open-source&lt;/li&gt;
&lt;li&gt;Tracks history&lt;/li&gt;
&lt;li&gt;Supports non-linear development&lt;/li&gt;
&lt;li&gt;Creates backup&lt;/li&gt;
&lt;li&gt;Scalable&lt;/li&gt;
&lt;li&gt;Supports collaboration&lt;/li&gt;
&lt;li&gt;Branching is easier&lt;/li&gt;
&lt;li&gt;Distributed development.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Git WorkFlow
&lt;/h1&gt;

&lt;p&gt;The following image shows the git workflow diagram:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F153767766-3c16ba10-75d2-4c28-8598-0163f8013965.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F153767766-3c16ba10-75d2-4c28-8598-0163f8013965.png" alt="git-workflow_diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In Git, the workflow is mainly divided into three areas:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Working directory - This is the area where you modify your existing files.&lt;/li&gt;
&lt;li&gt;Staging area (Index) - This is where snapshots of the changes in your working directory are staged before they are committed.&lt;/li&gt;
&lt;li&gt;Git directory (repository) - This is where Git stores your project’s history, i.e. where you commit changes, check out branches, and so on.&lt;/li&gt;
&lt;/ul&gt;
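&lt;p&gt;The flow through these three areas can be traced with a few commands. A minimal sketch, assuming git is installed (the file name and identity below are made up for the demo):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd "$(mktemp -d)"                    # throwaway directory for the demo
git init -q .
git config user.email "demo@example.com"
git config user.name "Demo"

echo "hello" &amp;gt; notes.txt             # working directory: notes.txt is untracked
git add notes.txt                    # staging area: a snapshot of notes.txt is staged
git commit -q -m "Add notes"         # repository: the staged snapshot is committed
git log --oneline                    # the new commit now appears in the history
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;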

&lt;h2&gt;
  
  
  Git Init
&lt;/h2&gt;

&lt;p&gt;git init is one way to start a new project with Git. To start a repository, use either git init or git clone - not both.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Git Clone
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;git clone is a command which is used to clone or copy a target repository.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone &amp;lt;REPO_URL&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Clone a specific branch from the repository.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone -b &amp;lt;Branch_name&amp;gt; &amp;lt;Repo_URL&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Git Status
&lt;/h2&gt;

&lt;p&gt;git status is mainly used to display the state of the working directory and the staging area. It helps us track all the changes made and points out untracked files.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Git Add
&lt;/h2&gt;

&lt;p&gt;The git add command adds new or changed files in your working directory to the Git staging area.&lt;/p&gt;

&lt;p&gt;git add is an important command - without it, no git commit would ever do anything.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git add &amp;lt;file_name&amp;gt;or&amp;lt;path&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To add all the changes&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git add .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Git Commit
&lt;/h2&gt;

&lt;p&gt;Git commit is used to record all the staged changes in the repository. Each commit gets a commit-id that can be used to track down the changes made, as shown in the image below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F153770105-024c08d8-7b99-4aca-bf27-b1bd1a6eeafa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F153770105-024c08d8-7b99-4aca-bf27-b1bd1a6eeafa.png" alt="Git_commit"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As shown in the image, the command git commit creates a commit-id to track down changes and commits all the changes to the git repository.&lt;/p&gt;

&lt;p&gt;Command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git commit
git commit -m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The -m along with the command lets us write the commit message on the command line.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git commit -m "Commit message"
git commit -am
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The -am along with the command is to write the commit message on the command line for already staged files.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git commit -am "Commit message"
git commit --amend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The --amend option is used to edit the last commit. In case we need to change the last commit message, this command can be used.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git commit -amend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;rm stands for remove. The git rm command is used to remove or delete files from the working tree and the index.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git rm &amp;lt;file_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, if you use the command git status, it will show that the file has been deleted.&lt;/p&gt;

&lt;h2&gt;
  
  
  Git Branch
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A branch in Git is used to keep your changes until they are ready.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can do your work on a branch while the main branch (main) remains stable. After you are done with your work, you can merge it into the main branch.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For creating a new branch, the following command is used :&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git branch &amp;lt;branch_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;To switch from one branch to another:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git checkout &amp;lt;branch_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Create a local branch and switch to that branch:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git checkout -b &amp;lt;branch_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;To push the local branch to the remote:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git push -u origin &amp;lt;branch_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: origin is the default name of the remote repository.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Now, to download the latest branches and commits from the remote without merging them, one can simply run:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git fetch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Delete Branches
&lt;/h2&gt;

&lt;p&gt;Once the work is done on a branch and merged into the main branch, one can delete the branch.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The following command is used to delete branches:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git delete -d &amp;lt;branch_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: This command deletes only the local copy of the branch; the branch can still exist in remote repositories.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To delete remote branches, use the following command:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git push origin --delete &amp;lt;branch_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Git Checkout
&lt;/h2&gt;

&lt;p&gt;The git checkout command tells Git which branch to work on. Checkout is primarily used to switch between branches in a repository, and it can also be used to restore files.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git checkout &amp;lt;branch_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Git Checkout Branch
To checkout or create a branch, the following command can be used:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git checkout -b  &amp;lt;branch_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will create the new branch branch_name and switch to it.&lt;/p&gt;
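As a quick sketch (the repository path and branch name are made up for illustration), checkout -b creates the branch and switches to it in one step:

```shell
# Hypothetical throwaway repo; names are illustrative.
rm -rf /tmp/branch-demo
mkdir /tmp/branch-demo
cd /tmp/branch-demo
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
git commit -q --allow-empty -m "Initial commit"
git checkout -b feature/login    # create the branch and switch to it in one step
git branch --show-current        # prints the new branch name
```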

&lt;ul&gt;
&lt;li&gt;Git Checkout Tag
While working on a large codebase, it becomes helpful to have a reference point. That is where checking out a tag comes in.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The following command is used to specify the tag name as well as the new branch that will be checked out.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git checkout tag&amp;lt;/tag&amp;gt; &amp;lt;branch_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Git Merge
&lt;/h2&gt;

&lt;p&gt;Git merge is a command that allows you to merge branches from Git. It preserves the complete history and chronological order and maintains the context of the branch.&lt;/p&gt;

&lt;p&gt;The following image demonstrates how we can create different features by branching from the main branch and how we can merge the newly created features after the final review to the main branch.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F153770122-965933e9-03b1-4b91-b8c1-ab6b1e588493.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F153770122-965933e9-03b1-4b91-b8c1-ab6b1e588493.png" alt="Git_Merge"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The command git merge is used to merge the branches.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Command :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git merge &amp;lt;branch_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
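The branch-and-merge flow described above can be sketched end to end in a throwaway repository (all paths, branch names, and messages here are made up; the -b option to git init assumes Git 2.28+):

```shell
# Hypothetical demo of branching and merging; names are illustrative.
rm -rf /tmp/merge-demo
mkdir /tmp/merge-demo
cd /tmp/merge-demo
git init -q -b main              # assumes Git 2.28+ for the -b option
git config user.email "demo@example.com"
git config user.name "Demo"
echo "base" > app.txt
git add app.txt
git commit -q -m "Base commit"
git checkout -q -b feature
echo "feature work" >> app.txt
git commit -q -am "Add feature"
git checkout -q main
git merge feature                # brings the feature commit into main
git log --oneline
```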



&lt;h2&gt;
  
  
  GIT Rebase
&lt;/h2&gt;

&lt;p&gt;Git rebase is the process of moving a sequence of commits onto a new base commit.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The primary reason for rebasing is to maintain a linear project history.&lt;/li&gt;
&lt;li&gt;When you rebase, you ‘unplug’ a branch and ‘replug’ it on the tip of another branch(usually main).&lt;/li&gt;
&lt;li&gt;The goal of rebasing is to take all the commits from a feature branch and replay them on top of the main branch.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The following rebase command is used for rebasing the commits:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git rebase &amp;lt;branch_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
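The "unplug and replug" idea can be sketched in a throwaway repository (all names and messages are illustrative; -b on git init assumes Git 2.28+): after the rebase, the feature commit sits linearly on top of the newer main commit.

```shell
# Hypothetical demo of rebasing a feature branch; names are illustrative.
rm -rf /tmp/rebase-demo
mkdir /tmp/rebase-demo
cd /tmp/rebase-demo
git init -q -b main              # assumes Git 2.28+ for the -b option
git config user.email "demo@example.com"
git config user.name "Demo"
git commit -q --allow-empty -m "C1 on main"
git checkout -q -b feature
echo "f" > feature.txt
git add feature.txt
git commit -q -m "C2 on feature"
git checkout -q main
git commit -q --allow-empty -m "C3 on main"
git checkout -q feature
git rebase main                  # replays C2 on top of C3: linear history
git log --oneline
```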



&lt;h2&gt;
  
  
  Git Push
&lt;/h2&gt;

&lt;p&gt;The git push command is used to upload local repository content to a remote repository. Pushing is how you transfer commits from your local repository to a remote repo.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git push &amp;lt;remote&amp;gt; &amp;lt;branch&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Push the specified branch to the remote, along with all of the necessary commits and internal objects. This creates a corresponding local branch in the destination repository. To prevent you from overwriting commits, Git won’t let you push when it results in a non-fast-forward merge in the destination repository.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git push &amp;lt;remote&amp;gt; --force
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Same as the above command, but force the push even if it results in a non-fast-forward merge. Do not use the --force flag unless you’re absolutely sure you know what you’re doing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git push &amp;lt;remote&amp;gt; --all
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Push all of your local branches to the specified remote.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git push &amp;lt;remote&amp;gt; --tags
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Tags are not automatically pushed when you push a branch or use the --all option. The --tags flag sends all of your local tags to the remote repository.&lt;/p&gt;

&lt;h2&gt;
  
  
  Git Fetch
&lt;/h2&gt;

&lt;p&gt;Git Fetch only downloads the latest changes into the local repository. It downloads fresh changes that other developers have pushed to the remote repository since the last fetch and allows you to review and merge manually at a later time using Git Merge. As it doesn’t change the working directory or the staging area, it is safe to use.&lt;/p&gt;

&lt;p&gt;The below illustration shows the working of the command git fetch. It fetches all the latest changes that have been made in the remote repository and lets us make changes accordingly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F153770134-d50454be-c4ea-4990-af22-8fa2a29254e4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F153770134-d50454be-c4ea-4990-af22-8fa2a29254e4.png" alt="Git_Fetch"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The command used is :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git fetch &amp;lt;branch_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Git Pull Remote Branch
&lt;/h2&gt;

&lt;p&gt;You can pull in any changes that have been made from your forked remote repository to the local repository.&lt;/p&gt;

&lt;p&gt;Using the git pull command, all the changes and content can be fetched from the remote repository and can be immediately updated in the local repository to match the content.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We can simply pull a remote repository by using the git pull command. The syntax is as follows:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git pull
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command is equivalent to a fetch followed by a merge:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git fetch origin HEAD
git merge FETCH_HEAD
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Use the following command to check if there has been any change:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git pull &amp;lt;RemoteName&amp;gt; &amp;lt;BranchName&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If there is no change, it will show “Already up to date”. Else, it will simply merge those changes in the local repository.&lt;/p&gt;
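The pull round-trip can be sketched without a real server by using a local bare repository as the remote (all paths here are made up; -b on git init assumes Git 2.28+):

```shell
# Hypothetical demo: a local bare repository stands in for the remote server.
rm -rf /tmp/pull-remote /tmp/pull-clone
git init -q --bare -b main /tmp/pull-remote   # assumes Git 2.28+ for -b
git clone -q /tmp/pull-remote /tmp/pull-clone
cd /tmp/pull-clone
git config user.email "demo@example.com"
git config user.name "Demo"
git commit -q --allow-empty -m "Shared commit"
git push -q origin main
git pull origin main             # nothing new on the remote, so nothing to merge
```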

&lt;h2&gt;
  
  
  Git Stash
&lt;/h2&gt;

&lt;p&gt;Sometimes in large codebases, there might be some cases when we do not want to commit our code, but at the same time don’t want to lose the unfinished code. This is where git stash comes into play. The git stash command is used to record the current state of the working directory and index in a stash.&lt;/p&gt;

&lt;p&gt;It stores the unfinished code in a stash and cleans the current branch from any uncommitted changes. Now, we can work on a clean working directory.&lt;/p&gt;

&lt;p&gt;If in the future, we again need to visit that code, we can simply use the stash and apply those changes back to the working repository.&lt;/p&gt;

&lt;p&gt;As shown below, using the command git stash, we can temporarily stash the changes we have made on the working copy and can work on something else. Later, when needed, we can git stash pop and again start working on it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F153770146-a80f500c-b997-4b73-ba4e-bea5792ea877.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F83613651%2F153770146-a80f500c-b997-4b73-ba4e-bea5792ea877.png" alt="Git_Stash"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How to stash changes in Git?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The syntax for stashing is as follows:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git stash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Suppose, you are working on a website and the code is stored in a repository.&lt;/p&gt;

&lt;p&gt;Now let's say you have uncommitted changes in files named design.css and design.js. You want to stash these changes so you can return to them later while you work on something else.&lt;/p&gt;

&lt;p&gt;Later, you can use the git stash list command to view all the stashed changes.&lt;/p&gt;
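The stash-and-restore cycle can be sketched in a throwaway repository (the path and file name are made up for illustration):

```shell
# Hypothetical demo of stashing and restoring work; names are illustrative.
rm -rf /tmp/stash-demo
mkdir /tmp/stash-demo
cd /tmp/stash-demo
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
echo "v1" > design.css
git add design.css
git commit -q -m "Add design.css"
echo "work in progress" >> design.css   # an unfinished, uncommitted change
git stash                               # working tree is clean again
git stash list                          # the change is saved as stash@{0}
git stash pop                           # bring the unfinished work back
grep "work in progress" design.css
```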

&lt;h2&gt;
  
  
  Drop Stash
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;In case you no longer require a stash, you can delete it with the following command:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git stash drop &amp;lt;stash_id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;If you want to delete all the stashes, simply use:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git stash clear
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Git-Ignore
&lt;/h2&gt;

&lt;p&gt;At times, there are some files that we might want Git to ignore while committing. For example, private files or folders containing passwords, API keys, etc. These files are user-specific and hence, we can ignore them using the .gitignore file.&lt;/p&gt;

&lt;p&gt;The .gitignore file lives inside the project directory and keeps the files listed in it from being committed to the repository.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How to use the .gitignore?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Follow the steps below to add the files you want Git to ignore.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open your project directory on your PC.&lt;/li&gt;
&lt;li&gt;Create a .gitignore file inside it.&lt;/li&gt;
&lt;li&gt;Inside the .gitignore write the names of all the files you want Git to ignore.&lt;/li&gt;
&lt;li&gt;Now add the .gitignore in your repository.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now, if you check the status of your repo, you will see, all the files which were written in the .gitignore file have been ignored.&lt;/p&gt;
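The four steps above can be sketched in a throwaway repository (the ignored file names are made up for illustration): after adding the patterns, git status no longer reports the ignored files.

```shell
# Hypothetical demo of .gitignore; the ignored file names are made up.
rm -rf /tmp/ignore-demo
mkdir /tmp/ignore-demo
cd /tmp/ignore-demo
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
printf "secrets.env\n*.log\n" > .gitignore   # patterns Git should skip
touch secrets.env debug.log app.txt
git add .gitignore
git commit -q -m "Add .gitignore"
git status --short               # only app.txt appears as untracked
```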

&lt;h1&gt;
  
  
  Advanced Git Concepts
&lt;/h1&gt;

&lt;h2&gt;
  
  
  git pull --rebase
&lt;/h2&gt;

&lt;p&gt;Git rebase is used to rewrite commits from one branch onto another branch. To combine unpublished local changes with the published remote changes, a &lt;em&gt;git pull&lt;/em&gt; is performed.&lt;/p&gt;

&lt;p&gt;With &lt;em&gt;git pull --rebase&lt;/em&gt;, the unpublished local commits are reapplied on top of the published remote changes, and no extra merge commit is added to the history.&lt;/p&gt;

&lt;h2&gt;
  
  
  git merge --squash
&lt;/h2&gt;

&lt;p&gt;The --squash option with git merge produces the working tree and index state as if a real merge had happened, but it discards the merge history and does not create a merge commit.&lt;/p&gt;

&lt;p&gt;Command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git merge --squash origin/main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When to use git merge --squash?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When you have merged main into your branch and resolved conflicts.&lt;/li&gt;
&lt;li&gt;When you want to collapse the branch’s commits into a single commit on main.&lt;/li&gt;
&lt;/ul&gt;
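A squash merge can be sketched in a throwaway repository (branch and file names are made up; -b on git init assumes Git 2.28+): the two "WIP" commits land on main as one commit, with no merge commit.

```shell
# Hypothetical demo of a squash merge; branch and file names are illustrative.
rm -rf /tmp/squash-demo
mkdir /tmp/squash-demo
cd /tmp/squash-demo
git init -q -b main              # assumes Git 2.28+ for the -b option
git config user.email "demo@example.com"
git config user.name "Demo"
git commit -q --allow-empty -m "Base"
git checkout -q -b feature
echo "a" > a.txt
git add a.txt
git commit -q -m "WIP 1"
echo "b" > b.txt
git add b.txt
git commit -q -m "WIP 2"
git checkout -q main
git merge --squash feature       # stages the combined changes, no merge commit
git commit -q -m "Add feature (squashed)"
git log --oneline                # main shows Base plus one squashed commit
```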

&lt;h2&gt;
  
  
  git reflog
&lt;/h2&gt;

&lt;p&gt;The reflog records every change made to the tips of branches in the repository. Apart from this, if a branch is lost from the repo, it can often be recovered using this command.&lt;/p&gt;

&lt;h2&gt;
  
  
  git revert
&lt;/h2&gt;

&lt;p&gt;Revert simply means to undo changes; it is Git’s undo command. Unlike a traditional undo operation, the revert command does not delete any data. git revert is a commit operation: it creates a new commit that undoes the changes introduced by the specified commit.&lt;/p&gt;

&lt;p&gt;Options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This option is used to revert a commit.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git revert &amp;lt;commit_id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Edit the commit message before reverting:
If we want to edit the commit message before reverting, the -e option is used.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git revert -e &amp;lt;commit_id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
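The "undo without deleting history" behaviour can be sketched in a throwaway repository (file names and messages are made up for illustration); --no-edit keeps the demo non-interactive:

```shell
# Hypothetical demo of git revert; file names and messages are illustrative.
rm -rf /tmp/revert-demo
mkdir /tmp/revert-demo
cd /tmp/revert-demo
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
echo "good" > file.txt
git add file.txt
git commit -q -m "Good commit"
echo "bad" >> file.txt
git commit -q -am "Bad commit"
git revert --no-edit HEAD        # adds a new commit that undoes "Bad commit"
cat file.txt                     # file content is back to "good"
git log --oneline                # the bad commit and the revert both stay in history
```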



&lt;h2&gt;
  
  
  git bisect
&lt;/h2&gt;

&lt;p&gt;Git bisect is a Git tool used for debugging. Suppose you have a large codebase and some commit introduces a bug, but you are not sure which commit it was.&lt;/p&gt;

&lt;p&gt;Git bisect walks through the previous commits and uses binary search to find the commit that introduced the bug.&lt;/p&gt;

&lt;p&gt;It is applied as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;git bisect start - starts the bisect session.&lt;/li&gt;
&lt;li&gt;git bisect good v1.0 - marks the last known working commit.&lt;/li&gt;
&lt;li&gt;git bisect bad - marks the current commit as buggy.
Git then returns the commit that introduced the bug, and one can debug the issue efficiently.&lt;/li&gt;
&lt;/ol&gt;
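The binary search can even be automated with git bisect run, which marks each commit good or bad from a test command's exit code. A sketch in a throwaway repository (the "bug" is simulated by a file whose content flips at commit 4; all names are illustrative):

```shell
# Hypothetical demo: git bisect run drives the binary search with a test command.
rm -rf /tmp/bisect-demo
mkdir /tmp/bisect-demo
cd /tmp/bisect-demo
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
for i in 1 2 3 4 5; do
  if [ "$i" -ge 4 ]; then echo "bug" > state.txt; else echo "ok" > state.txt; fi
  git add state.txt
  git commit -q -m "Commit $i"
done
git bisect start HEAD "$(git rev-list --max-parents=0 HEAD)"  # bad=HEAD, good=first
git bisect run grep -q "ok" state.txt   # exit code marks each commit good or bad
git bisect log > result.txt             # records the first bad commit (Commit 4)
git bisect reset
```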

&lt;h2&gt;
  
  
  git blame
&lt;/h2&gt;

&lt;p&gt;git blame is used to know who/which commit is responsible for the latest changes in the repository. The author/commit of each line is visible through this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git blame &amp;lt;file_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command shows the commit responsible for the change to each line of the file.&lt;/p&gt;

&lt;h2&gt;
  
  
  git cherry-pick
&lt;/h2&gt;

&lt;p&gt;Choosing a commit from one branch and applying it to another is known as cherry picking in Git. Following are the steps to cherry pick a commit:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Switch to the branch you want to apply the commit to, using the following command:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; git switch master
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Run the following command:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git cherry-pick &amp;lt;commit_id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
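The two steps above can be sketched in a throwaway repository (branch and file names are made up; -b on git init assumes Git 2.28+): only the single picked commit is copied onto main.

```shell
# Hypothetical demo of cherry-picking one commit across branches.
rm -rf /tmp/cherry-demo
mkdir /tmp/cherry-demo
cd /tmp/cherry-demo
git init -q -b main              # assumes Git 2.28+ for the -b option
git config user.email "demo@example.com"
git config user.name "Demo"
git commit -q --allow-empty -m "Base"
git checkout -q -b feature
echo "fix" > fix.txt
git add fix.txt
git commit -q -m "Important fix"
fix_commit=$(git rev-parse HEAD)
git checkout -q main
git cherry-pick "$fix_commit"    # copies just that one commit onto main
git log --oneline
```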



&lt;h2&gt;
  
  
  Git Submodules
&lt;/h2&gt;

&lt;p&gt;Submodules are a tool that allows attaching an external repository inside another repository at a specific path. It allows us to keep a git repository as a subdirectory of another git repository.&lt;/p&gt;

&lt;p&gt;Commands:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add git submodule: This takes the repository URL as the parameter and clones that repository as a submodule. The syntax to add a git submodule is:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git submodule add &amp;lt;URL_link&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;git submodule init
git submodule init copies the mapping from the .gitmodules file into the ./.git/config file. It also has extended behavior in which it accepts a list of explicit module names.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This enables a workflow of activating only specific submodules that are needed for work on the repository.&lt;/p&gt;

&lt;p&gt;Command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git submodule init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Git Subtrees
&lt;/h2&gt;

&lt;p&gt;git subtree lets you nest one repository inside another as a sub-directory. It is one of several ways Git projects can manage project dependencies.&lt;/p&gt;

&lt;p&gt;git-subtree is a wrapper shell script to facilitate a more natural syntax. This is actually still a part of contrib and not fully integrated into git with the usual man pages.&lt;/p&gt;

&lt;p&gt;A subtree is just a subdirectory that can be committed to, branched from, and merged along with your project in any way you want.&lt;/p&gt;

&lt;p&gt;Commands:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;add: Let’s assume that you have a local repository that you would like
to add an external vendor library to. In this case we will add the git-subtree repository as a subdirectory of your already existing git-extensions repository in ~/git-extensions/:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git subtree add --prefix=git-subtree --squash \&amp;lt;Git_repo_link&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;pull: like a regular pull, but applied to the subtree at the given prefix.
Command:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git subtree pull --prefix &amp;lt;URL_link&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Git Submodules VS Subtrees
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Git Submodules&lt;/th&gt;
&lt;th&gt;Git Subtrees&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;It is a link to a commit ref in another repository&lt;/td&gt;
&lt;td&gt;Code is merged in the outer repository’s history&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Requires the submodule to be accessible in a server (like GitHub)&lt;/td&gt;
&lt;td&gt;Git subtree is decentralised: the sub-project’s code is stored inside the main repository, so no separate server access is required.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Git submodule is a better fit for component-based development, where your main project depends on a fixed version of another component (repo).&lt;/td&gt;
&lt;td&gt;Git subtree is more like system-based development, where your repository contains everything at once, and you can modify any part.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Suitable for smaller repository size&lt;/td&gt;
&lt;td&gt;Suitable for bigger repository size&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h1&gt;
  
  
  Git Commands
&lt;/h1&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;S. No&lt;/th&gt;
&lt;th&gt;Command Name&lt;/th&gt;
&lt;th&gt;Use&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;git init&lt;/td&gt;
&lt;td&gt;Initialise a local Git Repository&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;git add .&lt;/td&gt;
&lt;td&gt;Add one or more files to the staging area&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;git commit -m “Commit Message”&lt;/td&gt;
&lt;td&gt;Commit changes to the head but not to the remote repository.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;git status&lt;/td&gt;
&lt;td&gt;Check the status of your current repository and list the files you have changed.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;git log&lt;/td&gt;
&lt;td&gt;Provides a list of all commits made on a branch&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;git diff&lt;/td&gt;
&lt;td&gt;View the changes you have made to the file&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;td&gt;git push origin &lt;/td&gt;
&lt;td&gt;Push the branch to the remote repository so that others can use it.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;git config --global user.name “Name”&lt;/td&gt;
&lt;td&gt;Tell Git who you are by configuring the author name&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;td&gt;git config --global user.email &lt;a href="mailto:user@email.com"&gt;user@email.com&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;Tell Git who you are by configuring the author email id.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;git clone &lt;/td&gt;
&lt;td&gt;Creates a Git repository copy from a remote source&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;td&gt;git remote add origin &lt;/td&gt;
&lt;td&gt;Connect your local repository to the remote server and add the server to be able to push it.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;12&lt;/td&gt;
&lt;td&gt;git branch &lt;/td&gt;
&lt;td&gt;Create a new branch&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;13&lt;/td&gt;
&lt;td&gt;git checkout &lt;/td&gt;
&lt;td&gt;Switch from one branch to another&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;14&lt;/td&gt;
&lt;td&gt;git merge &lt;/td&gt;
&lt;td&gt;Merge the branch into the active branch&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;15&lt;/td&gt;
&lt;td&gt;git rebase&lt;/td&gt;
&lt;td&gt;Reapply commits on top of another base tip&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;16&lt;/td&gt;
&lt;td&gt;git checkout -b &lt;/td&gt;
&lt;td&gt;Creates a new branch and switch to it&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;17&lt;/td&gt;
&lt;td&gt;git stash&lt;/td&gt;
&lt;td&gt;Stash changes into a dirty working directory&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;18&lt;/td&gt;
&lt;td&gt;git pull&lt;/td&gt;
&lt;td&gt;Update local repository to the newest commit&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;19&lt;/td&gt;
&lt;td&gt;git revert &lt;/td&gt;
&lt;td&gt;Revert commit changes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;20&lt;/td&gt;
&lt;td&gt;git clean -n&lt;/td&gt;
&lt;td&gt;Shows which files would be removed from working directory. Use the -f flag in place of the -n flag to execute the clean.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;21&lt;/td&gt;
&lt;td&gt;git log --summary&lt;/td&gt;
&lt;td&gt;View changes (detailed)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;22&lt;/td&gt;
&lt;td&gt;git diff HEAD&lt;/td&gt;
&lt;td&gt;Show difference between working directory and last commit.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;23&lt;/td&gt;
&lt;td&gt;git log --oneline&lt;/td&gt;
&lt;td&gt;View changes (briefly)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;24&lt;/td&gt;
&lt;td&gt;git reflog&lt;/td&gt;
&lt;td&gt;Show a log of changes to the local repository’s HEAD. Add --relative-date flag to show date info or --all to show all refs.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;25&lt;/td&gt;
&lt;td&gt;git rebase -i &lt;/td&gt;
&lt;td&gt;Interactively rebase current branch onto . Launches editor to enter commands for how each commit will be transferred to the new base.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;26&lt;/td&gt;
&lt;td&gt;git restore --staged &lt;/td&gt;
&lt;td&gt;Resetting a staged file&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;27&lt;/td&gt;
&lt;td&gt;git rm -r [File_name]&lt;/td&gt;
&lt;td&gt;Remove a file (or folder)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;28&lt;/td&gt;
&lt;td&gt;git config --list&lt;/td&gt;
&lt;td&gt;List all variables set in config file, along with their values&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;29&lt;/td&gt;
&lt;td&gt;git branch -d &lt;/td&gt;
&lt;td&gt;Delete local branch in Git&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;30&lt;/td&gt;
&lt;td&gt;git push -d  &lt;/td&gt;
&lt;td&gt;Delete remote branch in Git&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;31&lt;/td&gt;
&lt;td&gt;git stash pop&lt;/td&gt;
&lt;td&gt;Unstash the changes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;32&lt;/td&gt;
&lt;td&gt;git commit -am&lt;/td&gt;
&lt;td&gt;The -am along with the command is to write the commit message on the command line for already staged files.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;33&lt;/td&gt;
&lt;td&gt;git commit --amend&lt;/td&gt;
&lt;td&gt;The --amend option is used to edit the last commit. In case we need to change the last commit message, this command can be used.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;34&lt;/td&gt;
&lt;td&gt;git rm&lt;/td&gt;
&lt;td&gt;The git rm command is used to remove or delete files from working tree and index.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;35&lt;/td&gt;
&lt;td&gt;git pull --rebase&lt;/td&gt;
&lt;td&gt;Git rebase is used to rewrite commits from one branch to another branch.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;36&lt;/td&gt;
&lt;td&gt;git merge --squash&lt;/td&gt;
&lt;td&gt;The --squash option with git merge produces the working tree and index state as if a real merge had happened, but discards the merge history.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;37&lt;/td&gt;
&lt;td&gt;git revert -e &lt;/td&gt;
&lt;td&gt;Edit the commit message before reverting; -e is used for the same.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;38&lt;/td&gt;
&lt;td&gt;git bisect&lt;/td&gt;
&lt;td&gt;Git bisect goes through all the previous commits and uses binary search to find the commit that introduced the bug.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;39&lt;/td&gt;
&lt;td&gt;git blame&lt;/td&gt;
&lt;td&gt;git blame is used to know who/which commit is responsible for the latest changes in the repository.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;40&lt;/td&gt;
&lt;td&gt;git cherry-pick&lt;/td&gt;
&lt;td&gt;Choosing a commit from one branch and applying it to another is known as cherry picking in Git.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

</description>
      <category>git</category>
      <category>vcs</category>
      <category>github</category>
      <category>devops</category>
    </item>
    <item>
      <title>AWS Elastic Load Balancer for Serverless Architecture</title>
      <dc:creator>Vinoth Mohan</dc:creator>
      <pubDate>Wed, 02 Feb 2022 07:54:31 +0000</pubDate>
      <link>https://dev.to/vinothmohan/aws-elastic-load-balancer-for-serverless-architecture-ede</link>
      <guid>https://dev.to/vinothmohan/aws-elastic-load-balancer-for-serverless-architecture-ede</guid>
      <description>&lt;p&gt;Load balancing is a significant part of every internet-facing software, and with Elastic Load Balancing (ELB), AWS offers a set of load balancers for every use case. &lt;/p&gt;

&lt;h2&gt;
  
  
  What is Elastic Load Balancer (ELB)?
&lt;/h2&gt;

&lt;p&gt;ELB is a set of load balancing(LB) services offered by AWS. They include &lt;strong&gt;Classic Load Balancer&lt;/strong&gt;, &lt;strong&gt;Gateway Load Balancer&lt;/strong&gt;, &lt;strong&gt;Network Load Balancer&lt;/strong&gt;, and &lt;strong&gt;Application Load Balancer&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Each of these LBs covers different use-cases.
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The &lt;strong&gt;Classic Load Balancer&lt;/strong&gt; is a good choice for EC2 based architectures&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;strong&gt;Gateway Load Balancer&lt;/strong&gt; helps with third-party VMs in VPCs&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;strong&gt;Network Load Balancer&lt;/strong&gt; focuses on high-performance, low-level networking; think UDP-based connections for games or IoT&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;strong&gt;Application Load Balancer&lt;/strong&gt; is a high-level solution for software that uses the HTTP protocol&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the case of serverless architectures, all services use HTTP APIs, which means the ALB is the best choice. So, this article will focus on the ALB.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ko8P3ZfI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/auv1uqkrirpfsigj33xy.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ko8P3ZfI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/auv1uqkrirpfsigj33xy.jpg" alt="Image description" width="709" height="643"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Application Load Balancer (ALB)?
&lt;/h2&gt;

&lt;p&gt;The ALB’s focus on HTTP allows it to use parts of the protocol to make decisions about caching and save you some Lambda executions. This means your Lambda functions have to set their caching headers correctly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pricing
&lt;/h3&gt;

&lt;p&gt;While the ALB can integrate with Lambda, it isn’t a serverless service; there is no pay-as-you-go model, which means you also pay for idle time. But if your service has continuous, steady traffic, the ALB can be cheaper than API Gateway in the long run.&lt;/p&gt;

&lt;h3&gt;
  
  
  Limits
&lt;/h3&gt;

&lt;p&gt;Also, API Gateway has a default limit of 10,000 requests per second; the ALB doesn’t. Think of the ALB as an API Gateway with fewer features: bare-bones, but built for performance. If you’re going big, the ALB might be your only option.&lt;/p&gt;

&lt;h3&gt;
  
  
  Permissions
&lt;/h3&gt;

&lt;p&gt;The ALB is more of a traditional “strap in front of your public HTTP endpoint” kind of service. So, while it integrates with Lambda, it doesn’t offer IAM-based request authorization. You have to take care of authentication inside your serverless functions.&lt;/p&gt;
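&lt;p&gt;A hedged sketch of what that can look like: since the ALB forwards requests without authorizing them, the function itself checks the Authorization header. The token constant and the lowercase header key are assumptions for illustration; a real function would verify a signed token such as a JWT:&lt;/p&gt;

```python
# Hypothetical shared secret, for illustration only.
EXPECTED_TOKEN = "replace-with-a-real-secret"

def handler(event, context):
    """Reject requests that don't carry the expected bearer token."""
    # In ALB events, header names typically arrive lowercased.
    headers = event.get("headers") or {}
    auth = headers.get("authorization", "")
    if auth != f"Bearer {EXPECTED_TOKEN}":
        return {"statusCode": 401, "isBase64Encoded": False,
                "headers": {"Content-Type": "text/plain"},
                "body": "Unauthorized"}
    return {"statusCode": 200, "isBase64Encoded": False,
            "headers": {"Content-Type": "text/plain"},
            "body": "OK"}
```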

&lt;h3&gt;
  
  
  Transformations
&lt;/h3&gt;

&lt;p&gt;This traditional load balancing approach also means ALB can’t do request and response transforms; it just pipes your data along. Again, this makes the ALB less flexible than the API Gateway and shifts more work to Lambda.&lt;/p&gt;

&lt;h3&gt;
  
  
  Multi-Region
&lt;/h3&gt;

&lt;p&gt;You deploy the ALB to one region at a time. Again, this isn’t a serverless service, so more work on your side is required. To get your traffic balanced between multiple regions, you need Route53’s DNS-based balancing. &lt;/p&gt;

&lt;h3&gt;
  
  
  Configurations for Reliability or Costs
&lt;/h3&gt;

&lt;p&gt;Using ALB with a Lambda target usually delivers good reliability because Lambda scales automatically. If you need more than out-of-the-box reliability, you must deploy ALB to multiple regions and put it behind Route53. &lt;/p&gt;

&lt;p&gt;In terms of costs, Lambda can become your biggest cost driver. If you route every request to a Lambda function with a large memory configuration, things can get expensive quickly. So, follow the serverless best practice of keeping Lambda functions small and purpose-driven, and set up conditions on your ALB listeners so requests can go to functions with a smaller memory footprint where possible.&lt;/p&gt;
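&lt;p&gt;One way to sketch such a listener condition with boto3: route one heavy path pattern to its own target group (backed by a bigger-memory function) while everything else falls through to the default, smaller one. The ARNs, path pattern, and priority below are placeholders, not values from this article:&lt;/p&gt;

```python
def build_path_rule(listener_arn, target_group_arn, path_pattern, priority):
    """Build the kwargs for elbv2 create_rule that forward one
    path pattern to a dedicated target group."""
    return {
        "ListenerArn": listener_arn,
        "Priority": priority,
        "Conditions": [{"Field": "path-pattern", "Values": [path_pattern]}],
        "Actions": [{"Type": "forward", "TargetGroupArn": target_group_arn}],
    }

# Applying the rule would look like this (requires AWS credentials):
#   import boto3
#   elbv2 = boto3.client("elbv2")
#   elbv2.create_rule(**build_path_rule(listener_arn, reports_tg_arn,
#                                       "/reports/*", 10))
```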

&lt;h3&gt;
  
  
  Health Check Best Practices
&lt;/h3&gt;

&lt;p&gt;While an EC2 target can easily get overwhelmed, a Lambda target has more headroom because of its inherent autoscaling. Still, things can go wrong in a serverless system.&lt;/p&gt;

&lt;p&gt;AWS disables ALB health checks by default for Lambda targets, so you have to opt in here.&lt;/p&gt;

&lt;p&gt;While some issues can arise from buggy code pushed to Lambda, most problems come from upstream services your function depends on. So set up your Lambdas to forward the health check to those upstream services and respond with their result.&lt;/p&gt;
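&lt;p&gt;A minimal sketch of that pattern, assuming a /health path configured as the ALB health-check endpoint and a hypothetical check_upstream() probe for whatever dependency the function relies on:&lt;/p&gt;

```python
def check_upstream():
    """Placeholder: a real function would ping the database, queue,
    or API this Lambda depends on and return True/False."""
    return True

def handler(event, context):
    # The health-check branch reports the upstream's status, so an
    # unhealthy dependency marks this target unhealthy too.
    if event.get("path") == "/health":
        healthy = check_upstream()
        return {
            "statusCode": 200 if healthy else 503,
            "isBase64Encoded": False,
            "headers": {"Content-Type": "text/plain"},
            "body": "healthy" if healthy else "upstream unavailable",
        }
    # ...normal request handling...
    return {"statusCode": 200, "isBase64Encoded": False,
            "headers": {"Content-Type": "text/plain"}, "body": "OK"}
```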

&lt;p&gt;If things are broken, the only quick solution is to tell Route53 to route the following requests to a different deployment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Log Analysis in AWS
&lt;/h3&gt;

&lt;p&gt;AWS lets you use Amazon Athena to analyze your ALB logs. Athena is a serverless query service. You need to enable ALB access logging and save the logs to S3 before you can explore them with Athena’s SQL queries.&lt;/p&gt;
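&lt;p&gt;For example, assuming you have already created the alb_logs table over your access-log bucket as described in the AWS documentation, a status-code breakdown could be run via boto3 like this (the database and output-location names are placeholders):&lt;/p&gt;

```python
# SQL over the documented alb_logs schema: count requests per
# load-balancer status code, busiest codes first.
ALB_STATUS_QUERY = """
SELECT elb_status_code, count(*) AS requests
FROM alb_logs
GROUP BY elb_status_code
ORDER BY requests DESC
"""

# Running it would look like this (requires AWS credentials):
#   import boto3
#   athena = boto3.client("athena")
#   athena.start_query_execution(
#       QueryString=ALB_STATUS_QUERY,
#       QueryExecutionContext={"Database": "alb_db"},
#       ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
#   )
```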

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>lambda</category>
    </item>
  </channel>
</rss>
