<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: jhaji12</title>
    <description>The latest articles on DEV Community by jhaji12 (@jhaji12).</description>
    <link>https://dev.to/jhaji12</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F827755%2Fa5c1aacd-120f-4f08-b478-f5e7bc2d1a37.jpeg</url>
      <title>DEV Community: jhaji12</title>
      <link>https://dev.to/jhaji12</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jhaji12"/>
    <language>en</language>
    <item>
      <title>My Interview Experience at Google - Customer Engineer, Gen AI</title>
      <dc:creator>jhaji12</dc:creator>
      <pubDate>Sun, 17 Sep 2023 19:25:49 +0000</pubDate>
      <link>https://dev.to/jhaji12/my-interview-experience-at-google-customer-engineer-gen-ai-996</link>
      <guid>https://dev.to/jhaji12/my-interview-experience-at-google-customer-engineer-gen-ai-996</guid>
      <description>&lt;p&gt;In the midst of an ordinary morning routine, a message notification on my phone brought a wave of unexpected excitement—a job opportunity at Google via LinkedIn.🤩🤩🤩 Skeptical but curious, I replied, expressing my interest in the Customer Engineer Gen AI role. The prospect of joining Google, even as a full-stack developer, was too alluring to ignore. To my astonishment, within minutes, I received a call from Google's HR team, a surreal moment I hadn't anticipated. As the conversation unfolded, I found myself invited to a screening round, signaling the beginning of a life-changing journey.&lt;/p&gt;

&lt;p&gt;On the day of the screening, I joined a Google Meet call, heart pounding with anticipation. The HR representative welcomed me and initiated discussions about my professional journey and work experience. What I thought would be a routine profile discussion quickly turned into a coding challenge. With determination, I tackled each question, emerging successfully from the initial screening round, a proud moment that I eagerly shared with family and friends.&lt;br&gt;
🥳🥳🥳&lt;br&gt;
Following the screening, an email from Google's recruiting team arrived, detailing the interview process, expectations, and comprehensive insights into each stage. Armed with this knowledge, I embarked on a week-long journey of intensive preparation, diving into Google Cloud fundamentals, Gen AI, Vertex AI, and various Google Cloud services.&lt;br&gt;
🫠🫠🫠&lt;br&gt;
The pivotal interview day arrived, and though my nerves were palpable, I resolved to give it my all. Two interviewers greeted me, creating an atmosphere of comfort and mentorship as they presented a coding challenge. They guided me through multiple problem-solving approaches, ultimately arriving at the optimal solution. As a pleasant surprise, they added a bonus question, keeping the interview relaxed and supportive throughout.&lt;br&gt;
🥹🥹🥹&lt;br&gt;
The joy of successfully passing the first round was tangible, a milestone celebrated with loved ones. An email from Google's HR soon followed, informing me of my qualification for the next round: RRK (Role Related Knowledge). I requested additional time to prepare, given my full-stack developer background and limited knowledge of Google Cloud, and the HR team graciously granted me a week.&lt;br&gt;
😁😁😁&lt;br&gt;
In the second round, I navigated questions about Gen AI and deployment while sharing insights from my freelance projects. The interview transitioned into situational inquiries, challenging me to prioritize stakeholders and defend my choices. Despite moments of doubt, I persevered, discussing containerization and addressing stakeholder needs.&lt;br&gt;
🫡🫡🫡&lt;br&gt;
However, my confidence faltered as the conversation delved into topics like hypervisors and VMware, ultimately affecting my performance. Constructive feedback from the interviewer acknowledged that my tech journey had only just begun. I awaited the outcome with a mix of hope and uncertainty.&lt;br&gt;
🥲🥲🥲&lt;br&gt;
A week later, an email from HR shattered my anticipation: rejection. Yet, they left me with a glimmer of hope, assuring me that my profile would be considered for future roles. While this chapter didn't end as I had hoped, it taught me invaluable lessons about resilience, preparedness, and the unpredictable nature of life's journey. With renewed enthusiasm, I look ahead, ready to embrace the next opportunity and continue learning.&lt;br&gt;
😇😇😇&lt;br&gt;
Life is an ever-evolving journey, filled with experiences meant to guide and inspire us. As we navigate this vast landscape of opportunities together, I invite you to share your stories and learnings. &lt;/p&gt;

&lt;p&gt;Happy Learning!🚀🚀🚀&lt;/p&gt;

&lt;p&gt;Let's connect on LinkedIn 😄- &lt;a href="https://www.linkedin.com/in/jyoti-jha-bb1461175"&gt;https://www.linkedin.com/in/jyoti-jha-bb1461175&lt;/a&gt;&lt;/p&gt;

</description>
      <category>googlecloud</category>
      <category>interview</category>
      <category>google</category>
      <category>coding</category>
    </item>
    <item>
      <title>Vault for Beginners</title>
      <dc:creator>jhaji12</dc:creator>
      <pubDate>Fri, 02 Jun 2023 07:22:06 +0000</pubDate>
      <link>https://dev.to/jhaji12/vault-for-beginners-33ef</link>
      <guid>https://dev.to/jhaji12/vault-for-beginners-33ef</guid>
      <description>&lt;p&gt;Let's dive into a story-like narrative to help us understand Vault in a more engaging way.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm4kldqerwe8sgyet0n89.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm4kldqerwe8sgyet0n89.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once upon a time, in a world where sensitive information held great power, there existed a kingdom known as Vault. Vault was a fortress designed to safeguard valuable secrets like API keys, passwords, and database credentials.&lt;/p&gt;

&lt;p&gt;The king of the kingdom understood the importance of protecting these secrets from falling into the wrong hands. To ensure the security of the kingdom, he summoned the expertise of HashiCorp, a renowned master of secrets management.&lt;/p&gt;

&lt;p&gt;The HashiCorp team introduced Vault, a powerful tool that would serve as the guardian of the kingdom's secrets. They began by installing Vault within the fortified walls of the kingdom, setting up a stronghold dedicated to secrets management.&lt;/p&gt;

&lt;p&gt;To start their journey with Vault, the king's team launched a development server. This allowed them to experiment and learn without risking the security of the actual secrets. The development server simulated the real Vault environment, providing them with a safe space to explore its capabilities.&lt;/p&gt;

&lt;p&gt;With the server up and running, the team initiated Vault by executing a special ritual known as initialization. Vault generated a set of initial root tokens and presented them to the team. These tokens granted unparalleled access and control over the secrets within Vault.&lt;/p&gt;

&lt;p&gt;However, Vault was sealed shut to protect the secrets from unauthorized access. To unlock the vault's doors, the team used the unseal keys obtained during initialization. Each key acted as a unique piece of a puzzle, and combining them unsealed the fortress, making the secrets accessible.&lt;/p&gt;

&lt;p&gt;Now that Vault was unsealed, the team embarked on their quest to interact with the kingdom's secrets. They leveraged Vault's command-line interface (CLI) and RESTful API to authenticate, store, retrieve, and manage secrets. Vault provided them with the tools and capabilities to ensure the secrets remained secure and accessible only to authorized users.&lt;/p&gt;

&lt;p&gt;The team discovered that Vault employed &lt;strong&gt;&lt;em&gt;secrets engines&lt;/em&gt;&lt;/strong&gt;, magical entities responsible for managing different types of secrets. They enabled and configured secrets engines based on their specific needs. Each secrets engine possessed unique powers, allowing the team to generate secrets on the fly, store them securely, and retrieve them when needed.&lt;/p&gt;

&lt;p&gt;To maintain order within the kingdom, Vault introduced access control. The team defined &lt;strong&gt;&lt;em&gt;policies&lt;/em&gt;&lt;/strong&gt; using a special language known as HCL (HashiCorp Configuration Language). These policies granted or restricted access to specific secrets and operations. They assigned policies to users, groups, or tokens, ensuring that only those with the right permissions could unlock the secrets' potential.&lt;/p&gt;

&lt;p&gt;As the team delved deeper into their journey, they discovered that Vault's powers extended beyond the kingdom's borders. Vault seamlessly integrated with external platforms like Kubernetes, databases, and cloud providers. These integrations allowed the team to extend their secrets management practices beyond the boundaries of their kingdom, ensuring a holistic approach to security.&lt;/p&gt;

&lt;p&gt;With time, the kingdom flourished under Vault's protection. Secrets remained secure, access was granted to those who deserved it, and the kingdom's sensitive information remained safe from harm. Vault had become an invaluable asset, empowering the kingdom and its people to harness the power of secrets responsibly.&lt;/p&gt;

&lt;p&gt;And so, the story of Vault continued, empowering organizations far and wide to protect their secrets, maintain control, and keep their sensitive information away from prying eyes.&lt;/p&gt;




&lt;p&gt;Vault is a popular open-source tool developed by HashiCorp that provides secrets management, secure storage, and access control for applications and infrastructure. It helps in safeguarding sensitive information such as API keys, passwords, certificates, and database credentials.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Installation and Setup:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Download and install Vault from the official HashiCorp website (&lt;a href="https://www.vaultproject.io/" rel="noopener noreferrer"&gt;https://www.vaultproject.io/&lt;/a&gt;). Choose the appropriate version for your operating system.&lt;br&gt;
Follow the installation instructions provided in the documentation.&lt;/p&gt;
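&lt;p&gt;As an example, on macOS or Linux with Homebrew you can install Vault from HashiCorp's tap (check the official install docs for your platform and the current package name):&lt;/p&gt;

```shell
# Add HashiCorp's Homebrew tap and install Vault
brew tap hashicorp/tap
brew install hashicorp/tap/vault

# Verify the installation
vault version
```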

&lt;ul&gt;
&lt;li&gt;Running a Development Server:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Vault has a development mode that allows you to run it locally for testing and learning purposes. Start a development server by running the following command in your terminal:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;vault server -dev&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Initializing Vault:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once the server is running, initialize Vault by executing the following command (note that a dev-mode server starts already initialized and unsealed, so this step and the unseal step below apply to a regular, non-dev server):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;vault operator init&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Vault will generate a set of initial root tokens and provide you with unseal keys. These keys are crucial for managing and accessing Vault.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Unsealing Vault:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Vault is initially sealed to protect the sensitive data stored within it. Unseal Vault using the unseal keys generated during initialization. Run the following command for each unseal key:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;vault operator unseal &amp;lt;unseal_key&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Interacting with Vault:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Vault provides a command-line interface (CLI) and a RESTful HTTP API for interacting with its services.&lt;br&gt;
Use the CLI or API to authenticate, store secrets, and perform various operations.&lt;br&gt;
Explore the available commands and capabilities by referring to the Vault documentation.&lt;/p&gt;
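&lt;p&gt;As a minimal sketch (assuming the dev server from the earlier step is running), you can point the CLI at it and write and read a secret using the KV engine that dev mode mounts at &lt;code&gt;secret/&lt;/code&gt;; the address and token come from the dev server's startup output, and the token below is a placeholder:&lt;/p&gt;

```shell
# Point the CLI at the local dev server (the token is a placeholder;
# use the root token printed by "vault server -dev")
export VAULT_ADDR='http://127.0.0.1:8200'
export VAULT_TOKEN='hvs.example-root-token'

# Store a secret in the KV v2 engine mounted at secret/
vault kv put secret/myapp db_user=admin db_pass=s3cr3t

# Read it back
vault kv get secret/myapp
```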

&lt;ul&gt;
&lt;li&gt;Secrets and Secrets Engines:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Vault uses the concept of secrets engines to manage different types of secrets.&lt;br&gt;
Enable and configure secrets engines based on your requirements. Examples include KeyValue, AWS, Azure, Database, and more.&lt;br&gt;
Secrets engines allow you to generate, store, and access secrets securely.&lt;/p&gt;
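&lt;p&gt;For instance, enabling a KV version 2 secrets engine at a custom path looks like this (the path name is illustrative):&lt;/p&gt;

```shell
# Enable a KV v2 secrets engine at a custom path
vault secrets enable -path=team-secrets kv-v2

# Confirm which secrets engines are enabled
vault secrets list
```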

&lt;ul&gt;
&lt;li&gt;Access Control:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Vault provides robust access control mechanisms to manage who can perform specific operations and access certain secrets.&lt;br&gt;
Define policies using the HashiCorp Configuration Language (HCL) to enforce fine-grained access control.&lt;br&gt;
Assign policies to users, groups, or tokens to control their permissions.&lt;/p&gt;
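&lt;p&gt;A small sketch of this flow, where the policy and path names are illustrative: write an HCL policy granting read access to one path, register it, and issue a token limited to that policy.&lt;/p&gt;

```shell
# Write a minimal HCL policy to a file (policy and path names are illustrative)
printf 'path "secret/data/myapp/*" {\n  capabilities = ["read", "list"]\n}\n' > myapp-read.hcl

# Register the policy with Vault
vault policy write myapp-read myapp-read.hcl

# Issue a token that carries only this policy
vault token create -policy=myapp-read
```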

&lt;ul&gt;
&lt;li&gt;Integrations:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Vault integrates with various tools and platforms, such as Kubernetes, databases, cloud providers, and more.&lt;br&gt;
Explore Vault's documentation to learn about specific integrations and how to configure them.&lt;/p&gt;
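&lt;p&gt;As one example, the Kubernetes integration starts by enabling its auth method; configuring the cluster endpoint and service accounts comes afterwards and is covered in the documentation:&lt;/p&gt;

```shell
# Enable the Kubernetes auth method so workloads can authenticate to Vault
vault auth enable kubernetes

# Confirm which auth methods are enabled
vault auth list
```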

&lt;p&gt;&lt;em&gt;Vault has many features and capabilities beyond the basic steps outlined here. As a beginner, it's recommended to read the official Vault documentation to explore them.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>opensource</category>
      <category>security</category>
      <category>development</category>
    </item>
    <item>
      <title>EC2 - Elastic Compute Cloud</title>
      <dc:creator>jhaji12</dc:creator>
      <pubDate>Sun, 03 Apr 2022 11:18:38 +0000</pubDate>
      <link>https://dev.to/aws-builders/ec2-elastic-compute-cloud-l91</link>
      <guid>https://dev.to/aws-builders/ec2-elastic-compute-cloud-l91</guid>
      <description>&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/8uP4fJ4uEwM"&gt;
&lt;/iframe&gt;
&lt;br&gt;
&lt;strong&gt;Elastic Compute Cloud&lt;/strong&gt; is a web service by AWS that provides scalable computing capacity. Here, computing refers to processing power, memory, networking, storage, and other resources. EC2 offers infrastructure as a service (IaaS).&lt;/p&gt;

&lt;p&gt;Amazon EC2 eliminates your need to invest in hardware up front, so you can develop and deploy applications faster. It is highly scalable and follows a pay-as-you-go model.  &lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits of EC2 -
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vzkadTWB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dj1h9pwiuvunh5cfojrw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vzkadTWB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dj1h9pwiuvunh5cfojrw.png" alt="Image description" width="720" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Features of EC2 -
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Instances - An instance is a virtual server for running applications on AWS infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AMI - An Amazon Machine Image is a template that provides the information required to launch instances.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Instance types - Amazon EC2 provides a range of instance types classified according to their use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity, giving you the flexibility to choose the appropriate mix of resources for your applications.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Secure login - EC2 allows secure login to your instances using key pairs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Regions and Availability Zones - A Region is a physical location around the world where AWS clusters data centers. Each logical group of data centers within a Region is called an Availability Zone. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;VPCs (Virtual Private Clouds) - A VPC enables you to launch AWS resources into a virtual network that you've defined.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Amazon Machine Image
&lt;/h2&gt;

&lt;p&gt;An Amazon Machine Image (AMI) is a template/prototype that contains a software configuration (for example, an operating system, server, and applications). From an AMI we can launch an instance, which is a copy of the AMI running as a virtual server in the cloud. &lt;br&gt;
An AMI includes EBS snapshots, launch permissions, and a block device mapping that specifies the volumes to attach to the instance when it's launched.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nu66QU9t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gpyxyx1w93a6sy4n4xi6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nu66QU9t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gpyxyx1w93a6sy4n4xi6.png" alt="Image description" width="289" height="294"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The diagram below shows how we can create and register an AMI, which includes an EBS snapshot or template that can be used to launch a new instance or to create another copy of the AMI within the same or a different AWS Region. If we want to make changes, we can deregister the AMI and register it again.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Gu1X5-7k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wm3urcbbmgx7bkdyc88u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Gu1X5-7k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wm3urcbbmgx7bkdyc88u.png" alt="Image description" width="476" height="192"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In simple words, an AMI is a base image (it can be based on different operating systems) from which we can launch instances.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating an EC2 Instance - Hands on
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Open the &lt;a href="https://console.aws.amazon.com/ec2/"&gt;Amazon EC2 console&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To launch an instance, click on EC2&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nx2YZohF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/py81obsrgyjhj9myj7fg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nx2YZohF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/py81obsrgyjhj9myj7fg.png" alt="Image description" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on Instances to see your instances and their running status&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--svCYgYxK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kwww3h7ez3v6pthlgnyz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--svCYgYxK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kwww3h7ez3v6pthlgnyz.png" alt="Image description" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If no instance has been created yet, you will see this: &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CIbQ_vAb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mx4x4qw85ouozwmbkhq9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CIbQ_vAb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mx4x4qw85ouozwmbkhq9.png" alt="Image description" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click on Launch instances; it will show you all the AMIs that are available for different use cases.
We will select Windows (Microsoft Windows Server 2019 Base - ami-0f9a92942448ac56f) to launch an instance with the Windows OS.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oGB1MRGs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kcnxj3cixa0ep7dr42uo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oGB1MRGs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kcnxj3cixa0ep7dr42uo.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After choosing the AMI, select the instance type according to your usage of CPU, RAM, storage, and bandwidth; on the free tier, only t2.micro is available.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6O3GCAzL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v6pc78rah4eqwvz6938b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6O3GCAzL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v6pc78rah4eqwvz6938b.png" alt="Image description" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Next, in Configure Instance Details, choose the number of instances to be created from the AMI. This is also the part where you can choose the VPC in which to launch the instance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lWc-dNPm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sf3lj7ip7e4u9omfckd7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lWc-dNPm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sf3lj7ip7e4u9omfckd7.png" alt="Image description" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add Storage - Here you will decide the size and volume type according to your requirements.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HJn3RDUH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u6bzitl5vgyc8i1bhsdb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HJn3RDUH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u6bzitl5vgyc8i1bhsdb.png" alt="Image description" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add Tags - We can add tags (such as name or department) for convenience; they make it easier to identify and manage our instances.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hoeC84By--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8flxfxbnhelmb63icwow.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hoeC84By--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8flxfxbnhelmb63icwow.png" alt="Image description" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Configure Security Group - Here we decide how we want to access our instances by choosing the protocol type (RDP, HTTP, HTTPS, SSH). RDP stands for Remote Desktop Protocol, which allows us to open a remote desktop session to Windows instances. We can also decide who can access our instance. You will see this warning: &lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Warning&lt;br&gt;
Rules with source of 0.0.0.0/0 allow all IP addresses to access your instance. We recommend setting security group rules to allow access from known IP addresses only.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hQ3SXSHD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uwekitqm87msjvtv8uwm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hQ3SXSHD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uwekitqm87msjvtv8uwm.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Review and Launch allows you to check your instance launch details. You can go back to edit each section. Click Launch to assign a key pair to your instance and complete the launch process.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On clicking Launch, AWS asks for a key pair, which you download and use for RDP access. Download and keep the key safe, as it can only be downloaded once. You can also select an existing key pair if you have generated one previously.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XZkOmEj_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2mvis61khvz29c7480qp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XZkOmEj_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2mvis61khvz29c7480qp.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Once your instance is in the running state, select the instance you want to connect to and click the Connect button&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Aze99uzQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c2eqyyh2bwyd2usgtp06.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Aze99uzQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c2eqyyh2bwyd2usgtp06.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Download the remote desktop file &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vazhb9VU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f8phps3u61sgevy26k0c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vazhb9VU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f8phps3u61sgevy26k0c.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click on Get password, which you will need when logging in to the remote desktop. Then upload the downloaded .pem file (the key pair you generated while launching the instance).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Br8_WF7s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pqph45g2ahkaevw9hcts.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Br8_WF7s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pqph45g2ahkaevw9hcts.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After uploading, click Decrypt password and copy the password so you can use it to log in via Remote Desktop Connection.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Y7q44HI---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0sg3zrds01qdn4ybpdgs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Y7q44HI---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0sg3zrds01qdn4ybpdgs.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open the downloaded remote desktop file and use the password to log in.
There you go: a remote desktop connection where you can build your application, completely independent of your personal computer and with its own resources. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Likewise, you can create instances from different AMIs.&lt;/p&gt;
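&lt;p&gt;The console walkthrough above can also be scripted with the AWS CLI. A minimal sketch, where the key pair name, security group ID, and tag value are placeholders you must replace with your own:&lt;/p&gt;

```shell
# Launch one t2.micro instance from the Windows Server 2019 Base AMI
# (key pair name and security group ID are placeholders)
aws ec2 run-instances \
    --image-id ami-0f9a92942448ac56f \
    --instance-type t2.micro \
    --count 1 \
    --key-name my-key-pair \
    --security-group-ids sg-0123456789abcdef0 \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=demo}]'

# Check the instance state by tag
aws ec2 describe-instances --filters 'Name=tag:Name,Values=demo'
```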

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JiPZehul--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/33alag7ludkpgh40vx6f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JiPZehul--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/33alag7ludkpgh40vx6f.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you want to create a template of your instance, stop it first; you can then create launch templates from your launched instances. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--R_bTEfgM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8dckuemmdoj7emw344ln.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--R_bTEfgM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8dckuemmdoj7emw344ln.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Likewise, you can create an image of your instance and launch another instance from your saved image (under My AMIs) instead of a stock AMI. 
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VjJdbLnX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/07nferqwunok07691pwl.png" alt="Image description" width="800" height="450"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's all for this blog. &lt;br&gt;
Next, I will be writing a post on instances and their types.&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>aws</category>
      <category>ec2</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Understanding the Dockerfile Commands</title>
      <dc:creator>jhaji12</dc:creator>
      <pubDate>Wed, 30 Mar 2022 08:01:45 +0000</pubDate>
      <link>https://dev.to/aws-builders/understanding-the-dockerfile-format-3cc6</link>
      <guid>https://dev.to/aws-builders/understanding-the-dockerfile-format-3cc6</guid>
      <description>&lt;p&gt;&lt;strong&gt;"Blueprint for creating a docker image"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/MeiJMDFSM5c"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker builds images automatically by reading the instructions from a Dockerfile. &lt;/li&gt;
&lt;li&gt;It is a plain text file (with no file extension) that contains, in order, all the commands needed to build a given image. &lt;/li&gt;
&lt;li&gt;By convention, it is named &lt;strong&gt;&lt;em&gt;Dockerfile&lt;/em&gt;&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A Docker image consists of read-only layers, each of which represents a Dockerfile instruction. The layers are stacked, and each one captures the change from the previous layer. For example, if I create a base layer of Ubuntu and then install Python in the second instruction, that creates a second layer. Likewise, any instruction that changes the filesystem (RUN, COPY, ADD) creates a new layer in the image. &lt;/p&gt;

&lt;p&gt;A container adds a thin read-write layer on top of an image's read-only layers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In simple words, a Dockerfile is a set of instructions, each of which creates a stacked layer; together the layers make up an image (which is a prototype or template for containers)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjai2p3xyrz1hky39ufl.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjai2p3xyrz1hky39ufl.jpeg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw5xexqi31wtlyswc2u2j.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw5xexqi31wtlyswc2u2j.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Frequently used Dockerfile commands -&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;FROM - Defines a base image, which can be pulled from Docker Hub &lt;br&gt;
(for example, to create a JavaScript application with Node as the backend, we need Node as the base image so the image can run a Node application.)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;RUN - Executes a command in a new image layer (we can have multiple RUN commands)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;CMD - Command to be executed when running a container (a Dockerfile should have only one CMD; if it has multiple CMDs, only the last one takes effect)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;EXPOSE - Documents which ports are exposed (It is only used for documentation)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ENV - Sets environment variables inside the image &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;COPY - Copies your local files/directories into the image.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ADD - A more feature-rich version of the COPY instruction. COPY is generally preferred over ADD. The major difference between ADD and COPY is that with ADD the source can be a URL (and ADD can also auto-extract local tar archives), whereas COPY only accepts local files and directories.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ENTRYPOINT - Defines a container's executable (you cannot override an ENTRYPOINT when starting a container unless you add the --entrypoint flag.)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;VOLUME - Defines which directory in an image should be treated as a volume. The volume is given a random name, which can be found using the docker &lt;strong&gt;inspect&lt;/strong&gt; command.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;WORKDIR - Defines the working directory for subsequent instructions in the Dockerfile (an important point to remember: it doesn't create a new intermediate layer in the image)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;#Basic Dockerfile&lt;br&gt;
FROM ubuntu:18.04 &lt;br&gt;
COPY . /app &lt;br&gt;
RUN make /app &lt;br&gt;
CMD python /app/app.py&lt;/code&gt;&lt;/p&gt;
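&lt;p&gt;A slightly fuller sketch that exercises more of the instructions listed above (the base image, app files, and port here are hypothetical):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;# hypothetical Node.js app&lt;br&gt;
FROM node:16&lt;br&gt;
WORKDIR /app&lt;br&gt;
ENV NODE_ENV=production&lt;br&gt;
COPY . /app&lt;br&gt;
RUN npm install&lt;br&gt;
EXPOSE 3000&lt;br&gt;
CMD ["node", "server.js"]&lt;/code&gt;&lt;/p&gt;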

&lt;p&gt;Each instruction creates one layer:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;FROM creates a layer from the ubuntu:18.04 Docker image.&lt;br&gt;
COPY adds files from your Docker client’s current directory.&lt;br&gt;
RUN builds your application with make.&lt;br&gt;
CMD specifies what command to run within the container.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let's see this demo example of Docker layer architecture -&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpvy63n4ifhx6wfiirkdl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpvy63n4ifhx6wfiirkdl.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If some files should be kept out of the Docker image (sensitive information such as .env files containing API keys, or any other files that are not needed), a .dockerignore file can be added at the same level as the Dockerfile, listing the files that should not be copied into the image. With this in place, when a COPY or ADD instruction in the Dockerfile specifies files to be added to the image, any file matched by the .dockerignore file is ignored and not added. &lt;/p&gt;
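&lt;p&gt;A minimal .dockerignore sketch (the entries are illustrative):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;# .dockerignore&lt;br&gt;
.env&lt;br&gt;
.git&lt;br&gt;
node_modules&lt;br&gt;
*.log&lt;/code&gt;&lt;/p&gt;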

&lt;p&gt;&lt;strong&gt;Shell and Exec forms&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;All three instructions (RUN, CMD and ENTRYPOINT) can be specified in shell form or exec form.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Shell form&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;RUN apt-get install python3 &lt;br&gt;
CMD echo "Hello world" &lt;br&gt;
ENTRYPOINT echo "Hello world"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Exec form&lt;/strong&gt;&lt;br&gt;
This is the preferred form for CMD and ENTRYPOINT instructions. ["executable", "param1", "param2", ...]&lt;/p&gt;
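&lt;p&gt;The same three instructions written in exec form:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;RUN ["apt-get", "install", "python3"]&lt;br&gt;
CMD ["echo", "Hello world"]&lt;br&gt;
ENTRYPOINT ["echo", "Hello world"]&lt;/code&gt;&lt;/p&gt;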

&lt;h2&gt;
  
  
&lt;strong&gt;Difference between RUN, CMD and ENTRYPOINT&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;RUN - The RUN instruction lets you install your application and the packages it requires. It executes commands on top of the current image and creates a new layer by committing the results. You will often find multiple RUN instructions in a Dockerfile.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;RUN apt-get install -y python&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;CMD - The CMD instruction lets you set a default command, which is executed only when you run the container without specifying a command. If the container runs with a command, the default is ignored. If the Dockerfile has more than one CMD instruction, all but the last are ignored.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;CMD "echo" "Hello World!"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;ENTRYPOINT - The ENTRYPOINT instruction lets you configure a container that runs as an executable. It looks similar to CMD because it also lets you specify a command with parameters. The difference is that the ENTRYPOINT command and parameters are not ignored when the container runs with command-line arguments.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Prefer ENTRYPOINT to CMD when building an executable Docker image and you need a command that is always executed. Additionally, use CMD to provide extra default arguments that can be overwritten from the command line when the container runs.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Choose CMD if you need to provide a default command and/or arguments that can be overwritten from the command line when the container runs.&lt;/p&gt;
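&lt;p&gt;A small sketch of how ENTRYPOINT and CMD combine (the image name is hypothetical):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;# Dockerfile&lt;br&gt;
ENTRYPOINT ["echo", "Hello"]&lt;br&gt;
CMD ["world"]&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run myimage          # prints "Hello world"&lt;br&gt;
docker run myimage Docker   # prints "Hello Docker" (CMD overridden)&lt;/code&gt;&lt;/p&gt;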

&lt;p&gt;To know more about it - &lt;br&gt;
 &lt;a href="https://jyotijha5916.ongraphy.com/blog/understanding-dockerfile" rel="noopener noreferrer"&gt;Click here&lt;/a&gt;&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>docker</category>
      <category>serverless</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Docker For Beginners</title>
      <dc:creator>jhaji12</dc:creator>
      <pubDate>Thu, 24 Mar 2022 09:48:12 +0000</pubDate>
      <link>https://dev.to/aws-builders/docker-for-beginners-36d7</link>
      <guid>https://dev.to/aws-builders/docker-for-beginners-36d7</guid>
      <description>&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/apgwM9hBGMQ"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Small Story to Understand the use of Containers 🧙🏻‍♀️&lt;br&gt;
"Let's meet our Characters first -&lt;/p&gt;

&lt;p&gt;John (Software Developer)🙋🏻‍♂️&lt;/p&gt;

&lt;p&gt;Harry(Software Tester)👨🏻‍💼&lt;/p&gt;

&lt;p&gt;John is a fresher, and he is very excited to start his journey as a software developer. He is working on his first project, which involves multiple languages, frameworks, and different libraries. As the project requires, he installs all the dependencies on his machine. 👨🏻‍💻&lt;/p&gt;

&lt;p&gt;He successfully finished his project, and it ran fine on his machine. &lt;/p&gt;

&lt;p&gt;Next, he has to get his project verified, so he sends the code to the testing team. Harry checks the code and it shows an 'Error' ❌, even though we know the code runs fine on John's machine.&lt;/p&gt;

&lt;p&gt;The problem lies with dependencies. The dependencies John used differ from those used by the tester, and that causes the problem. It can be a version mismatch (for example, John used Python 2.0 while the testing team is using Python 3.0), or there may be dependencies John doesn't even know his app is using (because they were already installed on his machine). There are many possible causes, and they are hard to track down again and again.&lt;/p&gt;

&lt;p&gt;Now, some dependencies or libraries depend on the operating system as well, which may also be one of the reasons. And John cannot hand over his entire OS to the testing team. 🤷🏻‍♂️&lt;/p&gt;

&lt;p&gt;Here, comes the role of Containers.&lt;/p&gt;

&lt;p&gt;With the help of containers, John can isolate his application from the environment, which solves the 'it works on my machine' problem. His application can then run quickly and reliably across computing environments. &lt;/p&gt;

&lt;p&gt;John gets appreciation from his manager, and he is very happy with his job and the team. &lt;/p&gt;

&lt;p&gt;🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What are Containers?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Creating a Docker container can be described as creating a small software package that can run a particular application and its associated processes.&lt;/p&gt;

&lt;p&gt;The container that you create becomes portable and can be run on Docker installations on all types of computers in the same way without fear of compatibility issues.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is the need for Containers if we have Virtual Machines?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A Virtual Machine (VM) is a compute resource that uses software instead of a physical computer to run programs and deploy apps. One or more virtual “guest” machines run on a physical “host” machine. Each virtual machine runs its own operating system and functions separately from the other VMs, even when they are all running on the same host. This means that, for example, a macOS virtual machine can run on a physical PC.&lt;/p&gt;

&lt;p&gt;Virtual machines have disadvantages that became the reason for using containers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Running multiple virtual machines on one physical machine can result in unstable performance if infrastructure requirements are not met.&lt;/li&gt;
&lt;li&gt;Virtual machines are less efficient and run slower than a full physical computer.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Containers, in contrast, can be started much faster than virtual machines, because a container uses the underlying host computer's operating system instead of booting its own every time it launches.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Docker?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Docker uses a client-server architecture. In simple words, the client communicates with the daemon over a REST API, requesting pull, build, or run operations. The daemon accepts the commands and does the heavy lifting of building, running, and distributing your Docker containers. If a client requests to pull an image, the daemon first searches the local image registry; if the image is not found there, it searches the default registry (Docker Hub), and then the daemon sends the image to the client.&lt;/p&gt;
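&lt;p&gt;This flow is visible in everyday commands (the image name here is just an example):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker pull ubuntu        # client asks daemon; daemon checks local cache, then Docker Hub&lt;br&gt;
docker run ubuntu echo hi # daemon creates a container from the image and runs the command&lt;/code&gt;&lt;/p&gt;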

&lt;p&gt;&lt;strong&gt;Docker Client&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you use commands such as 'docker run', the client sends them to dockerd (the daemon), which carries them out. The docker command uses the Docker API to communicate with the daemon. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Docker Daemon&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Docker daemon (dockerd) listens for Docker API requests (from the client) and manages Docker objects such as images, containers, networks, and volumes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker Registries&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A Docker registry stores Docker images. Docker Hub is a public registry that anyone can use, and Docker is configured to look for images on Docker Hub by default. You can even run your own private registry.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Images&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;An image is a read-only template with instructions for creating a Docker container. Often, an image is based on another image, with some additional customization. For example, you may build an image which is based on the ubuntu image, but installs the Apache web server and your application, as well as the configuration details needed to make your application run.&lt;/p&gt;
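&lt;p&gt;The Ubuntu-plus-Apache example above could look roughly like this (a sketch; the site directory is a placeholder):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;FROM ubuntu:20.04&lt;br&gt;
RUN apt-get update &amp;amp;&amp;amp; apt-get install -y apache2&lt;br&gt;
COPY ./site /var/www/html&lt;br&gt;
CMD ["apachectl", "-D", "FOREGROUND"]&lt;/code&gt;&lt;/p&gt;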

&lt;p&gt;You can understand it like this - &lt;/p&gt;

&lt;p&gt;An image is a class, and a container is an object of that image, &lt;/p&gt;

&lt;p&gt;where an object is an instance of a class and a class is a blueprint for its objects. &lt;/p&gt;
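&lt;p&gt;Just as one class can have many objects, one image can back many containers (the container names here are illustrative):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run -d --name web1 nginx&lt;br&gt;
docker run -d --name web2 nginx&lt;br&gt;
# two independent containers, both instances of the same nginx image&lt;/code&gt;&lt;/p&gt;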

&lt;p&gt;-Jyoti Jha&lt;/p&gt;

&lt;p&gt;Learning-&amp;gt; Teaching -&amp;gt; Enjoying :)&lt;/p&gt;

</description>
      <category>docker</category>
      <category>tutorial</category>
      <category>beginners</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
