<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Venkat</title>
    <description>The latest articles on DEV Community by Venkat (@heyvenatdev).</description>
    <link>https://dev.to/heyvenatdev</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F394173%2Fec48e85e-6681-48b7-8591-dde9bef8e461.jpg</url>
      <title>DEV Community: Venkat</title>
      <link>https://dev.to/heyvenatdev</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/heyvenatdev"/>
    <language>en</language>
    <item>
      <title>IAM Best Practices - AWS</title>
      <dc:creator>Venkat</dc:creator>
      <pubDate>Sun, 03 Apr 2022 15:28:00 +0000</pubDate>
      <link>https://dev.to/heyvenatdev/iam-best-practices-aws-iej</link>
      <guid>https://dev.to/heyvenatdev/iam-best-practices-aws-iej</guid>
      <description>&lt;p&gt;Hey everyone! Hope you're doing well and getting ready to read my yet another tech blog on IAM Best Practices - AWS. Let's discuss here on this.&lt;/p&gt;

&lt;h3&gt;Step 1 - Log in to the Console&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Visit &lt;br&gt;
&lt;a href="https://aws.amazon.com/console/"&gt;https://aws.amazon.com/console/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose &lt;strong&gt;Sign in to the console&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose  &lt;strong&gt;Root user&lt;/strong&gt;. Enter the &lt;strong&gt;Root user email address&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose &lt;strong&gt;Next&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter the &lt;strong&gt;Password for the root user&lt;/strong&gt;. Choose &lt;strong&gt;Sign in&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;Step 2 - Enable MFA (optional)&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;At the top right, choose your account name. Then choose My Security Credentials from the drop-down menu.&lt;/li&gt;
&lt;li&gt;Expand Multi-factor authentication (MFA). Choose Activate MFA.&lt;/li&gt;
&lt;li&gt;In the Manage MFA device pop-up window, choose Virtual MFA device and choose Continue.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; You will need a virtual MFA application installed on your mobile device or computer. Step 1 of the Set up virtual MFA device pop-up window includes a hyperlink to a list of compatible applications. Before continuing, make sure you have one of these applications installed.&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Choose Show QR code and scan the code using your device.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If you are using a computer you can choose Show secret key and type the secret key into your MFA application.&lt;/p&gt;

&lt;ol start="5"&gt;
&lt;li&gt;Type the first MFA code into the MFA code 1 field. Then type the second generated code into the MFA code 2 field. Choose Assign MFA.&lt;/li&gt;
&lt;li&gt;You should see a pop-up indicating that you have successfully assigned a virtual MFA device. Choose Close. &lt;/li&gt;
&lt;li&gt;Expand Access keys (access key ID and secret access key).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; There should be no access keys listed. If an access key exists for your new account, choose Delete under Actions, choose Deactivate, enter the access key ID in the confirmation field, and choose Delete.&lt;/p&gt;

&lt;h3&gt;Step 3 - Create an IAM user&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;In the service search bar, type IAM and open the Identity and Access Management (IAM) dashboard. On the left side panel, choose Users.&lt;/li&gt;
&lt;li&gt;Choose Add user. Enter Admin for the User name. Next to Access type, choose Programmatic access and AWS Management Console access.&lt;/li&gt;
&lt;li&gt;Uncheck Require password reset. &lt;/li&gt;
&lt;li&gt;Choose Next: Permissions.&lt;/li&gt;
&lt;li&gt;Choose Attach existing policies directly. Next to Filter policies, search for administrator. Under Policy name, choose AdministratorAccess. Choose Next: Tags.&lt;/li&gt;
&lt;li&gt;Choose Next: Review. Choose Create user. &lt;/li&gt;
&lt;li&gt;You can sign in with the new IAM user by clicking the hyperlink at the bottom of the Success window.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; It should look similar to the following: &lt;a href="https://000000000000.signin.aws.amazon.com/console"&gt;https://000000000000.signin.aws.amazon.com/console&lt;/a&gt;. Your account number will be different :)&lt;/p&gt;
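&lt;p&gt;The account-specific sign-in URL pattern can be sketched in a few lines of Node.js (the 12-digit account ID below is a placeholder, not a real account):&lt;/p&gt;

```javascript
// Sketch: build the IAM user console sign-in URL for an account.
function consoleSignInUrl(accountId) {
  if (!/^\d{12}$/.test(accountId)) {
    throw new Error("AWS account IDs are 12 digits");
  }
  return `https://${accountId}.signin.aws.amazon.com/console`;
}

console.log(consoleSignInUrl("000000000000"));
// → https://000000000000.signin.aws.amazon.com/console
```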

&lt;ol&gt;
&lt;li&gt;Log in using the Admin user and password that you created.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;Step 4 - Set up an IAM role for an EC2 instance&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Now that you are logged in as the Admin user, search for IAM again in the service search bar. On the left side panel, choose Roles. Then, choose Create role. &lt;/li&gt;
&lt;li&gt;Choose AWS service. Choose EC2. Choose Next: Permissions. &lt;/li&gt;
&lt;li&gt;Next to Filter policies, search for amazons3full and choose AmazonS3FullAccess. &lt;/li&gt;
&lt;li&gt;Next to Filter policies search for amazondynamodb and choose AmazonDynamoDBFullAccess. &lt;/li&gt;
&lt;li&gt;Choose Next: Tags. Choose Next: Review. &lt;/li&gt;
&lt;li&gt;For Role name, enter S3DynamoDBFullAccessRole. Choose Create role.
&lt;strong&gt;Note: Using full-access policies is not recommended in a production environment. We are using them here as a proof of concept to get your demo up and running quickly. Once your Amazon S3 bucket and Amazon DynamoDB table are created, you can come back and modify this IAM role with more specific, restrictive permissions. More on this later.&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;
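&lt;p&gt;As a sketch of what those more specific and restrictive permissions might eventually look like, here is a hypothetical IAM policy scoped to a single bucket and table (the bucket name, table name, region, and account ID are all placeholder values):&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-demo-bucket",
        "arn:aws:s3:::my-demo-bucket/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
      "Resource": "arn:aws:dynamodb:us-east-1:000000000000:table/my-demo-table"
    }
  ]
}
```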

&lt;p&gt;Congratulations, you have successfully completed the exercise...🎉🎉🎉&lt;/p&gt;

&lt;p&gt;🚀 If you found something interesting in this article, please like and follow me for more posts.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>GCP Part -II (How to set up Kubernetes Engine)</title>
      <dc:creator>Venkat</dc:creator>
      <pubDate>Fri, 22 Oct 2021 17:20:51 +0000</pubDate>
      <link>https://dev.to/heyvenatdev/gcp-part-ii-how-to-set-up-kubernetes-engine-4e94</link>
      <guid>https://dev.to/heyvenatdev/gcp-part-ii-how-to-set-up-kubernetes-engine-4e94</guid>
      <description>&lt;p&gt;Hello there everyone!! Hope you're doing well and getting ready to read my yet another tech blog on Google Cloud Platform. This is continuation of part - I &lt;a href="https://dev.to/heyvenatdev/gcp-part-i-regions-zones-and-compute-engine-22dp"&gt;GCP Part -I (Regions, Zones and Compute Engine)&lt;/a&gt; and welcome to this blog and here you will get to know how to set up the Kubernetes Engine.&lt;/p&gt;

&lt;h2&gt;Kubernetes Engine&lt;/h2&gt;

&lt;p&gt;Let's quickly walk through setting up Kubernetes Engine in the Google Cloud console:&lt;/p&gt;

&lt;h4&gt;Activate Cloud Shell&lt;/h4&gt;

&lt;p&gt;Cloud Shell is a virtual machine loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.&lt;/p&gt;

&lt;p&gt;In the Cloud Console, in the top right toolbar, click the Activate Cloud Shell button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flcfrthrf0tgx6nwp11d7.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flcfrthrf0tgx6nwp11d7.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Continue&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5w5x5kf37jkk2tkt5fy.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5w5x5kf37jkk2tkt5fy.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It takes a few moments to provision and connect to the environment. When you are connected, you are already authenticated, and the project is set to your PROJECT_ID.&lt;/p&gt;

&lt;p&gt;gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.&lt;/p&gt;

&lt;p&gt;You can list the active account name with this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud auth list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can list the project ID with this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud config list project
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;Set a default compute zone&lt;/h4&gt;

&lt;p&gt;Your compute zone is an approximate regional location in which your clusters and their resources live. For example, us-central1-a is a zone in the us-central1 region.&lt;/p&gt;
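&lt;p&gt;Since zone names are just the region name plus a trailing zone letter, the region can be derived from a zone name; a tiny illustrative sketch:&lt;/p&gt;

```javascript
// Illustrative helper: the region is the zone name without its trailing
// zone letter (assumes the usual "region-letter" naming convention).
function regionOfZone(zone) {
  return zone.slice(0, zone.lastIndexOf("-"));
}

console.log(regionOfZone("us-central1-a")); // → us-central1
```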

&lt;p&gt;To set your default compute zone to us-central1-a, start a new session in Cloud Shell, and run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud config set compute/zone us-central1-a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;Create a GKE cluster&lt;/h4&gt;

&lt;p&gt;A cluster consists of at least one cluster master machine and multiple worker machines called nodes. Nodes are Compute Engine virtual machine (VM) instances that run the Kubernetes processes necessary to make them part of the cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Cluster names must start with a letter and end with an alphanumeric, and cannot be longer than 40 characters.&lt;/p&gt;
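&lt;p&gt;A quick sketch of that naming rule as a check you could run locally (it additionally assumes the GKE convention of lowercase letters, digits, and hyphens only):&lt;/p&gt;

```javascript
// Sketch of the rule from the note above: starts with a letter, ends
// with an alphanumeric, at most 40 characters, lowercase throughout.
function isValidClusterName(name) {
  return /^[a-z]([-a-z0-9]{0,38}[a-z0-9])?$/.test(name);
}

console.log(isValidClusterName("my-cluster"));   // → true
console.log(isValidClusterName("1st-cluster"));  // → false (starts with a digit)
console.log(isValidClusterName("a".repeat(41))); // → false (too long)
```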

&lt;p&gt;To create a cluster, run the following command, replacing [CLUSTER-NAME] with a name of your choice (for example: my-cluster).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud container clusters create [CLUSTER-NAME]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can ignore any warnings in the output. It might take several minutes to finish creating the cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Expected output&lt;/strong&gt; :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME        LOCATION       ...   NODE_VERSION  NUM_NODES  STATUS
my-cluster  us-central1-a  ...   1.16.13-gke.401  3        RUNNING
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;Get authentication credentials for the cluster&lt;/h4&gt;

&lt;p&gt;After creating your cluster, you need authentication credentials to interact with it.&lt;/p&gt;

&lt;p&gt;To authenticate the cluster, run the following command, replacing [CLUSTER-NAME] with the name of your cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud container clusters get-credentials [CLUSTER-NAME]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Expected output&lt;/strong&gt; :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Fetching cluster endpoint and auth data.
kubeconfig entry generated for my-cluster.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;Deploy an application to the cluster&lt;/h4&gt;

&lt;p&gt;You can now deploy a containerized application to the cluster.&lt;/p&gt;

&lt;p&gt;GKE uses Kubernetes objects to create and manage your cluster's resources. Kubernetes provides the Deployment object for deploying stateless applications like web servers. Service objects define rules and load balancing for accessing your application from the internet.&lt;/p&gt;

&lt;p&gt;To create a new Deployment hello-server from the hello-app container image, run the following kubectl create command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create deployment hello-server --image=gcr.io/google-samples/hello-app:1.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Expected output&lt;/strong&gt; :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;deployment.apps/hello-server created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Kubernetes command creates a Deployment object that represents hello-server. In this case, --image specifies a container image to deploy. The command pulls the example image from a Container Registry bucket. gcr.io/google-samples/hello-app:1.0 indicates the specific image version to pull. If a version is not specified, the latest version is used.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: The hello-app image used here is a public Google sample. To deploy your own application instead, build a container image and push it to your own registry first.&lt;/p&gt;

&lt;p&gt;To create a Kubernetes Service, which is a Kubernetes resource that lets you expose your application to external traffic, run the following kubectl expose command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl expose deployment hello-server --type=LoadBalancer --port 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this command:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1&lt;/strong&gt;. --port specifies the port that the container exposes.&lt;br&gt;
&lt;strong&gt;2&lt;/strong&gt;. --type=LoadBalancer creates a Compute Engine load balancer for your container.&lt;br&gt;
&lt;strong&gt;Expected output&lt;/strong&gt; :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;service/hello-server exposed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
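&lt;p&gt;If you prefer declarative configuration, the two imperative commands above correspond roughly to the following manifests (a sketch only; apply with &lt;strong&gt;kubectl apply -f&lt;/strong&gt;):&lt;/p&gt;

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      containers:
      - name: hello-app
        image: gcr.io/google-samples/hello-app:1.0
---
apiVersion: v1
kind: Service
metadata:
  name: hello-server
spec:
  type: LoadBalancer
  selector:
    app: hello-server
  ports:
  - port: 8080
    targetPort: 8080
```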



&lt;p&gt;To inspect the hello-server Service, run kubectl get:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Expected output&lt;/strong&gt; :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME              TYPE              CLUSTER-IP        EXTERNAL-IP      PORT(S)           AGE
hello-server      LoadBalancer      10.39.244.36      35.202.234.26    8080:31991/TCP    65s
kubernetes        ClusterIP         10.39.240.1       &amp;lt;none&amp;gt;           443/TCP           5m13s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: It might take a minute for an external IP address to be generated. Run the previous command again if the EXTERNAL-IP column status is pending.&lt;/p&gt;

&lt;p&gt;To view the application from your web browser, open a new tab and enter the following address, replacing [EXTERNAL-IP] with the EXTERNAL-IP value for hello-server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://[EXTERNAL-IP]:8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc3yo4k9z0vlxoknbt9i4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc3yo4k9z0vlxoknbt9i4.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;Deleting the cluster&lt;/h4&gt;

&lt;p&gt;To delete the cluster, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud container clusters delete [CLUSTER-NAME]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;Congratulations!&lt;/h3&gt;

&lt;p&gt;You have just deployed 🚀 a containerized application to Kubernetes Engine! 🎉🎉🎉&lt;/p&gt;

</description>
      <category>googlecloud</category>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
    <item>
      <title>GCP Part -I (Regions, Zones and Compute Engine)</title>
      <dc:creator>Venkat</dc:creator>
      <pubDate>Sat, 09 Oct 2021 11:31:50 +0000</pubDate>
      <link>https://dev.to/heyvenatdev/gcp-part-i-regions-zones-and-compute-engine-22dp</link>
      <guid>https://dev.to/heyvenatdev/gcp-part-i-regions-zones-and-compute-engine-22dp</guid>
      <description>&lt;p&gt;Hello my dear readers!!! Hope you're doing well and getting ready to read my yet another tech blog and this time we are into the cloud technologies...&lt;/p&gt;

&lt;p&gt;I am going to take you through this Google Cloud journey step by step, and I will be learning along with you, so let's kick-start our learning journey together.. :)&lt;/p&gt;

&lt;h2&gt;Regions &amp;amp; Zones&lt;/h2&gt;

&lt;p&gt;GCP provides 200+ services, and Google promotes it as the cleanest cloud in the industry. Google Cloud protects your data, applications, infrastructure, and customers from fraudulent activity, spam, and abuse with the same infrastructure and security services Google uses.&lt;/p&gt;

&lt;p&gt;Google Cloud’s networking, data storage, and compute services provide data encryption at rest, in transit, and in use. Advanced security tools support compliance and data confidentiality.&lt;/p&gt;

&lt;p&gt;Below is statistical information about the regions and zones,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5WhlDK-9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v4d912ote71zgw823w9m.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5WhlDK-9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v4d912ote71zgw823w9m.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;and its network, as of this writing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LNywNvSF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bc5490d1oudrkeb4ady1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LNywNvSF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bc5490d1oudrkeb4ady1.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Google Compute Engine&lt;/h2&gt;

&lt;p&gt;In corporate data centers, applications are deployed to physical servers. Where do you deploy applications in the cloud?&lt;br&gt;
&lt;strong&gt;1.&lt;/strong&gt; Rent virtual servers&lt;br&gt;
&lt;strong&gt;2.&lt;/strong&gt; &lt;strong&gt;Virtual Machines&lt;/strong&gt; - Virtual servers in GCP&lt;br&gt;
&lt;strong&gt;3.&lt;/strong&gt; &lt;strong&gt;Google Compute Engine (GCE)&lt;/strong&gt; - Provision &amp;amp; manage Virtual Machines&lt;/p&gt;

&lt;h3&gt;Compute Engine&lt;/h3&gt;

&lt;p&gt;Compute Engine is a secure and customizable compute service that lets you create and run virtual machines on Google’s infrastructure.&lt;/p&gt;

&lt;p&gt;Based on our requirements, we can choose from the following Compute Engine options:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Predefined machine types&lt;/strong&gt;: Start running quickly with pre-built and ready-to-go configurations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Custom machine types&lt;/strong&gt;: Create VMs with optimal amounts of vCPU and memory, while balancing cost.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Preemptible machines&lt;/strong&gt;: Reduce computing costs by up to 80% with affordable short-term instances.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Confidential computing&lt;/strong&gt;: Encrypt your most sensitive data while it’s being processed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rightsizing recommendations&lt;/strong&gt;: Optimize resource utilization with automatic recommendations.&lt;/p&gt;

&lt;p&gt;Let's look at the features Compute Engine provides across OS support, memory, cost, security, pricing, and more.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. VM Manager&lt;/strong&gt;&lt;br&gt;
VM Manager is a suite of tools that can be used to manage operating systems for large virtual machine (VM) fleets running Windows and Linux on Compute Engine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iv72ZDH2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u0tjre78hutmyf6vimw2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iv72ZDH2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u0tjre78hutmyf6vimw2.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fig : &lt;strong&gt;VM Manager architecture overview&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Confidential VMs&lt;/strong&gt;&lt;br&gt;
Confidential VMs are a breakthrough technology that allows you to encrypt data in use—while it’s being processed. It is a simple, easy-to-use deployment that doesn't compromise on performance. You can collaborate with anyone, all while preserving the confidentiality of your data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Live migration for VMs&lt;/strong&gt;&lt;br&gt;
Compute Engine virtual machines can live-migrate between host systems without rebooting, which keeps your applications running even when host systems require maintenance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Sole-tenant nodes&lt;/strong&gt;&lt;br&gt;
Sole-tenant nodes are physical Compute Engine servers dedicated exclusively for your use. Sole-tenant nodes simplify deployment for bring-your-own-license (BYOL) applications. Sole-tenant nodes give you access to the same machine types and VM configuration options as regular compute instances.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Custom machine types&lt;/strong&gt;&lt;br&gt;
Create a virtual machine with a custom machine type that best fits your workloads. By tailoring a custom machine type to your specific needs, you can realize significant savings.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Predefined machine types&lt;/strong&gt;&lt;br&gt;
Compute Engine offers predefined virtual machine configurations for every need from small general purpose instances to large memory-optimized instances with up to 11.5 TB of RAM or fast compute-optimized instances with up to 60 vCPUs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Preemptible VMs&lt;/strong&gt;&lt;br&gt;
Low-cost, short-term instances designed to run batch jobs and fault-tolerant workloads. Preemptible VMs provide significant savings of up to 80% while still getting the same performance and capabilities as regular VMs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. Instance groups&lt;/strong&gt;&lt;br&gt;
An instance group is a collection of virtual machines running a single application. It automatically creates and deletes virtual machines to meet demand, recovers workloads from failures, and rolls out updates.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;9. Persistent disks&lt;/strong&gt;&lt;br&gt;
Durable, high-performance block storage for your VM instances. You can create persistent disks in HDD or SSD formats. You can also take snapshots and create new persistent disks from that snapshot. If a VM instance is terminated, its persistent disk retains data and can be attached to another instance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;10. Local SSD&lt;/strong&gt;&lt;br&gt;
Compute Engine offers always-encrypted local solid-state drive (SSD) block storage. Local SSDs are physically attached to the server that hosts the virtual machine instance for very high input/output operations per second (IOPS) and very low latency compared to persistent disks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;11. GPU accelerators&lt;/strong&gt;&lt;br&gt;
GPUs can be added to accelerate computationally intensive workloads like machine learning, simulation, and virtual workstation applications. Add or remove GPUs to a VM when your workload changes and pay for GPU resources only while you are using them. Our new A2 VM family is based on the NVIDIA Ampere A100 GPU. You can learn more about the A2 VM family by requesting access to our alpha program.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;12. Global load balancing&lt;/strong&gt;&lt;br&gt;
Global load-balancing technology helps you distribute incoming requests across pools of instances across multiple regions, so you can achieve maximum performance, throughput, and availability at low cost.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;13. Linux and Windows support&lt;/strong&gt;&lt;br&gt;
Run your choice of OS, including Debian, CentOS, CoreOS, SUSE, Ubuntu, Red Hat Enterprise Linux, FreeBSD, or Windows Server 2008 R2, 2012 R2, and 2016. You can also use a shared image from the Google Cloud community or bring your own.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;14. Per-second billing&lt;/strong&gt;&lt;br&gt;
Google bills in second-level increments. You pay only for the compute time that you use.&lt;/p&gt;
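&lt;p&gt;As a back-of-envelope illustration of what per-second billing means (the hourly rate below is a made-up placeholder, not a real Compute Engine price):&lt;/p&gt;

```javascript
// Back-of-envelope per-second billing. Hypothetical rate, for illustration.
const hourlyRate = 0.10;           // made-up $/hour for some machine type
const secondsUsed = 45 * 60 + 30;  // instance ran for 45 minutes 30 seconds
const cost = (hourlyRate / 3600) * secondsUsed;
console.log(`$${cost.toFixed(4)}`); // → $0.0758
```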

&lt;p&gt;&lt;strong&gt;15. Commitment savings&lt;/strong&gt;&lt;br&gt;
With committed-use discounts, you can save up to 57% with no up-front costs or instance-type lock-in.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;16. Container support&lt;/strong&gt;&lt;br&gt;
Run, manage, and orchestrate Docker containers on Compute Engine VMs with Google Kubernetes Engine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;17. Reservations&lt;/strong&gt;&lt;br&gt;
Create reservations for VM instances in a specific zone. Use reservations to ensure that your project has resources for future increases in demand. When you no longer need a reservation, delete the reservation to stop incurring charges for it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;18. Right-sizing recommenda­tions&lt;/strong&gt;&lt;br&gt;
Compute Engine provides machine type recommendations to help you optimize the resource utilization of your virtual machine (VM) instances. Use these recommendations to resize your instance’s machine type to more efficiently use the instance’s resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;19. OS patch management&lt;/strong&gt;&lt;br&gt;
With OS patch management, you can apply OS patches across a set of VMs, receive patch compliance data across your environments, and automate installation of OS patches across VMs—all from a centralized location. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;20. Placement Policy&lt;/strong&gt;&lt;br&gt;
Use Placement Policy to specify the location of your underlying hardware instances. Spread Placement Policy provides higher reliability by placing instances on distinct hardware, reducing the impact of underlying hardware failures. Compact Placement Policy provides lower latency between nodes by placing instances close together within the same network infrastructure.&lt;/p&gt;

&lt;h4&gt;Compute Engine pricing&lt;/h4&gt;

&lt;p&gt;Pricing for Compute Engine is based on per-second usage of the machine types, persistent disks, and other resources that you select for your virtual machines. If you have a specific project in mind, use the pricing calculator to estimate cost.&lt;/p&gt;

&lt;p&gt;Okay, this should be more than enough to kick-start our exploration in the cloud console here: &lt;a href="https://console.cloud.google.com/compute/instances"&gt;Console&lt;/a&gt; 🎉🎉.&lt;/p&gt;

&lt;p&gt;Thank you so much for sticking around and holding on to the end.&lt;/p&gt;

&lt;p&gt;Until next time!&lt;/p&gt;

</description>
      <category>googlecloud</category>
      <category>cloudskills</category>
      <category>regions</category>
      <category>computeengine</category>
    </item>
    <item>
      <title>How Internet Message Access Protocol(IMAP) works in Node JS</title>
      <dc:creator>Venkat</dc:creator>
      <pubDate>Sat, 02 Oct 2021 09:18:11 +0000</pubDate>
      <link>https://dev.to/heyvenatdev/how-internet-message-access-protocol-imap-works-in-node-js-1jh5</link>
      <guid>https://dev.to/heyvenatdev/how-internet-message-access-protocol-imap-works-in-node-js-1jh5</guid>
      <description>&lt;p&gt;Hello my dear peers 😃! Hope you're doing well. Welcome to my tech blog and this time we are discussing about &lt;strong&gt;IMAP&lt;/strong&gt; package and it's uses in Node JS with real time code snippet examples. In this, first will only focus on reading emails.&lt;/p&gt;

&lt;h4&gt;node-imap is an IMAP client module for node.js.&lt;/h4&gt;

&lt;p&gt;Let's open our terminal and run &lt;strong&gt;npm install node-imap&lt;/strong&gt; to install the IMAP package.&lt;/p&gt;

&lt;p&gt;In this blog, we are mainly focusing on how to read email attachments based on a &lt;strong&gt;DATE RANGE&lt;/strong&gt;, a &lt;strong&gt;FROM&lt;/strong&gt; email address, and the &lt;strong&gt;SUBJECT&lt;/strong&gt;.&lt;/p&gt;
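&lt;p&gt;Before the full examples, here is a small sketch of how those filters translate into a node-imap search-criteria array (the sender address, subject, and date below are placeholders):&lt;/p&gt;

```javascript
// Sketch: assembling node-imap search criteria for the filters above.
function buildSearchCriteria(fromAddress, subject, sinceDate) {
  return [
    ["FROM", fromAddress],          // messages from this sender
    ["HEADER", "SUBJECT", subject], // subject header contains this text
    ["SINCE", sinceDate],           // received on or after this date
  ];
}

console.log(JSON.stringify(
  buildSearchCriteria("sender@example.com", "Monthly report", "April 20, 2022")
));
```

&lt;p&gt;The resulting array is what you would pass to &lt;strong&gt;imap.search&lt;/strong&gt;, as shown later in this post.&lt;/p&gt;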

&lt;p&gt;Let's start with the example code below, which fetches the first 3 email messages from the mailbox.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var Imap = require('node-imap'),
    inspect = require('util').inspect;

var imap = new Imap({
  user: 'mygmailname@gmail.com',
  password: 'mygmailpassword',
  host: 'imap.gmail.com',
  port: 993,
  tls: true
});

function openInbox(cb) {
  imap.openBox('INBOX', true, cb);
}

imap.once('ready', function() {
  openInbox(function(err, box) {
    if (err) throw err;
    var f = imap.seq.fetch('1:3', {
      bodies: 'HEADER.FIELDS (FROM TO SUBJECT DATE)',
      struct: true
    });
    f.on('message', function(msg, seqno) {
      console.log('Message #%d', seqno);
      var prefix = '(#' + seqno + ') ';
      msg.on('body', function(stream, info) {
        var buffer = '';
        stream.on('data', function(chunk) {
          buffer += chunk.toString('utf8');
        });
        stream.once('end', function() {
          console.log(prefix + 'Parsed header: %s', inspect(Imap.parseHeader(buffer)));
        });
      });
      msg.once('attributes', function(attrs) {
        console.log(prefix + 'Attributes: %s', inspect(attrs, false, 8));
      });
      msg.once('end', function() {
        console.log(prefix + 'Finished');
      });
    });
    f.once('error', function(err) {
      console.log('Fetch error: ' + err);
    });
    f.once('end', function() {
      console.log('Done fetching all messages!');
      imap.end();
    });
  });
});

imap.once('error', function(err) {
  console.log(err);
});

imap.once('end', function() {
  console.log('Connection ended');
});

imap.connect();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are scenarios where you need to fetch only the attachments from an email and process them for a different purpose. In such cases, refer to the code example below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var imap = new Imap({
  user: 'mygmailname@gmail.com',
  password: 'mygmailpassword',
  host: 'imap.gmail.com',
    port: 993,
    tls: true,
  });
  imap.once("ready", function () {
    var fs = require("fs"),
      fileStream;
    imap.openBox("INBOX", true, function (err, box) {
      if (err) throw err;
      try {
        imap.search(
          [
            ["FROM", FROM_MAIL],
            ["HEADER", "SUBJECT", SUBJECT],
            ["UNSEEN", ["SINCE", "Day, Year"]],
          ],
          function (err, results) {
            if (err) throw err;
            try {
              var f = imap.fetch(results, {
                bodies: ["HEADER.FIELDS (FROM TO SUBJECT DATE)"],
                struct: true,
              });
              f.on("message", function (msg, seqno) {
                console.log("Message #%d", seqno);

                var prefix = "(#" + seqno + ") ";
                msg.on("body", function (stream, info) {
                  var buffer = "";
                  stream.on("data", function (chunk) {
                    buffer += chunk.toString("utf8");
                  });
                  stream.once("end", function () {
                    console.log(
                      prefix + "Parsed header: %s",
                      Imap.parseHeader(buffer)
                    );
                  });
                });
                msg.once("attributes", function (attrs) {
                  // console.log("test", attrs);
                  var attachments = findAttachmentParts(attrs.struct);
                  console.log(
                    prefix + "Has attachments: %d",
                    attachments.length
                  );
                  for (var i = 0, len = attachments.length; i &amp;lt; len; ++i) {
                    var attachment = attachments[i];

                    var f = imap.fetch(attrs.uid, {
                      //do not use imap.seq.fetch here
                      bodies: [attachment.partID],
                      struct: true,
                    });
                    //build function to process attachment message
                    f.on("message", processAttachment(attachment));
                  }
                });
                msg.once("end", function () {
                  console.log(prefix + "Finished email");
                });
              });
              f.once("error", function (err) {
                console.log("Fetch error: " + err);
              });
              f.once("end", function () {
                console.log("Done fetching all messages!");
                imap.end();
              });
            } catch (e) {
              console.log("err", e);
            }
          }
        );
      } catch (e) {
        console.log("log", e);
      }
    });
  });

  imap.once("error", function (err) {
    console.log(err);
  });

  imap.once("end", function () {
    console.log("Connection ended");
  });
  imap.connect();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
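The example above calls findAttachmentParts, which is not part of node-imap itself. Here's a minimal sketch of such a helper: it recursively walks the message's BODYSTRUCTURE (the attrs.struct array) and collects the parts whose disposition marks them as attachments.

```javascript
// Sketch of the findAttachmentParts helper used above (not part of
// node-imap): recursively walk the BODYSTRUCTURE and collect parts
// with an attachment or inline disposition.
function findAttachmentParts(struct, attachments) {
  attachments = attachments || [];
  struct.forEach(function (part) {
    if (Array.isArray(part)) {
      // Nested multipart section: recurse into it.
      findAttachmentParts(part, attachments);
    } else if (part.disposition) {
      var type = String(part.disposition.type).toUpperCase();
      if (type === "ATTACHMENT" || type === "INLINE") {
        attachments.push(part);
      }
    }
  });
  return attachments;
}
```
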



&lt;p&gt;The downloaded email attachment must be decoded using the &lt;strong&gt;Base64Decode()&lt;/strong&gt; stream from the &lt;strong&gt;base64-stream&lt;/strong&gt; package.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function processAttachment(attachment) {
  var filename = attachment.params.name;
  var encoding = attachment.encoding;
  var name = filename.split(".")[1];
  console.log("log", name);

  return function (msg, seqno) {
    if (name === "pdf") {
      var prefix = "(#" + seqno + ") ";
      msg.on("body", function (stream, info) {
        //Create a write stream so that we can stream the attachment to file;
        console.log(
          prefix + "Streaming this attachment to file",
          filename,
          info
        );
        var path = require("path");
       // var dirPath = path.join(__dirname, "/attachments");
        var writeStream = fs.createWriteStream(filename);
        writeStream.on("finish", function () {
          console.log(prefix + "Done writing to file %s", filename);
        });

        if (toUpper(encoding) === "BASE64") {
          stream.pipe(new base64.Base64Decode()).pipe(writeStream);
        } else {
          stream.pipe(writeStream);
        }
      });
      msg.once("end", function () {
        console.log(prefix + "Finished attachment %s", filename);
      });
    }
  };
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: The processAttachment method above checks the file extension and handles only PDF documents.&lt;/p&gt;
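If you need to accept more than PDFs, one way (a sketch, using a hypothetical helper name) is to check the extension against a configurable allow-list instead of hard-coding "pdf":

```javascript
// Sketch: generalize the hard-coded "pdf" check to an allow-list.
// hasAllowedExtension is a hypothetical helper, not from node-imap.
function hasAllowedExtension(filename, allowed) {
  // Last dot-separated token, lower-cased for a case-insensitive match.
  var ext = String(filename).split(".").pop().toLowerCase();
  return allowed.indexOf(ext) !== -1;
}
```

Inside processAttachment you would then test `hasAllowedExtension(filename, ["pdf", "csv"])` instead of `extension === "pdf"`.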

&lt;p&gt;So, after processing the attachments, should those emails remain in the same inbox? Not at all; moving them to another folder lets us tell newly arrived emails apart from the ones already processed.&lt;/p&gt;

&lt;p&gt;You can move a processed email from the inbox to a specific folder using the code example below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; imap.seq.move(seqno, "Processed", function (err) {
                  if (!err) {
                    console.log(seqno + ": move success");
                  }
                });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Hope you got at least an idea of how to work with the node-imap package and with emails in Node.js 🎉🎉. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;References:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://www.npmjs.com/package/node-imap"&gt;https://www.npmjs.com/package/node-imap&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/mikebevz/node-imap"&gt;https://github.com/mikebevz/node-imap&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thank you for sticking around and holding on to the end.&lt;/p&gt;

&lt;p&gt;Until next time!&lt;/p&gt;

</description>
      <category>node</category>
      <category>javascript</category>
      <category>imap</category>
    </item>
    <item>
      <title>How to dockerize a React app with Nest JS server code...!</title>
      <dc:creator>Venkat</dc:creator>
      <pubDate>Fri, 23 Jul 2021 05:40:01 +0000</pubDate>
      <link>https://dev.to/heyvenatdev/how-to-dockerize-a-react-app-with-nest-js-server-code-4ka</link>
      <guid>https://dev.to/heyvenatdev/how-to-dockerize-a-react-app-with-nest-js-server-code-4ka</guid>
      <description>&lt;p&gt;Namaste coders :) Welcome to my tech blog on dockerizing React app with one of Node's typescript framework. This is my first ever post in &lt;strong&gt;DEV&lt;/strong&gt;, excited to contribute it 😃.&lt;/p&gt;

&lt;h4&gt;
  
  
  There are two ways you can dockerize them:
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;1&lt;/strong&gt;. Dockerize the React app and Nest JS separately and compose them.&lt;br&gt;
&lt;strong&gt;2&lt;/strong&gt;. Dockerize both apps in a single Dockerfile.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Dockerize the React app and Nest JS separately and compose them.
&lt;/h3&gt;

&lt;h4&gt;
  
  
  a). Dockerize the React app:
&lt;/h4&gt;

&lt;p&gt;Create a Dockerfile in the React app as below - &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

FROM node:14.16.1

WORKDIR /app

COPY package.json ./ 

RUN npm install

COPY . .

EXPOSE 3000

CMD ["npm", "start"]{% raw %}`
```


Also create a .dockerignore file

```
node_modules
.git
.gitignore
```
Next step is that we have to build the docker image of the React app.
```
 docker build . -t react
```

Now run the tagged image as below.
```
 docker run --name react -d -p 80:3000 react
```
Open http://localhost:3000 and you should see React served from Docker.

Also you can check the docker container running as below with `docker ps` command.
```
CONTAINER ID   IMAGE     COMMAND                  CREATED       STATUS         PORTS                                   NAMES
6aea1cf12647   react     "docker-entrypoint.s…"   11 days ago   Up 3 seconds   0.0.0.0:80-&amp;gt;3000/tcp, :::80-&amp;gt;3000/tcp   react
```
####b). Dockerize Nest JS code :
Create a docker file as below in your server directory- 
```
FROM node:14.16.1

WORKDIR /app

COPY package.json ./

RUN npm install

COPY . .

EXPOSE 5000

CMD [ "npm", "run", "start:dev" ]
```
As similar to above create a .dockerignore file

```
node_modules
.git
.gitignore
```
Next step is that we have to build the docker image of the server app.
```
 docker build . -t server
```

Now run the tagged image as below.
```
 docker run --name server -d -p 80:5000 server
```
Let's check this by hitting http://localhost:5000 and you should see your Nest JS being served from Docker.

So, now we have stepped into the final process of running both simultaneously by creating docker compose yaml file in the project root directory as below. 

```
version: "3"
services:
    frontend:
        container_name: client
        build:
            context: ./client
            dockerfile: Dockerfile
        image: react
        ports:
            - "3000:3000"
        volumes:
            - ./client:/app
    backend:
        container_name: backend
        build:
            context: ./server
            dockerfile: Dockerfile
        image: server
        ports:
            - "5000:5000"
        volumes:
            - ./server:/app

```
Run the command `docker-compose up` and you should see both the apps running.



###2.Dockerize both of the apps in a single docker file.

I would recommend this approach than the above, it's simple and preferred to follow for deploying these kind of applications for dev, qa and prod environments.

As we have both apps React and Nest JS server code. We initially need to build our React UI code and should copy the build folder contents as below - 

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xjfzr93av9h0ria6u4sa.jpg)

After, we need to paste them in public folder of the Nest JS server code root directory. 

**Note:** In Nest JS, you need to place server static module in *app.module.ts* file as below 
```
  ServeStaticModule.forRoot({
    rootPath: join(__dirname, '..', 'public'),
  }),
```

Finally, you are all set to run the docker file with commands below 
```
FROM node:14.16.1:lts-alpine
RUN mkdir -p /usr/src/app

WORKDIR /usr/src/app

COPY . .

RUN cd ./client &amp;amp;&amp;amp; npm ci  &amp;amp;&amp;amp; npm run build &amp;amp;&amp;amp; cd ..

RUN cd ./server &amp;amp;&amp;amp; npm ci  &amp;amp;&amp;amp; cd ..

RUN mkdir -p /usr/src/app/server/public

RUN cp -r ./client/build/* ./server/public/

WORKDIR  /usr/src/app/server

RUN npm run prebuild

RUN npm run build

EXPOSE 5000

CMD [ "npm", "run", "start:dev" ]
```
Build the docker file 
```
 docker build . -t ReactServer
```
And finally run the image 
```
docker run --name ReactServer -d -p 80:5000 ReactServer
```
Open http://localhost:5000 and you should see the application served from Docker.

Congratulations you successfully dockerized React UI and Nestjs server...🎉🎉🎉

🚀 If you read something interesting from this article, please like and follow me for more posts.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
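The single Dockerfile above ships both apps' node_modules and runs the dev server. As a variation worth noting, a multi-stage build can produce a smaller image for prod. This is only a sketch assuming the same client/server layout and a compiled server entry point at dist/main; adjust paths and scripts to your project.

```dockerfile
# Sketch of a multi-stage variant (assumes the same client/ and server/
# layout as above; paths and scripts are illustrative).
FROM node:14.16.1-alpine AS build
WORKDIR /usr/src/app
COPY . .
RUN cd ./client && npm ci && npm run build
RUN cd ./server && npm ci && npm run build

FROM node:14.16.1-alpine
WORKDIR /usr/src/app/server
# Copy only what the runtime needs: compiled server, deps, and static UI.
COPY --from=build /usr/src/app/server/dist ./dist
COPY --from=build /usr/src/app/server/node_modules ./node_modules
COPY --from=build /usr/src/app/server/package.json ./package.json
COPY --from=build /usr/src/app/client/build ./public
EXPOSE 5000
CMD [ "node", "dist/main" ]
```

Here the final image never sees the client's node_modules at all, and the server runs its compiled output instead of the dev watcher.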

</description>
      <category>react</category>
      <category>docker</category>
      <category>typescript</category>
    </item>
  </channel>
</rss>
