<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Vishal Raj</title>
    <description>The latest articles on DEV Community by Vishal Raj (@vishalraj82).</description>
    <link>https://dev.to/vishalraj82</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F139545%2Fe856392e-33e2-4c26-a6e3-ceb074bc6dd7.jpeg</url>
      <title>DEV Community: Vishal Raj</title>
      <link>https://dev.to/vishalraj82</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/vishalraj82"/>
    <language>en</language>
    <item>
      <title>Running K8s in local env</title>
      <dc:creator>Vishal Raj</dc:creator>
      <pubDate>Sun, 17 Dec 2023 02:12:33 +0000</pubDate>
      <link>https://dev.to/vishalraj82/running-k8s-in-local-env-1fk4</link>
      <guid>https://dev.to/vishalraj82/running-k8s-in-local-env-1fk4</guid>
      <description>&lt;p&gt;While there are many tools out there that let us run K8s in a local environment, the idea of my project is to go a step further and show how a production-like setup can be achieved locally with the help of certain tools. While this is just step 1 in the process of understanding how things work, there are many more aspects which are beyond the scope of this post.&lt;/p&gt;

&lt;p&gt;Please refer to the &lt;a href="https://kubernetes.io/docs/home/"&gt;official Kubernetes documentation&lt;/a&gt; for more information on how it works.&lt;/p&gt;

&lt;p&gt;Please follow the &lt;a href="https://github.com/vishalraj82/k8s-with-multipass"&gt;GitHub repository&lt;/a&gt; for more details on the K8s project.&lt;/p&gt;
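As a quick taste of the workflow, the cluster VMs can be created with Multipass and the cluster inspected with kubectl. This is a minimal sketch under my own assumptions (the VM name, sizes and a pre-bootstrapped cluster are illustrative), not the repository's exact steps:

```shell
# Launch an Ubuntu VM to act as a cluster node (name and sizes are illustrative)
multipass launch --name k8s-node-1 --cpus 2 --memory 2G --disk 10G

# Confirm the VM is up and reachable
multipass exec k8s-node-1 -- uname -a

# Once Kubernetes has been bootstrapped inside the VMs (see the repository),
# the cluster can be inspected from the host
kubectl get nodes -o wide
```

The repository automates these steps; the commands above only show the moving parts involved.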

</description>
    </item>
    <item>
      <title>Long running process in NodeJS</title>
      <dc:creator>Vishal Raj</dc:creator>
      <pubDate>Fri, 02 Dec 2022 15:30:24 +0000</pubDate>
      <link>https://dev.to/vishalraj82/long-running-process-in-nodejs-59e</link>
      <guid>https://dev.to/vishalraj82/long-running-process-in-nodejs-59e</guid>
      <description>&lt;p&gt;Every now and then, in the context of a JavaScript-based web application, we come across a situation where a client request needs to run for a really long time before a response can be sent back to the client. While there can be many ways to handle such a situation, such as queues, one solution can be off-loading the processing of the long-running request to another process. This method, of course, has its own list of pros and cons.&lt;/p&gt;

&lt;p&gt;Pros:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Does not involve external components such as a queue&lt;/li&gt;
&lt;li&gt;NodeJS supports IPC when using the child_process module&lt;/li&gt;
&lt;li&gt;The child process executes independently of the parent process&lt;/li&gt;
&lt;li&gt;The parent process can pass arguments when invoking the child process, which are received as CLI arguments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If the child process fails or crashes, there is no simple way to retry it.&lt;/li&gt;
&lt;li&gt;Since the child process runs separately, it consumes some memory of its own because of its independent execution context.&lt;/li&gt;
&lt;li&gt;If required, a polling mechanism has to be built to know the status of the child process.&lt;/li&gt;
&lt;/ul&gt;
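The idea above can be demonstrated with a tiny parent/child pair. This is a minimal sketch of my own (not the repository's code): the parent forks a child over an IPC channel, sends it work, and receives the reply.

```shell
# child.js: receives a message over the IPC channel, replies, then exits
echo 'process.on("message", (msg) => {
  // pretend this multiplication took a long time
  process.send({ answer: msg.n * 2 }, () => process.exit(0));
});' > child.js

# parent.js: fork() sets up the IPC channel automatically
echo 'const { fork } = require("child_process");
const child = fork("./child.js");
child.send({ n: 21 });
child.on("message", (msg) => console.log("child replied:", msg.answer));' > parent.js

node parent.js   # prints: child replied: 42
```

Because the child exits after replying, the IPC channel closes and the parent exits on its own, with no manual cleanup.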

&lt;p&gt;Nevertheless, I have created a very simple example of the aforementioned concept. Please find it &lt;a href="https://github.com/vishalraj82/nodejs-ipc"&gt;here&lt;/a&gt;. The &lt;a href="https://github.com/vishalraj82/nodejs-ipc/blob/main/README.md"&gt;README.md&lt;/a&gt; describes how the solution works.&lt;/p&gt;

</description>
      <category>node</category>
      <category>javascript</category>
      <category>webdev</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>AWS - Amazon File Cache</title>
      <dc:creator>Vishal Raj</dc:creator>
      <pubDate>Sat, 15 Oct 2022 14:32:42 +0000</pubDate>
      <link>https://dev.to/vishalraj82/aws-amazon-file-cache-3j14</link>
      <guid>https://dev.to/vishalraj82/aws-amazon-file-cache-3j14</guid>
      <description>&lt;h2&gt;Introduction&lt;/h2&gt;

&lt;p&gt;Very recently AWS introduced a new service called Amazon File Cache. So what is this? Amazon File Cache is a fully managed, scalable, high-speed temporary caching service for files. The files can reside either on-premises (NFS v3) or in the cloud, on supported file systems or Simple Storage Service (S3). It also crosses geographical boundaries, which means that the on-premises or cloud locations can be spread across the globe.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IGcmhqmC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b8zvhdt3nxtj038nkbjm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IGcmhqmC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b8zvhdt3nxtj038nkbjm.png" alt="Amazon File Cache" width="880" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;When to use File Cache&lt;/h2&gt;

&lt;p&gt;Given that Amazon File Cache can cross geographical boundaries, it can act as a proxy over multiple file systems created across regions, thereby providing seamless access to all the files. As an example, let's consider that you have files stored in S3 buckets in four different regions. If an EC2 instance needs to access the files from all the regions, it must be aware of each S3 location in order to get a file. With a File Cache created to sit in front of all four S3 buckets, the File Cache can simply be mounted as a folder for transparent access. File Cache supports both lazy loading (on-demand) and pre-loading, and this behavior can be configured for both data and its metadata.&lt;/p&gt;

&lt;h2&gt;How to access File Cache&lt;/h2&gt;

&lt;p&gt;File Cache works with most Linux distributions, such as RHEL, CentOS, SUSE and Ubuntu. In order to access File Cache from a Linux-based instance, an open-source Lustre client must be installed. Once that is done, File Cache can be mounted as a local directory on the instance. File Cache can be administered from the console, CLI and SDK.&lt;/p&gt;
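The mount step follows the usual Lustre workflow. The sketch below uses placeholder names (cache DNS name, mount name, paths) and an Amazon Linux 2 package as an example; consult the official File Cache documentation for the exact package and mount options on your distribution.

```shell
# 1. Install the Lustre client (package name varies per distribution)
sudo amazon-linux-extras install -y lustre      # example for Amazon Linux 2

# 2. Mount the File Cache as a local directory (DNS name is a placeholder)
sudo mkdir -p /mnt/cache
sudo mount -t lustre -o relatime,flock \
    fc-0123456789abcdef0.fsx.us-east-1.amazonaws.com@tcp:/mountname /mnt/cache

# 3. Files from the linked S3 buckets / NFS exports now appear as local files
ls /mnt/cache
```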

&lt;h2&gt;Security and Monitoring&lt;/h2&gt;

&lt;p&gt;File Cache encrypts data at rest and in transit. When File Cache needs to bring data from either S3 or an AWS file system, it ensures that the data is encrypted while in transit. For on-premises file systems, it is highly recommended to use a VPN or AWS Direct Connect in order to ensure privacy during file transfer.&lt;br&gt;
Like every other service, File Cache is integrated with AWS CloudWatch, allowing health and performance metrics to be monitored in real time.&lt;/p&gt;

&lt;h2&gt;Limitations&lt;/h2&gt;

&lt;p&gt;File Cache supports only up to 8 file systems or S3 buckets. You cannot mix file systems with S3 buckets; they must all be of the same type. The minimum provisioned size is 1.2 TiB, and capacity then grows in increments of 2.4 TiB. Metadata requires separate storage, apart from data. There is a limit of 100 File Caches per account. Also, the File Cache service is currently available only in the following regions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;United States - North Virginia, Ohio, Oregon&lt;/li&gt;
&lt;li&gt;Asia Pacific - Singapore, Sydney, Tokyo&lt;/li&gt;
&lt;li&gt;Canada - Central&lt;/li&gt;
&lt;li&gt;Europe - Ireland, London, Frankfurt&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Image credits - &lt;a href="https://aws.amazon.com"&gt;https://aws.amazon.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you liked my article, &lt;a href="https://www.buymeacoffee.com/vishalraj82"&gt;Buy me a Coffee&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>efs</category>
      <category>intro</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>AWS Devops Guru</title>
      <dc:creator>Vishal Raj</dc:creator>
      <pubDate>Sun, 25 Sep 2022 12:49:44 +0000</pubDate>
      <link>https://dev.to/vishalraj82/aws-devops-guru-1b2k</link>
      <guid>https://dev.to/vishalraj82/aws-devops-guru-1b2k</guid>
      <description>&lt;h3&gt;Introduction to Devops Guru&lt;/h3&gt;

&lt;p&gt;During the AWS re:Invent December 2020 event, AWS announced the release of a revolutionary new product – Devops Guru (press release here). As per the official documentation:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“AWS Devops Guru is a fully-managed operations service that uses machine learning to make it easier for developers to improve application availability by automatically detecting operational issues and recommending specific actions to remediations.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In today's world, applications are growing complex and distributed in nature. As more services are added, the Devops team needs to pay attention on multiple fronts, viz. logging, monitoring, alarm setup, notifications and more. These tasks become tedious and often repetitive in nature. In case of an incident or alarms going off, it can be overwhelming to understand what went wrong, when it occurred, what the root cause is and the probable fix. Often this procedure takes long, leading to a longer MTTR (Mean Time to Recovery) and thus a bad user experience. This is where Devops Guru steps in, to make life easier for developers as well as the Devops team.&lt;/p&gt;

&lt;h3&gt;What exactly is Devops Guru&lt;/h3&gt;

&lt;p&gt;Devops Guru is a fully managed operations service that enables developers and Devops teams to improve application availability and infrastructure performance. It has been designed to provide proactive as well as reactive insights for detected anomalies, accurate root cause analysis and the most probable fixes. Devops Guru is built on years of experience of operating numerous applications on AWS infrastructure.&lt;/p&gt;

&lt;h3&gt;How does Devops Guru work&lt;/h3&gt;

&lt;p&gt;Let's have a high-level view of how Devops Guru functions. It can be broken down into the following three stages.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In the first step, Devops Guru must be made aware of the resources to be monitored. This can be done by specifying an account, a CloudFormation stack or a list of tags which encompasses various resources.&lt;/li&gt;
&lt;li&gt;Once the boundaries have been set, Devops Guru starts analyzing the resources (the application and the corresponding infrastructure) with insights from CloudTrail, CloudWatch and more. It can take anywhere from a few hours up to a day before it starts producing useful insights.&lt;/li&gt;
&lt;li&gt;When configured, Devops Guru sends notifications via SNS for detected anomalies.&lt;/li&gt;
&lt;/ol&gt;
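The three stages above can be sketched with the AWS CLI. The stack name and SNS topic ARN below are placeholders, and the JSON shapes are abridged; check the `aws devops-guru` command reference for the full parameter schemas.

```shell
# 1. Point Devops Guru at the resources in one CloudFormation stack
aws devops-guru update-resource-collection \
    --action ADD \
    --resource-collection '{"CloudFormation": {"StackNames": ["my-app-stack"]}}'

# 2. After the learning period, check overall health and open insights
aws devops-guru describe-account-health

# 3. Route notifications to an SNS topic
aws devops-guru add-notification-channel \
    --config '{"Sns": {"TopicArn": "arn:aws:sns:us-east-1:123456789012:devops-guru-topic"}}'
```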

&lt;p&gt;Once Devops Guru is made aware of the list of resources to monitor, it starts analyzing the metrics and logs for the last two weeks to understand various usage patterns, and automatically adjusts itself to determine whether a change is really an anomaly or is expected. Since Devops Guru monitors resources continuously, there is no need to set or change any thresholds manually; as application behavior changes, it automatically adjusts as the patterns keep changing over time. Devops Guru uses machine learning to evaluate and create useful insights.&lt;/p&gt;

&lt;p&gt;Let's take a simple example of a CloudFormation template which defines three main components – an API Gateway, a Lambda function and a DynamoDB table to store data. Let's say that someone accidentally updates the DynamoDB table to reduce its read capacity. At the same time, the app sees a surge in HTTP traffic. Since the DynamoDB table is functioning at reduced capacity, it starts throttling, eventually leading to timeouts on database reads, and the API Gateway starts returning HTTP 500 to the users. If this issue were to be debugged manually, it might take a long time to detect the root cause. Had the system been monitored by Devops Guru, it would have detected that the DynamoDB configuration was changed right before the errors started to show up. Hence it can correlate the events and suggest an appropriate action, which leads to a faster MTTR.&lt;/p&gt;

&lt;h3&gt;Integrating Devops Guru with AWS services and third-party tools&lt;/h3&gt;

&lt;p&gt;Devops Guru natively integrates with various AWS services such as CloudWatch, X-Ray, CloudTrail, CloudFormation, Config and many more. It also integrates with EventBridge, enabling users to set up routing rules that determine where to send the notifications. It can also integrate with third-party incident management tools from Atlassian and PagerDuty. Both tools can ingest SNS notifications from Devops Guru and can be managed from their internal dashboards.&lt;/p&gt;

&lt;h3&gt;Devops Guru free tier availability&lt;/h3&gt;

&lt;p&gt;As with most AWS resources, Devops Guru also has a certain quota available as free tier usage. This includes 3 months of free usage with 7,200 resource hours per group, per month, of monitoring. It also includes 10,000 API calls to Devops Guru.&lt;/p&gt;

&lt;h3&gt;Devops Guru cost estimates&lt;/h3&gt;

&lt;p&gt;Devops Guru provides a cost estimator in its dashboard so that users can understand the budget and how much it would cost to use the service. Once the free tier usage is exhausted, you need to pay for using the Devops Guru service, which is charged based on the number of hours of active resource monitoring. Consider that if an S3 bucket is set up for monitoring, it is charged for as long as it is under monitoring. Alternatively, if an EC2 instance has been set up for monitoring but runs only for a few hours every 24 hours, the cost is incurred only for the time that the EC2 instance is up and running.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bibliography&lt;/strong&gt;:&lt;br&gt;
&lt;a href="https://podcasts.google.com/feed/aHR0cHM6Ly9kM2dpaDdqYmZlM2pscS5jbG91ZGZyb250Lm5ldC9hd3MtcG9kY2FzdC5yc3M/episode/aHR0cHM6Ly9zMy11cy13ZXN0LTIuYW1hem9uYXdzLmNvbS9hd3MtcG9kY2FzdC1yc3MvQVdTUG9kY2FzdC80NDQvMzA1N2YyYTEtODY2OS00OTI1LWJjZmMtYzQ2OWY4N2YxY2Jh?sa=X&amp;amp;ved=0CAUQkfYCahgKEwjIwKWF96z6AhUAAAAAHQAAAAAQrg4"&gt;AWS on Google podcasts&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/devops-guru/index.html"&gt;AWS Official Documentation&lt;/a&gt;&lt;br&gt;
YouTube - &lt;a href="https://www.youtube.com/watch?v=2uA8q-8mTZY"&gt;Episode 1&lt;/a&gt; / &lt;a href="https://www.youtube.com/watch?v=orPYMYCSbR8"&gt;Episode 2&lt;/a&gt; / &lt;a href="https://www.youtube.com/watch?v=N3NNYgzYUDA"&gt;Episode 3&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: &lt;em&gt;Images and examples used are from the sources mentioned in the bibliography.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>automation</category>
      <category>cloudskills</category>
    </item>
    <item>
      <title>Getting started with React &amp; TypeScript</title>
      <dc:creator>Vishal Raj</dc:creator>
      <pubDate>Sun, 24 Jul 2022 19:50:00 +0000</pubDate>
      <link>https://dev.to/vishalraj82/getting-started-with-react-typescript-4no5</link>
      <guid>https://dev.to/vishalraj82/getting-started-with-react-typescript-4no5</guid>
      <description>&lt;p&gt;This post is intended for developers looking to get started with frontend development using the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;React&lt;/li&gt;
&lt;li&gt;TypeScript&lt;/li&gt;
&lt;li&gt;Webpack 5&lt;/li&gt;
&lt;li&gt;SASS&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Why the aforementioned selection? I would say it is mostly my personal choice. While there are numerous bundlers available in the market, I find Webpack 5 great because of its flexibility and extensibility. I find the combination of the above four easy to work with, as well as a great fit for frontend projects. The GitHub repository aims to serve as boilerplate code for beginners.&lt;/p&gt;

&lt;p&gt;Please refer to the &lt;a href="https://github.com/vishalraj82/typescript-webpack5"&gt;Github repository&lt;/a&gt;.&lt;/p&gt;
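For orientation, this is one way to assemble the same stack from scratch. The package list is a hedged sketch of commonly paired packages; the repository's own package.json and webpack.config.js are the source of truth.

```shell
npm init -y
npm install react react-dom
npm install --save-dev typescript ts-loader @types/react @types/react-dom \
    webpack webpack-cli webpack-dev-server \
    sass sass-loader css-loader style-loader \
    html-webpack-plugin

npx tsc --init        # generate a tsconfig.json
npx webpack serve     # start the dev server (requires a webpack.config.js)
```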

</description>
      <category>react</category>
      <category>typescript</category>
      <category>webpack</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Using AWS to host a static website</title>
      <dc:creator>Vishal Raj</dc:creator>
      <pubDate>Fri, 27 May 2022 01:10:55 +0000</pubDate>
      <link>https://dev.to/vishalraj82/using-aws-to-host-a-static-website-57jo</link>
      <guid>https://dev.to/vishalraj82/using-aws-to-host-a-static-website-57jo</guid>
      <description>&lt;p&gt;This might be just one more article in the already existing plethora of similar articles teaching how to host a static website using a cloud provider's services. However, I will try to make it different by using simpler language and simple examples. If this makes sense, please continue reading…&lt;/p&gt;

&lt;h3&gt;Why host a static website?&lt;/h3&gt;

&lt;p&gt;Even before we begin, the first question that rings is: in the modern era of Web 3.0, why would somebody want a static site? It's just plain useless, isn't it? Well, maybe or maybe not. Not everyone in the world is a super techie, but having a website might be pretty cool for someone who just wants to share their wonderful trip photographs with friends, family, neighbors and colleagues. How about having a simple blog? What about small retailers who just want to showcase their products in order to attract customers to their brick-and-mortar shop? There can be numerous other use cases where a simple static site makes sense. If you still agree with me, please continue reading.&lt;/p&gt;

&lt;p&gt;So, why use so many technicalities to host just a simple static site? I would say the initial setup is only a one-time job and it requires little to no maintenance afterwards. Since we are focusing on AWS, here is the list of services that we will be utilizing to fulfil our purpose:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/s3/" rel="noopener noreferrer"&gt;Simle Storage Service&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/cloudfront/" rel="noopener noreferrer"&gt;CloudFront distribution&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/certificate-manager/" rel="noopener noreferrer"&gt;Certificate Manager&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/iam/" rel="noopener noreferrer"&gt;Identify and Access Management&lt;/a&gt; (Implicitly)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/route53/" rel="noopener noreferrer"&gt;Route53&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The diagram below coarsely represents the components above and how they relate to each other.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnl1xe0j95odesrphspgs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnl1xe0j95odesrphspgs.png" alt="AWS components"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Pre-requisites&lt;/h2&gt;

&lt;p&gt;Before we proceed ahead, we have two pre-requisites.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A registered domain name. If you just want to experiment, &lt;a href="https://freenom.com" rel="noopener noreferrer"&gt;freenom.com&lt;/a&gt; provides free domain names for up to 12 months, but only with certain TLDs.&lt;/li&gt;
&lt;li&gt;An AWS account. A fresh account is eligible for some free resources. See more &lt;a href="https://aws.amazon.com/free/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;Registering domain with Route53&lt;/h2&gt;

&lt;p&gt;AWS Route53 is a highly available and scalable cloud-based DNS service with a 100% uptime guarantee. It provides an array of services for DNS management. We shall begin by adding our domain to Route53 by creating a new public hosted zone.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fngbnxagf0cfh8a2af3pi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fngbnxagf0cfh8a2af3pi.png" alt="AWS Route53 name server details&amp;lt;br&amp;gt;
"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once we register our domain with Route53, it will provide us with the list of name servers to be used for our domain.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwuo3s91c63344paejmq1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwuo3s91c63344paejmq1.png" alt="AWS Route53 Add new public hosted zone&amp;lt;br&amp;gt;
"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These name servers are managed by AWS and we need to update this with our domain name registrar.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4jqwf8iyvem3350xobv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4jqwf8iyvem3350xobv.png" alt="Updating nameserver with domain provider&amp;lt;br&amp;gt;
"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Generating SSL certificate for domain&lt;/h2&gt;

&lt;p&gt;The AWS Certificate Manager provides public SSL certificates free of cost. We shall use it to generate the certificates for our domain, so that our domain can be accessed using the HTTPS protocol. This ensures that all communication between the user and the server is encrypted and safe.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpyggsu2vkmmsj379n35e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpyggsu2vkmmsj379n35e.png" alt="AWS Certiciate Manager request public SSL certificate&amp;lt;br&amp;gt;
"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will use the fully qualified domain name (FQDN), the www sub-domain and a * wildcard for any future sub-domains that we may need.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7g1d6bhq8voclrwy8ygd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7g1d6bhq8voclrwy8ygd.png" alt="AWS Certificate Manager domain details&amp;lt;br&amp;gt;
"&gt;&lt;/a&gt;&lt;br&gt;
After we request the SSL certificate, AWS ensures that we are the actual owners of the domain before issuing it. AWS asks us to add certain CNAME records against our domain, which it can verify. Once the records are added, it takes some time for them to propagate and be picked up by AWS.&lt;/p&gt;
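The same request can be made from the CLI. The domain names and certificate ARN below are placeholders; note that a certificate used with CloudFront must be issued in us-east-1.

```shell
# Request a certificate validated via DNS
aws acm request-certificate \
    --domain-name example.com \
    --subject-alternative-names www.example.com "*.example.com" \
    --validation-method DNS \
    --region us-east-1

# Read the CNAME validation records that need to be added to the hosted zone
aws acm describe-certificate \
    --certificate-arn arn:aws:acm:us-east-1:123456789012:certificate/abcd-1234 \
    --region us-east-1
```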

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fej5kt9rcovilthj0hu06.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fej5kt9rcovilthj0hu06.png" alt="AWS Certiciate Manager CNAME entries required&amp;lt;br&amp;gt;
"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once AWS verifies the domain with the CNAME entries, the certificate is issued and ready to be used.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpjk5oudujlx32oj4gh2v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpjk5oudujlx32oj4gh2v.png" alt="AWS Certiciate Manager DNS records added, certiciate ready&amp;lt;br&amp;gt;
"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Adding the static website assets to S3&lt;/h2&gt;

&lt;p&gt;AWS Simple Storage Service seems to be the ideal choice when it comes to storing data. S3 supports practically unlimited storage, and each object in S3 can be up to 5 TB in size. AWS charges us for using S3 on the following two parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Space consumed&lt;/li&gt;
&lt;li&gt;Size of data moving in and out of S3&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once we have our static site assets such as HTML, CSS, JavaScript, images, videos, PDFs etc. (whatever is required), we need to move them to S3. But before that we must create an S3 bucket with a globally unique name, because the bucket name is part of a URL, which must be globally unique.&lt;/p&gt;
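Creating the bucket and uploading the assets can also be done from the CLI. The bucket name and local folder below are placeholders; the bucket name must be globally unique.

```shell
# Create the bucket
aws s3 mb s3://my-unique-static-site-bucket

# Upload the site assets, removing any remote files that no longer exist locally
aws s3 sync ./site s3://my-unique-static-site-bucket --delete

# Verify the uploaded contents
aws s3 ls s3://my-unique-static-site-bucket --recursive
```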

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmm4zdp5p21t5mg1p1bfk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmm4zdp5p21t5mg1p1bfk.png" alt="AWS S3 bucket&amp;lt;br&amp;gt;
"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwpc494d4zytuvoonqb3z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwpc494d4zytuvoonqb3z.png" alt="AWS Simple Storage Service bucket contents uploaded&amp;lt;br&amp;gt;
"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Configuring CloudFront for S3&lt;/h2&gt;

&lt;p&gt;AWS CloudFront is a globally distributed CDN service. Using CloudFront, we can ensure that users from across the globe are able to access our content with low latency. CloudFront can be integrated with a number of other AWS services, including S3. Let's go ahead and create a CloudFront distribution for our S3 bucket.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh25301ei4doib434o0jv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh25301ei4doib434o0jv.png" alt="AWS CloudFront distribution configuration&amp;lt;br&amp;gt;
"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Although we could have enabled public access for the S3 bucket, we kept it private (the default). With an Origin Access Identity (OAI), we permit the CloudFront distribution to access the bucket's contents via the bucket policy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvzuok25nkbhcghmzadon.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvzuok25nkbhcghmzadon.png" alt="AWS CloudFront distribution configuration&amp;lt;br&amp;gt;
"&gt;&lt;/a&gt;&lt;br&gt;
Although AWS CloudFront has edge locations across the world, we can segregate the content distribution based on where our customers are located. Additionally, we inform the CloudFront distribution about the domain via which it will be accessed and provide the corresponding SSL certificate. Once the distribution is created, it will take some time before it is ready to be consumed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwv3ijmofdqmdeovd2qsh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwv3ijmofdqmdeovd2qsh.png" alt="AWS CloudFront distribution ready&amp;lt;br&amp;gt;
"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;The last action&lt;/h2&gt;

&lt;p&gt;Once our CloudFront distribution is ready to be used, we must add another A record against the domain, which acts as an alias for the CloudFront distribution.&lt;/p&gt;
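The alias record can be created from the CLI as sketched below. The hosted zone ID, domain and distribution domain name are placeholders; `Z2FDTNDATAQYW2` is the fixed hosted zone ID that Route53 uses for every CloudFront alias target.

```shell
aws route53 change-resource-record-sets \
    --hosted-zone-id ZONEID123 \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "example.com",
          "Type": "A",
          "AliasTarget": {
            "HostedZoneId": "Z2FDTNDATAQYW2",
            "DNSName": "d1234abcd.cloudfront.net",
            "EvaluateTargetHealth": false
          }
        }
      }]
    }'
```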

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffj4x85s53bf37tj2c825.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffj4x85s53bf37tj2c825.png" alt="AWS Route53 add A record for CloudFront distibution&amp;lt;br&amp;gt;
"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Target achieved&lt;/h2&gt;

&lt;p&gt;After all this, we need to be patient while all the DNS information propagates. Once that is done, we can access our domain and see the static site in action.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frg7f6p4j1cb9e7kanpm8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frg7f6p4j1cb9e7kanpm8.png" alt="Static site hosted using AWS components"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;That will be all folks...&lt;/h2&gt;

</description>
      <category>aws</category>
      <category>route53</category>
      <category>s3</category>
      <category>cloudfront</category>
    </item>
    <item>
      <title>Fun with Git aliases</title>
      <dc:creator>Vishal Raj</dc:creator>
      <pubDate>Fri, 21 Jan 2022 19:40:31 +0000</pubDate>
      <link>https://dev.to/vishalraj82/fun-with-git-aliases-4o48</link>
      <guid>https://dev.to/vishalraj82/fun-with-git-aliases-4o48</guid>
      <description>&lt;p&gt;Git is one of the most popular version control systems (VCS). In this post we will discuss git aliases and how to use them to your benefit.&lt;/p&gt;

&lt;p&gt;So what are git aliases? In simple words, aliases are custom shortcuts for executing regular git commands. These shortcuts are specified in the file &lt;code&gt;$HOME/.gitconfig&lt;/code&gt;. Let's start with some examples.&lt;/p&gt;

&lt;p&gt;In order to see the current status, the command would be&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alternatively we can create an alias&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git config &lt;span class="nt"&gt;--global&lt;/span&gt; alias.st status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And now, to see the git status we can run the command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git st
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Similarly, more aliases can be added for quicker execution of commonly used git commands. Let's add some more.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git config &lt;span class="nt"&gt;--global&lt;/span&gt; alias.br branch
git config &lt;span class="nt"&gt;--global&lt;/span&gt; alias.ci commit
git config &lt;span class="nt"&gt;--global&lt;/span&gt; alias.co checkout
git config &lt;span class="nt"&gt;--global&lt;/span&gt; alias.pr &lt;span class="s2"&gt;"pull --rebase"&lt;/span&gt;
git config &lt;span class="nt"&gt;--global&lt;/span&gt; alias.pf &lt;span class="s2"&gt;"push --force"&lt;/span&gt;
git config &lt;span class="nt"&gt;--global&lt;/span&gt; alias.rsh &lt;span class="s2"&gt;"reset --hard"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So far we have created aliases for simple git commands. But what if you want to run multiple git commands in a single alias? Functions to the rescue.&lt;/p&gt;

&lt;p&gt;Let's see an example situation. Say you need to switch branches, but you also have staged changes. So you want to stash the changes before switching to the new branch, and fetch the latest changes as well. For this, we will edit the file &lt;code&gt;$HOME/.gitconfig&lt;/code&gt; and add a new entry under the &lt;code&gt;[alias]&lt;/code&gt; section. I am a &lt;code&gt;vim&lt;/code&gt; guy, but you can use any editor of your choice.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;alias&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
    st &lt;span class="o"&gt;=&lt;/span&gt; status
    br &lt;span class="o"&gt;=&lt;/span&gt; branch
    co &lt;span class="o"&gt;=&lt;/span&gt; checkout
    &lt;span class="nb"&gt;pr&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; pull &lt;span class="nt"&gt;--rebase&lt;/span&gt;
    pf &lt;span class="o"&gt;=&lt;/span&gt; push &lt;span class="nt"&gt;--force&lt;/span&gt;
    rsh &lt;span class="o"&gt;=&lt;/span&gt; reset &lt;span class="nt"&gt;--hard&lt;/span&gt;
    cop &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"!f() { br=&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;git br &lt;span class="nt"&gt;--show-current&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="s2"&gt;; git stash save &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Stash from branch &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;br&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;; git fetch; git co &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;1&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;};  f"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The shortcut &lt;code&gt;cop&lt;/code&gt; can also be extended. Let's extend the above case to a situation where, on switching branches, you also want to delete the current branch. But why would you want that? Say you branched out from &lt;code&gt;master&lt;/code&gt; for a minor bug fix. After you push the &lt;code&gt;bug-fix&lt;/code&gt; branch to remote and submit it for review, you want to delete it locally after switching back to the &lt;code&gt;master&lt;/code&gt; branch. Let's see:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;alias&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
    copd &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"!f() { br=&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;git br &lt;span class="nt"&gt;--show-current&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="s2"&gt;; git stash save &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Stash from branch &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;br&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;; git fetch; git co &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;1&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;; git br -d &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;br&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;; };  f"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And here is how to use it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vishalr@ubuntu &lt;span class="o"&gt;(&lt;/span&gt;bug-fix-branch&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;$&amp;gt;&lt;/span&gt; git br
&lt;span class="k"&gt;*&lt;/span&gt; bug-fix-branch
master
vishalr@ubuntu &lt;span class="o"&gt;(&lt;/span&gt;bug-fix-branch&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;$&amp;gt;&lt;/span&gt; git copd master
vishalr@ubuntu &lt;span class="o"&gt;(&lt;/span&gt;master&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;$&amp;gt;&lt;/span&gt; git br
&lt;span class="k"&gt;*&lt;/span&gt; master
vishalr@ubuntu &lt;span class="o"&gt;(&lt;/span&gt;master&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;$&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that you have learned the basics, go ahead and create git aliases of your choice and have fun.&lt;/p&gt;
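&lt;p&gt;As a parting trick, you can list everything you have configured. The sketch below writes to a scratch file (&lt;code&gt;demo.gitconfig&lt;/code&gt;, a made-up name) so it leaves your real &lt;code&gt;$HOME/.gitconfig&lt;/code&gt; alone; swap &lt;code&gt;--file demo.gitconfig&lt;/code&gt; for &lt;code&gt;--global&lt;/code&gt; to work on the real one.&lt;/p&gt;

```shell
# Add a couple of aliases to a scratch config file, then list them all.
# --file keeps this demo away from your real $HOME/.gitconfig;
# replace it with --global to register and query your actual aliases.
git config --file demo.gitconfig alias.st status
git config --file demo.gitconfig alias.co checkout

# Prints each alias as "alias.<name> <expansion>"
git config --file demo.gitconfig --get-regexp '^alias\.'
```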

</description>
      <category>git</category>
      <category>development</category>
      <category>hacks</category>
      <category>guide</category>
    </item>
    <item>
      <title>Making simple HTTP requests in NodeJS</title>
      <dc:creator>Vishal Raj</dc:creator>
      <pubDate>Sun, 08 Aug 2021 10:42:00 +0000</pubDate>
      <link>https://dev.to/vishalraj82/making-simple-http-requests-in-nodejs-hog</link>
      <guid>https://dev.to/vishalraj82/making-simple-http-requests-in-nodejs-hog</guid>
      <description>&lt;p&gt;Of course, there are numerous &lt;a href="https://npmjs.org"&gt;npm&lt;/a&gt; packages available to make HTTP requests. Just to name a few, you can use&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://npmjs.org/package/axios"&gt;Axios&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://npmjs.org/package/request"&gt;Request&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://npmjs.org/package/superagent"&gt;SuperAgent&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://npmjs.org/package/got"&gt;Got&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;and many more. These are all fantastic libraries which bring in an array of capabilities for making HTTP requests and handling various responses and errors.&lt;/p&gt;

&lt;p&gt;But sometimes all we need is a simple HTTP/S request and response handler. This can easily be done with NodeJS's built-in modules &lt;a href="https://nodejs.org/api/http.html"&gt;http&lt;/a&gt; / &lt;a href="https://nodejs.org/api/https.html"&gt;https&lt;/a&gt; and a very lean piece of code. Let's see it in action.&lt;/p&gt;

&lt;p&gt;NOTE: I am going to wrap the request in a &lt;code&gt;Promise&lt;/code&gt; to keep the calling code clean.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// SimpleHttp.js&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;URL&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;url&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="nx"&gt;http&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;http&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="nx"&gt;https&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="cm"&gt;/**
 * Simple function to make HTTP / HTTPS request.
 *
 * @param {String} url The url to be scraped
 * @param {Object} config The configuration object to make HTTP request
 *
 * @return {Promise}
 */&lt;/span&gt;
&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;fetch&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;u&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;URL&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="nx"&gt;secure&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="nx"&gt;u&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;protocol&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="nx"&gt;handler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;secure&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="nx"&gt;https&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;http&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="nx"&gt;options&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;method&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;GET&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;u&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;hostname&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;u&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;port&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;secure&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="mi"&gt;443&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
            &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;u&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pathname&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;query&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;query&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="p"&gt;{},&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="nx"&gt;isHeadRequest&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;HEAD&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;method&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;reject&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;request&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;parseInt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;statusCode&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="mi"&gt;301&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;302&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;307&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;includes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="nx"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;location&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="mi"&gt;400&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="nx"&gt;reject&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nb"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Unexpected response, got HTTP &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;isHeadRequest&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                    &lt;span class="nx"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;);&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;chunks&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;
                    &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;data&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;onData&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;chunk&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;  
                        &lt;span class="nx"&gt;chunks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;push&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;chunk&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
                    &lt;span class="p"&gt;});&lt;/span&gt;
                    &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;end&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;onEnd&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; 
                        &lt;span class="nx"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
                            &lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                            &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Buffer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;concat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;chunks&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;toString&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;utf8&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                        &lt;span class="p"&gt;});&lt;/span&gt;
                    &lt;span class="p"&gt;});&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;

        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;postBody&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;postBody&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;error&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;reject&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

        &lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;end&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And that will be all.&lt;/p&gt;

&lt;p&gt;EDIT: Added support to follow &lt;em&gt;HTTP redirects&lt;/em&gt; if the server responds with one.&lt;/p&gt;

</description>
      <category>node</category>
      <category>xhr</category>
      <category>http</category>
      <category>https</category>
    </item>
    <item>
      <title>How to try a Linux flavor</title>
      <dc:creator>Vishal Raj</dc:creator>
      <pubDate>Sun, 04 Jul 2021 07:12:10 +0000</pubDate>
      <link>https://dev.to/vishalraj82/how-to-try-a-linux-flavor-30g0</link>
      <guid>https://dev.to/vishalraj82/how-to-try-a-linux-flavor-30g0</guid>
      <description>&lt;p&gt;Whether one is aware of it or not, nobody's life is untouched by the Linux OS. We use it in one way or another. The superpower of Linux is that it runs everywhere, from large supercomputers to tiny computing machines like the Raspberry Pi and IoT devices.&lt;/p&gt;

&lt;p&gt;Let's talk about the different ways to experience a flavor of Linux.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using docker
&lt;/h2&gt;

&lt;p&gt;This might seem very odd, but of course Docker will let you experience the CLI of a Linux distribution, say Fedora, Ubuntu, Debian or Alpine. You can use these OS images to build your application, or just explore them from the CLI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using the cloud service providers
&lt;/h2&gt;

&lt;p&gt;Similar to Docker, you can launch an OS instance in an environment provided by cloud service providers such as AWS, GCP, Microsoft Azure, etc. Here also, you get access to the OS via the CLI only.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using virtual machine
&lt;/h2&gt;

&lt;p&gt;You can use Oracle VirtualBox to install just about any flavor of Linux, as long as you can download the ISO on your local machine. The installation process is similar to installing the OS on a real machine, and you get the full GUI experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing on flash drive
&lt;/h2&gt;

&lt;p&gt;You can burn a Linux OS ISO onto a flash drive to get first-hand experience before you actually put it on a machine that you use for daily tasks. Once you have the flash drive ready, plug it into a machine (laptop / desktop) and, during the boot process, choose the flash drive as the boot device. A common challenge with this approach is that if you want to try multiple flavors, you need to go through the tiring process of installing each flavor separately. Fortunately, there are solutions which let us put multiple flavors of Linux on a single flash drive and then choose, at boot time, which OS to boot into.&lt;br&gt;
See &lt;a href="https://www.linuxbabe.com/apps/create-multiboot-usb-linux-windows-iso"&gt;Link 1&lt;/a&gt; or &lt;a href="https://www.funkyspacemonkey.com/how-to-create-a-multi-iso-bootable-flash-drive"&gt;Link 2&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing on the real machine
&lt;/h2&gt;

&lt;p&gt;If you are happy with a Linux that you tried from the flash drive, you may choose to install it on your machine as well. The same flash drive can be used to install Linux on your desktop / laptop. The setup process for most modern Linux distributions is fairly simple and fast.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it online
&lt;/h2&gt;

&lt;p&gt;Yes, you can experience a Linux OS online via &lt;a href="https://distrotest.net/index.php"&gt;Distrotest.net&lt;/a&gt;. Surprisingly, they have a huge list of Linux OSes that you can use, and you get to see the GUI with this. Note that the speed may vary depending upon the bandwidth of your network, and since this is a free service, the OS is initialized with a limited amount of resources. The default session time is 30 minutes and can be extended by 15 minutes.&lt;/p&gt;

</description>
      <category>ubuntu</category>
      <category>linux</category>
      <category>installation</category>
      <category>experiment</category>
    </item>
    <item>
      <title>NodeJS - Run your app with multiple versions of Node</title>
      <dc:creator>Vishal Raj</dc:creator>
      <pubDate>Wed, 09 Jun 2021 17:09:44 +0000</pubDate>
      <link>https://dev.to/vishalraj82/nodejs-run-your-app-with-multiple-versions-of-node-45np</link>
      <guid>https://dev.to/vishalraj82/nodejs-run-your-app-with-multiple-versions-of-node-45np</guid>
      <description>&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/RWgYby_u68s"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;As newer versions of &lt;a href="https://nodejs.org/"&gt;NodeJS&lt;/a&gt; are released, they bring with them performance improvements, speed, security, new features and more. If you have a NodeJS based web application and plan to upgrade the version of NodeJS, it becomes important to test the application on the new version to ensure its sanity.&lt;/p&gt;

&lt;p&gt;In this post we will explore how we can use Docker to run our NodeJS based application with two (or more) versions of NodeJS.&lt;/p&gt;

&lt;p&gt;Let's explore the directory structure to understand how the files have been organized.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;vishalr&lt;/span&gt;&lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;ubuntu&lt;/span&gt; &lt;span class="o"&gt;~&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;tree&lt;/span&gt; &lt;span class="nx"&gt;multi&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;node&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;app&lt;/span&gt;
&lt;span class="nx"&gt;multi&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;node&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;
&lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt;
&lt;span class="err"&gt;│  &lt;/span&gt; &lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="nx"&gt;index&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;js&lt;/span&gt;
&lt;span class="err"&gt;│  &lt;/span&gt; &lt;span class="err"&gt;└──&lt;/span&gt; &lt;span class="kr"&gt;package&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;json&lt;/span&gt;
&lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="nx"&gt;docker&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;compose&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;yml&lt;/span&gt;
&lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="nx"&gt;node14&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Dockerfile&lt;/span&gt;
&lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="nx"&gt;node16&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Dockerfile&lt;/span&gt;
&lt;span class="err"&gt;└──&lt;/span&gt; &lt;span class="nx"&gt;proxy&lt;/span&gt;
    &lt;span class="err"&gt;└──&lt;/span&gt; &lt;span class="nx"&gt;nginx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;conf&lt;/span&gt;

&lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="nx"&gt;directories&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;6&lt;/span&gt; &lt;span class="nx"&gt;files&lt;/span&gt;
&lt;span class="nx"&gt;vishalr&lt;/span&gt;&lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;ubuntu&lt;/span&gt; &lt;span class="o"&gt;~&amp;gt;&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The main app resides inside the &lt;code&gt;app&lt;/code&gt; folder. The files &lt;code&gt;node14.Dockerfile&lt;/code&gt; and &lt;code&gt;node16.Dockerfile&lt;/code&gt; contain the instructions to build the Docker images that run the app with Node v14.x and v16.x respectively. The file &lt;code&gt;docker-compose.yml&lt;/code&gt; is a wrapper over the two Dockerfiles and adds Nginx as a proxy in front of the two containers. The file &lt;code&gt;proxy/nginx.conf&lt;/code&gt; contains the barebones configuration to use Nginx as a proxy for our application.&lt;/p&gt;
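&lt;p&gt;The real Dockerfiles live in the linked repository; as a rough sketch (assumed, not the author's actual file), &lt;code&gt;node16.Dockerfile&lt;/code&gt; could look something like the following, with &lt;code&gt;node14.Dockerfile&lt;/code&gt; differing only in its &lt;code&gt;FROM&lt;/code&gt; line.&lt;/p&gt;

```dockerfile
# Hypothetical sketch of node16.Dockerfile -- see the repository for the
# real one. node14.Dockerfile would only change the base image tag.
FROM node:16-alpine

WORKDIR /usr/src/app

# Install dependencies first so this layer is cached across code changes
COPY app/package.json ./
RUN npm install

# Copy the application source
COPY app/ ./

CMD ["node", "index.js"]
```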

&lt;p&gt;Additionally, we need to make the following entry in the file &lt;code&gt;/etc/hosts&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;127.0.0.1  node16.myapp.local  node14.myapp.local
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To start all the containers, execute the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vishalr@ubuntu ~&amp;gt; docker-compose up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once all the containers (nginx, node16 and node14) are up, you can use the urls &lt;code&gt;http://node16.myapp.local&lt;/code&gt; and &lt;code&gt;http://node14.myapp.local&lt;/code&gt; in your local browser to test your application running with Node v16.x and Node v14.x respectively. &lt;/p&gt;

&lt;p&gt;You can find this project at my &lt;a href="https://github.com/vishalraj82/multi-node-app"&gt;Github repository&lt;/a&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>webdev</category>
      <category>node</category>
    </item>
    <item>
      <title>Using HTTPS in docker for local development</title>
      <dc:creator>Vishal Raj</dc:creator>
      <pubDate>Thu, 13 May 2021 20:58:15 +0000</pubDate>
      <link>https://dev.to/vishalraj82/using-https-in-docker-for-local-development-nc7</link>
      <guid>https://dev.to/vishalraj82/using-https-in-docker-for-local-development-nc7</guid>
      <description>&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/g31iYT4DcKw"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;As a web application developer, one of the most common challenges faced is not having the local development environment close enough to the production environment. While there can be many aspects to this, in this post we will focus on the following two:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Having a domain name, instead of something like &lt;code&gt;http://localhost:8080&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Having a valid &lt;code&gt;HTTPS&lt;/code&gt; certificate on the local development machine&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Also, I am going to use Docker for this demonstration. So let's begin.&lt;/p&gt;

&lt;p&gt;I am going to use WordPress as my example application. So let's head over to the Docker Hub page for &lt;a href="https://hub.docker.com/_/wordpress"&gt;WordPress&lt;/a&gt;. Scrolling to the bottom shows a sample configuration which can be used with docker-compose to run WordPress with &lt;a href="https://hub.docker.com/_/mysql"&gt;MySQL&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Open a terminal, create a folder with a name like &lt;code&gt;wordpress-with-https&lt;/code&gt; and move inside it. Now create a file named &lt;code&gt;docker-compose.yml&lt;/code&gt; and paste the contents copied from &lt;a href="https://hub.docker.com/_/wordpress"&gt;here&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.9'

services:
  wordpress:
    image: wordpress
    restart: always
    ports:
      - 8080:80
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: exampleuser
      WORDPRESS_DB_PASSWORD: examplepass
      WORDPRESS_DB_NAME: exampledb
    volumes:
      - wordpress:/var/www/html
  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_DATABASE: exampledb
      MYSQL_USER: exampleuser
      MYSQL_PASSWORD: examplepass
      MYSQL_RANDOM_ROOT_PASSWORD: '1'
    volumes:
      - db:/var/lib/mysql

volumes:
  wordpress:
  db:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save and close the file. Now let's start the docker containers with &lt;code&gt;docker-compose up&lt;/code&gt;. This will fire up the WordPress and MySQL containers with the appropriate configuration. Open a browser, type the URL &lt;a href="http://localhost:8080"&gt;&lt;code&gt;http://localhost:8080&lt;/code&gt;&lt;/a&gt; and press Enter. At this point, we should see the WordPress setup page.&lt;/p&gt;

&lt;p&gt;In order to access the WordPress app over a domain name, we need to make an entry in the &lt;code&gt;/etc/hosts&lt;/code&gt; file by adding the following at the end - &lt;code&gt;127.0.0.1  my-wordpress-blog.local&lt;/code&gt;. After this, we should be able to access &lt;a href="http://my-wordpress-blog.local:8080"&gt;&lt;code&gt;http://my-wordpress-blog.local:8080&lt;/code&gt;&lt;/a&gt; in the browser.&lt;/p&gt;

&lt;p&gt;In order to have HTTPS in the local development environment, we will use a utility called &lt;a href="https://github.com/FiloSottile/mkcert"&gt;mkcert&lt;/a&gt;. mkcert first needs the dependency &lt;code&gt;libnss3-tools&lt;/code&gt;, so open a terminal and run &lt;code&gt;sudo apt install libnss3-tools -y&lt;/code&gt;. Next, download the appropriate pre-built mkcert binary from the &lt;a href="https://github.com/FiloSottile/mkcert/releases"&gt;github releases page&lt;/a&gt;. Since I am using Ubuntu on my development machine, I will use &lt;a href="https://github.com/FiloSottile/mkcert/releases/download/v1.4.3/mkcert-v1.4.3-linux-amd64"&gt;mkcert-v1.4.3-linux-amd64&lt;/a&gt;. Move the downloaded binary to &lt;code&gt;/usr/local/bin&lt;/code&gt; and make it executable - &lt;code&gt;chmod +x mkcert-v1.4.3-linux-amd64&lt;/code&gt;. Then create a softlink named &lt;code&gt;mkcert&lt;/code&gt; - &lt;code&gt;ln -s mkcert-v1.4.3-linux-amd64 mkcert&lt;/code&gt;. The first step is to become a valid Certificate Authority for the local machine - &lt;code&gt;mkcert -install&lt;/code&gt;. This installs the root CA on the local machine.&lt;/p&gt;

&lt;p&gt;Now let's get back to generating the self-signed SSL certificates. Move back to our development folder &lt;code&gt;wordpress-with-https&lt;/code&gt;. Here we will create a directory &lt;code&gt;proxy&lt;/code&gt;, and inside it the directories &lt;code&gt;certs&lt;/code&gt; and &lt;code&gt;conf&lt;/code&gt;. Move into &lt;code&gt;proxy/certs&lt;/code&gt; and generate the certificates.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vishalr@ubuntu ~/wordpress-with-https&amp;gt; mkcert \
&amp;gt;  -cert-file my-wordpress-blog.local.crt \
&amp;gt;  -key-file my-wordpress-blog.local.key \
&amp;gt;  my-wordpress-blog.local

Created a new certificate valid for the following names
 - "my-wordpress-blog.local"

The certificate is at "my-wordpress-blog.local.crt" and the key at "my-wordpress-blog.local.key"

It will expire on 14 August 2021

vishalr@ubuntu ~/wordpress-with-https/proxy/certs&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
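&lt;p&gt;As a side note, if mkcert is not available, openssl alone can produce a plain self-signed certificate for the same domain. The command below is only a sketch of that fallback - unlike the mkcert certificate, the browser will NOT trust it automatically, which is exactly the problem mkcert solves.&lt;/p&gt;

```shell
# Fallback sketch: a plain self-signed certificate with openssl.
# Unlike mkcert, this certificate is NOT trusted by the browser
# automatically - it will trigger a security warning.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout my-wordpress-blog.local.key \
  -out my-wordpress-blog.local.crt \
  -days 90 \
  -subj "/CN=my-wordpress-blog.local"
```

&lt;p&gt;The resulting &lt;code&gt;.crt&lt;/code&gt; and &lt;code&gt;.key&lt;/code&gt; files can be dropped into the same &lt;code&gt;proxy/certs&lt;/code&gt; folder, but you would have to click through the browser warning on every fresh profile.&lt;/p&gt;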



&lt;p&gt;This will generate the SSL key and certificate files, valid for the domain &lt;a href="http://my-wordpress-blog.local"&gt;&lt;code&gt;my-wordpress-blog.local&lt;/code&gt;&lt;/a&gt;. Now let's modify the contents of the file &lt;code&gt;docker-compose.yml&lt;/code&gt; to use nginx as the proxy. Add the following contents under the &lt;code&gt;services&lt;/code&gt; key.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;services:
  proxy:
    image: nginx:1.19.10-alpine
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./proxy/conf/nginx.conf:/etc/nginx/nginx.conf
      - ./proxy/certs:/etc/nginx/certs
    depends_on:
      - wordpress
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let's focus on the nginx configuration to use it as a proxy. Edit the file &lt;code&gt;wordpress-with-https/proxy/conf/nginx.conf&lt;/code&gt; and add the following configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;events {
  worker_connections 1024;
}

http {
  server {
    listen 80;
    server_name my-wordpress-blog.local;
    return 301 https://$host$request_uri;
  }

  server {
    listen 443 ssl;
    server_name my-wordpress-blog.local;

    ssl_certificate /etc/nginx/certs/my-wordpress-blog.local.crt;
    ssl_certificate_key /etc/nginx/certs/my-wordpress-blog.local.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    location / {
      proxy_buffering off;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_set_header X-Forwarded-Host $host;
      proxy_set_header X-Forwarded-Port $server_port;

      proxy_pass http://wordpress;
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the docker containers are still running, press &lt;code&gt;Ctrl + C&lt;/code&gt; to stop them. Now let's fire up the docker containers again with the updated contents of &lt;code&gt;docker-compose.yml&lt;/code&gt;. If all has been done correctly, we should have everything ready. Open the browser and enter the URL - &lt;a href="http://my-wordpress-blog.local"&gt;&lt;code&gt;http://my-wordpress-blog.local&lt;/code&gt;&lt;/a&gt;. This should redirect to &lt;a href="https://my-wordpress-blog.local"&gt;&lt;code&gt;https://my-wordpress-blog.local&lt;/code&gt;&lt;/a&gt;. If you now look at the lock icon next to the address bar, it shows that the browser has accepted our locally generated self-signed SSL certificate.&lt;/p&gt;

&lt;p&gt;The GitHub repository can be found &lt;a href="https://github.com/vishalraj82/https-in-docker"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;NOTE: &lt;code&gt;mkcert&lt;/code&gt; should be used to generate SSL certificates for local development only.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>https</category>
      <category>development</category>
      <category>mkcert</category>
    </item>
    <item>
      <title>Windows 10: Black &amp; White mode</title>
      <dc:creator>Vishal Raj</dc:creator>
      <pubDate>Sun, 29 Nov 2020 18:19:04 +0000</pubDate>
      <link>https://dev.to/vishalraj82/windows-10-black-white-mode-2ljb</link>
      <guid>https://dev.to/vishalraj82/windows-10-black-white-mode-2ljb</guid>
      <description>&lt;p&gt;Very recently I bought myself, a Google Pixel 4A, a stylish &amp;amp; sleek mobile. I have always been a fan of the Pixel series since its launch. &lt;/p&gt;

&lt;p&gt;A new feature I found on the Pixel 4A is that it allows you to set your sleep &amp;amp; wake-up times. During this period, the phone switches to &lt;strong&gt;Do not disturb&lt;/strong&gt; mode and, even better, the display turns to an eye-soothing &lt;strong&gt;&lt;em&gt;Black &amp;amp; White&lt;/em&gt;&lt;/strong&gt; mode. I am in love with this.&lt;/p&gt;

&lt;p&gt;Now, since I am a software engineer by profession and passion, I spend quite a lot of my time on my laptop. I own a Dell XPS 7390 :-)&lt;br&gt;
Even though the Windows operating system has a &lt;em&gt;Night Light&lt;/em&gt; mode, I don't like it much; it does not help me much.&lt;/p&gt;

&lt;p&gt;So, considering the experience above with my Pixel 4A, I got curious whether Windows 10 also has any such feature. I searched on Google and TADAA.. I hit the jackpot (at least for me). Windows 10 has a setting where you can turn a color filter on or off.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FjQJRTvR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/45a6gc8x03u12c4g9230.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FjQJRTvR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/45a6gc8x03u12c4g9230.png" alt="Turn color filter on or off"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hit the &lt;em&gt;Windows&lt;/em&gt; key and type "Turn color filter on or off". This takes you to the system settings page for the color filter. From there, you can turn on the color filter and enable the global shortcut toggle key, which is &lt;em&gt;Ctrl + Win + C&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--c6gmw01c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/saxdjdrf9jp9shyovmvo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--c6gmw01c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/saxdjdrf9jp9shyovmvo.png" alt="Color filter"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With this, you can toggle between color and black &amp;amp; white display modes. &lt;strong&gt;Black &amp;amp; White&lt;/strong&gt; mode to the rescue of night owls.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Combine this feature with &lt;em&gt;Night mode&lt;/em&gt; and the display turns even dimmer.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That's all, folks!!&lt;/p&gt;

</description>
      <category>todayilearned</category>
      <category>windows</category>
      <category>blackandwhite</category>
    </item>
  </channel>
</rss>
