<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ashwani Pandey</title>
    <description>The latest articles on DEV Community by Ashwani Pandey (@ashwani1218).</description>
    <link>https://dev.to/ashwani1218</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F447980%2F037af3b6-98e9-4009-bd7c-35f4736840bd.jpeg</url>
      <title>DEV Community: Ashwani Pandey</title>
      <link>https://dev.to/ashwani1218</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ashwani1218"/>
    <language>en</language>
    <item>
      <title>EC2: The heart of AWS</title>
      <dc:creator>Ashwani Pandey</dc:creator>
      <pubDate>Fri, 18 Sep 2020 09:39:00 +0000</pubDate>
      <link>https://dev.to/ashwani1218/ec2-the-heart-of-aws-5ka</link>
      <guid>https://dev.to/ashwani1218/ec2-the-heart-of-aws-5ka</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZxVd_KoL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9mdjl4t1auk1irlb400u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZxVd_KoL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9mdjl4t1auk1irlb400u.png" alt="EC2 Image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Amazon Web Services has so many services on offer that choosing among them often feels like finding a needle in a haystack. &lt;br&gt;
EC2 is one of the most popular offerings of AWS.&lt;/p&gt;

&lt;p&gt;Elastic Compute Cloud is the scalable compute capacity provided by AWS. It is your hardware on the cloud, on demand and ready to be provisioned, with a lot of options to choose from. Imagine having the power to provision an unlimited fleet of instances with plenty of computing capacity and no upfront payment. &lt;/p&gt;

&lt;p&gt;EC2 mainly consists of capabilities such as renting virtual machines, storing data on virtual drives using EBS, distributing load across machines using ELB, and scaling services using Auto Scaling groups. &lt;/p&gt;

&lt;h3&gt;
  
  
  Renting virtual machines:
&lt;/h3&gt;

&lt;p&gt;Virtual machines are the heart of cloud computing, if you set serverless aside for a moment. Nearly every application running on the cloud uses a virtual machine as its host.&lt;/p&gt;

&lt;p&gt;AWS provides a variety of virtual machines to rent, differing in size, memory, compute capacity, and payment options. Instance sizes range from t2.nano, the smallest, up to i3en.metal, in various combinations of memory-optimized, compute-optimized, storage-optimized, and general-purpose families. &lt;/p&gt;

&lt;p&gt;With so many options, one can easily get confused while choosing the right instance for a given workload. But the fact that instances are categorized by the type of workload they handle best does provide some help when choosing an instance type.&lt;/p&gt;

&lt;p&gt;Pricing of virtual machines also varies, from on-demand instances with no upfront payment to Reserved Instances, giving customers the choice of whether or not to make a commitment. The pay-as-you-go model can save money when you have no idea what workload you are going to have, while reserving instances or Dedicated Hosts for a period of time can save money for customers who know their workload and can commit for a period of time in exchange for stability. There are many more options, such as Spot Instances, which are super cheap but can be taken away if the spot price rises above your bid. &lt;/p&gt;

&lt;h3&gt;
  
  
  Storing data:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Elastic Block Store&lt;/strong&gt; is a network drive service for AWS. It acts as a raw disk for storage in EC2, like a detachable drive on the cloud. You can create EBS volumes, attach them to any instance, and start using them instantly. In the case of instance failure, the data stored on EBS is safe: the volume can be detached from the instance and used somewhere else.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YMVjqkrN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/dazmtg5ol02vvtqojudc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YMVjqkrN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/dazmtg5ol02vvtqojudc.png" alt="EBS Image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As an EBS volume is an independent entity, it can be detached from one instance and attached to another. EBS volumes are locked to an Availability Zone and have a provisioned capacity. EBS volumes come in 4 types, i.e.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;GP2 (SSD)&lt;/li&gt;
&lt;li&gt;IO1 (SSD)&lt;/li&gt;
&lt;li&gt;ST1 (HDD)&lt;/li&gt;
&lt;li&gt;SC1 (HDD)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Instance Store&lt;/strong&gt; also provides temporary block storage for instances. It differs from EBS in that the disks are physically attached to the host machine, which comes with its own pros and cons. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_PDJcCYY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6jy7hwvalexgnct1aprs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_PDJcCYY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6jy7hwvalexgnct1aprs.png" alt="EC2 with Instance Store"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Pros being:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Being physically attached to the host machine, it offers better I/O performance.&lt;/li&gt;
&lt;li&gt;The instance store can be used as a buffer or a cache.&lt;/li&gt;
&lt;li&gt;The data stored in the instance store persists across reboots.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Cons being:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;On stop or termination, the instance store is lost along with the data stored in it.&lt;/li&gt;
&lt;li&gt;You cannot resize the instance store.&lt;/li&gt;
&lt;li&gt;If you need to back up the data in the instance store, you have to do it manually; there is no automated process.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In short, the instance store is a physical disk with very high I/O performance that cannot be increased in size and carries a risk of data loss if the hardware fails.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;EFS (Elastic File System)&lt;/strong&gt; &lt;br&gt;
Amazon EFS is a network file system managed by AWS. It provides scalable file storage: it is said to be virtually infinitely scalable and has several advantages over EBS and the instance store. We can configure multiple instances to share a common file system. It also works across multiple Availability Zones, which makes it easy for instances from different Availability Zones to connect to the file system and work from the same data source.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wucuxhx5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/in2kyfm0skic9ywcmdhc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wucuxhx5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/in2kyfm0skic9ywcmdhc.png" alt="EFS with EC2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;EFS uses the NFSv4.1 protocol. It is a highly available, scalable, and expensive service, i.e. about 3 times more expensive than EBS gp2 storage. We can use security groups to control access to EFS, and we can have encryption at rest using the KMS service.&lt;/p&gt;

&lt;p&gt;NOTE: It is only compatible with Linux-based AMIs, not Windows.&lt;/p&gt;

&lt;h3&gt;
  
  
  Distributing load:
&lt;/h3&gt;

&lt;p&gt;The EC2 service provides load balancers for distributing load between multiple instances in AWS. Using a load balancer helps us manage incoming requests and balance them across a fleet of instances. This helps ensure that our application's downtime is minimized as much as possible.&lt;/p&gt;

&lt;p&gt;We can have multiple downstream instances balancing the load while exposing a single point of access, or DNS name. This also helps us manage failure: the load balancer regularly health-checks the instances, and if an instance fails, it stops sending traffic to that instance and triggers an alarm. It also provides SSL termination. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6kJEKnxN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/51lse1hepv2c1c37jw3a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6kJEKnxN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/51lse1hepv2c1c37jw3a.png" alt="Load Balancer With EC2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are three types of load balancers in AWS, i.e.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Classic load balancer (V1)&lt;/li&gt;
&lt;li&gt;Application load balancer (V2)&lt;/li&gt;
&lt;li&gt;Network load balancer (V2)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We can also set up a load balancer internally, so that we can balance load between instances. For example, we can have a web tier and an application tier, where the web tier sends requests to the application tier. We can set up an internal load balancer to balance requests coming from the web tier into the application tier.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BIdIIPxu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8trhlx3j6i345oa8xqk8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BIdIIPxu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8trhlx3j6i345oa8xqk8.png" alt="Internal load balancer"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can also set up stickiness, so that requests from one user go to a single instance for a period of time to ensure consistency.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scaling Services:
&lt;/h3&gt;

&lt;p&gt;Selecting the perfect instance for a workload is one of the most difficult tasks for a solutions architect. Even though there is a range of options to choose from, we still can't predict the correct number of instances required to handle our workload. AWS Auto Scaling does a commendable job of balancing the number of instances.&lt;/p&gt;

&lt;p&gt;Just imagine your server having more workload on Wednesdays and less on Sundays: how would you commission servers to match the workload? No worries, we just need to add our instances to an Auto Scaling group and define policies for how we want to scale. If you have a predictable workload, you can set up a scheduled policy saying: increase the number of instances to 5 on Wednesdays. If you have an unpredictable workload, you can set up policies based on parameters such as CPU or memory usage, say: if my instances have more than 80% CPU utilization, increase the number of instances by one.&lt;/p&gt;

&lt;p&gt;We can also set up a scale-in policy to ensure that we don't keep over-provisioned instances that are of no use. So we can set a policy that says: remove instances if CPU utilization is below 20%. In this way, the process of scaling in and out is automated with no manual intervention. Auto Scaling ensures high availability, and when used along with a load balancer, it can help us provide quality service to our customers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sZqptnIU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/di2kcpt1r8s3074wv71m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sZqptnIU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/di2kcpt1r8s3074wv71m.png" alt="Auto Scaling"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;EC2 is one of the most popular offerings of AWS, and there is much more to discuss and learn about it.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudcomputing</category>
      <category>ec2</category>
      <category>amazonwebservices</category>
    </item>
    <item>
      <title>Prototyping Javascript { }</title>
      <dc:creator>Ashwani Pandey</dc:creator>
      <pubDate>Mon, 24 Aug 2020 08:53:42 +0000</pubDate>
      <link>https://dev.to/ashwani1218/prototyping-javascript-4ml2</link>
      <guid>https://dev.to/ashwani1218/prototyping-javascript-4ml2</guid>
      <description>&lt;p&gt;Managing memory while writing code is one of the major qualities a developer can possess. Execution environment executes javascript code in two stages, i.e &lt;code&gt;Creation&lt;/code&gt; and &lt;code&gt;Hoisting&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Execution context: Creation and Hoisting
&lt;/h3&gt;

&lt;p&gt;The execution context creates a couple of things before actually executing the code. First it creates the global object and the outer environment, and then it sets up memory space for variables and functions, which is called &lt;code&gt;Hoisting&lt;/code&gt;. Memory is allocated before the code is executed, so that the variables exist in memory. &lt;br&gt;
Functions are stored along with their code, but that's not the case with variables: instead, a placeholder value called &lt;code&gt;undefined&lt;/code&gt; is assigned to them, and later, in the execution phase, where the code runs line by line, the variables are assigned their respective values. This supports the &lt;code&gt;Dynamic typing&lt;/code&gt; and &lt;code&gt;Coercion&lt;/code&gt; of JavaScript, wherein the type of a variable is determined at run time.&lt;br&gt;
So, to summarize: all variables are initialized with &lt;code&gt;undefined&lt;/code&gt;, but functions are allocated in memory together with their body and can therefore be called even before they are defined. In the case of variables, we will get an &lt;code&gt;undefined&lt;/code&gt; value.&lt;br&gt;
&lt;/p&gt;
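&lt;p&gt;A minimal sketch of this behavior, runnable in any JavaScript environment:&lt;/p&gt;

```javascript
// The function declaration below is hoisted with its body,
// so it can be called before the line where it is defined:
console.log(greet()); // "Hello"

// A var declaration is hoisted too, but only gets the placeholder undefined:
console.log(typeof hoistedVar); // "undefined"

function greet() {
  return "Hello";
}
var hoistedVar = 42;

// After the assignment line runs, the variable has its real value:
console.log(hoistedVar); // 42
```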

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt; function person(firstname, lastname){
           return "Hello "+this.firstname+" "+this.lastname
      } 
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In the above example, we have a function that takes in two arguments, a first and a last name, and returns a greeting. Our JavaScript objects are made up of various functions like this, and these functions are allocated memory during the hoisting phase of execution. Mind you, the more functions that are written in an object definition, the more memory is allocated, and this happens each time an instance is created.&lt;/p&gt;

&lt;h3&gt;
  
  
  Function constructors
&lt;/h3&gt;

&lt;p&gt;Function constructors are normal functions that are used to construct objects. When the function is called with &lt;code&gt;new&lt;/code&gt;, the &lt;code&gt;this&lt;/code&gt; variable points to a new empty object, and that object is returned from the function automatically. &lt;br&gt;
Creating a function constructor for the Person object:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function person(firstname, lastname){
    this.first = firstname;
    this.last = lastname;
}

let employee1 = new person("John" , "Doe");
let employee2 = new person("Jane", "Doe");
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now, to extend the properties of the Person object, we can add new variables on the fly. For example:&lt;br&gt;
&lt;/p&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;employee1.designation = "Developer"
employee2.designation = "Tester"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Prototyping
&lt;/h3&gt;

&lt;p&gt;Prototyping lets us add member functions to an object's prototype, making them available to all objects that extend it while saving memory, as each method exists only on the prototype and is not copied to every object.&lt;br&gt;
This helps us create base objects of sorts and extend their functionality without actually allocating memory for the functions in every instance. &lt;br&gt;
For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Person.prototype.getFullName = function(){
    return this.firstname+" "+this.lastname;
}
Person.prototype.greet = function(){
    return "Hello "+this.firstname+" "+this.lastname;
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The above example adds two methods to the prototype that are available to all Person objects.&lt;/p&gt;
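&lt;p&gt;A quick, self-contained sketch of the memory point above: both instances see the method, yet they share a single function object on the prototype rather than each owning a copy.&lt;/p&gt;

```javascript
function Person(firstname, lastname) {
  this.firstname = firstname;
  this.lastname = lastname;
}

// Defined once on the prototype, not once per instance:
Person.prototype.getFullName = function () {
  return this.firstname + " " + this.lastname;
};

const employee1 = new Person("John", "Doe");
const employee2 = new Person("Jane", "Doe");

console.log(employee1.getFullName());                         // "John Doe"
console.log(employee1.getFullName === employee2.getFullName); // true
console.log(employee1.hasOwnProperty("getFullName"));         // false
```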

&lt;p&gt;JavaScript leverages this functionality to provide various functions on built-in data structures and types. If we look closely at the object definition of an array, we can see the functions that JavaScript provides. &lt;br&gt;
&lt;br&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pSHd6xwC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/noitaczvu2mlc9118a0m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pSHd6xwC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/noitaczvu2mlc9118a0m.png" alt="Array prototype"&gt;&lt;/a&gt;&lt;br&gt;
In the object definition, we have &lt;strong&gt;__proto__&lt;/strong&gt;, which consists of various functions that a developer can use. When we define an array, these functions are not allocated memory again, yet we can still use the methods. &lt;/p&gt;
&lt;h3&gt;
  
  
  Built-in function constructors
&lt;/h3&gt;

&lt;p&gt;We can also add our own methods to the prototype of a built-in function constructor. For example:&lt;br&gt;
&lt;/p&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;String.prototype.isLengthLessThan = function(boundary){
    return this.length &amp;lt; boundary;
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The above code adds a function called &lt;code&gt;isLengthLessThan()&lt;/code&gt; to the prototype of String. &lt;/p&gt;
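&lt;p&gt;A self-contained usage sketch of such a method; the comparison here is written as &lt;code&gt;boundary &amp;gt; this.length&lt;/code&gt;, which is equivalent:&lt;/p&gt;

```javascript
// Extending a built-in prototype: every string now has the method.
String.prototype.isLengthLessThan = function (boundary) {
  return boundary > this.length; // same check as length-less-than-boundary
};

console.log("cat".isLengthLessThan(5));      // true  (3 is less than 5)
console.log("elephant".isLengthLessThan(5)); // false (8 is not less than 5)
```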

&lt;p&gt;Various JavaScript frameworks, such as jQuery, leverage this functionality (see jQuery.fn.init) to write code that allocates minimal memory while providing tons of functionality to users. &lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Prototyping objects is the way to go for creating objects with tons of functionality and minimal memory allocation. There is a lot more we can achieve using prototyping.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>webdev</category>
      <category>tutorial</category>
      <category>prototyping</category>
    </item>
    <item>
      <title>Github Actions: Power of CICD</title>
      <dc:creator>Ashwani Pandey</dc:creator>
      <pubDate>Mon, 17 Aug 2020 08:27:39 +0000</pubDate>
      <link>https://dev.to/ashwani1218/github-actions-power-of-cicd-3af4</link>
      <guid>https://dev.to/ashwani1218/github-actions-power-of-cicd-3af4</guid>
      <description>&lt;p&gt;Github has been improving continuously to provide a better experience to its users. The recent update of Nov 19 Github's actions was introduced. By Introducing Github actions developers now have the power of CI/CD and version control in the same dashboard. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XKGtshho--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/eslef27u14wwo8wqiora.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XKGtshho--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/eslef27u14wwo8wqiora.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;CI/CD is the heart of automation. GitHub now provides a one-stop solution for maintaining, building, documenting, testing, and deploying code. It is so convenient to have the code and the CI/CD pipelines in the same place. I remember switching tabs to see whether the build had passed in Travis CI and having to maintain a separate dashboard, which the introduction of &lt;code&gt;travis-ci.com&lt;/code&gt; made even more difficult to track: my old repository builds were not automatically imported from &lt;code&gt;.org&lt;/code&gt; to &lt;code&gt;.com&lt;/code&gt;, and there were other issues, such as ghost builds for repositories that had been renamed or deleted. &lt;/p&gt;

&lt;p&gt;GitHub Actions is also a better solution than a Jenkins pipeline for many teams: I would prefer an automated, freely available hosted runner for my CI/CD pipeline over a fully user-managed installation on my own machine. &lt;/p&gt;

&lt;h3&gt;
  
  
  So what is GitHub Actions?
&lt;/h3&gt;

&lt;p&gt;GitHub Actions is more than CI/CD: it is a general-purpose workflow system for building and testing, or for doing pretty much anything you want to do with your code after you push it to the repository. &lt;/p&gt;

&lt;p&gt;We can automate and execute different workflows right from our repository with GitHub Actions.&lt;br&gt;
Being an open-source ecosystem, you can find pretty much everything already implemented, either by GitHub or by some third-party repository. If we want to deploy to AWS, there is a workflow for that. If we want to publish to GitHub Pages, there is a workflow for that. We can even run a cron job; so if you want to update your static web pages every two hours, there is an implementation for that too. &lt;/p&gt;
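&lt;p&gt;As a sketch of that last case, a workflow can be triggered on a schedule instead of a push; the cron expression below fires at minute 0 of every second hour:&lt;/p&gt;

```yaml
on:
  schedule:
    - cron: "0 */2 * * *"   # run at minute 0 every two hours
```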
&lt;h3&gt;
  
  
  What is a workflow?
&lt;/h3&gt;

&lt;p&gt;So the workflow is what makes all of this possible. It is the string that holds all the beads together. We can compare a workflow with Spring Batch: it is made up of different jobs, which in turn have actions. We can choose to create a workflow of our own or use an already existing workflow provided by GitHub or any other third-party repository.&lt;br&gt;
A single workflow can consist of various jobs that can be executed in parallel or one after the other, based on the configuration we define. So if we want to deploy our application after a successful build, we can make the deploy job depend on the build job.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Build.yml

name: Build using Github Workflow   # Specifying the name of the workflow

on:                                 # This acts as the trigger 
  push:                             # and lets GitHub know when to trigger the job
    branches:
      - master
  pull_request:
    branches:
      - master

jobs:                               # This lets us define the various 
  build:                            # jobs we need to execute for a successful build
    runs-on: ubuntu-latest          # Specifying the image to run our code on
    strategy:
      matrix:
        node-version: [10.x, 12.x, 14.x] # Specifying the runtime
    steps:                          # Steps are actions that are to be performed
      - uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - run: npm ci
      - run: npm run build --if-present
      - run: npm test

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Above is a typical workflow file that defines how we want to build our application. For GitHub to automatically detect it and start building our application, we need to put the configuration file in the &lt;code&gt;.github/workflows&lt;/code&gt; directory.&lt;/p&gt;

&lt;h3&gt;
  
  
  So what can be done using GitHub workflows
&lt;/h3&gt;

&lt;p&gt;GitHub workflows provide the developer with an arsenal of events to trigger on. We can trigger a job on not just push or pull request events but on pretty much any event in the GitHub ecosystem. This level of flexibility provides us with tremendous potential. We can have a job to greet a new contributor, we can check for stale issues and pull requests, and we can build on macOS, Windows, and Linux simultaneously as parallel build jobs.&lt;br&gt;
&lt;br&gt;&lt;br&gt;
The UI of GitHub Actions is a piece of art: we have live-streaming logs that are color-coded and have emoji support as well; yes, you heard it right, &lt;strong&gt;'emoji'&lt;/strong&gt; support. We can search the entire logs for errors and keywords. Suppose you have an error in the logs; if you want to share it, you just need to copy the provided URL for the exact log line containing the error. No more "line number 12 of blah blah": just copy the URL and share it with friends and peers, and it will take them to the exact line where the error is.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MAMcxAmg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/jmgyh2hcsu1rg6e5a58b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MAMcxAmg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/jmgyh2hcsu1rg6e5a58b.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can also generate badges saying the build passed or failed and use them in our README file to let other contributors know. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BNE9bj7k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/legu4k7v7mziqi6wn1ss.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BNE9bj7k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/legu4k7v7mziqi6wn1ss.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A good thing about these workflows is that they are nothing but repositories that can be referenced from anywhere, which means no pre-configuration is required to use actions present anywhere in the whole GitHub ecosystem. We just need to reference them, and we are done: we can extend the functionality.&lt;/p&gt;

&lt;p&gt;Now, if you want to deploy your application to the cloud, GitHub has support for almost every cloud out there, with actions to take help from. We can always create our own actions from scratch as well. And what about my credentials? Don't worry: every GitHub repository now comes with a secrets store, where you can securely store your credentials and reference them in the workflows. I mean, what else do we need? By the way, these builds are absolutely free for all open-source projects. &lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Having said that, there is so much more that GitHub Actions has to offer, and the amount of extensibility it provides is commendable.&lt;/p&gt;

</description>
      <category>github</category>
      <category>cicd</category>
      <category>devops</category>
      <category>actionshackathon</category>
    </item>
    <item>
      <title>Docker in a nutshell</title>
      <dc:creator>Ashwani Pandey</dc:creator>
      <pubDate>Fri, 14 Aug 2020 05:45:01 +0000</pubDate>
      <link>https://dev.to/ashwani1218/docker-in-a-nutshell-3hm2</link>
      <guid>https://dev.to/ashwani1218/docker-in-a-nutshell-3hm2</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HHO-1c_D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/opkemkphrur4mzuu0uhm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HHO-1c_D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/opkemkphrur4mzuu0uhm.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;br&gt;&lt;br&gt;
Docker has made the life of a software engineer easy. The whole "it was working on my machine" problem has a one-stop solution. Docker introduced a way of standardizing the isolation of software into a container. This isolation helps developers define dependencies and get predictable behavior from the developed software.&lt;/p&gt;

&lt;p&gt;For many developers today, Docker is the de facto standard for building containerized applications. But what makes Docker so good at what it does? Docker uses virtualization to accomplish the isolation of software from other processes.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Let's see how OS runs on the system:&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The kernel is the central module of an operating system. It is the part of the operating system that loads first and remains in main memory. Processes running on the system communicate with the kernel, which in turn talks to the hardware.&lt;br&gt;
&lt;br&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--y76u9wtd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/07jg9fx9d64lk0yphm2c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--y76u9wtd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/07jg9fx9d64lk0yphm2c.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;br&gt;&lt;br&gt;
Processes talk to the kernel using system calls. A system call, commonly known as a &lt;strong&gt;syscall&lt;/strong&gt;, is the programmatic way in which a process or computer program requests a service from the kernel of the operating system on which it is executed. Using namespaces, we can segment out resources such as the hard drive to provide different versions or isolated resources per process or group of processes. &lt;/p&gt;

&lt;p&gt;Docker leverages these techniques to isolate software from the underlying system. Docker works by making images of the software in a container with the required dependencies and resources: exactly what is needed to run the software, no more and no less. Images are basically a snapshot of the filesystem, a startup command, and the required dependencies in a closed and concealed container, allowing developers to expect predictable behavior from the environment the software is running in. &lt;br&gt;
&lt;br&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nJTL2Vj6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2cg9ncuqeq33om7hq9wr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nJTL2Vj6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2cg9ncuqeq33om7hq9wr.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;br&gt;&lt;br&gt;
One can create a Docker image using a Dockerfile. A Dockerfile is a set of instructions that leads to the creation of a Docker image, which later on can be used to create any number of Docker containers to run the application. As the set of instructions remains the same after the creation of the image, its behavior remains unchanged.&lt;br&gt;
Building a Docker image is fairly simple if one knows what is required for the piece of code to run perfectly. One can follow the steps below and create Docker images with ease.&lt;/p&gt;
&lt;h2&gt;
  
  
  What is a Docker image
&lt;/h2&gt;

&lt;p&gt;A Docker image is essentially a blueprint for creating containers. It is comparable to a Java class: the image is a template stored on disk, and the containers created from it are the running instances that actually execute the application.&lt;/p&gt;
&lt;h3&gt;
  
  
  How to create a Docker Image:
&lt;/h3&gt;


&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Dockerfile 

FROM node:alpine     # Getting the base image

WORKDIR /app         # Specifying the working directory

COPY package.json .  # Copying the required files

RUN npm install      # Installing the Dependencies/ Running some configuration

COPY . .             # Copying the rest of the code

CMD ["npm", "start"] # Specifying the Startup command
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Getting the base image
&lt;/h3&gt;

&lt;p&gt;Defining the base image is the most important step of the image creation process, because it lets us build on top of an already assembled image. In the example above, I started from a Node image, which is pulled from a public registry called Docker Hub. By using a Node image I avoid the manual work of installing Node into my image; it comes preconfigured by the image's maintainers. &lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;&lt;code&gt;FROM&lt;/code&gt;&lt;/strong&gt; command takes the name of an image and, optionally, a tag. If no tag is specified, the Docker daemon pulls the image tagged &lt;code&gt;latest&lt;/code&gt;. We can specify any of the tags available for the image on Docker Hub. The syntax is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM &amp;lt;image_name&amp;gt;:&amp;lt;image_tag&amp;gt; 
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Alpine&lt;/strong&gt; Linux is a Linux distribution built around musl libc and BusyBox. An Alpine image is only about 5 MB and has access to a package repository, making it a great base image for utilities. We could have used a plain Alpine image as the base and then installed Node into it, but that would increase the number of lines we need to write in the Dockerfile.&lt;/p&gt;
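&lt;p&gt;As a rough sketch of that alternative (the package names are illustrative and may differ between Alpine versions), starting from a bare Alpine base would look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM alpine:3.18            # bare Alpine base, no Node preinstalled

# Install Node and npm from the Alpine package repository
RUN apk add --no-cache nodejs npm
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Using &lt;code&gt;node:alpine&lt;/code&gt; collapses these extra steps into a single &lt;code&gt;FROM&lt;/code&gt; line.&lt;/p&gt;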

&lt;h3&gt;
  
  
  Specifying the working Directory
&lt;/h3&gt;

&lt;p&gt;Specifying a working directory is a handy way to avoid conflicts between file names in the filesystem snapshot and file names in our application. A working directory can be declared at any point in the Dockerfile; every subsequent command, such as &lt;code&gt;COPY&lt;/code&gt;, &lt;code&gt;RUN&lt;/code&gt;, or &lt;code&gt;CMD&lt;/code&gt;, is then executed relative to it.&lt;br&gt;
Suppose our application contains a folder named &lt;code&gt;lib&lt;/code&gt;, which is very common. If we copied the code into the root directory of the image, our &lt;code&gt;lib&lt;/code&gt; folder would clash with the filesystem's &lt;code&gt;lib&lt;/code&gt; folder, causing conflicts and unexpected behavior. It is therefore recommended to specify a working directory that separates application files from filesystem files.&lt;br&gt;
We can specify a working directory using this syntax:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;WORKDIR /app  #Or any folder name you want to give
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Copying the required files
&lt;/h3&gt;

&lt;p&gt;When copying files into the image, we can either copy the whole project at once or copy only the files needed to install the dependencies first.&lt;br&gt;
The significance of this step may not be obvious at first, but when we rebuild an image, Docker reuses its layer cache and re-executes only the steps after the first one whose inputs changed. &lt;br&gt;
Dependencies rarely change, so there is no need to reinstall them on every rebuild. If we copied the entire project in a single step, however, any change to the source code would force Docker to reinstall the dependencies as well.&lt;br&gt;
It is therefore wise to copy files like package.json first and install the dependencies from them, since they change less frequently.&lt;/p&gt;
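&lt;p&gt;To make the caching behavior concrete, the ordering from the example Dockerfile can be annotated like this (file names are from the example above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;COPY package.json .  # cached layer: invalidated only when package.json changes
RUN npm install      # re-runs only if the layer above was invalidated
COPY . .             # invalidated by any source change; npm install stays cached
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;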

&lt;p&gt;We can use the &lt;code&gt;COPY&lt;/code&gt; command to copy the files&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;COPY &amp;lt;file in the system&amp;gt; &amp;lt;copy location&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;"&lt;strong&gt;.&lt;/strong&gt;" is a wildcard that can be used to copy all the files in the current directory.&lt;/p&gt;

&lt;h3&gt;
  
  
  Running configuration
&lt;/h3&gt;

&lt;p&gt;In this step we can run commands inside the image, such as installing dependencies or fetching something from the internet. It lets us do anything in the image that we would normally do on our local machine to make sure our program runs properly.&lt;/p&gt;

&lt;p&gt;We can make use of &lt;code&gt;RUN&lt;/code&gt; command to run the configuration or any other command such as npm install.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;RUN npm install
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Specifying Startup command
&lt;/h3&gt;

&lt;p&gt;This step is crucial, as it specifies the default command used when a container is created and started. The startup command is what Docker runs when we create a container from the image and run it. We specify it with the &lt;code&gt;CMD&lt;/code&gt; instruction.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CMD npm start
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This can be anything, such as &lt;code&gt;java -jar myjar.jar&lt;/code&gt; if we want to execute a jar as our primary command.&lt;/p&gt;
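&lt;p&gt;Note that &lt;code&gt;CMD&lt;/code&gt; comes in two forms, and only the last &lt;code&gt;CMD&lt;/code&gt; in a Dockerfile takes effect (the two lines below illustrate the forms, not a file you would write as-is):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CMD ["npm", "start"]   # exec form: runs the command directly, without a shell
CMD npm start          # shell form: equivalent to /bin/sh -c "npm start"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;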

&lt;h3&gt;
  
  
  Command to create a docker image from a Dockerfile:
&lt;/h3&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker build -t &amp;lt;tag&amp;gt; &amp;lt;build-context&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We can create a Docker image with the above command in the terminal. The &lt;code&gt;-t&lt;/code&gt; flag tags the created image for later use; we can then start containers by referring to that tag instead of the image id. &lt;/p&gt;
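&lt;p&gt;For example, run from the project root (the tag &lt;code&gt;myapp:1.0&lt;/code&gt; is an arbitrary name chosen here):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker build -t myapp:1.0 .
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The trailing &lt;code&gt;.&lt;/code&gt; is the build context: the directory whose files the &lt;code&gt;COPY&lt;/code&gt; instructions can see.&lt;/p&gt;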

&lt;h3&gt;
  
  
  Command to run a Docker image
&lt;/h3&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker run &amp;lt;image_id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;OR&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker run &amp;lt;tag&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;When we build an image, Docker returns an image id that can be used to fire up a container; we can also use the tag we gave at build time. If the Docker daemon doesn't find a local image with the specified tag, it looks for it on Docker Hub. If a matching image is found there, Docker pulls it and starts a container from it.&lt;/p&gt;
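&lt;p&gt;For a web app like the Node example, we usually also publish a port with &lt;code&gt;-p&lt;/code&gt; (the port 3000 here is illustrative; use whatever port the app actually listens on):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker run -p 3000:3000 myapp:1.0   # map host port 3000 to container port 3000
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;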

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This is just a gist of what Docker has to offer, enough to get started. There is a lot more we can do by leveraging the features and tools Docker provides.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>dockerfile</category>
      <category>devops</category>
      <category>virtualization</category>
    </item>
    <item>
      <title>Spring JPA: Under the covers</title>
      <dc:creator>Ashwani Pandey</dc:creator>
      <pubDate>Mon, 10 Aug 2020 03:29:20 +0000</pubDate>
      <link>https://dev.to/ashwani1218/spring-jpa-under-the-covers-11hi</link>
      <guid>https://dev.to/ashwani1218/spring-jpa-under-the-covers-11hi</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bLy7_wey--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4ddj2q8o4k3o5hxt0w9n.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bLy7_wey--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4ddj2q8o4k3o5hxt0w9n.jpeg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Spring Data JPA
&lt;/h1&gt;

&lt;p&gt;In the early days the DAO layer consisted of, and often still contains, a lot of boilerplate code, which makes it cumbersome to implement. Spring Data JPA, part of the Spring Data family, reduces this boilerplate and lets a developer focus on what's really important.&lt;/p&gt;

&lt;p&gt;Reduced boilerplate means less code and fewer artifacts to define and maintain. Spring Data JPA takes this further: much of the DAO layer can be replaced by configuration. The level of abstraction it provides leaves the developer with only an interface artifact to maintain.&lt;/p&gt;

&lt;p&gt;To start working with Spring Data JPA, a DAO interface must extend JpaRepository. Just by extending this interface, the developer gets tons of methods already implemented and ready to use.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public interface FooRepository extends JpaRepository&amp;lt;Foo, Long&amp;gt; { 
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h4&gt;
  
  
  But how does Spring JPA work!
&lt;/h4&gt;

&lt;p&gt;There are many ways to leverage the power of Spring Data JPA; we will discuss query creation here.&lt;/p&gt;

&lt;h5&gt;
  
  
  There are two ways to implement query creation:
&lt;/h5&gt;

&lt;ol&gt;
&lt;li&gt;Automatic Custom Queries&lt;/li&gt;
&lt;li&gt;Manual Custom Queries&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  1. Automatic Custom Queries:
&lt;/h4&gt;

&lt;p&gt;When we extend JpaRepository, Spring scans every method in the interface and tries to parse it to generate a query. It strips prefixes such as &lt;strong&gt;find...By&lt;/strong&gt;, &lt;strong&gt;read...By&lt;/strong&gt;, &lt;strong&gt;count...By&lt;/strong&gt;, &lt;strong&gt;query...By&lt;/strong&gt;, and &lt;strong&gt;get...By&lt;/strong&gt; from the method name and parses the rest of it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public interface FooRepository extends JpaRepository&amp;lt;Foo, Long&amp;gt; { 

    public Optional&amp;lt;Foo&amp;gt; findByName(String name); 

}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The first &lt;strong&gt;'By'&lt;/strong&gt; acts as a delimiter indicating the start of the query criteria. We can add conditions on the entity properties and concatenate them with &lt;strong&gt;'And'&lt;/strong&gt; or &lt;strong&gt;'Or'&lt;/strong&gt;. We can also add a Distinct clause to set a distinct flag.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public interface FooRepository extends JpaRepository&amp;lt;Foo, Long&amp;gt; { 

    public Optional&amp;lt;Foo&amp;gt; findByFirstnameAndLastname(String firstname, String lastname); 

    public List&amp;lt;Foo&amp;gt; findByFirstnameOrLastname(String firstname, String lastname);

    public List&amp;lt;Foo&amp;gt; findDistinctByLastname(String lastname);
}

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;em&gt;" Expressions are usually property traversal combined with operators that can be concatenated. We can combine properties expression with 'And' and 'Or'. There are other operators such as 'Between', 'LessThan', 'GreaterThan', 'Like' for property expression." :- Spring Docs&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The above line defines the automatic query creation perfectly.&lt;/p&gt;
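&lt;p&gt;To make this concrete, the method names above translate roughly into the following JPQL (a sketch of what Spring derives; the exact generated query text may differ):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;findByFirstnameAndLastname(firstname, lastname):
  SELECT f FROM Foo f WHERE f.firstname = ?1 AND f.lastname = ?2

findDistinctByLastname(lastname):
  SELECT DISTINCT f FROM Foo f WHERE f.lastname = ?1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;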

&lt;h5&gt;
  
  
  Property Expression
&lt;/h5&gt;

&lt;p&gt;A property expression usually refers to a direct property of the managed entity, and at query creation time Spring verifies that each parsed property is a property of the managed domain class. Nested properties can also be traversed: for example, &lt;code&gt;findByAddressZipCode&lt;/code&gt; resolves to the &lt;code&gt;address.zipCode&lt;/code&gt; property. &lt;/p&gt;

&lt;h4&gt;
  
  
  2. Manual custom Queries
&lt;/h4&gt;

&lt;p&gt;If we wish to refine our results with a query that automatic query creation cannot express, we are free to write custom JPQL queries.&lt;/p&gt;

&lt;p&gt;We can define the custom queries using the &lt;strong&gt;@Query&lt;/strong&gt; annotation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public interface FooRepository extends JpaRepository&amp;lt;Foo, Long&amp;gt; { 

    @Query("SELECT f FROM Foo f WHERE LOWER(f.name) = LOWER(:name)")
    Foo retrieveByName(@Param("name") String name);

}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Once we use the &lt;strong&gt;@Query&lt;/strong&gt; annotation the method name doesn't&lt;br&gt;
matter as the method name won't be parsed.&lt;/p&gt;
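&lt;p&gt;&lt;strong&gt;@Query&lt;/strong&gt; can also run native SQL against the underlying table by setting &lt;code&gt;nativeQuery = true&lt;/code&gt; (a sketch; the table name &lt;code&gt;foo&lt;/code&gt; and method name below are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public interface FooRepository extends JpaRepository&amp;lt;Foo, Long&amp;gt; { 

    // Runs as raw SQL against the database, bypassing JPQL translation
    @Query(value = "SELECT * FROM foo WHERE foo.name = :name", nativeQuery = true)
    Foo retrieveByNameNative(@Param("name") String name);

}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;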

&lt;h4&gt;
  
  
  Conclusion
&lt;/h4&gt;

&lt;p&gt;Thanks to the Spring Data JPA team, we can implement the DAO layer effortlessly and focus on what matters. This blog is just a gist of what Spring Data JPA has to offer; there is a lot more we can do with the JPA implementation.&lt;/p&gt;

</description>
      <category>java</category>
      <category>springboot2</category>
      <category>springdatajpa</category>
      <category>database</category>
    </item>
  </channel>
</rss>
