<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: abhick09</title>
    <description>The latest articles on DEV Community by abhick09 (@abhick09).</description>
    <link>https://dev.to/abhick09</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F313038%2F16f4a620-04a4-4e3f-ac94-2194ba29b385.png</url>
      <title>DEV Community: abhick09</title>
      <link>https://dev.to/abhick09</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/abhick09"/>
    <language>en</language>
    <item>
      <title>CODEDEPLOY ON AWS PART 1</title>
      <dc:creator>abhick09</dc:creator>
      <pubDate>Fri, 29 Jan 2021 11:41:40 +0000</pubDate>
      <link>https://dev.to/abhick09/codedeploy-on-aws-part-1-400p</link>
      <guid>https://dev.to/abhick09/codedeploy-on-aws-part-1-400p</guid>
      <description>&lt;h1&gt;
  
  
  Setup IAM User with access to resources
&lt;/h1&gt;

&lt;h3&gt;
  
  
  What are IAM Users and how to create one?
&lt;/h3&gt;

&lt;p&gt;IAM Users are AWS identities that can access, provision, and monitor AWS managed services or any other service within the AWS architecture. IAM allows fine-grained control over users: a user can be given only READ access to multiple services, or full CRUD control over their assigned services.&lt;/p&gt;

&lt;p&gt;We can simply create an IAM user from the console, and there are three ways such a user can interact with AWS: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS CLI : uses an access key, which is a combination of an access key ID and a secret access key.&lt;/li&gt;
&lt;li&gt;AWS API : uses the same access key credentials to sign programmatic requests to AWS services.&lt;/li&gt;
&lt;li&gt;AWS Console : uses a username and console password, which grant access to the AWS Management Console.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What are the types of IAM users?
&lt;/h3&gt;

&lt;p&gt;IAM users can be given two types of access:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Programmatic access : The IAM user might need to make API calls, use the AWS CLI, or use the Tools for Windows PowerShell. In that case, create an access key (access key ID and a secret access key) for that user.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS Management Console access : If the user needs to access the AWS Management Console, create a password for the user.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What are the ways for IAM users to access/assign resources?
&lt;/h3&gt;

&lt;p&gt;IAM users gain access to resources either through a policy attached directly to the user or through a group-level policy applied to all users in the group.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;IAM User : an IAM user is associated with an access key or console password, and policies can be attached to it directly.&lt;/li&gt;
&lt;li&gt;IAM Group : an IAM group is a collection of IAM users with similar permission needs, so that many tasks can be handled by a large group of people, or so that permissions can be separated by stage of the software engineering lifecycle (i.e. dev, staging, prod).&lt;/li&gt;
&lt;li&gt;IAM Roles : an IAM role is a collection of permissions that we can attach to users (or services) that need to perform special actions, such as deployments.&lt;/li&gt;
&lt;li&gt;Instance Profiles : an instance profile is created by the IAM user to pass a role and its policies to an EC2 instance so that it can operate correctly with CodeDeploy.&lt;/li&gt;
&lt;/ul&gt;
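&lt;p&gt;Policies like the ones described above are ultimately JSON documents. Below is a minimal sketch, expressed as a JavaScript object, of a hypothetical read-only S3 policy; the bucket name and the chosen actions are illustrative, not taken from this setup:&lt;/p&gt;

```javascript
// A hypothetical IAM policy document as a JavaScript object.
// Version, Statement, Effect, Action, and Resource follow the standard
// IAM policy grammar; "example-bucket" is a made-up bucket name.
const readOnlyBucketPolicy = {
  Version: "2012-10-17",
  Statement: [
    {
      Effect: "Allow",
      Action: ["s3:GetObject", "s3:ListBucket"],
      Resource: [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
};

// Serialized, this is the JSON you would attach to a user, group, or role.
const policyJson = JSON.stringify(readOnlyBucketPolicy, null, 2);
console.log(policyJson);
```

Attached directly to a user, to a group, or to a role assumed via an instance profile, a document of this shape is what actually grants or denies access.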

&lt;h3&gt;
  
  
  How to assign the IAM user to use CODE DEPLOY?
&lt;/h3&gt;

&lt;p&gt;The CODE DEPLOY and IAM user are connected via Instance Profiles with IAM Roles,IAM Groups.&lt;/p&gt;

&lt;p&gt;Instance Profiles : The Instance profiles are profiles created by the IAM user to access and manage policies for EC2 Instance to operate and work properly with CODE DEPLOY.Instance Profiles are attached with IAM Roles which will grant the profile with access to resources only one IAM Role can be assigned to a Instance Profiles.Where as the same IAM Role can be used on multiple Instance Profile.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>AWS S3 DEMYSTIFIED</title>
      <dc:creator>abhick09</dc:creator>
      <pubDate>Mon, 14 Dec 2020 09:47:02 +0000</pubDate>
      <link>https://dev.to/abhick09/aws-s3-demystifed-2bda</link>
      <guid>https://dev.to/abhick09/aws-s3-demystifed-2bda</guid>
      <description>&lt;h2&gt;
  
  
  What is S3?
&lt;/h2&gt;

&lt;p&gt;S3 is object-based storage where individual files can range from 0 bytes to 5 TB. Storage is unlimited, and objects are stored in buckets. Bucket names must be globally unique, as S3 uses a universal namespace. When data is uploaded to S3 successfully, it returns a status code of 200. Amazon S3 is a simple key-based object store: objects are key-value pairs, and when you store data you assign a unique object key that can later be used to retrieve it. Keys can be any string, and they can be constructed to mimic hierarchical attributes. Alternatively, you can use S3 Object Tagging to organize your data across all of your S3 buckets and/or prefixes.&lt;/p&gt;
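&lt;p&gt;The key-based model can be pictured as a flat map from string keys to objects; this is only a sketch of the idea, not how S3 is implemented, and the keys below are made up:&lt;/p&gt;

```javascript
// S3's key-based model sketched as a flat map: keys are arbitrary strings,
// and "folders" are just a naming convention inside the key.
const bucket = new Map();

bucket.set("photos/2020/cat.png", { size: 1024, contentType: "image/png" });
bucket.set("notes.txt", { size: 12, contentType: "text/plain" });

// Retrieval is by exact key; the "photos/2020/" prefix only mimics hierarchy.
const obj = bucket.get("photos/2020/cat.png");
console.log(obj.contentType);
```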

&lt;h2&gt;
  
  
  What are S3 objects?
&lt;/h2&gt;

&lt;p&gt;In S3, all files are stored as objects. Each object has: &lt;br&gt;
*Key : Value&lt;br&gt;
*Version ID&lt;br&gt;
*Metadata&lt;br&gt;
*Subresources : Access Control Lists&lt;/p&gt;

&lt;p&gt;S3 does have one consistency caveat: if you write a new object you can read it immediately, but if you overwrite an object it may take some time for the change to propagate before you get the new data. The same applies when you delete an object.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are the benefits of S3?
&lt;/h2&gt;

&lt;p&gt;*S3 guarantees eleven 9s (99.999999999%) of durability for its service.&lt;br&gt;
*It has tiered storage, with seven tiers to choose from depending on your needs.&lt;br&gt;
*It has lifecycle management: you can set lifecycle expiration policies to automatically remove objects based on their age.&lt;br&gt;
*It has versioning. When analyzing the storage costs of the operations, note that the 4 GB object from Day 1 is not deleted from the bucket when the 5 GB object is written on Day 15. Instead, the 4 GB object is preserved as an older version and the 5 GB object becomes the most recently written version of the object within your bucket.&lt;br&gt;
*It has encryption. You can choose to encrypt data using SSE-S3, SSE-C, SSE-KMS, or a client library such as the Amazon S3 Encryption Client. All four enable you to store sensitive data encrypted at rest in Amazon S3.&lt;br&gt;
*It has access control lists (object-level security) and bucket policies (bucket-level security).&lt;/p&gt;

&lt;h2&gt;
  
  
  What are the various storage tiers for S3?
&lt;/h2&gt;

&lt;p&gt;*S3 Standard&lt;br&gt;
*S3 IA : Infrequent Access&lt;br&gt;
*S3 One Zone-IA : Infrequent Access, stored in a single availability zone only&lt;br&gt;
*S3 Intelligent-Tiering : automatically moves objects between tiers based on your usage patterns&lt;br&gt;
*S3 Glacier : for very infrequently accessed information&lt;br&gt;
*S3 Glacier Deep Archive : for archiving data that is not being changed and is not needed rapidly&lt;br&gt;
*S3 Outposts : the S3 Outposts storage class stores your S3 data on-premises. Amazon S3 on Outposts delivers object storage in your on-premises environment, using the S3 APIs and capabilities that you use in AWS today. AWS Outposts is a fully managed service that extends AWS infrastructure, AWS services, APIs, and tools to virtually any datacenter, co-location space, or on-premises facility.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are you charged for in S3?
&lt;/h2&gt;

&lt;p&gt;Charges depend on the storage tier selected, plus the following: &lt;br&gt;
*REQUESTS (transfer in / transfer out)&lt;br&gt;
*STORAGE MANAGEMENT PRICING : you can use a single Amazon S3 bucket to store a mixture of S3 Glacier Deep Archive, S3 Standard, S3 Standard-IA, S3 One Zone-IA, and S3 Glacier data. S3 Object Tagging is also part of storage management.&lt;br&gt;
*DATA TRANSFER PRICING (in/out)&lt;br&gt;
*Versioning&lt;br&gt;
*Location&lt;br&gt;
*Storage usage is measured in “TimedStorage-ByteHrs,” which are added up at the end of the month to generate your monthly charges.&lt;br&gt;
*Assume you store 100 GB (107,374,182,400 bytes) of data in Amazon S3 Standard in your bucket for the first 15 days in March, and 100 TB (109,951,162,777,600 bytes) of data in Amazon S3 Standard for the final 16 days in March.&lt;br&gt;
At the end of March, you would have the following usage in Byte-Hours: Total Byte-Hour usage = [107,374,182,400 bytes x 15 days x 24 hours/day] + [109,951,162,777,600 bytes x 16 days x 24 hours/day] = 42,259,901,212,262,400 Byte-Hours.&lt;br&gt;
Converting this to GB-Months: 42,259,901,212,262,400 Byte-Hours / 1,073,741,824 bytes per GB / 744 hours per month = 52,900 GB-Months.&lt;br&gt;
This usage volume crosses two different volume tiers. The monthly storage price is calculated below assuming the data is stored in the US East (Northern Virginia) Region: 50 TB Tier: 51,200 GB x $0.023 = $1,177.60; 50 TB to 450 TB Tier: 1,700 GB x $0.022 = $37.40.&lt;br&gt;
Total Storage Fee = $1,177.60 + $37.40 = $1,215.00&lt;/p&gt;
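&lt;p&gt;The byte-hour arithmetic above can be verified with a short script; the $0.023 and $0.022 tier prices are the US East rates quoted in the example and may differ today:&lt;/p&gt;

```javascript
// Reproduce the storage-cost example: 100 GB for the first 15 days, then
// 100 TB for the final 16 days, in a 744-hour month (March).
const GB = 1024 ** 3; // 1,073,741,824 bytes

const byteHours =
  100 * GB * 15 * 24 +          // 100 GB for the first 15 days
  100 * 1024 * GB * 16 * 24;    // 100 TB for the final 16 days

const gbMonths = byteHours / GB / 744; // converts Byte-Hours to GB-Months

// Tiered pricing from the example (first 50 TB, then the next tier):
const tier1 = Math.min(gbMonths, 51200) * 0.023;
const tier2 = Math.max(gbMonths - 51200, 0) * 0.022;
const total = tier1 + tier2;

console.log(byteHours, gbMonths, total.toFixed(2));
```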

&lt;h2&gt;
  
  
  How is security handled in S3?
&lt;/h2&gt;

&lt;p&gt;Customers may use four mechanisms for controlling access to Amazon S3 resources: Identity and Access Management (IAM) policies, bucket policies, Access Control Lists (ACLs), and Query String Authentication. IAM enables organizations with multiple employees to create and manage multiple users under a single AWS account. With IAM policies, customers can grant IAM users fine-grained control to their Amazon S3 bucket or objects while also retaining full control over everything the users do. With bucket policies, customers can define rules which apply broadly across all requests to their Amazon S3 resources, such as granting write privileges to a subset of Amazon S3 resources. Customers can also restrict access based on an aspect of the request, such as HTTP referrer and IP address. With ACLs, customers can grant specific permissions (i.e. READ, WRITE, FULL_CONTROL) to specific users for an individual bucket or object. With Query String Authentication, customers can create a URL to an Amazon S3 object which is only valid for a limited time.&lt;br&gt;
By default, all newly created buckets are private, with no public access. We can set up access control on our bucket with: a bucket policy, applied at the bucket level to all objects in the bucket and written in JSON (a policy generator tool is available); and access control lists, applied at the object level to individual users or groups.&lt;br&gt;
We can also configure access logs to see all requests to the bucket and every (CRUD) operation performed on our S3.&lt;br&gt;
How does S3 handle encryption?&lt;br&gt;
You can choose to encrypt data using SSE-S3, SSE-C, SSE-KMS, or a client library such as the Amazon S3 Encryption Client. All four enable you to store sensitive data encrypted at rest in Amazon S3.&lt;/p&gt;

&lt;p&gt;In transit, while requests are being made, S3 can enforce encryption by requiring SSL/TLS (HTTPS).&lt;/p&gt;

&lt;p&gt;At rest, AWS handles encryption within its services using:&lt;br&gt;
SSE-S3 provides an integrated solution where Amazon handles key management and key protection using multiple layers of security. You should choose SSE-S3 if you prefer to have Amazon manage your keys.&lt;/p&gt;

&lt;p&gt;SSE-C enables you to leverage Amazon S3 to perform the encryption and decryption of your objects while retaining control of the keys used to encrypt objects. With SSE-C, you don’t need to implement or use a client-side library to perform the encryption and decryption of objects you store in Amazon S3, but you do need to manage the keys that you send to Amazon S3 to encrypt and decrypt objects. Use SSE-C if you want to maintain your own encryption keys, but don’t want to implement or leverage a client-side encryption library.&lt;/p&gt;

&lt;p&gt;SSE-KMS enables you to use AWS Key Management Service (AWS KMS) to manage your encryption keys. Using AWS KMS to manage your keys provides several additional benefits. With AWS KMS, there are separate permissions for the use of the master key, providing an additional layer of control as well as protection against unauthorized access to your objects stored in Amazon S3. AWS KMS provides an audit trail so you can see who used your key to access which object and when, as well as view failed attempts to access data from users without permission to decrypt the data. Also, AWS KMS provides additional security controls to support customer efforts to comply with PCI DSS, HIPAA/HITECH, and FedRAMP industry requirements.&lt;/p&gt;

&lt;h2&gt;
  
  
  Handling CORS in S3?
&lt;/h2&gt;

&lt;p&gt;CORS in S3 is simple: we attach a CORS configuration to the bucket, and services from the allowed origins can then access the bucket without cross-origin issues.&lt;/p&gt;
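&lt;p&gt;A CORS configuration is a small JSON document attached to the bucket. Here is a minimal sketch expressed as a JavaScript object; the origin is a placeholder, and the fields follow the rule format the S3 console accepts:&lt;/p&gt;

```javascript
// A minimal S3-style CORS rule set as a JavaScript object.
// "https://www.example.com" is a placeholder origin.
const corsRules = [
  {
    AllowedOrigins: ["https://www.example.com"],
    AllowedMethods: ["GET", "PUT"],
    AllowedHeaders: ["*"],
    MaxAgeSeconds: 3000
  }
];

// Serialized, this is what you would paste into the bucket's CORS settings.
const corsJson = JSON.stringify(corsRules, null, 2);
console.log(corsJson);
```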

&lt;h2&gt;
  
  
  Using CLOUDFRONT for S3?
&lt;/h2&gt;

&lt;p&gt;AWS CloudFront makes an S3 bucket available at all of AWS’s edge locations, which optimizes performance: anyone can access the bucket from anywhere in the world, and all (CRUD) operations are faster because they are served by the nearest edge location, with AWS using its internal network to fetch the data from the origin when it is not cached.&lt;br&gt;
*Edge locations cache your content.&lt;br&gt;
*The origin is whichever AWS service the users need to access; an origin can be S3, EC2, etc.&lt;br&gt;
*Distribution : web for websites / RTMP for audio and video.&lt;br&gt;
*CloudFront is also used to accelerate uploads of files to S3 (Transfer Acceleration).&lt;br&gt;
*Objects are cached for the lifetime of the TTL (time to live, default 24 hours); you can set the TTL yourself.&lt;br&gt;
*Because of the TTL, new data is served immediately, but edited data may still be cached; you are allowed to clear the cache, but you will be charged for it.&lt;br&gt;
*You clear the cache by invalidating objects.&lt;/p&gt;
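&lt;p&gt;The TTL behaviour in the list above can be illustrated with a toy in-memory cache; this is a teaching sketch of caching with a TTL and invalidation, not how CloudFront is actually implemented:&lt;/p&gt;

```javascript
// Toy cache mimicking an edge location: entries are served until their TTL
// expires or they are explicitly invalidated, after which the origin must
// be consulted again.
class EdgeCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.store = new Map();
  }
  put(key, value, now = Date.now()) {
    this.store.set(key, { value, expires: now + this.ttlMs });
  }
  get(key, now = Date.now()) {
    const entry = this.store.get(key);
    if (entry === undefined) return undefined;   // never cached: fetch from origin
    if (entry.expires > now) return entry.value; // fresh hit, served from the edge
    return undefined;                            // expired: treat as a miss
  }
  invalidate(key) {
    this.store.delete(key); // like a CloudFront invalidation request
  }
}
```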

&lt;h2&gt;
  
  
  How to make the most of S3?
&lt;/h2&gt;

&lt;p&gt;We can make the most of S3 by: &lt;br&gt;
*using CloudFront&lt;br&gt;
*using random prefixes (hex hashes) for key names&lt;br&gt;
*remembering that S3 supports 3,500 PUT requests and 5,500 GET requests per second, and structuring usage to take advantage of this performance.&lt;br&gt;
*CloudWatch storage metrics are enabled by default for all buckets and reported once per day, but you can configure them for the conditions you want.&lt;br&gt;
*You can use CloudWatch to set thresholds on any of the storage metrics counts, timers, or rates and trigger an action when the threshold is breached. For example, you can set a threshold on the percentage of 4xx Error Responses and, when at least 3 data points are above the threshold, trigger a CloudWatch alarm to alert a DevOps engineer.&lt;/p&gt;

&lt;h2&gt;
  
  
  LAB
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Create two buckets&lt;br&gt;
*Enable CORS so that data can be shared between the two buckets.&lt;br&gt;
*Add an ACL and a bucket policy to your bucket.&lt;br&gt;
*Add encryption to your bucket.&lt;br&gt;
*Go to CloudFront and enable a distribution and Transfer Acceleration.&lt;br&gt;
*Go to CloudFront and add restrictions to block specific countries.&lt;br&gt;
*Go to CloudFront and invalidate objects to clear the cache.&lt;br&gt;
*Enable versioning on your bucket to keep all versions of your objects.&lt;br&gt;
*Enable access logs on your bucket.&lt;br&gt;
*Update your bucket policy so access must come through CloudFront only.&lt;br&gt;
*Create an IAM user for accessing the bucket from CloudFront.&lt;br&gt;
*In CloudFront, choose the allowed HTTP methods.&lt;br&gt;
*In CloudFront, set a minimum TTL (to keep fast-changing objects up to date).&lt;br&gt;
*In CloudFront, use signed URLs or signed cookies for restricted content.&lt;br&gt;
*Enable AWS WAF to protect the bucket from common web exploits that may affect availability, compromise security, or consume excessive resources.&lt;br&gt;
PLEASE GO THROUGH FOR MORE COMPREHENSIVE INFORMATION&lt;/em&gt; &lt;strong&gt;&lt;a href="https://aws.amazon.com/s3/faqs/"&gt;https://aws.amazon.com/s3/faqs/&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>cloudskills</category>
      <category>aws</category>
    </item>
    <item>
      <title>Domain Name Server: Give me the IP address.</title>
      <dc:creator>abhick09</dc:creator>
      <pubDate>Sun, 23 Aug 2020 12:05:07 +0000</pubDate>
      <link>https://dev.to/abhick09/domain-name-server-give-me-the-ip-address-2oph</link>
      <guid>https://dev.to/abhick09/domain-name-server-give-me-the-ip-address-2oph</guid>
      <description>&lt;p&gt;In the real world as humans we feel comfortable identifying everything around us with specific names whether it be other humans,animals or any object we associate its identification with various names but while on the internet computers/servers are identified with numbers also know as IP addresses.These IP addresses are the reason we are able to communicate with various computers all over the world with the help of internet.&lt;/p&gt;

&lt;p&gt;Since we are comfortable with names but computers on the network identify themselves with numbers, a problem arises: we find it hard to memorize long numeric addresses. To resolve this and make communicating with servers easy, the Domain Name System (DNS) was introduced.&lt;br&gt;
The Domain Name System is the bridge that translates domain names to IP addresses. For example, here are various domain names with their IP addresses.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Name: facebook.com
Address: 157.240.198.35 (IPv4)
Name: facebook.com
Address: 2a03:2880:f144:82:face:b00c:0:25de (IPv6)&lt;/li&gt;
&lt;li&gt;Name: google.com
Address: 172.217.160.174 (IPv4)
Name: google.com
Address: 2404:6800:4009:80a::200e (IPv6)&lt;/li&gt;
&lt;li&gt;Name: linkedin.com
Address: 108.174.10.10 (IPv4)
Name: linkedin.com
Address: 2620:109:c002::6cae:a0a (IPv6)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;You can copy and paste the IPv4 addresses into your browser, and you will be redirected to the respective domains.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  THE PROCESS
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;When our machine cannot resolve a domain’s IP address locally, it sends a query to our ISP’s server. If the ISP’s server has the IP address cached, it returns it to us, and our machine can then send its request to that particular server using the IP address the ISP provided.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If our ISP does not have the IP address we are looking for, it sends a request to a ROOT SERVER. There are 13 sets of root servers around the world, operated by 12 different organizations. The root server redirects the request to a TLD (TOP LEVEL DOMAIN) server. TLD servers hold address information for top-level domains such as .com, .net, and .org, but the TLD server still does not have our domain’s IP address, so it redirects us to an Authoritative Name Server.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Authoritative Name Servers are the ones that hold all the information about a domain. When the request reaches the authoritative name server, it has the desired IP address of the domain and sends it back to our machine, so that we can access the information we need via the web using the IP address associated with our query.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
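&lt;p&gt;The chain of referrals above can be sketched as a toy in-memory model; this is not a real resolver, and the server names and IP address below are illustrative placeholders:&lt;/p&gt;

```javascript
// Toy model of the recursive lookup: the root server refers us to a TLD
// server, the TLD server refers us to an authoritative server, which has
// the IP address we want.
const rootServer = { com: "tld-com" };
const tldServers = { "tld-com": { "example.com": "ns-example" } };
const authoritativeServers = { "ns-example": { "example.com": "93.184.216.34" } };

function resolve(domain) {
  const tld = domain.split(".").pop();              // e.g. "com"
  const tldServer = rootServer[tld];                // step 1: ask a root server
  const nameServer = tldServers[tldServer][domain]; // step 2: ask the TLD server
  return authoritativeServers[nameServer][domain];  // step 3: ask the authoritative server
}

console.log(resolve("example.com"));
```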

&lt;blockquote&gt;
&lt;p&gt;Just as technology is a tool to help us live better and easier lives, DNS is a tool that helps us browse the web quickly and easily, so we can concentrate on the information we seek rather than spend time working out how to get it from the internet with long, confusing numeric addresses.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>codenewbie</category>
      <category>devops</category>
      <category>computerscience</category>
    </item>
    <item>
      <title>Restful CRUD API with Express in NODE.js.</title>
      <dc:creator>abhick09</dc:creator>
      <pubDate>Mon, 11 May 2020 15:30:49 +0000</pubDate>
      <link>https://dev.to/abhick09/restful-crud-api-with-express-multer-in-node-js-4eo5</link>
      <guid>https://dev.to/abhick09/restful-crud-api-with-express-multer-in-node-js-4eo5</guid>
      <description>&lt;p&gt;I hope everyone is safe during theses difficult times here i am to today trying to give you some things i have learned on building basic CRUD operations with Express.&lt;/p&gt;

&lt;p&gt;Express and Node are very powerful and can be used to build large-scale applications. This CRUD setup results in RESTful endpoints that can serve all your web apps, whether on a phone via an app or on a domain on the internet.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ca3nD--T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4mkgqqxxx1h1qx8sselc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ca3nD--T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4mkgqqxxx1h1qx8sselc.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are following the MVC architecture for clean, easily maintainable code, and I suggest you do the same if you are following along. It is quite simple: the code is just separated into three parts, where &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Model&lt;/em&gt;&lt;/strong&gt; : the data schema&lt;br&gt;
&lt;strong&gt;&lt;em&gt;View&lt;/em&gt;&lt;/strong&gt;  : your view files&lt;br&gt;
&lt;strong&gt;&lt;em&gt;Controller&lt;/em&gt;&lt;/strong&gt; : functions handling any changes to your schema.&lt;/p&gt;

&lt;p&gt;Apart from these, some additional folders help keep your code organized, easy to read, and easy to collaborate on:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Routes&lt;/em&gt;&lt;/strong&gt; : for routing different requests for the same or different schemas to different APIs.&lt;br&gt;
&lt;strong&gt;&lt;em&gt;Upload&lt;/em&gt;&lt;/strong&gt; : for our images.&lt;br&gt;
&lt;strong&gt;&lt;em&gt;Middleware&lt;/em&gt;&lt;/strong&gt; : for third-party configuration, such as authentication in my setup. (JWT will be covered in the next post.) &lt;/p&gt;

&lt;p&gt;Let’s get started. Once you have run npm init in your project (as in my previous post), create a server.js file to run your Node server. It requires the http module, which ships with Node, and the file is as follows.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2HDUPeti--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9y6zukcg7d9cdn1nd3u3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2HDUPeti--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9y6zukcg7d9cdn1nd3u3.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then run &lt;strong&gt;&lt;em&gt;node server.js&lt;/em&gt;&lt;/strong&gt; in your terminal, and the result will be visible at localhost:5000 in your browser.&lt;/p&gt;

&lt;p&gt;Now create your first route. Create a routes folder and inside it add the file where you want to store your routes, which in my case is &lt;strong&gt;&lt;em&gt;devRoutes.js&lt;/em&gt;&lt;/strong&gt;.&lt;br&gt;
After that, configure your route file as follows.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oWS_Yryw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/opq2kcj1p85267l2a2zg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oWS_Yryw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/opq2kcj1p85267l2a2zg.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you have configured it, add your route in the app.js file: import the route file, then mount it using &lt;strong&gt;&lt;em&gt;app.use&lt;/em&gt;&lt;/strong&gt; followed by the URL you want, so &lt;strong&gt;&lt;em&gt;app.use('/dev',devRoutes)&lt;/em&gt;&lt;/strong&gt; directs all requests under that path to your routes, as below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RFKMLmTD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6ldewswlctw9ykz2dd63.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RFKMLmTD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6ldewswlctw9ykz2dd63.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will be using a non-relational database called &lt;strong&gt;&lt;em&gt;mongodb&lt;/em&gt;&lt;/strong&gt;. You can configure any other database with this setup.&lt;/p&gt;

&lt;p&gt;Now we configure our database and start on the CRUD operations. I suggest you use a hosted database with MongoDB Atlas, which is free for development purposes (simply google mongodb atlas). It is very easy to configure and use, and when you later want to put your project online, far less configuration is needed than when migrating from a local environment.&lt;/p&gt;

&lt;p&gt;You should now create model and controller folders. Since in this project we are only dealing with endpoints and testing them with Postman, we will not need the view aspect, but you can add views that fetch the endpoints in your Node app as well.&lt;/p&gt;

&lt;p&gt;We will be using the &lt;strong&gt;&lt;em&gt;app.js&lt;/em&gt;&lt;/strong&gt; file for all the configuration we need, including our database. Configuring MongoDB Atlas is simple and easy, or you can set up a local MongoDB on your device instead.&lt;/p&gt;

&lt;p&gt;For hosted databases, once you have logged in to MongoDB Atlas you need to create a cluster for your database. You should add your admin username and password as well.&lt;/p&gt;

&lt;p&gt;The Atlas UI is quite comfortable to find your way around; the settings are under the heading Database Security.&lt;/p&gt;

&lt;p&gt;Once you have set the username and password, click &lt;strong&gt;&lt;em&gt;connect&lt;/em&gt;&lt;/strong&gt; and then &lt;strong&gt;&lt;em&gt;connect to your application&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;There you will find a connection string, which is required to hook Mongo up to your application.&lt;/p&gt;

&lt;p&gt;Let’s start our database setup by installing the mongoose package: &lt;strong&gt;&lt;em&gt;npm install mongoose&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;You will also need body-parser to access request data as JSON, so run &lt;strong&gt;&lt;em&gt;npm install body-parser&lt;/em&gt;&lt;/strong&gt; and configure it as well.&lt;/p&gt;

&lt;p&gt;Once you have the connection string, navigate to the app.js file and create the connection. When everything is configured, your app.js will look like the following. I recommend putting your password in a safe env file, but for this demo I have placed it there directly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3AKdyYHt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/052j5sjgdo7vmk3oz2b7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3AKdyYHt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/052j5sjgdo7vmk3oz2b7.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you have made a connection, you need to create a schema. To do so, add a models folder to your setup and a model.js file in that folder, which should end up looking like this; for the many other data types you could choose, I would point you to the official MongoDB documentation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--J_ZVU7S5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/kv1qy1c4qr3uitks9hhw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--J_ZVU7S5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/kv1qy1c4qr3uitks9hhw.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the schema is created, we work on both our routes and controllers: getting an individual record requires an id, posting data requires a body, and the other operations follow the files below. After creating your schema, create a controller and a routes file as follows.&lt;/p&gt;

&lt;p&gt;router.js&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BigJf1dp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/184qrmsx7lgvtz2cdvwj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BigJf1dp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/184qrmsx7lgvtz2cdvwj.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Controller.js&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RDXGogvj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/0aywffcoopzqo0f5mlye.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RDXGogvj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/0aywffcoopzqo0f5mlye.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, test the endpoints with Postman requests.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0J-6-qKa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/dsezfkb96ouomla73muf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0J-6-qKa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/dsezfkb96ouomla73muf.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KP_kBZYk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5ic929qf9524jasts9cu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KP_kBZYk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5ic929qf9524jasts9cu.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thanks for making it this far; feedback is welcome as I try to be part of the community. The next post will cover CRUD with image upload and JWT authentication to guard the routes.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>node</category>
      <category>beginners</category>
      <category>computerscience</category>
    </item>
    <item>
      <title>WHY you should start with NODE.js</title>
      <dc:creator>abhick09</dc:creator>
      <pubDate>Sat, 09 May 2020 14:09:38 +0000</pubDate>
      <link>https://dev.to/abhick09/initial-things-to-know-with-node-js-2p5h</link>
      <guid>https://dev.to/abhick09/initial-things-to-know-with-node-js-2p5h</guid>
      <description>&lt;p&gt;While most of the worlds internet is filled with JavaScript we surely know that JavaScript has been through everything and is a very powerful language.While most of the time it is considered to be a language which is responsible for structuring and rendering dynamic content on the UI  along side HTML and CSS BUT for quite sometime now it is being used to built the server side or the database part of the web applications which is NODE.js.&lt;br&gt;
It was written using C, C++, JavaScript.&lt;/p&gt;

&lt;p&gt;Node.js was written by Ryan Dahl and had its first release in 2009. It was later stewarded by the Node.js Foundation, which has since partnered with the community and merged into a joint foundation named the OpenJS Foundation.&lt;/p&gt;

&lt;p&gt;Set up Node.js on your machine.&lt;br&gt;
Use &lt;a href="https://nodejs.org/en/download/"&gt;https://nodejs.org/en/download/&lt;/a&gt; and follow the documentation, choosing the installer for your OS.&lt;/p&gt;

&lt;p&gt;Once you have Node and npm configured on your machine, use your terminal or bash to work with Node and install packages via npm.&lt;/p&gt;

&lt;p&gt;npm (Node Package Manager) is the package manager for Node and the wider JavaScript ecosystem, including frameworks such as Express, React, and many more. It installs the JavaScript packages and libraries that help us build large-scale applications; these libraries provide ready-to-use services for common operations and are stored in the folder named /node_modules. &lt;a href="https://www.npmjs.com/"&gt;https://www.npmjs.com/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To build large-scale applications we use frameworks, and as with many other technologies, Node has many frameworks to choose from. The top four by GitHub stars are:&lt;br&gt;
1. Express.js &lt;a href="https://expressjs.com/"&gt;https://expressjs.com/&lt;/a&gt;&lt;br&gt;
2. Meteor.js &lt;a href="https://www.meteor.com/"&gt;https://www.meteor.com/&lt;/a&gt;&lt;br&gt;
3. Nest.js &lt;a href="https://nestjs.com/"&gt;https://nestjs.com/&lt;/a&gt;&lt;br&gt;
4. Sails.js &lt;a href="https://sailsjs.com/"&gt;https://sailsjs.com/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So why use Node?&lt;br&gt;
Node is very popular, and because it uses JavaScript you can do full-stack web development with the same language on both the front end and the server. A key feature of Node is its asynchronous, non-blocking I/O model, which keeps memory use low and makes it very fast: Node runs on a single thread and can handle thousands of connections using an event loop, where a callback fires when an event completes and the thread moves on in the meantime.&lt;/p&gt;
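&lt;p&gt;As a minimal sketch of that event-loop behaviour (the variable names here are just illustrative), the synchronous code runs to completion first, and the timeout callback only fires on a later tick:&lt;/p&gt;

```javascript
// Synchronous code runs first; the setTimeout callback is queued
// on the event loop and only runs once the current call stack is empty.
const order = [];

order.push('start');
setTimeout(() => order.push('timeout callback'), 0);
order.push('end');

// Immediately after this code runs, order is ['start', 'end'];
// 'timeout callback' is appended on a later tick of the event loop.
```

&lt;p&gt;This is why a slow callback never blocks the thread from accepting the next connection.&lt;/p&gt;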

&lt;p&gt;Still why use Node?&lt;br&gt;
It excels at REST APIs, microservices, real-time apps (chat, live updates), and CRUD apps, all of which perform very well when built with Node. Netflix, Yahoo, PayPal, LinkedIn, and GoDaddy are some diverse examples of companies that use Node to serve their clients.&lt;/p&gt;

&lt;p&gt;So what does a basic Node setup look like?&lt;br&gt;
First, just type node in your terminal and try some basic arithmetic operations, or try creating functions inside the REPL.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mb2Pi_LH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nky281lh940ezqz6i7j2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mb2Pi_LH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nky281lh940ezqz6i7j2.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IseigVbt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nu6u3fal7w704hnbwto8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IseigVbt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nu6u3fal7w704hnbwto8.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
Every Node project is initiated with npm init, which creates a package.json file.&lt;br&gt;
Navigate to the folder where you want the Node application to live, open the terminal/bash, and enter the command npm init; this creates a package.json file in that folder.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_2ldxWTI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8oxd5ia139jq0i9q4ggv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_2ldxWTI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8oxd5ia139jq0i9q4ggv.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As shown in the picture, we have a package.json file; when we installed express, npm created a dependencies section in it, so we can see which packages have been installed.&lt;/p&gt;
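&lt;p&gt;For reference, a package.json after running npm init and npm install express looks roughly like this (the name and version numbers are illustrative, not from the screenshot):&lt;/p&gt;

```json
{
  "name": "my-node-app",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "dependencies": {
    "express": "^4.17.1"
  }
}
```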

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dYG-5k17--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8pd1qprlq80meozax9ny.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dYG-5k17--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8pd1qprlq80meozax9ny.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we can create our function.js files for our specific needs. I will continue further with Express and set up an MVC-pattern CRUD app with token-based authentication, file upload, route guards, and related tables in the database (such as a person table linked to a profile table). Stay tuned.&lt;br&gt;
Meanwhile, learning about JSON, arrow functions, the MVC pattern, HTTP, and Promises (if you have not already) will help you pick up Node fast.&lt;/p&gt;
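&lt;p&gt;Two of those prerequisites, JSON and arrow functions, can be tried in a few lines (the sample object here is made up):&lt;/p&gt;

```javascript
// A JSON round-trip plus an arrow function.
const user = { name: 'abhick09', posts: 2 };

const encoded = JSON.stringify(user); // object to JSON string
const decoded = JSON.parse(encoded);  // JSON string back to object

const describe = u => u.name + ' has ' + u.posts + ' posts';
```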

&lt;p&gt;I hope for feedback so I can improve; I just wanted to share some prerequisites for starting server-side work with JavaScript and its many options.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>node</category>
      <category>beginners</category>
      <category>computerscience</category>
    </item>
  </channel>
</rss>
