<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Aniket Rathi</title>
    <description>The latest articles on DEV Community by Aniket Rathi (@aniketrathi1999).</description>
    <link>https://dev.to/aniketrathi1999</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F574969%2F4480285e-f671-40ae-af0d-80aa44f5d925.jpeg</url>
      <title>DEV Community: Aniket Rathi</title>
      <link>https://dev.to/aniketrathi1999</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/aniketrathi1999"/>
    <language>en</language>
    <item>
      <title>A beginner's guide to REDIS cache.</title>
      <dc:creator>Aniket Rathi</dc:creator>
      <pubDate>Tue, 15 Feb 2022 18:38:46 +0000</pubDate>
      <link>https://dev.to/aniketrathi1999/a-beginners-guide-to-redis-cache-2mc</link>
      <guid>https://dev.to/aniketrathi1999/a-beginners-guide-to-redis-cache-2mc</guid>
      <description>&lt;h2&gt;
  
  
  What is caching
&lt;/h2&gt;

&lt;p&gt;Caching is an intermediary layer that provides fast, temporary storage in front of your backend. It enables efficient data retrieval, reducing the response time of your server. In simple terms, we store data in a temporary location so that it can be accessed again with minimal retrieval cost. Caching also reduces the amount of data sent over the network, making your application faster and more responsive. Once a piece of data has been produced by an expensive computation, it is stored in the cache so that the next request can read it directly, skipping the additional cost of recomputation.&lt;/p&gt;
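The idea can be sketched in plain JavaScript (an illustrative in-memory cache, not part of any particular backend; `expensiveSquare` stands in for a slow query or computation):

```javascript
// Minimal cache sketch: store the result of an expensive computation
// so repeated requests skip the recomputation cost.
const cache = new Map()
let computations = 0

function expensiveSquare(n) {
  computations += 1 // pretend this is a slow query or calculation
  return n * n
}

function getSquare(n) {
  const key = `square:${n}`
  if (cache.has(key)) {
    return cache.get(key) // cache hit: no recomputation
  }
  const result = expensiveSquare(n) // cache miss: compute once...
  cache.set(key, result)            // ...then store for next time
  return result
}
```

The second lookup for the same key never touches the expensive path, which is exactly the saving a cache provides.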

&lt;h2&gt;
  
  
  Factors to decide when to involve cache in your backend
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data chunk being used frequently&lt;/strong&gt;&lt;br&gt;
Caching makes sense only if a computed chunk of data is used frequently. If it is not, caching adds little value, since a fresh set of data has to be computed and stored in the cache on almost every request. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deciding your TTL&lt;/strong&gt;&lt;br&gt;
TTL (time to live) is the time in seconds after which a key in the cache expires. It is of utmost importance to choose the optimal time after which a key should be updated or removed from the cache. The logic for keeping the cache up to date affects both your response time and, more importantly, whether you serve stale data in your responses.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
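The TTL mechanism described above can be sketched as follows (a hypothetical in-memory store with an injectable clock for illustration; Redis handles all of this for you via SETEX/EXPIRE):

```javascript
// TTL bookkeeping sketch: each entry stores an expiry timestamp,
// and reads treat expired entries as cache misses.
const store = new Map()

function setWithTtl(key, value, ttlSeconds, now = Date.now()) {
  store.set(key, { value, expiresAt: now + ttlSeconds * 1000 })
}

function getIfFresh(key, now = Date.now()) {
  const entry = store.get(key)
  if (!entry) return null
  if (now >= entry.expiresAt) {
    store.delete(key) // stale: expire the key, caller recomputes
    return null
  }
  return entry.value
}
```

Choosing `ttlSeconds` is the trade-off the list above describes: too long and you risk serving stale data, too short and you lose the benefit of caching.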

&lt;h2&gt;
  
  
  How does REDIS cache work
&lt;/h2&gt;

&lt;p&gt;Redis stands for REmote DIctionary Server. It can store and manipulate high-level data types, and because it is an in-memory database, its data-access operations are faster than those of any disk-based database, which makes Redis a perfect choice for caching. Its key-value storage model is another plus, because it makes storage and retrieval much simpler. Using Redis, we can store and retrieve data in the cache with the SET and GET commands, respectively (much like a HashMap in Java or a dictionary in Python).&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up Redis
&lt;/h2&gt;

&lt;p&gt;We will discuss implementing Redis caching for a typical Node.js server. To start, we need to install the redis Node client. Also make sure that Redis itself is installed and running locally. To learn how to install and start Redis, check out the quickstart &lt;a href="https://redis.io/topics/quickstart" rel="noopener noreferrer"&gt;here&lt;/a&gt;. &lt;/p&gt;

&lt;h2&gt;
  
  
  Working with Redis in Node layer
&lt;/h2&gt;

&lt;p&gt;Using Redis is very simple. For any route receiving requests, we first check whether the route has caching enabled. If it does, we check whether data for the requested key exists in the cache. If it exists, we return it directly from the middleware without any database operation. If not, we compute the data and, before returning it, store it in the Redis cache in key-value format. The key used to store the data can be any custom string built from parameters of the request.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const logger = require('winston-logger')
const CONFIG = require('configs/config')
const { redis: redisClient } = require('redis-client')
const axios = require('axios')

const getData = async (_, args, ctx) =&amp;gt; {
  try {
    let { data: { slug, query } } = args

    //creating unique key based on slug
    let cacheName = `MEDIA_PAGE_COLLECTION-${slug}`
    let cacheData = await redisClient.get(cacheName)
    if (cacheData) {
      let data = JSON.parse(cacheData)
      return {
        data
      }
    } else {
      let url = `${CONFIG.contentful.baseUrl}/spaces/${CONFIG.contentful.spaceId}/environments/${CONFIG.contentful.environment}`

      // POST the query to the upstream API (cache miss path)
      let response = await axios({
        url,
        method: 'POST',
        headers: { 'Authorization': `Bearer ${CONFIG.accessToken}` },
        data: {
          query
        }
      })
      let data = response.data

      await redisClient.setex(cacheName, 43200, JSON.stringify(data))

      return {
        data
      }
    }
  } catch (error) {
    logger.error('ERROR WHILE FETCHING data &amp;gt;&amp;gt;&amp;gt;', error)
    return error
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above code is an example of how to implement a Redis cache. First we check whether the data exists. If it does not, we create a key dynamically and store the data against that key. While storing the data, we provide three parameters: first the key under which the data is stored, second the TTL for which the data should live in the cache, and third the content itself. After the TTL elapses, the key-value pair expires.&lt;br&gt;
I have also attached a basic flow chart demonstrating how a typical cache works. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftf3gfv8aliiatxs1k98p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftf3gfv8aliiatxs1k98p.png" alt="Work Flow of Redis Cache"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To install and use the redis client for a Node server, check out the package &lt;a href="https://www.npmjs.com/package/redis" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>redis</category>
      <category>node</category>
      <category>database</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Database Indexing </title>
      <dc:creator>Aniket Rathi</dc:creator>
      <pubDate>Fri, 17 Sep 2021 04:43:13 +0000</pubDate>
      <link>https://dev.to/aniketrathi1999/database-indexing-30bc</link>
      <guid>https://dev.to/aniketrathi1999/database-indexing-30bc</guid>
<description>&lt;p&gt;Indexing is always the most crucial part of database design, where you have to trade off write speed against read/update/delete speed. While deciding on the types of indexes, always consider factors such as the kinds of queries you expect, the read-to-write ratio, the amount of memory in your system, the size of your database, and so on. &lt;/p&gt;

&lt;p&gt;While coming up with an indexing strategy, one must have a deep understanding of the application's queries and of its most frequently referenced fields. The relative frequency of each query determines whether it even needs an index.&lt;/p&gt;

&lt;p&gt;Before using a strategy in production, indexes should be designed and tested with different configurations in the dev environment to determine which performs best. &lt;br&gt;
Inspect the indexes currently created for your collections to ensure they support your current and planned queries. If an index is no longer used, drop it.&lt;/p&gt;

&lt;p&gt;The following are some indexing strategies that can come in handy while designing your database structure:&lt;/p&gt;

&lt;h4&gt;
  
  
  Indexing for your queries
&lt;/h4&gt;

&lt;p&gt;An index supports a query when it contains all the fields scanned by the query. Creating indexes that support your queries greatly increases query performance.&lt;/p&gt;

&lt;h4&gt;
  
  
  Indexing for sorting related queries
&lt;/h4&gt;

&lt;p&gt;Sort-heavy queries can be served efficiently by choosing the field order and sort direction of your indexes to match the sort specification of the query.&lt;/p&gt;

&lt;h4&gt;
  
  
  Ensuring that indexes fit in RAM
&lt;/h4&gt;

&lt;p&gt;When an index fits in RAM, the system can avoid reading it from disk and process queries faster.&lt;/p&gt;

&lt;h4&gt;
  
  
  Create Queries that Ensure Selectivity
&lt;/h4&gt;

&lt;p&gt;Selectivity is the ability of a query to narrow results using the index. High selectivity allows the database to use the index for a larger portion of the work involved in fulfilling the query.&lt;/p&gt;

&lt;h2&gt;
  
  
  Different types of indexes available
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Single Field&lt;/li&gt;
&lt;li&gt;Compound Indexing (using multiple fields to create a single index)&lt;/li&gt;
&lt;li&gt;Text Indexing&lt;/li&gt;
&lt;li&gt;Partial Indexing (indexing a field based on a condition)&lt;/li&gt;
&lt;li&gt;Sparse Indexing (indexes only the rows that contain the indexed field; rows without it are skipped)&lt;/li&gt;
&lt;li&gt;TTL Indexing (special indexing that removes rows after a certain amount of time)&lt;/li&gt;
&lt;/ol&gt;
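As a rough illustration, the index types above map to commands like the following (mongosh-style, against a hypothetical users collection; they need a running MongoDB, so treat them as pseudocode):

```
// Illustrative mongosh commands (hypothetical `users` collection)
db.users.createIndex({ email: 1 })                    // single field
db.users.createIndex({ lastName: 1, firstName: 1 })   // compound
db.users.createIndex({ bio: "text" })                 // text
db.users.createIndex(
  { email: 1 },
  { partialFilterExpression: { age: { $gt: 18 } } }   // partial: condition-based
)
db.users.createIndex({ phone: 1 }, { sparse: true })  // sparse: skips docs without the field
db.users.createIndex(
  { createdAt: 1 },
  { expireAfterSeconds: 3600 }                        // TTL: auto-removes old docs
)
```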

&lt;p&gt;A combination of the indexes mentioned above can easily enhance the processing speed of your queries.&lt;/p&gt;

</description>
      <category>mongodb</category>
      <category>database</category>
      <category>mysql</category>
      <category>postgres</category>
    </item>
    <item>
      <title>Using Environment Variables</title>
      <dc:creator>Aniket Rathi</dc:creator>
      <pubDate>Sat, 14 Aug 2021 05:07:48 +0000</pubDate>
      <link>https://dev.to/aniketrathi1999/using-environment-variables-3icg</link>
      <guid>https://dev.to/aniketrathi1999/using-environment-variables-3icg</guid>
<description>&lt;p&gt;Environment variables are a crucial part of your backend when it comes to deployment. They store your server's configuration and hence should never be exposed. I will walk through an example of using environment variables in a Node application. &lt;br&gt;
The package.json file can be a place to store environment variables, but it is not a secure option at all. &lt;/p&gt;
&lt;h3&gt;
  
  
  dotenv
&lt;/h3&gt;

&lt;p&gt;The .env file is a special file used to define environment variables for your Node application in a key=value format. Node.js cannot parse this file on its own; this is where dotenv comes in, loading these environment variables by parsing the .env file for you.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;1. Creating the file&lt;/strong&gt;&lt;br&gt;
The .env file needs to be created in the root directory of your application. It can contain your port, JWT secret key, etc.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PORT=5000
JWT_SECRET_KEY="SHHHHHHH"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Configuring dotenv&lt;/strong&gt;&lt;br&gt;
First, install dotenv as a dev dependency.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm i -D dotenv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
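Once dotenv has loaded the file, the values are available on process.env as plain strings. A minimal sketch using the PORT and JWT_SECRET_KEY keys from the .env example above (buildConfig is a hypothetical helper, not part of dotenv):

```javascript
// Read configuration from environment variables, with defaults.
// Env values are always strings, so numeric ones need conversion.
function buildConfig(env = process.env) {
  return {
    port: Number(env.PORT) || 5000,                      // fall back if unset
    jwtSecretKey: env.JWT_SECRET_KEY || 'dev-only-secret' // never ship this default
  }
}

const config = buildConfig()
```

Centralizing the reads in one helper like this also means only a single module touches process.env, so the rest of the codebase stays independent of how the variables were loaded.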



&lt;p&gt;You can now use your environment variables once dotenv has loaded them from the .env file. So far so good: your entry point (app.js) can initialize dotenv itself and account for the switch from dev to prod. But if you import and use environment variables in other files, this causes trouble unless you initialize dotenv in every file, a frequent beginner mistake. With a small tweak to the scripts used to start the application, this problem is fixed easily. &lt;br&gt;
&lt;strong&gt;3. Changing scripts&lt;/strong&gt;&lt;br&gt;
You might already have two scripts to run your application in dev and prod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
       "scripts": {
        "start": "node app.js",
        "dev": "node app.js"
        // For nodemon users ====
        "dev": "nodemon app.js"
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We need to change the dev script so that Node knows when to load your .env file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
       "scripts": {
        "start": "node app.js",
        "dev": "node -r dotenv/config app.js"
        // For nodemon users ====
        "dev": "nodemon -r dotenv/config app.js"
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And done! &lt;br&gt;
Now you no longer need the following lines in any file, including your app.js/index.js:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const dotenv = require('dotenv')
const myEnv = dotenv.config()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Ensure that you ignore the file in .gitignore&lt;/strong&gt;&lt;/p&gt;
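For example, a minimal .gitignore fragment:

```
# never commit secrets or dependencies
.env
node_modules/
```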

</description>
      <category>node</category>
      <category>security</category>
      <category>backend</category>
      <category>javascript</category>
    </item>
    <item>
      <title>It’s time to get over mongoose for your Node application</title>
      <dc:creator>Aniket Rathi</dc:creator>
      <pubDate>Fri, 30 Jul 2021 14:13:47 +0000</pubDate>
      <link>https://dev.to/aniketrathi1999/it-s-time-to-get-over-mongoose-for-your-node-application-1p6f</link>
      <guid>https://dev.to/aniketrathi1999/it-s-time-to-get-over-mongoose-for-your-node-application-1p6f</guid>
<description>&lt;p&gt;MongoDB is an easy-to-use, open-source database system designed to store data in document format. This is quite different from traditional SQL databases, which store data in tables, and it places MongoDB in the NoSQL category. Documents in MongoDB consist of a series of key/value pairs (similar to a HashMap in Java or a dictionary in Python). A SQL table is equivalent to a collection in MongoDB, and a row is equivalent to a document. A collection does not impose a fixed definition on its documents. Since the stored data need not be structured, MongoDB is quite flexible, which is why firms around the globe have been migrating their databases to it.&lt;br&gt;
With this flexibility comes the problem of unstructured data. Developers familiar with SQL databases find it difficult to switch to such unstructured storage systems. This is where Mongoose comes into the picture: it is a powerful MongoDB ODM (Object Document Mapper) that helps define the structure of documents and connect to MongoDB. It eases the path for developers coming from SQL databases into the Mongo community, and it provides abstraction, making object definitions more readable. So far so good.&lt;/p&gt;

&lt;h3&gt;
  
  
  Where’s the problem?
&lt;/h3&gt;

&lt;p&gt;When building small applications, one can surely choose Mongoose. But for large-scale applications, where a user-data object is not restricted to just an address, email, and username, defining a structure becomes a tedious task. Such data modeling will also not be recommended by your data analysts.&lt;br&gt;
Mongoose acts more like middleware in your Node application that validates incoming data and rejects bad inputs. But do you really need an extra npm package to do that? Your services can easily validate the essential input fields before forwarding data to your DB service. Over time the structure of a document can change drastically, and maintaining a schema for it becomes very complicated. When dealing with information coming from packets, the shape will differ from one packet to another; any packet carrying important information won't enter your database if it does not satisfy the Mongoose schema, deviating from the real-world scenario.&lt;/p&gt;

&lt;h3&gt;
  
  
  What should be done?
&lt;/h3&gt;

&lt;p&gt;Well, if you are into Node.js, you might be familiar with creating your own middleware. You can use the MongoDB native driver to connect to the database and write your middleware over it. This middleware can validate the required fields for you and pass your document along via the next() method. This ensures that you don't create inappropriate documents, yet does not restrict your incoming data to a pre-defined schema. Your system should be able to welcome data of any shape, allowing analysts to look for irregularities among fields rather than being confined to a restricted schema. You might also like to skip an extra package as your package.json grows over time.&lt;/p&gt;
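A rough sketch of such a middleware, assuming an Express-style (req, res, next) signature (requireFields and the field names are hypothetical, not from any library):

```javascript
// Hand-rolled validation middleware: check only the fields you consider
// essential, then hand the document off via next(). Everything else in
// the document passes through untouched -- no schema imposed.
function requireFields(fields) {
  return function (req, res, next) {
    const body = req.body || {}
    const missing = fields.filter((f) => body[f] === undefined)
    if (missing.length > 0) {
      // reject clearly inappropriate documents only
      return res.status(400).json({ error: `missing fields: ${missing.join(', ')}` })
    }
    next() // document proceeds to the DB service, whatever extra shape it has
  }
}

// Usage sketch: app.post('/users', requireFields(['email']), saveUser)
```

Unlike a Mongoose schema, this rejects only documents missing essential fields; any additional, unanticipated fields are stored as-is.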

&lt;h2&gt;
  
  
  If you want to use a schema-less database, why on earth would you choose a schema just to ensure input validations?
&lt;/h2&gt;

</description>
      <category>mongodb</category>
      <category>database</category>
      <category>backend</category>
      <category>node</category>
    </item>
  </channel>
</rss>
