<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Aayush Kurup</title>
    <description>The latest articles on DEV Community by Aayush Kurup (@aayushk47).</description>
    <link>https://dev.to/aayushk47</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F495647%2F6837f9d9-c62c-4c34-8b6b-7d9c0e473473.jpeg</url>
      <title>DEV Community: Aayush Kurup</title>
      <link>https://dev.to/aayushk47</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/aayushk47"/>
    <language>en</language>
    <item>
      <title>From Relational to Analytical: The Power of Redshift Data Warehousing and Analytics</title>
      <dc:creator>Aayush Kurup</dc:creator>
      <pubDate>Wed, 29 Nov 2023 11:21:31 +0000</pubDate>
      <link>https://dev.to/aayushk47/from-relational-to-analytical-the-power-of-redshift-data-warehousing-and-analytics-4i67</link>
      <guid>https://dev.to/aayushk47/from-relational-to-analytical-the-power-of-redshift-data-warehousing-and-analytics-4i67</guid>
      <description>&lt;p&gt;I recently got my hands dirty working on data warehousing and found myself wondering why traditional databases like Postgres aren't commonly used for this task. After some research and experimentation, I discovered that specialized data warehousing solutions like Amazon Redshift offer distinct advantages over regular databases when it comes to handling large volumes of data and supporting complex analytical queries. In this blog post, we'll explore the reasons why Redshift has become the go-to choice for data warehousing and why it's worth considering for your own data storage and analysis needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Brief Intro to Data Warehousing
&lt;/h2&gt;

&lt;p&gt;Data is the most critical asset for any modern business. It is essential for making informed decisions and staying ahead of the competition. Data can be collected from various sources; for example, an e-commerce site can collect data from its own website along with some of its affiliate websites. Obviously, you don't want to go to each of these sources and analyze the data separately every time.&lt;/p&gt;

&lt;p&gt;Enter Data Warehousing.&lt;/p&gt;

&lt;p&gt;Data warehousing involves several processes that enable organizations to collect, store, and manage data in a centralized repository. Typically, it starts with data being extracted from various sources, such as transactional databases, log files, and third-party applications. The data collected from all these sources may not necessarily be uniform, so it needs to be transformed and cleaned. This process is known as Extract, Transform, and Load (ETL), and there are many tools available that can be used for this task, such as Talend, Informatica, and AWS Glue. Once the data is transformed and loaded into the centralized data warehouse, it is organized and structured in a way that enables efficient querying and analysis.&lt;/p&gt;
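&lt;p&gt;To make the ETL idea concrete, here is a minimal JavaScript sketch. The sources, field names and numbers are made up for illustration; a real pipeline would use a dedicated tool like AWS Glue:&lt;/p&gt;

```javascript
// Extract: two hypothetical sources with non-uniform schemas
const siteOrders = [{ id: 1, amountUsd: 20 }, { id: 2, amountUsd: 35 }];
const affiliateOrders = [{ orderId: 3, amount_cents: 1500 }];

// Transform: normalize both schemas into one common shape
function transform() {
  const a = siteOrders.map(o => ({ orderId: o.id, amountUsd: o.amountUsd, source: 'site' }));
  const b = affiliateOrders.map(o => ({ orderId: o.orderId, amountUsd: o.amount_cents / 100, source: 'affiliate' }));
  return a.concat(b);
}

// Load: append the cleaned rows into the central repository
const warehouse = [];
function load(rows) {
  warehouse.push(...rows);
}

load(transform());
console.log(warehouse.length); // 3
```

&lt;p&gt;The point is not the code itself but the shape of the flow: extract from heterogeneous sources, transform into one schema, load into a single store.&lt;/p&gt;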

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MgHyRMc5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ggvz9yd84f6mj5idtboe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MgHyRMc5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ggvz9yd84f6mj5idtboe.png" alt="ETL pipline" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the data is loaded into a data warehouse, the next step in the process is to perform analysis and prepare visualizations using the data. Tools like Tableau, Power BI, and Quicksight can be used for this.&lt;/p&gt;

&lt;p&gt;Now that you understand what data warehousing is, let's understand why it's not a good idea to use relational databases as the central repository.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Limits of Relational Databases for Data Warehousing
&lt;/h2&gt;

&lt;p&gt;Relational databases are great. They are still used by almost every system in the world to store data. But when it comes to data warehousing, the volume of data we are talking about is huge.&lt;/p&gt;

&lt;p&gt;Scaling out a relational database is challenging. And even if we plan ahead and manage to scale out our database, we may still face serious query performance issues.&lt;/p&gt;

&lt;p&gt;Relational databases are designed to store data in tables with relationships between them. This is great for transactional processing, where small amounts of data are frequently accessed and updated. But analyzing large amounts of data requires complex queries that aggregate, join and filter large tables. On top of that, relational databases have to ensure that the data is consistent, which adds further overhead to query performance. We can employ some strategies to improve performance, but wouldn't it be easier to use a solution that is designed to handle complex queries over large data?&lt;/p&gt;

&lt;p&gt;Some of the most popular technologies used for this purpose are AWS Redshift and Google's BigQuery. Let's look at the architecture of Redshift and see how it is designed to fetch large amounts of data very quickly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Unpacking Redshift
&lt;/h2&gt;

&lt;p&gt;Let's unpack Redshift feature by feature and see how each one helps make it great for data warehousing and analytics.&lt;/p&gt;

&lt;h3&gt;
  
  
  Massively Parallel Processing (MPP) Architecture
&lt;/h3&gt;

&lt;p&gt;The entire service of Redshift is built on a distributed, massively parallel processing (MPP) architecture, which enables it to handle large amounts of data quickly and efficiently.&lt;/p&gt;

&lt;p&gt;Each Redshift cluster consists of a leader node and multiple compute nodes. The leader node receives queries from the client, creates an execution plan and distributes the work to the compute nodes. The compute nodes are where the actual data processing occurs. Each compute node consists of one or more slices; a slice is a self-contained processing unit that can execute queries independently and in parallel with the other slices in the same node. This distributed architecture is one of the key features that make Redshift fast.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZWP243GN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iatb3ofrutt0bhxfw2fd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZWP243GN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iatb3ofrutt0bhxfw2fd.png" alt="Redshift Architecture" width="612" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Columnar Database
&lt;/h3&gt;

&lt;p&gt;Under the hood, Redshift is based on Postgres, which brings all the goodness of relational databases that we love. But it is not a row-oriented database; it is a columnar database, which stores the values of each column together instead of storing rows together. Believe it or not, this makes data retrieval faster, since only the columns required by the query are read instead of entire rows. Also, since a column stores similar data, columnar databases can compress it more efficiently, which reduces disk I/O operations and lets more of the work happen in memory.&lt;/p&gt;
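&lt;p&gt;A tiny JavaScript sketch of the idea, with made-up data:&lt;/p&gt;

```javascript
// The same table, stored two ways
const rows = [
  { id: 1, region: 'EU', sales: 10 },
  { id: 2, region: 'US', sales: 20 },
  { id: 3, region: 'EU', sales: 30 },
  { id: 4, region: 'US', sales: 40 },
];

// Columnar layout: each column's values sit together
const columns = {
  id: rows.map(r => r.id),
  region: rows.map(r => r.region),
  sales: rows.map(r => r.sales),
};

// An analytical query like SUM(sales) only reads the `sales`
// column; the id and region columns are never touched
const totalSales = columns.sales.reduce((acc, v) => acc + v, 0);
console.log(totalSales); // 100
```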

&lt;h3&gt;
  
  
  Zone Maps
&lt;/h3&gt;

&lt;p&gt;A zone map is metadata stored for each column. Essentially, it is a lightweight index that records the minimum and maximum values in each block of a column. With this information, Redshift can determine which blocks actually need to be scanned to satisfy a query and skip the rest.&lt;/p&gt;
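&lt;p&gt;Here is a toy sketch of how min/max metadata lets blocks be skipped. The blocks and values are made up:&lt;/p&gt;

```javascript
// Each block of a column carries min/max metadata (the zone map)
const blocks = [
  { min: 1, max: 100, values: [5, 42, 100] },
  { min: 101, max: 200, values: [150, 180] },
  { min: 201, max: 300, values: [250, 300] },
];

// For a predicate like "value BETWEEN lo AND hi", only blocks whose
// [min, max] range overlaps the predicate need to be scanned
function blocksToScan(lo, hi) {
  return blocks.filter(b => b.max >= lo).filter(b => hi >= b.min);
}

console.log(blocksToScan(120, 190).length); // 1 of 3 blocks scanned
```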

&lt;h3&gt;
  
  
  Sort and Distribution Keys
&lt;/h3&gt;

&lt;p&gt;It's quite obvious that sorted data is much easier to access than unsorted data. Sort keys determine the physical order in which the data is stored. They can be defined as a single column or as a compound key (multiple columns).&lt;/p&gt;

&lt;p&gt;Distribution keys, on the other hand, determine how the data is distributed across the nodes in a Redshift cluster. When the data is distributed evenly across the nodes, it can be queried in parallel, which makes queries faster.&lt;/p&gt;
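&lt;p&gt;Conceptually, a distribution key works like hashing the key value to pick a slice, as in this toy sketch. The hash function and slice count here are made up; Redshift's actual hashing is internal:&lt;/p&gt;

```javascript
// Rows are routed to a slice by hashing the distribution key,
// so rows with the same key always land on the same slice
const SLICE_COUNT = 4; // made-up cluster size

function hashCode(str) {
  let h = 0;
  for (const ch of str) {
    h = (h * 31 + ch.charCodeAt(0)) % 1000003;
  }
  return h;
}

function sliceFor(distKey) {
  return hashCode(String(distKey)) % SLICE_COUNT;
}

console.log(sliceFor('customer-42') === sliceFor('customer-42')); // true
```

&lt;p&gt;Because co-located rows share a slice, joins and aggregations on the distribution key can be computed locally without shuffling data between nodes.&lt;/p&gt;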

&lt;h3&gt;
  
  
  Materialized View
&lt;/h3&gt;

&lt;p&gt;A materialized view is a precomputed table that stores the results of a complex query. When a materialized view is created, the result of the underlying query is stored in the table. The next time a query with a matching pattern runs, the data stored in the precomputed table is used instead of running the entire query again. This significantly improves query performance, especially for queries that involve huge amounts of data and complex operations.&lt;/p&gt;
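&lt;p&gt;As a rough analogy in JavaScript (not Redshift's actual mechanism, just the idea of precomputing a result once and reusing it; the data is made up):&lt;/p&gt;

```javascript
// The expensive aggregation is computed once and stored; later
// reads hit the stored result instead of re-scanning the base data
const sales = [
  { region: 'EU', amount: 10 },
  { region: 'US', amount: 20 },
  { region: 'EU', amount: 30 },
];

let totalByRegion = null; // the stored (materialized) result

function refreshView() {
  totalByRegion = {};
  for (const s of sales) {
    totalByRegion[s.region] = (totalByRegion[s.region] || 0) + s.amount;
  }
}

function queryView(region) {
  return totalByRegion[region]; // precomputed lookup, no scan
}

refreshView();
console.log(queryView('EU')); // 40
```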

&lt;p&gt;On top of all the above features, Redshift is secure, scalable, easy to use and integrates well with BI tools.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;While traditional relational databases like Postgres are great for transactional processing, they may not be the best fit for data warehousing due to their limitations in handling complex queries and large volumes of data.&lt;/p&gt;

&lt;p&gt;This is where specialized solutions like Amazon Redshift come in. With its massively parallel processing architecture, columnar database, zone maps, sort and distribution keys, and materialized views, Redshift provides a powerful and efficient platform for data warehousing and analytics.&lt;/p&gt;

&lt;p&gt;Hopefully, this blog post has given you a better understanding of data warehousing, where traditional solutions fail and how solutions like Redshift overcome them. If you have any questions or comments, feel free to leave them below!&lt;/p&gt;

</description>
      <category>database</category>
      <category>datawarehousing</category>
      <category>redshift</category>
      <category>sql</category>
    </item>
    <item>
      <title>Production-Grade Workflow for File Uploads</title>
      <dc:creator>Aayush Kurup</dc:creator>
      <pubDate>Sun, 12 Nov 2023 19:44:32 +0000</pubDate>
      <link>https://dev.to/aayushk47/production-grade-workflow-for-file-uploads-5fdl</link>
      <guid>https://dev.to/aayushk47/production-grade-workflow-for-file-uploads-5fdl</guid>
      <description>&lt;p&gt;Let's say you want to create a file upload feature on your app. What is the best way to design it? If you're a web developer, you know that uploading files is a common task that can be trickier than it seems. There are many factors to consider, including security, performance, and scalability. Let's discuss how we can upload and manage a file in your application.&lt;/p&gt;

&lt;h1&gt;
  
  
  Sending the file to the backend
&lt;/h1&gt;

&lt;p&gt;One way to build this feature is to send the file to the backend in a request. That is half the problem solved. Once the file reaches the backend, we need to decide how to store it so that it can be retrieved easily later. Here, we can go one of three ways:-&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Save the file as a blob in the database.&lt;/li&gt;
&lt;li&gt;Save the file in the server, and save the file path in the database for later use.&lt;/li&gt;
&lt;li&gt;Send the file to a cloud-storage service, and save its URL in the database.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's discuss each of these.&lt;/p&gt;

&lt;p&gt;The first approach is probably the easiest: one query and we are done. But it is also the worst approach. Files can be extremely large, and if we save them in the database, not only will they consume a lot of resources, but they will also slow down query execution, as fetching blob columns is generally slow.&lt;/p&gt;

&lt;p&gt;The second approach is better. You can easily save the file on your server, and its file path, being a string, can be stored in the database without any issue. But a server does not have infinite storage capacity; at some point, we will have to increase the storage.&lt;/p&gt;

&lt;p&gt;Out of the three, the third approach is the best. A cloud-storage service is generally highly available, scalable, fully managed, secure and works on a pay-as-you-go model. So instead of storing the file on your server or database, why not simply store it on the cloud, and save a link to that file on your database? The permissions can also be managed this way, so anyone without sufficient permissions won't be able to access the file.&lt;/p&gt;

&lt;p&gt;But all these approaches still share an issue: sending files from the front end through the back end can be slow, depending on file size, network bandwidth and so on.&lt;/p&gt;

&lt;h1&gt;
  
  
  Uploading files directly from the frontend
&lt;/h1&gt;

&lt;p&gt;Uploading files from the front end seems like a good idea. But how do we do it?&lt;/p&gt;

&lt;p&gt;It's pretty easy. All you need is an upload URL for the S3 object. But remember, the URL must be secure: it should carry limited permissions, and those permissions should expire after some time. So a secure signed URL has to be sent from the back end to the front end, which the front end then uses to upload the file. After the upload, the front end has a &lt;em&gt;slug&lt;/em&gt; identifying the object, which it can send to the backend to be stored in the database.&lt;/p&gt;

&lt;p&gt;Now, the question that may be bugging you is: "How do we access the file from the slug?". Well, we can generate a signed URL with read permissions from the slug of the file object. When a user requests the file, the backend generates a read-only signed URL from the slug and returns it.&lt;/p&gt;

&lt;h1&gt;
  
  
  Let's Just Do It
&lt;/h1&gt;

&lt;p&gt;Let's try and implement this idea. For this tutorial, I will be using AWS S3. To follow along, you need a free AWS account and AWS CLI installed on your machine. The code will be written in JavaScript and React, but the logic is the same in every language. If you understand the logic well, you will be able to implement it in your favorite language.&lt;/p&gt;

&lt;h1&gt;
  
  
  AWS Configurations
&lt;/h1&gt;

&lt;p&gt;Assuming you have an AWS account and you have installed the AWS CLI, the first thing we need to do is create the S3 buckets with appropriate permissions. Now, to do this, you can use the AWS management console, but for this demo, I will be using the AWS CLI.&lt;/p&gt;

&lt;p&gt;To configure the AWS CLI you need an access key and a secret key, which you can get from the IAM console on your AWS account by following these steps:-&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--k8j9kitU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fh8n0sxu5moaa8bm5xin.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k8j9kitU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fh8n0sxu5moaa8bm5xin.png" alt="Image description" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you have the keys, fire up your terminal and run:-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws configure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The AWS configure command will ask for the access key, secret key and AWS region. Once you have entered all these details, you are all set to create the S3 buckets.&lt;/p&gt;

&lt;h1&gt;
  
  
  Creating S3 Buckets
&lt;/h1&gt;

&lt;p&gt;Creating a bucket is easy enough. Just run the following command on your terminal:-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3api create-bucket --bucket &amp;lt;bucket-name&amp;gt; --region &amp;lt;region name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We're not done yet, we need to configure CORS for this bucket. To do that, run the following:-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3api put-bucket-cors --bucket &amp;lt;your-bucket-name&amp;gt; --cors-configuration '{
  "CORSRules": [
    {
      "AllowedHeaders": ["*"],
      "AllowedMethods": ["GET", "HEAD"],
      "AllowedOrigins": ["*"],
      "ExposeHeaders": []
    }
  ]
}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here I have allowed all origins and headers, along with GET and HEAD methods. You can change this configuration as per your requirement.&lt;/p&gt;

&lt;h1&gt;
  
  
  File Uploader Component
&lt;/h1&gt;

&lt;p&gt;Let's start with the UI:-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function App() {
    return (
        &amp;lt;div&amp;gt;
            &amp;lt;form action=""&amp;gt;
                &amp;lt;input type="file" /&amp;gt;
            &amp;lt;/form&amp;gt;
        &amp;lt;/div&amp;gt;
    )
}

export default App;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's a simple React component that renders a file input. I have used no styles here, just to keep things simple. Let's add some functionality to it.&lt;/p&gt;

&lt;p&gt;When we click on the file input, we get a popup to select a file, and when we select one, the input's value changes. To hook our own logic into this change, we use the onChange prop on the input:-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function App() {
    handleFileUpload(event) {
        // API call to get signed url for uploading the file
        let response = await fetch.get('&amp;lt;INSERT API ENDPOINT&amp;gt;')
        response = await response.json()
        const uploadUrl = response.data.url;
        // API to call upload image to S3
        const uploadUrl = await fetch.put(uploadUrl, event);
        if(uploadUrl.status === 200) {
            // Call the api to save the slug in your database
        }
    }
    return (
        &amp;lt;div&amp;gt;
            &amp;lt;form action=""&amp;gt;
                &amp;lt;input type="file" /&amp;gt;
            &amp;lt;/form&amp;gt;
        &amp;lt;/div&amp;gt;
    )
}

export default App;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, I have added the &lt;code&gt;handleFileUpload&lt;/code&gt; method, which uploads the file to S3. First, we send a request to our backend, which returns a signed URL through which we can upload the file. Next, we send a PUT request to the signed URL with the file as the body, which uploads it to the S3 bucket. The object's key (the slug) is chosen by the backend when it generates the signed URL, so the backend can hand it to the front end along with the URL, and you can save the slug in your database if you want to use it later. Your backend can then generate a read-only signed URL from the slug whenever the file is needed.&lt;/p&gt;

&lt;p&gt;But, we are missing something here, aren't we? Oh yeah, we need to create endpoints which return signed URLs. Let's create them in the next section.&lt;/p&gt;

&lt;h1&gt;
  
  
  Generating Signed URLs
&lt;/h1&gt;

&lt;p&gt;Depending on which framework you use, the boilerplate of your project may differ, but in every framework the common logic will live in a service. Let's see what this service looks like:-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const AWS = require('aws-sdk');
const { v4 } = require('uuid');

const s3 = new AWS.S3({
    accessKeyId: accessKeyId,
    secretAccessKey: secretAccessKey,
    region: region
});

function generateSignedS3Url() {
  const expires = new Date();
  expires.setMinutes(expires.getMinutes() + 10);

  // Generate the signed URL parameters
  const params = {
    Bucket: bucketName,
    Key: v4(),
    Expires: expires
  };
  const signedUrl = s3.getSignedUrl('putObject', params);

  return signedUrl;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above snippet, we create an S3 client from the aws-sdk and then define the &lt;code&gt;generateSignedS3Url&lt;/code&gt; function. To generate a signed URL, we provide the bucket where we want to store the file, a key (basically the file name, here a random UUID) and an expiry, which controls how long the signed URL remains valid. &lt;code&gt;putObject&lt;/code&gt; is the AWS S3 action that allows a PUT request on an object. We call &lt;code&gt;getSignedUrl&lt;/code&gt; on the S3 client to generate the signed URL and return it.&lt;/p&gt;

&lt;p&gt;Now, this function can be called from your controller to get the signed URL and return it as a response.&lt;/p&gt;
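&lt;p&gt;A minimal, framework-agnostic sketch of such a controller. The handler name and response shape are assumptions chosen to match the front-end code above, and &lt;code&gt;generateSignedS3Url&lt;/code&gt; is stubbed here so the shape is clear without AWS credentials:&lt;/p&gt;

```javascript
// Stub of the service function above; the real one calls s3.getSignedUrl
function generateSignedS3Url() {
  return 'https://example-bucket.s3.amazonaws.com/some-key?signature=stub';
}

// Controller handler returning the shape the front end reads
// (response.data.url)
function getUploadUrlHandler() {
  return { data: { url: generateSignedS3Url() } };
}

const res = getUploadUrlHandler();
console.log(res.data.url.startsWith('https://')); // true
```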

&lt;h1&gt;
  
  
  Summary
&lt;/h1&gt;

&lt;p&gt;In conclusion, when designing a file upload feature, consider security, performance and scalability. Storing files as blobs in the database is convenient but resource-intensive. Saving file paths on the server is efficient but limited by storage capacity. The best approach is to use a cloud-storage service like AWS S3, which offers availability, scalability and security, and to upload files directly from the front end using secure signed URLs.&lt;/p&gt;

&lt;p&gt;Thank you for reading, we value your feedback. Happy coding!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>s3</category>
      <category>node</category>
      <category>react</category>
    </item>
    <item>
      <title>Querying Firebase Realtime Database and Cloud Firestore from your terminal</title>
      <dc:creator>Aayush Kurup</dc:creator>
      <pubDate>Fri, 30 Oct 2020 03:33:29 +0000</pubDate>
      <link>https://dev.to/aayushk47/querying-firebase-realtime-database-and-cloud-firestore-from-your-terminal-23pe</link>
      <guid>https://dev.to/aayushk47/querying-firebase-realtime-database-and-cloud-firestore-from-your-terminal-23pe</guid>
      <description>&lt;p&gt;I believe that the way we all learn writing queries for a database is quite similar. After learning the basics, we pull up our terminal, start the database server and practice writing different queries. Apart from learning, a database shell also acts as a very good testing tool. Most of the databases provides us with an interface so that we can learn, except &lt;strong&gt;Firebase databases&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DZwVGB35--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/gchk8fhts5e717g3nzhs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DZwVGB35--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/gchk8fhts5e717g3nzhs.png" alt="Firebase" width="384" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When I first used the Realtime Database, the fact that I couldn't double-check the output of my query really bugged me. So I decided to create a solution for this - &lt;strong&gt;&lt;a href="//npmjs.com/package/fireshell"&gt;Fireshell&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Getting started with fireshell
&lt;/h1&gt;

&lt;p&gt;&lt;a href="//npmjs.com/package/fireshell"&gt;Fireshell&lt;/a&gt; is a CLI tool which can be used to execute realtime database and cloud firestore queries in your terminal.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installing the package
&lt;/h3&gt;

&lt;p&gt;To install fireshell, just run the following command:-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install -g fireshell
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure you have node.js and npm installed on your system before running this command.&lt;/p&gt;

&lt;h3&gt;
  
  
  Connecting the shell with the database
&lt;/h3&gt;

&lt;p&gt;To start the shell, simply run &lt;code&gt;fireshell&lt;/code&gt; in your terminal. You will be prompted with a few questions.&lt;/p&gt;

&lt;p&gt;The shell will first ask you to select a database:-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;? Choose one of the following (Use arrow keys)
&amp;gt; Realtime Database
  Cloud Firestore
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then you have to provide the &lt;strong&gt;absolute path&lt;/strong&gt; to your Firebase config file. It has to be the JSON file that you receive from Firebase to connect your application with your Firebase project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;? Enter the absolute path to firebase config file
&amp;gt; /root/path/to/your/config.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, you have to provide the URL of your Firebase Realtime Database. This is only required if you are connecting to the Realtime Database; if you are connecting to Cloud Firestore, you can ignore it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;? Enter the URL of firebase realtime database. (Ignore if you chose cloud firestore)
&amp;gt; https://&amp;lt;YOUR FIREBASE PROJECT NAME&amp;gt;.firebaseio.com/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once these inputs are provided, the shell will be connected to your database.&lt;/p&gt;

&lt;h3&gt;
  
  
  Writing Queries
&lt;/h3&gt;

&lt;p&gt;Your queries must start with the keyword &lt;code&gt;db&lt;/code&gt;. This &lt;code&gt;db&lt;/code&gt; is a variable that holds a reference to the database object. You can chain the rest of your query onto it as you normally would.&lt;/p&gt;

&lt;p&gt;For the Realtime Database, make sure you end any read query (or any query that returns data) with the &lt;code&gt;once&lt;/code&gt; method, passing &lt;code&gt;'value'&lt;/code&gt; as its argument.&lt;/p&gt;
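&lt;p&gt;For example, a read query follows this shape. It is sketched here with a mock &lt;code&gt;db&lt;/code&gt; object, since the real one needs a live Firebase connection:&lt;/p&gt;

```javascript
// Mock of the `db` handle fireshell exposes; in the real shell,
// db is the connected Realtime Database reference
const db = {
  ref(path) {
    return {
      once(eventType) {
        // a real query would return a snapshot of the data at `path`
        return { path, eventType };
      },
    };
  },
};

const result = db.ref('users').once('value');
console.log(result.path, result.eventType); // users value
```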

&lt;p&gt;Some basic examples on writing queries are provided &lt;a href="https://github.com/AayushK47/fireshell/blob/master/README.md#writing-queries"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Final Words
&lt;/h1&gt;

&lt;p&gt;Thank you for checking out this article. Do try out fireshell and share your experience. If you face any issues or want to contribute to the project, head over to the &lt;a href="https://github.com/AayushK47/fireshell"&gt;github repo&lt;/a&gt; and open an issue. &lt;/p&gt;

&lt;p&gt;Happy learning&lt;br&gt;
Ciao!&lt;/p&gt;

</description>
      <category>npm</category>
      <category>node</category>
      <category>firebase</category>
      <category>database</category>
    </item>
  </channel>
</rss>
