<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Aadhitya Dev</title>
    <description>The latest articles on DEV Community by Aadhitya Dev (@aadhitya_dev_).</description>
    <link>https://dev.to/aadhitya_dev_</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3483805%2F29d7aed9-59ad-4e63-b30d-a81aab749b58.jpg</url>
      <title>DEV Community: Aadhitya Dev</title>
      <link>https://dev.to/aadhitya_dev_</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/aadhitya_dev_"/>
    <language>en</language>
    <item>
      <title>6 Different Data Formats Commonly Used in Data Analytics</title>
      <dc:creator>Aadhitya Dev</dc:creator>
      <pubDate>Mon, 06 Oct 2025 14:19:46 +0000</pubDate>
      <link>https://dev.to/aadhitya_dev_/6-different-data-formats-commonly-used-in-data-analytics-243n</link>
      <guid>https://dev.to/aadhitya_dev_/6-different-data-formats-commonly-used-in-data-analytics-243n</guid>
      <description>&lt;p&gt;In the world of data analytics, the choice of data format plays a crucial role in efficiency, storage, and processing. Different formats cater to various needs, from simple text-based exchanges to optimized binary storage for big data systems. In this article, we'll dive into six common data formats: CSV, SQL (relational tables), JSON, Parquet, XML, and Avro.&lt;/p&gt;

&lt;p&gt;For each format, I'll explain it in simple terms and represent a small dataset using it. The dataset is a simple collection of student records:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Name: Alice, Register Number: 101, Subject: Math, Marks: 90&lt;/li&gt;
&lt;li&gt;Name: Bob, Register Number: 102, Subject: Science, Marks: 85&lt;/li&gt;
&lt;li&gt;Name: Charlie, Register Number: 103, Subject: English, Marks: 95&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's explore each format one by one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. CSV (Comma Separated Values)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;CSV is a straightforward text format where each row of data is a line, and the values within a row are separated by commas (or another delimiter). It's like a basic spreadsheet without any fancy features. CSV is popular because it's easy to generate and read and is compatible with most tools, but it lacks a built-in schema and data types, which can lead to parsing issues.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgjzkujg4kidofrf5c6ph.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgjzkujg4kidofrf5c6ph.png" alt="CSV DATA" width="655" height="340"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here's our student dataset in CSV format:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo7nzhe9shdfgecj9dfc8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo7nzhe9shdfgecj9dfc8.png" alt=" " width="523" height="150"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. SQL (Relational Table Format)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;SQL represents data in relational tables, which are like grids with rows (records) and columns (fields). Strictly speaking, it's not a file format but a way to structure data in databases. Each table has a defined schema specifying data types, and you can query it using SQL. It's great for structured data with relationships but requires a database system to manage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2szjclj871tsw4y4ykjo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2szjclj871tsw4y4ykjo.png" alt=" " width="551" height="244"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here's how our dataset would look as SQL statements to create and populate a table:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Firaqrtir1dv7tccuo11z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Firaqrtir1dv7tccuo11z.png" alt=" " width="800" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. JSON (JavaScript Object Notation)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;JSON is a flexible, text-based format that stores data as key-value pairs (objects) or lists (arrays). It's human-readable, supports nested structures, and is widely used in web services, APIs, and configuration files. JSON is self-describing but can be verbose for large datasets.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8rem5s9993twvhfvg46k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8rem5s9993twvhfvg46k.png" alt=" " width="456" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our dataset as a JSON array of objects:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj7j9kok7m0qcc0ncttaj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj7j9kok7m0qcc0ncttaj.png" alt=" " width="515" height="568"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Parquet (Columnar Storage Format)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Parquet is a binary, columnar storage format designed for big data processing. Instead of storing data row by row, it groups values by column, which enables better compression and faster analytics queries (e.g., summing a single column without scanning everything). It's popular in systems like Hadoop and Spark.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fer0ce835uzjsx6qfu71n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fer0ce835uzjsx6qfu71n.png" alt=" " width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since Parquet is binary, it can't be shown as readable text. Below is a hexadecimal representation of the Parquet file for our dataset (generated using Python's PyArrow library):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3sabfste4dso8paqzu6x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3sabfste4dso8paqzu6x.png" alt=" " width="800" height="638"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. XML (Extensible Markup Language)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;XML is a text-based markup language that uses hierarchical tags to structure data. It's like a tree of elements, making it suitable for complex, nested data. XML is verbose and self-descriptive but less efficient for large volumes due to its size. It's common in enterprise systems and web services.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsx4y0om92yuj0d9yr806.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsx4y0om92yuj0d9yr806.png" alt=" " width="800" height="471"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our dataset in XML format:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv2y6x2nbolkozfpuw1cc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv2y6x2nbolkozfpuw1cc.png" alt=" " width="533" height="568"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Avro (Row-based Storage Format)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Avro is a compact, binary row-based format that includes the data schema within the file. This allows for schema evolution (changing structures over time) and efficient serialization. It's row-oriented, making it good for write-intensive workloads, and is commonly used in Apache Kafka and Hadoop ecosystems.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6t0j3n1cudlbmc4p0ys7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6t0j3n1cudlbmc4p0ys7.png" alt=" " width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since Avro files are binary, here's the schema in JSON format, followed by a Python code snippet that generates the binary file:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyp3vsdpwd1gddmhxo1aw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyp3vsdpwd1gddmhxo1aw.png" alt=" " width="800" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Code to generate the Avro file:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwlledsy8b97su165uoco.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwlledsy8b97su165uoco.png" alt=" " width="800" height="643"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each of these formats has its place in data analytics. Text-based ones like CSV, JSON, and XML are great for readability and interoperability, while binary formats like Parquet and Avro excel in performance and scalability for big data. Choose based on your use case—whether it's quick exports, complex queries, or efficient storage. If you're working in cloud environments, formats like Parquet often shine due to their compression and query optimization.&lt;/p&gt;

&lt;p&gt;What’s your go-to data format? Let me know in the comments!&lt;br&gt;
Happy coding!!&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>data</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>MongoDB Hands-On Practice: CRUD Operations with the MongoDB Node.js Driver</title>
      <dc:creator>Aadhitya Dev</dc:creator>
      <pubDate>Sun, 07 Sep 2025 14:13:06 +0000</pubDate>
      <link>https://dev.to/aadhitya_dev_/mongodb-hands-on-practice-crud-operations-with-the-mongodb-nodejs-driver-3k0e</link>
      <guid>https://dev.to/aadhitya_dev_/mongodb-hands-on-practice-crud-operations-with-the-mongodb-nodejs-driver-3k0e</guid>
      <description>&lt;p&gt;You've got data, and you've got MongoDB Atlas. Now what? The best way to learn is by doing. In this tutorial, we'll take a provided dataset of business reviews and use the MongoDB Node.js driver to perform essential Create, Read, Update, and Delete (CRUD) operations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjdk7y02s5k2thpekewc3.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjdk7y02s5k2thpekewc3.jpeg" alt=" " width="800" height="205"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Prerequisites: Connect to Your Atlas Cluster&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, we need to connect our Node.js application to MongoDB Atlas.&lt;/p&gt;

&lt;p&gt;1. Install the driver: npm install mongodb&lt;/p&gt;

&lt;p&gt;2. Get your connection string from your Atlas cluster dashboard (under the "Connect" button).&lt;/p&gt;

&lt;p&gt;3. Set up your connection: create a database.js module.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5tsttl8vy9sci042a0ns.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5tsttl8vy9sci042a0ns.jpeg" alt=" " width="630" height="612"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. The Queries (Using the Node.js Driver)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A. Insert the Provided Records&lt;br&gt;
We'll write a script to insert the 10 review documents.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fql3prssvbwrefcykn4ju.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fql3prssvbwrefcykn4ju.png" alt=" " width="625" height="772"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;B. Query to find top 5 businesses with highest average rating.&lt;br&gt;
Since each document is a single review, we need to $group by the business to calculate the average. We'll group by both business_id and name to get a clear result.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faripcrcdonndq9y3zqso.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faripcrcdonndq9y3zqso.png" alt=" " width="635" height="616"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Expected Output:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxoqoo8hcfq0ctlxwvrjx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxoqoo8hcfq0ctlxwvrjx.png" alt=" " width="607" height="130"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;C. Query to count how many reviews contain the word “good”.&lt;br&gt;
We'll use the $regex operator for a case-insensitive text search on the review field. The $count stage is perfect for this.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa663i6w1wfuqqryz96wc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa663i6w1wfuqqryz96wc.png" alt=" " width="628" height="488"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;D. Query to get all reviews for a specific business ID.&lt;br&gt;
This is a simple find query. We'll find all documents where business_id matches our target.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8uu345e9n4vw15zkltow.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8uu345e9n4vw15zkltow.png" alt=" " width="603" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;E. Update a review and delete a record.&lt;br&gt;
Update a Review:&lt;br&gt;
Let's find the review for "Tech Cafe" and update its rating and text.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo7iv8k5fv9rdlnohscxa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo7iv8k5fv9rdlnohscxa.png" alt=" " width="621" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;F. Delete a Record:&lt;br&gt;
Let's delete the review for "Burger Town".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgaxanpxrzi85q81atg1q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgaxanpxrzi85q81atg1q.png" alt=" " width="622" height="383"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;_&lt;br&gt;
Conclusion_&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You've now successfully performed all the requested operations on your flat reviews collection! The key difference in this approach is that business information (like the name) is duplicated in each review, which is a common trade-off for read performance and simplicity.&lt;/p&gt;

&lt;p&gt;Key MongoDB concepts we used:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;insertMany(): For bulk inserting documents.&lt;/li&gt;
&lt;li&gt;Aggregation Pipeline ($group, $sort, $limit, $count, $match): The powerhouse for complex data analysis and transformation.&lt;/li&gt;
&lt;li&gt;$regex: For powerful pattern matching within string fields.&lt;/li&gt;
&lt;li&gt;find(): The standard workhorse for querying documents.&lt;/li&gt;
&lt;li&gt;updateOne() and deleteOne(): For modifying and removing documents.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This hands-on approach should give you the confidence to start building your own MongoDB-backed applications. Happy coding!&lt;/p&gt;

</description>
      <category>mongodb</category>
      <category>node</category>
      <category>database</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
