<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sujitha Selvaraj</title>
    <description>The latest articles on DEV Community by Sujitha Selvaraj (@sujitha_selvaraj_af5010c5).</description>
    <link>https://dev.to/sujitha_selvaraj_af5010c5</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3599024%2F312f68d2-2e26-4abd-9524-62b8ae86cf28.png</url>
      <title>DEV Community: Sujitha Selvaraj</title>
      <link>https://dev.to/sujitha_selvaraj_af5010c5</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sujitha_selvaraj_af5010c5"/>
    <language>en</language>
    <item>
      <title>🧹 Data Cleaning Challenge with Pandas (Google Colab)</title>
      <dc:creator>Sujitha Selvaraj</dc:creator>
      <pubDate>Sun, 09 Nov 2025 13:03:47 +0000</pubDate>
      <link>https://dev.to/sujitha_selvaraj_af5010c5/data-cleaning-challenge-with-pandas-google-colab-4558</link>
      <guid>https://dev.to/sujitha_selvaraj_af5010c5/data-cleaning-challenge-with-pandas-google-colab-4558</guid>
      <description>&lt;p&gt;&lt;strong&gt;🧹 Data Cleaning Challenge with Pandas (Google Colab)&lt;/strong&gt;&lt;br&gt;
Data cleaning is one of the most crucial steps in any data science or analytics project. In this challenge, I worked on a real-world dataset from Kaggle with over 100,000 rows, performing various Pandas operations to clean, preprocess, and prepare it for further analysis.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;📂 Dataset Details&lt;/strong&gt;&lt;br&gt;
For this challenge, I selected the E-commerce Sales Dataset from Kaggle containing around 120,000 rows and 12 columns.&lt;/p&gt;

&lt;p&gt;It includes data such as:&lt;/p&gt;

&lt;p&gt;🧾 Order ID&lt;br&gt;
👤 Customer Name&lt;br&gt;
🛒 Product &amp;amp; Quantity&lt;br&gt;
💰 Sales &amp;amp; Discount&lt;br&gt;
🌍 Region&lt;br&gt;
📅 Order Date&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Before Cleaning:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Rows → 120,000&lt;br&gt;
Columns → 12&lt;br&gt;
File format → .csv&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;⚙️ Tools &amp;amp; Environment&lt;/strong&gt;&lt;br&gt;
Python 3&lt;br&gt;
Google Colab&lt;br&gt;
Libraries: Pandas, NumPy, Matplotlib&lt;/p&gt;
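&lt;p&gt;A minimal sketch of the kind of cleaning steps described above, run on a tiny inline sample rather than the actual Kaggle file (the real dataset, its column names, and its quirks are assumptions here):&lt;/p&gt;

```python
# Hypothetical mini-version of the e-commerce data: duplicates, a missing
# customer, a missing sales value, and a malformed date.
import pandas as pd
import numpy as np

raw = pd.DataFrame({
    "Order ID":      ["O1", "O2", "O2", "O3", "O4", "O5"],
    "Customer Name": ["Asha", "Vikram", "Vikram", None, "Meera", "Rohit"],
    "Sales":         [120.0, 85.5, 85.5, 60.0, np.nan, 240.0],
    "Order Date":    ["2025-01-03", "2025-01-04", "2025-01-04",
                      "2025-01-05", "2025-01-06", "not a date"],
})

cleaned = raw.drop_duplicates()                     # exact duplicate rows
cleaned = cleaned.dropna(subset=["Customer Name"])  # rows with no customer
# fill missing sales with the column median instead of dropping the row
cleaned["Sales"] = cleaned["Sales"].fillna(cleaned["Sales"].median())
# coerce unparseable dates to NaT instead of raising
cleaned["Order Date"] = pd.to_datetime(cleaned["Order Date"], errors="coerce")

print(cleaned.shape)
```

&lt;p&gt;The same four operations (deduplicate, drop, impute, coerce types) scale unchanged to the full 120,000-row file.&lt;/p&gt;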

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb2755emr3gtp6cbo5639.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb2755emr3gtp6cbo5639.png" alt=" " width="800" height="274"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>challenge</category>
      <category>datascience</category>
      <category>python</category>
    </item>
    <item>
      <title>Exploring NoSQL Data Analysis: A Practical Study Using a Kaggle Dataset</title>
      <dc:creator>Sujitha Selvaraj</dc:creator>
      <pubDate>Sun, 09 Nov 2025 13:00:00 +0000</pubDate>
      <link>https://dev.to/sujitha_selvaraj_af5010c5/exploring-nosql-data-analysis-a-practical-study-using-a-kaggle-dataset-3g18</link>
      <guid>https://dev.to/sujitha_selvaraj_af5010c5/exploring-nosql-data-analysis-a-practical-study-using-a-kaggle-dataset-3g18</guid>
      <description>&lt;p&gt;🗂️ &lt;strong&gt;Step 1: Setting up MongoDB Atlas&lt;/strong&gt;&lt;br&gt;
Go to MongoDB Atlas.&lt;br&gt;
Create a free cluster (use the Shared Tier option).&lt;br&gt;
Under Network Access, add your IP:&lt;br&gt;
Click Network Access → Add IP Address → Allow access from anywhere (0.0.0.0/0).&lt;br&gt;
Create a database user and remember the credentials. Example:&lt;/p&gt;

&lt;p&gt;Username: 22cs181&lt;br&gt;
Password: Sujitha&lt;/p&gt;

&lt;p&gt;Once your cluster is ready, click “Connect → Connect using MongoDB Shell” and copy the connection string.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcd3ijoebgigtjd0zkdgs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcd3ijoebgigtjd0zkdgs.png" alt=" " width="800" height="542"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;💻 &lt;strong&gt;Step 2: Connect from Mongo Shell&lt;/strong&gt;&lt;br&gt;
Open PowerShell or Command Prompt, then run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;bash&lt;br&gt;
mongosh "mongodb+srv://m0.wpjmxqh.mongodb.net/" --apiVersion 1 --username 22cs181_db_user&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then enter your password when prompted:&lt;/p&gt;

&lt;p&gt;Enter password: Sujitha&lt;/p&gt;

&lt;p&gt;If connection succeeds, you’ll see:&lt;/p&gt;

&lt;p&gt;Atlas atlas-xxxx-shard-0 [primary]&amp;gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F192cdvi2d7vvglopljba.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F192cdvi2d7vvglopljba.png" alt=" " width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📥 &lt;strong&gt;Step 3: Create a Database and Insert Records&lt;/strong&gt;&lt;br&gt;
Switch to a database (it will auto-create):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;javascript&lt;br&gt;
use businessDB&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Insert 10 sample business review records:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;javascript&lt;br&gt;
db.reviews.insertMany([&lt;br&gt;
{ "business_id": "B001", "name": "Cafe Aroma", "rating": 4.6, "review": "Good food and fast service!", "date": "2025-11-07" },&lt;br&gt;
{ "business_id": "B002", "name": "Pizza Palace", "rating": 4.8, "review": "Amazing crust and cheese quality!", "date": "2025-11-07" },&lt;br&gt;
{ "business_id": "B003", "name": "Tea Time", "rating": 4.2, "review": "Nice ambience and friendly staff.", "date": "2025-11-07" },&lt;br&gt;
{ "business_id": "B004", "name": "Sweet Treats", "rating": 3.9, "review": "Desserts were good but service was slow.", "date": "2025-11-07" },&lt;br&gt;
{ "business_id": "B005", "name": "Veggie Delight", "rating": 4.1, "review": "Healthy food with good taste.", "date": "2025-11-07" },&lt;br&gt;
{ "business_id": "B006", "name": "Burger Hub", "rating": 4.9, "review": "Best burgers ever!", "date": "2025-11-07" },&lt;br&gt;
{ "business_id": "B007", "name": "Ocean Dine", "rating": 4.7, "review": "Fresh seafood and great view.", "date": "2025-11-07" },&lt;br&gt;
{ "business_id": "B008", "name": "Spice Route", "rating": 3.8, "review": "Food was okay, but spicy.", "date": "2025-11-07" },&lt;br&gt;
{ "business_id": "B009", "name": "Bakers Street", "rating": 4.5, "review": "Good pastries and coffee.", "date": "2025-11-07" },&lt;br&gt;
{ "business_id": "B010", "name": "Quick Bite", "rating": 4.0, "review": "Good service and clean place.", "date": "2025-11-07" }&lt;br&gt;
])&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffr9xltck9xo37rrx6shk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffr9xltck9xo37rrx6shk.png" alt=" " width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🔍 &lt;strong&gt;Step 4: Queries&lt;/strong&gt;&lt;br&gt;
🏆 4.1 Top 5 Businesses by Rating&lt;br&gt;
&lt;code&gt;javascript&lt;br&gt;
db.reviews.find().sort({ rating: -1 }).limit(5)&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;🔤 4.2 Count of Reviews Containing “good”&lt;br&gt;
&lt;code&gt;javascript&lt;br&gt;
db.reviews.countDocuments({ review: /good/i })&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;🏪 4.3 Get Reviews for a Specific Business ID&lt;br&gt;
&lt;code&gt;javascript&lt;br&gt;
db.reviews.find({ business_id: "B005" })&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;
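&lt;p&gt;To make the query logic explicit, the three queries above can be re-expressed in plain Python over the same kind of records (the data below is a subset of the sample inserted in Step 3):&lt;/p&gt;

```python
import re

reviews = [
    {"business_id": "B001", "name": "Cafe Aroma", "rating": 4.6,
     "review": "Good food and fast service!"},
    {"business_id": "B005", "name": "Veggie Delight", "rating": 4.1,
     "review": "Healthy food with good taste."},
    {"business_id": "B006", "name": "Burger Hub", "rating": 4.9,
     "review": "Best burgers ever!"},
]

# 4.1 sort({rating: -1}).limit(5): sort descending, take the first five
top5 = sorted(reviews, key=lambda r: r["rating"], reverse=True)[:5]

# 4.2 countDocuments({review: /good/i}): case-insensitive substring match
good = sum(1 for r in reviews if re.search("good", r["review"], re.I))

# 4.3 find({business_id: "B005"}): plain equality filter
b005 = [r for r in reviews if r["business_id"] == "B005"]

print(top5[0]["name"], good, len(b005))  # Burger Hub 2 1
```

&lt;p&gt;MongoDB evaluates the same filters server-side, so only the matching documents cross the network.&lt;/p&gt;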

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fczdlunkeob1nppr1humb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fczdlunkeob1nppr1humb.png" alt=" " width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;✏️ &lt;strong&gt;Step 5: Update and Delete&lt;/strong&gt;&lt;br&gt;
✏️ Update a Review&lt;br&gt;
&lt;code&gt;javascript&lt;br&gt;
db.reviews.updateOne(&lt;br&gt;
{ business_id: "B005" },&lt;br&gt;
{ $set: { rating: 4.3, review: "Updated: Great taste and fresh ingredients!" } }&lt;br&gt;
)&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;🗑️ Delete a Record&lt;br&gt;
&lt;code&gt;javascript&lt;br&gt;
db.reviews.deleteOne({ business_id: "B010" })&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcr0w85hayxhscce0eq9t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcr0w85hayxhscce0eq9t.png" alt=" " width="800" height="208"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📤 &lt;strong&gt;Step 6: Export Data to JSON/CSV&lt;/strong&gt;&lt;br&gt;
Exit the Mongo shell:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;bash&lt;br&gt;
exit&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then run the following from PowerShell (not inside mongosh) 👇&lt;/p&gt;

&lt;p&gt;📄 Export as CSV&lt;br&gt;
&lt;code&gt;bash&lt;br&gt;
mongoexport --uri="mongodb+srv://22cs181_db_user:Sujitha@m0.wpjmxqh.mongodb.net/businessDB" --collection=reviews --type=csv --fields=business_id,name,rating,review,date --out=reviews.csv&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;📦 Export as JSON&lt;br&gt;
&lt;code&gt;bash&lt;br&gt;
mongoexport --uri="mongodb+srv://22cs181_db_user:Sujitha@m0.wpjmxqh.mongodb.net/businessDB" --collection=reviews --out=reviews.json&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;📊 &lt;strong&gt;Step 7: View the Exported Files&lt;/strong&gt;&lt;br&gt;
Open reviews.csv in Excel or VS Code.&lt;br&gt;
Open reviews.json in any text editor.&lt;/p&gt;
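&lt;p&gt;Note that mongoexport writes one JSON document per line (newline-delimited JSON), with the auto-generated _id serialized as a {"$oid": ...} wrapper. A minimal sketch of loading such a file in Python; the two inline lines below (with made-up ObjectId values) stand in for reviews.json:&lt;/p&gt;

```python
import json

# stand-in for open("reviews.json").read(); the $oid hex strings are fabricated
ndjson = "\n".join([
    '{"_id":{"$oid":"672f0a1b2c3d4e5f60718293"},"business_id":"B001",'
    '"name":"Cafe Aroma","rating":4.6}',
    '{"_id":{"$oid":"672f0a1b2c3d4e5f60718294"},"business_id":"B002",'
    '"name":"Pizza Palace","rating":4.8}',
])

# parse each non-empty line as its own JSON document
docs = [json.loads(line) for line in ndjson.splitlines() if line.strip()]
names = [d["name"] for d in docs]
print(names)  # ['Cafe Aroma', 'Pizza Palace']
```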

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv6ea6wb0ektzrer0pl3w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv6ea6wb0ektzrer0pl3w.png" alt=" " width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🎯 &lt;strong&gt;Final Thoughts&lt;/strong&gt;&lt;br&gt;
MongoDB Atlas makes it easy to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Manage cloud-hosted databases&lt;/li&gt;
&lt;li&gt;Perform CRUD operations&lt;/li&gt;
&lt;li&gt;Export results in multiple formats&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This project demonstrates all the essential MongoDB operations — perfect for Data Engineering and Database Management learning tasks.&lt;/p&gt;

</description>
      <category>mongodb</category>
      <category>datascience</category>
    </item>
    <item>
      <title>Data in the Cloud — 6 Common Data Formats Every Analyst Should Know</title>
      <dc:creator>Sujitha Selvaraj</dc:creator>
      <pubDate>Thu, 06 Nov 2025 12:54:14 +0000</pubDate>
      <link>https://dev.to/sujitha_selvaraj_af5010c5/data-in-the-cloud-6-common-data-formats-every-analyst-should-know-1kb3</link>
      <guid>https://dev.to/sujitha_selvaraj_af5010c5/data-in-the-cloud-6-common-data-formats-every-analyst-should-know-1kb3</guid>
      <description>&lt;p&gt;&lt;strong&gt;1. CSV (Comma Separated Values)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt;&lt;br&gt;
CSV is the simplest and most widely used data format. It stores data in plain text where each line represents a record, and values are separated by commas.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;name,reg_no,subject,marks&lt;br&gt;
Asha Rao,R001,Maths,89&lt;br&gt;
Vikram S,R002,Physics,76&lt;br&gt;
Meera K,R003,Chemistry,92&lt;br&gt;
Rohit P,R004,Maths,68&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Where it’s used:&lt;/strong&gt;&lt;br&gt;
CSV is used in spreadsheets, data imports/exports, and small-scale analytics.&lt;/p&gt;
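&lt;p&gt;The example above can be read with Python's built-in csv module; each row comes back as a dict keyed by the header line:&lt;/p&gt;

```python
import csv
import io

text = """name,reg_no,subject,marks
Asha Rao,R001,Maths,89
Vikram S,R002,Physics,76
Meera K,R003,Chemistry,92
Rohit P,R004,Maths,68
"""

rows = list(csv.DictReader(io.StringIO(text)))
# CSV has no types: every value arrives as a string, so numeric
# columns need an explicit conversion
marks = [int(r["marks"]) for r in rows]
print(len(rows), max(marks))  # 4 92
```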

&lt;p&gt;&lt;strong&gt;2. SQL (Relational Table Format)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt;&lt;br&gt;
SQL represents data stored in relational databases. The data is organized in tables with defined columns and data types. Each row represents one record.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;TABLE: students&lt;br&gt;
name      | reg_no | subject   | marks&lt;br&gt;
----------|--------|-----------|------&lt;br&gt;
Asha Rao  | R001   | Maths     | 89&lt;br&gt;
Vikram S  | R002   | Physics   | 76&lt;br&gt;
Meera K   | R003   | Chemistry | 92&lt;br&gt;
Rohit P   | R004   | Maths     | 68&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Where it’s used:&lt;/strong&gt;&lt;br&gt;
Used in databases like MySQL, PostgreSQL, and SQL Server for structured data and transactional operations.&lt;/p&gt;
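&lt;p&gt;The same students table can be expressed in SQL using Python's built-in sqlite3 module, so the example runs without a database server (SQLite here is a stand-in for MySQL or PostgreSQL):&lt;/p&gt;

```python
import sqlite3

con = sqlite3.connect(":memory:")
# typed columns are the key difference from CSV: marks is a real INTEGER
con.execute(
    "CREATE TABLE students (name TEXT, reg_no TEXT, subject TEXT, marks INTEGER)"
)
con.executemany(
    "INSERT INTO students VALUES (?, ?, ?, ?)",
    [("Asha Rao", "R001", "Maths", 89),
     ("Vikram S", "R002", "Physics", 76),
     ("Meera K", "R003", "Chemistry", 92),
     ("Rohit P", "R004", "Maths", 68)],
)

# numeric sorting and filtering happen directly in the database
top = con.execute(
    "SELECT name, marks FROM students ORDER BY marks DESC LIMIT 1"
).fetchone()
print(top)  # ('Meera K', 92)
```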

&lt;p&gt;&lt;strong&gt;3. JSON (JavaScript Object Notation)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt;&lt;br&gt;
JSON stores data as key-value pairs. It is lightweight, human-readable, and commonly used in APIs and modern web applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;[&lt;br&gt;
  {"name": "Asha Rao", "reg_no": "R001", "subject": "Maths", "marks": 89},&lt;br&gt;
  {"name": "Vikram S", "reg_no": "R002", "subject": "Physics", "marks": 76},&lt;br&gt;
  {"name": "Meera K", "reg_no": "R003", "subject": "Chemistry", "marks": 92},&lt;br&gt;
  {"name": "Rohit P", "reg_no": "R004", "subject": "Maths", "marks": 68}&lt;br&gt;
]&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Where it’s used:&lt;/strong&gt;&lt;br&gt;
APIs, web applications, configuration files, and NoSQL databases like MongoDB.&lt;/p&gt;
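&lt;p&gt;Parsing the JSON array above takes one call to the standard json module; unlike CSV, the value types survive the round trip:&lt;/p&gt;

```python
import json

text = """[
  {"name": "Asha Rao", "reg_no": "R001", "subject": "Maths", "marks": 89},
  {"name": "Vikram S", "reg_no": "R002", "subject": "Physics", "marks": 76}
]"""

records = json.loads(text)
# marks is already an int, with no manual conversion step
print(type(records[0]["marks"]).__name__)  # int
```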

&lt;p&gt;&lt;strong&gt;4. Parquet (Columnar Storage Format)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt;&lt;br&gt;
Parquet is a columnar storage format used for big data analytics. Unlike row-based formats, Parquet stores data column-wise, which reduces storage space and increases query performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example (conceptual view):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;name      reg_no  subject    marks&lt;br&gt;
Asha Rao  R001    Maths      89&lt;br&gt;
Vikram S  R002    Physics    76&lt;br&gt;
Meera K   R003    Chemistry  92&lt;br&gt;
Rohit P   R004    Maths      68&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Where it’s used:&lt;/strong&gt;&lt;br&gt;
Big data platforms like Apache Spark, Hadoop, and AWS Athena for fast analytics and cloud storage efficiency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. XML (Extensible Markup Language)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt;&lt;br&gt;
XML is a tag-based format used to represent structured data. It is similar to HTML but designed to store and transport data rather than display it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;&amp;lt;students&amp;gt;&lt;br&gt;
  &amp;lt;student&amp;gt;&lt;br&gt;
    &amp;lt;name&amp;gt;Asha Rao&amp;lt;/name&amp;gt;&lt;br&gt;
    &amp;lt;reg_no&amp;gt;R001&amp;lt;/reg_no&amp;gt;&lt;br&gt;
    &amp;lt;subject&amp;gt;Maths&amp;lt;/subject&amp;gt;&lt;br&gt;
    &amp;lt;marks&amp;gt;89&amp;lt;/marks&amp;gt;&lt;br&gt;
  &amp;lt;/student&amp;gt;&lt;br&gt;
  &amp;lt;student&amp;gt;&lt;br&gt;
    &amp;lt;name&amp;gt;Vikram S&amp;lt;/name&amp;gt;&lt;br&gt;
    &amp;lt;reg_no&amp;gt;R002&amp;lt;/reg_no&amp;gt;&lt;br&gt;
    &amp;lt;subject&amp;gt;Physics&amp;lt;/subject&amp;gt;&lt;br&gt;
    &amp;lt;marks&amp;gt;76&amp;lt;/marks&amp;gt;&lt;br&gt;
  &amp;lt;/student&amp;gt;&lt;br&gt;
&amp;lt;/students&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Where it’s used:&lt;/strong&gt;&lt;br&gt;
Web services (SOAP), configuration files, and systems that require strong data validation through schemas.&lt;/p&gt;
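&lt;p&gt;The student records can be built and read back with Python's standard xml.etree.ElementTree module. The XML is constructed element by element here rather than from a literal string, purely to keep the sketch compact; the element names (students, student) are illustrative assumptions:&lt;/p&gt;

```python
import xml.etree.ElementTree as ET

root = ET.Element("students")
for name, reg_no, subject, marks in [
    ("Asha Rao", "R001", "Maths", "89"),
    ("Vikram S", "R002", "Physics", "76"),
]:
    student = ET.SubElement(root, "student")
    for tag, value in [("name", name), ("reg_no", reg_no),
                       ("subject", subject), ("marks", marks)]:
        ET.SubElement(student, tag).text = value

# serialize to text, then parse it back, as a receiving system would
parsed = ET.fromstring(ET.tostring(root, encoding="unicode"))
print(len(parsed.findall("student")),
      parsed.find("student/marks").text)  # 2 89
```

&lt;p&gt;In XML everything is text, so like CSV the marks values still need an explicit numeric conversion on the consuming side.&lt;/p&gt;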

&lt;p&gt;&lt;strong&gt;6. Avro (Row-based Storage Format)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt;&lt;br&gt;
Avro is a compact binary format that stores both data and schema. It’s designed for fast data serialization and supports schema evolution, making it ideal for real-time data pipelines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example (logical representation):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;{"name": "Asha Rao", "reg_no": "R001", "subject": "Maths", "marks": 89}&lt;br&gt;
{"name": "Vikram S", "reg_no": "R002", "subject": "Physics", "marks": 76}&lt;br&gt;
{"name": "Meera K", "reg_no": "R003", "subject": "Chemistry", "marks": 92}&lt;br&gt;
{"name": "Rohit P", "reg_no": "R004", "subject": "Maths", "marks": 68}&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Where it’s used:&lt;/strong&gt;&lt;br&gt;
Data streaming (Apache Kafka), data serialization, and large-scale data pipelines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each data format serves a different purpose in the analytics and cloud ecosystem:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CSV is simple and universal.&lt;/li&gt;
&lt;li&gt;SQL ensures structure and relationships.&lt;/li&gt;
&lt;li&gt;JSON adds flexibility and nesting.&lt;/li&gt;
&lt;li&gt;Parquet optimizes analytical queries.&lt;/li&gt;
&lt;li&gt;XML emphasizes structure and validation.&lt;/li&gt;
&lt;li&gt;Avro focuses on efficient, schema-based data transport.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Understanding when and how to use these formats is a core skill for any data analyst, data engineer, or cloud professional.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>csv</category>
      <category>dataformat</category>
      <category>analytics</category>
    </item>
  </channel>
</rss>
