<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sourav Sarangi</title>
    <description>The latest articles on DEV Community by Sourav Sarangi (@sourav_sarangi_dad0e0ae28).</description>
    <link>https://dev.to/sourav_sarangi_dad0e0ae28</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3644879%2F2f29a0df-0d5d-405b-920c-448f75fb9ee7.png</url>
      <title>DEV Community: Sourav Sarangi</title>
      <link>https://dev.to/sourav_sarangi_dad0e0ae28</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sourav_sarangi_dad0e0ae28"/>
    <language>en</language>
    <item>
      <title>My Journey Through the Kaggle Google 5-Day Intensive ML Sprint</title>
      <dc:creator>Sourav Sarangi</dc:creator>
      <pubDate>Thu, 04 Dec 2025 03:34:58 +0000</pubDate>
      <link>https://dev.to/sourav_sarangi_dad0e0ae28/my-journey-through-the-kaggle-x-google-5-day-intensive-ml-sprint-2a1n</link>
      <guid>https://dev.to/sourav_sarangi_dad0e0ae28/my-journey-through-the-kaggle-x-google-5-day-intensive-ml-sprint-2a1n</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/googlekagglechallenge"&gt;Google AI Agents Writing Challenge&lt;/a&gt;: [Learning Reflections OR Capstone Showcase]&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;My Learning Journey / Project Overview&lt;/h2&gt;

&lt;p&gt;Over the past week, I completed the Kaggle × Google 5-Day Intensive Program — a fast-paced, hands-on sprint that helped me dive into Python for Data Science, Machine Learning basics, and Kaggle-style workflows. Below, I’m sharing the full structure of the course, how I experienced each day, what I built, and the skills I gained. If you’re starting out in ML or thinking of trying Kaggle, this might help you decide if this path is for you.&lt;/p&gt;




&lt;h2&gt;📅 Course Structure &amp;amp; My Daily Experience&lt;/h2&gt;

&lt;h3&gt;Day 1 — Getting Started: Python Basics + Kaggle Environment&lt;/h3&gt;

&lt;p&gt;✔️ Introduction to the Kaggle environment: Notebooks, datasets, competitions.&lt;/p&gt;

&lt;p&gt;✔️ Brushed up on Python essentials — lists, dictionaries, loops, conditionals, functions.&lt;/p&gt;

&lt;p&gt;✔️ First hands-on task: loaded a dataset using Pandas and performed basic exploration (head, shape, info).&lt;/p&gt;

&lt;p&gt;My takeaway: Kaggle Notebooks are beginner-friendly, and running code live makes experimentation very straightforward.&lt;/p&gt;
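&lt;p&gt;A minimal sketch of that first exploration step (the tiny in-memory frame below is a stand-in for a real Kaggle CSV, which you would normally load with &lt;code&gt;pd.read_csv&lt;/code&gt;):&lt;/p&gt;

```python
import pandas as pd

# Toy stand-in for a Kaggle dataset; in a notebook this would be
# df = pd.read_csv("train.csv")
df = pd.DataFrame({
    "age": [22, 38, 26, 35],
    "fare": [7.25, 71.28, 7.93, 53.10],
})

print(df.head())   # first few rows
print(df.shape)    # (n_rows, n_columns)
df.info()          # column dtypes and non-null counts
```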




&lt;h3&gt;Day 2 — Data Cleaning &amp;amp; Exploratory Data Analysis (EDA)&lt;/h3&gt;

&lt;p&gt;✔️ Learned data cleaning: handling missing values, removing duplicates, filtering outliers.&lt;/p&gt;

&lt;p&gt;✔️ Explored data using Pandas: .describe(), grouping, filtering, summary statistics.&lt;/p&gt;

&lt;p&gt;✔️ Performed preliminary visualization to observe data distributions and relationships.&lt;/p&gt;

&lt;p&gt;My takeaway: Investing time in clean, well-explored data is critical — it lays the foundation for good ML results.&lt;/p&gt;
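&lt;p&gt;The cleaning flow above can be sketched like this; the toy frame and the fixed outlier cutoff are invented for illustration, not taken from the course dataset:&lt;/p&gt;

```python
import numpy as np
import pandas as pd

# Made-up toy data with a duplicate row, a missing value, and an outlier
df = pd.DataFrame({
    "city":  ["A", "A", "B", "B", "B"],
    "price": [100.0, 100.0, np.nan, 250.0, 5000.0],
})

df = df.drop_duplicates()                                # remove exact duplicate rows
df["price"] = df["price"].fillna(df["price"].median())   # impute missing values
df = df[df["price"].le(1000)]                            # toy outlier cutoff

print(df.describe())                       # summary statistics
print(df.groupby("city")["price"].mean())  # grouped aggregation
```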




&lt;h3&gt;Day 3 — First Machine Learning Models (Baseline)&lt;/h3&gt;

&lt;p&gt;✔️ Understood the ML workflow: splitting data into training and test sets, fitting models, evaluating performance.&lt;/p&gt;

&lt;p&gt;✔️ Built baseline models using Scikit-Learn:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Linear Regression (for regression tasks)&lt;/li&gt;
  &lt;li&gt;Decision Trees&lt;/li&gt;
  &lt;li&gt;Random Forests&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✔️ Ran a quick mini-competition/prediction task on a real dataset.&lt;/p&gt;

&lt;p&gt;My takeaway: Even baseline models — with minimal tuning — can deliver surprisingly decent results on real-world data.&lt;/p&gt;
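&lt;p&gt;Roughly what those baselines look like in Scikit-Learn, run here on a synthetic regression dataset rather than the course data:&lt;/p&gt;

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Synthetic stand-in for a real dataset
X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

results = {}
for model in (
    LinearRegression(),
    DecisionTreeRegressor(random_state=0),
    RandomForestRegressor(random_state=0),
):
    model.fit(X_train, y_train)  # fit on the training split
    results[type(model).__name__] = model.score(X_test, y_test)  # R^2 on held-out data

print(results)
```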




&lt;h3&gt;Day 4 — Enhancing Models: Feature Engineering &amp;amp; Hyperparameter Tuning&lt;/h3&gt;

&lt;p&gt;✔️ Practiced feature engineering: generating new features, encoding categorical variables, scaling when required.&lt;/p&gt;

&lt;p&gt;✔️ Applied hyperparameter tuning and cross-validation strategies to improve model performance.&lt;/p&gt;

&lt;p&gt;✔️ Learned about the importance of model interpretation and avoiding overfitting.&lt;/p&gt;

&lt;p&gt;My takeaway: Often, smarter features and better validation improve performance more than choosing a more complex model.&lt;/p&gt;
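&lt;p&gt;A small illustration of cross-validated hyperparameter tuning with &lt;code&gt;GridSearchCV&lt;/code&gt;; the synthetic data and the tiny parameter grid are placeholders, not the course's actual setup:&lt;/p&gt;

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic classification data as a placeholder
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Cross-validated search over a tiny example grid
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [3, None]},
    cv=5,
)
grid.fit(X, y)

print(grid.best_params_)
print(round(grid.best_score_, 3))
```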




&lt;h3&gt;Day 5 — Final Project: End-to-End Pipeline + Submission&lt;/h3&gt;

&lt;p&gt;✔️ Built a complete ML pipeline: Data loading → cleaning → exploration → feature engineering → model training → evaluation → prediction.&lt;/p&gt;

&lt;p&gt;✔️ Generated submission.csv and submitted to a real competition on Kaggle.&lt;/p&gt;

&lt;p&gt;✔️ Saw my model’s score and leaderboard placement — my first “real” ML submission.&lt;/p&gt;

&lt;p&gt;My takeaway: Going from zero to a full submission in 5 days is possible — and hugely motivating. It turns theory into a tangible outcome.&lt;/p&gt;
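&lt;p&gt;The end-to-end flow, condensed into a sketch; the data, column names, and file layout below are invented for illustration and are not the real competition's schema:&lt;/p&gt;

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-ins for the train/test splits of a competition dataset
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
train_X, test_X, train_y = X[:150], X[150:], y[:150]

# Cleaning, scaling, and modelling chained into one pipeline
pipe = Pipeline([
    ("impute", SimpleImputer(strategy="median")),
    ("scale", StandardScaler()),
    ("model", LogisticRegression()),
])
pipe.fit(train_X, train_y)
preds = pipe.predict(test_X)

# Kaggle-style submission file (invented column names)
submission = pd.DataFrame({"Id": range(len(preds)), "Prediction": preds})
submission.to_csv("submission.csv", index=False)
print(submission.head())
```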

</description>
      <category>devchallenge</category>
      <category>googlekagglechallenge</category>
      <category>ai</category>
      <category>aiagents</category>
    </item>
  </channel>
</rss>
