<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Dravinesh</title>
    <description>The latest articles on DEV Community by Dravinesh (@dravinesh_9bb18e385f06063).</description>
    <link>https://dev.to/dravinesh_9bb18e385f06063</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3636121%2Ff6d77e53-c70f-4460-9e8e-52684cf3eaa2.jpg</url>
      <title>DEV Community: Dravinesh</title>
      <link>https://dev.to/dravinesh_9bb18e385f06063</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dravinesh_9bb18e385f06063"/>
    <language>en</language>
    <item>
      <title>SustainAgent – A Multi-Agent System for Energy Insights (Google Kaggle 5-Day AI Agents Intensive Capstone)</title>
      <dc:creator>Dravinesh</dc:creator>
      <pubDate>Sat, 29 Nov 2025 13:20:26 +0000</pubDate>
      <link>https://dev.to/dravinesh_9bb18e385f06063/sustainagent-a-multi-agent-system-for-energy-insights-google-x-kaggle-5-day-ai-agents-intensive-3jg</link>
      <guid>https://dev.to/dravinesh_9bb18e385f06063/sustainagent-a-multi-agent-system-for-energy-insights-google-x-kaggle-5-day-ai-agents-intensive-3jg</guid>
      <description>&lt;p&gt;Over the last few days, I completed the Google × Kaggle 5-Day AI Agents Intensive Course, and as part of the final capstone, I built SustainAgent — a simple but practical multi-agent system designed to analyze large-scale energy consumption data, detect unusual behavior, and generate meaningful weekly reports.&lt;/p&gt;

&lt;p&gt;This project was my first attempt at structuring a system around the Agent Development Kit (ADK) concepts such as memory, sessions, agent roles, and tool-like components. Even though I didn't use the full ADK (because of execution constraints), I implemented a working multi-agent architecture inside a Kaggle notebook using Python.&lt;/p&gt;

&lt;p&gt;What is SustainAgent?&lt;/p&gt;

&lt;p&gt;SustainAgent is a multi-agent pipeline with three core responsibilities:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;DataRetriever Agent&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Loads the dataset, validates important fields (like timestamps and energy values), and logs basic structure.&lt;/p&gt;
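&lt;p&gt;In code, this step comes down to a few lines. Here is a minimal sketch (the retrieve helper is my naming for this write-up; the column names follow the dataset):&lt;/p&gt;

```python
import pandas as pd

def retrieve(csv_path, date_col="day", value_col="energy_sum"):
    """Load the CSV, validate key fields, and return a clean frame plus log events."""
    df = pd.read_csv(csv_path)
    for col in (date_col, value_col):
        if col not in df.columns:
            raise ValueError(f"missing required column: {col}")
    # Coerce timestamps; rows with unparseable dates or missing values are dropped.
    df[date_col] = pd.to_datetime(df[date_col], errors="coerce")
    df = df.dropna(subset=[date_col, value_col])
    events = [{"event": "DatasetLoaded", "rows": len(df)}]
    return df, events
```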

&lt;ol start="2"&gt;
&lt;li&gt;Analyzer Agent&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Performs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Weekly aggregation&lt;/li&gt;
&lt;li&gt;Consumption statistics&lt;/li&gt;
&lt;li&gt;Trend comparison (last 7 days vs previous 7 days)&lt;/li&gt;
&lt;li&gt;Anomaly detection using Isolation Forest&lt;/li&gt;
&lt;/ul&gt;
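&lt;p&gt;The trend comparison is the simplest of these steps. A minimal sketch, assuming a daily-aggregated pandas Series (the compare_trend helper is hypothetical naming):&lt;/p&gt;

```python
import pandas as pd

def compare_trend(daily):
    """Compare mean consumption of the last 7 days vs the previous 7 days."""
    daily = daily.sort_index()
    last7 = daily.iloc[-7:].mean()
    prev7 = daily.iloc[-14:-7].mean()
    change_pct = (last7 - prev7) / prev7 * 100
    direction = "increasing" if change_pct > 0 else "decreasing"
    return {"last7": last7, "prev7": prev7, "change_pct": change_pct, "trend": direction}
```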

&lt;ol start="3"&gt;

&lt;li&gt;Reporter Agent&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Generates a final weekly report in Markdown and PDF, and bundles supporting artifacts like plots and CSV summaries.&lt;/p&gt;
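&lt;p&gt;The Markdown half of this step can be sketched with the standard library alone (the stats and trend dictionaries are assumed shapes; the FPDF export and plot bundling are omitted):&lt;/p&gt;

```python
from pathlib import Path

def write_report(stats, trend, out_dir="artifacts"):
    """Render the weekly stats into a Markdown report and return its path."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    lines = [
        "# SustainAgent Weekly Report",
        "",
        "## Consumption statistics",
        f"- mean: {stats['mean']:.2f}",
        f"- std: {stats['std']:.2f}",
        "",
        "## Trend (last 7 vs previous 7 days)",
        f"- {trend['trend']} ({trend['change_pct']:+.1f}%)",
    ]
    path = out / "sustain_report.md"
    path.write_text("\n".join(lines))
    return path
```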

&lt;p&gt;Everything runs using a custom SafeSession class that stores events and memory safely in JSON, simulating the session management taught in Day 3 of the course.&lt;/p&gt;
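&lt;p&gt;A stripped-down version of that class looks roughly like this (the coercion of numpy scalars and timestamps is the part that prevented serialization errors in practice):&lt;/p&gt;

```python
import json
import datetime

class SafeSession:
    """Stores agent events and memory as JSON-serializable data."""

    def __init__(self, path="session.json"):
        self.path = path
        self.events = []
        self.memory = {}

    @staticmethod
    def _safe(value):
        # Coerce numpy scalars and timestamps to plain Python types.
        if hasattr(value, "item"):
            return value.item()
        if isinstance(value, (datetime.date, datetime.datetime)):
            return value.isoformat()
        return value

    def log(self, event, **data):
        self.events.append({"event": event, **{k: self._safe(v) for k, v in data.items()}})

    def save(self):
        with open(self.path, "w") as f:
            json.dump({"events": self.events, "memory": self.memory}, f)
```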

&lt;p&gt;Dataset Used&lt;/p&gt;

&lt;p&gt;I used the Daily Household Energy Dataset, which contains:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;3.5 million+ rows of smart-meter energy usage&lt;/li&gt;
&lt;li&gt;Columns such as day, energy_mean, energy_sum, energy_std, etc.&lt;/li&gt;
&lt;li&gt;Data spanning different dates across households&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The sheer size made this a perfect dataset for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;trend analysis&lt;/li&gt;
&lt;li&gt;anomaly detection&lt;/li&gt;
&lt;li&gt;memory-based agent workflows&lt;/li&gt;
&lt;li&gt;real operational testing inside a Kaggle notebook&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Architecture Overview&lt;/p&gt;

&lt;p&gt;Here’s how the system flows internally:&lt;/p&gt;

&lt;p&gt;DataRetriever → Analyzer → Reporter → Output Artifacts&lt;br&gt;
(all three agents log events and memory to SafeSession)&lt;/p&gt;

&lt;p&gt;DataRetriever&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reads the CSV&lt;/li&gt;
&lt;li&gt;Extracts date &amp;amp; value columns&lt;/li&gt;
&lt;li&gt;Logs events like “DatasetLoaded”, “Rows: 3,510,433”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Analyzer&lt;/p&gt;

&lt;p&gt;Computes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Mean, median, standard deviation, min/max&lt;/li&gt;
&lt;li&gt;Weekly grouping&lt;/li&gt;
&lt;li&gt;Trend comparison&lt;/li&gt;
&lt;li&gt;Isolation Forest anomaly detection&lt;/li&gt;
&lt;/ul&gt;
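&lt;p&gt;The weekly grouping itself is essentially a one-liner with pandas (a sketch, assuming a datetime-indexed daily series):&lt;/p&gt;

```python
import pandas as pd

def weekly_summary(daily):
    """Aggregate a daily energy series into weekly statistics."""
    return daily.resample("W").agg(["mean", "median", "std", "min", "max"])
```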

&lt;p&gt;Reporter&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creates human-readable markdown report&lt;/li&gt;
&lt;li&gt;Exports a polished PDF&lt;/li&gt;
&lt;li&gt;Saves plots + summary CSVs&lt;/li&gt;
&lt;li&gt;Logs report creation events&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Session / Memory&lt;/p&gt;

&lt;p&gt;A custom SafeSession class ensures all logs, timestamps, and metrics are stored in a JSON-friendly format without serialization errors. This mirrors ADK-style memory and event tracking.&lt;/p&gt;

&lt;p&gt;Key Results&lt;/p&gt;

&lt;p&gt;These values may differ slightly depending on execution, but during my run:&lt;/p&gt;

&lt;p&gt;Weekly Insights&lt;/p&gt;

&lt;p&gt;Weekly summaries saved as:&lt;br&gt;
/kaggle/working/artifacts/weekly_summary.csv&lt;/p&gt;

&lt;p&gt;Trend (Last 7 vs Previous 7 days)&lt;/p&gt;

&lt;p&gt;On my most recent run, the trend showed:&lt;br&gt;
Decreasing consumption&lt;br&gt;
(Previous runs showed increasing, depending on date ranges.)&lt;/p&gt;

&lt;p&gt;Anomalies&lt;/p&gt;

&lt;p&gt;Using IsolationForest (contamination=0.05), we found:&lt;br&gt;
~175,000 anomalous records&lt;/p&gt;

&lt;p&gt;These represent sudden spikes, dips, or unusual usage patterns.&lt;/p&gt;
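&lt;p&gt;The detection step reduces to a few lines of scikit-learn (a sketch, using the energy values as the single feature; the detect_anomalies helper is my naming):&lt;/p&gt;

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def detect_anomalies(values, contamination=0.05, seed=42):
    """Flag anomalous readings; returns a boolean mask (True = anomaly)."""
    X = np.asarray(values, dtype=float).reshape(-1, 1)
    model = IsolationForest(contamination=contamination, random_state=seed)
    labels = model.fit_predict(X)  # -1 for anomalies, 1 for normal points
    return labels == -1
```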

&lt;p&gt;Generated Artifacts&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;sustain_report.md&lt;/li&gt;
&lt;li&gt;sustain_report.pdf&lt;/li&gt;
&lt;li&gt;timeseries.png&lt;/li&gt;
&lt;li&gt;weekly_mean.png&lt;/li&gt;
&lt;li&gt;weekly_summary.csv&lt;/li&gt;
&lt;li&gt;anomalies.csv&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These can be used directly for dashboards, stakeholder insights, or model refinement.&lt;/p&gt;

&lt;p&gt;Screenshots &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvbcgsccxxm54e8pr5eu2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvbcgsccxxm54e8pr5eu2.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyt3gf5hatmgje5w8j5d3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyt3gf5hatmgje5w8j5d3.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg86o237lulgaxm1cqgi9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg86o237lulgaxm1cqgi9.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fieqpibx4ka4gjxyn9nqt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fieqpibx4ka4gjxyn9nqt.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What I Learned&lt;/p&gt;

&lt;p&gt;This capstone helped me understand:&lt;/p&gt;

&lt;p&gt;How agents are structured&lt;/p&gt;

&lt;p&gt;Even without the exact ADK, the logic of roles, sessions, and events became clearer.&lt;/p&gt;

&lt;p&gt;Why memory matters&lt;/p&gt;

&lt;p&gt;Storing event logs and intermediate summaries makes agents reliable and interpretable.&lt;/p&gt;

&lt;p&gt;How to deal with real datasets&lt;/p&gt;

&lt;p&gt;Handling 3.5M rows pushed me to optimize memory, fix NaN-related errors, and clean serialization logic.&lt;/p&gt;

&lt;p&gt;The importance of evaluation&lt;/p&gt;

&lt;p&gt;Weekly trends, anomaly patterns, and consumption behaviors are extremely useful for real-world energy analytics.&lt;/p&gt;

&lt;p&gt;Stronger intuition for multi-agent pipelines&lt;/p&gt;

&lt;p&gt;Breaking a system into separate roles helped reduce complexity and debugging time.&lt;/p&gt;

&lt;p&gt;Technologies Used&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Python&lt;/li&gt;
&lt;li&gt;pandas &amp;amp; NumPy&lt;/li&gt;
&lt;li&gt;Matplotlib&lt;/li&gt;
&lt;li&gt;IsolationForest (scikit-learn)&lt;/li&gt;
&lt;li&gt;FPDF&lt;/li&gt;
&lt;li&gt;Custom SafeSession for JSON-safe memory&lt;/li&gt;
&lt;li&gt;Kaggle Notebook runtime&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Project Source&lt;/p&gt;

&lt;p&gt;Kaggle Notebook (Public):&lt;br&gt;
&lt;a href="https://www.kaggle.com/code/dravinesh/sustainagent" rel="noopener noreferrer"&gt;https://www.kaggle.com/code/dravinesh/sustainagent&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This includes the full implementation, logs, plots, and downloadable artifacts.&lt;/p&gt;

&lt;p&gt;Conclusion&lt;/p&gt;

&lt;p&gt;SustainAgent was a great learning experience because it took the theory from the Google × Kaggle course and turned it into a practical, end-to-end working system. The project pushed me to think in terms of agents, sessions, reliability, and report generation instead of just writing one big script.&lt;/p&gt;

&lt;p&gt;This was a valuable exercise in building clean, modular AI workflows — and I’m excited to evolve SustainAgent into a more advanced system with tool calling, Gemini integration, and real-time agent interactions in the future.&lt;/p&gt;

&lt;p&gt;Thanks for reading! &lt;br&gt;
Feel free to check out the notebook or suggest improvements.&lt;/p&gt;

</description>
      <category>agents</category>
      <category>python</category>
      <category>showdev</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
