Introduction
I recently completed the 5-Day AI Agents Intensive Course by Google and Kaggle. It was an incredible deep dive into building autonomous systems that go beyond simple chatbots.
For my Capstone Project, I chose the Agents for Good track to solve a problem I face constantly: passive learning.
The Problem: Studying is Passive
We all have to read long PDF guides and textbooks, but reading is often passive: you stare at the text without really interacting with it. If you have a specific question, you have to search manually. If you want to test yourself, you have to rely on pre-made quizzes that might not cover what you just read.
The Solution: The AI Study Buddy
I built an Interactive Learning Agent that turns any static text into a dynamic study partner.
Instead of just reading, users can:
Ask RAG-based questions: "What is the difference between X and Y?"
Request active quizzes: "Quiz me on this topic."
Have a conversation: The agent remembers context for natural follow-ups.
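The "remembers context" part can be as simple as a rolling message history that gets passed back to the model on every turn. Here is a minimal sketch; the class name, field names, and turn limit are all illustrative, not the project's actual code:

```python
class ConversationMemory:
    """Rolling history so follow-up questions have context.

    A sketch: the real agent would feed these turns back to
    Gemini with each request. `max_turns` is an assumed cap.
    """

    def __init__(self, max_turns: int = 10):
        self.turns = []
        self.max_turns = max_turns

    def add(self, role: str, text: str) -> None:
        # Keep only the most recent exchanges to bound prompt size.
        self.turns.append({"role": role, "text": text})
        self.turns = self.turns[-self.max_turns:]

    def as_prompt(self) -> str:
        # Flatten history into a text block for the next model call.
        return "\n".join(f"{t['role']}: {t['text']}" for t in self.turns)
```

Trimming to the last N turns is the simplest way to keep the prompt from growing without bound; a fancier version might summarize older turns instead.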
You can see the full project code here: https://www.kaggle.com/code/anandk05/aibuddy-34
How It Works (The Architecture)
I built this agent using Python and Google's Gemini Pro model. It uses a "Router" architecture to intelligently switch between tools.
1. The Router Agent 🧠
The core of the system is a routing function. It analyzes the user's intent.
If the user asks "What is...", it routes to the RAG Tool.
If the user says "Quiz me...", it routes to the Quiz Generator.
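In its simplest form, the router is just a function from a message to a tool name. The sketch below uses keyword matching to make the idea concrete; the actual agent can ask the LLM itself to classify intent, and the tool names here are illustrative:

```python
def route(user_message: str) -> str:
    """Pick a tool based on the user's intent.

    A keyword-based sketch of the routing step; a production
    router would likely let the model classify the intent.
    """
    text = user_message.lower().strip()
    # "Quiz me..." style requests go to the quiz generator.
    if "quiz" in text:
        return "quiz_generator"
    # Questions ("What is...?") go to the RAG tool.
    if text.startswith(("what", "why", "how")) or "?" in text:
        return "rag_tool"
    # Everything else falls through to plain conversation.
    return "chat"
```

The nice property of this design is that adding a new capability is just adding a new branch and a new tool, without touching the existing ones.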
2. Retrieval Augmented Generation (RAG) 📚
I used FAISS to create a vector store from the study material. This allows the agent to pull precise, factual answers directly from the source text, reducing hallucinations.
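The retrieval step boils down to: embed the question, score it against every stored chunk, and return the closest matches. To keep the sketch self-contained, it uses a toy bag-of-words "embedding" and plain cosine similarity in place of a real embedding model and a FAISS index; all function names are illustrative:

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Toy bag-of-words vector standing in for a real embedding
    # model; the project uses dense vectors indexed with FAISS.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(question: str, chunks: list[str], k: int = 1) -> list[str]:
    """Return the k chunks most similar to the question."""
    q = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]
```

The retrieved chunks are then pasted into the prompt so the model answers from the source text rather than from memory, which is what cuts down on hallucinations.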
3. Self-Evaluation & Quality Control ✅
This was my biggest "Aha!" moment from the course. I didn't just want the agent to generate a quiz; I wanted it to generate a good quiz.
I implemented an Evaluator Tool that programmatically audits the generated quiz before showing it to the user. It checks:
Is the question clear?
Are there exactly 4 options?
Is the answer key valid?
If the quiz fails this check, the agent regenerates it. This ensures a high-quality experience for the student.
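The three checks above are easy to express as a single audit function, with a retry loop around the generator. This is a sketch under assumed field names (`question`, `options`, `answer`) and an assumed retry cap, not the project's exact code:

```python
def evaluate_quiz(quiz: dict) -> bool:
    """Programmatic audit of one generated quiz item.

    Mirrors the three checks: non-empty question, exactly
    4 options, and an answer key that matches an option.
    """
    question = quiz.get("question", "").strip()
    options = quiz.get("options", [])
    answer = quiz.get("answer")
    return bool(question) and len(options) == 4 and answer in options


def generate_valid_quiz(generate, max_retries: int = 3):
    # `generate` is the LLM-backed quiz generator (a callable
    # returning a quiz dict); regenerate until the audit passes.
    for _ in range(max_retries):
        quiz = generate()
        if evaluate_quiz(quiz):
            return quiz
    return None  # give up after max_retries failed audits
```

Because the audit is plain Python rather than another LLM call, it is cheap, deterministic, and easy to extend with more rules later.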
What I Learned
This course changed how I view AI. Before, I thought of LLMs as text generators. Now, I see them as reasoning engines.
The most powerful concept I applied was Observability. By adding detailed logs (or "thought traces") to my code, I could watch the agent "think"—detecting intent, selecting tools, and evaluating its own work. It felt less like coding a script and more like teaching a digital employee.
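A thought trace can be as simple as a log line at each decision point. The sketch below wires Python's standard `logging` module into a tiny handler so you can watch the intent-detection and tool-selection steps go by; the routing rule and tool names are illustrative:

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("study_buddy")


def handle(message: str) -> str:
    """Route a message, logging each step as a 'thought trace'."""
    log.info("received: %r", message)
    # Illustrative routing rule: quiz requests vs. questions.
    tool = "quiz_generator" if "quiz" in message.lower() else "rag_tool"
    log.info("routing to: %s", tool)
    return tool
```

Reading these traces is what makes debugging an agent feel like watching it think: you see exactly which branch it took and why, instead of only its final answer.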
Demo
You can see the agent in action in this video demo: https://youtu.be/FJjsLvuQJzI?si=IPlp8CdOlx-QhYwa
Conclusion
Building the AI Study Buddy showed me that we can use AI to make education more accessible and engaging. I'm excited to keep refining this agent and adding more tools in the future!