*Real-world lessons from building a Multi-Agent System for the Google AI Agents Intensive.*
I’ve been working with Machine Learning and Deep Learning for a while, but "AI Agents" always felt a bit abstract. I signed up for the 5-Day AI Agents Intensive with Google and Kaggle to see if they were actually useful or just another buzzword.
For my capstone, I decided to tackle a problem I know is genuinely hard: University Timetabling. It’s a classic constraint-satisfaction problem. You have courses, professors, rooms, and time slots. Everything has to fit perfectly.
I thought, I'll just feed the data to Gemini and let it figure it out.
I was wrong. Here is exactly how I failed, and how I finally got it working.
Attempt 1: The Monolithic Agent
My first approach was to dump all my JSON data (courses, rooms, professor preferences) into the system prompt of a single agent and simply tell it to schedule everything.
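For the curious, version 1 boiled down to something like this. This is a minimal sketch, not my actual capstone code; the file names, prompt wording, and model string are illustrative, and it uses the plain `google-generativeai` SDK:

```python
import json
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# Illustrative file names -- the real data layout may differ.
data = {name: json.load(open(f"{name}.json"))
        for name in ("courses", "rooms", "professor_preferences")}

# One giant system prompt carrying the entire constraint problem.
system_prompt = (
    "You are a university timetabler. Here is ALL the data:\n"
    + json.dumps(data, indent=2)
    + "\nAssign every course a room and a time slot with no conflicts. Output JSON."
)

model = genai.GenerativeModel("gemini-2.5-flash-lite", system_instruction=system_prompt)
print(model.generate_content("Schedule everything.").text)
```

One turn, no tools, no verification: every constraint has to be satisfied inside a single generation.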
The result was a mess. The agent didn't just fail; it hallucinated.
- It invented time slots that didn't exist.
- It "successfully" booked `CS_101` (which needs 60 seats) into `Lab_201` (which has 25 seats).
- It claimed there were conflicts when the schedule was empty.
I realized pretty quickly that LLMs are reasoning engines, not magic wands. You can't just throw a massive logic puzzle at them in a single turn and expect a perfect output.
The Fix: Sequential Loops (Version 2)
I had to stop thinking like a prompt engineer and start thinking like a software engineer. The problem wasn't the model; it was the control flow. Trying to solve 50 conflicts simultaneously is hard for a human, let alone an LLM.
I decided to re-architect the system using the Google Agent Development Kit (ADK) to implement a Loop Agent pattern.
Instead of saying "Schedule these 50 courses", I built a sequential pipeline that processes the problem step-by-step (roughly sketched in code after the list):
- Bidding Agent: Interviews departments to get a "wish list" of schedules.
- Processing Loop: Pops one course request from the stack.
- Validation: Tries to book it using strict Python tools.
- Negotiation: If it fails (conflict), it negotiates an alternative just for that one course.
- Repeat.
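In ADK terms, the pipeline's shape looks roughly like the sketch below. The agent names, instructions, and `max_iterations` value are illustrative, not Agora's exact code, so double-check the current ADK API before reusing it:

```python
from google.adk.agents import Agent, LoopAgent, SequentialAgent

MODEL = "gemini-2.5-flash-lite"

def book_room(course: str, room: str, time_slot: str) -> dict:
    """Stub -- the real tool validates capacity and conflicts (sketched later)."""
    return {"status": "success"}

# Step 1: collect each department's wish list into session state.
bidding_agent = Agent(
    name="bidding_agent",
    model=MODEL,
    instruction="Interview each department and write its schedule wish list to state.",
)

# Steps 2-4: handle exactly one course per pass -- validate, then negotiate on failure.
scheduler = Agent(
    name="scheduler",
    model=MODEL,
    instruction=(
        "Take ONE pending course. Try to book it with the book_room tool. "
        "If booking fails, negotiate an alternative slot for that single "
        "course, then stop."
    ),
    tools=[book_room],
)

# Step 5: repeat. max_iterations is a safety valve; a sub-agent can also
# end the loop early by escalating once the pending list is empty.
processing_loop = LoopAgent(
    name="processing_loop",
    sub_agents=[scheduler],
    max_iterations=50,
)

root_agent = SequentialAgent(
    name="timetabling_pipeline",
    sub_agents=[bidding_agent, processing_loop],
)
```

The key design choice is that the loop body only ever sees one course, so the model's context stays small and the "solve 50 conflicts at once" failure mode never arises.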
By forcing the agent to focus on one constraint at a time, the hallucinations stopped. The system actually started reasoning. When it couldn't book `CS_101`, it correctly reported: "No room exists with both 60 seats and computers", rather than just forcing a bad booking.
What I Actually Learned
Going through this process—building, breaking, and rebuilding—taught me three specific things about Agentic AI:
- **Tools are Guardrails:** I wrote Python functions like `book_room()` to act as the interface to my database. These tools threw errors if the agent tried to do something illegal (like double-booking), which forced the LLM to stay grounded in reality. (There's a sketch after this list.)
- **State Management is Hard:** The trickiest part wasn't the AI; it was managing the session state. Passing the list of "unbooked courses" cleanly from one agent to the next in the loop was the key engineering challenge.
- **Architecture > Intelligence:** A team of simple, specialized agents (even using a lighter model like Gemini 2.5 Flash Lite) outperforms one massive, complex agent every time.
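Here's a minimal sketch of what I mean by guardrail tools and shared state, as promised above. It's illustrative, not Agora's actual code: the `ROOMS` layout, field names, and the `pending_courses` state key are invented for the example, and I return structured error dicts (a common ADK idiom) where my real tools threw exceptions. Either way, the tool owns the rules, not the model:

```python
from google.adk.tools import ToolContext  # assumption: import path per current ADK

# Toy in-memory "database" -- field names are illustrative.
ROOMS = {"Lab_201": {"capacity": 25}, "Hall_A": {"capacity": 120}}
SCHEDULE: dict[tuple[str, str], str] = {}  # (room, time_slot) -> course


def book_room(course: str, room: str, time_slot: str, seats_needed: int) -> dict:
    """Guardrail tool: refuses illegal bookings with a reason the LLM can act on."""
    if room not in ROOMS:
        return {"status": "error", "reason": f"Room {room} does not exist."}
    if ROOMS[room]["capacity"] < seats_needed:
        return {"status": "error",
                "reason": f"{room} has {ROOMS[room]['capacity']} seats; "
                          f"{course} needs {seats_needed}."}
    if (room, time_slot) in SCHEDULE:
        return {"status": "error",
                "reason": f"{room} is already booked at {time_slot} "
                          f"by {SCHEDULE[(room, time_slot)]}."}
    SCHEDULE[(room, time_slot)] = course
    return {"status": "success", "booking": f"{course} -> {room} @ {time_slot}"}


def pop_pending_course(tool_context: ToolContext) -> dict:
    """Hands the loop exactly ONE unbooked course from shared session state."""
    pending = tool_context.state.get("pending_courses", [])
    if not pending:
        return {"status": "done"}  # nothing left -- the loop can exit
    tool_context.state["pending_courses"] = pending[1:]
    return {"status": "ok", "course": pending[0]}
```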
Final Thoughts
"Agora" (my project) isn't perfect, but it works. It takes raw requirements and autonomously negotiates a valid schedule.
🔗 Links
- Kaggle Notebook: https://www.kaggle.com/code/romannihal/ai-agent-hackathon-v2
