From "What's an Agent?" to Building My Own Concierge System
Honestly, I Didn't Get It at First
I signed up for this course because everyone kept talking about AI agents and I felt like I was missing something. I'd played around with ChatGPT, built a few basic chatbots, tried some API stuff. But agents? I thought it was just marketing speak for "chatbot that can do more things."
Turns out I was completely wrong, and it took me until like Day 3 to actually understand why.
The difference hit me when I was working through the tool-calling lab. It wasn't about the agent responding to me anymore—it was about the agent deciding what to do next. That's when it clicked. This thing was actually making choices, not just pattern matching my input to some output.
The One Concept That Actually Changed How I Think
The observe-reason-act loop.
Before this course, my brain worked in straight lines with AI: you give it input, it processes, you get output. Done.
But agents don't work that way. They look at what's happening (observe), think about what they should do (reason), do something (act), and then start the whole cycle again based on what happened. It's iterative. It's adaptive. It's honestly kind of wild when you see it working.
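If you strip away the framework, the loop is small enough to sketch in a few lines of plain Python. The llm_decide and run_tool functions below are toy stand-ins for a real model call and a real tool dispatcher (they're made up for illustration), but the shape of the cycle is the same:

```python
# Toy observe-reason-act loop. llm_decide and run_tool are stand-ins for a
# real model call and a real tool dispatcher, not actual library functions.

def llm_decide(observations: list[str]) -> dict:
    # Pretend "model": finish as soon as we've looked something up once.
    if any("lookup ->" in o for o in observations):
        return {"action": "finish", "answer": observations[-1]}
    return {"action": "lookup", "args": {"query": observations[0]}}

def run_tool(name: str, args: dict) -> str:
    return f"result of {name}({args})"  # stub tool

def agent_loop(goal: str, max_steps: int = 5) -> str:
    observations = [f"Goal: {goal}"]                              # observe
    for _ in range(max_steps):
        decision = llm_decide(observations)                       # reason
        if decision["action"] == "finish":
            return decision["answer"]
        result = run_tool(decision["action"], decision["args"])   # act
        observations.append(f"{decision['action']} -> {result}")  # observe again
    return "Stopped after max_steps without finishing."

print(agent_loop("plan my afternoon in London"))
```

The whole trick is that the model's output feeds back into its own next input, so the plan can change mid-flight instead of being fixed up front.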
This wasn't just some technical detail I memorized. It changed how I approach problems now. Instead of thinking "what prompt do I need to write," I think "what can this agent figure out on its own if I give it the right tools?"
When The Labs Made Everything Real
Reading documentation is one thing. Actually building stuff is completely different.
Day 2's tool-calling lab was where things got interesting for me. I'd read about function calling before but never really understood it. Watching an agent decide which tool to use, figure out what parameters to pass, handle the response, and then keep going based on that... it felt almost magical. Until I looked under the hood and realized it was just really good engineering.
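The "under the hood" part is less magical than it sounds: the model emits a tool name plus arguments, your code looks the function up, calls it, and hands the result back as the next observation. Roughly like this toy sketch (get_weather and the fake model output are invented for illustration, not part of any real API):

```python
# Toy version of the tool-calling plumbing: the model picks a tool and
# arguments, the host code dispatches the call and returns the result.

def get_weather(city: str) -> str:
    return f"Sunny, 18C in {city}"  # stub; a real tool would hit an API

TOOLS = {"get_weather": get_weather}

# Pretend this dict is what the model emitted.
model_output = {"name": "get_weather", "arguments": {"city": "London"}}

tool = TOOLS.get(model_output["name"])
if tool is None:
    result = f"Unknown tool: {model_output['name']}"  # guard against hallucinated calls
else:
    result = tool(**model_output["arguments"])

print(result)  # this string goes back to the model as its next observation
```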
The multi-agent lab on Day 4 kind of blew my mind though. Seeing multiple agents work together, each handling their specialty, reminded me more of how actual teams work than how software usually works. One agent would do its thing, pass info to another agent, that one would do its thing. No single agent had to be perfect at everything.
The Stuff I Messed Up (Because That's Where You Actually Learn)
I'm putting this in because honestly, my failures taught me more than my successes.
First attempt: gave my agent way too much freedom. I threw like 10 different tools at it with basically no guardrails, and it started hallucinating function calls that didn't exist. Learned real quick that autonomy without boundaries isn't helpful, it's just chaos.
Second attempt: went too far the other way. Made the agent ask permission for everything. Defeated the whole point of having an autonomous agent in the first place. Found out the hard way you need to trust it within well-defined limits.
The thing that tripped me up the most? Managing context over multiple turns. My agent would just... forget stuff after a few exchanges. Had to figure out summarization, state management, and basically be ruthless about what information actually matters vs what's just noise.
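The pattern I ended up leaning on was boring but effective: keep the last few turns verbatim and fold everything older into a running summary. Here's the rough shape of it as a sketch, not my actual code, with summarize as a stand-in for another model call:

```python
# Sketch of bounded multi-turn context: recent turns stay verbatim,
# older turns get folded into a summary. summarize() is a toy stand-in
# for a summarization model call.

def summarize(old_summary: str, dropped_turns: list[str]) -> str:
    return (old_summary + " | " + " ".join(dropped_turns)).strip(" |")

class ConversationState:
    def __init__(self, keep_last: int = 4):
        self.keep_last = keep_last
        self.summary = ""
        self.turns: list[str] = []

    def add(self, turn: str) -> None:
        self.turns.append(turn)
        if len(self.turns) > self.keep_last:
            dropped = self.turns[:-self.keep_last]          # oldest turns fall out
            self.summary = summarize(self.summary, dropped)  # ...into the summary
            self.turns = self.turns[-self.keep_last:]

    def context(self) -> str:
        return f"Summary so far: {self.summary}\nRecent turns:\n" + "\n".join(self.turns)
```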
What Changed in My Head
Week 1 (Before):
- Thought AI was just fancy autocomplete
- Autonomy meant "doesn't need human input"
- Intelligence = getting the right answer
Week 2 (After):
- AI can actually pursue goals through interaction
- Autonomy means planning, executing, checking results, and adapting
- Intelligence = breaking down fuzzy objectives into concrete actions while dealing with uncertainty
I don't just build differently now—I think differently. When I look at any tool or API, I automatically ask myself: "Could an agent use this? What would it need to know? How would it know if it succeeded or failed?"
My Capstone: The Itinerary Generator That Actually Works
So here's what I built: a concierge agent that plans your afternoon for you.
The Problem I Was Trying to Solve:
You know that feeling when you want to do something but you waste 30 minutes bouncing between Google Maps, TripAdvisor, and random blogs trying to figure out a plan? That's annoying. I wanted something where I could just say "plan my afternoon" and get back an actual workable itinerary.
What I Built:
A multi-agent system using Google ADK that:
- Reads your preferences (stored profile)
- Plans activities based on your interests
- Checks if everything is actually feasible (time-wise, location-wise)
- Refines and cleans up the output
I used Gemini 2.5 Flash as the model, with custom tools I built:
- current_time_checker(city) – figures out local time
- location_feasibility_check(place_name) – checks if travel time makes sense
The system uses InMemorySessionService, InMemoryMemoryService, and InMemoryArtifactService to handle state and context across the whole flow.
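To make that concrete, here's roughly how the tools and services wire together with the google-adk Python package. The tool bodies and instruction text are simplified, I've collapsed everything into a single root agent for brevity, and constructor details can shift between ADK versions, so treat this as a sketch rather than my actual capstone code:

```python
from google.adk.agents import Agent
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from google.adk.memory import InMemoryMemoryService
from google.adk.artifacts import InMemoryArtifactService

# Custom tools are plain Python functions; ADK reads the signature and
# docstring to describe them to the model. Bodies here are simplified stubs.
def current_time_checker(city: str) -> dict:
    """Return the current local time for a city."""
    return {"city": city, "local_time": "12:05"}  # stub; the real tool does a timezone lookup

def location_feasibility_check(place_name: str) -> dict:
    """Check whether travel time to a place fits the plan."""
    return {"place": place_name, "feasible": True, "travel_minutes": 30}  # mocked check

# One root agent here for brevity; the actual system splits the work into
# planning, feasibility, and refinement stages.
concierge = Agent(
    name="concierge",
    model="gemini-2.5-flash",
    instruction="Plan an afternoon itinerary from the user's stored preferences.",
    tools=[current_time_checker, location_feasibility_check],
)

# The Runner ties the agent to the three in-memory services that carry
# session state, memory, and artifacts across the whole flow.
runner = Runner(
    agent=concierge,
    app_name="concierge_app",
    session_service=InMemorySessionService(),
    memory_service=InMemoryMemoryService(),
    artifact_service=InMemoryArtifactService(),
)
```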
How It Actually Works:
You give it a prompt like "plan my afternoon in London" and it:
- Checks your profile (interests: history, coffee, art; starts after 12 PM; staying at Central Hotel)
- Generates initial ideas
- Runs feasibility checks
- Outputs a clean itinerary
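That stage breakdown maps pretty naturally onto ADK's SequentialAgent, which runs sub-agents in order and lets each stage read what the previous one wrote into session state. The instructions and output_key names below are illustrative rather than my exact code, and the {key} templating inside instructions depends on your ADK version:

```python
from google.adk.agents import Agent, SequentialAgent

def location_feasibility_check(place_name: str) -> dict:
    """Check whether travel time to a place fits the plan (mocked)."""
    return {"place": place_name, "feasible": True, "travel_minutes": 30}

# Each stage is its own small agent; output_key saves its answer into
# session state so the next stage can reference it from its instruction.
idea_generator = Agent(
    name="idea_generator",
    model="gemini-2.5-flash",
    instruction="Suggest afternoon activities for a user who likes history, "
                "coffee, and art, starts after 12 PM, and stays at Central Hotel.",
    output_key="draft_ideas",
)

feasibility_checker = Agent(
    name="feasibility_checker",
    model="gemini-2.5-flash",
    instruction="Check the ideas in {draft_ideas} for timing and travel feasibility.",
    tools=[location_feasibility_check],
    output_key="checked_plan",
)

refiner = Agent(
    name="refiner",
    model="gemini-2.5-flash",
    instruction="Rewrite {checked_plan} as a clean, time-ordered itinerary.",
)

# SequentialAgent runs the sub-agents in order over the shared session state.
concierge_pipeline = SequentialAgent(
    name="concierge_pipeline",
    sub_agents=[idea_generator, feasibility_checker, refiner],
)
```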
Example output:
1:00 PM – 3:30 PM: Tower of London
Historic visit, approx. 30 min travel time.
3:45 PM – 4:45 PM: Monmouth Coffee Company
Coffee break, approx. 30 min travel time.
5:00 PM – 7:00 PM: National Gallery
Art viewing, approx. 30 min travel time.
What Went Wrong (A Lot):
The ADK documentation was... let's say "incomplete." Half the examples online used old API versions that don't work anymore. I spent hours just trying to figure out why imports were failing. Had to dig through the actual module code to figure out what functions even existed.
The Runner setup was confusing. It needs an app plus multiple services (session, memory, artifacts). Took forever to debug because error messages weren't exactly helpful.
Also, Gemini sometimes returns all this metadata along with the actual text—stuff like thought_signature and function_call objects. Had to filter through all that to extract just the clean text output.
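Extracting the clean text ends up being a small filtering pass over the response parts, keeping only the ones that actually carry text. Something like this sketch, where the attribute names follow the google.genai content types and `events` is whatever the runner yields (adjust for your ADK version):

```python
# Pull plain text out of events whose content parts mix text with
# function_call and other metadata.

def extract_text(events) -> str:
    chunks = []
    for event in events:
        content = getattr(event, "content", None)
        if content is None or not content.parts:
            continue
        for part in content.parts:
            # Skip function_call / thought parts; keep only plain text.
            if getattr(part, "text", None):
                chunks.append(part.text)
    return "\n".join(chunks)
```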
What I Actually Learned:
- How to structure multi-step agent reasoning
- Tool integration (and why tools need to return useful info, not just success/failure)
- Session and state management across multiple turns
- Debugging skills I didn't know I needed
The project isn't perfect. The feasibility checks are mocked. The tools are simple. But it works, and more importantly, it demonstrates the core concepts: structured reasoning, custom tools, modular design, stateful execution.
I could scale this into something way more useful with real APIs—maps, weather, public transit, restaurant reservations. The architecture is there.
What Actually Surprised Me
The safety and alignment stuff hit different after building my own agent.
Before the course, I thought safety concerns were kind of abstract. After building an agent that can actually do things—call APIs, make decisions, execute actions—I get it now. If you can't trust your agent, it doesn't matter how sophisticated it is.
My itinerary agent is low-stakes. But imagine an agent that can send emails, make purchases, or modify databases. You need boundaries. You need testing. You need fail-safes. The most capable agent in the world is useless if you're afraid to let it run.
This tension between "how much can it do" vs "how much should it do" is basically the entire challenge of building agents, and this course gave me practical ways to think about it.
Advice If You're Taking This Course
Start stupidly simple. Don't try to build AGI on Day 1. Build an agent that does one thing okay, then add to it.
Expect weird failures. Agents break in creative ways you won't predict. Each weird failure teaches you something about the gap between what you think should happen and what actually happens.
The observe-reason-act loop is everything. If you really understand this cycle, you can build anything. Everything else is just implementation.
Don't skip the multi-agent stuff. It seems complicated, and it kind of is, but that's where the real power is. Sometimes the answer isn't one smarter agent—it's three simpler agents that specialize.
Debug with patience. The ADK has rough edges. Version mismatches, unclear docs, confusing error messages. You'll spend time on stuff that feels like it should just work. That's normal.
What I'm Taking Away From This
I came in thinking I'd learn some new AI techniques. I'm leaving with a completely different framework for thinking about what's possible.
Agents aren't just better chatbots. They're a different category of thing entirely. They can observe, decide, act, and learn from what happens. That's genuinely new, and the implications are kind of staggering.
We're not building tools that wait for instructions anymore. We're building systems that can pursue objectives, adapt to changing situations, and work alongside us (or sometimes instead of us).
That's powerful. That's also why building them responsibly matters so much.
This course didn't just teach me how to build agents. It taught me why certain design choices matter, what can go wrong, and how to think about the trade-offs between capability and control.
For anyone on the fence about taking this course: do it. The future of AI isn't going to be bigger models that answer questions better. It's going to be agents that can actually do things in the world. This course is how you learn to build them.
If you're curious about the technical implementation, you can check out my capstone project on Kaggle: Concierge Multi-Agent Itinerary Generator. The code's messy in places (because real learning is messy), but it works, and you can see exactly how I structured the agent workflow.
Thanks to Google and Kaggle for putting this together, and to everyone in the Discord who helped debug my many, many problems. This was worth the time.