This is a submission for the Google AI Agents Writing Challenge: Learning Reflections
How a five-day Google x Kaggle intensive reshaped my understanding of agents, architecture, and intelligent system design.
I walked into the Google x Kaggle AI Agents Intensive with the belief that I understood intelligent systems. I had already built RAG workflows, written long prompts, and integrated APIs. But after five days, everything I thought I knew about designing agents was turned upside down.
This reflection is about that transformation—and how it shaped the "Wellness for Everyone" coach I built as my capstone project.
Before the Intensive: Stuck at Level 0
I began the course with the assumption that “agent development = complex prompting.” In the terminology of the course's Agent Taxonomy, I was operating at Level 0: The Core Reasoning System.
I had naively believed that crafting a single, powerful prompt could unlock everything. But the results were… memorable, for all the wrong reasons:
“Your plan for Monday is 30 minutes of jogging.”
“Your plan for Tuesday is 30 minutes of jogging.”
“Your plan for Wednesday is 30 minutes of jogging.”
At that moment, I wasn’t sure whether I had built a Planner—or a broken copy-paste machine with a gym membership. It felt as if the system repeated my commands without ever truly understanding them, as if the model were on autopilot and indifferent to context.
My agent didn’t feel like an agent—it felt like a teenager arguing with me. Polite enough to pretend it was listening; rebellious enough to do whatever it wanted.
It hit me like a wave—prompts do not create agents. Architecture creates agents.
The Breakthroughs: Climbing the Ladder of Agency
💡 Insight #1 — From "Super-Prompt" to "Level 3 Collaboration"
Day 1 introduced the Agent Taxonomy, which completely changed my approach. I realized that forcing a single agent to handle conflicting constraints leads to cognitive overload.
To build a robust coach, I ascended to Level 3: The Collaborative Multi-Agent System. I moved from a single "Super Agent" to a comprehensive "Team of Specialists" organized in layers:
- The Core Logic Layer: A Planner orchestrates the strategy, supported by a Calculator for metrics and an Evaluator to enforce quality control.
- The Specialist Layer: A Training Drafter and Nutrition Drafter generate domain-specific plans.
- The Delivery Layer: An Aggregator compiles the drafts, and a Presenter ensures the final interface is user-friendly.
- The Support Layer: A Daily Helper handles real-time queries, and a Weekly Reviewer manages long-term adaptation.
Together, these layers transform free-form text generation into a controlled, inspectable system.
By treating "Agents as Tools", I delegated specific domains to specific sub-agents. This division of labor turned a chaotic text generation task into a deterministic engineering workflow. This shift—from prompt crafting to system design—was the moment I stopped building demos and started building products.
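To make the "Agents as Tools" idea concrete, here is a minimal sketch of the delegation pattern. All names (`UserState`, `training_drafter`, the injury field) are illustrative stand-ins, not the capstone's actual code; the real specialists would call an LLM rather than hard-coded rules.

```python
from dataclasses import dataclass, field

@dataclass
class UserState:
    goal: str
    injuries: list = field(default_factory=list)

def training_drafter(state: UserState) -> dict:
    # Specialist: drafts workouts while respecting injury constraints.
    exercises = ["jogging", "squats", "swimming"]
    safe = [e for e in exercises
            if not (e == "squats" and "left_knee" in state.injuries)]
    return {"training": safe}

def nutrition_drafter(state: UserState) -> dict:
    # Specialist: picks a dietary focus based on the stated goal.
    focus = "protein" if state.goal == "strength" else "balanced"
    return {"nutrition": focus}

def planner(state: UserState) -> dict:
    # Core logic layer: delegates to specialists, then aggregates drafts.
    plan = {}
    for tool in (training_drafter, nutrition_drafter):
        plan.update(tool(state))
    return plan

state = UserState(goal="strength", injuries=["left_knee"])
plan = planner(state)
print(plan)  # squats filtered out because of the knee injury
```

The point is the shape, not the rules: each specialist owns one domain, and the Planner only orchestrates, so no single prompt has to juggle every constraint at once.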
🧠 Insight #2 — Context Engineering is the "Mise en Place"
Day 3 taught me that Context Engineering is the foundation of intelligent behavior.
Context engineering is like mise en place in the kitchen—everything must be organized and ready before cooking. In my capstone, I implemented Memory Consolidation. My agent doesn't re-read chat logs; it reads a consolidated User State (e.g., current_injury: "left_knee"). This ensures consistency: even if the user mentioned an injury three days ago, the system "remembers" it as a current constraint, not just a past log entry.
🎯 Insight #3 — "Trajectory Is The Truth"
Day 4 taught me that "The true measure of an agent's quality... lies in its entire decision-making process".
I moved from "Black Box" testing to "Glass Box" Observability. I built an Auto-Evaluation Pipeline (LLM-as-a-Judge) that traces the reasoning steps. Instead of just asking "Did it give a plan?", my system asks: "Did the Planner check the injury list before scheduling squats?" By evaluating the trajectory, I can catch logic failures before they ever reach the user.
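Here is a rule-based stand-in for that judge, just to show the shape of trajectory evaluation. The step names are hypothetical; the actual pipeline prompts an LLM to grade the trace instead of matching strings.

```python
def judge_trajectory(trace: list) -> dict:
    """Pass only if the injury check happens before squats are scheduled."""
    try:
        check = trace.index("check_injury_list")
    except ValueError:
        return {"pass": False, "reason": "injury list never checked"}
    squat_steps = [i for i, step in enumerate(trace) if step == "schedule_squats"]
    if any(i < check for i in squat_steps):
        return {"pass": False, "reason": "squats scheduled before injury check"}
    return {"pass": True, "reason": "ok"}

good = ["load_user_state", "check_injury_list", "schedule_squats"]
bad  = ["load_user_state", "schedule_squats", "check_injury_list"]
print(judge_trajectory(good))  # passes: correct ordering
print(judge_trajectory(bad))   # logic failure caught before the user sees it
```

Both traces produce a plan, so output-only testing would pass them equally; only inspecting the trajectory reveals that the second one is unsafe.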
The Capstone: Engineering a Level 3 System
My capstone, Wellness for Everyone, is an accessible AI Coach designed to provide safe, personalized guidance at zero cost. To build it, I applied these insights to solve three real engineering challenges.

Figure 1: The system architecture, featuring a Sequential Backbone, Parallel Drafters, and a Triple-Tier Fallback mechanism.
3.1 Challenge A — Taming Chaos with Structured Workflows
Goal: Ensure logical correctness, low latency, and graceful recovery in a multi-agent pipeline.
Managing multiple agents requires strict orchestration. I designed a hybrid architecture combining three distinct workflows:
- Sequential Backbone: I used a Sequential Workflow (Calculator → Planner → Reviewer) to ensure logical dependency—just like how a recipe requires you to prep ingredients before cooking.
- Parallel Drafting: Inside the Planner, I used a Parallel Workflow, enabling the Training Drafter and Nutrition Drafter to generate plans simultaneously. This significantly reduced latency without sacrificing depth.
- Iterative Loops (Refinement): I implemented a Loop Workflow for quality control. If a draft fails validation, the Evaluator Agent rejects it and loops back to request a rewrite, preventing "one-shot" failures.
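The three workflows compose as sketched below. This is a toy rendering with hard-coded drafters (the real agents are LLM-backed), but the control flow mirrors the design: a sequential backbone, a thread pool for the parallel drafters, and a bounded retry loop around the review gate.

```python
import concurrent.futures

def calculator(profile: dict) -> dict:
    # Sequential step 1: derive metrics the drafters depend on.
    return {**profile, "daily_kcal": 2200}

def training_drafter(ctx: dict) -> str:
    return "3x weekly jog"

def nutrition_drafter(ctx: dict) -> str:
    return f"{ctx['daily_kcal']} kcal, high protein"

def planner(ctx: dict) -> dict:
    # Parallel workflow: both drafters run at the same time.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        training = pool.submit(training_drafter, ctx)
        nutrition = pool.submit(nutrition_drafter, ctx)
        return {"training": training.result(), "nutrition": nutrition.result()}

def reviewer(plan: dict) -> bool:
    # Quality gate: a plan must contain both sections.
    return "training" in plan and "nutrition" in plan

def run_pipeline(profile: dict, max_attempts: int = 3) -> dict:
    ctx = calculator(profile)            # sequential step 1
    for _ in range(max_attempts):        # loop workflow: retry on rejection
        plan = planner(ctx)              # sequential step 2 (parallel inside)
        if reviewer(plan):               # sequential step 3
            return plan
    raise RuntimeError("no valid plan after retries")

plan = run_pipeline({"age": 30})
print(plan)
```

Note the dependency direction: the drafters can run in parallel with each other, but neither can run before the Calculator, which is exactly why the backbone itself stays sequential.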
3.2 Challenge B — Grounding with A2A, APIs & MCP
Goal: Prevent hallucinations by anchoring agents to verified external knowledge.
A key challenge was connecting the agent to reality. An ungrounded agent hallucinates; a grounded agent looks up facts.
- A2A Protocol: I used the Agent-to-Agent (A2A) Protocol to connect my Daily Helper to an external Organization Referral Microservice. This allows the agent to fetch verified contact lists from non-profit organizations for crisis support.
- Open APIs: I integrated Weather and Nutrition APIs. The agent checks the weather to avoid scheduling runs during thunderstorms, and queries the Nutrition API to verify calorie counts for specific foods (e.g., "avocado"), ensuring diet plans are mathematically accurate.
- Local MCP: Inspired by the Model Context Protocol (MCP), I built a local MCP-style adapter to securely access verified safety guidelines (like knee_pain.md), ensuring medical advice is grounded in data.
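The local-adapter idea can be sketched in a few lines. The directory layout and file names here are assumptions for illustration; the key design choice is that when no verified file exists, the tool returns an explicit sentinel instead of letting the model improvise medical advice.

```python
import pathlib

GUIDELINE_DIR = pathlib.Path("guidelines")  # hypothetical local store

def fetch_guideline(topic: str) -> str:
    """Return vetted guidance for a topic, or an explicit 'not found'."""
    path = GUIDELINE_DIR / f"{topic}.md"
    if not path.is_file():
        return "NO_VERIFIED_GUIDELINE"  # the agent must admit it doesn't know
    return path.read_text(encoding="utf-8")

# Demo setup only: write one guideline file.
GUIDELINE_DIR.mkdir(exist_ok=True)
(GUIDELINE_DIR / "knee_pain.md").write_text(
    "Avoid deep squats; prefer swimming or cycling.", encoding="utf-8")

print(fetch_guideline("knee_pain"))   # grounded answer read from the file
print(fetch_guideline("elbow_pain"))  # sentinel: no hallucinated advice
```

Grounding here is a contract, not a prompt: the agent either quotes the vetted file or says it has nothing verified to quote.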
3.3 Challenge C — Engineering for Quality (The Day 4 Pillars)
Goal: Guarantee safety, robustness, and usefulness beyond one-shot success.
Day 4 taught me that Agent Quality isn't just about code; it's about Effectiveness, Robustness, and Safety. I engineered specific mechanisms for each:
- Safety (Context-Aware Guardrails): Standard safety filters often naively block keywords like 'pain', rendering a wellness coach useless. I built a specialized Guardrail Agent with a Semantic Whitelist. It intelligently distinguishes between harmful intent (block) and medical context (e.g., "knee pain" -> allow), ensuring users get help without triggering false refusals.
- Robustness (The Rescue Engine): To handle stochastic failures, I built a Rescue Engine. If the Tier-1 drafting loop fails to produce a complete plan (e.g., missing the "Nutrition" section), the Rescue Engine detects this specific gap and force-generates the missing component. The user never sees an incomplete response.
- Effectiveness (HITL Refinement): Quality means alignment with user intent. I implemented a Human-in-the-Loop (HITL) workflow where the user reviews the draft (Plan V1), provides feedback (e.g., "I want to rest on Monday"), and the agent regenerates a refined version (Plan V2). This ensures the final output is not just "safe," but truly "helpful."
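The Semantic Whitelist from the Safety bullet can be illustrated as follows. The phrase lists are hypothetical placeholders (the real Guardrail Agent classifies intent semantically with a model, not by substring matching), but the three-way outcome is the point: allow, block, or escalate rather than a blanket keyword ban.

```python
# Illustrative context markers; a production guardrail would classify
# intent with a model rather than match fixed phrases.
MEDICAL_CONTEXTS = ("knee pain", "back pain", "muscle pain", "joint pain")
HARM_MARKERS = ("cause pain to", "inflict pain")

def guardrail(message: str) -> str:
    text = message.lower()
    if any(marker in text for marker in HARM_MARKERS):
        return "block"      # harmful intent: refuse outright
    if "pain" in text and any(ctx in text for ctx in MEDICAL_CONTEXTS):
        return "allow"      # medical context: route to the coach
    if "pain" in text:
        return "escalate"   # ambiguous: ask a clarifying question
    return "allow"

print(guardrail("My knee pain is worse after squats"))  # allow
print(guardrail("How do I cause pain to someone?"))     # block
```

A naive filter would refuse both messages above; the context-aware version keeps the coach usable for exactly the users who mention pain most often.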
Final Reflections: The Architect Mindset
I entered this course with the simple goal of building a chatbot, but I left with a mission: to use agent-based AI to democratize wellness coaching.
The course completely transformed my perspective—shifting my focus from building individual features to designing scalable, resilient AI systems. I no longer just ask "What should I say to the model?" I ask: How do I architect a system with Observability, Safety Guardrails, and Shared Context?
The AI Agents Intensive didn’t just teach me how to use agents—it taught me how to architect them. This architect mindset is what I plan to carry forward—whether I’m building wellness systems, developer tools, or the next generation of agent-based products.
📎 Appendix
Kaggle Notebook: https://www.kaggle.com/code/maggiezhao11/wellness-for-everyone-v2