Chloe Lian

It's a Study Date! Reflections on the AI Agents Intensive Course

This is a submission for the Google AI Agents Writing Challenge: Learning Reflections

Signing up for the 5-Day AI Agents Intensive Course offered by Google and Kaggle in November 2025 felt like a natural extension of my ongoing exploration into AI systems. Needless to say, I had to rope in my husband to join me for midnight study sessions - he has also been experimenting with AI agents for customer support as Head of IT Ops and Transformation Lead at his company.

As a business analyst in a company that is scaling up, I've long been immersed in designing infrastructures that push the boundaries of reasoning and autonomy. This course arrived at a perfect juncture in my career, when I was grappling with how to scale experimental prototypes into reliable, enterprise-grade solutions. My husband and I spent all 5 days intensively working through the course materials, discussing what we were learning, comparing notes, and brainstorming how we could apply these innovations in our work (and our personal projects).

The whitepapers provided theoretical foundations, the podcasts distilled key ideas, the codelabs offered hands-on immersion, and the livestreams delivered real-time expert dialogue. Together, these created a learning ecosystem that not only deepened my technical acumen but also fuelled profound personal reflections on AI's role in business transformation.

In this reflection, I write about my key takeaways, the concepts that resonated most deeply, how my understanding of AI agents has evolved, and a sneak peek at my capstone project, Rubriq. I'll also draw connections to my prior familiarity with the Agent Development Kit (ADK) through Google Cloud Skills Boost labs, and relate these insights to my work as a business analyst crafting agentic workflows on Google Cloud.

The course kicked off on Day 1 with "Introduction to Agents". The whitepaper mapped the evolution toward autonomous agents, introducing a taxonomy that categorizes systems from Level 0 (core reasoning) to Level 4 (self-evolving architectures). The need for agent operations (ops) to ensure reliability and governance, alongside interoperability and security through constrained policies, was made very clear. The codelabs had us construct our first agent and multi-agent system using Google ADK powered by Gemini, integrating tools like Google Search for real-time data. This hands-on element built directly on my existing knowledge from Google Cloud Skills Boost labs, where I'd experimented with ADK in Google Cloud. In those labs, I had already learned ADK's modular structure for rapid prototyping, but the course elevated it further and sparked more ideas for my work. For the Kaggle course, I used Google ADK directly in the Kaggle environment.
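To give a flavour of what Day 1 felt like, here's a minimal sketch of a search-grounded ADK agent. The agent name, model, and instruction are my own choices, not the codelab's exact code:

```python
# A minimal Day 1-style agent in Google ADK: Gemini plus the built-in
# Google Search tool for real-time grounding. Names and instruction
# are illustrative assumptions.
from google.adk.agents import Agent
from google.adk.tools import google_search

root_agent = Agent(
    name="search_assistant",
    model="gemini-2.0-flash",
    description="Answers questions with live results from Google Search.",
    instruction="Be concise and ground your answers in search results.",
    tools=[google_search],  # built-in ADK tool for real-time data
)
```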

The livestream was where the magic happened - this was my absolute favorite part of the course. It transformed abstract concepts into vivid, applicable strategies. These sessions were extremely engaging and rich in cutting-edge content; each day, I found myself eagerly awaiting the next episode, as if following a soap opera. For me, hearing the discussion about agent architectures in real time closed the gap between theory and practice; it reminded me of troubleshooting agent flows in Google Cloud, where I've built systems to automate email classification and data handling. The guests' insights on interoperability challenges echoed my experiences with integrating disparate systems and data sources, evolving my view of agents from siloed tools to interconnected ecosystems. The well-curated panel of speakers really connected concepts to actual workflows, giving us many lightbulb (aha!) moments. Personally, this reinforced how agents could streamline my analyst role, where I often juggle low-code tools like Zapier with Python scripts and Gemini for custom workflows, and could save me hours on tasks like data processing and report writing.

On Day 2, we explored tools as extensions for actions beyond model training, advocating best practices like granular design, concise outputs, and validation. The Model Context Protocol (MCP) tackled the "N x M" integration dilemma, standardizing communication to mitigate risks in enterprise settings. Codelabs delved into custom tools and long-running operations with MCP, which felt like a direct upgrade from my Skills Boost experiments, where I'd used ADK for simple function calling but still had questions about scalability. Here, MCP's focus on secure, pauseable operations resonated with my Google Cloud projects, like building agentic flows that halt for human approval in sensitive financial analyses.
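On the custom-tool side, ADK can wrap a plain Python function as a tool; the docstring and type hints tell the model when and how to call it. Here's a hedged sketch - the function, its fields, and the agent setup are hypothetical, not the codelab's:

```python
# A custom function tool in Google ADK (illustrative). ADK wraps
# the callable automatically; the docstring guides the model.
from google.adk.agents import Agent

def get_invoice_status(invoice_id: str) -> dict:
    """Look up the processing status of an invoice by its ID."""
    # Hypothetical lookup; a real version would query a database or API.
    return {"invoice_id": invoice_id, "status": "pending_approval"}

finance_agent = Agent(
    name="finance_assistant",
    model="gemini-2.0-flash",
    instruction="Answer questions about invoices using the available tools.",
    tools=[get_invoice_status],  # plain function, auto-wrapped as a tool
)
```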

The livestream amplified this - the insights gleaned on tool integration in high-stakes environments mirrored challenges in my work, where I've orchestrated agents on Vertex AI to handle real-time data from web sources. Listening to the guests' comments on best practices really inspired me to refine my cloud-based workflows, ensuring that integrations with tools like BigQuery are robust. This day shifted my mindset, as I also saw how agents themselves can be used as tools - a concept I integrated into my capstone project.

Day 3's "Context Engineering: Sessions and Memory" was a revelation, defining context engineering as dynamic information management for stateful AI. It distinguished sessions for immediate conversations from memory for long-term persistence, covering their variants, optimizations, and architectures like vector databases. Codelabs on stateful agents and persistent memory built on my ADK foundations; I had previously built memory stores manually in databases, but had never handled sessions at this depth or with this elegance. Relating to my role, this directly informed how I manage conversation histories in agentic flows for ongoing business queries, like email threads between an agent and a customer.
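For a sense of that elegance, here's a minimal sketch of a stateful turn using ADK's Runner and InMemorySessionService, as I remember the APIs from the codelabs (app, user, and session IDs are placeholders):

```python
# A stateful agent turn in Google ADK (illustrative sketch).
# The session service keeps conversation state across turns.
import asyncio

from google.adk.agents import Agent
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from google.genai import types

agent = Agent(
    name="support_agent",
    model="gemini-2.0-flash",
    instruction="Help the user and remember details from earlier turns.",
)

session_service = InMemorySessionService()
runner = Runner(agent=agent, app_name="demo_app", session_service=session_service)

async def main() -> None:
    # Create the session that will hold state for this user.
    await session_service.create_session(
        app_name="demo_app", user_id="user_1", session_id="s_1"
    )
    message = types.Content(role="user", parts=[types.Part(text="Hi, I'm Chloe.")])
    async for event in runner.run_async(
        user_id="user_1", session_id="s_1", new_message=message
    ):
        if event.is_final_response():
            print(event.content.parts[0].text)

asyncio.run(main())
```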

The live Q&As in the livestreams were riveting, bringing clarity to the nuances and complexity of implementation, as well as the many design principles to consider. They also helped clarify the tradeoffs I've faced in Google Cloud when optimizing for cost in long-context RAG use cases. The livestream guests gave really powerful breakdowns that helped me connect memory concepts to automating multi-turn interactions in my workflows, like customer queries that span sessions and data actions taken on the customer's behalf. Personally, it evoked reflections on past projects where poor context management led to inefficiencies and workarounds. Now, I see memory (when done well) as the key ingredient for adaptive systems.

Day 4 addressed "Agent Quality", framing evaluation in non-deterministic worlds with "Outside-In" hierarchies and evaluators like LLM-as-a-Judge. We discussed observability through logs, traces, and metrics. Codelabs on debugging and evaluation added production rigor to my existing experience, crucial for cloud deployments where I need to monitor agent performance in real time.
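To make LLM-as-a-Judge concrete, here's a minimal sketch using the google-genai SDK. The rubric wording, scoring scale, and model are my own illustrative assumptions, not the course's exact evaluation setup:

```python
# A minimal LLM-as-a-Judge sketch (illustrative assumptions throughout).
from google import genai

client = genai.Client()  # reads GOOGLE_API_KEY from the environment

def judge(question: str, answer: str) -> str:
    """Ask a second model to score an agent's answer against a rubric."""
    prompt = (
        "You are a strict evaluator. Score the answer to the question "
        "from 1 (poor) to 5 (excellent) for accuracy and completeness, "
        "then justify the score in one sentence.\n\n"
        f"Question: {question}\nAnswer: {answer}"
    )
    response = client.models.generate_content(
        model="gemini-2.0-flash", contents=prompt
    )
    return response.text

print(judge("What does MCP standardize?", "Tool-to-model communication."))
```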

The example of data science agents tied directly to my analyst work, where it's important to evaluate agentic flows for accuracy in forecasting and other outputs. The livestream also deepened my understanding of quality checks in non-deterministic setups - this is truly new ground, and it was fantastic to hear from the panel of experts. It evolved my approach, integrating human-in-the-loop (HITL) review for high-stakes decisions, like in my capstone.

Finally, Day 5's "Prototype to Production" guided scaling with the Agent2Agent (A2A) Protocol, focusing on deployment. The codelabs on A2A and Vertex AI were a fitting culmination of the course, addressing one of the biggest challenges facing enterprises today - being stuck in an infinite prototype loop - and how to transition prototypes into production.
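For the deployment side, here's a hedged sketch of pushing an ADK agent to Vertex AI Agent Engine. The project, location, and bucket are placeholders, and the exact API surface may shift between google-cloud-aiplatform versions:

```python
# Deploying an ADK agent to Vertex AI Agent Engine (illustrative sketch).
import vertexai
from vertexai import agent_engines
from google.adk.agents import Agent

root_agent = Agent(
    name="prod_agent",
    model="gemini-2.0-flash",
    instruction="Answer business questions concisely.",
)

vertexai.init(
    project="my-project-id",          # placeholder
    location="us-central1",           # placeholder
    staging_bucket="gs://my-bucket",  # placeholder
)

# Packages the agent and hosts it as a managed, scalable endpoint.
remote_app = agent_engines.create(
    agent_engine=root_agent,
    requirements=["google-cloud-aiplatform[adk,agent_engines]"],
)
print(remote_app.resource_name)
```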

The livestream provided forward-looking insights as always. The discussions on scaling mirrored my agentic roadmap for company-wide automation.

Over the 5 days, the livestreams consistently stood out as my favorite, integrating all the intangible elements and making everything feel much more connected to the complexity of ground reality. The livestreams and whitepapers were also deeply intellectually rewarding, with guests and experts freely sharing their ongoing work, challenges, and experiences. This experience has really elevated my prior ADK knowledge to advanced applications.

This course indeed evolved my understanding of agents, MCP, context engineering, and quality frameworks. In my analyst role, this has directly translated to better solution designs for agentic flows on Google Cloud, and a better grasp of the complexities involved.

My capstone, Rubriq, embodied much of what I learned. It is an agentic reviewer system that evaluates work against user-supplied rubrics, inspired by Kaggle competition rubrics. Using Google ADK, Rubriq features an Orchestrator with a team of sequential agents for analysis, scoring, and feedback, powered by Gemini on Vertex AI. Session state is handled via InMemorySessionService, keeping the system stateful and observable. Building it revealed integration hurdles, like JSON fragility (which took 4 hours to debug), but it all worked perfectly in the end!
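For readers curious about the shape of that pipeline, here's an illustrative sketch in ADK. The agent names, instructions, and state keys are my simplified stand-ins, not Rubriq's actual code:

```python
# Rubriq's rough shape: a sequential pipeline of analysis, scoring,
# and feedback agents (illustrative names and instructions).
from google.adk.agents import LlmAgent, SequentialAgent

analyzer = LlmAgent(
    name="analyzer",
    model="gemini-2.0-flash",
    instruction="Analyze the submission against the user-supplied rubric.",
    output_key="analysis",  # saves the result into shared session state
)
scorer = LlmAgent(
    name="scorer",
    model="gemini-2.0-flash",
    instruction="Using {analysis}, assign a score per rubric criterion.",
    output_key="scores",
)
feedback = LlmAgent(
    name="feedback",
    model="gemini-2.0-flash",
    instruction="Summarize {analysis} and {scores} as actionable feedback.",
)

# SequentialAgent runs sub-agents in order, passing state between them.
rubriq_pipeline = SequentialAgent(
    name="rubriq_orchestrator",
    sub_agents=[analyzer, scorer, feedback],
)
```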

Overall, the course was intensive, transformative, and has translated to direct innovations in my role. Thanks to the teams at Google and Kaggle for a wonderful job!

Another side benefit - it made for 5 wonderful "date" nights with my husband while we did this intensive course together!
