José Cachucho

Learning Reflections from the AI Agents Intensive Course (Google + Kaggle)

Over the past few weeks, I completed the AI Agents Intensive Course by Google and Kaggle, followed by the development of my capstone project. What I took from this experience went far beyond learning the Agent Development Kit (ADK). It helped me clarify how agentic architectures work in practice, how they differ from traditional LLM applications, and how much potential they hold for real-world use.

Because I often play multiple roles in my professional life — product manager, developer, business analyst, and sometimes team leader (mainly due to limited staffing) — I’m constantly looking for ways to expand my skills in areas that can bring value to my organization. AI has been one of those areas for several years now.

Some time ago, I read an article in The Batch from deeplearning.ai titled “AI Product Managers Will Be In-Demand.”

The message deeply resonated with me. The author described how AI product management requires a blend of technical understanding, iterative development, data literacy, comfort with ambiguity, and a mindset of continuous learning. It felt entirely aligned with my experience — and with the direction I see my own work evolving.

In my organization, I’m often the person colleagues reach out to whenever the topic is “how can we make real use of AI?”

That’s why I saw this course as an opportunity to strengthen both the technical side and the product-management side of my involvement with AI initiatives.


🌱 Why This Course Mattered to Me

I’ve been learning AI for years in parallel with my primary career — something made possible largely thanks to people like Andrew Ng, whose work democratized high-quality AI education. Because of that, I try to seize every opportunity to learn more, especially now in the era of generative AI and autonomous agents.

What interested me most was understanding not only how agents are built, but also the emerging patterns that seem to be consolidating across frameworks, tools, and production practices.

The AI Agents Intensive turned out to be a perfect environment for exploring this.


🧠 What I Learned About Agentic Architectures

Even though ADK belongs to the Google ecosystem, the concepts it teaches are largely framework-agnostic. I could immediately see parallels with LangChain, LlamaIndex, CrewAI, AutoGen, and even custom internal agent frameworks.

Some of the concepts that resonated most with me:

• Multi-agent orchestration

The pattern where an orchestrator delegates tasks to specialized agents felt both powerful and natural. It reminded me more of traditional software architecture than of the “prompt engineering” mindset many associate with LLMs.
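A framework-agnostic sketch of that pattern, with keyword routing standing in for the LLM-driven delegation a real orchestrator would use (the agent classes and routing rules are illustrative, not ADK APIs):

```python
# Orchestrator pattern: a router inspects the request and delegates it
# to a specialized agent. In a real system, an LLM decides the route;
# here a keyword table keeps the sketch self-contained.

class KnowledgeAgent:
    def handle(self, request: str) -> str:
        return f"[knowledge] answering: {request}"

class TicketAgent:
    def handle(self, request: str) -> str:
        return f"[ticket] creating ticket for: {request}"

class Orchestrator:
    """Delegates each request to the first agent whose keywords match."""
    def __init__(self):
        self.routes = {
            ("how", "what", "why"): KnowledgeAgent(),
            ("ticket", "escalate"): TicketAgent(),
        }
        self.fallback = KnowledgeAgent()

    def handle(self, request: str) -> str:
        words = request.lower().split()
        for keywords, agent in self.routes.items():
            if any(k in words for k in keywords):
                return agent.handle(request)
        return self.fallback.handle(request)

bot = Orchestrator()
print(bot.handle("Please escalate my VPN issue"))
```

The point of the structure is exactly the software-architecture parallel: each agent has a single responsibility, and the orchestrator owns the control flow.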

• Tools as secure, isolated capabilities

I really appreciated how ADK formalizes tools as deterministic functions with structured inputs and outputs. This dramatically reduces ambiguity and makes the agent-tool interface feel like proper software engineering.
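A minimal sketch of what "tools as deterministic functions with structured inputs and outputs" can look like in plain Python (the tool name, the `TypedDict` fields, and the tiny in-memory knowledge base are all hypothetical):

```python
from typing import TypedDict

class LookupResult(TypedDict):
    status: str      # "found" | "not_found"
    article_id: str
    summary: str

def search_knowledge_base(query: str) -> LookupResult:
    """Deterministic tool: the same query always yields the same result,
    and the output shape is fixed, so the agent never has to parse prose."""
    articles = {
        "vpn": ("KB-101", "Restart the VPN client, then re-enter credentials."),
    }
    for keyword, (article_id, summary) in articles.items():
        if keyword in query.lower():
            return {"status": "found", "article_id": article_id, "summary": summary}
    return {"status": "not_found", "article_id": "", "summary": ""}
```

Because both sides of the agent-tool interface are typed, a failure is a visible `"not_found"` status rather than an ambiguous free-text reply.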

• Memory and session state

Understanding how to structure short-term and long-term memory — and how to safely persist identity and context — made me appreciate how essential state management is for real multi-turn systems.
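As a rough illustration of persisting session state between turns, here is a sketch of a SQLite-backed session store (the schema and class are my own simplification, not ADK's session service):

```python
import json
import sqlite3

class SessionStore:
    """Persists per-session state as JSON; ':memory:' keeps the sketch
    self-contained, but a file path would survive restarts."""
    def __init__(self, path: str = ":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS sessions (id TEXT PRIMARY KEY, state TEXT)"
        )

    def load(self, session_id: str) -> dict:
        row = self.db.execute(
            "SELECT state FROM sessions WHERE id = ?", (session_id,)
        ).fetchone()
        return json.loads(row[0]) if row else {}

    def save(self, session_id: str, state: dict) -> None:
        self.db.execute(
            "INSERT OR REPLACE INTO sessions VALUES (?, ?)",
            (session_id, json.dumps(state)),
        )
        self.db.commit()

store = SessionStore()
state = store.load("user-42")      # empty dict on first turn
state["last_issue"] = "vpn"        # updated during the turn
store.save("user-42", state)       # persisted for the next turn
```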

• Observability

Structured logging, tracing, and clear visibility into every agent and tool call felt incredibly important. It’s not glamorous, but it’s foundational for reliability.
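One lightweight way to get that visibility is a decorator that records every tool call as a structured event. This is a sketch under my own assumptions (in production the events would go to a tracing backend, not a Python list):

```python
import functools
import json
import time

EVENTS: list[dict] = []  # stand-in for a real tracing/logging backend

def traced(tool):
    """Wraps a tool so every call emits a structured, queryable event."""
    @functools.wraps(tool)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = tool(*args, **kwargs)
        EVENTS.append({
            "tool": tool.__name__,
            "args": json.dumps([list(args), kwargs], default=str),
            "duration_ms": round((time.perf_counter() - start) * 1000, 2),
        })
        return result
    return wrapper

@traced
def create_ticket(summary: str) -> dict:
    return {"ticket_id": "T-1", "summary": summary}

create_ticket("VPN outage")
```

With every agent and tool call leaving a record like this, debugging a misbehaving multi-agent run becomes a matter of reading the trace rather than guessing.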

• Role-based access control at the tool layer

This was something I explored more deeply in my project. Instead of prompting the LLM with “you must not allow users to do X,” I enforced permissions entirely at the tool level.

The result: a far more robust system that resists conversational manipulation such as prompt injection, because the permission check runs in code rather than in the prompt.

It’s a pattern I haven’t seen documented often, but I believe it should become standard.
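The idea can be sketched in a few lines. The role names, permission table, and decorator below are illustrative (not my actual project code), but they show the essential move: the refusal happens inside the tool, so no amount of clever prompting can route around it.

```python
# Hypothetical role -> allowed-actions table.
PERMISSIONS = {
    "end_user": {"create_ticket", "view_own_ticket"},
    "service_desk": {"create_ticket", "view_own_ticket", "close_ticket"},
}

def requires(action: str):
    """Gate a tool on a permission check performed in code, not in the prompt."""
    def decorator(tool):
        def wrapper(role: str, *args, **kwargs):
            if action not in PERMISSIONS.get(role, set()):
                # The LLM never gets a chance to "decide" here.
                return {"status": "denied", "reason": f"{role} may not {action}"}
            return tool(role, *args, **kwargs)
        return wrapper
    return decorator

@requires("close_ticket")
def close_ticket(role: str, ticket_id: str) -> dict:
    return {"status": "closed", "ticket_id": ticket_id}
```

Even if a user convinces the model to *attempt* the call, the tool itself returns a structured denial.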

• Structured outputs

I knew this in theory, but building a full system really showed me how critical it is. A surprising amount of agent orchestration depends on clear, stable, consistent outputs.
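A small sketch of what "depends on clear, stable, consistent outputs" means in practice: the orchestrator validates an agent's reply against an expected shape before acting on it, instead of trusting free-form text (the field names are illustrative):

```python
import json

# Expected shape of an agent reply; hypothetical field names.
REQUIRED_FIELDS = {"action": str, "ticket_id": str}

def parse_agent_reply(raw: str) -> dict:
    """Reject anything that is not valid JSON with the expected fields,
    collapsing all failure modes into one well-defined error value."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return {"action": "error", "ticket_id": ""}
    if not isinstance(data, dict):
        return {"action": "error", "ticket_id": ""}
    for field, expected_type in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), expected_type):
            return {"action": "error", "ticket_id": ""}
    return data
```

Everything downstream can then branch on `action` without ever parsing prose.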

Overall, the course made me see agents not as “LLMs with extra steps,” but as genuine software systems — modular, stateful, orchestrated, observable, and governed by principles of security and separation of concerns.


🚀 My Capstone Project: SupportPilot

To put everything into practice, I built SupportPilot, an autonomous multi-agent IT support assistant designed to:

  • troubleshoot common IT issues via a knowledge base
  • escalate unresolved issues through ticket creation
  • manage the full ticket lifecycle
  • enforce role-based permissions (end users vs. service-desk agents)
  • maintain conversation state through persistent sessions
  • log all actions and events through structured observability

The architecture includes:

  • An Orchestrator Agent
  • A Knowledge Agent
  • A Ticket Agent
  • A JSON knowledge base
  • Two SQLite databases (tickets + sessions)
  • Six custom tools with strict structured outputs
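To give a flavor of how those pieces fit together, here is a sketch of one SQLite-backed ticket tool with a strict structured output. This is a simplified, hypothetical version for illustration, not the actual SupportPilot code (table schema, ID format, and status values are my own assumptions):

```python
import sqlite3

db = sqlite3.connect(":memory:")  # the real project uses a file-backed DB
db.execute("""CREATE TABLE tickets (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    summary TEXT NOT NULL,
    status TEXT NOT NULL DEFAULT 'open')""")

def create_ticket(summary: str) -> dict:
    """Tool with a fixed output shape: always status + ticket_id + summary."""
    cur = db.execute("INSERT INTO tickets (summary) VALUES (?)", (summary,))
    db.commit()
    return {"status": "ok", "ticket_id": f"T-{cur.lastrowid}", "summary": summary}

def get_ticket(ticket_id: str) -> dict:
    """Lookup tool: a missing ticket is a structured result, not an exception."""
    row = db.execute(
        "SELECT id, summary, status FROM tickets WHERE id = ?",
        (int(ticket_id.removeprefix("T-")),),
    ).fetchone()
    if row is None:
        return {"status": "not_found", "ticket_id": ticket_id}
    return {"status": row[2], "ticket_id": f"T-{row[0]}", "summary": row[1]}
```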

Building this system gave me hands-on experience with real agent orchestration, tool design, RBAC, session management, and debugging complex agent behavior. It significantly improved my intuition for how these systems behave in production-like scenarios.


✨ Final Reflections

This course helped me:

  • strengthen my understanding of agentic AI
  • recognize architectural patterns that repeat across frameworks
  • deepen my technical skills in building multi-agent systems
  • explore the strengths and trade-offs of ADK
  • better prepare myself to guide AI adoption in my organization
  • and rediscover the value of learning by building

I’m grateful to Kaggle and Google for organizing this initiative. It was practical, well-designed, and genuinely inspiring.

I look forward to continuing this journey — both as a developer and as someone who increasingly takes on the role of AI Product Manager inside my organization.
