Google AI Agents Writing Challenge

This is a submission for the Google AI Agents Writing Challenge: Learning Reflections

My Learning Journey Through the Google × Kaggle AI Agents Intensive

When I signed up for the 5-Day AI Agents Intensive, I expected to learn “how to use Gemini.”
What I actually discovered was something much bigger:
how to think in terms of agents: modular, autonomous reasoning units that can cooperate, divide tasks, and solve problems the way teams do.

This course fundamentally shifted how I understand AI systems.
Before this, I saw models as tools.
Now I see agents as teammates.

What Resonated Most With Me

1. The Power of Breaking Problems Into Agents

The concept that made everything click for me was the idea of dividing a complex workflow into multiple specialized agents:

  • one that extracts information,
  • one that understands context,
  • one that plans,
  • one that communicates,
  • one that can even reflect and improve outputs.

It felt like designing a small AI company - each “employee agent” with a well-defined job.

This changed the way I design systems.
Instead of writing one big block of logic, I now think:

“Which agent should own this responsibility?”

That mental shift will stay with me for a long time.
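
To make that shift concrete, here is a tiny, hypothetical Python sketch of the idea (the class names and the toy ranking rule are my own placeholders, not code from the course): each agent owns exactly one responsibility, and a thin pipeline just wires them together. In a real system, each step would delegate to a model instead of a hard-coded rule.

```python
# A minimal, hypothetical sketch of "one responsibility per agent".
# These names (ExtractorAgent, PriorityAgent, PlannerAgent) are mine,
# not from the course or any specific framework.
from dataclasses import dataclass


@dataclass
class Task:
    description: str
    priority: int = 0


class ExtractorAgent:
    """Turns raw text into candidate tasks. Nothing else."""
    def run(self, raw_text: str) -> list[Task]:
        lines = [ln.strip("-• \t") for ln in raw_text.splitlines() if ln.strip()]
        return [Task(description=ln) for ln in lines]


class PriorityAgent:
    """Ranks tasks. Nothing else. (A real agent would ask a model here.)"""
    def run(self, tasks: list[Task]) -> list[Task]:
        for t in tasks:
            t.priority = 2 if "urgent" in t.description.lower() else 1
        return sorted(tasks, key=lambda t: -t.priority)


class PlannerAgent:
    """Lays ranked tasks out as a simple plan. Nothing else."""
    def run(self, tasks: list[Task]) -> str:
        return "\n".join(f"{i + 1}. {t.description}" for i, t in enumerate(tasks))


def run_pipeline(raw_text: str) -> str:
    # Each agent owns exactly one responsibility; the pipeline just wires them.
    tasks = ExtractorAgent().run(raw_text)
    ranked = PriorityAgent().run(tasks)
    return PlannerAgent().run(ranked)


print(run_pipeline("- reply to urgent email\n- draft blog post"))
```

Swapping any one "employee agent" out doesn't touch the others, which is exactly what made this mental model click for me.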

2. Tools + Agents = Real Power

The moment the course introduced tools, everything expanded.

Seeing an agent call:

  • Google Search
  • Custom Python functions
  • OpenAPI tools
  • Long-running operations
  • Built-in tools like code execution

made me realize something important:

Models are not limited to text.
They can act.

This was the turning point where agents started feeling like real software components, not just language models.
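
As an illustration, here is a hedged, ADK-style sketch of registering a plain Python function as a tool. I'm loosely following the pattern from the ADK quickstart; the exact class and parameter names can differ between versions (and the stubbed function is entirely my own), so treat this as a sketch rather than copy-paste code.

```python
# A hedged sketch of exposing a plain Python function as an agent tool,
# loosely following the Google ADK quickstart pattern. Exact class and
# parameter names may differ between ADK versions.
from google.adk.agents import Agent  # assumed import path


def get_task_count(project: str) -> dict:
    """Returns how many open tasks a project has (stubbed for illustration)."""
    fake_db = {"pwoa": 7, "blog": 2}
    return {"project": project, "open_tasks": fake_db.get(project.lower(), 0)}


# The docstring and type hints above become the tool's schema, so the model
# knows when and how to call it.
task_agent = Agent(
    name="task_agent",
    model="gemini-2.0-flash",      # model id is an assumption
    instruction="Answer questions about open tasks using the tool provided.",
    tools=[get_task_count],        # a plain function registered as a tool
)
```

The function itself is ordinary Python; the agent layer decides when to call it, which is what makes the model feel like it can act.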

3. Sessions, Memory, and State

This was one of my biggest “aha” moments.

I always assumed agents were stateless.

Learning about:

  • InMemorySessionService
  • session state
  • memory banks
  • context compaction
  • long-term memory

completely changed my understanding.

I finally saw how modern AI systems maintain continuity, progress, and context, just like a real assistant.
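
To show what that continuity looks like underneath, here is a concept-level sketch in plain Python. This is not the ADK API (the real InMemorySessionService has its own interface); all the names here are mine, and the point is only what a session store has to remember between turns.

```python
# A concept-level sketch of session state, NOT the ADK API: just enough
# hand-rolled Python to show what a session service keeps between turns.
from dataclasses import dataclass, field


@dataclass
class Session:
    user_id: str
    session_id: str
    state: dict = field(default_factory=dict)    # key/value scratchpad
    history: list = field(default_factory=list)  # prior turns / events


class TinySessionStore:
    def __init__(self):
        self._sessions: dict[tuple[str, str], Session] = {}

    def get_or_create(self, user_id: str, session_id: str) -> Session:
        key = (user_id, session_id)
        if key not in self._sessions:
            self._sessions[key] = Session(user_id, session_id)
        return self._sessions[key]


store = TinySessionStore()
s = store.get_or_create("esheshwari", "planning-1")
s.state["current_goal"] = "draft tomorrow's schedule"
s.history.append({"role": "user", "content": "Plan my day"})
# On the next turn, the agent reloads this session and continues with context.
```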

4. Observability Matters

This course didn’t just teach me how to build agents; it also showed me how to make them:

  • debuggable
  • observable
  • traceable
  • reliable

I never thought about tracing, events, or agent-level logging before.
Now I can’t imagine building an agent system without it.
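
Here is the kind of minimal, hypothetical helper I mean: a decorator that logs every tool call with its arguments, outcome, and latency. It isn't any framework's tracing API, just the bare idea of agent-level observability.

```python
# A minimal sketch of agent-level observability: a decorator that logs every
# tool call with its arguments, result status, and latency.
import functools
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("agent.trace")


def traced_tool(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            result = fn(*args, **kwargs)
            log.info("tool=%s args=%s kwargs=%s ok in %.3fs",
                     fn.__name__, args, kwargs, time.perf_counter() - start)
            return result
        except Exception as exc:
            log.info("tool=%s failed: %r", fn.__name__, exc)
            raise
    return wrapper


@traced_tool
def lookup_calendar(day: str) -> list[str]:
    return ["09:00 standup", "14:00 review"]  # stubbed data


lookup_calendar("monday")
```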

How My Understanding of Agents Evolved

At the start:
Agents = “LLMs with a wrapper.”

By the end:
Agents = reasoning entities with structure, memory, tools, policies, routing, autonomy, and collaboration.

They are no longer “functions that use a model.”
They are systems that orchestrate models, tools, and workflows.

Because of this course, I see AI development as:

“Orchestrating intelligent workers, not calling a model API.”

This shift in thinking is priceless.

What I Built — My Capstone Project: PWOA

As the final project, I built PWOA — Personal Workflow Optimization Assistant, a multi-agent productivity system that:

  • extracts tasks from text/PDFs/images
  • classifies & prioritizes them
  • generates a structured daily plan
  • syncs events to Google Calendar
  • drafts reminders through Gmail
  • and uses Gemini to reflect and refine the plan

It was the first time I combined:

  1. multi-agent architecture
  2. tool calls
  3. OCR + reasoning
  4. scheduling logic
  5. ADK-style design concepts
  6. session state
  7. Google APIs
  8. OpenAI + Gemini in one system

This project made everything in the course “real” for me.
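
For readers who want a feel for the flow, here is a stubbed, hypothetical skeleton of a pipeline like the one described above. It is not my actual PWOA code: the OCR, Calendar, Gmail, and Gemini calls are replaced with placeholders so only the wiring shows.

```python
# A stubbed, hypothetical skeleton of a flow like the one described above.
# NOT the actual PWOA code: external calls are replaced with placeholders.
def extract_tasks(document: str) -> list[str]:
    return [line for line in document.splitlines() if line.strip()]  # OCR stubbed


def prioritize(tasks: list[str]) -> list[str]:
    return sorted(tasks)  # model-based ranking stubbed


def build_plan(tasks: list[str]) -> dict:
    return {"slots": {f"{9 + i}:00": t for i, t in enumerate(tasks)}}


def sync_and_remind(plan: dict) -> None:
    print("Would create Calendar events and Gmail drafts for:", plan)  # API calls stubbed


def reflect(plan: dict) -> dict:
    plan["notes"] = "Would ask Gemini to critique and refine this plan."  # LLM call stubbed
    return plan


def run_pwoa(document: str) -> dict:
    plan = reflect(build_plan(prioritize(extract_tasks(document))))
    sync_and_remind(plan)
    return plan


run_pwoa("Finish report\nBook dentist appointment")
```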

Key Learnings From Building My Capstone
1. Agent workflows become easier when you think in responsibilities

  • Extractor Agent should only extract.
  • Priority Agent should only rank.
  • Scheduler Agent should only plan.

Clear boundaries = clean system.

2. Agents must communicate like teammates

I learned to design agents that:

  • pass structured outputs
  • validate assumptions
  • refine each other’s mistakes
  • break down ambiguity

This is real collaborative intelligence; a minimal sketch of one such structured hand-off is below.
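
In this sketch, the dataclass schema and the checks are hypothetical placeholders (not my capstone code); the point is that the receiving agent validates its input instead of silently guessing.

```python
# A minimal sketch of a structured hand-off between two agents, with the
# receiving side validating its input before doing any work. Names are mine.
from dataclasses import dataclass


@dataclass
class ExtractedTask:
    description: str
    due: str | None = None   # ISO date string, if one was found


def validate_tasks(tasks: list[ExtractedTask]) -> list[ExtractedTask]:
    """The Priority Agent refuses vague input instead of guessing."""
    cleaned = []
    for t in tasks:
        if not t.description.strip():
            continue                      # drop empty extractions
        if len(t.description) < 4:
            raise ValueError(f"Ambiguous task, send back to extractor: {t!r}")
        cleaned.append(t)
    return cleaned


handoff = [ExtractedTask("Submit Kaggle notebook", due="2025-01-10"),
           ExtractedTask("   ")]
print(validate_tasks(handoff))
```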

3. Reflection makes outputs feel human

Adding a Reflection Agent powered by Gemini totally changed the quality of results.

The system went from:

“Here is your schedule”

to

“Here is your schedule + why it makes sense + improvements.”

Reflection is underrated.
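
For anyone curious what a reflection pass can look like, here is a hedged sketch using the google-generativeai SDK; the model name and the prompt wording are my own placeholders rather than the course's or my project's exact code.

```python
# A hedged sketch of a reflection pass: hand the draft plan back to Gemini
# and ask it to critique and improve it. Model name and prompt are placeholders.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")

draft_plan = "09:00 deep work, 11:00 emails, 14:00 meetings, 16:00 gym"

prompt = (
    "You are a reflection agent. Review this daily plan, explain briefly why "
    "it does or does not make sense, and suggest one concrete improvement:\n"
    f"{draft_plan}"
)

response = model.generate_content(prompt)
print(response.text)  # critique + suggested refinement, appended to the final output
```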

What This Course Taught Me Beyond the Code

  • That AI systems are designed, not just coded
  • That models + tools = software agents
  • That reasoning + autonomy will be the future of AI apps
  • That good agent design is about responsibility, clarity, and structure
  • That the future of development will be about orchestrating agents, not writing monolithic logic

This course didn’t just give me skills.
It gave me a new mental model for building intelligent systems.

Final Thoughts

The 5-Day AI Agents Intensive felt like learning a new superpower.
It opened my perspective on how AI can be built, scaled, deployed, and optimized.
It taught me not just how to use agents, but how to think like an agent developer.

I’m truly grateful to Google, Kaggle, the mentors, and the community.
This course didn’t just level up my skills - it expanded what I believe I can build.

Here’s to many more agentic systems ahead.

Kaggle Notebook
GitHub Project
