DEV Community

Akash Santhnu Sundar


How the 5-Day Intensive Felt

Before this course, “AI agents” for me were basically just LLMs with a couple of tools glued on. Over the 5-Day AI Agents Intensive with Google and Kaggle, that changed a lot: agents started to feel more like teammates that can follow goals, call the right tools, and leave a trail of reasoning you can actually inspect. The focus on the AI Canvas, routing, and traces made me think less about single prompts and more about how the whole system behaves over time.

The core idea that stuck with me is that agents are not just “chat completions,” but systems that can plan, act, remember, and be measured like any other piece of software. That mindset shift ended up shaping the way I built my capstone project, Orca.

What the 5 Days Covered
The curriculum was organized around a set of core concepts: “From Prompt to Action,” agent architectures, tools and tool best practices, sessions and memory, observability, and evaluation. Each day stacked on top of the previous one, so by Day 4 it felt like we had walked from a simple prompt all the way to something that could realistically run in production.

Day 1 introduced agent architectures and “From Prompt to Action,” showing how a user request turns into plans, tool calls, and loops instead of just one response. Day 2 focused on Agent Tools and Agent Tools Best Practices, using the Agent Development Kit and Model Context Protocol (MCP) to safely connect agents to real APIs and services. Day 3 went into Agent Sessions and Agent Memory: how to manage short-term context and longer-term knowledge so agents can handle multi-turn tasks and remember what matters. Day 4 zoomed in on Agent Observability and Agent Evaluation, covering logging, tracing, metrics, and evaluation runs in the ADK UI and via the CLI.

What I Built: Orca
For the capstone, I built Orca, an agentic stock analysis and forecasting system that tries to behave like a quant analyst you can talk to. Instead of one “do-everything” model, Orca uses a small panel of agents (one looks at fundamentals, another at technicals, another at sentiment and risk), and a judge agent then pulls their views together into a single recommendation. That “Panel of Experts” pattern from the course mapped really naturally to finance, where you never want to rely on just one signal.
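
A minimal sketch of that Panel of Experts shape, with hard-coded stand-ins where Orca's real LLM-backed experts would go (the expert names, signal scale, and `judge` logic here are my illustrative assumptions, not Orca's actual code):

```python
from dataclasses import dataclass

@dataclass
class ExpertView:
    """One expert's opinion: a signal in [-1, 1] and a short rationale."""
    name: str
    signal: float  # -1 = strong sell, +1 = strong buy
    rationale: str

def fundamentals_expert(ticker: str) -> ExpertView:
    # Placeholder: a real expert agent would call an LLM over fundamentals data.
    return ExpertView("fundamentals", 0.4, f"{ticker} earnings growing")

def technicals_expert(ticker: str) -> ExpertView:
    return ExpertView("technicals", -0.2, f"{ticker} below its 50-day average")

def sentiment_expert(ticker: str) -> ExpertView:
    return ExpertView("sentiment", 0.1, f"{ticker} news mildly positive")

def judge(views: list[ExpertView]) -> dict:
    """Combine expert signals into one recommendation, keeping the trail."""
    avg = sum(v.signal for v in views) / len(views)
    verdict = "buy" if avg > 0.15 else "sell" if avg < -0.15 else "hold"
    return {
        "verdict": verdict,
        "score": round(avg, 2),
        "trail": [f"{v.name}: {v.signal:+.1f} ({v.rationale})" for v in views],
    }

def analyze(ticker: str) -> dict:
    views = [fundamentals_expert(ticker), technicals_expert(ticker), sentiment_expert(ticker)]
    return judge(views)
```

The point of the pattern is visible even in the toy version: the judge's output carries every expert's signal and rationale, so a user can see why the panel disagreed rather than just the final verdict.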

To support this, Orca uses custom tools to grab real market data, compute indicators, and run forecasts before the agents interpret anything. The labs on tool calling and step-by-step traces were especially helpful here, because when an agent picked the wrong tool or misunderstood the output, the trace made it obvious what went wrong.
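
"Small, focused, and predictable" is easiest to show with an indicator tool. This is a generic sketch, not one of Orca's actual tools: a pure simple-moving-average function plus a schema dict of the kind an agent framework could use to decide when to call it (the schema shape is an assumption, not the ADK's format):

```python
def sma(prices: list[float], window: int) -> list[float]:
    """Simple moving average: one small, predictable tool an agent can call.

    Raises instead of guessing on bad input, so a mis-called tool fails
    loudly in the trace rather than returning something misleading.
    """
    if window <= 0 or window > len(prices):
        raise ValueError("window must be between 1 and len(prices)")
    return [
        sum(prices[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(prices))
    ]

def sma_tool_schema() -> dict:
    """Illustrative tool description an agent could read before calling sma()."""
    return {
        "name": "sma",
        "description": "Compute a simple moving average over closing prices.",
        "parameters": {"prices": "list[float]", "window": "int"},
    }
```

Keeping the tool deterministic and side-effect-free means a replayed trace always reproduces the same numbers, which makes "what went wrong" debugging much simpler.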

How the Course Shaped Orca
Day 1’s focus on architectures and the “From Prompt to Action” mindset helped me treat Orca as a pipeline: understand the user’s goal, plan which experts and tools to involve, execute, then refine. Instead of writing one big prompt like “analyze this stock,” I started designing roles and flows that mirrored how a human analyst team might work.
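
The goal-plan-execute loop can be sketched in a few lines. Here the planner is a rule-based stand-in for an LLM, and the step names and handlers are hypothetical, chosen just to show the shape:

```python
def plan(goal: str) -> list[str]:
    """Decide which steps to run for a goal (rule-based stand-in for an LLM planner)."""
    steps = ["fetch_data"]
    if "risk" in goal.lower():
        steps.append("risk_check")  # only involve the risk expert when asked
    steps += ["run_experts", "judge"]
    return steps

def execute(steps: list[str], handlers: dict) -> list[tuple]:
    """Run each planned step and record its output, so the run stays inspectable."""
    trail = []
    for step in steps:
        trail.append((step, handlers[step]()))
    return trail

# Hypothetical handlers; in a real system each would be a tool or sub-agent.
handlers = {
    "fetch_data": lambda: "prices loaded",
    "risk_check": lambda: "volatility ok",
    "run_experts": lambda: "3 expert views",
    "judge": lambda: "hold",
}
```

Even in this toy form, the plan is data rather than prose, which is what makes it possible to log, replay, and compare runs later.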

Day 2’s content on tools and best practices gave me a solid template for building Orca’s data and indicator tools: small, focused, and predictable, so the agents can call them safely. Day 3’s sessions and memory concepts made me think about how Orca could remember a user’s risk profile, watchlist, or previous decisions, while still being careful about what not to store in a financial context. Day 4’s observability and evaluation modules pushed me to treat traces and evaluation runs as first-class features, turning Orca from a black box into more of a glass box for financial decisions.
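
One way to make "what not to store" explicit is to split short-term turn context from a whitelisted long-term profile. This is my own sketch of the idea, not Orca's memory layer; the `PERSISTABLE` keys are illustrative:

```python
class SessionMemory:
    """Short-term turns vs. a small long-term profile, with an explicit
    whitelist of what may be persisted (never raw account details)."""

    PERSISTABLE = {"risk_profile", "watchlist"}  # assumed safe-to-store keys

    def __init__(self):
        self.turns = []    # short-term: cleared when the session ends
        self.profile = {}  # long-term: only whitelisted keys land here

    def add_turn(self, role: str, text: str) -> None:
        self.turns.append((role, text))

    def remember(self, key: str, value) -> None:
        if key not in self.PERSISTABLE:
            raise ValueError(f"refusing to persist {key!r}")
        self.profile[key] = value

    def end_session(self) -> None:
        self.turns.clear()  # conversation context does not outlive the session
```

Making persistence an allowlist rather than a denylist means new kinds of sensitive data are excluded by default, which matters in a financial context.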

What Stood Out in the Labs
The hands-on labs were where everything really clicked, especially around multi-agent setups and observability. Designing prompts felt very different from designing full runs that I could replay, compare, and systematically improve. That mindset pushed me to treat Orca as a living system rather than a one-off script that either “works” or “doesn’t.”
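
"Runs you can replay and compare" reduces to recording structured events instead of printing text. A generic sketch (not the ADK's trace format; the event fields are my assumption):

```python
def traced_run(run_id: str, steps: list[tuple]) -> list[dict]:
    """Execute (name, fn) steps and record each one as a structured event."""
    events = []
    for name, fn in steps:
        events.append({"run": run_id, "step": name, "output": fn()})
    return events

def replay(events: list[dict]) -> list[str]:
    """Re-render a recorded run without re-executing any tools."""
    return [f"[{e['run']}] {e['step']} -> {e['output']}" for e in events]

def diff_runs(a: list[dict], b: list[dict]) -> list[str]:
    """Compare two recorded runs step by step; return steps whose output changed."""
    return [
        ea["step"]
        for ea, eb in zip(a, b)
        if ea["step"] == eb["step"] and ea["output"] != eb["output"]
    ]
```

`diff_runs` is the payoff: after a prompt or tool tweak, you can point at exactly which step started behaving differently instead of eyeballing two transcripts.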

Another thing that stood out was how much cleaner things became once I gave each agent a clear, narrow role. Instead of one overloaded “smart” agent, having a few focused ones made the whole system easier to debug and explain, especially when showing users which agent contributed what. In a domain like finance, that transparency is a big part of trust.

How My View of Agents Changed
By the end of the course, I stopped thinking of agents as fancy chatbots and started seeing them as structured workflows wrapped around an LLM core. The emphasis on safety, evaluation, and user experience made it clear that a good agent is one you can question, measure, and improve without guessing what happened inside.

For Orca, that translated into aiming for a “glass box” instead of a black box. The traces, intermediate reasoning, and small debates between agents are not just implementation details; they are part of the product experience, especially when someone is making decisions with real money on the line.

Where I Want to Take This Next
This intensive also gave me a lot of ideas for next steps. I’d like to push Orca further into portfolio-level analysis, with better risk modeling, scenario testing, and maybe even light integrations with tools people already use to track investments. I am also curious about building evaluation harnesses so that when I tweak prompts or tools, I can test them against historical market conditions instead of relying on vibes.
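
The core of such a harness is small: score a recommendation function against labeled historical cases, so a prompt or tool change has a number attached instead of a vibe. The case format and the naive baseline below are illustrative assumptions:

```python
def backtest_eval(recommend, history: list[tuple]) -> float:
    """Accuracy of a recommendation function over labeled historical cases.

    `history` is a list of (inputs, expected_verdict) pairs, e.g. market
    snapshots paired with the verdict that would have been right in hindsight.
    """
    hits = sum(1 for inputs, expected in history if recommend(inputs) == expected)
    return hits / len(history)

# Hypothetical labeled cases and a naive baseline to evaluate against.
history = [
    ({"trend": "up"}, "buy"),
    ({"trend": "down"}, "sell"),
    ({"trend": "flat"}, "hold"),
]
naive = lambda snapshot: {"up": "buy", "down": "sell"}.get(snapshot["trend"], "hold")
```

A real harness would add walk-forward splits and risk-adjusted scoring, but even this shape turns "did my tweak help?" into a regression test you can run on every change.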

Overall, the biggest shift for me was in the questions I now ask myself. It’s no longer just “How do I prompt this model?” but “How do I design an agentic system that people can rely on, debug, and improve over time?” Orca is my first serious attempt at answering that question, and this course was a big part of making it feel possible.

Try Orca Yourself
If you want to see Orca in action or explore how it is built under the hood, here are the main links:

Live app: https://orca-clean-5iwhtnvpfq-uc.a.run.app

GitHub repository: https://github.com/Sansyuh06/Orca

Demo video (2 minutes): https://www.youtube.com/watch?v=jUmFzOSeUKo

The live app shows the full agentic stock analysis flow, the GitHub repo dives into the architecture and code, and the video gives a quick overview of the experience without needing to set anything up.
