Kuber Jain

From Prompting to Planning: My Week with AI Agents

I just wrapped up the 5-Day AI Agents Intensive with Google and Kaggle, and honestly, it completely flipped the script on how I thought AI works. I went in thinking I was pretty good at prompt engineering, but I came out realizing that "prompting" is only about 10% of the puzzle. The real magic happens when you stop asking the AI to give you an answer and start teaching it how to act.

Here’s the breakdown of what I picked up and how I built my project, the Oracle Agent, along the way.

The "Aha!" Moment: It’s Not a Chatbot, It’s a Decision Engine

Day one was a reality check. We looked at how "Agentic" architectures differ from regular apps. I finally understood that an agent is a system that can basically go: "Okay, I see what you want, let me make a plan, go use this tool, see what happened, and then come back to you." My understanding evolved from seeing AI as a high-tech encyclopedia to seeing it as the "manager" of a workflow. This led directly to my project. I didn't want to build another bot that just spit out horoscopes. I wanted an autonomous system that could crunch numbers perfectly and give advice that felt like a real session.

The Tools That Stuck With Me

A few concepts from the labs really resonated with me because they solved real problems I’ve run into before:

- Stop trusting the AI with math:
On day two, we talked about "Taking Action." I realized that if I want my agent to do numerology, I should never ask the LLM to calculate the numbers. It’ll hallucinate the math every time. Instead, I built Custom Tools—Python functions that do the exact math—and taught the agent how to call them. Now, the Agent does the talking, but the code does the calculating.
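Here is a minimal sketch of that idea: a pure-Python function does the numerology arithmetic, and the agent only decides *when* to call it. The function name and the `TOOLS` registry are my own illustrative choices, not the course framework's actual API.

```python
def life_path_number(birth_date: str) -> int:
    """Reduce the digits of a YYYY-MM-DD date to a single digit (1-9),
    keeping the numerology "master numbers" 11 and 22 unreduced."""
    total = sum(int(ch) for ch in birth_date if ch.isdigit())
    while total > 9 and total not in (11, 22):
        total = sum(int(d) for d in str(total))
    return total

# The agent framework exposes this as a callable tool; the LLM produces
# the *call*, and the deterministic code produces the number.
TOOLS = {"life_path_number": life_path_number}

print(life_path_number("1990-07-15"))  # 1+9+9+0+0+7+1+5 = 32 -> 3+2 = 5
```

Because the arithmetic lives in code, the same birth date always yields the same number, no matter how the model phrases the request.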

- Context is everything:
Day three was all about memory. Nobody wants to repeat their birth date five times in one conversation. Learning how to build Stateful Sessions was a game-changer. It allowed the Oracle Agent to remember who you are across multiple turns, so you can ask follow-up questions about your career or relationships without starting from scratch.
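As a hedged sketch of what a stateful session buys you: a tiny store that persists known facts (like a birth date) and recent turns, and prepends them to each new message. This is my own simplification, not the course framework's Session API.

```python
class Session:
    def __init__(self):
        self.state: dict[str, str] = {}                 # long-lived facts
        self.history: list[tuple[str, str]] = []        # (role, message) turns

    def remember(self, key: str, value: str) -> None:
        self.state[key] = value

    def context_for_llm(self, user_msg: str) -> str:
        """Build the prompt: known facts + recent turns + the new message."""
        facts = "; ".join(f"{k}={v}" for k, v in self.state.items())
        turns = "\n".join(f"{r}: {m}" for r, m in self.history[-4:])
        return f"Known facts: {facts}\n{turns}\nuser: {user_msg}"

session = Session()
session.remember("birth_date", "1990-07-15")
session.history.append(("user", "What's my life path number?"))
print(session.context_for_llm("And what about my career?"))
```

The payoff is exactly the follow-up scenario above: the career question arrives with the birth date already in context, so nothing has to be repeated.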

- Open the Black Box:
Day four was about observability. In the past, if an AI gave a weird answer, I just shrugged. Now, I use tracing and logging. In my project logs, I can see exactly when the agent decides to trigger a tool versus when it's just processing text. It makes the whole system feel reliable instead of random.
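A rough sketch of what those logs look like in spirit: one structured line per event, distinguishing a tool call from plain text generation. The event names here are my own; real tracing frameworks emit much richer spans.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("oracle")

def trace(event: str, **fields) -> str:
    """Log one structured event line and return it (handy for testing)."""
    payload = " ".join(f"{k}={v!r}" for k, v in fields.items())
    line = f"{event} {payload}".strip()
    log.info(line)
    return line

# One line per decision: did the agent act, or just talk?
trace("tool_call", tool="life_path_number", args={"birth_date": "1990-07-15"})
trace("llm_text", preview="Your life path number is 5...")
```

Scanning for `tool_call` versus `llm_text` lines is what turns "the agent did something weird" into "the agent skipped the tool on turn three."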

Learning by Doing (The Labs)

The hands-on labs were where things got messy in a good way. I learned that the hardest part of building an agent isn't the AI part—it's the Orchestration. You have to be very clear about how the agent observes the world.

In building the Oracle Agent, I spent more time thinking about the "workflow" than the "prompt." It was about setting up that loop:
Input -> Plan -> Act (Math Tool) -> Observe (Result) -> Synthesize.
That's the blueprint for building things that actually work in the real world.
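The loop above can be sketched as plain Python control flow. Here `call_llm` is a stand-in for whatever model client you use, and the step format is an assumption of mine; only the Plan → Act → Observe → Synthesize shape is the point.

```python
def run_agent(user_input, tools, call_llm, max_steps=5):
    """Run the agent loop: plan, act via tools, observe, then synthesize."""
    context = [("user", user_input)]
    for _ in range(max_steps):
        step = call_llm(context)              # Plan: model picks the next action
        if step["type"] == "tool_call":       # Act: run the deterministic tool
            result = tools[step["name"]](**step["args"])
            context.append(("tool", result))  # Observe: feed the result back
        else:
            return step["text"]               # Synthesize: final answer
    return "Step limit reached."
```

Bounding the loop with `max_steps` is a small but real design choice: an agent that can act autonomously also needs a hard stop so a confused plan can't run forever.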

Final Thoughts
This intensive felt like graduating from "using AI" to "building with AI." The Oracle Agent is just a start, but it uses everything the course threw at us: deterministic tools, short-term memory, and a structured multi-turn flow.

I’m walking away with a new rule of thumb: Don’t just prompt the model—give it a plan and the tools to pull it off.

Link to my capstone project - The Oracle Agent

Let me know in the comments if you have any questions or thoughts, or share your own experience.
