
I Thought I Knew AI. Then I Met "Agents."

My 5-day journey from simple chatbots to reasoning engines with Google & Kaggle


Like many developers, I fell into the trap of thinking "AI" just meant "ChatGPT." I thought you typed a prompt, crossed your fingers, and hoped for a good answer. If the AI didn't know the answer, I assumed the technology just wasn't ready yet.

I was wrong.

Over the last 5 days of participating in the Google & Kaggle AI Agents Intensive, my mental model of Artificial Intelligence has been completely shattered. I learned that we aren't just building talkers anymore; we are building doers.

Here is how my understanding evolved, and the three technical concepts that changed everything for me.


1. The Shift: From "Predicting" to "Reasoning"

The biggest "Aha!" moment for me was understanding the Agentic Loop (Observe → Think → Act).

Before this course, I treated LLMs like a magic 8-ball. I thought the flow was linear:

User Input → AI Answer

Now, I realize that an Agent is actually a router. It doesn't just guess; it pauses to "think."

  1. It Observes the user's request.
  2. It Reasons about what needs to be done.
  3. It Acts by using a specific tool.

This distinction turns AI from a creative writing assistant into a reliable software engineer.
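
To make that loop concrete, here is a minimal sketch of the Observe → Think → Act cycle in Python. The llm_decide function and the TOOLS table are hypothetical stand-ins for the model's reasoning step and a real tool registry, not any specific framework's API:

# A minimal sketch of the Observe → Think → Act loop.
def llm_decide(request: str) -> str:
    # "Think": a real agent would ask the model which tool (if any)
    # fits the request; this stand-in uses a crude keyword check.
    return "search" if "latest" in request.lower() else "answer"

def search_tool(query: str) -> str:
    # Stand-in for a grounded tool like Google Search.
    return f"(search results for: {query})"

TOOLS = {"search": search_tool}

def run_agent(request: str) -> str:
    action = llm_decide(request)        # Observe the request, then Think
    if action in TOOLS:
        return TOOLS[action](request)   # Act with the chosen tool
    return "(direct model answer)"      # or answer directly

print(run_agent("What are the latest Python releases?"))

The point of the sketch is the router shape: the model chooses the action, but the tool does the work.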

2. Tools: The Cure for Hallucinations

I used to struggle with AI making up facts (hallucinations). I tried endless "prompt engineering" tricks to fix it, telling the bot, "Please don't lie."

The course taught me the real solution: Tools.

By giving the Agent access to grounded data—whether it's a Google Search tool or a specific database via the Model Context Protocol (MCP)—we don't have to trust the model's training data. We just trust its reasoning capabilities.

I realized that we can build systems that look like this:

# The Old Way: the model answers from its training data alone.
response = model.generate("What is the weather in Tokyo?")
# Result: "I cannot access real-time data."

# The Agent Way: the model routes the request to a grounded tool.
# (model and weather_api are illustrative placeholders, not a real SDK.)
tool_needed = model.decide_tool("What is the weather in Tokyo?")
if tool_needed == "weather_api":
    data = weather_api.get("Tokyo")        # grounded, real-time data
    final_answer = model.synthesize(data)  # the model only phrases it
# Result: "It is currently 12°C in Tokyo."
3. Memory: More Than Just Chat History

Finally, the third piece of the puzzle was Memory.

Before the intensive, I thought "memory" just meant sending the last 10 messages back to the bot so it knew what we were talking about. But I learned that context windows are expensive and limited. You can't just feed an entire documentation manual into a prompt every time you want to ask a question.

The course clarified the difference between:

Short-term memory: Storing the immediate session details (like a normal chat).

Long-term memory: Using Vector Stores and RAG (Retrieval-Augmented Generation) to recall information from weeks or months ago.

This distinction is what turns a one-time interaction into a helpful, long-term assistant that actually "knows" you.
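
To make the long-term side concrete, here is a toy sketch of retrieval from a vector store. The embed function below is a deliberately crude bag-of-words stand-in (a real system would call an embedding model), but the retrieve-then-inject flow is the same one RAG uses:

import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words count vector. A real system
    # would call an embedding model here instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

class VectorStore:
    def __init__(self):
        self.entries = []  # list of (embedding, original text)

    def add(self, text: str):
        self.entries.append((embed(text), text))

    def retrieve(self, query: str, k: int = 1):
        # Rank stored memories by similarity to the query.
        q = embed(query)
        ranked = sorted(self.entries, key=lambda e: cosine(q, e[0]), reverse=True)
        return [text for _, text in ranked[:k]]

# Long-term memory: facts stored long ago, recalled on demand.
memory = VectorStore()
memory.add("The user's favorite editor is VS Code.")
memory.add("The user deploys to Cloud Run every Friday.")

# Only the relevant snippet gets injected into the prompt,
# instead of replaying the entire history.
print(memory.retrieve("Which code editor is my favorite?")[0])
# -> The user's favorite editor is VS Code.

The design choice worth noticing: the context window stays small because retrieval happens outside the model, and only the best match is sent back in.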

Conclusion: The Future is Agentic


I walked into this challenge wanting to learn how to write better prompts. I walked out realizing that prompts are just the beginning.

The future doesn't belong to those who can write the best poem in ChatGPT; it belongs to developers who can architect systems where AI has the right tools to do the job.

I’m excited to keep building with the Agent Development Kit (ADK). My next goal is to build an agent that can actually manage my Google Calendar based on my emails.

What was your biggest takeaway from the challenge? Let me know in the comments!
