Over the last week, I’ve had one of the most eye-opening learning experiences thanks to the Google x Kaggle 5-Day AI Agents Intensive. Going into the course, I had a very traditional understanding of Large Language Models: powerful tools that generate text, write code, summarize documents, and answer questions.
By the end of the intensive, that view felt almost outdated.
Once you give an LLM tools and a structured reasoning loop, it stops behaving like a chatbot — and starts acting like an Agent. A system that observes, thinks, decides, and takes action.
This post is my reflection on what I learned, the ideas that stuck with me, and the capstone project I built: a Market Sentiment Monitoring and Alert Agent.
🌟 The Turning Point: Understanding Tools + Reasoning
The biggest “aha!” moment for me was understanding how agents use ReAct loops — the combination of Reasoning + Acting.
Before the course, if I asked an LLM,
“What’s the current market sentiment?”
it would give a general response based on whatever it already knew.
After the course, my agent responds more like a researcher:
Thought: “I should look up the latest news for these tickers.”
Action: Calls a search tool or news API.
Observation: Reads the actual headlines.
Reasoning: “Most of these look bearish — the sentiment is negative.”
Final Answer: Sends an alert with real data.
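The loop above can be sketched in a few lines of Python. Note that `search_news` and `classify_sentiment` here are hypothetical stand-ins for the real tools (a news API call and a Gemini prompt, respectively); this is a minimal illustration of the Thought → Action → Observation → Reasoning flow, not the actual implementation.

```python
def search_news(ticker):
    # Placeholder tool: a real implementation would call a news API.
    return [f"{ticker} shares tumble after weak guidance"]

def classify_sentiment(headline):
    # Placeholder tool: a real implementation would prompt an LLM.
    return "bearish" if "tumble" in headline else "neutral"

def react_step(ticker):
    # Thought: decide which tool to use.
    thought = f"I should look up the latest news for {ticker}."
    # Action: call the search tool.
    headlines = search_news(ticker)
    # Observation: read the actual headlines and label each one.
    labels = [classify_sentiment(h) for h in headlines]
    # Reasoning -> Final Answer: summarize what was observed.
    verdict = "negative" if labels.count("bearish") > len(labels) / 2 else "neutral"
    return thought, verdict

print(react_step("ACME"))
# -> ('I should look up the latest news for ACME.', 'negative')
```

The key design point: the model's "answer" is derived from what the tools returned, not from what it remembers.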
That shift — from “guessing” to “checking” — was huge.
The moment you let the model use tools, hallucinations drop sharply and you start getting grounded, verifiable output.
📈 My Capstone Project: Market Sentiment Monitoring & Alert Agent
For my final project, I wanted to solve a problem I’ve personally struggled with: trying to stay on top of financial news in real time.
With markets moving fast, it’s impossible to read everything.
But an agent can.
What the Agent Does
My agent continuously:
- Monitors live financial news
- Analyzes headlines for sentiment using Gemini
- Assigns a numerical score to each article
- Alerts me if overall sentiment shifts sharply into Fear (bearish) or Greed (bullish)
It essentially serves as a real-time “market mood detector.”
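The alerting logic boils down to averaging the per-article scores and checking them against thresholds. Here's a simplified sketch, assuming each article has already been scored in [-1.0, 1.0] (negative = bearish); the threshold values are illustrative, not the ones I actually tuned.

```python
FEAR_THRESHOLD = -0.5   # illustrative cutoff for a sharply bearish shift
GREED_THRESHOLD = 0.5   # illustrative cutoff for a sharply bullish shift

def overall_mood(scores):
    """Aggregate per-article sentiment scores into a market mood label."""
    if not scores:
        return "no data"
    avg = sum(scores) / len(scores)
    if avg <= FEAR_THRESHOLD:
        return "FEAR"    # trigger a bearish alert
    if avg >= GREED_THRESHOLD:
        return "GREED"   # trigger a bullish alert
    return "NEUTRAL"

print(overall_mood([-0.8, -0.6, -0.7]))  # FEAR
```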
How It Works Behind the Scenes
🧠 The Brain: Google Gemini (via Vertex AI / Gemini API)
🔧 The Tools:
- Python functions to fetch live market data
- Search integrations to gather fresh headlines
- Sentiment scoring logic
⚙️ The Reasoning Layer:
The agent isn’t just scanning keywords — it understands context.
For example:
- “Loss” could mean a stock-price loss
- Or it could mean “data loss,” which shouldn’t affect market sentiment
That sort of nuance is what made the project feel real and meaningful.
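One way to get that nuance is to bake the disambiguation step into the prompt itself, so the model judges market relevance before scoring. The `build_prompt` helper below is a hypothetical sketch of that idea, not the exact prompt my agent uses:

```python
def build_prompt(headline):
    """Build a two-step sentiment prompt: relevance check, then score."""
    return (
        "You are a financial sentiment analyst.\n"
        f"Headline: {headline!r}\n"
        "Step 1: Decide whether the headline is about market or stock performance.\n"
        "Step 2: If it is, rate its sentiment from -1.0 (bearish) to 1.0 (bullish).\n"
        "If it is not market-related, answer 0.0 (no impact).\n"
        "Answer with the number only."
    )

# "Loss" appears in both headlines, but only the first should move the score:
print(build_prompt("ACME posts record quarterly loss"))
print(build_prompt("ACME reports customer data loss incident"))
```

Asking for a relevance decision first is what keeps a keyword like "loss" from being scored out of context.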
🛠️ Challenges & Lessons Learned
Like any good project, this one came with its own learning curve.
- Grounding Is Everything
Hallucinations drop dramatically when you force the agent to rely on real-time tools.
Google Search as a tool was a game changer — it made the system factual, not speculative.
- Memory Needs Limits
During the course, we talked a lot about memory management.
I saw firsthand why it matters:
You don’t want your agent using outdated news or carrying irrelevant info across runs.
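One simple way to enforce that limit is to prune anything older than a cutoff before each run, so stale news never influences the current score. The 24-hour window below is illustrative:

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(hours=24)  # illustrative freshness window

def prune_memory(articles, now=None):
    """Keep only articles published within MAX_AGE of `now`."""
    now = now or datetime.now(timezone.utc)
    return [a for a in articles if now - a["published"] <= MAX_AGE]

now = datetime(2024, 1, 2, 12, tzinfo=timezone.utc)
articles = [
    {"title": "fresh", "published": now - timedelta(hours=2)},
    {"title": "stale", "published": now - timedelta(days=3)},
]
print([a["title"] for a in prune_memory(articles, now)])  # ['fresh']
```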
- Evaluation Isn’t Optional
Day 4 emphasized the importance of actually evaluating agent performance instead of trusting your gut.
I found myself comparing the agent’s sentiment scores against my own assessments of the same headlines and adjusting the prompts until the two lined up consistently.
🎯 Final Thoughts
This intensive may have only lasted five days, but it fundamentally changed how I think about AI. I no longer see LLMs as passive tools — I see them as active systems capable of handling workflows end-to-end.
I plan to keep improving my Market Sentiment Agent, and eventually, I want it to:
- issue more granular alerts
- provide analysis across sectors
- maybe even recommend portfolio adjustments (in a safe, simulated environment!)
If you’re curious about agentic AI, this is the best time to jump in.
The distance between “talking to AI” and “having AI work alongside you” is shrinking faster than we realize.