In this tutorial, we'll build a simple AI agent that can fetch real-time weather information for a given country.
We'll keep it:
- Fully local (LLM via Ollama)
- Real-time (weather via API)
- Agent-based (tool usage with LangChain)

What You'll Build

An agent that can answer:

"What's the weather in India?"

Flow:

```
User → Agent → Tool (Weather API) → Agent → Response
```
Important Reality Check

Local models (via Ollama) cannot browse the internet.
So we solve this by:

- Giving the agent a tool that calls a real API
- Letting the agent decide when to use that tool
Project Structure

```
root/
│
├── main.py
└── projects/
    └── search_agent/
        ├── app.py
        └── tools.py
```
Step 1: Get a Free Weather API Key

Use: https://openweathermap.org/api

- Sign up
- Get your API key
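Before wiring the key into an agent, it helps to confirm it actually works. A minimal sanity check (this assumes you have exported the key as `WEATHER_API_KEY`, the same name the project uses, and that `requests` is installed):

```python
import os

import requests

# Quick sanity check for your OpenWeather key.
api_key = os.getenv("WEATHER_API_KEY")

if not api_key:
    print("WEATHER_API_KEY is not set - skipping the live check.")
else:
    url = f"http://api.openweathermap.org/data/2.5/weather?q=London&appid={api_key}&units=metric"
    resp = requests.get(url, timeout=10)
    # 200 means the key works; 401 usually means a bad or not-yet-activated key.
    print(f"Status: {resp.status_code}")
```

New keys can take a little while to activate, so a 401 right after signup is normal.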
Step 2: Install Dependencies

```shell
pip install langchain langchain-ollama requests python-dotenv
```
Step 3: Add ".env"

```
WEATHER_API_KEY=your_api_key_here
```
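Step 6 below loads this file with python-dotenv's `load_dotenv()`. As a rough mental model, here is what that call does for simple `KEY=VALUE` files (a simplified sketch only; the real library also handles quoting, comments, and interpolation, so use `load_dotenv()` in the actual project):

```python
import os


def load_env_file(path: str = ".env") -> None:
    """Simplified illustration of what load_dotenv() does under the hood."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and malformed lines.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Real environment variables win over .env values.
            os.environ.setdefault(key.strip(), value.strip())
```

After this runs, `os.getenv("WEATHER_API_KEY")` in tools.py sees the key.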
Step 4: Create the Tool (Real API)

projects/search_agent/tools.py
```python
import os

import requests
from langchain.tools import tool


@tool
def get_weather(city: str) -> str:
    """
    Fetches real-time weather data for a given city.
    """
    api_key = os.getenv("WEATHER_API_KEY")
    url = f"http://api.openweathermap.org/data/2.5/weather?q={city}&appid={api_key}&units=metric"
    response = requests.get(url, timeout=10)
    if response.status_code != 200:
        return "Could not fetch weather data."
    data = response.json()
    weather = data["weather"][0]["description"]
    temp = data["main"]["temp"]
    return f"The current weather in {city} is {weather} with a temperature of {temp}°C."
```
What is @tool?
The @tool decorator converts a Python function into a LangChain Tool.
Without @tool:

```python
def get_weather(city):
    ...
```

Just a normal function.

With @tool:

```python
@tool
def get_weather(city):
    ...
```
Now it becomes:

- Discoverable by the agent
- Named and described
- Callable by LLM reasoning

Internally, it gives the agent:

- Tool name: get_weather
- Description (the docstring)
- Input schema
So the agent can decide:

```
Thought: I need weather info
Action: get_weather
Action Input: India
```
Step 5: Build the Agent

projects/search_agent/app.py
```python
from langchain_ollama import ChatOllama
from langchain.agents import initialize_agent, AgentType

from .tools import get_weather


def search_agent():
    print("Weather Agent Started")
    llm = ChatOllama(
        model="llama3",
        temperature=0,
    )
    tools = [get_weather]
    agent = initialize_agent(
        tools,
        llm,
        agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
        verbose=True,
    )
    response = agent.run("What is the weather in India?")
    print("\nFinal Answer:")
    print(response)
```
Step 6: Root Entry Point

main.py
```python
from projects.search_agent.app import search_agent
from dotenv import load_dotenv

load_dotenv()


def main():
    print("Hello from LangChain Course!")
    search_agent()


if __name__ == "__main__":
    main()
```
Step 7: Run the Project

```shell
python main.py
```
Agent Flow Diagram

```
User Question
     ↓
LLM (Reasoning)
     ↓
Decides Tool → get_weather
     ↓
API Call (OpenWeather)
     ↓
Returns Data
     ↓
LLM formats answer
     ↓
Final Output
```
What's Happening Under the Hood

The agent follows the ReAct pattern:

Thought → Action → Observation → Thought → Final Answer
Example:

```
Thought: I need weather info
Action: get_weather
Action Input: India
Observation: The weather is 30°C, cloudy
Final Answer: The weather in India is...
```
Common Mistakes

- Expecting the LLM to fetch real-time data by itself
- Not using tools
- Missing API key
- Weak tool descriptions
Improvements You Can Add

- Country → city mapping
- Multi-step queries (forecast + humidity)
- User input instead of a hardcoded query
- A caching layer
- Error handling + retries
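For the last improvement, a minimal retry sketch (`fetch_with_retry` and its parameters are illustrative names, not part of LangChain; `fetch` is any zero-argument callable, e.g. a lambda wrapping `requests.get`):

```python
import time


def fetch_with_retry(fetch, attempts: int = 3, backoff: float = 1.0):
    """Call fetch() up to `attempts` times with exponential backoff."""
    last_error = None
    for attempt in range(attempts):
        try:
            return fetch()
        except Exception as exc:  # in real code, catch requests.RequestException
            last_error = exc
            time.sleep(backoff * (2 ** attempt))  # 1s, 2s, 4s, ...
    raise last_error
```

Inside get_weather, wrapping the HTTP call as `fetch_with_retry(lambda: requests.get(url, timeout=10))` keeps a single network hiccup from turning into "Could not fetch weather data."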
Key Learnings

- Agents don't "know" real-time data
- Tools bridge the LLM to the real world
- @tool is the gateway to agent capabilities
- The ReAct loop powers decision making
What Next?

You can now build:

- Search agent (Google / Tavily)
- Coding agent (file tools)
- RAG agent (documents)
- Multi-agent system

You've officially moved from:

LLM user → AI Agent builder
Top comments (1)

Neat walkthrough: the @tool decorator explanation with the before/after comparison is the clearest I have seen. One thing worth adding: the agent's tool-calling behavior changes significantly depending on which Ollama model you use. Llama3 handles tool schemas well, but some smaller models silently skip the tool call entirely.