DEV Community

Vikas Choubey

🤖 Build a Weather Search Agent (LangChain + Ollama)

In this tutorial, we’ll build a simple AI agent that can fetch real-time weather information for a given country.

We’ll keep it:

  • 💻 Fully local (LLM via Ollama)
  • 🌐 Real-time (weather via API)
  • 🧠 Agent-based (tool usage with LangChain)

🎯 What You’ll Build

An agent that can answer:

“What’s the weather in India?”

Flow:


User → Agent → Tool (Weather API) → Agent → Response


⚠️ Important Reality

Local models (via Ollama) cannot browse the internet.

So we solve this by:

👉 Giving the agent a tool that calls a real API

👉 Letting the agent decide when to use that tool


🧱 Project Structure


root/
│
├── main.py
└── projects/
    └── search_agent/
        ├── app.py
        └── tools.py


🔑 Step 1: Get a Free Weather API Key

Use: https://openweathermap.org/api

  1. Sign up
  2. Get API key

📦 Step 2: Install Dependencies

pip install langchain langchain-ollama requests python-dotenv

πŸ” Step 3: Add ".env"

WEATHER_API_KEY=your_api_key_here
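Under the hood, load_dotenv() (which we call later in main.py) just reads KEY=VALUE pairs from this file into the process environment. If you are curious what the library does, here is a minimal stdlib-only sketch of that behavior (simplified: no quoting or variable expansion):

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Minimal stand-in for python-dotenv's load_dotenv():
    reads KEY=VALUE lines into os.environ, skipping comments.
    Existing environment variables are not overridden."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

After calling load_env_file(), os.getenv("WEATHER_API_KEY") inside the tool will see your key, exactly as with python-dotenv.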

🛠 Step 4: Create the Tool (Real API)

📄 projects/search_agent/tools.py

import requests
import os
from langchain.tools import tool


@tool
def get_weather(city: str) -> str:
    """
    Fetches real-time weather data for a given city.
    """
    api_key = os.getenv("WEATHER_API_KEY")
    if not api_key:
        return "WEATHER_API_KEY is not set."

    url = (
        "https://api.openweathermap.org/data/2.5/weather"
        f"?q={city}&appid={api_key}&units=metric"
    )

    # A timeout keeps the agent from hanging if the API is slow
    response = requests.get(url, timeout=10)

    if response.status_code != 200:
        return "Could not fetch weather data."

    data = response.json()

    weather = data["weather"][0]["description"]
    temp = data["main"]["temp"]

    return f"The current weather in {city} is {weather} with temperature {temp}°C."
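The tool only reads two fields from OpenWeather's response. Here is the same extraction logic in isolation, against an abbreviated, illustrative payload (values are made up; real responses contain many more fields):

```python
import json

# Abbreviated, made-up OpenWeather-style response for illustration
sample = json.loads("""
{
  "weather": [{"description": "scattered clouds"}],
  "main": {"temp": 30.4}
}
""")

# "weather" is a list, so we take the first entry's description
description = sample["weather"][0]["description"]
temp = sample["main"]["temp"]

print(f"The current weather is {description} with temperature {temp}°C.")
```

This is why the tool indexes data["weather"][0]: the API returns a list of weather conditions even when there is only one.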

🧠 What is @tool?

The @tool decorator converts a Python function into a LangChain Tool.

Without @tool:

def get_weather(city):
    ...

Just a normal function.


With @tool:

@tool
def get_weather(city):
    ...

Now it becomes:

  • Discoverable by the agent
  • Has a name + description
  • Callable by LLM reasoning

Internally, it gives the agent:

  • Tool name: get_weather
  • Description (docstring)
  • Input schema

So the agent can decide:

Thought: I need weather info
Action: get_weather
Action Input: India

βš™οΈ Step 5: Build the Agent

📄 projects/search_agent/app.py

from langchain_ollama import ChatOllama
from langchain.agents import initialize_agent, AgentType
from .tools import get_weather


def search_agent():
    print("🌦 Weather Agent Started")

    # temperature=0 makes tool selection more deterministic
    llm = ChatOllama(
        model="llama3",
        temperature=0
    )

    tools = [get_weather]

    # initialize_agent is deprecated in newer LangChain releases,
    # but it remains the shortest way to wire up a ReAct tool agent
    agent = initialize_agent(
        tools,
        llm,
        agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
        verbose=True
    )

    response = agent.run("What is the weather in India?")

    print("\nFinal Answer:")
    print(response)

🧾 Step 6: Root Entry Point

📄 main.py

from projects.search_agent.app import search_agent
from dotenv import load_dotenv

load_dotenv()


def main():
    print("Hello from LangChain Course!")
    search_agent()


if __name__ == "__main__":
    main()

▶️ Step 7: Run the Project

python main.py

🔄 Agent Flow Diagram

User Question
     ↓
LLM (Reasoning)
     ↓
Decides Tool → get_weather
     ↓
API Call (OpenWeather)
     ↓
Returns Data
     ↓
LLM formats answer
     ↓
Final Output

🧠 What’s Happening Under the Hood

The agent follows the ReAct (Reason + Act) pattern:

Thought → Action → Observation → Thought → Final Answer

Example:

Thought: I need weather info
Action: get_weather
Action Input: India
Observation: The weather is 30°C, cloudy
Final Answer: The weather in India is...
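Frameworks automate this loop, but the mechanics are mostly plain text parsing: the LLM emits an "Action" line, the framework extracts it, runs the matching tool, and feeds the result back. A toy, framework-free sketch (the model output string and the stub tool are made up for illustration):

```python
import re

# Hypothetical model output in ReAct format (illustrative,
# not a real llama3 transcript)
model_output = """Thought: I need weather info
Action: get_weather
Action Input: India"""

def parse_react_step(text: str):
    """Extract the tool name and tool input the model asked for."""
    action = re.search(r"^Action:\s*(.+)$", text, re.MULTILINE)
    action_input = re.search(r"^Action Input:\s*(.+)$", text, re.MULTILINE)
    return action.group(1).strip(), action_input.group(1).strip()

# Stub tool registry standing in for the real get_weather tool
TOOLS = {"get_weather": lambda city: f"(stub) 30°C, cloudy in {city}"}

name, arg = parse_react_step(model_output)
observation = TOOLS[name](arg)
print(f"Observation: {observation}")
```

The real loop then appends that Observation to the prompt and asks the LLM to continue, until it produces a Final Answer instead of another Action.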

⚠️ Common Mistakes

❌ Expecting the LLM to fetch real-time data on its own
❌ Not using tools
❌ Missing API key
❌ Weak tool descriptions


🚀 Improvements You Can Add

  • 🌍 Country → City mapping
  • 🧠 Multi-step queries (forecast + humidity)
  • 💬 User input instead of a hardcoded query
  • 📡 Add a caching layer
  • 🧪 Add error handling + retries

🎯 Key Learnings

  • Agents don’t “know” real-time data
  • Tools bridge LLM → real world
  • @tool is the gateway to agent capabilities
  • The ReAct loop powers decision making

🔥 What Next?

You can now build:

  • 🔎 Search agent (Google / Tavily)
  • 💻 Coding agent (file tools)
  • 📄 RAG agent (documents)
  • 🧠 Multi-agent system

You’ve officially moved from:

👉 LLM user → AI Agent builder 🚀

Top comments (1)

klement Gunndu

Neat walkthrough. The @tool decorator explanation with the before/after comparison is the clearest I have seen. One thing worth adding: the agent's tool-calling behavior changes significantly depending on which Ollama model you use. Llama3 handles tool schemas well, but some smaller models silently skip the tool call entirely.