DEV Community

Aromal
How to Create Your Own Web Search Agent Using LangChain

When you ask ChatGPT or other AI models something like “What’s the latest news?”, they often reply with outdated information.

Why? Because most language models are trained on past data and don’t have real-time internet access.

That’s where LangChain agents + search tools come in.

In this tutorial, we’ll build a search-powered conversational agent that can:

  • Use an LLM for reasoning
  • Fetch real-time web data (via DuckDuckGo search)
  • Remember the context of the conversation

Think of it as your personal AI assistant with live web access. 🔥

Why Use a Search Agent?

Here are some real-world use cases:

  • Latest news & trends → “What’s happening in AI research today?”
  • Market/Stock updates → “What’s the current price of Tesla stock?”
  • Research assistant → “Find me recent articles on climate change policies.”
  • Productivity tools → A chatbot that pulls fresh answers instead of relying only on training data.

In short, this bridges the gap between static AI knowledge and the dynamic internet.

Prerequisites

Before starting, make sure you have:

  • Python 3.9+
  • An OpenRouter API key (free/paid options available)
  • Installed dependencies:
pip install langchain langchain-openai langchain-community duckduckgo-search
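The final script at the end of this post reads the API key from the environment with python-dotenv instead of hard-coding it. If you want to do the same, install python-dotenv as well and put the key in a `.env` file (a minimal sketch; the variable name `OPENROUTER_API_KEY` matches what the final code reads):

```shell
pip install python-dotenv

# Create a .env file next to your script (keep it out of version control):
echo 'OPENROUTER_API_KEY=your_openrouter_api_key_here' > .env
```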

Step 1: Import Required Libraries

We’ll need LangChain’s LLM wrapper, a search tool, and agent utilities.

from langchain_openai import ChatOpenAI
from langchain_community.tools import DuckDuckGoSearchRun
from langchain.tools import Tool
from langchain.memory import ConversationBufferMemory
from langchain.agents.agent_types import AgentType
from langchain.agents import initialize_agent

Step 2: Initialize the LLM (via OpenRouter)

Here we use the DeepSeek model served via OpenRouter.
You can swap this with gpt-4, gpt-3.5-turbo, or other supported models.

llm = ChatOpenAI(
    model="deepseek/deepseek-chat-v3-0324:free",
    base_url="https://openrouter.ai/api/v1",
    api_key="your_openrouter_api_key_here",
)

Why OpenRouter?

  • It supports multiple models (DeepSeek, Mistral, GPT, etc.)
  • You don’t have to lock into one vendor
  • Some models are free!

Step 3: Add a Search Tool

To fetch live results, we’ll use DuckDuckGoSearchRun.

search = DuckDuckGoSearchRun()

tool = [
    Tool(
        name="search",
        func=search.run,
        description="Use this tool to fetch real-time information from the web.",
    )
]

Why DuckDuckGo?

  • It’s lightweight and privacy-friendly
  • Works well for quick factual lookups
  • No API key required
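Conceptually, a Tool is just a named function plus a description that the LLM reads when deciding whether to call it. Here is a plain-Python sketch of that idea (using a hypothetical `fake_search` stand-in instead of a real network call; this is not LangChain's internal implementation):

```python
def fake_search(query: str) -> str:
    # Stand-in for DuckDuckGoSearchRun().run: returns a text blob of results
    return f"Top results for: {query}"

# A "tool" boils down to a name, a callable, and a description the agent reasons over
tool_registry = {
    "search": {
        "func": fake_search,
        "description": "Use this tool to fetch real-time information from the web.",
    }
}

# The agent picks a tool by name and calls its function with a query string
result = tool_registry["search"]["func"]("Tesla stock price")
print(result)  # Top results for: Tesla stock price
```

The description matters more than it looks: the agent chooses tools largely by matching the user's request against these descriptions, so keep them specific.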

Step 4: Add Memory (for Conversations)

Without memory, the agent forgets past messages.
We’ll use ConversationBufferMemory so it can keep context.

memory = ConversationBufferMemory(memory_key="chat_history")
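Conceptually, ConversationBufferMemory just appends every exchange to a growing transcript that gets re-injected into the prompt on each turn. A rough plain-Python sketch of that behavior (a toy illustration, not LangChain's actual implementation):

```python
class ToyBufferMemory:
    """Toy conversation buffer: stores turns, replays them as prompt context."""

    def __init__(self):
        self.history = []

    def save_context(self, user_msg: str, ai_msg: str) -> None:
        self.history.append(("Human", user_msg))
        self.history.append(("AI", ai_msg))

    def load_history(self) -> str:
        # Flatten the transcript into the string a prompt template would receive
        return "\n".join(f"{role}: {msg}" for role, msg in self.history)

toy_memory = ToyBufferMemory()
toy_memory.save_context("What's the capital of France?", "Paris.")
toy_memory.save_context("What's its population?", "About 2.1 million.")
print(toy_memory.load_history())
```

Because the whole transcript is replayed each turn, follow-up questions like "What's *its* population?" resolve correctly, at the cost of a prompt that grows with conversation length.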

Step 5: Initialize the Agent

Now, we combine everything into a conversational agent.

agent = initialize_agent(
    llm=llm,
    tools=tool,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    memory=memory,
    verbose=True,
    handle_parsing_errors=True
)
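Behind initialize_agent, the conversational ReAct agent runs a loop: the LLM either names a tool to call or emits a final answer, and each tool's output is appended as an "Observation" and fed back. A simplified plain-Python sketch of that loop (`scripted_llm` and the dict format are illustrative stand-ins, not LangChain internals):

```python
def react_loop(question, llm_step, tools, max_steps=5):
    """Simplified ReAct loop: ask the LLM, run the tool it picks, feed back the result."""
    scratchpad = f"Question: {question}\n"
    for _ in range(max_steps):
        decision = llm_step(scratchpad)  # LLM returns an action or a final answer
        if decision["type"] == "final":
            return decision["answer"]
        observation = tools[decision["tool"]](decision["input"])
        scratchpad += f"Action: {decision['tool']}\nObservation: {observation}\n"
    return "Stopped: step limit reached."

# Scripted stand-in LLM: search first, then answer once it sees an observation
def scripted_llm(scratchpad):
    if "Observation:" not in scratchpad:
        return {"type": "action", "tool": "search", "input": "latest AI news"}
    return {"type": "final", "answer": "Here is what the search found."}

toy_tools = {"search": lambda q: f"Results for {q}"}
print(react_loop("What's new in AI?", scripted_llm, toy_tools))
# Here is what the search found.
```

In the real agent, `llm_step` is a prompted LLM call and the scratchpad is the ReAct prompt; the step cap mirrors the agent's `max_iterations` safeguard, and `handle_parsing_errors=True` covers the case where the LLM's reply fits neither format.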

Final Code

from langchain_openai import ChatOpenAI
from langchain_community.tools import DuckDuckGoSearchRun
from langchain.tools import Tool
from langchain.memory import ConversationBufferMemory
from langchain.agents.agent_types import AgentType
from langchain.agents import initialize_agent

from dotenv import load_dotenv
import os


load_dotenv()

def webSearchAgent(question):

    llm = ChatOpenAI(
        model="deepseek/deepseek-chat-v3-0324:free",
        base_url="https://openrouter.ai/api/v1",
        api_key=os.getenv("OPENROUTER_API_KEY"),
    )

    search = DuckDuckGoSearchRun()

    tool = [
        Tool(
            name="search",
            func=search.run,
            description="Use this tool to fetch real-time information from the web.",
        )
    ]

    memory = ConversationBufferMemory(memory_key="chat_history")

    agent = initialize_agent(
        llm=llm,
        tools=tool,
        agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
        memory=memory,
        verbose=True,
        handle_parsing_errors=True,
    )

    result = agent.run(question)
    print(f"Agent:\n\n{result}")


webSearchAgent("Give me some latest news about AI")

Step 6: Run the Agent 🎉

Finally, let’s test it out by calling the function with a question:

webSearchAgent("Give me the latest news")

Example Run

Execution_result1

And that’s it: a real-time AI search assistant built with LangChain!
This approach is powerful because it combines:

  • Reasoning power of LLMs
  • Real-time knowledge via search
  • Conversation memory

With just ~40 lines of code, you’ve got the foundation for an AI research assistant, news bot, or productivity tool.

💡 Did you find this useful?
Drop a ⭐ on the repo and follow me on GitHub:
https://github.com/DeveloperAromal/Websearch-Agent

Happy coding!
