Have you ever created a chatbot that just didn’t quite hit the mark? You pose a question, it delivers a one-word response, and if it misses the context, the conversation goes nowhere. Imagine if you could create an AI that can think, strategize, and utilize tools to tackle problems on its own. That’s the essence of Agentic AI.
In this tutorial, we’re going beyond basic chatbots. We’ll develop a Research Agent capable of independently using a web search tool to gather the latest information and compile a thorough answer for us. We’ll leverage LangGraph, a robust library from LangChain specifically crafted for building dynamic, agentic workflows.
By the time you finish this guide, you’ll have a fully functional AI agent and a solid grasp of the key concepts that drive its capabilities.
What Defines an "Agentic" AI?
Unlike a straightforward call to a Large Language Model (LLM), an agent functions in a continuous loop:
1. Plan: The LLM assesses the user’s question along with its previous actions to determine the next step.
2. Act: It performs an action, such as conducting a web search, executing a piece of code, or making an API call.
3. Observe: The agent reviews the results gained from its action to determine if it has sufficient information to respond effectively.
4. Loop: This process is repeated—steps 1 through 3—until the agent is ready to provide a final answer.
This "Plan-Act-Observe" framework forms the backbone of agentic reasoning, and LangGraph excels in facilitating these loops.
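Before wiring up LangGraph, the loop itself can be sketched in plain Python. The `plan` and `act` helpers below are hypothetical stand-ins: in a real agent, `plan` would be backed by an LLM and `act` by actual tools such as a web search.

```python
# A minimal sketch of the Plan-Act-Observe loop.
# `plan` and `act` are hypothetical stand-ins: a real agent would back
# `plan` with an LLM call and `act` with real tools (search, code, APIs).

def plan(question: str, history: list) -> str:
    # Decide the next action; stop once we have observed at least one result.
    return "finish" if history else "search"

def act(action: str, question: str) -> str:
    # Execute the chosen action (a stand-in for a real web search).
    return f"results for '{question}'"

def run_agent(question: str) -> str:
    history = []
    while True:
        action = plan(question, history)      # 1. Plan
        if action == "finish":
            return f"Answer based on {history[-1]}"
        observation = act(action, question)   # 2. Act
        history.append(observation)           # 3. Observe, then loop

print(run_agent("latest quantum computing trends"))
```

The real agent we build below follows the same shape; LangGraph just makes the nodes and the transitions between them explicit.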
Introducing Our Project: A Research Agent
We are developing an agent designed to tackle intricate questions requiring the most current information. For instance:
User Query: "What are the latest trends in quantum computing for 2024?"
Agent's Thought Process:
- "I should search for 'quantum computing trends 2024'."
- [Executes the search tool and retrieves results]
- "I have access to multiple articles. I need to read through them and create a clear, concise summary for the user."
- [Delivers a well-structured response based on the gathered information]
Prerequisites:
- Python 3.8 or higher
- An OpenAI API key (you can obtain one here)
- A Tavily API key (a fast and cost-effective web search API designed for AI agents—get a free key here)
Step 1: Set Up Your Environment
First, create a new directory and install the required packages.
```bash
pip install langgraph langchain-openai tavily-python python-dotenv
```
Create a .env file to securely store your API keys. Never hardcode them!
```bash
# .env
OPENAI_API_KEY=your_openai_api_key_here
TAVILY_API_KEY=your_tavily_api_key_here
```
Step 2: The Building Blocks of Our Agent
Create a new file called research_agent.py. Let's start by importing the necessary modules and loading our environment.
```python
# research_agent.py
from langchain_openai import ChatOpenAI
from tavily import TavilyClient
from langgraph.graph import END, StateGraph
from typing import TypedDict
from dotenv import load_dotenv
import os

# Load environment variables
load_dotenv()

# Initialize our LLM and tools
llm = ChatOpenAI(model="gpt-4-turbo-preview")  # or "gpt-3.5-turbo" for a cheaper option
tavily = TavilyClient(api_key=os.getenv("TAVILY_API_KEY"))
```
Step 3: Define the Agent's State
The agent needs a "memory" to know what it has done and what it needs to do next. In LangGraph, we define this as a State. Our state will track:
- The user's original question
- The search results it has gathered
- The answer it has compiled
```python
# Define the State object
class AgentState(TypedDict):
    question: str
    search_results: str
    answer: str

# Define our tool
def search_tool(question: str) -> str:
    """Call the Tavily web search tool and flatten the results into a string."""
    response = tavily.search(query=question, max_results=3)
    return "\n\n".join(
        f"{r['title']} ({r['url']}): {r['content']}" for r in response["results"]
    )
```
Step 4: Define the Agent's Nodes
In LangGraph, a workflow is built using Nodes (functions) and Edges (connections between them). Our agent will have two key nodes:
- The search_node: decides on a search query and executes the search tool.
- The answer_node: synthesizes the final answer from the search results.
```python
# Node 1: Search the web
def search_node(state: AgentState):
    print("Planning a search...")
    # The LLM decides the best search query based on the question
    query = llm.invoke(
        f"Based on the user's question, generate a single, optimal web search query. "
        f"Question: {state['question']}"
    ).content
    print(f"Searching for: '{query}'")
    results = search_tool(query)
    return {"search_results": results}

# Node 2: Generate the final answer
def answer_node(state: AgentState):
    print("✍️ Writing answer...")
    # The LLM synthesizes the search results into a coherent answer
    answer = llm.invoke(
        f"""
User Question: {state['question']}

Search Results:
{state['search_results']}

Please write a comprehensive and detailed answer based SOLELY on the search results provided above. Cite your sources.
"""
    ).content
    return {"answer": answer}
```
Step 5: Build the Graph
This is where the magic happens. We create the graph, add our nodes, define the flow, and compile it into a runnable application.
```python
# Create the graph
workflow = StateGraph(AgentState)

# Add our nodes
workflow.add_node("search", search_node)
workflow.add_node("answer", answer_node)

# Define the flow: Start -> Search -> Answer -> End
workflow.set_entry_point("search")
workflow.add_edge("search", "answer")
workflow.add_edge("answer", END)

# Compile the graph into a runnable app
app = workflow.compile()
```
Step 6: Run Your First AI Agent
Now for the fun part. Let's execute our agent with a question that requires recent data.
```python
if __name__ == "__main__":
    # Define your question
    question = "What are the latest breakthroughs in solar cell efficiency announced in 2024?"

    # Run the agent!
    print(f"Starting agent for question: {question}\n")
    final_state = app.invoke({"question": question, "search_results": "", "answer": ""})

    # Print the results
    print("\n" + "=" * 50)
    print("FINAL ANSWER:")
    print("=" * 50)
    print(final_state["answer"])
```
Run the script:
```bash
python research_agent.py
```
Sit back and watch your agent think, act, and answer!
Understanding the Output
You should see an output in your terminal that traces the agent's thought process:
```text
Starting agent for question: What are the latest breakthroughs in solar cell efficiency announced in 2024?

Planning a search...
Searching for: 'solar cell efficiency breakthroughs 2024'
✍️ Writing answer...

==================================================
FINAL ANSWER:
==================================================
Based on recent search results, several significant breakthroughs in solar cell efficiency have been announced in 2024...
```