DEV Community

Midas126
Building Your Own AI Agent: A Practical Guide with LangGraph

From Chatbots to Autonomous Agents: The Next AI Frontier

We've all interacted with AI chatbots. You ask a question, it provides an answer. But what if your AI could not just answer, but act? What if it could browse the web, execute code, update a database, or orchestrate a multi-step workflow autonomously? This is the promise of AI agents—and with today's tooling, building one is more accessible than you might think.

While large language models (LLMs) excel at understanding and generating text, they lack the ability to perform actions in the real world. AI agents bridge this gap by combining LLMs with tools, memory, and decision-making logic. In this guide, we'll move beyond simple API calls and build a functional research agent that can autonomously gather information, analyze it, and compile a report.

Why LangGraph? The Power of State Machines

You could build a simple agent with a while loop and some conditional logic. But as complexity grows, you'll quickly face spaghetti code. This is where LangGraph shines—a library from LangChain for building stateful, multi-actor applications.

Think of LangGraph as a framework for creating directed graphs where nodes are functions (or "tools") and edges define the flow. The agent's state is passed through this graph, with the LLM acting as the router that decides which path to take next. This structure gives us several advantages:

  • Explicit control flow: Visualize your agent's decision process
  • Persistent state: Maintain context across multiple steps
  • Human-in-the-loop: Easily add approval steps
  • Parallel execution: Run certain tools simultaneously
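
To make the graph idea concrete, here is a framework-free sketch (plain Python, no LangGraph, all names illustrative) of a state dict flowing through node functions along an edge map:

```python
# Conceptual sketch only: a "graph" as node functions plus an edge map.
# LangGraph adds reducers, checkpointing, and conditional routing on top.

def greet(state):
    return {**state, "message": f"Hello, {state['name']}!"}

def shout(state):
    return {**state, "message": state["message"].upper()}

nodes = {"greet": greet, "shout": shout}
edges = {"greet": "shout", "shout": None}  # None marks the end of the flow

def run(entry, state):
    """Walk the graph from the entry node, threading state through each node."""
    current = entry
    while current is not None:
        state = nodes[current](state)
        current = edges[current]
    return state

print(run("greet", {"name": "world"})["message"])  # HELLO, WORLD!
```

An agent framework is essentially this loop plus a smarter `edges`: instead of a fixed map, an LLM picks the next node at each step.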

Building a Research Agent: Step by Step

Let's build a practical example: an agent that researches a given topic and creates a concise briefing document. Our agent will need to:

  1. Search the web for current information
  2. Extract key points from relevant sources
  3. Synthesize findings into a structured report

Step 1: Setting Up Our Environment

# requirements.txt
# langchain>=0.1.0
# langchain-openai
# langchain-community
# langgraph
# duckduckgo-search
# python-dotenv

from typing import TypedDict, List, Annotated
from langchain_openai import ChatOpenAI
from langchain_community.tools import DuckDuckGoSearchRun
from langgraph.graph import StateGraph, END
from langchain_core.messages import HumanMessage, SystemMessage
from dotenv import load_dotenv

load_dotenv()  # Loads OPENAI_API_KEY from a .env file

# Initialize our LLM and search tool
llm = ChatOpenAI(model="gpt-4-turbo", temperature=0)
search_tool = DuckDuckGoSearchRun()

Step 2: Defining Our Agent State

The state is the single source of truth that flows through our graph. We define it as a TypedDict:

class AgentState(TypedDict):
    """State for our research agent."""
    topic: str
    search_results: List[str]
    key_points: List[str]
    report: str
    iterations: Annotated[int, lambda x, y: x + y]  # Updates are summed into a running count
    max_iterations: int

Notice the Annotated type for iterations. LangGraph uses this to define how state updates merge—here, we're specifying that iterations should be summed, effectively creating a counter.
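
To see exactly what that reducer does, here is a tiny framework-free illustration of the merge rule LangGraph applies when a node returns a value for a reduced key:

```python
import operator

# A reducer merges a node's returned value into the existing state:
#   new_value = reducer(existing_value, returned_value)
reducer = operator.add  # equivalent to lambda x, y: x + y

iterations = 0
for returned in (1, 1, 1):  # three node runs, each returning {"iterations": 1}
    iterations = reducer(iterations, returned)

print(iterations)  # 3 — updates are summed, not overwritten
```

Because every returned value for `iterations` passes through the reducer, a node should only include that key when it actually intends to increment the counter.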

Step 3: Creating Our Graph Nodes

Each node is a function that takes the current state, performs an action, and returns only the keys it changed. LangGraph merges that partial update into the state; a key with a reducer (like iterations) is combined with the existing value rather than overwritten, which is why nodes should not echo the whole state back: re-returning the old iterations value would get summed in again and double-count.

def search_node(state: AgentState) -> dict:
    """Search for information about the topic."""
    if state["iterations"] >= state["max_iterations"]:
        return {"report": "Maximum research iterations reached."}

    query = f"latest developments {state['topic']} 2024"
    results = search_tool.invoke(query)

    # Keep only the most relevant results
    search_results = results.split("\n")[:5] if results else []

    return {
        "search_results": search_results,
        "iterations": 1  # The reducer adds this to the running count
    }

def analyze_node(state: AgentState) -> dict:
    """Extract key points from search results."""
    if not state["search_results"]:
        return {"key_points": ["No results to analyze"]}

    analysis_prompt = f"""
    Analyze these search results about {state['topic']}:
    {chr(10).join(state['search_results'])}

    Extract 3-5 key points. Be concise and factual.
    """

    messages = [
        SystemMessage(content="You are a research analyst."),
        HumanMessage(content=analysis_prompt)
    ]

    response = llm.invoke(messages)
    key_points = [line for line in response.content.split("\n") if line.strip()]

    return {"key_points": key_points}

def report_node(state: AgentState) -> dict:
    """Synthesize findings into a report."""
    report_prompt = f"""
    Create a brief research report on {state['topic']} based on these key points:
    {chr(10).join(state['key_points'])}

    Structure your report with:
    1. Executive Summary
    2. Key Findings
    3. Implications
    4. Sources Considered

    Keep it under 500 words.
    """

    messages = [
        SystemMessage(content="You are a technical writer."),
        HumanMessage(content=report_prompt)
    ]

    response = llm.invoke(messages)

    return {"report": response.content}

Step 4: Wiring It All Together

Now we create our graph and define the flow:

# Initialize the graph
workflow = StateGraph(AgentState)

# Add nodes
workflow.add_node("search", search_node)
workflow.add_node("analyze", analyze_node)
workflow.add_node("report", report_node)

# Define the flow
workflow.set_entry_point("search")
workflow.add_edge("search", "analyze")
workflow.add_edge("analyze", "report")
workflow.add_edge("report", END)

# Compile the graph
app = workflow.compile()

Step 5: Running Our Agent

# Initialize our state
initial_state = AgentState(
    topic="retrieval augmented generation",
    search_results=[],
    key_points=[],
    report="",
    iterations=0,
    max_iterations=3
)

# Execute the graph
final_state = app.invoke(initial_state)

print("Research Complete!")
print(f"Topic: {final_state['topic']}")
print(f"Iterations: {final_state['iterations']}")
print("\n" + "="*50 + "\n")
print(final_state['report'])

Taking It Further: Advanced Agent Patterns

Our basic agent works, but production agents need more sophistication. Here are two powerful patterns you can implement:

Pattern 1: Conditional Routing with LLM as Decider

Instead of linear flow, let the LLM decide the next step:

from langgraph.graph import MessagesState

class RouterState(MessagesState):
    next: str

def router_node(state: RouterState):
    """LLM decides which node to run next, based on the conversation so far."""
    instructions = """Based on the conversation, what should we do next?
    Options:
    - search: Need more information
    - analyze: Have enough data to analyze
    - report: Ready to compile findings
    - end: Task is complete

    Return only the option name."""

    # Pass the conversation history plus the routing instructions to the LLM
    response = llm.invoke(state["messages"] + [HumanMessage(content=instructions)])
    return {"next": response.content.strip()}

# Now you can create branches in your graph
workflow.add_conditional_edges(
    "router",
    lambda state: state["next"],
    {
        "search": "search",
        "analyze": "analyze", 
        "report": "report",
        "end": END
    }
)
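
In practice, LLMs occasionally return something other than a bare option name, so it's worth normalizing the router's answer before using it as an edge key. A small helper you might add (the function name and fallback choice are illustrative, not part of LangGraph):

```python
VALID_ROUTES = {"search", "analyze", "report", "end"}

def normalize_route(raw: str, default: str = "end") -> str:
    """Map a raw LLM reply onto a known route name, falling back to `default`."""
    cleaned = raw.strip().lower().rstrip(".")
    if cleaned in VALID_ROUTES:
        return cleaned
    # Tolerate replies like "Option: search" or "We should report now"
    for route in VALID_ROUTES:
        if route in cleaned:
            return route
    return default

print(normalize_route("  Search. "))       # search
print(normalize_route("no idea, sorry"))   # end
```

You would call this on `response.content` inside the router node, so an unexpected reply routes the graph to a safe terminal state instead of raising a `KeyError` in the conditional edges.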

Pattern 2: Parallel Execution with Human Oversight

Some tasks can run in parallel, and sometimes you want human approval:

from langgraph.checkpoint.memory import MemorySaver

# Add persistence
memory = MemorySaver()
app = workflow.compile(checkpointer=memory)

# Config for human-in-the-loop
config = {"configurable": {"thread_id": "research_1"}}

# Run until human input needed
for event in app.stream(initial_state, config=config, stream_mode="values"):
    if "report" in event and event["report"]:
        print("Draft report ready for review:")
        print(event["report"])

        # Human can modify state before continuing
        approval = input("Approve? (y/n/modify): ")
        if approval.lower() == "modify":
            # Push only the changed key back into the checkpointed state
            # (assumes the state schema includes a human_feedback field)
            feedback = input("Enter your feedback: ")
            app.update_state(config, {"human_feedback": feedback})

Best Practices for Production Agents

  1. Add validation: Validate tool inputs and outputs to catch errors early
  2. Implement rate limiting: Respect API limits, especially with paid services
  3. Add observability: Log decisions, tool usage, and token consumption
  4. Test thoroughly: Create unit tests for each node and integration tests for flows
  5. Plan for failures: Add retry logic and fallback mechanisms
  6. Consider security: Sanitize inputs, especially when executing code
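
As a sketch of point 5, here is a small retry decorator with exponential backoff that you could wrap around flaky tool calls; the names and parameters are illustrative, not from any library:

```python
import time
import functools

def with_retries(max_attempts: int = 3, base_delay: float = 0.5):
    """Retry a function on exception, doubling the delay between attempts."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            delay = base_delay
            for attempt in range(1, max_attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts:
                        raise  # Out of attempts: surface the error
                    time.sleep(delay)
                    delay *= 2
        return wrapper
    return decorator

@with_retries(max_attempts=3, base_delay=0.1)
def flaky_search(query: str) -> str:
    # In the agent above this would call search_tool.invoke(query)
    return f"results for {query}"

print(flaky_search("langgraph"))  # results for langgraph
```

Wrapping the tool call rather than the whole node keeps retries close to the unreliable operation, so a transient network error doesn't re-run the LLM steps.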

Your Next Steps

We've built a functional research agent, but this is just the beginning. Consider extending it with:

  • Multi-source research: Add arXiv, Wikipedia, or news API tools
  • Citation tracking: Automatically track and cite sources
  • Collaborative agents: Multiple specialized agents working together
  • Long-term memory: Store findings in a vector database for future reference

The true power of AI agents lies in their ability to automate complex workflows that previously required human intervention. Start small with a specific use case, instrument everything, and iterate based on real usage.

Ready to build? Fork the complete example code and try modifying it for your own use case. Share what you build—I'd love to see how you're applying agentic AI to real problems!

Remember: The best agents don't just automate tasks—they augment human capabilities. Build tools that make you more effective, not ones that replace your judgment entirely.
