DEV Community

M TOQEER ZIA

Building AI Agents with LangGraph: A Complete Beginner's Guide

1. What Is This Project About?

This project is about learning to build intelligent AI agents using LangGraph and LangChain. These agents can:

  • Have conversations that remember context
  • Use tools (like looking up stock prices)
  • Make decisions based on what they're asked
  • Process information step-by-step

Think of an AI agent like a smart assistant that can think through problems. It doesn't just give you an answer; it can break down the problem, call functions when needed, and adjust based on the results.

The Problem It Solves

Without AI agents, an AI model like ChatGPT can only respond based on what it's trained on. It can't look up real-time information or use external tools. AI agents solve this by letting the model decide when and how to use tools, then incorporating the results into its response.

Why LangGraph?

  • LangChain is a framework for building AI applications (good for simple tasks)
  • LangGraph adds the ability to create workflows - sequences of steps that the AI can navigate intelligently
  • Think of it as: LangChain = building blocks, LangGraph = blueprint for arranging those blocks

2. Core Concepts Explained (Simple Language)

What Is Agentic AI?

Agentic AI is AI that can make decisions and take actions on its own, not just answer questions.

Example:

  • Regular AI: You ask "What's the weather?" → It gives a generic response based on training data
  • Agentic AI: You ask "What's the weather?" → It calls a weather tool → It gets real data → It gives you today's specific weather

Key trait: An agent can choose what to do next based on the situation.

What Is LangChain?

LangChain is a toolkit for building AI applications. It helps with:

  • Connecting to different AI models (OpenAI, Google, etc.)
  • Managing prompts and messages
  • Creating functions that AI can call
  • Chaining operations together

Think of it as LEGO blocks for AI - basic pieces you can connect.

What Is LangGraph?

LangGraph is a workflow engine built on top of LangChain. It lets you:

  • Create nodes (steps/actions)
  • Connect nodes with edges (paths between steps)
  • Make decisions about which path to take
  • Build complex workflows that agents can navigate

Think of it as a flowchart that AI can follow and make decisions within.

Difference: LangChain vs LangGraph

| Aspect | LangChain | LangGraph |
| --- | --- | --- |
| Purpose | Building AI applications | Creating AI workflows/graphs |
| Structure | Linear chains | Non-linear graphs with decisions |
| Use Case | Simple Q&A, text processing | Agents with tools, multi-step tasks |
| Control Flow | Sequential | Decision-based (if-else paths) |
| Memory | Basic conversation history | Built-in state management |
| Complexity | Lower | Higher, more powerful |
| Example | Simple chatbot | Chatbot that uses tools and decides |

In simple terms: LangChain is like a recipe, LangGraph is like a flowchart the AI follows.

What Is a Graph (In This Context)?

A graph is a way to show connections between things:

  • Circles = nodes (actions/steps)
  • Arrows = edges (connections between steps)
  • Decisions = conditional edges (if-else paths)
START → Node1 → Node2 → END
          ↓
        Decision
          ↓
        Node3 → END
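Before touching LangGraph itself, the same flowchart can be sketched in plain Python. This is a framework-free illustration (all node and field names here are made up), just to show that nodes are functions, edges decide what runs next, and a decision picks between paths:

```python
def node1(state):
    state["steps"].append("node1")
    return state

def node2(state):
    state["steps"].append("node2")
    return state

def node3(state):
    state["steps"].append("node3")
    return state

def decide(state):
    # The decision (conditional edge): pick a path based on the state
    return "node2" if state["take_top_path"] else "node3"

def run(state):                      # START
    state = node1(state)
    if decide(state) == "node2":
        state = node2(state)
    else:
        state = node3(state)
    return state                     # END

result = run({"steps": [], "take_top_path": True})
```

LangGraph does the same thing, but with a declarative builder, shared state management, and built-in decision helpers.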

What Are Nodes?

A node is a function that does something. In your code:

def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}

This is a node. It receives the current state, processes it, and returns an updated state.

Nodes are like workers in a factory - each does a specific job.

What Are Edges?

An edge is a connection between nodes. It defines the path the data takes.

Types of edges:

  • Regular edge: Always goes from Node A to Node B
  • Conditional edge: Chooses which node to go to based on a decision
builder.add_edge(START, "chatbot")  # Regular edge
builder.add_conditional_edges('chatbot', tools_condition)  # Conditional edge

State Management

State is the data that flows through your graph. Think of it as a backpack that each node can look at and modify.

Your state definition:

class State(TypedDict):
    messages: Annotated[list, add_messages]

This means: "We're carrying messages with us. When we add new messages, append them to the list (don't replace)."

Why Annotated[list, add_messages]?

  • list = the type of data (a list)
  • add_messages = how to update it (append instead of replace)

This is important because with conversations, you want to keep all previous messages, not discard them.
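A toy reducer makes the append-vs-replace distinction concrete. This is a plain-Python stand-in, not the real add_messages (which also handles message IDs and LangChain message objects):

```python
def append_reducer(existing: list, new: list) -> list:
    # What add_messages does conceptually: append, don't replace
    return existing + new

def replace_reducer(existing: list, new: list) -> list:
    # What a plain `messages: list` field would do: overwrite
    return new

history = [{"role": "user", "content": "My name is Alice"}]
update = [{"role": "assistant", "content": "Nice to meet you, Alice!"}]

kept = append_reducer(history, update)   # both messages survive
lost = replace_reducer(history, update)  # the first message is gone
```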


3. Step-by-Step Code Explanation

Let me walk through your most complete example: tools_memory_langsmith_trace.ipynb

Part 1: Imports and Setup

from langchain.chat_models import init_chat_model
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_core.tools import tool
from langgraph.prebuilt import ToolNode, tools_condition
from langgraph.checkpoint.memory import MemorySaver

What each import does:

  • init_chat_model - Initialize an AI model (Gemini, ChatGPT, etc.)
  • Annotated - Let us add extra information to types
  • TypedDict - Define what data we're carrying (the "backpack")
  • StateGraph - The workflow builder
  • START, END - Special nodes marking beginning and end
  • add_messages - How to merge messages in our state
  • tool - Decorator to make functions callable by AI
  • ToolNode - Special node that runs tools
  • tools_condition - Built-in decision: "Does the AI want to use tools?"
  • MemorySaver - Save conversation history between runs

Why needed: These give us everything we need to build an AI agent with tools and memory.

Part 2: Memory Setup

memory = MemorySaver()

What this does: Creates a memory storage object that saves conversation history.

Why it's needed: Without memory, each conversation starts fresh. With it, the AI remembers previous messages.

Part 3: Load Environment Variables

from dotenv import load_dotenv
load_dotenv()

What this does: Loads API keys from a .env file (like your OpenAI or Google API key).

Why it's needed: Don't hardcode secret keys in your code.

Part 4: Define the State (The Backpack)

class State(TypedDict):
    messages: Annotated[list, add_messages]

What this means:

  • We're creating a data structure called State
  • It has one piece of data: messages
  • messages is a list
  • When we add messages, use add_messages (which appends, not replaces)

Why it's structured this way:

  • Keeps conversation history - All messages stay in the list
  • Type-safe - Python knows what we expect
  • Extensible - Can add more fields like user_id, timestamp, etc.

Part 5: Define a Tool

@tool
def get_stock_price(symbol: str) -> float:
    """Return the current price of a stock given the stock symbol

    :param symbol: stock symbol
    :return: current price of the stock
    """
    return {
        "MSFT": 200.3,
        "AAPL": 100.4,
        "AMZN": 150.0,
        "RIL": 87.6
    }.get(symbol, 0.0)

What this does:

  • @tool decorator makes this function callable by the AI
  • Takes a stock symbol (like "MSFT")
  • Returns a price from the dictionary
  • If symbol not found, returns 0.0

Why it's important:

  • The AI can decide to use this tool when relevant
  • The AI sees the docstring (description) and knows what it does
  • AI calls it with the right parameters

In real life: This would call an API to get real stock prices. Here it's mocked for learning.
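For example, a production version might try a real quotes API and keep the mock as a fallback. The endpoint URL, response shape, and STOCK_API_KEY variable below are hypothetical placeholders; substitute your provider's actual API:

```python
import json
import os
import urllib.request

MOCK_PRICES = {"MSFT": 200.3, "AAPL": 100.4, "AMZN": 150.0, "RIL": 87.6}

def get_stock_price(symbol: str) -> float:
    """Return the current price of a stock given the stock symbol."""
    api_key = os.environ.get("STOCK_API_KEY")
    if not api_key:
        # No key configured: fall back to the tutorial's mocked prices
        return MOCK_PRICES.get(symbol, 0.0)
    # Hypothetical endpoint, shown only to illustrate the shape of a real call
    url = f"https://api.example.com/v1/quote?symbol={symbol}&apikey={api_key}"
    with urllib.request.urlopen(url) as resp:
        return float(json.load(resp)["price"])
```

The @tool decorator would wrap this exactly as in the tutorial; only the body changes.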

Part 6: Create Tools List and Bind to LLM

tools = [get_stock_price]

llm = init_chat_model("google_genai:gemini-2.0-flash")
llm_with_tools = llm.bind_tools(tools)

What this does:

  1. Put all tools in a list
  2. Initialize Google's Gemini model
  3. Bind tools to the model - tell the model which tools are available

What bind_tools means:
The AI model now knows about these tools. When you ask it something, it can decide to call a tool and tell us the tool name and parameters.

Why it's needed:

  • Without binding, the model doesn't know tools exist
  • With binding, the model can intelligently use them

Part 7: Create a Chatbot Node

def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}

What this does:

  1. Takes the current state (with all previous messages)
  2. Sends all messages to the LLM with tools available
  3. Gets back a response (or tool call)
  4. Returns it wrapped in a dict with the key messages

Why the wrapping?
The state system expects returns in dict format to merge them properly.

The flow: Old messages + invoke → New response → Add to state

Part 8: Build the Graph - Create Builder

builder = StateGraph(State)

What this does: Create a blank workflow blueprint using our State definition.

Think of it as: Getting a blank flowchart template.

Part 9: Add Nodes

builder.add_node(chatbot)
builder.add_node("tools", ToolNode(tools))

What this does:

  1. Add the chatbot function as a node
  2. Add a special "tools" node that knows how to run our tools

The ToolNode is special:

  • It's built-in to LangGraph
  • It knows how to execute tool calls
  • When the LLM says "call get_stock_price with MSFT", ToolNode handles it

Part 10: Add Edges - Create Paths

builder.add_edge(START, "chatbot")                    # Always start at chatbot
builder.add_conditional_edges('chatbot', tools_condition)  # Decision point
builder.add_edge('tools', 'chatbot')                  # After tools, go back to chatbot
builder.add_edge("chatbot", END)                      # Can end here

Let's trace the flow:

  1. START → chatbot - Always start by calling the chatbot node
  2. chatbot → decision - After chatbot runs, ask: "Does the AI want to use tools?"
    • If YES → go to tools node
    • If NO → go to END
  3. tools → chatbot - After running tools, go back to chatbot
  4. chatbot → END - From chatbot, can reach END

Why this structure?

  • If AI doesn't need tools, it responds and we're done
  • If AI needs tools, we get the result and let AI use it in another response
  • This loop continues until the AI decides no more tools are needed
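The loop above can be simulated in plain Python with a rule-based stand-in for the LLM. No LangGraph here, and the names are invented; the point is just the chatbot → tools → chatbot cycle:

```python
def fake_chatbot(messages):
    # If a tool result is already in the history, produce the final answer;
    # otherwise ask for a tool call (stands in for the real LLM's decision)
    for role, content in messages:
        if role == "tool":
            return ("assistant", f"The price is ${content}")
    return ("assistant", "tool_call:get_stock_price:MSFT")

def fake_tool_node(tool_call):
    symbol = tool_call.split(":")[-1]
    price = {"MSFT": 200.3}.get(symbol, 0.0)
    return ("tool", str(price))

def run_loop(user_msg):
    messages = [("user", user_msg)]
    while True:
        reply = fake_chatbot(messages)                 # chatbot node
        messages.append(reply)
        if reply[1].startswith("tool_call:"):          # tools_condition: YES
            messages.append(fake_tool_node(reply[1]))  # tools node, then loop back
            continue
        return messages                                # tools_condition: NO -> END

history = run_loop("What is the current stock price of MSFT?")
```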

Part 11: Compile the Graph

graph = builder.compile(checkpointer=memory)

What this does:

  • Finalize the graph (check for errors, optimize)
  • Add memory checkpoint (save state between runs)

Why checkpointer?
With checkpointer=memory, every conversation is saved. You can resume a conversation using the same thread_id.

Without checkpointer:

  • Each .invoke() call is isolated
  • No conversation memory between calls

Part 12: Using the Graph - First Invocation

config = {
    "configurable": {
        "thread_id": "1"
    }
}

state = graph.invoke(
    {
        "messages": [{"role": "user", "content": "What is the current stock price of MSFT?"}]
    },
    config=config
)

What this does:

  1. Create a config with thread_id: "1" - this identifies the conversation
  2. Call graph.invoke() with:
    • Initial state (user message)
    • Config (thread ID for memory)
  3. Get back the final state with all messages

The thread_id is important:

  • First call with thread_id "1" → saves to memory with key "1"
  • Second call with same thread_id "1" → loads previous memory, remembers context

How it works internally:

  1. User message → Chatbot node → LLM sees "I need get_stock_price tool"
  2. Decision: Does AI need tools? YES → go to tools node
  3. Tools node runs get_stock_price("MSFT") → returns 200.3
  4. Result back to chatbot → LLM now has the price → creates response
  5. Response goes to output

Part 13: Using Different Thread IDs (Memory Separation)

config2 = {
    "configurable": {
        "thread_id": "2"
    }
}

state = graph.invoke(
    {
        "messages": [
            {"role": "user", "content": "What is the current stock price of MSFT?"}
        ]
    },
    config=config2
)

What this does: Use thread_id "2" instead of "1"

Why?

  • Each thread_id is a separate conversation
  • Thread "1" and Thread "2" have separate memory
  • Thread "1" might remember previous context, Thread "2" starts fresh
  • Useful for multiple users or multiple conversations

Part 14: Tracing with LangSmith

from langsmith import traceable

@traceable
def call_graph(query: str):
    state = graph.invoke(
        {
            "messages": [
                {"role": "user", "content": query}
            ]
        },
        config=config
    )
    return state["messages"][-1].content

What @traceable does:

  • Logs this function call to LangSmith
  • Records inputs, outputs, timing
  • Creates a trace for debugging

Why it's useful:

  • See exactly what the AI did
  • Understand why it made decisions
  • Debug problems more easily
  • Visualize the execution flow

state["messages"][-1].content:

  • state["messages"] = list of all messages
  • [-1] = last message
  • .content = the text content of that message

4. Graph Flow Explanation

Let me explain how data flows through your graph using a real example.

Example: "What is the current stock price of MSFT?"

Step 1: START

  • User provides: {"role": "user", "content": "What is the current stock price of MSFT?"}
  • State now has: messages: [user_message]

Step 2: Enter Chatbot Node

  • Chatbot receives state with all previous messages
  • Calls LLM with all messages and available tools
  • LLM reads the question and sees get_stock_price tool is available
  • LLM decides: "I need to call get_stock_price with symbol='MSFT'"

Step 3: Decision Point (tools_condition)

  • Question: "Did the AI decide to use tools?"
  • Answer: YES (LLM returned a tool call)
  • Next: Go to "tools" node

Step 4: Tools Node

  • Receives the tool call request
  • Executes: get_stock_price("MSFT")
  • Gets result: 200.3
  • Appends the result to state as a tool message (conceptually: {"tool": "get_stock_price", "result": 200.3})
  • State now has: messages: [user_message, tool_call, tool_result]

Step 5: Back to Chatbot Node

  • Chatbot receives state with: original message, tool call, tool result
  • Calls LLM again with all messages
  • LLM sees the tool result and creates a response: "The current stock price of MSFT is $200.3"

Step 6: Decision Point Again (tools_condition)

  • Question: "Does the AI need tools again?"
  • Answer: NO (LLM returned a message, not a tool call)
  • Next: Go to END

Step 7: END

  • Graph completes
  • State returned with all messages including the response

Final state:

{
    "messages": [
        {"role": "user", "content": "What is the current stock price of MSFT?"},
        {"role": "assistant", "content": "I'll look up the stock price for you."},  # Tool call
        {"tool": "get_stock_price", "result": 200.3},  # Tool result
        {"role": "assistant", "content": "The current stock price of MSFT is $200.3"}  # Final response
    ]
}

Decision Points in Your Graph

tools_condition is a built-in function that checks:

IF the AI's last message includes a tool call:
    → Go to "tools" node
ELSE:
    → Go to END
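A plain-Python stand-in for this check might look like the following. (The real tools_condition inspects LangChain message objects and returns a special END value; this toy version uses dicts.)

```python
def tools_condition_like(last_message: dict) -> str:
    # Route to "tools" if the assistant's last message requested a tool call,
    # otherwise end the run
    return "tools" if last_message.get("tool_calls") else "END"

wants_tool = {"role": "assistant",
              "tool_calls": [{"name": "get_stock_price", "args": {"symbol": "MSFT"}}]}
plain_reply = {"role": "assistant", "content": "Here's a fun fact..."}

tools_condition_like(wants_tool)   # -> "tools"
tools_condition_like(plain_reply)  # -> "END"
```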

Why conditional edges matter:

  • The graph doesn't always follow the same path
  • Different user inputs → Different paths
  • If no tools needed, skip tools node
  • If tools needed, use them and get results

Another Path Example: "Tell me a fun fact"

Step 1: User asks: "Tell me a fun fact"

Step 2: Chatbot node

  • LLM sees the question
  • Doesn't need tools (no need to call get_stock_price)
  • Returns: "Here's a fun fact: The sun is..."

Step 3: Decision point

  • Question: "Any tool calls?"
  • Answer: NO
  • Next: END

Path taken: START → Chatbot → Decision (NO) → END

See the difference? Same graph, different paths based on what the AI decides.


5. Chatbot Logic - How It Works Internally

Your chatbot is actually stateful. Here's what that means:

What Is "Stateful"?

Stateless: Each request is independent. The system forgets previous interactions.

User: "Hi"
Bot: "Hello!"
User: "What's my name?"
Bot: "I don't know, you didn't tell me"  ← Forgot previous message

Stateful: The system remembers previous interactions.

User: "My name is Alice"
Bot: "Nice to meet you, Alice!"
User: "What's my name?"
Bot: "Your name is Alice"  ← Remembers Alice from before

Your chatbot is stateful because of this code:

messages: Annotated[list, add_messages]

The add_messages function means: Keep appending, don't forget.

How Messages Flow

First turn:

state = graph.invoke({
    "messages": [{"role": "user", "content": "Hi, tell me about yourself"}]
})
# LLM sees: [user_message]
# LLM responds
# State after: [user_message, assistant_response]

Second turn (same thread_id):

state = graph.invoke({
    "messages": [{"role": "user", "content": "What did you just say?"}]
},
config=config  # Same thread_id
)
# Memory loads: [user_message, assistant_response]  ← From before!
# Plus new user message: [old_user, old_assistant, new_user]
# LLM sees: all 3 messages
# LLM responds with context

This is why using add_messages is crucial - it preserves all messages, not just the latest.

Without add_messages (What would happen)

If we used messages: list instead:

# First invoke:
messages: [user_message_1]

# Second invoke (without add_messages):
messages: [user_message_2]  ← Lost user_message_1!

The AI would forget everything before the new message.


6. Tools Integration

What Are Tools?

Tools are functions the AI can decide to call. They're external actions the AI can take.

In your code:

@tool
def get_stock_price(symbol: str) -> float:
    """Return the current price of a stock given the stock symbol"""
    return {"MSFT": 200.3, "AAPL": 100.4}.get(symbol, 0.0)

This creates a tool that the AI can choose to use.

How Are Tools Used?

Step 1: Define tool ✓ (you did this with @tool)

Step 2: Make list of tools

tools = [get_stock_price]

Step 3: Bind tools to LLM

llm_with_tools = llm.bind_tools(tools)

Now the AI model knows:

  • What tools exist
  • What each tool does (from the docstring)
  • What parameters each tool needs

Step 4: AI decides to use tools

When you ask a question, the AI decides:

  • "Do I need to use any tools?"
  • If YES → "Which tool and what parameters?"
  • If NO → "I'll answer directly"

Step 5: System executes the tool

If AI decides to use a tool, the ToolNode runs it:

builder.add_node("tools", ToolNode(tools))

Step 6: Result goes back to AI

The AI sees the tool result and can use it in its response.
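Conceptually, the ToolNode is a dispatcher: it looks up the requested tool by name and calls it with the arguments the LLM supplied. A toy version (plain Python, illustrative names, not the real ToolNode internals):

```python
def get_stock_price(symbol: str) -> float:
    return {"MSFT": 200.3, "AAPL": 100.4}.get(symbol, 0.0)

# Registry of available tools, keyed by name
TOOLS = {"get_stock_price": get_stock_price}

def tool_node_like(tool_call: dict) -> dict:
    # Find the tool the LLM asked for and run it with the LLM's arguments
    fn = TOOLS[tool_call["name"]]
    result = fn(**tool_call["args"])
    return {"tool": tool_call["name"], "result": result}

outcome = tool_node_like({"name": "get_stock_price", "args": {"symbol": "MSFT"}})
```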

Why Tools Are Important in Agentic AI

Without tools:

User: "What's MSFT stock price?"
AI: "I don't know, I'm not trained on real-time data"

With tools:

User: "What's MSFT stock price?"
AI: "Let me check" → calls get_stock_price → gets 200.3 → "It's $200.3"

Tools give AI access to:

  • Real-time data (stock prices, weather, news)
  • External systems (databases, APIs, calculators)
  • Custom logic (your business rules)

Real-World Example: Weather Agent

@tool
def get_weather(city: str) -> str:
    """Get current weather for a city"""
    # In real code, this calls an API
    return "Sunny, 72°F"

tools = [get_weather]
llm_with_tools = llm.bind_tools(tools)

When user asks: "What's the weather in New York?"

The AI:

  1. Decides it needs weather data
  2. Calls get_weather("New York")
  3. Receives "Sunny, 72°F"
  4. Responds: "The weather in New York is sunny and 72 degrees"

7. Memory System

What Is Memory Doing in Your Code?

from langgraph.checkpoint.memory import MemorySaver
memory = MemorySaver()
graph = builder.compile(checkpointer=memory)

This creates a persistent memory for conversations.

How It Works

Without memory:

graph = builder.compile()
# Each call is isolated
invoke_1: "Hi" → "Hello"
invoke_2: "What did I say?" → "I don't know"  ← No memory

With memory:

graph = builder.compile(checkpointer=memory)

config1 = {"configurable": {"thread_id": "user_123"}}
invoke_1: "Hi" → Saves to memory with key "user_123"

invoke_2: Uses same config → Loads from memory "user_123" → Has context

The thread_id System

thread_id is like a conversation ID. Each thread has its own memory.

# User 1's conversation
config1 = {"configurable": {"thread_id": "1"}}
graph.invoke(..., config=config1)  # Saved as conversation "1"

# User 2's conversation  
config2 = {"configurable": {"thread_id": "2"}}
graph.invoke(..., config=config2)  # Saved as conversation "2"

# Back to User 1
graph.invoke(..., config=config1)  # Loads conversation "1"

Each thread_id is completely separate:

  • Thread "1" doesn't see Thread "2"'s messages
  • Multiple users can use the same graph
  • Each user gets their own conversation history
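The idea behind this separation can be sketched as a dict keyed by thread_id. This is toy code showing the concept, not the real MemorySaver API:

```python
class ToyMemorySaver:
    """One message list per thread_id; threads never see each other."""
    def __init__(self):
        self._store = {}

    def load(self, thread_id: str) -> list:
        return list(self._store.get(thread_id, []))

    def save(self, thread_id: str, messages: list) -> None:
        self._store[thread_id] = list(messages)

mem = ToyMemorySaver()
mem.save("1", ["user: Hi, my name is Alice", "assistant: Hello Alice!"])
mem.save("2", ["user: Hi, I'm Bob"])

alice_history = mem.load("1")  # only thread "1"'s messages
bob_history = mem.load("2")    # only thread "2"'s messages
```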

Why This Matters

Multi-user support:

Bot (same code)
  ├─ Thread "alice" → Remembers Alice's messages
  ├─ Thread "bob"   → Remembers Bob's messages
  └─ Thread "carol" → Remembers Carol's messages

Conversation resumption:

User: "Hi, I want to talk about Python"
Bot: "Sure! Let's discuss Python"
[User closes app]

[User reopens app, same thread_id]
User: "What were we talking about?"
Bot: "We were discussing Python"  ← Remembers!

Stateless vs Stateful

Stateless (without memory):

  • Every request starts fresh
  • No conversation history
  • Simple but limited
  • Good for one-off questions

Stateful (with memory):

  • Requests build on previous ones
  • Full conversation history
  • More complex but powerful
  • Good for ongoing conversations

Your code is stateful - it remembers everything across calls to the same thread.


8. Human-in-the-Loop Systems

What Does "Human-in-the-Loop" Mean?

It means humans are part of the decision-making process. The AI doesn't just do everything automatically - sometimes it asks for human approval.

Where Human Intervention Can Happen

In your graph, potential intervention points:

After chatbot decides to use a tool:

# AI decides: "I'll call get_stock_price"
# STOP HERE: Ask human: "Should I really call this tool?"
# Human says: "Yes" or "No"
# Continue based on human decision

Before responding:

# AI creates a response
# STOP HERE: Ask human: "Does this response look good?"
# Human edits or approves
# Then send to user

Why It's Useful

Reason 1: Safety

AI decides: "I'll delete all records"
Human intervention: "STOP! Don't do that!"

Reason 2: Quality control

AI creates a response
Human checks: "Is this accurate?"
If not, human corrects it

Reason 3: Transparency

Shows humans what the AI is doing
Builds trust

Reason 4: Learning

Human feedback teaches the system
System improves over time

Real-World Example: Content Moderation

User submits comment
↓
AI checks: "Is this appropriate?"
↓
If uncertain: Ask human moderator
↓
Human reviews and approves/rejects
↓
System learns from human decisions

How to Implement in LangGraph

In your graph, you could add:

def human_approval(state: State) -> str:
    """Ask human for approval"""
    response = input("Approve this action? (yes/no): ")
    return "approve" if response.lower() == "yes" else "reject"

builder.add_conditional_edges(
    'chatbot',
    human_approval,  # Ask human
    {
        "approve": 'tools',  # Human said yes
        "reject": END        # Human said no
    }
)

This creates a decision point where a human must approve before tools run.


9. Tracing with LangSmith

What Is Tracing?

Tracing means recording exactly what happened step-by-step, like a detailed log of the entire execution.

@traceable
def call_graph(query: str):
    state = graph.invoke({
        "messages": [{"role": "user", "content": query}]
    }, config=config)
    return state["messages"][-1].content

The @traceable decorator means: Record everything this function does.

Why Debugging AI Systems Is Hard

With regular code, debugging is straightforward:

result = function(input)
# If wrong, set breakpoint, see exactly where it failed

With AI systems, it's harder:

User: "What's MSFT price?"
AI: "I don't know"  # Wrong! Why?

Why did the AI give wrong answer?

  • Did it not understand the question?
  • Did it forget to call the tool?
  • Did the tool return wrong data?
  • Did the AI misinterpret the tool result?

It's hard to know without seeing every step.

How Tracing Helps

Tracing records every step:

Step 1: User message received
Step 2: Chatbot node called
Step 3: LLM invoked with messages
Step 4: LLM returned: tool_call(get_stock_price, "MSFT")
Step 5: Tool node executed
Step 6: get_stock_price returned 200.3
Step 7: Chatbot node called again
Step 8: LLM returned: "The price is $200.3"
Step 9: tools_condition checked: No more tools
Step 10: Reached END

Now you can see exactly where things went wrong!

Using LangSmith Tracing

from langsmith import traceable

@traceable
def my_agent_call(query):
    # ... your code ...
    return result

When you run my_agent_call("something"):

  1. Every step is recorded
  2. Sent to LangSmith dashboard
  3. You can see: timing, inputs, outputs, errors
  4. Visual representation of the graph execution

Benefits:

  • See exactly what the AI did
  • Find performance bottlenecks
  • Debug errors quickly
  • Improve the system based on traces

10. Examples

Example 1: Simple Stock Price Query

Input: "What is the stock price of AAPL?"

What happens internally:

  1. State: messages: [user_message]

  2. Chatbot node: Calls LLM

    • LLM sees: "User wants stock price"
    • LLM decides: "I need to call get_stock_price with AAPL"
  3. Decision: Does AI want tools?

    • Yes → Go to tools node
  4. Tools node:

    • Calls: get_stock_price("AAPL")
    • Gets: 100.4
    • Adds to messages: tool call + result
  5. Chatbot node again:

    • LLM sees all messages + tool result
    • Creates response: "The stock price of AAPL is $100.4"
  6. Decision: Does AI want tools again?

    • No → Go to END
  7. Output:

{
    "messages": [
        {"role": "user", "content": "What is the stock price of AAPL?"},
        {"role": "assistant", "tool_calls": [...]},
        {"content": "100.4", "name": "get_stock_price"},
        {"role": "assistant", "content": "The stock price of AAPL is $100.4"}
    ]
}

Example 2: Conversation with Memory

Setup:

config = {"configurable": {"thread_id": "user_alice"}}

Turn 1:

User: "Hi, my name is Alice"
Bot: "Nice to meet you, Alice!"

Memory saved: [user_message, assistant_response]

Turn 2:

config = {"configurable": {"thread_id": "user_alice"}}  # Same thread
User: "What's my name?"

Memory loads: [previous_messages]
Plus adds: [new_user_message]

LLM sees: [old_user, old_assistant, new_user]
LLM responds: "Your name is Alice, as you told me earlier"

Memory saved: [all_4_messages]

The key: add_messages appended the new message to the list, not replaced.


Example 3: Currency Conversion (from 2_node.ipynb)

Input:

{
    "amount_usd": 100.0,
    "total_usd": 0.0,
    "target_currency": "EUR"
}

Flow:

  1. START → calc_total_node

    • Calculates: total_usd = 100.0 * 1.5 = 150.0
    • State becomes: total_usd: 150.0
  2. Conditional Edge: choose_conversion

    • Checks: target_currency = "EUR"
    • Goes to: convert_to_eur_node
  3. convert_to_eur_node

    • Calculates: total = 150.0 * 0.85 = 127.5
    • State becomes: total: 127.5
  4. Multiple paths join → END

Output:

{
    "amount_usd": 100.0,
    "total_usd": 150.0,
    "target_currency": "EUR",
    "total": 127.5
}

Key concept: Based on target_currency, the graph chooses which conversion node to run. The same graph can follow different paths.
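The whole currency flow fits in a few lines of plain Python. Node and field names follow the notebook's description, and the 1.5 and 0.85 rates are the tutorial's mock values, not real exchange rates:

```python
def calc_total(state: dict) -> dict:
    # First node: apply the notebook's mock 1.5 multiplier
    state["total_usd"] = state["amount_usd"] * 1.5
    return state

def convert_to_eur(state: dict) -> dict:
    # Mock USD -> EUR conversion
    state["total"] = state["total_usd"] * 0.85
    return state

def choose_conversion(state: dict) -> str:
    # The conditional edge: pick the conversion node from the state
    return "eur" if state["target_currency"] == "EUR" else "other"

state = {"amount_usd": 100.0, "total_usd": 0.0, "target_currency": "EUR"}
state = calc_total(state)           # total_usd becomes 150.0
if choose_conversion(state) == "eur":
    state = convert_to_eur(state)   # total ends up ~127.5
```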


11. "Big Word Alert" - Simple Explanations

Agent

Big word: An intelligent system that can make decisions and take actions.

Simple: A smart robot that can think about what to do next and do it on its own.

In code: Your chatbot with tools is an agent - it decides whether to use tools or respond directly.

State

Big word: The data that flows through the system.

Simple: Like a backpack that carries information from one step to the next.

In code:

class State(TypedDict):
    messages: Annotated[list, add_messages]

The backpack contains: messages

Node

Big word: A unit of computation (a function).

Simple: A worker that does a specific job.

In code:

def chatbot(state: State):
    # This is a node - it does one job
    return {"messages": [llm_with_tools.invoke(state["messages"])]}

Tool

Big word: An external function the AI can call.

Simple: A power-up the AI can use when it needs to do something specific.

In code:

@tool
def get_stock_price(symbol: str) -> float:
    # This is a tool
    return {"MSFT": 200.3}.get(symbol, 0.0)

Invocation

Big word: Calling a function and getting a result.

Simple: Using something to get a result.

In code:

graph.invoke({...})  # This is an invocation

Workflow

Big word: A series of steps in a specific order.

Simple: A recipe you follow step by step.

In code: Your entire graph is a workflow.

Binding

Big word: Connecting tools to a model.

Simple: Giving the AI a toolbox so it knows what tools are available.

In code:

llm_with_tools = llm.bind_tools(tools)
# bind_tools = "give the AI these tools"

Conditional Edge

Big word: A path that depends on a condition.

Simple: An if-else statement in your graph.

In code:

builder.add_conditional_edges('chatbot', tools_condition)
# IF AI needs tools THEN go to tools node
# ELSE go to END

Checkpointer

Big word: A system that saves state at checkpoints.

Simple: A save point for your conversation.

In code:

graph = builder.compile(checkpointer=memory)
# Saves the state so you can resume later

Annotated

Big word: Adding metadata to a type.

Simple: Attaching extra instructions to a data type.

In code:

messages: Annotated[list, add_messages]
# "This is a list, and when adding to it, use add_messages function"
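
You can see this mechanism with nothing but the standard library. The sketch below uses a toy reducer (`append_items`, invented here) in place of `add_messages`; the point is that `Annotated` carries the function as metadata, and a framework can read it back at runtime:

```python
# Annotated attaches metadata that a framework can read back at runtime.
from typing import Annotated, TypedDict, get_type_hints

def append_items(old: list, new: list) -> list:
    """A toy reducer: combine the old value with the new one."""
    return old + new

class State(TypedDict):
    messages: Annotated[list, append_items]

# include_extras=True keeps the metadata alongside the base type
hints = get_type_hints(State, include_extras=True)
reducer = hints["messages"].__metadata__[0]

print(reducer(["hi"], ["there"]))  # ['hi', 'there']
```

This is, in simplified form, how LangGraph finds `add_messages` for the `messages` field.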

Thread ID

Big word: A unique identifier for a conversation.

Simple: A conversation ID, like a thread in a forum.

In code:

{"configurable": {"thread_id": "user_alice"}}
# This conversation belongs to "user_alice"
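
Here is a simplified stand-in for what the checkpointer does with thread IDs - a plain dictionary keyed by `thread_id` (the function name `invoke_with_memory` is made up for this sketch, not a LangGraph API):

```python
# A toy in-memory checkpointer: one saved history per thread_id.
saved_states = {}

def invoke_with_memory(new_messages, config):
    thread_id = config["configurable"]["thread_id"]
    history = saved_states.get(thread_id, []) + new_messages
    saved_states[thread_id] = history  # same thread -> history grows
    return history

# Two users, two threads: their histories never mix
alice = {"configurable": {"thread_id": "user_alice"}}
bob = {"configurable": {"thread_id": "user_bob"}}

invoke_with_memory(["Hi, I'm Alice"], alice)
invoke_with_memory(["Hi, I'm Bob"], bob)
print(invoke_with_memory(["What's my name?"], alice))
# ["Hi, I'm Alice", "What's my name?"]
```

Same `thread_id` means the same entry in the store, which is why reusing it preserves context while a new one starts fresh.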

12. Pros and Cons

LangGraph Pros

✅ Pro 1: Non-linear workflows

  • Can have if-else decisions
  • Not stuck with sequential steps
  • Real agents can branch out

✅ Pro 2: Built-in memory support

  • Easy to add conversation memory
  • Handles state management automatically
  • Good for multi-turn interactions

✅ Pro 3: Tool integration is seamless

  • Agents can decide when to use tools
  • Automatic tool calling
  • Cleaner than manual approach

✅ Pro 4: Checkpointing/persistence

  • Save and resume conversations
  • Stateful by default
  • Great for multi-user systems

✅ Pro 5: Debugging support

  • Visual graph representation
  • LangSmith integration for tracing
  • See exactly what's happening

✅ Pro 6: Reusable patterns

  • ToolNode is built-in
  • tools_condition is pre-made
  • Less code to write

LangGraph Cons

❌ Con 1: Steeper learning curve

  • More concepts to understand
  • Harder than simple LangChain
  • More setup needed

❌ Con 2: Overkill for simple tasks

  • If you just need Q&A, this is excessive
  • More code than needed
  • Can be frustrating for beginners

❌ Con 3: Performance overhead

  • More layers of abstraction
  • Slower than direct API calls
  • Not ideal for latency-critical apps

❌ Con 4: Limited to certain patterns

  • Works best for agent patterns
  • Not ideal for other paradigms
  • Some use cases need custom solutions

❌ Con 5: Dependency on LangChain

  • Tightly coupled to LangChain ecosystem
  • Can't easily use other frameworks
  • Vendor lock-in risk

This Approach (Graph-Based Agents) Pros

✅ Pro 1: Transparency

  • You can see the graph structure
  • Easy to understand flow
  • Good for learning

✅ Pro 2: Modularity

  • Add/remove nodes easily
  • Reuse nodes across projects
  • Mix and match components

✅ Pro 3: Scalability

  • Graphs can be arbitrarily complex
  • Handles multi-step reasoning
  • Good for sophisticated agents

✅ Pro 4: Controllability

  • You define every edge
  • Explicit about flow
  • No magic happening

✅ Pro 5: Extensibility

  • Custom nodes easy to add
  • Custom conditions easy to add
  • Can build complex logic

This Approach (Graph-Based Agents) Cons

❌ Con 1: Requires planning

  • Need to think about graph structure
  • Not suitable for ad-hoc scripts
  • Design phase is important

❌ Con 2: Can become complex

  • Many nodes → hard to maintain
  • Complex conditions → hard to debug
  • Spaghetti graphs possible

❌ Con 3: Overhead for simple cases

  • Too much scaffolding for "hello world"
  • Boilerplate required
  • Friction for quick prototypes

❌ Con 4: State management complexity

  • Need to understand TypedDict
  • Understand add_messages behavior
  • Easy to make mistakes with state

13. Common Beginner Mistakes

Mistake 1: Forgetting add_messages

Wrong:

class State(TypedDict):
    messages: list  # ❌ Plain list

What happens: Messages get replaced instead of appended. Previous context is lost.

Right:

class State(TypedDict):
    messages: Annotated[list, add_messages]  # ✅ With add_messages

Why: add_messages tells LangGraph to append, not replace.
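
You can see the difference with a framework-free sketch. The `merge` function below is an invented, simplified version of a state update (the real `add_messages` also handles message IDs); with no reducer the field is replaced, with one it is appended:

```python
# Replace vs append, shown with plain Python dict merging.
def merge(state, update, reducers):
    out = dict(state)
    for key, value in update.items():
        if key in reducers:
            out[key] = reducers[key](out.get(key, []), value)  # append
        else:
            out[key] = value                                   # replace
    return out

state = {"messages": ["Hello"]}
update = {"messages": ["Hi there!"]}

replaced = merge(state, update, reducers={})
appended = merge(state, update, reducers={"messages": lambda a, b: a + b})

print(replaced["messages"])  # ['Hi there!']          -- context lost
print(appended["messages"])  # ['Hello', 'Hi there!'] -- context kept
```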


Mistake 2: Not binding tools to LLM

Wrong:

tools = [get_stock_price]
llm = init_chat_model("google_genai:gemini-2.0-flash")
# ❌ Forgot to bind tools to llm

What happens: LLM doesn't know tools exist. Can't use them.

Right:

llm_with_tools = llm.bind_tools(tools)  # ✅ Bind tools

Why: LLM needs to know what tools are available.


Mistake 3: Forgetting to add ToolNode

Wrong:

builder.add_node(chatbot)
# ❌ Forgot ToolNode
builder.add_edge(START, "chatbot")

What happens: No node can execute the tools. AI decides to use tools but nothing happens.

Right:

builder.add_node(chatbot)
builder.add_node("tools", ToolNode(tools))  # ✅ Add ToolNode
builder.add_edge(START, "chatbot")

Why: ToolNode knows how to execute tool calls.


Mistake 4: Not using same thread_id for memory

Wrong:

# First call - new thread_id
graph.invoke(..., config={"configurable": {"thread_id": "1"}})

# Second call - different thread_id
graph.invoke(..., config={"configurable": {"thread_id": "2"}})  # ❌ Lost context!

What happens: No memory between calls. Each call starts fresh.

Right:

config = {"configurable": {"thread_id": "1"}}
# First call
graph.invoke(..., config=config)
# Second call - same thread_id
graph.invoke(..., config=config)  # ✅ Remembers context

Why: Same thread_id = same conversation memory.


Mistake 5: Not returning state dict from nodes

Wrong:

def chatbot(state: State):
    response = llm_with_tools.invoke(state["messages"])
    return response  # ❌ Wrong format

What happens: State update fails. Error in merging.

Right:

def chatbot(state: State):
    response = llm_with_tools.invoke(state["messages"])
    return {"messages": [response]}  # ✅ Dict format

Why: State system expects dict to merge state updates.


Mistake 6: Forgetting conditional edges

Wrong:

builder.add_edge("chatbot", END)  # ❌ Always goes to END
# No chance to use tools

What happens: Tools never get called, even if AI wants to use them.

Right:

builder.add_conditional_edges('chatbot', tools_condition)  # ✅ Routes to tools or END
builder.add_edge('tools', 'chatbot')  # Loop back after running tools
# No separate chatbot -> END edge needed: tools_condition already ends
# the graph when the AI doesn't call a tool

Why: Conditional edges allow AI to decide: use tools or end.


Mistake 7: Using wrong LLM initialization

Wrong:

llm = ChatOpenAI()  # ❌ Doesn't use init_chat_model

What happens: Can work, but not best practice. Less flexible.

Right:

llm = init_chat_model("google_genai:gemini-2.0-flash")  # ✅

Why: init_chat_model is provider-agnostic. Easy to swap models.


Mistake 8: Confusing state fields

Wrong:

class State(TypedDict):
    messages: Annotated[list, add_messages]

def chatbot(state: State):
    return state["message"]  # ❌ "message" (singular)

What happens: KeyError - field doesn't exist.

Right:

return state["messages"]  # ✅ "messages" (plural)

Why: Exact spelling matters. State keys must match.


Mistake 9: Not handling the returned state

Wrong:

graph.invoke({
    "messages": [{"role": "user", "content": "Hi"}]
})
# ❌ Don't use the returned value

What happens: You don't see the response.

Right:

result = graph.invoke({
    "messages": [{"role": "user", "content": "Hi"}]
})
response = result["messages"][-1].content  # ✅ Get the response
print(response)

Why: The function returns the final state. You need to extract the response.


Mistake 10: Missing docstring in tools

Wrong:

@tool
def get_stock_price(symbol: str) -> float:  # ❌ No docstring
    return {"MSFT": 200.3}.get(symbol, 0.0)

What happens: AI doesn't understand what the tool does. Wrong usage.

Right:

@tool
def get_stock_price(symbol: str) -> float:
    """Return the current price of a stock given the stock symbol"""  # ✅
    return {"MSFT": 200.3}.get(symbol, 0.0)

Why: LLM reads the docstring to decide when to use the tool.
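
The mechanism behind this is plain Python: a docstring is just data stored on the function object, and `@tool` publishes it as the tool's description. You can inspect it yourself without LangChain via `__doc__`:

```python
# A docstring lives on the function object itself.
def get_stock_price(symbol: str) -> float:
    """Return the current price of a stock given the stock symbol"""
    return {"MSFT": 200.3}.get(symbol, 0.0)

print(get_stock_price.__doc__)
# Return the current price of a stock given the stock symbol
```

Without a docstring, `__doc__` is `None` - so there is nothing for the tool description, and the model has no idea what the function is for.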


14. Summary

Key Takeaways

1. LangGraph enables non-linear workflows

  • Not just sequential steps
  • Agents can make decisions
  • Conditional edges create branches

2. State management is crucial

  • add_messages keeps conversation history
  • Without it, context is lost
  • Stateful behavior is the default

3. Tools give AI superpowers

  • Agents can decide when to use them
  • Results inform the AI's response
  • Real-time data, external logic, custom functions

4. Memory makes conversations work

  • Thread IDs separate conversations
  • Same thread = same memory
  • Multiple users can use same graph

5. Nodes are workers, edges are paths

  • Nodes do work (functions)
  • Edges connect them (paths)
  • Conditional edges make intelligent choices

6. Tracing helps debugging

  • See every step of execution
  • LangSmith provides visibility
  • Essential for production systems

What to Focus On Next

Near term (practice these):

  1. Build a chatbot without tools (basic flow)
  2. Add one tool and see how AI uses it
  3. Experiment with different tool combinations
  4. Try different thread_ids for separate conversations

Medium term (understand deeply):

  1. How TypedDict and Annotated work
  2. How add_messages merges state
  3. How conditional edges make decisions
  4. How ToolNode executes tool calls

Advanced (explore):

  1. Multiple tools with complex logic
  2. Multi-agent systems (agents talking to each other)
  3. Custom state merging logic
  4. Error handling and recovery
  5. Production deployment and scaling

Final Thought

You're learning a powerful paradigm: graph-based agentic AI. This is not just coding; it's defining workflows that AI can navigate intelligently.

The key is understanding that:

  • Graphs are blueprints for how data flows
  • Agents are decision-makers that navigate the blueprint
  • Tools are extensions that give agents capabilities
  • Memory is context that makes interactions coherent

Start small, build incrementally, and debug with LangSmith traces. Good luck on your journey!


15. Additional Resources in Your Project

Files to Study in Order

1. First: chatbot.ipynb

  • Simplest example
  • No tools, no memory
  • Just basic message flow
  • Best for understanding the fundamentals

2. Next: 2_node.ipynb

  • Introduces multiple nodes
  • Currency conversion example
  • Shows conditional edges
  • Clearer than abstract explanation

3. Then: nodes-notebook.ipynb

  • Two-node linear flow
  • Simple state transformation
  • Good intermediate step

4. Then: tools.ipynb

  • Introduces tools
  • AI can call functions
  • Adds interactivity

5. Then: tools_memory.ipynb

  • Adds memory with thread IDs
  • Multi-turn conversations
  • Closer to production

6. Finally: tools_memory_langsmith_trace.ipynb

  • Complete system
  • All features combined
  • Includes tracing
  • Production-ready example

Practice Suggestions

Exercise 1: Modify chatbot.ipynb

  • Change the model provider
  • Add a different greeting
  • Store conversation to file

Exercise 2: Add tools

  • Create a new tool (weather, calculator, etc.)
  • Bind it to the LLM
  • Test AI using it

Exercise 3: Experiment with thread IDs

  • Use 3 different thread_ids
  • See them remember separately
  • Test memory isolation

Exercise 4: Add your own tracing

  • Use @traceable decorator
  • Connect to LangSmith
  • Visualize execution
