Farhan Khan

LangGraph: Core Concepts

LangGraph is a framework for building stateful, reliable agent workflows. Instead of executing a single prompt-response cycle, LangGraph enables agents to operate as directed graphs, where each node represents a step, state is shared across the workflow, and execution can branch, pause, or persist.

This article summarizes the core concepts, prebuilt utilities, and typical applications of LangGraph.

1๏ธโƒฃ State

  • The memory object that flows through the graph.
  • Nodes read from and write updates to state.
  • Managed through channels that define merge strategies (see the reducer sketch after this list):

    • Replace → overwrite existing values.
    • Append → add items (e.g., chat history).
    • Merge → combine dicts/lists.
  • Acts as the single source of truth for the agent's execution.
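
In code, these strategies are declared as reducers on the state schema: a plain field gets replace semantics, while a field annotated with a reducer such as operator.add gets append semantics. A minimal sketch of both, assuming a results list that should accumulate:

import operator
from typing import Annotated, TypedDict

class State(TypedDict, total=False):
    question: str                           # replace: a new value overwrites the old one
    results: Annotated[list, operator.add]  # append: updates are concatenated onto the list

def search_node(state: State) -> State:
    # Returning a one-element list appends it to state["results"]
    # instead of overwriting what earlier nodes accumulated.
    return {"results": [f"Result for {state['question']}"]}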

🔹 Example: How State Evolves

Consider a simple graph with three nodes: plan → search → answer.

from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict, total=False):
    question: str
    plan: str
    results: list[str]
    final_answer: str

def plan_node(state: State) -> State:
    return {"plan": f"Search for: {state['question']}"}

def search_node(state: State) -> State:
    return {"results": [f"Result about {state['plan']}"]}

def answer_node(state: State) -> State:
    return {"final_answer": f"Based on {state['results'][0]}, here is the answer."}

Initial Input

{"question": "What is LangGraph?"}

After plan node

{"question": "What is LangGraph?", "plan": "Search for: What is LangGraph?"}

After search node

{"question": "What is LangGraph?", "plan": "Search for: What is LangGraph?", "results": ["Result about Search for: What is LangGraph?"]}

After answer node

{"question": "What is LangGraph?", "plan": "Search for: What is LangGraph?", "results": ["Result about Search for: What is LangGraph?"], "final_answer": "Based on Result about Search for: What is LangGraph?, here is the answer."}
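
To reproduce these snapshots end to end, wire the nodes together (the edge API is covered in section 3), compile, and stream with stream_mode="values", which emits the full merged state at each step. A minimal sketch, assuming the definitions above:

graph = StateGraph(State)
graph.add_node("plan", plan_node)
graph.add_node("search", search_node)
graph.add_node("answer", answer_node)

graph.add_edge(START, "plan")
graph.add_edge("plan", "search")
graph.add_edge("search", "answer")
graph.add_edge("answer", END)

app = graph.compile()

# Prints the input state first, then the full state after each node runs.
for snapshot in app.stream({"question": "What is LangGraph?"}, stream_mode="values"):
    print(snapshot)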

2๏ธโƒฃ Nodes

  • Nodes are the functional units of a LangGraph.

  • Each node is a Python function that:

    • Takes the current state (a dictionary-like object).
    • Returns updates to that state (usually as another dictionary).
  • A node can perform many different actions depending on the workflow:

    • Invoke an LLM to generate text, plans, or summaries.
    • Call external tools or APIs (e.g., search engine, database, calculator).
    • Execute deterministic logic (e.g., scoring, validation, formatting).
  • Nodes don't overwrite the whole state by default; instead, they return partial updates that LangGraph merges into the global state using channels.

  • A node is not an "agent" by itself. The entire graph of nodes forms the agent.

🔹 Example: Simple Node

from typing import TypedDict

class State(TypedDict, total=False):
    question: str
    plan: str
    final_answer: str  # written by the LLM node in the next example

def plan_node(state: State) -> State:
    q = state["question"]
    return {"plan": f"Search online for: {q}"}

Input State

{"question": "What is LangGraph?"}

Output Update

{"plan": "Search online for: What is LangGraph?"}

After merging

{"question": "What is LangGraph?", "plan": "Search online for: What is LangGraph?"}

🔹 Example: Node Invoking an LLM

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")  # requires an OpenAI API key in the environment

def answer_node(state: State) -> State:
    # llm.invoke accepts a plain string and returns a message object;
    # .content holds the generated text.
    response = llm.invoke(state["question"])
    return {"final_answer": response.content}

👉 Nodes are modular steps. They can be simple (string formatting) or advanced (LLM + tools). Together, they form the workflow.
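
The bullets above also mention tool calls and deterministic logic. As a sketch of the deterministic kind (no LLM involved), assuming the state carries a results: list[str] field like the section 1 schema:

def validate_node(state: State) -> State:
    # Deterministic logic: drop blank results and keep at most five,
    # returning only the key this node owns as a partial update.
    results = state.get("results", [])
    cleaned = [r for r in results if r.strip()]
    return {"results": cleaned[:5]}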


3๏ธโƒฃ Edges

  • Edges define the flow of execution between nodes.

  • Normal edges → fixed transitions.

  • Conditional edges → branching logic, using a router function.

  • Special markers:

    • START → entry point.
    • END → exit point.

🔹 Example: Normal Edges

from langgraph.graph import StateGraph, START, END

graph = StateGraph(State)
graph.add_node("plan", plan_node)
graph.add_node("search", search_node)
graph.add_node("answer", answer_node)

graph.add_edge(START, "plan")
graph.add_edge("plan", "search")
graph.add_edge("search", "answer")
graph.add_edge("answer", END)
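
A builder is not runnable by itself; compile it first. For example:

app = graph.compile()

result = app.invoke({"question": "What is LangGraph?"})
print(result["final_answer"])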

🔹 Example: Conditional Edges with Router

def router(state: State) -> str:
    q = state["question"]
    if "latest" in q.lower():
        return "search"
    else:
        return "answer"

graph.add_conditional_edges(
    "plan", router, {"search": "search", "answer": "answer"}
)
  • Input: "What is the capital of France?" → routed to answer.
  • Input: "What are the latest news on AAPL?" → routed to search.
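
Since the conditional edge replaces the fixed plan → search transition, the conditional variant is cleanest on a fresh builder. A minimal sketch, assuming the nodes and router above:

graph = StateGraph(State)
graph.add_node("plan", plan_node)
graph.add_node("search", search_node)
graph.add_node("answer", answer_node)

graph.add_edge(START, "plan")
# Route from plan to either search or answer based on the question.
graph.add_conditional_edges("plan", router, {"search": "search", "answer": "answer"})
graph.add_edge("search", "answer")
graph.add_edge("answer", END)

app = graph.compile()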

4๏ธโƒฃ Streaming

Streaming provides live feedback during execution.

🔹 Update Stream (node-level)

# inputs = {"question": "What is LangGraph?"}; config can carry a thread_id (see section 5)
for event in app.stream(inputs, config=config, stream_mode="updates"):
    print(event)

Output

{'plan': {'plan': 'Search for: What is LangGraph?'}}
{'search': {'results': ['Result about Search for: What is LangGraph?']}}
{'answer': {'final_answer': '...final text...'}}

🔹 Token Stream (LLM output)

for message_chunk, metadata in app.stream(inputs, config=config, stream_mode="messages"):
    # Each event is a (message_chunk, metadata) pair; print tokens as they arrive.
    print(message_chunk.content, end="", flush=True)
  • stream_mode="updates" → node updates.
  • stream_mode="messages" → token stream (if the LLM supports streaming).
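
Recent LangGraph releases also accept a list of modes, in which case each event arrives as a (mode, payload) tuple. A sketch, assuming the same app, inputs, and config:

for mode, payload in app.stream(inputs, config=config, stream_mode=["updates", "messages"]):
    if mode == "updates":
        print("node update:", payload)
    else:
        print("token event:", payload)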

5๏ธโƒฃ Memory

Memory in LangGraph = state + checkpointers.

🔹 Short-Term Memory (Within a Thread)

from typing import TypedDict, List

class State(TypedDict, total=False):
    question: str
    chat_history: List[str]
    answer: str

def add_to_history(state: State) -> State:
    # Return a new list instead of mutating state in place;
    # nodes should emit updates, not modify shared state.
    history = state.get("chat_history", [])
    return {"chat_history": history + [state["question"]]}

Example after two turns:

{"chat_history": ["What is LangGraph?", "Explain checkpointers"], "answer": "..."}
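
For conversation history specifically, a reducer channel is more idiomatic than appending by hand, since every node's update is merged in automatically (see the reducer example in section 1). A sketch using LangGraph's add_messages reducer:

from typing import Annotated, TypedDict

from langgraph.graph.message import add_messages

class State(TypedDict, total=False):
    # add_messages appends new messages to the list (and upserts by
    # message id) instead of replacing it.
    messages: Annotated[list, add_messages]
    answer: str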

🔹 Long-Term Memory (Across Runs)

from langgraph.checkpoint.memory import MemorySaver

# In-memory checkpointer: state persists across invocations in the same
# process, keyed by thread_id.
checkpointer = MemorySaver()
app = graph.compile(checkpointer=checkpointer)

app.invoke({"question": "What is LangGraph?"}, config={"configurable": {"thread_id": "t1"}})
app.invoke({"question": "And what are edges?"}, config={"configurable": {"thread_id": "t1"}})
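
The checkpointed thread can also be inspected directly from the compiled app. A short sketch:

# Fetch the latest checkpoint for thread "t1".
snapshot = app.get_state({"configurable": {"thread_id": "t1"}})
print(snapshot.values)  # the merged state dict for that thread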

Both runs share the same thread_id, so the second invocation resumes from the checkpointed state of the first. Note that MemorySaver lives in process memory; for persistence across restarts, swap in a database-backed checkpointer (e.g., a SQLite- or Postgres-backed one).

🔹 Combined

  • Short-term memory = within a run.
  • Long-term memory = across runs.
  • Together → continuity + reliability.

👉 Unlike LangChain, LangGraph doesn't treat memory as a separate object; it's baked into state + checkpointers.
