Richard Abishai

Build Your First LangGraph Agent

Make your models act, not just think.

We’ve built transformers. We’ve run inference.

Now it’s time to give our models something new — agency.

LangGraph is a framework that helps you build graph-based agentic systems.

Instead of chaining prompts in a line, LangGraph lets your agents reason, loop, and decide what to do next.

In this guide, we’ll build a minimal agent that plans a task, performs it, and summarizes the result.


🧩 1. Install LangGraph and Dependencies

LangGraph builds on top of LangChain, so install both, plus langchain-openai for the OpenAI chat model used below.

pip install langgraph langchain langchain-openai

(If you’re using Hugging Face or Anthropic models, add their SDKs too.)
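
For example, the Anthropic and Hugging Face chat integrations ship as separate packages (the names here assume the current LangChain partner-package layout):

pip install langchain-anthropic langchain-huggingface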


⚙️ 2. Create a Basic Workflow File

Create a file named simple_agent.py and start with imports:

from typing import TypedDict

from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, END

LangGraph organizes logic as a directed graph:
each node is a function that reads a shared state and returns an update, and the edges decide which node runs next.


🧠 3. Define the Nodes (Agent Logic)

Let’s define the shared state, then build two small nodes: one that plans, another that acts.

# Shared state that flows between the nodes
class AgentState(TypedDict):
    goal: str
    plan: str
    results: str

llm = ChatOpenAI(model="gpt-4-turbo", temperature=0)

# LLM node for planning
def planner(state: AgentState) -> dict:
    response = llm.invoke(
        f"Plan how to complete the user's goal step by step: {state['goal']}"
    )
    return {"plan": response.content}

def search_tool(query: str) -> str:
    # Placeholder for real API calls or DB lookups
    return f"Search results for: {query}"

# Tool node for execution
def executor(state: AgentState) -> dict:
    return {"results": search_tool(state["plan"])}

🔗 4. Connect the Nodes (The Graph)

Now, register the nodes and wire up how the information flows.

graph = StateGraph(AgentState)
graph.add_node("planner", planner)
graph.add_node("executor", executor)

graph.set_entry_point("planner")          # start at planner
graph.add_edge("planner", "executor")     # plan → execute
graph.add_conditional_edges(              # feedback loop: stop once results exist
    "executor", lambda s: END if s.get("results") else "planner"
)
app = graph.compile()

This structure allows iterative reasoning — the planner can refine its plan using feedback from execution.
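
If you want the planner to actually use that feedback, fold earlier results into its prompt. A minimal sketch, reusing the AgentState and llm from step 3; this variant replaces the simpler planner above rather than being a separate LangGraph feature:

# Re-planning: let the planner see earlier results when it runs again
def planner(state: AgentState) -> dict:
    context = f"\nEarlier results: {state['results']}" if state.get("results") else ""
    response = llm.invoke(
        f"Plan how to complete the user's goal step by step: {state['goal']}{context}"
    )
    return {"plan": response.content}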


🚀 5. Run the Agent

if __name__ == "__main__":
    result = app.invoke({"goal": "Find three recent AI papers on LangGraph."})
    print(result)

Run:

python simple_agent.py

You’ll see the planner call the model, the executor call search_tool, and the final state printed with both the plan and the results.
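
If you would rather watch each node fire instead of waiting for the final state, the compiled graph also exposes a stream method. A minimal sketch:

# Print each node's state update as it happens
for step in app.stream({"goal": "Find three recent AI papers on LangGraph."}):
    print(step)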


⚡ 6. Add Memory (Optional)

Agents get smarter with memory.
LangGraph persists graph state through checkpointers, so a thread of work can pick up where it left off.

from langgraph.checkpoint.memory import MemorySaver

memory = MemorySaver()
app = graph.compile(checkpointer=memory)

# Each thread_id keeps its own state across runs
config = {"configurable": {"thread_id": "demo-1"}}
result = app.invoke({"goal": "Find three recent AI papers on LangGraph."}, config)

Now each run that reuses the same thread_id remembers previous steps, useful for long tasks or multi-turn interactions.
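
You can also inspect what the checkpointer has stored for a thread. A minimal sketch, using the same config as above:

# Look at the persisted state for this thread
snapshot = app.get_state(config)
print(snapshot.values)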


🧠 7. Why Graphs Beat Chains

Traditional LangChain flows are linear — A → B → C.
LangGraph introduces feedback and branching, which enables:

  • Re-planning after failure (sketched below)

  • Parallel node execution

  • Conditional routing

  • Multi-agent collaboration

It’s intelligence with structure.
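
For example, re-planning after failure is just a conditional edge that inspects the state and routes back to the planner. A minimal sketch of a routing function you could pass to add_conditional_edges in place of the lambda from step 4; the "error" check is only illustrative:

# Route back to the planner when the results look like a failure, otherwise stop
def route_after_execution(state: AgentState) -> str:
    if "error" in state.get("results", "").lower():
        return "planner"   # re-plan and try again
    return END

graph.add_conditional_edges("executor", route_after_execution)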


🧰 8. Full Example (Minimal Agent)

from typing import TypedDict

from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, END

class AgentState(TypedDict):
    goal: str
    plan: str
    results: str

llm = ChatOpenAI(model="gpt-4o-mini")

def search_tool(q: str) -> str:
    return f"Mock result for {q}"

def planner(state: AgentState) -> dict:
    response = llm.invoke(f"Plan steps to achieve: {state['goal']}")
    return {"plan": response.content}

def executor(state: AgentState) -> dict:
    return {"results": search_tool(state["plan"])}

graph = StateGraph(AgentState)
graph.add_node("planner", planner)
graph.add_node("executor", executor)

graph.set_entry_point("planner")
graph.add_edge("planner", "executor")
graph.add_conditional_edges(
    "executor", lambda s: END if s.get("results") else "planner"
)

app = graph.compile()

result = app.invoke({"goal": "List three upcoming space missions to Mars."})
print(result)

🪐 9. Ideas to Extend

Add a web-scraping tool for real data

Integrate a LangChain vector store for context

Wrap your agent in a FastAPI endpoint (sketched below)

Add voice I/O using Whisper or gTTS

You’re not limited to text — LangGraph agents can handle any modular pipeline.
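
As one example, the compiled graph from the full example can sit behind a FastAPI route. A minimal sketch; the endpoint path and request model are just illustrative:

from fastapi import FastAPI
from pydantic import BaseModel

api = FastAPI()

class GoalRequest(BaseModel):
    goal: str

@api.post("/agent")
def run_agent(req: GoalRequest):
    # "app" is the compiled LangGraph from the full example above
    final_state = app.invoke({"goal": req.goal})
    return {"plan": final_state["plan"], "results": final_state["results"]}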


🧩 10. Reflection

Building your first LangGraph agent teaches a deeper lesson:
Intelligence isn’t linear.
Real reasoning loops, adjusts, and re-tries — just like we do.

When you visualize your agent as a graph, you start designing systems that can truly think through problems, not just complete tasks.


Next Up → “The Day I Broke My Model” on Medium — a story about what happens when curiosity meets chaos.
Follow for more tutorials blending AI agents, physics, and automation — the pillars of my Quantum Codecast universe.
