A Beginner’s Guide to Using MCP with LangGraph

If you’ve been following along with the LangGraph series, you already know how powerful structured, stateful workflows can be for building real AI agents. But there’s one question every developer eventually runs into:

“How do I let my agent interact with real-world tools and data?”

That’s where MCP — Model Context Protocol — steps in.

Before we dive in, here’s something you’ll love:

Learn LangChain in a clear, concise, and practical way.
Whether you’re just starting out or already building, Langcasts gives you guides, tips, hands-on walkthroughs, and in-depth classes to help you master every piece of the AI puzzle. No fluff, just actionable learning to get you building smarter and faster. Start your AI journey today at Langcasts.com.

MCP is a new open standard that gives AI models a clean, safe, predictable way to access external tools, APIs, files, and more. Instead of writing custom integrations for every single tool, MCP gives you one universal format that everything can plug into.

And here’s the best part:
LangGraph supports MCP natively.
This means your graph nodes can call MCP tools as easily as they call other functions.

In this guide, we’ll walk through the basics of MCP inside LangGraph.
We’ll keep things simple.
We’ll build something small.
And by the end, you’ll understand how to use MCP to make your LangGraph agents more capable, more useful, and much more practical.
Ready? Let’s get into it.

Core Concepts You Need to Know

Before wiring LangGraph and MCP together, there are a few core ideas you need to understand.

1. Graphs

A graph in LangGraph is simply the map your agent follows.

It defines how your agent moves from one step to the next, similar to a workflow, but more intelligent.

2. Nodes

Nodes are the “stations” in your graph.

Each node does one thing:

  • Call an LLM
  • Perform logic
  • Trigger a tool
  • Transform state

If the graph is the map, nodes are the stops on the journey.

3. State

State is your agent’s memory.

It carries everything from messages to tool results to custom variables.

In LangGraph, state updates automatically flow from node to node, so every part of your agent always sees the latest context.
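To make that flow concrete, here is a minimal, dependency-free sketch of the idea. The `AgentState` type and the node function are hypothetical stand-ins, not LangGraph's real API: each node returns a partial update, and the runtime merges it back into the shared state.

```python
from typing import TypedDict

class AgentState(TypedDict):
    messages: list

def greet_node(state: AgentState) -> dict:
    # A node returns a partial update, not a whole new state.
    return {"messages": state["messages"] + [("assistant", "Hello!")]}

def run(state: AgentState) -> AgentState:
    # The runtime folds each node's update back into the state,
    # so the next node sees the latest context.
    update = greet_node(state)
    return {**state, **update}

final = run({"messages": [("user", "Hi")]})
print(final["messages"][-1])  # ('assistant', 'Hello!')
```

Real LangGraph does this merging for you; the point is simply that nodes read state and return updates, and the framework handles the rest.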

4. Actions

Actions are how nodes do things.

They’re the outputs that tell LangGraph what happens next, like: “Call this tool”, “Move to that node”, “Update the state”.

5. MCP Tools

MCP tools are capabilities your agent can use, defined outside your code in a separate MCP server.

These tools come with:

  • Structured schemas
  • Validated inputs
  • Typed responses
  • Secure execution rules

You expose tools, MCP handles the plumbing, and LangGraph uses them cleanly.

6. How LangGraph treats MCP tools as callable actions

When a LangGraph node wants to use an MCP tool, it doesn’t “guess” what to do.

It returns a tool invocation action, and LangGraph knows exactly how to route that call through MCP.

Your node just says:

“Please call this tool with this data.”

LangGraph and MCP handle everything else.
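The routing above can be sketched in plain Python. All the names here are hypothetical (a real setup delegates this to LangGraph and the MCP server), but the shape is the same: the node declares *what* to call, and a tiny runtime resolves the tool by name and executes it.

```python
# A toy tool registry standing in for an MCP server's tools.
TOOLS = {"get_time": lambda args: {"iso": "2024-01-01T12:00:00"}}

def node(state: dict) -> dict:
    # The node only declares the tool invocation; it never runs the tool itself.
    return {"tool": "get_time", "args": {}}

def route(action: dict) -> dict:
    # The runtime looks the tool up by name and executes it with the given args.
    return TOOLS[action["tool"]](action["args"])

result = route(node({"messages": []}))
print(result)  # {'iso': '2024-01-01T12:00:00'}
```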

7. How MCP fits into the graph execution model

Here’s the smooth relationship:

  • LangGraph runs your workflow step-by-step.
  • When a node needs external capabilities, it triggers an MCP tool action.
  • MCP executes the tool and returns structured data.
  • LangGraph folds that result back into the state and continues the graph.

Setting Up Your First MCP Environment

Before LangGraph can call MCP tools, you need an MCP environment running.

1. Install the MCP CLI

You’ll need the official MCP command-line tool to run and test your MCP servers locally.

npm install -g @modelcontextprotocol/cli

This gives you the mcp command, which you’ll use to run your tools.

2. Create a Simple MCP Server

An MCP server is just a small program that exposes tools.

You can start with the basic template:

mcp init my-mcp-server

This generates a folder with:

  • A sample tool
  • JSON schema files
  • Server boilerplate
  • A clean structure to build on

Open the folder and you’ll already see how tools are defined and validated.

3. Start Your MCP Server

Run the development server:

mcp dev

When this starts successfully, your MCP environment is live.

You’ll see logs showing:

  • Available tools
  • Their schemas
  • When the server receives calls

This is where LangGraph will eventually connect.

4. Add Your First Tool (Optional for Now)

Tools in MCP are just functions wrapped with:

  • Input schema (what data they accept)
  • Output schema (what they return)
  • Execution logic

For example, a simple “get-time” tool might look like:

{
  "name": "get_time",
  "inputSchema": {},
  "outputSchema": {
    "type": "object",
    "properties": {
      "iso": { "type": "string" }
    }
  }
}


You’ll later reference these tools by name inside LangGraph.
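The execution logic behind such a tool can be very small. Here is a hypothetical Python handler (the name `get_time_handler` is an assumption, not part of the MCP template) whose return value matches the declared outputSchema: an object with an `iso` string.

```python
from datetime import datetime, timezone

def get_time_handler(args: dict) -> dict:
    # No inputs needed; return an object shaped like the outputSchema above.
    return {"iso": datetime.now(timezone.utc).isoformat()}

print(get_time_handler({}))
```

The schema is the contract: MCP validates the empty input and the `{"iso": ...}` output, so callers always get a predictable shape back.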

5. Connect LangGraph to Your MCP Server

Once your MCP server is running, LangGraph can connect by simply pointing to it in your graph configuration:

from langgraph.mcp import MCPClient

mcp = MCPClient("http://localhost:3000")


Boom, now your agent can call any tool served by MCP.

Writing Your First LangGraph Node That Calls an MCP Tool

Now that your MCP server is up and running, let’s hook it into LangGraph. This is the moment where your agent stops “talking about things” and starts doing things.

We’ll walk through a simple example: calling an MCP tool from a LangGraph node and feeding the result back into the state.

1. Import the Essentials

You only need a few LangGraph and MCP utilities:

from langgraph.graph import StateGraph
from langgraph.mcp import MCPClient, mcp_tool
from langgraph.prebuilt import MessagesState

2. Connect Your LangGraph Project to MCP

Create a client pointing to your running MCP server:

mcp = MCPClient("http://localhost:3000")

This gives your graph access to every tool the server exposes.

3. Wrap an MCP Tool for LangGraph

Let’s say your MCP server exposes a tool called "get_time".

You make it callable inside LangGraph like this:

get_time = mcp_tool(mcp, "get_time")


get_time is now a tool action you can call inside any node.

4. Write a Node That Calls the Tool

A node just needs to return a tool invocation action.

Here’s a minimal example:

def ask_for_time(state: MessagesState):
    return get_time({})


A few things are happening here:

  • The node receives the current state (messages, history, etc.)
  • It doesn’t need extra input for this tool, so we pass an empty dict
  • It returns an MCP tool call
  • LangGraph takes over from there

5. Consume the Tool Result in the Next Node

After the tool runs, MCP sends the response back to LangGraph.

So you create another node to process it:

def show_time(state: MessagesState):
    # The last message holds the structured result returned by the MCP tool.
    result = state["messages"][-1]["content"]
    return {"messages": [("assistant", f"The time is {result['iso']}")]}


This node reads the last message (the tool’s result) and formats it into a clean assistant reply.

6. Build the Graph

Now wire everything together:

graph = StateGraph(MessagesState)

graph.add_node("ask_for_time", ask_for_time)
graph.add_node("show_time", show_time)

graph.set_entry_point("ask_for_time")
graph.add_edge("ask_for_time", "show_time")

app = graph.compile()


7. Run It

app.invoke({"messages": [("user", "What time is it?")]})


Your LangGraph agent will:

  1. Read the user’s message
  2. Run the ask_for_time node
  3. Trigger the "get_time" MCP tool
  4. Get the structured result
  5. Pass it into show_time
  6. Respond with the actual time
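The six steps above can be dry-run without LangGraph or MCP installed. Everything in this sketch is a stand-in (the graph, the MCP client, and the tool are replaced with plain functions) so you can trace the data flow end to end:

```python
def get_time(args: dict) -> dict:          # stands in for the MCP tool
    return {"iso": "2024-01-01T12:00:00"}

def ask_for_time(state: dict) -> dict:     # node 1: trigger the tool
    result = get_time({})
    return {"messages": state["messages"] + [("tool", result)]}

def show_time(state: dict) -> dict:        # node 2: format the result
    result = state["messages"][-1][1]
    return {"messages": state["messages"] + [("assistant", f"The time is {result['iso']}")]}

state = {"messages": [("user", "What time is it?")]}
state = show_time(ask_for_time(state))     # the ask_for_time -> show_time edge
print(state["messages"][-1][1])  # The time is 2024-01-01T12:00:00
```

In the real version, LangGraph walks the edges and MCP executes the tool; the shape of the state passing through each node is the same.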


You’ve just walked through the essentials of using MCP with LangGraph: from the core concepts, to wiring up tools, to building a real working agent. Even if the details felt new at first, you now understand the big picture: LangGraph handles the workflow, MCP handles the tools, and together they give your agent real capabilities without chaos or guesswork.

The best part?
You’ve barely scratched the surface.

Once you’re comfortable with simple tools like file readers or API calls, you can scale up to full workflows, multi-step reasoning, chained tools, or even entire application backends powered by MCP. LangGraph keeps everything structured. MCP keeps everything safe and predictable. The combination gives you a foundation you can build on confidently.

So take your time, experiment, break things, fix them, and keep going.
This is all part of the fun.
