DEV Community

Seenivasa Ramadurai

Building Multi-Agent Systems with LangGraph-Supervisor

In today's rapidly evolving AI landscape, creating sophisticated agent systems that collaborate effectively remains a significant challenge. The LangChain team has addressed this need with the release of two powerful new Python libraries: langgraph-supervisor and langgraph-swarm. This post explores how langgraph-supervisor enables developers to build complex multi-agent systems with hierarchical organization.

What is LangGraph-Supervisor?

LangGraph-Supervisor is a specialized Python library designed to simplify the creation of hierarchical multi-agent systems using LangGraph. But what does "hierarchical" mean in this context?

In a hierarchical multi-agent system, specialized agents operate under the coordination of a central supervisor agent. This supervisor controls all communication flow and task delegation, making intelligent decisions about which agent to invoke based on the current context and requirements. This approach brings organization and efficiency to complex multi-agent interactions.
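To make the routing idea concrete, here is a small, library-free sketch of what a supervisor's delegation decision amounts to. The agent names mirror the example later in this post, but the keyword rules and stub agents here are purely illustrative, not the library's actual mechanism (the real supervisor uses an LLM to decide):

```python
# Library-free sketch of supervisor-style routing: a central function
# inspects the user's request and picks a specialized agent.
# The stub agents and keyword rules are illustrative only.

def fake_resume_agent(text: str) -> str:
    return "parsed resume"

def fake_search_agent(text: str) -> str:
    return "search results"

def fake_qa_agent(text: str) -> str:
    return "general answer"

def supervisor_route(user_input: str) -> str:
    """Pick an agent name based on simple keyword rules."""
    lowered = user_input.lower()
    if "resume" in lowered or lowered.endswith((".pdf", ".docx")):
        return "resume_parser_agent"
    if "search" in lowered:
        return "google_search_agent"
    return "general_question_answer_agent"

agents = {
    "resume_parser_agent": fake_resume_agent,
    "google_search_agent": fake_search_agent,
    "general_question_answer_agent": fake_qa_agent,
}

choice = supervisor_route("Please search for LangGraph tutorials")
result = agents[choice]("Please search for LangGraph tutorials")
```

In the real library, the LLM-backed supervisor makes this decision from your delegation prompt rather than from hard-coded rules, but the control flow is the same: one central decision point, many specialized workers.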

Key Features

The library comes equipped with several powerful features that make building multi-agent systems more accessible:

🤖 Supervisor Agent Creation

At the heart of the library is the ability to create a supervisor agent that can orchestrate multiple specialized agents. This supervisor becomes the central intelligence that decides which agent should handle specific parts of a task.

🛠️ Tool-Based Agent Handoff

Communication between agents is managed through a tool-based handoff mechanism. This provides a structured way for agents to exchange information and transfer control, ensuring smooth collaboration between different specialized components.
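The key idea behind a tool-based handoff is that transferring control is expressed as a structured tool call rather than free-form text. The following is a simplified, library-free sketch of that idea; the `Handoff` dataclass and field names are illustrative, not langgraph-supervisor's actual types:

```python
# Sketch of a tool-based handoff: control transfer is a structured
# return value the framework can act on, not free-form text.
# The Handoff class below is illustrative, not the library's API.
from dataclasses import dataclass

@dataclass
class Handoff:
    target_agent: str   # agent receiving control
    task: str           # what the receiving agent should do

def transfer_to_google_search_agent(task: str) -> Handoff:
    """Tool the supervisor 'calls' to hand work to the search agent."""
    return Handoff(target_agent="google_search_agent", task=task)

handoff = transfer_to_google_search_agent("find recent LangGraph releases")
```

Because the handoff is structured data, the framework can record it in the message history and route execution deterministically, instead of parsing intent out of the model's prose.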

📝 Flexible Message History Management

The library includes sophisticated conversation control through flexible message history management. This allows for maintaining context across different agent interactions, creating a cohesive experience.

Built on top of LangGraph, the library inherits powerful capabilities like streaming, short-term and long-term memory management, and human-in-the-loop functionality, making it suitable for building production-grade agent applications.
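To make the short-term memory idea concrete, here is a tiny, library-free sketch of thread-scoped conversation storage, which is the concept behind LangGraph's in-memory checkpointer (the `TinyMemory` class is illustrative, not part of the library):

```python
# Minimal sketch of thread-scoped short-term memory: conversation
# state is stored per thread_id, so each thread keeps its own history.
# Illustrative only; LangGraph's InMemorySaver does this (and more).
from collections import defaultdict

class TinyMemory:
    def __init__(self):
        self._threads = defaultdict(list)

    def append(self, thread_id: str, role: str, content: str) -> None:
        """Record one message in the given thread's history."""
        self._threads[thread_id].append({"role": role, "content": content})

    def history(self, thread_id: str) -> list:
        """Return a copy of the thread's message history."""
        return list(self._threads[thread_id])

memory = TinyMemory()
memory.append("1", "user", "My name is Sreeni")
memory.append("1", "assistant", "Nice to meet you, Sreeni")
memory.append("2", "user", "Unrelated thread")
```

This per-thread scoping is why, later in this post, passing the same `thread_id` lets the agent recall a name mentioned earlier in the conversation.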

Getting Started with LangGraph-Supervisor

We are building a Multi-Agent Application consisting of three agents: a General Q&A agent, a Resume Parser agent, and a Google Search agent. These agents are managed by a Supervisor agent, which analyzes the user’s prompt or question and delegates the task to the appropriate agent.


Installation is straightforward:


**langchain-openai** – This package provides seamless integration between LangChain and OpenAI’s models (e.g., GPT-4, GPT-3.5). It allows developers to interact with OpenAI’s APIs for tasks such as text generation, embeddings, and chat-based conversations within LangChain applications.

**python-dotenv** – This package helps manage environment variables by loading them from a .env file. It’s useful for securely storing API keys, database credentials, and other configuration settings without hardcoding them into the application.
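For reference, a minimal .env file for this walkthrough might look like the following. The `AZURE_OPENAI_API_KEY` name matches what the code below reads; the endpoint line is shown because Azure OpenAI clients typically need it too — adjust both to your own resource:

```shell
# .env  (never commit this file to version control)
AZURE_OPENAI_API_KEY=your-api-key-here
AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/
```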

pip install langgraph-supervisor langchain-openai python-dotenv

Building a Multi-Agent System Example

Let's look at a practical example: a multi-agent system for resume parsing, web search, and question answering.

This system can:

  • Parse resumes (PDF/DOCX)
  • Perform Google searches
  • Answer general questions

The system uses a supervisor agent to delegate tasks to specialized agents based on user input.

Requirements

  • Azure OpenAI API key
  • Python packages: langchain-openai, langgraph, langgraph-supervisor, python-dotenv, PyPDF2, python-docx, googlesearch-python

Implementation

from langchain_openai import AzureChatOpenAI
import os
from dotenv import load_dotenv  
from langgraph.prebuilt import create_react_agent
from PyPDF2 import PdfReader
from docx import Document
from googlesearch import search
from langgraph_supervisor import create_supervisor
from langgraph.checkpoint.memory import InMemorySaver

load_dotenv()

def extract_text_from_pdf(pdf_path: str) -> str:
    reader = PdfReader(pdf_path)
    text = ""
    for page in reader.pages:
        text += page.extract_text() or ""
    return text

def extract_text_from_docx(docx_path: str) -> str:
    doc = Document(docx_path)
    return "\n".join([paragraph.text for paragraph in doc.paragraphs])

# Initialize Azure OpenAI model (endpoint and key come from the .env file)
model = AzureChatOpenAI(
    azure_deployment="gpt-4o-mini",
    model="gpt-4o-mini",
    temperature=0,
    api_version="2023-05-15",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
)

def resume_parser(resume_file_path: str) -> str:
    """Parse a resume file (PDF or DOCX) and return its text content."""
    if resume_file_path.endswith(".pdf"):
        return extract_text_from_pdf(resume_file_path)
    elif resume_file_path.endswith(".docx"):
        return extract_text_from_docx(resume_file_path)
    else:
        raise ValueError("Unsupported file type; expected .pdf or .docx")

# Create resume parser agent
resume_parser_agent = create_react_agent(
    model,
    tools=[resume_parser],
    name="resume_parser_agent",
    prompt=(
        "You are a resume parsing expert. "
        "Always use the resume_parser tool to parse resumes."
    )
)

def general_question_answer(question: str) -> str:
    """Answer a general question by invoking the LLM directly."""
    response = model.invoke(question)
    return response.content

# Create general Q&A agent
general_question_answer_agent = create_react_agent(
    model,
    tools=[general_question_answer],
    name="general_question_answer_agent",
    prompt=(
        "You are a general question answering expert. "
        "Always use the general_question_answer tool to answer the question."
    )
)

def google_search(query: str) -> list[str]:
    """Search Google and return the top five result URLs."""
    return list(search(query, num_results=5))

# Create Google search agent
google_search_agent = create_react_agent(
    model,
    tools=[google_search],
    name="google_search_agent",
    prompt=(
        "You are a Google search expert. "
        "Always use the google_search tool to search the internet."
    )
)  

# Create supervisor workflow
workflow = create_supervisor(
    [resume_parser_agent, google_search_agent, general_question_answer_agent],
    model=model,
    prompt=(
        "You are a smart team supervisor managing multiple agents. Analyze the user input and delegate to the appropriate agent:\n"
        "- If the input contains a file path or mentions 'resume', use resume_parser_agent.\n"
        "- If the input contains 'search' or asks to find something online, use google_search_agent.\n"
        "- For all other questions or queries, use general_question_answer_agent.\n"
        "Choose the most appropriate agent based on the user's input."
    ),
    output_mode="last_message"
)

# Initialize checkpointer
checkpointer = InMemorySaver()
app = workflow.compile(checkpointer=checkpointer)
config = {"configurable": {"thread_id": "1"}}

# Main interaction loop
while True:
    user_input = input("\nEnter your query (or 'exit' to quit): ")

    if user_input.lower() == 'exit':
        print("Goodbye!")
        break

    result = app.invoke({
        "messages": [{
            "role": "user",
            "content": user_input
        }]
    }, config=config)

    for m in result["messages"]:
        print(m.content)

Benefits of Using LangGraph-Supervisor

This implementation showcases how langgraph-supervisor can efficiently manage multi-agent collaboration, ensuring:

  1. Seamless task delegation - The supervisor intelligently routes tasks to the most appropriate specialized agent
  2. Improved coordination - Communication between agents is structured and efficient
  3. Streamlined execution - The system handles complex workflows with minimal overhead

By leveraging configurable output modes such as last_message and full_history, it provides flexibility in handling responses based on application needs, making it easier to build complex AI-driven applications.
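To illustrate what the two output modes mean in practice, here is a small, library-free sketch: with last_message the supervisor surfaces only the delegated agent's final message, while full_history returns the whole exchange. The `apply_output_mode` helper is illustrative, not the library's implementation:

```python
# Sketch of the two output modes: "last_message" keeps only the final
# message from the delegated agent; "full_history" keeps everything.
# Illustrative helper, not langgraph-supervisor's internal code.

def apply_output_mode(agent_messages: list, mode: str) -> list:
    if mode == "last_message":
        return agent_messages[-1:]
    if mode == "full_history":
        return list(agent_messages)
    raise ValueError(f"unknown output_mode: {mode}")

msgs = [
    {"role": "assistant", "content": "Transferring to google_search_agent"},
    {"role": "tool", "content": "search results..."},
    {"role": "assistant", "content": "Here is what I found."},
]
```

Using last_message keeps the supervisor's context small and avoids cluttering it with intermediate transfer messages; full_history is useful when downstream steps need the complete trace.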

Whether you're building a customer service automation, a research assistant, or any complex AI system requiring multiple specialized capabilities, LangGraph-Supervisor provides a powerful foundation for creating well-organized, efficient multi-agent systems.

Testing the Agent and Output (with output_mode="last_message")


After a few conversations, I'm now asking: Do you remember my name? Since we've added an in-memory checkpointer (short-term memory), it should be able to recall my name.


Asking the AI agent to search for my name on Google


Asking the AI agent to parse my resume


Coming Soon: Part II - Exploring LangGraph-Swarm

In Part II of this blog series, we will delve into the langgraph-swarm Python library and explore the key differences between langgraph-supervisor and langgraph-swarm when it comes to building multi-agent AI applications.

Thanks
Sreeni Ramadorai

Top comments (8)

Satya prakash

Hi, we're seeing a strange issue during routing from supervisor to agents. There are a lot of messages like "Transferred to ..." in the message history, which we suspect are confusing the supervisor. Is there a way to trim these off so routing decisions are made without considering them? Basically, clean up all old routing messages.

Seenivasa Ramadurai

Hi Satya,

Hope you're returning only the last_message from the agent. Instead of relying on supervisor delegates or routing to other agents, you might want to try using MCP Servers to create your tools and add them to a LangGraph BigTool. That way, a single agent can handle all the decision-making.

We're using MCP Servers along with a Supervisor and Tool Agents. Based on the prompt, the Supervisor delegates to the right agent—and it's been working well for us without any issues. Try this setup with an improved supervisor prompt and see how it goes.

Let me know how it works out!

Thanks
Sreeni Ramadorai

Ernest

@sreeni5018
This was super helpful... thank you! I'm working on a use case where I perform intent detection from natural language queries over business analytics data. For example, if a user asks, "Show me omni customer sales data for Q1 2025," the intent detection agent uses an LLM to classify the intent and assign a confidence score. Based on that, the supervisor routes the request to downstream agents like sql_generation.

I'm fronting this through a Streamlit app, and it's working reasonably well so far.

Do you happen to have an example where you're using an interrupt — either due to missing information or to prompt for additional context before continuing?

Seenivasa Ramadurai

Thanks, glad it helped! Yes, I have a well-defined delegation prompt in my SupervisorAgent that knows how to route calls to the appropriate downstream agents. I also have an AI_ClassifyAgent that validates the delegation decision, adding an extra layer of confidence before execution.

As for interrupts — yes, I’ve implemented that as well. For cases where essential info is missing or ambiguous (like "Show me sales data" without a timeframe or region), the agent triggers an interrupt state. This essentially halts the flow and prompts the user for additional context before resuming. I’m managing this via state updates in LangGraph, where the node can loop back to the user prompt step until required fields are filled.

Since you're using Streamlit, you could model this by maintaining a session state and displaying a follow-up question in the UI when an interrupt is triggered. Happy to share a sample workflow if you're interested!
Thanks
Sreeni Ramadorai

Ernest

The sample workflow would be really helpful, thanks a lot.

Seenivasa Ramadurai

Thank you.

Fenix Compulon

Hello, excellent article. I'm impressed by the level of abstraction and simplification offered by the Supervisor library. Now I see that I don't need to add nodes and their relationships with add_node and add_edge, and the messages HumanMessage, BaseMessage, AIMessage, and SystemMessage are no longer used. And finally, I no longer see ToolNode and Tool....

Seenivasa Ramadurai

Yes. Please take a look at the LangGraph Swarm agent library, which eliminates the supervisor entirely. I have a blog post using this library.