James
Interrupts and Commands in LangGraph: Building Human-in-the-Loop Workflows

Welcome to this tutorial on using interrupts and commands in LangGraph to create interactive, human-in-the-loop workflows. If you're new to LangGraph or want a visual walkthrough, check out the accompanying YouTube video first: Watch the Video.

In this post, we'll explore how to build a simple workflow that pauses for user approval and dynamically routes based on that decision. This is perfect for scenarios where you need human oversight, like approving deployments or reviewing AI-generated decisions. We'll break it down step by step, with code blocks you can copy and follow along in your own Jupyter notebook or Python environment. By the end, you'll understand how to implement interrupts for pausing execution and commands for dynamic routing.

LangGraph is a powerful library for building stateful, multi-actor applications with LLMs. Here, we'll focus on its interrupt and command features, which leverage checkpointing to save and restore state seamlessly.

Key Concepts

Before diving into the code, let's cover the essentials. Interrupts allow you to pause graph execution and wait for human input, while commands enable dynamic control over the graph's flow. Checkpointing is crucial because it persists the state during these pauses.

The interrupt feature simplifies human-in-the-loop agents by using LangGraph's persistence layer. It lets you approve or reject steps, such as LLM decisions or function calls, and direct the graph accordingly. You can also review and edit the graph state, like updating responses or documents, or approve tool calls for oversight.

Commands, on the other hand, provide expressive communication between nodes. They support dynamic routing without predefined edges, handoffs in multi-agent setups, and enhanced control flow by updating state and dictating paths.

Workflow Overview

Our example workflow presents a task for approval, pauses for a user decision, and then routes to either completion or cancellation. This demonstrates a practical human-in-the-loop process: start with a task like "Deploy new feature to production," interrupt for approval, and proceed based on whether the user says "approve" or "reject."

Now, let's build it. You'll need LangGraph installed—run pip install langgraph if you haven't already. We'll also use some typing utilities and IPython for display, but the core is pure Python.

State Definition

The foundation of any LangGraph workflow is the state. This is a shared dictionary that all nodes can access and modify. For our workflow, it tracks the task, the user's decision, and the final status.

from typing_extensions import TypedDict

class WorkflowState(TypedDict):
    task: str
    # The user's decision ('approve' or 'reject') will be stored here after the interrupt.
    user_decision: str
    # The final status of our workflow.
    status: str

This TypedDict gives us editor hints and static type checking as we pass state around (note that it isn't enforced at runtime).

Node Functions

Nodes are the building blocks—functions that perform actions and update the state. We have four: one for getting approval (with an interrupt), one for routing (with a Command), and two terminal nodes that finalize the status as completed or canceled.

The get_approval node pauses the graph, prints the task, and waits for input. The router decides the next step dynamically. The others just finalize the status.

from langgraph.types import interrupt, Command

def get_approval(state: WorkflowState):
    """
    This node uses 'interrupt' to pause the graph.
    It waits for a human to provide a decision before the graph can proceed.
    """
    print('--- ⏸️ PAUSING FOR APPROVAL ---')
    print(f"Task: '{state['task']}'")
    # The 'interrupt' function stops execution here. The string passed to it
    # is a message to the user about what input is expected.
    # When the graph is resumed, the value provided will be the return value of this function.
    decision = interrupt("Please enter 'approve' or 'reject' to continue.")
    print(f"--- ▶️ RESUMING WITH DECISION: '{decision}' ---")
    return {'user_decision': decision}

def router(state: WorkflowState) -> Command:
    """
    This node uses 'Command' to dynamically route the graph's execution.
    Based on the user's decision, it decides which node to run next.
    """
    print('--- 🔀 ROUTING ---')
    decision = state.get('user_decision', '').strip().lower()

    if decision == 'approve':
        print("Decision: ✅ Approved -> Routing to 'complete_task'")
        # Command(goto=...) tells LangGraph which node to execute next.
        return Command(goto='complete_task')
    else:
        print("Decision: ❌ Rejected -> Routing to 'cancel_task'")
        return Command(goto='cancel_task')

def complete_task(state: WorkflowState):
    """A final node for when the task is approved."""
    print('--- 🎉 TASK COMPLETED ---')
    return {'status': 'done'}

def cancel_task(state: WorkflowState):
    """A final node for when the task is rejected."""
    print('--- 🗑️ TASK CANCELED ---')
    return {'status': 'canceled'}

Copy these into your script. Notice how interrupt halts execution, and Command(goto=...) handles routing without fixed edges.

Graph Construction and Visualization

Now, construct the graph. We use a StateGraph builder, add nodes, and define edges. Importantly, we enable checkpointing with an in-memory saver for interrupts to work. The router doesn't need explicit edges—commands handle that at runtime.

For visualization, we'll generate a diagram, but remember, dynamic parts won't show statically.

from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import InMemorySaver
from IPython.display import Image, display

# Initialize an in-memory checkpointer. This is required for 'interrupt' to work,
# as it needs to save the graph's state when it pauses.
memory = InMemorySaver()

# Create the graph builder.
builder = StateGraph(WorkflowState)

# Add the functions as nodes to the graph.
builder.add_node('get_approval', get_approval)
builder.add_node('router', router)
builder.add_node('complete_task', complete_task)
builder.add_node('cancel_task', cancel_task)

# Define the graph's structure (its edges).
builder.add_edge(START, 'get_approval')
builder.add_edge('get_approval', 'router')

# NOTE: We do NOT need to add conditional edges from the 'router' node.
# The 'Command' object returned by the router handles the routing dynamically.

builder.add_edge('complete_task', END)
builder.add_edge('cancel_task', END)

# Compile the graph, enabling the checkpointer.
graph = builder.compile(checkpointer=memory)

# You can visualize the graph structure.
# Notice the router doesn't have explicit paths leading out of it.
try:
    display(Image(graph.get_graph().draw_mermaid_png()))
except Exception as e:
    print(f'Could not display graph: {e}')

Run this to build your graph. If you're in Jupyter, you'll see the diagram; otherwise, it might just print an error if dependencies are missing.

Workflow Execution

Finally, let's run it. We'll simulate two scenarios: approval and rejection. Each uses a unique thread ID to track state independently. We stream events to see progress, pause at the interrupt, and resume with a decision.

First, the approval run:

# --- Run 1: Approve the task ---
print('\n' + '=' * 50 + '\n🚀 STARTING RUN 1: APPROVAL\n' + '=' * 50)

# A 'thread_id' is needed to track the state of a single run.
thread = {'configurable': {'thread_id': 'run-1'}}
initial_task = {'task': 'Deploy new feature to production'}

# Start the graph. It will run until it hits the 'interrupt' in the 'get_approval' node.
# We use 'stream' to see the events as they happen.
for event in graph.stream(initial_task, thread, stream_mode='values', debug=True):
    print(f'\n[STREAM EVENT]:\n{event}\n')

# At this point, the graph is paused. Let's resume it with the user's decision.
# We send a Command object with the 'resume' payload.
print("\n... Resuming Run 1 with 'approve' ...\n")

for event in graph.stream(Command(resume='approve'), thread, stream_mode='values', debug=True):
    print(f'\n[STREAM EVENT]:\n{event}\n')

This should pause, wait for "approve," then complete the task.

Now, the rejection run:

# --- Run 2: Reject the task ---
print('\n' + '=' * 50 + '\n🚀 STARTING RUN 2: REJECTION\n' + '=' * 50)

# Use a new thread_id for the second, independent run.
thread2 = {'configurable': {'thread_id': 'run-2'}}

# Start the second run.
for event in graph.stream(initial_task, thread2, stream_mode='values', debug=True):
    print(f'\n[STREAM EVENT]:\n{event}\n')

# Resume the second run, but this time with a 'reject' decision.
print("\n... Resuming Run 2 with 'reject' ...\n")
for event in graph.stream(Command(resume='reject'), thread2, stream_mode='values', debug=True):
    print(f'\n[STREAM EVENT]:\n{event}\n')

Here, it routes to cancellation.

Conclusion

You've now built a human-in-the-loop workflow with LangGraph! Interrupts handle pauses for input, commands enable dynamic routing, and checkpointing keeps everything stateful. Experiment by modifying the task or adding more nodes. For more advanced uses, check LangGraph's docs. If this helped, like the video and share your thoughts in the comments!
