The ability to create stateful, multi-actor applications is crucial for developing complex systems that can handle a variety of tasks. LangGraph, part of the LangChain ecosystem, is a library designed for this purpose, allowing developers to build applications with Large Language Models (LLMs) by modeling computation steps as nodes and edges in a graph. In this article, we will explore how to implement a LangGraph in TypeScript, complete with code examples and a custom tool that we will invent.
Understanding LangGraph
LangGraph is part of the LangChain framework, a set of tools for building applications powered by LLMs. LangGraph extends the LangChain Expression Language with the ability to coordinate multiple chains or actors across multiple steps of computation in a cyclic manner. It is inspired by Pregel and Apache Beam. Unlike frameworks optimized purely for Directed Acyclic Graph (DAG) workflows, it is designed for adding cycles to your LLM application, which is essential for agent-like behaviors.
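To see why cycles matter, here is a framework-free sketch in plain TypeScript of an agent loop that revisits the same "model" step until a stop condition is met. This is the pattern LangGraph generalizes; the `step` function and the stop condition are illustrative stand-ins for an LLM call and a tool-call check, not LangGraph APIs:

```typescript
// A minimal, framework-free sketch of a cyclic agent loop.
// `step` and the stop condition are stand-ins for an LLM call and
// a termination check; names here are illustrative, not LangGraph APIs.
type State = { messages: string[] };

function step(state: State): State {
  // Pretend the "model" answers, possibly needing another round.
  const next = state.messages.length < 3 ? "thinking..." : "final answer";
  return { messages: [...state.messages, next] };
}

function runLoop(initial: State): State {
  let state = initial;
  // Cycle: keep calling the same node until it produces a final answer.
  while (state.messages[state.messages.length - 1] !== "final answer") {
    state = step(state);
  }
  return state;
}

const result = runLoop({ messages: ["user question"] });
// The loop visited the same node repeatedly before terminating.
```

A DAG can only visit each node once; the loop above is exactly the kind of cycle LangGraph lets you express declaratively.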
Installation
To get started with LangGraph in TypeScript, you need to install the LangGraph package using npm:
npm install @langchain/langgraph
Additionally, you should install the LangChain OpenAI integration package:
npm install @langchain/openai
Make sure to export your OpenAI API key as an environment variable:
export OPENAI_API_KEY=sk-...
Creating a Simple LangGraph
Let's create a simple LangGraph that contains a single node called "oracle" that executes a chat model and returns the result. Here's how you can do it:
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage, BaseMessage } from "@langchain/core/messages";
import { END, MessageGraph } from "@langchain/langgraph";
const model = new ChatOpenAI({ temperature: 0 });
const graph = new MessageGraph();
graph.addNode("oracle", async (state: BaseMessage[]) => {
return model.invoke(state);
});
graph.addEdge("oracle", END);
graph.setEntryPoint("oracle");
const runnable = graph.compile();
// For a MessageGraph, the input should always be a message or a list of messages.
const res = await runnable.invoke(new HumanMessage("What is 1 + 1?"));
The code above initializes a model and a MessageGraph, adds a single node that calls the model with the given input, and compiles the graph. When executed, the graph adds the input message to the internal state, passes the state to the "oracle" node, and outputs the final state after execution.
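This "appends to internal state" behavior can be pictured as a simple reducer: each node returns one or more messages, and the graph concatenates them onto the running list. The sketch below is a hand-rolled illustration of that idea, not the library's actual internals:

```typescript
// Hand-rolled sketch of MessageGraph-style state handling: each node
// returns messages, and the graph appends them to the shared list.
// Types and names are illustrative, not the actual library internals.
type Msg = { role: "human" | "ai"; content: string };

function appendToState(state: Msg[], nodeOutput: Msg | Msg[]): Msg[] {
  const out = Array.isArray(nodeOutput) ? nodeOutput : [nodeOutput];
  return [...state, ...out];
}

// Simulate one pass through an "oracle"-style node:
let state: Msg[] = [];
state = appendToState(state, { role: "human", content: "What is 1 + 1?" });
state = appendToState(state, { role: "ai", content: "2" });
// state now holds both the input and the reply, in order.
```

Because nodes only ever append, the final state is a full transcript of the run, which is why the invoke result above contains both your input and the model's answer.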
Adding Conditional Edges
LangGraph also allows for conditional edges, which can route execution to a node based on the current state using a function. Here's an example of adding a "calculator" node that conditionally executes based on the model's output:
import { ToolMessage } from "@langchain/core/messages";
import { Calculator } from "@langchain/community/tools/calculator";
import { convertToOpenAITool } from "@langchain/core/utils/function_calling";
const model = new ChatOpenAI({
temperature: 0,
}).bind({
tools: [convertToOpenAITool(new Calculator())],
tool_choice: "auto",
});
const graph = new MessageGraph();
graph.addNode("oracle", async (state: BaseMessage[]) => {
return model.invoke(state);
});
graph.addNode("calculator", async (state: BaseMessage[]) => {
const tool = new Calculator();
const toolCalls = state[state.length - 1].additional_kwargs.tool_calls ?? [];
const calculatorCall = toolCalls.find(
(toolCall) => toolCall.function.name === "calculator"
);
if (calculatorCall === undefined) {
throw new Error("No calculator input found.");
}
const result = await tool.invoke(
JSON.parse(calculatorCall.function.arguments)
);
return new ToolMessage({
tool_call_id: calculatorCall.id,
content: result,
});
});
graph.addEdge("calculator", END);
const router = (state: BaseMessage[]) => {
const toolCalls = state[state.length - 1].additional_kwargs.tool_calls ?? [];
if (toolCalls.length) {
return "calculator";
} else {
return "end";
}
};
graph.addConditionalEdges("oracle", router, {
calculator: "calculator",
end: END,
});
const runnable = graph.compile();
In the example above, if the "oracle" node returns a message expecting a tool call, the "calculator" node is executed. Otherwise, execution ends.
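You can sanity-check the router in isolation by feeding it mock messages shaped like the model's output. The `additional_kwargs.tool_calls` shape below mirrors the OpenAI tool-calling format; the mock objects are test stubs, not real `BaseMessage` instances:

```typescript
// Mock messages carrying only the fields the router inspects.
// The shape mirrors OpenAI's tool-calling output; these are test stubs,
// not real BaseMessage instances.
type MockMessage = {
  additional_kwargs: { tool_calls?: { function: { name: string } }[] };
};

const route = (state: MockMessage[]): "calculator" | "end" => {
  const toolCalls =
    state[state.length - 1].additional_kwargs.tool_calls ?? [];
  return toolCalls.length ? "calculator" : "end";
};

// A reply that requests the calculator tool:
const withToolCall: MockMessage = {
  additional_kwargs: {
    tool_calls: [{ function: { name: "calculator" } }],
  },
};
// A plain text reply:
const plainReply: MockMessage = { additional_kwargs: {} };

const r1 = route([withToolCall]); // "calculator"
const r2 = route([plainReply]); // "end"
```

Testing routing functions this way is cheap insurance: a router that silently returns the wrong branch is one of the harder bugs to spot once the graph is wired up.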
Creating a Custom Tool
Now, let's invent a custom tool that we can integrate into our LangGraph. We'll create a simple "SentimentAnalyzer" tool that takes a text input and returns a sentiment score.
import { Tool } from "@langchain/core/tools";
class SentimentAnalyzer extends Tool {
name = "sentiment_analyzer";
description = "Returns a sentiment score for a given piece of text.";
async _call(input: string): Promise<string> {
// For simplicity, we'll return a fixed sentiment score.
// In a real-world scenario, you would integrate with a sentiment analysis API.
const sentimentScore = 0.8; // Let's assume a positive sentiment
return `Sentiment score: ${sentimentScore}`;
}
}
// Now, let's bind the SentimentAnalyzer to our model as a tool.
const sentimentAnalyzer = new SentimentAnalyzer();
const modelWithSentiment = model.bind({
tools: [convertToOpenAITool(sentimentAnalyzer)],
tool_choice: "auto",
});
// You can now add the SentimentAnalyzer node to your graph and use it as needed.
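The fixed score above is a placeholder; a real implementation would compute something from the input text instead. As a purely illustrative stand-in for a proper sentiment model or API, here is a naive keyword-based scorer you could drop into the tool body:

```typescript
// Naive keyword-based sentiment scorer (illustrative only, not a
// substitute for a real sentiment model or API).
// Returns a score roughly in [-1, 1]: positive words add, negative subtract.
function scoreSentiment(text: string): number {
  const positive = ["good", "great", "love", "excellent", "happy"];
  const negative = ["bad", "terrible", "hate", "awful", "sad"];
  const words = text.toLowerCase().split(/\W+/);
  let score = 0;
  for (const w of words) {
    if (positive.includes(w)) score += 1;
    if (negative.includes(w)) score -= 1;
  }
  // Normalize by word count to keep the score bounded.
  return words.length ? score / words.length : 0;
}

const s1 = scoreSentiment("I love this great library");
const s2 = scoreSentiment("terrible, I hate it");
```

Even a toy scorer like this makes the tool's output vary with its input, which is what lets the model's tool calls produce meaningfully different results.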
Conclusion
LangGraph is a powerful library for building stateful, multi-actor applications with LLMs. By following the steps outlined in this article, you can implement a LangGraph in TypeScript, add conditional edges, and even create custom tools to extend the functionality of your application. With LangGraph, you can create complex AI systems that can handle a wide range of tasks with agent-like behaviors.
Remember to explore the LangChain documentation and GitHub repository for more examples and detailed information on how to use LangGraph and other components of the LangChain framework. Happy coding!
References
- LangGraph documentation (LangChain docs)
- LangChain GitHub repository: langchain-ai/langchain
(Note: The code examples provided in this article are based on the documentation and may require adjustments based on the actual LangChain and LangGraph library versions and API changes.)