The LangGraph ecosystem makes it easy to build complex AI workflows in TypeScript and JavaScript — but once you have a graph, how do you ship it to production with proper storage, streaming, and framework integration?
That’s where Open LangGraph Server comes in.
Open LangGraph Server is the easiest way to integrate LangGraph.js into real-world web applications. It gives you a standard HTTP endpoint for your graphs, built-in support for Next.js and Hono.js, and pluggable storage backends like SQLite, PostgreSQL, and Redis — all with TypeScript-first ergonomics.
In this post, we’ll walk through what it is, when you should use it, and how to get started.
What is Open LangGraph Server?
Open LangGraph Server is a server layer for LangGraph.js:
- It exposes your graphs as HTTP APIs (assistants, threads, runs, streams).
- It plugs into frameworks like Next.js and Hono.js with minimal glue code.
- It handles persistence, streaming, and thread lifecycle so you don’t have to reinvent it.
At its core, you install a single package:
pnpm add @langgraph-js/pure-graph
# or
npm install @langgraph-js/pure-graph
Then you register your LangGraph graphs, and Open LangGraph Server takes care of routing, state management, and storage.
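At its simplest, that registration is a one-liner (a sketch; the full framework wiring is shown step by step below):
// agent/index.ts (sketch): expose a compiled graph under an assistant ID
import { registerGraph } from '@langgraph-js/pure-graph';
import graph from './my-graph'; // your compiled LangGraph.js graph
registerGraph('my-assistant', graph);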
Why would you use it?
If you’re building anything beyond a toy demo, you typically need:
- Framework integration: Drop-in endpoints for Next.js and Hono.js, plus compatibility with other JS frameworks.
- Enterprise-ready storage: Pick between in-memory, SQLite, PostgreSQL, and Redis depending on your environment.
- Real-time streaming: Stream responses via Server-Sent Events (SSE) with a Redis-backed queue for performance.
- Thread & run management: Built-in APIs for managing assistants, threads, runs, and streaming runs.
- Type-safe, TS-first: Fully typed APIs, runtime validation, and patterns aligned with LangGraph.js.
- Flexible auth story: Integrates with Better Auth, API keys, or custom framework middleware.
If you’re already using LangGraph.js and want a production-grade HTTP layer that fits into a modern TypeScript codebase, Open LangGraph Server is designed for exactly that.
What can you build with it?
Some typical use cases:
- AI chat applications with streaming responses and long-lived threads.
- Internal tools where agents orchestrate actions across APIs and services.
- Workflow automation that runs multi-step, branching workflows reliably.
- Multi-tenant AI platforms where each user or team has isolated context.
- Backend AI services consumed by web, mobile, or other microservices.
The key idea: LangGraph handles reasoning and orchestration, while Open LangGraph Server handles serving, persistence, and integration.
Core concepts: Assistants, Threads, and Runs
Open LangGraph Server exposes a clear API model around LangGraph graphs:
- Assistants – your registered graphs (e.g. support-bot, research-agent).
- Threads – long-lived conversations or workflows.
- Runs – individual executions of a graph in the context of a thread.
The server exposes REST-style endpoints such as:
- GET /assistants – list/search assistants
- GET /assistants/{assistantId} – get a specific assistant
- POST /threads – create a thread
- GET /threads/{threadId} – fetch thread data
- POST /threads/{threadId}/runs – start a new run
- GET /threads/{threadId}/runs/{runId}/stream – stream a run
You focus on graph logic; Open LangGraph Server provides the HTTP contract around it.
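To make that contract concrete, here's a minimal client-side sketch using fetch. It assumes a server mounted at /api/langgraph (as in the Next.js example below) and an assistant registered as test; field names like thread_id and run_id follow the usual LangGraph API conventions, so check the docs for the exact shapes.
// client.ts (sketch): the thread/run lifecycle over HTTP
const base = 'http://localhost:3000/api/langgraph'; // assumed mount point

// 1. Create a thread to hold the conversation.
const thread = await fetch(`${base}/threads`, { method: 'POST' }).then((r) => r.json());

// 2. Start a run on that thread against the 'test' assistant.
const run = await fetch(`${base}/threads/${thread.thread_id}/runs`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    assistant_id: 'test',
    input: { messages: [{ role: 'user', content: 'Hi!' }] },
  }),
}).then((r) => r.json());

// 3. Consume the run's SSE stream chunk by chunk.
const res = await fetch(`${base}/threads/${thread.thread_id}/runs/${run.run_id}/stream`);
const reader = res.body!.getReader();
const decoder = new TextDecoder();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  console.log(decoder.decode(value)); // raw SSE frames
}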
Getting started with Next.js
Let’s look at how to wire Open LangGraph Server into a Next.js app.
1. Install the package
pnpm add @langgraph-js/pure-graph
2. Register your graph
Create an agent folder and register your LangGraph graph:
// agent/index.ts
import { registerGraph } from '@langgraph-js/pure-graph';
import graph from 'your-langgraph-graph';
registerGraph('test', graph);
export {};
You can register multiple graphs if you like — each one gets an assistant ID.
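For instance (graph names are illustrative):
// agent/index.ts: registering two graphs, each reachable as its own assistant
import { registerGraph } from '@langgraph-js/pure-graph';
import supportGraph from './support-bot';
import researchGraph from './research-agent';

registerGraph('support-bot', supportGraph);
registerGraph('research-agent', researchGraph);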
3. Add the API route
Create a catch-all route for LangGraph under app/api/langgraph/[...path]/route.ts:
// app/api/langgraph/[...path]/route.ts
import { NextRequest } from 'next/server';
import { ensureInitialized } from '@langgraph-js/pure-graph/dist/adapter/nextjs/index';
export const dynamic = 'force-dynamic';
export const revalidate = 0;
const registerGraph = async () => {
// Keep graph registration in a separate module so Next.js
// doesn’t re-import the graph multiple times.
await import('@/agent/index');
};
export const GET = async (req: NextRequest) => {
const { GET } = await ensureInitialized(registerGraph);
return GET(req);
};
export const POST = async (req: NextRequest) => {
const { POST } = await ensureInitialized(registerGraph);
return POST(req);
};
export const DELETE = async (req: NextRequest) => {
const { DELETE } = await ensureInitialized(registerGraph);
return DELETE(req);
};
Now your Next.js app exposes a LangGraph-compatible API under /api/langgraph/*.
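As a quick smoke test (a sketch, assuming the dev server runs on port 3000):
// List registered assistants through the new catch-all route.
const res = await fetch('http://localhost:3000/api/langgraph/assistants');
console.log(await res.json()); // should include the 'test' assistant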
Getting started with Hono.js
If you prefer a lightweight server framework, Open LangGraph Server also ships a Hono adapter.
1. Register your graph
// agent/graph-name/graph.ts
export const graph = /* your LangGraph graph */;
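If you don't have a graph handy yet, a minimal concrete version could use the prebuilt ReAct agent (a sketch; the model name is a placeholder):
// agent/graph-name/graph.ts: a minimal compiled graph (sketch)
import { createReactAgent } from '@langchain/langgraph/prebuilt';
import { ChatOpenAI } from '@langchain/openai';

export const graph = createReactAgent({
  llm: new ChatOpenAI({ model: 'your-model' }),
  tools: [],
});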
2. Create the Hono app
// app.ts
import { registerGraph } from '@langgraph-js/pure-graph';
import { graph } from './agent/graph-name/graph';
import { Hono } from 'hono';
import LangGraphApp, {
type LangGraphServerContext,
} from '@langgraph-js/pure-graph/dist/adapter/hono/index';
import { cors } from 'hono/cors';
registerGraph('test', graph);
const app = new Hono<{ Variables: LangGraphServerContext }>();
app.use(cors());
app.route('/', LangGraphApp);
export default app;
This mounts the LangGraph server under the root path, exposing the same assistants/threads/runs endpoints via Hono.
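Since Hono apps ship a built-in request helper, you can exercise the mounted endpoints in-process without binding a port (a sketch):
// test.ts: hit the assistants endpoint directly against the Hono app
import app from './app';

const res = await app.request('/assistants');
console.log(res.status, await res.json());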
Advanced: Entrypoint-based graphs
If you’re using LangGraph’s entrypoint pattern, Open LangGraph Server can wrap that, too.
Here’s a simplified example using entrypoint and createEntrypointGraph:
// agent/entrypoint-graph.ts
import { entrypoint, getConfig } from '@langchain/langgraph';
import { createReactAgent, createReactAgentAnnotation } from '@langchain/langgraph/prebuilt';
import { createState } from '@langgraph-js/pro';
import { createEntrypointGraph } from '@langgraph-js/pure-graph';
import { ChatOpenAI } from '@langchain/openai';
const State = createState(createReactAgentAnnotation()).build({});
const workflow = entrypoint('my-entrypoint', async (state: typeof State.State) => {
const config = getConfig();
console.log('User ID from context:', config.configurable?.userId);
const agent = createReactAgent({
llm: new ChatOpenAI({ model: 'your-model' }),
prompt: 'You are a helpful assistant',
tools: [],
});
return agent.invoke(state);
});
export const graph = createEntrypointGraph({
stateSchema: State,
graph: workflow,
});
You can then register this graph the same way as before and expose it via Next.js or Hono.
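For example (the assistant ID is illustrative):
// agent/index.ts: expose the entrypoint-based graph as an assistant
import { registerGraph } from '@langgraph-js/pure-graph';
import { graph as entrypointGraph } from './entrypoint-graph';

registerGraph('entrypoint-demo', entrypointGraph);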
Passing context into your graphs
Real applications often need user-specific context — user IDs, sessions, preferences, feature flags, etc.
Open LangGraph Server lets you pass context through the request (e.g. headers, cookies), and access it in your graph via getConfig().configurable.
For example, in a Hono app:
// app.ts
import { Hono } from 'hono';
import LangGraphApp, { type LangGraphServerContext } from '@langgraph-js/pure-graph/dist/adapter/hono/index';
import { registerGraph } from '@langgraph-js/pure-graph';
import { graph as contextAwareGraph } from './agent/context-aware-graph';
registerGraph('context-aware', contextAwareGraph);
const app = new Hono<{ Variables: LangGraphServerContext }>();
app.use('/api/langgraph/*', async (c, next) => {
const userId = c.req.header('x-user-id') || 'anonymous';
const sessionId = c.req.header('x-session-id') || 'session-123';
c.set('langgraph_context', {
userId,
sessionId,
preferences: { theme: 'dark', language: 'en' },
metadata: { source: 'hono-app', timestamp: new Date().toISOString() },
});
await next();
});
app.route('/api', LangGraphApp);
export default app;
Inside your graph, you can then read this context via getConfig() and adapt behavior per user or tenant.
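For example, a node or entrypoint function can pull fields out of the configurable context (a sketch; the userId and preferences keys match the middleware above):
// Inside a graph node or entrypoint function (sketch)
import { getConfig } from '@langchain/langgraph';

const config = getConfig();
const userId = config.configurable?.userId ?? 'anonymous';
const theme = config.configurable?.preferences?.theme; // e.g. 'dark'
// ...branch on userId/theme to personalize prompts or tool access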
Storage and persistence options
Open LangGraph Server supports multiple storage backends out of the box, selectable via environment variables:
- Memory (default): Ideal for local development and tests; no persistence across restarts.
- SQLite: Great for single-server setups or small projects:
  SQLITE_DATABASE_URI=./.langgraph_api/chat.db
- PostgreSQL: Recommended for production with higher throughput and reliability:
  DATABASE_URL=postgresql://username:password@localhost:5432/langgraph_db
  DATABASE_INIT=true # first run only
  CHECKPOINT_TYPE=postgres
- Redis (full or shallow): For high-performance checkpointing and message queues:
  REDIS_URL=redis://localhost:6379
  CHECKPOINT_TYPE=redis # or shallow/redis
If Redis is configured, Open LangGraph Server will also use it to back message queues for streaming, with sensible TTLs and cleanup strategies.
Storage selection follows a priority order:
- Redis
- PostgreSQL
- SQLite
- Memory (fallback)
Local debugging with Studio
Open LangGraph Server works great with LangGraph Studio for visual inspection and debugging:
npx @langgraph-js/ui
Connect to your running server, explore graphs, inspect state, and iterate faster on complex workflows.
Wrapping up
Open LangGraph Server gives you a production-focused, TypeScript-native server layer for LangGraph.js:
- Integrates naturally with Next.js and Hono.js
- Handles assistants, threads, runs, and streaming
- Supports multiple storage backends with a simple env-based configuration
- Plays nicely with auth, middleware, and context passing
If you’re building AI-powered applications with LangGraph.js and want to ship them with real-world concerns like persistence and streaming solved for you, Open LangGraph Server is worth a serious look.
You can find the package on npm as @langgraph-js/pure-graph, and dive deeper into the docs for advanced usage, architecture, and more.