DEV Community

Alex Spinov


LangChain.js Has a Free AI Framework: Build LLM-Powered Apps With Chains, Agents, and RAG in TypeScript

You want to build an AI app that searches your documents, calls APIs, and reasons through multi-step problems. The OpenAI SDK gives you chat completions. But chaining prompts, managing context windows, vectorizing documents, and building agent loops? That's on you.

LangChain.js gives you composable building blocks — chains, agents, retrievers, memory — for building complex AI applications.

Quick Start

npm install langchain @langchain/core @langchain/openai @langchain/community @langchain/langgraph

Basic Chain

import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatOpenAI({ model: "gpt-4-turbo" });

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a technical writer. Write concise explanations."],
  ["user", "Explain {topic} in 3 sentences for {audience}."],
]);

const chain = prompt.pipe(model).pipe(new StringOutputParser());

const result = await chain.invoke({
  topic: "WebSockets",
  audience: "junior developers",
});
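If `pipe` feels magical, it isn't: each step is a function from the previous step's output to the next step's input, and `pipe` just composes them. A dependency-free sketch of that idea (`Step`, `formatPrompt`, `fakeModel`, and `parseString` are illustrative stand-ins, not LangChain's internals):

```typescript
// Minimal sketch of chain composition: each step transforms the previous output.
type Step<In, Out> = (input: In) => Out;

function pipe<A, B, C>(first: Step<A, B>, second: Step<B, C>): Step<A, C> {
  return (input: A) => second(first(input));
}

// Stand-ins for prompt template -> model -> output parser
const formatPrompt: Step<{ topic: string }, string> = ({ topic }) =>
  `Explain ${topic} in 3 sentences.`;
const fakeModel: Step<string, { content: string }> = (prompt) => ({
  content: `ANSWER(${prompt})`,
});
const parseString: Step<{ content: string }, string> = (msg) => msg.content;

const chain = pipe(pipe(formatPrompt, fakeModel), parseString);
const out = chain({ topic: "WebSockets" });
// out === "ANSWER(Explain WebSockets in 3 sentences.)"
```

The real runnables are async and support streaming and batching, but the mental model is the same: typed function composition.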

RAG (Retrieval Augmented Generation)

import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";
import { createRetrievalChain } from "langchain/chains/retrieval";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";

// 1. Split documents into overlapping chunks
const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000, chunkOverlap: 200 });
const docs = await splitter.createDocuments([myDocumentText]);

// 2. Embed the chunks into an in-memory vector store
const vectorStore = await MemoryVectorStore.fromDocuments(docs, new OpenAIEmbeddings());

// 3. Create a retriever that returns the top 4 matching chunks
const retriever = vectorStore.asRetriever({ k: 4 });

// 4. Build the RAG chain: retrieve, then stuff the chunks into the prompt

const combineDocsChain = await createStuffDocumentsChain({
  llm: model,
  prompt: ChatPromptTemplate.fromMessages([
    ["system", "Answer based on the following context:\n{context}"],
    ["user", "{input}"],
  ]),
});

const ragChain = await createRetrievalChain({
  retriever,
  combineDocsChain,
});

const response = await ragChain.invoke({ input: "What does the refund policy say?" });
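What the vector store does at query time is conceptually simple: embed the query, then rank stored chunks by cosine similarity and keep the top k. A toy version with hand-rolled 3-dimensional vectors (real embeddings come from `OpenAIEmbeddings` and have thousands of dimensions; these numbers are fabricated for illustration):

```typescript
// Toy nearest-neighbour search: rank documents by cosine similarity to a query vector.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

interface StoredDoc { text: string; vector: number[]; }

// Equivalent in spirit to vectorStore.asRetriever({ k }) over a MemoryVectorStore
function topK(store: StoredDoc[], query: number[], k: number): string[] {
  return [...store]
    .sort((x, y) => cosine(y.vector, query) - cosine(x.vector, query))
    .slice(0, k)
    .map((d) => d.text);
}

// Fabricated "embeddings": the first axis loosely means "refund-related"
const store: StoredDoc[] = [
  { text: "Refunds are issued within 30 days.", vector: [0.9, 0.1, 0.0] },
  { text: "Our office is in Berlin.", vector: [0.0, 0.2, 0.9] },
  { text: "Refund requests go to support.", vector: [0.8, 0.3, 0.1] },
];
const hits = topK(store, [1, 0, 0], 2); // query vector ~ "refund"
// hits: the two refund-related chunks, most similar first
```

The retrieved chunks are then concatenated into the `{context}` slot of the prompt, which is all `createStuffDocumentsChain` means by "stuff".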

Agents — LLM Decides What Tools to Use

import { TavilySearchResults } from "@langchain/community/tools/tavily_search";
import { Calculator } from "@langchain/community/tools/calculator";
import { createReactAgent } from "@langchain/langgraph/prebuilt";

const tools = [new TavilySearchResults(), new Calculator()]; // Tavily reads the TAVILY_API_KEY env var

const agent = createReactAgent({ llm: model, tools });

const result = await agent.invoke({
  messages: [{ role: "user", content: "What's the population of Tokyo divided by the area in km²?" }],
});
// Agent: searches for Tokyo population, searches for area, calculates density
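Under the hood an agent is a loop: the model picks a tool, the tool runs, the observation goes back into the context, and this repeats until the model produces a final answer. A stubbed sketch of that ReAct-style loop (the `decide` function stands in for the LLM, and the canned tool outputs are fabricated for illustration):

```typescript
// Stubbed agent loop: a fake "model" decides which tool to call until it can answer.
type Tool = (input: string) => string;

const tools: Record<string, Tool> = {
  // Canned "search results" standing in for a real web search tool
  search: (q) => (q.includes("population") ? "37000000" : "2194"),
  // Toy calculator: handles only "a / b" expressions, enough for this sketch
  calculator: (expr) => {
    const [a, b] = expr.split("/").map(Number);
    return String(a / b);
  },
};

// Stand-in for the LLM's decision step: returns either a tool call or a final answer.
function decide(scratchpad: string[]): { tool?: string; input?: string; answer?: string } {
  if (scratchpad.length === 0) return { tool: "search", input: "population of Tokyo" };
  if (scratchpad.length === 1) return { tool: "search", input: "area of Tokyo" };
  if (scratchpad.length === 2)
    return { tool: "calculator", input: `${scratchpad[0]} / ${scratchpad[1]}` };
  return { answer: scratchpad[2] };
}

function runAgent(): string {
  const scratchpad: string[] = []; // accumulated tool observations
  for (let step = 0; step < 10; step++) {
    const action = decide(scratchpad);
    if (action.answer !== undefined) return action.answer;
    scratchpad.push(tools[action.tool!](action.input!));
  }
  throw new Error("agent did not finish");
}

const density = runAgent(); // roughly 16864 people per km²
```

The real `createReactAgent` replaces `decide` with a tool-calling LLM and the step cap with configurable recursion limits, but the control flow is this loop.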

LangChain vs AI SDK vs Direct API

| Feature | LangChain.js | AI SDK | Direct API |
| --- | --- | --- | --- |
| RAG | Built-in | Manual | Manual |
| Agents | Built-in | Tool calling | Manual |
| Memory | Built-in | Manual | Manual |
| Streaming | Yes | Yes | SSE |
| React hooks | No | Yes | No |
| Complexity | High | Low | Medium |
| Best for | Complex AI apps | Chat UIs | Simple calls |

Choose LangChain for RAG, agents, and complex AI workflows. Choose AI SDK for streaming chat UIs.

Start here: js.langchain.com


Need custom data extraction, scraping, or automation? I build tools that collect and process data at scale — 78 actors on Apify Store and 265+ open-source repos. Email me: Spinov001@gmail.com | My Apify Actors
