
Alex Spinov
LangChain.js: A Free Framework That Chains LLM Calls. Build AI Agents With RAG in 50 Lines

The LLM Problem

You call the OpenAI API. You get a response. Now what?

  • How do you add memory to a conversation?
  • How do you give the LLM access to your documents?
  • How do you chain multiple LLM calls together?
  • How do you let the LLM use tools?

LangChain.js solves all of this. Chains, agents, RAG, memory — all composable.
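The "chain multiple LLM calls" piece is worth a quick sketch. LangChain.js composes steps with `.pipe()` (the LangChain Expression Language): a prompt template feeds a model, which feeds an output parser. This assumes an `OPENAI_API_KEY` in your environment; the prompt text is illustrative.

```typescript
import { ChatOpenAI } from '@langchain/openai';
import { ChatPromptTemplate } from '@langchain/core/prompts';
import { StringOutputParser } from '@langchain/core/output_parsers';

// prompt -> model -> parser, composed into one runnable chain
const chain = ChatPromptTemplate.fromTemplate('Summarize in one sentence: {text}')
  .pipe(new ChatOpenAI({ model: 'gpt-4o' }))
  .pipe(new StringOutputParser());

const summary = await chain.invoke({ text: 'LangChain.js composes prompts, models, and parsers.' });
console.log(summary);
```

Each step in the chain is a "runnable", so you get `.invoke()`, `.stream()`, and `.batch()` on the whole pipeline for free.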

What LangChain.js Gives You

Simple Chat

import { ChatOpenAI } from '@langchain/openai';

const model = new ChatOpenAI({ model: 'gpt-4o' });
const response = await model.invoke('Explain Docker in one sentence');
console.log(response.content);
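Conversation memory, at its simplest, is just sending the prior messages back in on every turn. Here's a hand-rolled sketch reusing the `model` from above (LangChain also ships structured message-history helpers; this just shows the underlying idea):

```typescript
// "Memory" in its simplest form: resend the conversation history each turn
const history = [
  { role: 'user', content: 'My name is Sam.' },
  { role: 'assistant', content: 'Nice to meet you, Sam!' },
  { role: 'user', content: 'What is my name?' },
];
const reply = await model.invoke(history);
console.log(reply.content); // the model can now recall the name
```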

RAG (Retrieval-Augmented Generation)

import { ChatOpenAI, OpenAIEmbeddings } from '@langchain/openai';
import { MemoryVectorStore } from 'langchain/vectorstores/memory';
import { RecursiveCharacterTextSplitter } from 'langchain/text_splitter';

// 1. Load and split documents
const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000 });
const docs = await splitter.createDocuments([yourDocumentText]);

// 2. Create vector store
const vectorStore = await MemoryVectorStore.fromDocuments(
  docs,
  new OpenAIEmbeddings()
);

// 3. Query with context
const retriever = vectorStore.asRetriever();
const relevantDocs = await retriever.invoke('How does auth work?');

const model = new ChatOpenAI({ model: 'gpt-4o' });
const answer = await model.invoke([
  { role: 'system', content: `Answer based on this context: ${relevantDocs.map(d => d.pageContent).join('\n')}` },
  { role: 'user', content: 'How does auth work?' }
]);

Tool-Using Agents

import { ChatOpenAI } from '@langchain/openai';
import { tool } from '@langchain/core/tools';
import { z } from 'zod';

const weatherTool = tool(
  async ({ city }) => {
    const res = await fetch(`https://wttr.in/${city}?format=j1`);
    const data = await res.json();
    return `${data.current_condition[0].temp_C}°C`;
  },
  {
    name: 'get_weather',
    description: 'Get current weather for a city',
    schema: z.object({ city: z.string() }),
  }
);

import { AgentExecutor, createToolCallingAgent } from 'langchain/agents';
import { ChatPromptTemplate } from '@langchain/core/prompts';

// The agent prompt needs an {agent_scratchpad} placeholder for tool calls
const prompt = ChatPromptTemplate.fromMessages([
  ['system', 'You are a helpful assistant'],
  ['human', '{input}'],
  ['placeholder', '{agent_scratchpad}'],
]);
const llm = new ChatOpenAI({ model: 'gpt-4o' });
const agent = createToolCallingAgent({ llm, tools: [weatherTool], prompt });
const executor = new AgentExecutor({ agent, tools: [weatherTool] });

const result = await executor.invoke({ input: 'What is the weather in Tokyo?' });
// The agent calls the get_weather tool, then responds with the result

Streaming

for await (const chunk of await model.stream('Write a poem about coding')) {
  process.stdout.write(chunk.content);
}

Works With Multiple Providers

OpenAI, Anthropic, Google (Gemini), Ollama (local), Mistral, Cohere — swap providers with one line.
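For example, switching from OpenAI to Anthropic is just a different import and constructor; the rest of your chain, agent, and RAG code is unchanged. (The model name below is illustrative; check the provider's current model list.)

```typescript
// Before: OpenAI
import { ChatOpenAI } from '@langchain/openai';
const model = new ChatOpenAI({ model: 'gpt-4o' });

// After: Anthropic (requires `npm install @langchain/anthropic`)
// import { ChatAnthropic } from '@langchain/anthropic';
// const model = new ChatAnthropic({ model: 'claude-3-5-sonnet-latest' });

// Everything downstream stays the same
const response = await model.invoke('Explain Docker in one sentence');
console.log(response.content);
```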

Quick Start

npm install langchain @langchain/core @langchain/openai zod
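The examples above assume your OpenAI API key is set in the environment (the value shown is a placeholder):

```shell
export OPENAI_API_KEY="sk-..."
```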

Why This Matters

Raw LLM APIs are like raw SQL — powerful but tedious. LangChain.js gives you the ORM-level abstractions for AI: chains, agents, memory, and RAG.


Building AI apps that need web data? Check out my web scraping actors on Apify Store — feed structured data into your LangChain pipelines. For custom solutions, email spinov001@gmail.com.
