Learning the Basics of Large Language Model (LLM) Applications with LangChainJS

LangChainJS is a powerful framework for building applications on top of Large Language Models (LLMs) in JavaScript. It’s well suited to a wide range of platforms, including browser extensions, mobile apps with React Native, and desktop apps with Electron. JavaScript’s popularity among developers, combined with its ease of deployment and scalability, makes LangChainJS a natural choice for these tasks.

LangChainJS composes applications from components called "runnables," connected using the LangChain Expression Language (LCEL). Runnables share a common interface with defined input and output types, and support invoking, streaming, batching, and modifying runtime parameters.

Example: Making a Joke Bot
Here’s a simple example to demonstrate how LangChainJS works:

import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatOpenAI({
    modelName: "gpt-3.5-turbo-1106"
});

await model.invoke([
    new HumanMessage("Tell me a joke.")
]);

// Expected Response:
// "Why don't skeletons fight each other? They don't have the guts!"

Understanding Prompt Templates

Prompt templates are standardized formats used for creating prompts in LLM applications. They include placeholders for variable input, making them reusable and adaptable for different queries. In LangChain, prompt templates are implemented using classes like PromptTemplate and ChatPromptTemplate.

import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromTemplate(
    `What are three good names for a company that makes {product}?`
);

await prompt.format({
    product: "colorful socks"
});

// Expected Output:
// "Human: What are three good names for a company that makes colorful socks?"
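
ChatPromptTemplate can also be built from a list of role-tagged messages. A minimal sketch with a system message plus a templated human message:

import { ChatPromptTemplate } from "@langchain/core/prompts";

// A chat prompt combining a fixed system message with a templated human message.
const chatPrompt = ChatPromptTemplate.fromMessages([
    ["system", "You are an expert at naming companies."],
    ["human", "What are three good names for a company that makes {product}?"]
]);

await chatPrompt.formatMessages({ product: "colorful socks" });

// Returns an array of messages: [SystemMessage, HumanMessage]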

LangChain Expression Language (LCEL)

LCEL connects runnables in sequence, creating workflows where the output of one component becomes the input of the next. Every runnable exposes methods like invoke, stream, and batch.

const chain = prompt.pipe(model);
await chain.invoke({
    product: "colorful socks"
});

// Expected Response:
// "1. Rainbow Soles\n2. Vivid Footwear Co.\n3. Chromatic Sockworks"


Using Output Parsers

Output parsers transform the chat model output into a different format, such as a simple string.

import { StringOutputParser } from "@langchain/core/output_parsers";

const outputParser = new StringOutputParser();
const nameGenerationChain = prompt.pipe(model).pipe(outputParser);

await nameGenerationChain.invoke({
    product: "fancy cookies"
});

// Expected Response:
// "1. Gourmet Cookie Creations\n2. Delicate Delights Bakery\n3. Heavenly Sweet Treats Co."


Streaming Responses

The .stream method returns the model output as an async iterable, letting you process chunks as they arrive rather than waiting for a long generation to finish.

const stream = await nameGenerationChain.stream({
  product: "really cool robots",
});

for await (const chunk of stream) {
    console.log(chunk);
}


Batch Processing

The .batch method runs a chain over multiple inputs concurrently.

const inputs = [
    { product: "large calculators" },
    { product: "alpaca wool sweaters" }
];

await nameGenerationChain.batch(inputs);

// Expected Response:
// ["1. GiantCalc Co.\n2. MegaMath Devices\n3. JumboCalculations Inc.",
//  "1. Alpaca Luxe\n2. Sweater Alpaca\n3. Woolly Alpaca Co."]


Retrieval Augmented Generation (RAG)

RAG combines LLMs with retrieval so that generated answers are grounded in your own data. The typical pipeline loads documents, splits them into chunks, embeds the chunks as vectors, and stores them in a vector database for efficient retrieval at query time.

Document Loading with LangChainJS

LangChainJS offers document loaders to collect data from various sources. For example, you can load a GitHub repository:

import { GithubRepoLoader } from "langchain/document_loaders/web/github";
// Peer dependency of GithubRepoLoader, used to honor ignorePaths.
import ignore from "ignore";

const loader = new GithubRepoLoader(
  "https://github.com/langchain-ai/langchainjs",
  { recursive: false, ignorePaths: ["*.md", "yarn.lock"] }
);

const docs = await loader.load();
console.log(docs.slice(0, 3));


Splitting Documents

LangChainJS provides text splitters that break documents into chunks while trying to keep related content, such as complete functions in code, together.

import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

const splitter = RecursiveCharacterTextSplitter.fromLanguage("js", {
  chunkSize: 32,
  chunkOverlap: 0,
});
const code = `function helloWorld() {
console.log("Hello, World!");
}
// Call the function
helloWorld();`;

await splitter.splitText(code);

// Expected Output:
// ["function helloWorld() {", 'console.log("Hello, World!");\n}', "// Call the function", "helloWorld();"]


Embedding and Searching

Embedding converts text into numeric vectors that capture semantic meaning. Stored in a vector database, these vectors let you find the chunks most relevant to a query by similarity search.

import { OpenAIEmbeddings } from "@langchain/openai";

const embeddings = new OpenAIEmbeddings();
await embeddings.embedQuery("This is some sample text");

// Expected Output:
// An array of numbers representing the embedded text.
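
To complete the RAG pipeline, the chunks can be embedded and stored in a vector store. A minimal sketch using the in-memory vector store, assuming the splitDocs produced above:

import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

// Embed the chunks and store them in an in-memory vector store.
const vectorstore = await MemoryVectorStore.fromDocuments(
    splitDocs,
    new OpenAIEmbeddings()
);

// Find the chunks most similar to a query...
const results = await vectorstore.similaritySearch("What is deep learning?", 4);

// ...or expose the store as a retriever for use in chains.
const retriever = vectorstore.asRetriever();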


Constructing a Retrieval Chain

Create a chain to retrieve documents and generate answers based on user queries.
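
The chain below references documentRetrievalChain and answerGenerationPrompt, which aren't defined in the snippet. One hedged way to construct them, assuming the retriever from the previous section:

import { RunnableSequence } from "@langchain/core/runnables";
import { ChatPromptTemplate } from "@langchain/core/prompts";

// Flatten retrieved documents into a single context string.
const convertDocsToString = (documents) =>
    documents.map((doc) => `<doc>\n${doc.pageContent}\n</doc>`).join("\n");

// Pull the question out of the input, retrieve matching chunks,
// and format them as context.
const documentRetrievalChain = RunnableSequence.from([
    (input) => input.question,
    retriever, // assumed: vectorstore.asRetriever() from the previous section
    convertDocsToString,
]);

const answerGenerationPrompt = ChatPromptTemplate.fromTemplate(
    `Answer the question using only the provided context.

Context:
{context}

Question: {question}`
);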

import { RunnableSequence } from "@langchain/core/runnables";
import { ChatOpenAI } from "@langchain/openai";
import { StringOutputParser } from "@langchain/core/output_parsers";

const retrievalChain = RunnableSequence.from([
  {
    context: documentRetrievalChain,
    question: (input) => input.question,
  },
  answerGenerationPrompt,
  model,
  new StringOutputParser(),
]);

const answer = await retrievalChain.invoke({
  question: "What are the prerequisites for this course?"
});

console.log(answer);

// Expected Response:
// Detailed answer about the prerequisites for the course.


Handling Follow-up Questions

LangChainJS handles follow-up questions by saving chat history and rephrasing each follow-up into a standalone question, so it can be answered without the prior conversation.
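
The wrapper below references a conversationalRetrievalChain. A minimal sketch of one way to build it: rephrase the follow-up into a standalone question using the history, then pass that question to the retrieval chain from the previous section:

import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts";
import { RunnableSequence, RunnablePassthrough } from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";

// Turn a follow-up like "Can you list them?" into a standalone question.
const rephrasePrompt = ChatPromptTemplate.fromMessages([
    new MessagesPlaceholder("history"),
    ["human", "Rephrase the following question as a standalone question, given the conversation above:\n{question}"],
]);

const rephraseChain = RunnableSequence.from([
    rephrasePrompt,
    model,
    new StringOutputParser(),
]);

// Replace the question with its standalone version, then answer it
// with the retrieval chain built earlier.
const conversationalRetrievalChain = RunnableSequence.from([
    RunnablePassthrough.assign({ question: rephraseChain }),
    retrievalChain, // assumed: the chain from "Constructing a Retrieval Chain"
]);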

import { MessagesPlaceholder } from "@langchain/core/prompts";
import { RunnableWithMessageHistory } from "@langchain/core/runnables";
import { ChatMessageHistory } from "langchain/stores/message/in_memory";

const messageHistory = new ChatMessageHistory();
const finalRetrievalChain = new RunnableWithMessageHistory({
  runnable: conversationalRetrievalChain,
  getMessageHistory: (_sessionId) => messageHistory,
  historyMessagesKey: "history",
  inputMessagesKey: "question",
});

const originalQuestion = "What are the prerequisites for this course?";
const originalAnswer = await finalRetrievalChain.invoke({
  question: originalQuestion,
}, {
  configurable: { sessionId: "test" }
});

const finalResult = await finalRetrievalChain.invoke({
  question: "Can you list them in bullet point form?",
}, {
  configurable: { sessionId: "test" }
});

console.log(finalResult);

// Expected Response:
// List of prerequisites in bullet points.


This guide covers the basics of using LangChainJS for building LLM applications, from loading documents and creating prompt templates to constructing retrieval chains and handling follow-up questions. By leveraging these tools, you can create powerful and efficient LLM applications in JavaScript.
