Roy Derks

Posted on • Originally published at hackteam.io

Build AI Applications With LangChain, JavaScript, and React

As a software developer looking to accelerate development and incorporate AI into products, integrating tools like LangChain into your workflow is an exciting prospect. This post is a guide to using LangChain to run prompts against OpenAI models from a React application.

We will walk through:

  • Setting up a React app with Vite
  • Installing and configuring LangChain
  • Connecting to the OpenAI API
  • Building a query function with LangChain
  • Handling form submission to interface with the AI

A video version of this blog post is also available on YouTube.

Setting up a React app with Vite

Vite is a rapid frontend build tool for modern web development. For this tutorial, we use a basic Vite app styled similarly to ChatGPT - with a query box against an empty background ready to display questions and answers.

I've already created a boilerplate for this tutorial, which you can find here.

You can check out the repository by running the following command in your terminal:

git clone https://github.com/royderks/ai-frontend-workshop.git

After cloning the repository, you'll need to move into the new directory and install the dependencies.

cd ai-frontend-workshop/my-gpt
npm install

Once everything is installed, open the project in your IDE of choice; we'll use Visual Studio Code here. The project structure will look familiar if you've ever worked with Create React App: static assets such as images live in public, your React code lives in src, and the usual essentials like package.json and vite.config.js are present. TypeScript users haven't been forgotten either: there's a tsconfig.json in there for you.

Ready to see your project live? Start your local development server with:

npm run dev

And just like that, your Vite app is up and running in your browser, looking something like the screenshot below.

Initial chat app with Vite

You get hot-reloading and all the modern features that make coding less of a chore.

Installing and configuring LangChain

LangChainJS is a JavaScript library designed to integrate language AI capabilities into web and Node.js applications. LangChain began as a Python SDK but now has JavaScript and TypeScript support. By offering developers an easy-to-use interface to leverage Large Language Models (LLMs) like GPT-4, LangChainJS simplifies the process of incorporating advanced natural language understanding and generation into projects.

Popular use cases for LangChainJS include chatbots, automated content creation, enhanced search functionality, and personalized user experiences. You can build all of this with only limited knowledge of LLMs, as LangChain offers a set of abstractions on top of popular LLM providers such as OpenAI and IBM Watson.
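To give a feel for what those abstractions do, here's a plain-TypeScript sketch of one of the core ideas: a prompt template with named placeholders that gets filled in before the text is sent to an LLM. `formatPrompt` is a hypothetical helper for illustration, not LangChain's actual API.

```typescript
// Sketch of the prompt-template idea: replace each {placeholder}
// in a template string with a value, leaving unknown keys intact.
function formatPrompt(
  template: string,
  values: Record<string, string>
): string {
  return template.replace(/\{(\w+)\}/g, (_, key) => values[key] ?? `{${key}}`);
}

const prompt = formatPrompt("Summarize {topic} in {count} sentences.", {
  topic: "LangChain",
  count: "two",
});
// prompt === "Summarize LangChain in two sentences."
```

LangChain ships real versions of this (and much more, such as chains and output parsers), so you rarely have to hand-roll prompt plumbing yourself.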

To install LangChain, stop the dev server and run:

npm install langchain @langchain/openai

This will install both the general LangChain library and the library needed to connect to OpenAI. In the next section, we'll be connecting to OpenAI for which we need to generate an API Key first.

Connecting to OpenAI API

OpenAI isn't only the creator of ChatGPT; it also has a platform where you can access the models behind ChatGPT. Developers can sign up for free and will often get a small credit of $5 to try out the APIs. On the OpenAI dashboard, we will generate a new secret key to authenticate our app.

With our key secured, we create an environment file called .env to store it:

VITE_OPENAI_API_KEY=<your_key>

We're using the VITE_ prefix because Vite only exposes environment variables that start with VITE_ to client-side code via import.meta.env.
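If you're working in TypeScript, you can optionally tell the compiler about this variable by augmenting Vite's env types. This is a sketch of what a src/vite-env.d.ts declaration could look like; the variable name matches what we put in .env:

```typescript
// src/vite-env.d.ts — optional type declarations so that
// import.meta.env.VITE_OPENAI_API_KEY is typed as a string.
interface ImportMetaEnv {
  readonly VITE_OPENAI_API_KEY: string;
}

interface ImportMeta {
  readonly env: ImportMetaEnv;
}
```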

Now we can initialize the connection in a new file called langchain.ts:

import { OpenAI } from "@langchain/openai";

const llm = new OpenAI({
    openAIApiKey: import.meta.env.VITE_OPENAI_API_KEY,
});

After creating the connection to OpenAI, we can create a function to call the LLM and pass our question to retrieve the answer.

Building a query function with LangChain

There are multiple methods to query an LLM using LangChain, each of which behaves slightly differently. For this tutorial, we'll use the invoke method, one of the simplest and quickest available in LangChainJS.

With our connection configured, we can build out the query function in the same langchain.ts file:

import { OpenAI } from "@langchain/openai";

const llm = new OpenAI({
    openAIApiKey: import.meta.env.VITE_OPENAI_API_KEY,
});

export async function getAnswer(question: string) {
    let answer = ''

    try {
        answer = await llm.invoke(question);
    } catch (e) {
        console.error(e);
    }

    return answer;
}

This async function takes the question, queries the API using LangChain's invoke method, and returns the answer. A try/catch block wraps the invoke call so we can catch errors, for example, when we run out of tokens.
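If a request fails intermittently (a network hiccup or a rate limit, say), you may prefer to retry rather than immediately return an empty answer. Here's a minimal, hypothetical retry helper you could wrap around getAnswer; the function name and backoff values are assumptions, not part of LangChain:

```typescript
// Hypothetical helper: retry an async function a few times with
// exponential backoff before giving up. Works with any async function,
// e.g. () => getAnswer(question).
async function withRetries<T>(
  fn: () => Promise<T>,
  retries = 3,
  delayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < retries; attempt++) {
    try {
      return await fn();
    } catch (e) {
      lastError = e;
      // Wait before the next attempt, doubling the delay each time.
      await new Promise((resolve) =>
        setTimeout(resolve, delayMs * 2 ** attempt)
      );
    }
  }
  throw lastError;
}
```

For example, `withRetries(() => getAnswer(question))` would try the call up to three times before rethrowing the last error.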

Handling form submission to interface with the AI

The final step in this tutorial is to use the getAnswer function in the user interface we built in the first section. As we're using client-side React, we need to create a few state variables in src/App.tsx and import the getAnswer function from langchain.ts. We also need to wrap getAnswer in a handler that we can connect to the onSubmit event of the form containing the question input:

import { useState } from "react";
import { getAnswer } from "./langchain";

export default function App() {
  const [question, setQuestion] = useState("");

  async function handleSubmit(e: React.FormEvent<HTMLFormElement>) {
    e.preventDefault();
    const result = await getAnswer(question);
    console.log(result);
  }

  // ...

We can now handle the submission of the question by adding this function to the onSubmit in the form element:

<form
    className="stretch mx-2 flex flex-row gap-3 last:mb-2 md:mx-4 md:last:mb-6 lg:mx-auto lg:max-w-2xl xl:max-w-3xl"
    onSubmit={handleSubmit}
>

With that, our app can now interface with the AI! We can ask questions and see it return answers that are logged to the console. If you want to display the result in the user interface, you can create another state variable for the answer:

import { useState } from "react";
import { getAnswer } from "./langchain";

export default function App() {
  const [question, setQuestion] = useState("");
  const [answer, setAnswer] = useState("");

  async function handleSubmit(e: React.FormEvent<HTMLFormElement>) {
    e.preventDefault();
    const result = await getAnswer(question);

    setAnswer(result);
  }

  // ...

The value for answer can be rendered anywhere in the user interface, for example, in a text balloon.

What's next?

Integrating LangChain opens up lots of possibilities for automating workflows, analyzing data, providing search functionality, and more. The contents of this tutorial should give you a good start in building your own AI integrations. Feel free to leave any questions below!

If you found this article useful, let me know on Twitter at @gethackteam. Please share it around the web or subscribe to my YouTube channel for more exciting content on web technologies.
