Aakash N S

useLLM - React Hooks for Large Language Models

useLLM is a React library that lets you integrate large language models like OpenAI's ChatGPT and add AI-powered features into your React app with just a few lines of code.

It supports message streaming, prompt engineering, audio transcription, text-to-speech, and much more right out of the box, and it offers powerful abstractions for building complex AI apps. You can read the docs and try out live demos here: https://usellm.org.

useLLM keeps client-side and server-side code clearly separated, and helps you follow best practices for avoiding security vulnerabilities in your apps.

Usage

Let's build a simple ChatGPT-powered travel planner app using useLLM to see how it works.

1. Create an Application

While useLLM can be used with any React framework, we'll use Next.js in this example. Run the following command to create a new Next.js app:

npx create-next-app@latest travel-planner

NOTE: You can use npx create-next-app@latest . if you'd like to create the app in the current directory.

Select the default options when prompted:

✔ Would you like to use TypeScript with this project? > [Yes]
✔ Would you like to use ESLint with this project? > [Yes]
✔ Would you like to use Tailwind CSS with this project? > [Yes]
✔ Would you like to use src/ directory with this project? > [No]
✔ Use App Router (recommended)? > [Yes]
✔ Would you like to customize the default import alias? > [No]

Once the app is created, navigate to the project directory and install the useLLM package:

cd travel-planner
npm install usellm@latest

You can now run the development server:

npm run dev

Open http://localhost:3000 with your browser to see the default Next.js app.

(Screenshot: the default Next.js starter page)

You can also open the project for development in VS Code:

code .

2. Create an API Route

To communicate with the OpenAI API, we'll need to create an API route. Add the following code inside a file named app/api/llm/route.ts:

import { createLLMService } from "usellm";

export const runtime = "edge";

const llmService = createLLMService({
  openaiApiKey: process.env.OPENAI_API_KEY,
  actions: ["chat"],
});

export async function POST(request: Request) {
  const body = await request.json();

  // add authentication and rate limiting here

  try {
    const { result } = await llmService.handle({ body, request });
    return new Response(result, { status: 200 });
  } catch (error: any) {
    return new Response(error.message, { status: error?.status || 400 });
  }
}

The above file creates a route handler within the app router and uses the edge runtime to allow streaming of responses. Don't worry if these terms don't make sense to you; you can read more about them later in the Next.js docs.
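
The "add authentication and rate limiting here" comment marks the spot for any checks you want to run before the request is forwarded to OpenAI. As a purely illustrative example (not part of useLLM), you could modify the handler to reject oversized request bodies so a misbehaving client can't run up your OpenAI bill:

export async function POST(request: Request) {
  // Read the raw body so we can cap its size (10,000 characters is an arbitrary limit)
  const rawBody = await request.text();
  if (rawBody.length > 10_000) {
    return new Response("Request too large", { status: 413 });
  }
  const body = JSON.parse(rawBody);

  // ...then pass body to llmService.handle exactly as shown above
  try {
    const { result } = await llmService.handle({ body, request });
    return new Response(result, { status: 200 });
  } catch (error: any) {
    return new Response(error.message, { status: error?.status || 400 });
  }
}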

You'll need to provide your OpenAI secret API key to let the library communicate with the OpenAI API. Create a file .env.local and place your API key inside it:

OPENAI_API_KEY=sk-...

The .env.local file is already included in your .gitignore to prevent your API key from being committed to your repository.

3. Use the React Hook

We're now ready to use the useLLM React hook within our application to connect to the API route we just created. Replace the contents of app/page.tsx with the following code:

"use client";
import useLLM from "usellm";
import { useState } from "react";

export default function Home() {
  const llm = useLLM();
  const [location, setLocation] = useState("");
  const [result, setResult] = useState("");

  async function handleClick() {
    try {
      await llm.chat({
        messages: [
          {
            role: "system",
            content: `You're a travel planner bot. Given a destination, generate an itinerary for a one week trip. Keep it short and sweet.`,
          },
          { role: "user", content: `Destination: ${location}` },
        ],
        stream: true,
        onStream: ({ message }) => setResult(message.content),
      });
    } catch (error) {
      console.error("Something went wrong!", error);
    }
  }
  return (
    <div className="min-h-screen mx-auto my-8 max-w-4xl">
      <h1 className="text-center mb-4 text-2xl">Travel Planner</h1>
      <div className="flex">
        <input
          value={location}
          onChange={(e) => setLocation(e.target.value)}
          placeholder="Enter a destination"
          className="rounded border p-2 mr-2 text-black"
        />
        <button
          className="rounded border border-black dark:border-white p-2"
          onClick={handleClick}
        >
          Submit
        </button>
      </div>
      <div className="mt-4 whitespace-pre-wrap">{result}</div>
    </div>
  );
}

Here's what the above code does:

  • A page is created with an input box where the user can enter a location and a submit button.
  • The useLLM React hook is used to create an llm object to communicate with the API route.
  • When the user clicks "Submit", the handleClick function is called, which invokes llm.chat.
  • llm.chat sends a message to the API route, which in turn sends it to the OpenAI API.
  • The response is streamed back word by word & passed to the onStream callback.
  • The onStream callback updates the result state variable, which in turn updates the UI.
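
If you don't need the token-by-token streaming effect, llm.chat can also be awaited for the complete response. Here's a minimal sketch of an alternative handleClick for the component above, assuming the non-streaming call resolves with the final message (double-check the return shape against the usellm docs):

// Non-streaming variant: wait for the full reply, then update the UI once
async function handleClick() {
  const { message } = await llm.chat({
    messages: [{ role: "user", content: `Destination: ${location}` }],
  });
  setResult(message.content);
}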

Visit http://localhost:3000 in your browser to try it yourself. Here's what it looks like:

(Screenshot: the travel planner generating a one-week itinerary)
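
If you'd like to turn this into a back-and-forth conversation rather than a one-shot prompt, one approach is to keep the entire message history in state and pass it to llm.chat on every turn. Here's a rough sketch using only the llm.chat options shown above; the component, state shape, and ChatMessage type are illustrative and not part of useLLM:

"use client";
import useLLM from "usellm";
import { useState } from "react";

// Illustrative message type; usellm may export its own
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

export default function Conversation() {
  const llm = useLLM();
  const [history, setHistory] = useState<ChatMessage[]>([
    { role: "system", content: "You're a travel planner bot. Keep it short and sweet." },
  ]);
  const [input, setInput] = useState("");

  async function send() {
    // Append the user's message, then stream the assistant's reply into state
    const messages: ChatMessage[] = [...history, { role: "user", content: input }];
    setHistory(messages);
    setInput("");
    await llm.chat({
      messages,
      stream: true,
      onStream: ({ message }) =>
        setHistory([...messages, { role: "assistant", content: message.content }]),
    });
  }

  return (
    <div className="mx-auto my-8 max-w-4xl">
      {history
        .filter((m) => m.role !== "system")
        .map((m, i) => (
          <p key={i} className="whitespace-pre-wrap">
            {m.role}: {m.content}
          </p>
        ))}
      <input
        value={input}
        onChange={(e) => setInput(e.target.value)}
        placeholder="Say something"
        className="rounded border p-2 mr-2 text-black"
      />
      <button className="rounded border p-2" onClick={send}>
        Send
      </button>
    </div>
  );
}

Constructing plain message objects in the onStream callback keeps the sketch independent of whatever message types usellm exports.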

4. Deploy to the Cloud

Let's push this project to GitHub and deploy it to Vercel. Before deploying to production, you might want to add authentication (check out Clerk) or rate limiting (check out Upstash) to your API route to prevent abuse.
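
For instance, here's a rough sketch of per-IP rate limiting with Upstash dropped into app/api/llm/route.ts. It assumes you've created an Upstash Redis database, installed the @upstash/ratelimit and @upstash/redis packages, and set the UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN environment variables; the limit of 10 requests per minute is arbitrary:

import { createLLMService } from "usellm";
import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";

export const runtime = "edge";

const llmService = createLLMService({
  openaiApiKey: process.env.OPENAI_API_KEY,
  actions: ["chat"],
});

// Allow 10 requests per IP per minute (illustrative numbers)
const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(10, "60 s"),
});

export async function POST(request: Request) {
  const ip = request.headers.get("x-forwarded-for") ?? "anonymous";
  const { success } = await ratelimit.limit(ip);
  if (!success) {
    return new Response("Too many requests", { status: 429 });
  }

  const body = await request.json();
  try {
    const { result } = await llmService.handle({ body, request });
    return new Response(result, { status: 200 });
  } catch (error: any) {
    return new Response(error.message, { status: error?.status || 400 });
  }
}

Authentication works the same way: verify the user's session (for example with Clerk) at the top of the handler and return a 401 response if it's missing.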

Follow these steps to push your project to GitHub:

  1. Stage and commit your changes to the git repository (already initialized by create-next-app):
   git add .
   git commit -m "Initial commit"
  2. Go to https://github.com and create a new repository. You don't need to initialize the repository with a README, .gitignore or License. The repository should be completely empty.

  3. Copy the URL of your new GitHub repository. It should look something like https://github.com/username/reponame.git.

  4. Link the local repository to your GitHub repository:

   git remote add origin YOUR_GITHUB_REPO_URL

Replace YOUR_GITHUB_REPO_URL with the URL you copied in step 3.

  5. Finally, push your local repository to GitHub:
   git push -u origin main

Next, deploy the project to Vercel, adding your OpenAI API key as an environment variable during the deployment process:

  1. Visit Vercel and sign in or sign up using your GitHub account.

  2. After you are signed in, click on the "Import Project" button. Click "Continue with GitHub" and authorize Vercel to access the repository.

  3. Once the repository is selected, you will be taken to the "Import Git Repository" page. On this page, before clicking "Deploy", click on the "Environment Variables" section to expand it.

  4. In the "Name" field, enter OPENAI_API_KEY, and in the "Value" field, enter the actual API key. Set the "Environment" to "Production". Then click on "Add".

  5. After adding the environment variable, you can leave the rest of the settings as they are for a basic Next.js application, and click "Deploy".

  6. Vercel will create the project and start the deployment. You can watch the progress and see the URL of the deployed application once it's ready.

After the deployment is complete, any subsequent pushes to branches will trigger a deployment preview, and any pushes to the main branch (or the one you configured) will result in a production deployment.

And that's it! You've successfully built and deployed your first AI-powered application to the cloud.

Next Steps

You can now use useLLM to build a wide variety of AI-powered applications. Check out the docs and live demos at https://usellm.org to explore other features like audio transcription and text-to-speech.

Happy building!
