useLLM is a React library that lets you integrate large language models like OpenAI's ChatGPT and add AI-powered features to your React app with just a few lines of code. It supports message streaming, prompt engineering, audio transcription, text-to-speech, and much more right out of the box, and offers powerful abstractions for building complex AI apps. You can read the docs and try out live demos here: https://usellm.org .
useLLM maintains a clear separation of client-side and server-side code, and helps you follow best practices for avoiding security vulnerabilities in your apps.
Usage
Let's build a simple ChatGPT-powered travel planner app using useLLM to see how it works.
1. Create an Application
While useLLM can be used with any React framework, we'll use Next.js in this example. Run the following command to create a new Next.js app:
npx create-next-app@latest travel-planner
NOTE: You can use npx create-next-app@latest . if you'd like to create the app in the current directory.
Select the default options when prompted:
✔ Would you like to use TypeScript with this project? > [Yes]
✔ Would you like to use ESLint with this project? > [Yes]
✔ Would you like to use Tailwind CSS with this project? > [Yes]
✔ Would you like to use src/ directory with this project? > [No]
✔ Use App Router (recommended)? > [Yes]
✔ Would you like to customize the default import alias? > [No]
Once the app is created, navigate to the project directory and install the useLLM package:
cd travel-planner
npm install usellm@latest
You can now run the development server:
npm run dev
Open http://localhost:3000 with your browser to see the default Next.js app.
You can also open the project for development in VS Code:
code .
2. Create an API Route
To communicate with the OpenAI API, we'll need to create an API route. Add the following code inside a file named app/api/llm/route.ts:
import { createLLMService } from "usellm";

export const runtime = "edge";

const llmService = createLLMService({
  openaiApiKey: process.env.OPENAI_API_KEY,
  actions: ["chat"],
});

export async function POST(request: Request) {
  const body = await request.json();
  // add authentication and rate limiting here
  try {
    const { result } = await llmService.handle({ body, request });
    return new Response(result, { status: 200 });
  } catch (error: any) {
    return new Response(error.message, { status: error?.status || 400 });
  }
}
The above file creates a route handler within the app router and uses the edge runtime to allow streaming of responses. Don't worry if these terms don't make sense to you; you can read more about them later in the Next.js docs.
You'll need to provide your OpenAI secret API key to let the library communicate with the OpenAI API. Create a file .env.local and place your API key inside it:

OPENAI_API_KEY=sk-...
The .env.local file is already listed in your .gitignore file to prevent your API key from being committed to your repository.
3. Use the React Hook
We're now ready to use the useLLM React hook within our application to connect to the API route we just created. Replace the contents of app/page.tsx with the following code:
"use client";
import useLLM from "usellm";
import { useState } from "react";
export default function Home() {
const llm = useLLM();
const [location, setLocation] = useState("");
const [result, setResult] = useState("");
async function handleClick() {
try {
await llm.chat({
messages: [
{
role: "system",
content: `You're a travel planner bot. Given a destination, generate an itinerary for a one week trip. Keep it short and sweet.`,
},
{ role: "user", content: `Destination: ${location}` },
],
stream: true,
onStream: ({ message }) => setResult(message.content),
});
} catch (error) {
console.error("Something went wrong!", error);
}
}
return (
<div className="min-h-screen mx-auto my-8 max-w-4xl">
<h1 className="text-center mb-4 text-2xl">Travel Planner</h1>
<div className="flex">
<input
value={location}
onChange={(e) => setLocation(e.target.value)}
placeholder="Enter a destination"
className="rounded border p-2 mr-2 text-black"
/>
<button
className="rounded border border-black dark:border-white p-2"
onClick={handleClick}
>
Submit
</button>
</div>
<div className="mt-4 whitespace-pre-wrap">{result}</div>
</div>
);
}
Here's what the above code does:
- A page is created with an input box, where the user can enter a location, and a "Submit" button.
- The useLLM React hook is used to create an llm object for communicating with the API route.
- When the user clicks "Submit", the handleClick function is called, which invokes llm.chat.
- llm.chat sends a message to the API route, which in turn sends it to the OpenAI API.
- The response is streamed back word by word and passed to the onStream callback.
- The onStream callback updates the result state variable, which in turn updates the UI.
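Streaming keeps the UI responsive while the model generates text. If you'd rather wait for the complete reply, llm.chat can also be called without the stream option; here's a minimal sketch of that variant, based on the call shape used above (treat the exact return value as an assumption to verify against the useLLM docs):

// Non-streaming variant: resolves once the model has produced the full reply
const { message } = await llm.chat({
  messages: [{ role: "user", content: "Destination: Paris" }],
});
setResult(message.content); // update the UI in one shot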
Visit http://localhost:3000 in your browser to try it yourself.
4. Deploy to the Cloud
Let's push this project to GitHub and deploy it to Vercel. Before deploying to production, you might want to add authentication (check out Clerk) or rate limiting (check out Upstash) to your API route to prevent abuse, as sketched below.
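For instance, here's a hedged sketch of what rate limiting could look like in app/api/llm/route.ts using Upstash's @upstash/ratelimit package. This is not part of useLLM; it assumes you've created an Upstash Redis database and set its UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN environment variables:

import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";

// Allow at most 10 requests per minute per client IP (sliding window)
const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(10, "60 s"),
});

export async function POST(request: Request) {
  // On Vercel, the x-forwarded-for header carries the client IP
  const ip = request.headers.get("x-forwarded-for") ?? "anonymous";
  const { success } = await ratelimit.limit(ip);
  if (!success) {
    return new Response("Too many requests", { status: 429 });
  }
  // ...then hand the request to llmService.handle as before
}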
Follow these steps to push your project to GitHub:

1. Stage and commit your changes to the git repository (already initialized by create-next-app):

git add .
git commit -m "Initial commit"

2. Go to https://github.com and create a new repository. You don't need to initialize the repository with a README, .gitignore, or license; it should be completely empty.

3. Copy the URL of your new GitHub repository. It should look something like https://github.com/username/reponame.git .

4. Link the local repository to your GitHub repository:

git remote add origin YOUR_GITHUB_REPO_URL

Replace YOUR_GITHUB_REPO_URL with the URL you copied in step 3.

5. Finally, push your local repository to GitHub:

git push -u origin main
Next, deploy the project to Vercel, adding your OpenAI API key as an environment variable during the deployment process:
1. Visit Vercel and sign in or sign up using your GitHub account.

2. Once you're signed in, click the "Import Project" button, then click "Continue with GitHub" and authorize Vercel to access the repository.

3. Once the repository is selected, you'll be taken to the "Import Git Repository" page. On this page, before clicking "Deploy", click on the "Environment Variables" section to expand it.

4. In the "Name" field, enter OPENAI_API_KEY, and in the "Value" field, enter your actual API key. Set the "Environment" to "Production", then click "Add".

5. After adding the environment variable, you can leave the rest of the settings as they are for a basic Next.js application and click "Deploy".

6. Vercel will create the project and start the deployment. You can watch the progress and see the URL of the deployed application once it's ready.
After the deployment is complete, any subsequent pushes to branches will trigger a deployment preview, and any pushes to the main branch (or the one you configured) will result in a production deployment.
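If you prefer the terminal, the Vercel CLI offers an alternative path. Here's a sketch assuming you've installed the CLI and linked it to the project (the env add command prompts you for the key's value):

npm install -g vercel
vercel env add OPENAI_API_KEY production
vercel --prod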
And that's it! You've successfully built and deployed your first AI-powered application to the cloud.
Next Steps
You can now use useLLM to build a wide variety of AI-powered applications. Here's where you can go from here:

- Read the docs for useChat and registerAgent to build more powerful & flexible chatbots
- Check out the live demos on the project homepage and browse their source code
- Read the API reference to learn more about the available methods and options
- Follow our step-by-step tutorials to build your own AI-powered apps with useLLM
- Join our Slack workspace to get help from the community, and share your projects
- Star our GitHub repository, subscribe to our blog, and follow us on Twitter for updates
Happy building!