The OpenAI team has made it easier than ever to interact with the GPT-3 AI. You can easily create your own project using the OpenAI API. This tutorial will explore how to interact with OpenAI's GPT-3 API using Next.js.
OpenAI also gives you $14.58 worth of free credits to use.
Play with the AI first
Before starting, I recommend you play with the AI here so that you have an idea of how it works.
Let's Start
We'll create a simple Advice Generator App for this guide.
The live GPT-3 example project.
Set up Next.js and install OpenAI
npx create-next-app@latest
npm i openai
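Note: the snippets in this guide use the Configuration and OpenAIApi classes, which come from the 3.x line of the openai package. If a newer major version of the SDK has changed its exports, you may need to pin the older release; the import assumed throughout this post looks like this:

// Assumes the 3.x openai SDK, which exports Configuration and OpenAIApi
const { Configuration, OpenAIApi } = require("openai");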
Get Your OpenAI API Key
Include your OpenAI API key in your .env.local file.
.env.local
OPENAI_API_KEY=your-openai-api-key
The following code fetches a response from OpenAI:
Please note: the OpenAI Node.js library cannot be used on the client; it must be used server-side.
const { Configuration, OpenAIApi } = require("openai");

// Configure the client with the API key from .env.local
const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

// Ask the model to complete the prompt
const completion = await openai.createCompletion({
  model: "text-davinci-002",
  prompt: `Replace this string with your prompt`,
  max_tokens: 200,
});

console.log(completion.data.choices[0].text);
We pass an options object to the createCompletion() function. Here are some things to consider (a short example follows the list):
- model: Choose between text-davinci-002, text-curie-001, text-babbage-001, or text-ada-001, listed from most to least capable. The more capable the model, the better the responses it gives, but usage also becomes slower and more expensive.
- prompt: The question or text that you want the AI to complete.
- max_tokens: The length limit of the response (the more tokens, the more expensive the request).
If you want to go more in depth, please check out this link.
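As a quick illustration of these trade-offs, here is the same call using a cheaper model and the optional temperature parameter; the specific values are assumptions for demonstration only:

// Illustrative variant of the earlier call (values are assumptions)
const completion = await openai.createCompletion({
  model: "text-curie-001", // less capable than text-davinci-002, but faster and cheaper
  prompt: `Replace this string with your prompt`,
  max_tokens: 100,         // a shorter limit keeps the response cheaper
  temperature: 0.7,        // optional: higher values produce more varied output
});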
Let's set up an API endpoint
/pages/api/advice.js
const { Configuration, OpenAIApi } = require("openai");

// Setup OpenAI
const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

const handler = async (req, res) => {
  switch (req.method) {
    case 'GET':
      await getAdvice(req, res);
      break;
    default:
      res.setHeader('Allow', ['GET']);
      res.status(405).end(`Method ${req.method} Not Allowed`);
  }
};

const getAdvice = async (req, res) => {
  try {
    const completion = await openai.createCompletion({
      model: "text-davinci-002",
      prompt: `Give me some advice on ${req.query.prompt}`,
      max_tokens: 200,
    });
    res.status(200).json({ text: completion.data.choices[0].text });
  } catch (error) {
    if (error.response) {
      res.status(error.response.status).send(error.response.data);
    } else {
      res.status(500).send(error.message);
    }
  }
};

export default handler;
Make a GET request from anywhere in your project
const res = await fetch(`/api/advice?prompt=${input}`);
const data = await res.json();
console.log(data.text);
Make a GET request to /api/advice?prompt=your-prompt.
Set up your frontend any way you like.
You may also use my example on Github.
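If you'd like a starting point before looking at the repo, here is a minimal sketch of a page component that calls the endpoint above. The component structure, state names, and markup are assumptions for illustration, not taken from the example project:

import { useState } from "react";

export default function Home() {
  // Hypothetical state names for this sketch
  const [input, setInput] = useState("");
  const [advice, setAdvice] = useState("");
  const [loading, setLoading] = useState(false);

  const getAdvice = async () => {
    setLoading(true);
    try {
      // Call the API route defined in /pages/api/advice.js
      const res = await fetch(`/api/advice?prompt=${encodeURIComponent(input)}`);
      const data = await res.json();
      setAdvice(data.text);
    } catch (err) {
      setAdvice("Something went wrong. Please try again.");
    } finally {
      setLoading(false);
    }
  };

  return (
    <main>
      <input
        value={input}
        onChange={(e) => setInput(e.target.value)}
        placeholder="What do you need advice on?"
      />
      <button onClick={getAdvice} disabled={loading}>
        {loading ? "Thinking..." : "Get advice"}
      </button>
      <p>{advice}</p>
    </main>
  );
}

encodeURIComponent keeps prompts containing spaces or punctuation from breaking the query string.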
Thanks for reading!
I'm currently looking for help on the project Emoji Story.
Please reach out to wmatthew123@gmail.com if you're interested. Thank you!