[Tutorial] Interacting with OpenAI in Node.js and Express

✨ This is the first article and tutorial sent out through the https://frontendfresh.com newsletter.

Today's article is the first in a 3 to 4-part tutorial series where we'll work towards building our very own custom Q&A chatbot. We'll achieve this by:

  • Building a Node/Express server to interact with OpenAI's APIs (today's email).
  • Using React to build the UI of our Q&A chatbot.
  • Finally, investigating how to fine-tune our app so that our Q&A chatbot returns custom information.

Our final app will look something like the following:

(GIF: a preview of the final Q&A chatbot in action)


Today, we'll be focusing solely on creating a Node.js server where we can interact directly with OpenAI's APIs. This is a precursor to setting up our front-end app, which will then interact with the local API we create.

If you want to follow along, you'll need Node and NPM installed on your machine and an OpenAI API key (we'll show you how to get this in the next section).

Generating an OpenAI API key

Follow these steps to generate an API key with OpenAI:

  • Head to OpenAI's website and sign up for an account (or log in if you already have one).
  • From your account menu, navigate to the API keys page.
  • Select the option to create a new secret key.

When an API key is created, you'll be able to copy the secret key and use it when you begin development.


Note: OpenAI currently provides $18 in free credit that can be used during your first 3 months, which is great since you won't have to provide payment details to begin interacting with the API.

Setting up a Node/Express app

We'll now create a new directory for our Node project and call it custom_chat_gpt.

mkdir custom_chat_gpt

We'll navigate into the new directory and run the following command to create a package.json file.

npm init -y

Once the package.json file has been created, we'll install the three dependencies we need for now.

npm install dotenv express openai

  • dotenv: allows us to load environment variables from a .env file when working locally.
  • express: the Node.js framework we'll use to spin up a Node server.
  • openai: the Node.js library for the OpenAI API.

We'll then create a file named index.js. The index.js file is where we'll build our Node.js/Express server.

In the index.js file, we'll:

  • Include the express module with require("express").
  • We'll then run the express() function to create the Express instance and assign it to a constant labeled app.
  • We'll specify a middleware in our Express instance (with app.use()) to parse incoming JSON requests and make the parsed data available on req.body.
  • We'll specify a port variable whose value comes from the PORT environment variable, falling back to 5000 if PORT is undefined.

const express = require("express");

const app = express();
app.use(express.json());

const port = process.env.PORT || 5000;

We'll then set up a POST route labeled /ask that will act as the endpoint our client will trigger. In this route, we'll expect a prompt value to exist in the request body and throw an error if it doesn't. If the prompt value does exist, we'll simply return a 200 response that contains the prompt in a message field. Any error we catch will be logged and returned as an error response so the client isn't left waiting.

Lastly, we'll run the app.listen() function to have our app listen on the port value we've specified in the port variable.

const express = require("express");

const app = express();
app.use(express.json());

const port = process.env.PORT || 5000;

// POST request endpoint
app.post("/ask", async (req, res) => {
  // getting prompt question from request
  const prompt = req.body.prompt;

  try {
    if (prompt == null) {
      throw new Error("Uh oh, no prompt was provided");
    }

    // return the result
    return res.status(200).json({
      success: true,
      message: prompt,
    });
  } catch (error) {
    console.log(error.message);
    // return an error response so the client isn't left waiting
    return res.status(500).json({
      success: false,
      error: error.message,
    });
  }
});

app.listen(port, () => console.log(`Server is running on port ${port}!!`));

With these changes saved, it's a good time to test that our server works. We'll run it with:

node index.js

With our server running, we can trigger the /ask POST request with a curl command to verify the server is set up appropriately.

curl -X POST \
  http://localhost:5000/ask \
  -H 'Content-Type: application/json' \
  -d '{ "prompt": "Hi! This is a test prompt!" }'

We'll be provided with a successful response and our prompt returned to us.
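
Based on the route handler above, the JSON we receive should look something like this:

{
  "success": true,
  "message": "Hi! This is a test prompt!"
}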


With our server now working as intended, we can move towards having our /ask endpoint interact with OpenAI's /completions endpoint.

Interacting with OpenAI's completions endpoint

OpenAI's API includes a /completions endpoint that generates completion suggestions for text input.

When we send a request to the /completions endpoint and we include a prompt or seed text, the API will generate a continuation of that text based on its training data.

With this /completions endpoint, we can build our own custom version of ChatGPT (with the caveat that ChatGPT most likely uses a more powerful machine learning model than the ones available through the OpenAI API).

To interact with the OpenAI API, we will need the unique API key that we created earlier through the OpenAI website. Sensitive information, such as API keys, should not be hard-coded directly into application source code.

We'll create a .env file in the root directory of our project to store environment variables that contain sensitive information.

custom_chat_gpt
  .env
  // ...

In the .env file, we'll create a new environment variable labeled OPENAI_API_KEY and give it the value of our OpenAI API key. (If the project is under version control, be sure to add .env to your .gitignore so the key is never committed.)

# your unique API key value goes here
OPENAI_API_KEY=sk-############

In our index.js file, we'll require and use the dotenv module to load environment variables from the .env file into the process environment of our application. We'll also import the classes we'll need from the openai Node.js library — the Configuration and OpenAIApi classes.

// configure dotenv
require("dotenv").config();

// import modules from OpenAI library
const { Configuration, OpenAIApi } = require("openai");

// ...

Next, we'll need to create a configuration object for interacting with the OpenAI API. We'll do this by instantiating the Configuration() constructor and passing in the value of the OPENAI_API_KEY environment variable to the apiKey field.

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});

We'll then set up a new instance of the OpenAI API class like the following:

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

We can now use the openai variable we've created to make API calls and process responses from OpenAI.

In our /ask POST request function, we'll run the openai.createCompletion() function that essentially triggers a call to OpenAI's completions endpoint.

app.post("/ask", async (req, res) => {
  const prompt = req.body.prompt;

  try {
    if (prompt == null) {
      throw new Error("Uh oh, no prompt was provided");
    }

    // trigger OpenAI completion
    const response = await openai.createCompletion();

    // ...
  } catch (error) {
    console.log(error.message);
    // return an error response so the client isn't left waiting
    return res.status(500).json({
      success: false,
      error: error.message,
    });
  }
});

The OpenAI completions endpoint allows us to pass a large number of optional fields in the request to modify how we want our text completion to behave. For our use case, we'll only provide values for two fields: model and prompt.

  • model: specifies the name of the language model the API should use to generate the response. OpenAI provides several different language models, each with its own strengths and capabilities. For our use case, we'll use the text-davinci-003 model, which is OpenAI's most capable GPT-3 model.
  • prompt: the prompt we want OpenAI to generate a completion for. Here we'll pass along the prompt value that exists in the body of our /ask request.

app.post("/ask", async (req, res) => {
  const prompt = req.body.prompt;

  try {
    if (prompt == null) {
      throw new Error("Uh oh, no prompt was provided");
    }

    // trigger OpenAI completion
    const response = await openai.createCompletion({
      model: "text-davinci-003",
      prompt,
    });

    // ...
  } catch (error) {
    console.log(error.message);
    // return an error response so the client isn't left waiting
    return res.status(500).json({
      success: false,
      error: error.message,
    });
  }
});

The text returned in the OpenAI response exists within a choices array, which itself lives within a response.data object. We'll access the text of the first choice returned by the API, which will look like the following:

app.post("/ask", async (req, res) => {
  const prompt = req.body.prompt;

  try {
    if (prompt == null) {
      throw new Error("Uh oh, no prompt was provided");
    }

    const response = await openai.createCompletion({
      model: "text-davinci-003",
      prompt,
    });
    // retrieve the completion text from response
    const completion = response.data.choices[0].text;

    // ...
  } catch (error) {
    console.log(error.message);
    // return an error response so the client isn't left waiting
    return res.status(500).json({
      success: false,
      error: error.message,
    });
  }
});
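
For reference, the resolved response.data value contains more than just the completion text. It looks roughly like the following (abbreviated, and with placeholder values):

// rough shape of response.data (abbreviated, placeholder values)
{
  id: "cmpl-...",
  object: "text_completion",
  model: "text-davinci-003",
  choices: [{ text: "...", index: 0, logprobs: null, finish_reason: "stop" }],
  usage: { prompt_tokens: 7, completion_tokens: 16, total_tokens: 23 }
}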

The last thing we'll do is return this completion text in the successful response of our /ask request. With this change, and all the other changes we've made, our index.js file will look like the following.

require("dotenv").config();
const express = require("express");
const { Configuration, OpenAIApi } = require("openai");

const app = express();
app.use(express.json());

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

const port = process.env.PORT || 5000;

app.post("/ask", async (req, res) => {
  const prompt = req.body.prompt;

  try {
    if (prompt == null) {
      throw new Error("Uh oh, no prompt was provided");
    }

    const response = await openai.createCompletion({
      model: "text-davinci-003",
      prompt,
    });
    const completion = response.data.choices[0].text;

    return res.status(200).json({
      success: true,
      message: completion,
    });
  } catch (error) {
    console.log(error.message);
    // return an error response so the client isn't left waiting
    return res.status(500).json({
      success: false,
      error: error.message,
    });
  }
});

app.listen(port, () => console.log(`Server is running on port ${port}!!`));

We'll save our changes and restart our server. With the server running, we can ask our API questions like "What is the typical weather in Dubai?".

curl -X POST \
  http://localhost:5000/ask \
  -H 'Content-Type: application/json' \
  -d '{ "prompt": "What is the typical weather in Dubai?" }'

After a few seconds, our API will return a valid answer to our question!
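
Note: by default, the completions endpoint only generates a small number of tokens per request (the max_tokens field defaults to 16), so longer answers may come back cut off. If you run into this, you can pass a larger max_tokens value alongside model and prompt. A minimal sketch (the value of 256 here is just an arbitrary choice):

const response = await openai.createCompletion({
  model: "text-davinci-003",
  prompt,
  // max_tokens defaults to 16; raise it to allow longer completions
  max_tokens: 256,
});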


That's it! We've managed to build a simple API with Node/Express to interact with OpenAI's completions endpoint.

Next week, we'll continue this tutorial by building a React app that triggers the /ask request when an input field is submitted.

Closing thoughts

🙂

— Hassan (@djirdehh)
