Kacey Cleveland
Building ChatGPT with React

Building a ChatGPT clone with OpenAI's API is a great way to familiarize yourself with both OpenAI and React. This post goes over a high-level example of implementing a ChatGPT clone, along with the implementation I used in my side project. My codebase is very much a work in progress, but you can follow along with my progress in my side project below!

https://github.com/kaceycleveland/help-me-out-here

Note: This example calls the OpenAI API from the client. It should not be used as-is unless the requests are proxied through a backend service that hides your API keys, or you are truly building a client-side-only application.
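If you do want to proxy through a backend, it can be as thin as a single endpoint that keeps the key server-side. Here is a rough sketch using Express; the /api/chat route name and error handling are my own assumptions, not part of the project:

import express from "express";
import { Configuration, OpenAIApi } from "openai";

const app = express();
app.use(express.json());

// The API key stays in server-side env vars and never reaches the browser
const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_KEY })
);

// The client posts its chat completion request body here instead of to OpenAI
app.post("/api/chat", async (req, res) => {
  try {
    const completion = await openai.createChatCompletion(req.body);
    res.json(completion.data);
  } catch (err) {
    res.status(500).json({ error: "OpenAI request failed" });
  }
});

app.listen(3001);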


Dependencies

To send messages to and receive responses from OpenAI, we can use OpenAI's official npm package:

https://www.npmjs.com/package/openai

In addition to this, we will be using TypeScript and TanStack Query. TanStack Query will serve as a wrapper to help send and process data to be consumed by our React application. You can read more about TanStack Query here.
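One piece of setup worth calling out: TanStack Query's hooks need a QueryClient provided via React context at the root of the app. A minimal root, assuming a typical Vite main.tsx entry point:

import ReactDOM from "react-dom/client";
import { QueryClient, QueryClientProvider } from "@tanstack/react-query";
import App from "./App";

// Provide a QueryClient so hooks like useMutation work anywhere in the tree
const queryClient = new QueryClient();

ReactDOM.createRoot(document.getElementById("root")!).render(
  <QueryClientProvider client={queryClient}>
    <App />
  </QueryClientProvider>
);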

1. Instantiate the OpenAI Client

We first need a way to send OpenAI chat completion requests and get the responses back using the OpenAI npm package:

import { Configuration, OpenAIApi } from "openai";

// Create a single shared client instance for the app.
// Note: by default Vite only exposes env vars prefixed with VITE_, so these
// names assume a custom envPrefix; anything bundled here ships to the browser.
const createOpenAiClient = () => {
  const config = new Configuration({
    organization: import.meta.env.OPENAI_ORG,
    apiKey: import.meta.env.OPENAI_KEY,
  });

  return new OpenAIApi(config);
};

export const openAiClient = createOpenAiClient();


Now we can use the openAiClient to create chat completion requests as described here.
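For example, a one-off request looks roughly like this (in openai v3 the client returns an Axios response, so the payload is on response.data):

const response = await openAiClient.createChatCompletion({
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: "Hello!" }],
});

// The completion text lives on the first choice
console.log(response.data.choices[0].message?.content);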

2. Create a Chat Mutation Hook

We can now create a React hook wrapped around the OpenAI client to make calls to the OpenAI API.

import { useMutation, UseMutationOptions } from "@tanstack/react-query";
import { openAiClient } from "../openai";
import { CreateChatCompletionResponse, CreateChatCompletionRequest } from "openai";
import { AxiosResponse } from "axios";

export const useChatMutation = (
  options?: UseMutationOptions<
    AxiosResponse<CreateChatCompletionResponse>,
    unknown,
    CreateChatCompletionRequest
  >
) => {
  return useMutation<
    AxiosResponse<CreateChatCompletionResponse>,
    unknown,
    CreateChatCompletionRequest
  >({
    // Forward the request to OpenAI's chat completion endpoint
    mutationFn: (request) => {
      return openAiClient.createChatCompletion(request);
    },
    ...options,
  });
};

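Because this is a standard TanStack Query mutation, components also get status flags for free. A quick illustrative component (not part of the project) that disables its button while a request is in flight:

import { useChatMutation } from "./useChatMutation";

// Illustrative only: shows the status flags the mutation exposes
function SendButton() {
  const { mutateAsync: submitChat, isLoading, isError } = useChatMutation();

  const sendHello = () =>
    submitChat({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: "Hello!" }],
    });

  return (
    <button onClick={sendHello} disabled={isLoading}>
      {isLoading ? "Thinking..." : isError ? "Retry" : "Send"}
    </button>
  );
}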

3. Consume the useChatMutation hook

import { ChatCompletionRequestMessage } from "openai";
import { useState, useCallback, useRef } from "react";
import { useChatMutation } from "./useChatMutation";

function App() {
  // Store the received messages and use them to continue the conversation with the OpenAI client
  const [messages, setMessages] = useState<ChatCompletionRequestMessage[]>([]);
  const inputRef = useRef<HTMLTextAreaElement>(null);

  /**
   * Use the chat mutation hook to submit the request to OpenAI
   * This is a basic example, but using tanstack query lets you easily
   * render loading, error, and success states.
   *  */

  const { mutateAsync: submitChat } = useChatMutation({
    onSuccess: (response) => {
      const foundMessage = response.data.choices.length
        ? response.data.choices[0].message
        : undefined;
      if (foundMessage) {
        // Use a functional update so we append to the latest messages
        // rather than to a stale closure value
        setMessages((prevMessages) => [...prevMessages, foundMessage]);
      }
    },
  });

  const handleSubmit = useCallback(() => {
    if (inputRef.current?.value) {
      const messageBody: ChatCompletionRequestMessage[] = [
        ...messages,
        { role: "user", content: inputRef.current?.value },
      ];
      setMessages(messageBody);
      // For simplicity, the settings sent to OpenAI are hard coded here.
      submitChat({
        model: "gpt-3.5-turbo",
        max_tokens: 100,
        presence_penalty: 1,
        frequency_penalty: 1,
        messages: messageBody,
      });
    }
  }, [messages, submitChat]);

  return (
    <div className="App">
      <div>
        {messages.map((message, index) => {
          return (
            // An index key is fine here since messages are only appended
            <div key={index}>
              <div>{message.role}</div>
              <div>{message.content}</div>
            </div>
          );
        })}
      </div>
      <div className="card">
        <textarea ref={inputRef}></textarea>
        <button onClick={handleSubmit}>Submit</button>
      </div>
    </div>
  );
}

export default App;


Fin

This example can be expanded upon in various ways, such as:

  • Customizing the interface/settings being sent with the messages
  • Customizing old and future messages to "prime" the AI for future responses (see the sketch after this list)
  • Better UI rendering for different states
  • Better UI rendering for returned data (think rendering code blocks or markdown from the returned OpenAI data!)
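
For instance, "priming" can be as simple as prepending a system message to every request before it is sent. A minimal sketch, where the prompt text and helper name are illustrative rather than from the project:

import { ChatCompletionRequestMessage } from "openai";

// Hypothetical system prompt; prepending it "primes" every completion request
const SYSTEM_PROMPT: ChatCompletionRequestMessage = {
  role: "system",
  content: "You are a helpful assistant that answers concisely.",
};

// Wrap the user-visible history with the system message before submitting
const buildRequestMessages = (
  history: ChatCompletionRequestMessage[]
): ChatCompletionRequestMessage[] => [SYSTEM_PROMPT, ...history];

For rendering returned markdown, a library like react-markdown can render message.content directly.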

Most of the above is what I am working on in my project:
https://github.com/kaceycleveland/help-me-out-here

If you want the full repo of this basic example, check it out here:
https://github.com/kaceycleveland/openai-example


πŸ‘‹ Kindness is contagious

Please leave a ❀️ or a friendly comment on this post if you found it helpful!

Okay