DEV Community

Pato for Cloudinary

Posted on • Edited on • Originally published at cloudinary.com

Build a Docs‑Aware Chatbot with React, Vite, Node, and OpenAI (plus fun DALL·E avatars)

What you’ll build

  • A chat UI that answers from your own examples/context
  • A Node/Express API that calls OpenAI for text and image generation
  • Two cute, auto‑generated avatars (user & assistant)

Demo question shown here: “How do I use the Cloudinary React SDK?”

Repo (reference): Cloudinary-Chatbot-OpenAI-Demo


Prereqs

  • Node 18+
  • An OpenAI API key stored server‑side (never in the browser). How to create/manage keys: see the official docs. (OpenAI Platform)

1) Create the React app (Vite)

# New project
npm create vite@latest cloudinary-chatbot -- --template react
cd cloudinary-chatbot
npm i

Vite proxy (avoid CORS while developing)

vite.config.js

import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'

export default defineConfig({
  plugins: [react()],
  server: {
    port: 3000,
    proxy: {
      '/api': {
        target: 'http://localhost:6000',
        changeOrigin: true,
        secure: false,
      },
    },
  },
})

Chatbot UI

Create a src/App.jsx file. You can find the full code for this file in the repo.

The Chat functionality

  const sendMessage = async () => {
    if (!inputMessage.trim()) return;
    const newMessages = [...messages, { role: "user", content: inputMessage.trim() }];
    setMessages(newMessages);
    setInputMessage("");
    setStatus("loading");

    try {
      const res = await fetch("/api/chat", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ messages: newMessages }),
      });
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      const data = await res.json();
      setMessages([...newMessages, { role: "assistant", content: data.content }]);
    } catch (e) {
      setMessages([...newMessages, { role: "assistant", content: "Server error. Try again." }]);
    } finally {
      setStatus("idle");
    }
  };

This function sends the user’s message to the backend and updates the chat UI. It first adds the user’s message to the conversation, clears the input, and sets a loading state. Then it POSTs the full message history to /api/chat and appends the assistant’s reply when the server responds. If the request fails, it adds an error message instead. Finally, it resets the status to idle.

Generating avatars

  useEffect(() => {
    const makeAvatars = async () => {
      try {
        const res = await fetch("/api/avatar", { method: "POST" });
        const data = await res.json(); // [{url}, {url}]
        setUserImage(data[0].url);
        setAssistantImage(data[1].url);
      } catch {
        // silently ignore; UI still works without avatars
      }
    };
    makeAvatars();
  }, []);

To make the chatbot feel more personable, we generate two avatars when the app mounts. The backend exposes an /api/avatar endpoint; we call it once and assign one image to the user and one to the bot/assistant. If the request fails, the UI simply renders without avatars.

Add your own CSS or use our existing App.css.


2) Add the backend (Express + OpenAI)

Inside the project root, create a folder named backend and a file server.js. We’ll reuse the root package.json for simplicity. Make sure the server listens on port 6000 so it matches the Vite proxy target configured earlier.

Install deps

npm i express dotenv openai
# optional: nodemon for dev
npm i -D nodemon

Environment variables

Create .env in the project root:

OPENAI_API_KEY=sk-...

Server code (Responses API + Images API)

  • Why Responses API? It’s the recommended, modern way to generate text and stream outputs going forward; if you’re coming from Chat Completions, see the official migration guide. (OpenAI Platform)
  • Why SDK over raw fetch? Cleaner code, types, and built‑ins for images. (OpenAI Platform) You can find the full code for the backend in the repo.

backend/server.js

Seeding the chatbot with examples

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const demoModel = [
  {
    role: "user",
    content: "Where is the Cloudinary React SDK documentation?"
  },
  {
    role: "assistant",
    content: "See: https://cloudinary.com/documentation/react_integration"
  },
  {
    role: "user",
    content: "Where can I read about Cloudinary image transformations in React?"
  },
  {
    role: "assistant",
    content: "See: https://cloudinary.com/documentation/react_image_transformations"
  },
  {
    role: "user",
    content: "How do I display an image using the Cloudinary React SDK?"
  },
  {
    role: "assistant",
    content:
`Use @cloudinary/react and @cloudinary/url-gen. Example:

import { AdvancedImage } from '@cloudinary/react';
import { Cloudinary } from '@cloudinary/url-gen';
import { sepia } from '@cloudinary/url-gen/actions/effect';

const cld = new Cloudinary({ cloud: { cloudName: 'demo' } });
const img = cld.image('front_face').effect(sepia());

<AdvancedImage cldImg={img} />`
  }
];

This code initializes the OpenAI client on the server using an API key from environment variables, so every AI call is authenticated server‑side. It also defines a demoModel array containing a series of example question‑and‑answer pairs. These serve as “conversation seeds” that guide the model toward Cloudinary‑specific knowledge by showing it how the assistant should respond. Each entry mimics a real chat message, with a role (user or assistant) and content. Note that this is few‑shot prompting rather than actual training: the samples are prepended to every request as context, steering the model to provide accurate Cloudinary documentation links and example React code when similar questions are asked.
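That composition happens per request. Here is a minimal stand‑alone sketch (using a trimmed copy of the seed array, not the full one above) of how the system prompt, the seeds, and the live history get merged:

```javascript
// Trimmed stand-ins for the system prompt and seed examples above.
const system = { role: "system", content: "You are a helpful developer docs assistant." };
const demoModel = [
  { role: "user", content: "Where is the Cloudinary React SDK documentation?" },
  { role: "assistant", content: "See: https://cloudinary.com/documentation/react_integration" },
];

// Incoming chat history from the frontend:
const messages = [{ role: "user", content: "How do I use the Cloudinary React SDK?" }];

// Same composition the /api/chat handler performs: system prompt first,
// then the seeds, then the live conversation.
const input = [system, ...demoModel, ...messages]
  .map(m => ({ role: m.role, content: m.content }));

console.log(input.length);  // 4
console.log(input[0].role); // "system"
```

Because the seeds ride along with every request, they count against the model’s context window; for a large docs set, retrieval (see the production notes below) scales better.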

Creating the chat endpoint with OpenAI

app.post("/api/chat", async (req, res) => {
  try {
    const { messages = [] } = req.body;

    // System prompt keeps the bot scoped to your docs
    const system = {
      role: "system",
      content:
        "You are a helpful developer docs assistant. Prefer official Cloudinary docs. Include links when helpful."
    };

    const input = [system, ...demoModel, ...messages]
      .map(m => ({ role: m.role, content: m.content }));

    const r = await openai.responses.create({
      model: "gpt-4o-mini",
      input
    });

    // Convenience: return plain text
    res.json({ content: r.output_text });
  } catch (err) {
    console.error(err);
    res.status(500).json({ error: "Internal Server Error" });
  }
});

This endpoint takes the user’s chat messages, prepends a system prompt, and mixes in example Q&A pairs to guide the model toward Cloudinary-focused answers. It sends this combined conversation to OpenAI using the gpt-4o-mini model, then returns the assistant’s plain-text reply to the client. If an error occurs, it logs the issue and responds with a 500 error.
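For reference, output_text is a convenience field; the same text also lives inside the response’s output array. A sketch with a hand‑built stand‑in object (not a live API call), in case you ever need to dig into the raw structure:

```javascript
// Stand-in shaped like a Responses API result (not a real API call).
const r = {
  output: [
    { type: "message", content: [{ type: "output_text", text: "See the React SDK docs." }] },
  ],
  output_text: "See the React SDK docs.",
};

// Manual extraction: collect every output_text item and join them.
const fromItems = r.output
  .flatMap(o => o.content ?? [])
  .filter(c => c.type === "output_text")
  .map(c => c.text)
  .join("");

console.log(fromItems === r.output_text); // true
```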

Creating the avatars with DALL·E

app.post("/api/avatar", async (_req, res) => {
  try {
    const r = await openai.images.generate({
      // dall-e-2 supports 256x256 output and returns hosted image URLs,
      // which is the shape the frontend expects.
      model: "dall-e-2",
      prompt:
        "minimal, cute round animal avatar on flat background, high contrast, centered, no text",
      n: 2,
      size: "256x256"
    });

    // Return {url} objects for the UI
    res.json(r.data.map(d => ({ url: d.url })));
  } catch (err) {
    console.error(err);
    res.status(500).json({ error: "Internal Server Error" });
  }
});

This endpoint generates simple AI‑created avatars. When called, it asks OpenAI’s image generation API for two minimal, cute, round animal avatars at 256×256 resolution. The API returns image objects containing URLs, which the server maps into a clean { url } format for the frontend. If generation fails, the server logs the error and responds with a 500 status code.
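One caveat if you swap models: the newer gpt-image-1 returns base64 image data (b64_json) rather than hosted URLs, so in that case you’d build data URLs for the frontend instead. A sketch with a fake response object standing in for the API result:

```javascript
// Convert a base64 PNG payload into a data URL the <img> tag can render.
const toDataUrl = (b64) => `data:image/png;base64,${b64}`;

// Fake stand-in for a gpt-image-1 style response: { data: [{ b64_json }] }.
const fake = { data: [{ b64_json: "iVBORw0KGgo=" }] };
const urls = fake.data.map(d => ({ url: toDataUrl(d.b64_json) }));

console.log(urls[0].url); // "data:image/png;base64,iVBORw0KGgo="
```

The frontend’s setUserImage/setAssistantImage calls work unchanged either way, since both paths hand it a { url } object.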


3) Dev scripts

Add these to your root package.json:

{
  "scripts": {
    "dev": "vite",
    "server": "node backend/server.js",
    "server:dev": "nodemon backend/server.js"
  }
}

4) Run it

# terminal 1
npm run server:dev

# terminal 2
npm run dev
# open http://localhost:3000

Type:
How do I use the Cloudinary React SDK?
You should get a helpful, linked answer in the spirit of your seed examples.


Production notes (important)

  • Never expose your API key in client code or public repos. Use env vars and a server. (OpenAI Platform)
  • Prefer Responses API for new builds and streaming; see migration notes if you used Chat Completions before. (OpenAI Platform)
  • If you want retrieval over your actual docs (beyond hardcoded examples), look at the Assistants API with tools or Retrieval. (OpenAI Platform)

Wrap‑up

You now have a lightweight, dev‑friendly chatbot that answers from your docs and greets users with generated avatars. From here you can:

  • Swap the seed examples for real retrieval.
  • Add streaming UI for token‑by‑token responses. (OpenAI Platform)
  • Validate outputs with structured JSON. (OpenAI Platform)
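On that streaming bullet: the endpoint in this post returns the whole reply at once. Token‑by‑token UI needs the server to stream (e.g. stream: true with the Responses API) and the client to read the body incrementally. A hypothetical client‑side reader, assuming the server forwards plain text chunks:

```javascript
// Hypothetical helper: consume a streamed fetch Response and surface
// partial text as it arrives (e.g. to a React state setter).
async function readStream(res, onPartial) {
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let full = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    full += decoder.decode(value, { stream: true });
    onPartial(full); // render the text received so far
  }
  return full;
}
```

In sendMessage, you would replace the res.json() call with await readStream(res, partial => ...) that updates the last assistant message as chunks arrive, and have the server pipe the model’s deltas instead of buffering the whole reply.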

Further reading:

Repo: Cloudinary-Chatbot-OpenAI-Demo

Top comments (1)

JWP

I think I missed how it does this: "A chat UI that streams answers from your own examples/context". Would you please point that out to me? Thank You! I have a project with many .md files and would like to do this.