DEV Community

Cover image for How to Create an AI Chatbot with Google Gemini Using Node.js
Ciphernutz
Ciphernutz

Posted on

How to Create an AI Chatbot with Google Gemini Using Node.js

AI chatbots aren’t impressive anymore.
Useful ones are.

If you’re building a chatbot today, your goal shouldn’t be “connect an LLM and return text, it should be:

“How do I build a chatbot that understands users, remembers context, and scales cleanly?”

In this guide, you’ll build a production-ready AI chatbot using Google Gemini and Node.js, while learning why each step matters.

Why Google Gemini?

Gemini is well-suited for real-world chatbot use because it supports:

  • Long context windows
  • Fast responses (Gemini Flash)
  • Strong reasoning
  • Multimodal inputs (text, image, tools)

Perfect for:

  • SaaS copilots
  • Support bots
  • Internal AI assistants

Architecture

Client → Node.js API → Gemini → Response

Key principles:

  • Keep prompts clean
  • Inject context intentionally
  • Store conversation history
  • Separate system instructions from user input

Step 1: Project Setup

Install dependencies:

npm init -y
npm install express dotenv @google/generative-ai

Enter fullscreen mode Exit fullscreen mode

Create .env:

GEMINI_API_KEY=your_api_key_here

Enter fullscreen mode Exit fullscreen mode

Step 2: Initialize Gemini in Node.js

import express from "express";
import dotenv from "dotenv";
import { GoogleGenerativeAI } from "@google/generative-ai";

dotenv.config();

const app = express();
app.use(express.json());

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);

Enter fullscreen mode Exit fullscreen mode

Step 3: Define System Instructions (Critical Step)
Most chatbots fail because they skip this.

const model = genAI.getGenerativeModel({
  model: "gemini-1.5-flash",
  systemInstruction: `
You are a helpful AI assistant.
Respond clearly, accurately, and concisely.
Ask follow-up questions when needed.
`
});

Enter fullscreen mode Exit fullscreen mode

System instructions = personality + boundaries + clarity

Step 4: Add Conversation Memory

Without memory, your chatbot resets every message.

let chatHistory = [];

app.post("/chat", async (req, res) => {
  const userMessage = req.body.message;

  chatHistory.push({
    role: "user",
    parts: [{ text: userMessage }]
  });

  const chat = model.startChat({
    history: chatHistory,
  });

  const result = await chat.sendMessage(userMessage);
  const reply = result.response.text();

  chatHistory.push({
    role: "model",
    parts: [{ text: reply }]
  });

  res.json({ reply });
});

Enter fullscreen mode Exit fullscreen mode

Now your chatbot:

  • Remembers context
  • Answers consistently
  • Feels conversational

Step 5: Inject Dynamic Context (What Pros Do)

You can dramatically improve output by injecting runtime context.

const dynamicContext = `
User role: SaaS Founder
Product stage: MVP
Goal: Reduce support tickets
`;

const chat = model.startChat({
  history: [
    { role: "user", parts: [{ text: dynamicContext }] },
    ...chatHistory
  ],
});

Enter fullscreen mode Exit fullscreen mode

This makes responses specific, not generic.

Common Mistakes to Avoid

❌ Overloading prompts
❌ No memory handling
❌ Mixing system + user input
❌ Treating the chatbot as stateless

When to Use Gemini Flash vs Pro

Final Thoughts

A good AI chatbot isn’t about clever prompts.

It’s about:

  • Context
  • Memory
  • Intent
  • Clean architecture

Gemini + Node.js gives you all the building blocks to build scalable, intelligent chatbots — from real-time conversations to production-grade AI assistants.

Top comments (0)