Hi there,
As promised at the start of this year, I'm fulfilling my commitment.
In this blog, I will show you how to create a chatbot that stands out from the rest. This post is mostly for learning, so keep an eye on every section. Throughout, I'll be referencing the chatbot I've built. By the way, my chatbot is called "Radhika", so I'll refer to it by name.
Try it out, give feedback and suggestions, request changes, and so on.
It's open-source as well, so leave a GitHub star if you loved it.
Note:
Switch to Groq in case Gemini encounters an error. I'm currently on the free plan, so the tokens might get exhausted.
Let's start building our Chatbot!
Before doing anything else, decide on your tech stack.
There are many ways to build a chatbot, and plenty of stacks you can choose from. Pick the one you are most comfortable with.
For Radhika, I used TypeScript, which is a typed version of JavaScript. I built it using Next.js so both the frontend and backend live in the same project. This also makes deployment simpler since everything is deployed together. Simple and efficient.
(To be honest, vibe coding Next.js apps is easier, and the AI tools do a great job.)
Design it.
If you're a designer, then design the bot in applications like Canva or Figma.
If you want to see Radhika's design, please check this radhika_figma_design.
If you're not a designer, describe the design in your prompt with as much detail as you have in mind. Don't think about the features; think about the design.
Remember, our first target should either be the backend or the frontend. You can start with any of them, but I usually suggest creating the frontend first and then working on the backend.
For a start, I'm adding a sample prompt for this, but before that I need to answer an important question: WHERE DO I ADD THIS PROMPT?
There are several platforms where you can drop a prompt to generate the frontend. But since we are going to create a full-stack chatbot, I suggest using the claude-opus-4.5 model in your IDE.
If you're a student, try to avail yourself of the GitHub Student Developer Pack and then use your IDE + GitHub Copilot. (You'll get a huge amount of free credits for most models, including claude-opus-4.5, to create anything good.)
Here's your GO TO PROMPT to build your first Chatbot:
### Agentic UI Replication Prompt (Design + Three.js + Maintainable Code)
You are a senior product designer and frontend engineer with strong experience in **scalable UI systems**, **Three.js**, and **long-term maintainable codebases**.
Your task is to replicate an **AI assistant chatbot** with a **three-column layout**, glassmorphism, and subtle real-time visualizations, while keeping the implementation **clean, readable, and modular**.
## Core Principles
* Prioritize **readability over cleverness**
* Small, focused files
* Clear separation of concerns
* No duplicated logic
* All visuals, logic, and data flows should be easy to reason about
## Project Structure Guidelines
* Organize the codebase by **feature, not by file type**
* Each major UI section should live in its own folder
* Each folder should contain:
* One main component
* One styles file
* Optional subcomponents
* A clear entry point
* Three.js logic must be isolated from UI layout code
* Avoid large monolithic components
## Overall Visual Style
* Dark navy to near-black gradient background
* Glassmorphic cards with subtle blur and soft inner glow
* Rounded corners throughout
* Accent color: cyan / electric blue
* Premium, calm, futuristic AI aesthetic
* No harsh borders, no visual noise
## Left Sidebar (Assistant Identity + Modes)
* Fixed vertical glass panel
* Sections split into small components:
* Assistant identity header
* Mode selector list
* Unlock CTA card
* Quick actions list
* Active mode visually highlighted with soft cyan glow
* All icons and labels driven from a single config file
## Center Panel (Primary Interaction Area)
### Header
* Mode title
* Model badge
* Minimal action icons
* Primary CTA button
* Header logic isolated from content logic
### Core Visualization (Three.js)
* Use a dedicated Three.js scene module
* Render an abstract **circular particle system**
* Particles form a softly rotating sphere or neural cluster
* Color palette limited to cyan, blue, and soft white
* Animation:
* Slow rotation
* Subtle breathing or noise-based movement
* Interaction:
* Light cursor-based parallax
* The canvas must be:
* Self-contained
* Easily removable or replaceable
* Not tightly coupled to UI state
### Main Content
* Headline and subtext in a simple content component
* Quick action pills rendered from a data array
* No hardcoded UI strings inside logic
### Input Section
* Input bar broken into:
* Text input component
* Left action icons
* Right action icons
* Icons reusable across the app
* Keyboard behavior handled in a dedicated hook or utility
## Right Sidebar (Analytics & Activity)
### Activity Matrix (Three.js)
* Separate Three.js scene from layout code
* Visualize activity using:
* Nodes and connecting lines
* Temporal motion paths
* Minimal neon wireframe aesthetic
* Motion should feel ambient, not informational
* Scene must support pausing when off-screen
### Stats & Mode Usage
* Each stat card as its own component
* Mode usage bars driven by data, not hardcoded values
* Visual styles shared via common tokens
### AI Status Section
* Reusable status row component
* Indicator colors derived from state mapping
* No inline conditional styling
## Three.js & Performance Constraints
* Keep particle counts low and configurable
* Centralized animation loop management
* Clean disposal of geometries and materials
* requestAnimationFrame usage should be controlled and predictable
* WebGL failure should gracefully fall back to static UI
## Maintainability Constraints
* No inline styles for complex layouts
* No deeply nested components without justification
* Use clear naming conventions
* Comments should explain intent, not obvious behavior
* Any complex logic must be documented at the top of the file
## Final Goal
The result should feel like a **production-ready AI dashboard** that is:
* Visually calm
* Technically elegant
* Easy to extend
* Easy for another engineer to understand within minutes
Make a few iterations until you reach a satisfactory level. (Satisfaction doesn't mean you need the best of the best; it should just be something you think is good enough.)
Start implementing the Core Features
And since we are creating it through vibe coding, we'll go through only a little bit of code. Want no code? Skip this section.
In your chatbot, you can add multi-provider support or you can choose only a single provider. In my case, I love to have multi-provider support because this allows the user to switch to any model they want.
The following diagram represents the implementation I've done in Radhika:

But even more important than this is parsing the request sent by the user.
The request handler begins by parsing the JSON body from the incoming POST request:
```ts
const body = await req.json();
const { messages, mode = "general", provider = "groq", apiKey } = body;
```
- `messages`: The conversation history sent by the client.
- `mode`: Determines which system prompt to use (e.g., `bff`, `learning`, etc.). Whether to add multiple modes is totally up to you.
- `provider`: Specifies the AI backend to use (`groq`, `openai`, `claude`, `gemini`, or whatever provider you're using).
- `apiKey`: Required for OpenAI and Claude if a user key is needed.
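Right after parsing, validate the request and fail early with a clear error. Here's a minimal sketch of that step; the exact shape and error messages are my own assumptions, not Radhika's actual code:

```typescript
// Hypothetical shape of the parsed request body.
interface ChatRequest {
  messages?: { role: string; content: string }[];
  mode?: string;
  provider?: string;
  apiKey?: string;
}

const SUPPORTED_PROVIDERS = ["groq", "openai", "claude", "gemini"];

// Returns an error message, or null when the request is valid.
function validateChatRequest(body: ChatRequest): string | null {
  if (!Array.isArray(body.messages) || body.messages.length === 0) {
    return "messages must be a non-empty array";
  }
  const provider = body.provider ?? "groq";
  if (!SUPPORTED_PROVIDERS.includes(provider)) {
    return `unsupported provider: ${provider}`;
  }
  // In this setup, OpenAI and Claude need a user-supplied key.
  if ((provider === "openai" || provider === "claude") && !body.apiKey) {
    return `apiKey is required for ${provider}`;
  }
  return null;
}
```

In the route handler, you'd return a `400` response with that message whenever validation fails, before touching any provider.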
Assign the Prompt
Add a system prompt describing how you want your chatbot to behave and what it should talk about. You can also link it to a KB (knowledge base) or create your own mini knowledge base to make your bot more specific to something.
I haven't implemented this, as I wanted Radhika to stay generic and serve as a good template for someone like you.
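One simple way to wire mode-specific behavior is a plain map from mode name to system prompt, with a generic fallback. A sketch of the idea follows; the mode names mirror Radhika's, but the prompt strings are placeholders I made up:

```typescript
// Map each chat mode to its system prompt. Placeholder strings only.
const SYSTEM_PROMPTS: Record<string, string> = {
  general: "You are a helpful, friendly assistant.",
  bff: "You are the user's supportive best friend. Keep replies casual.",
  learning: "You are a patient tutor. Explain concepts step by step.",
};

// Unknown modes fall back to the generic prompt.
function getSystemPrompt(mode: string): string {
  return SYSTEM_PROMPTS[mode] ?? SYSTEM_PROMPTS.general;
}
```

Keeping prompts in one module like this (instead of inline strings in the handler) is also what makes later KB injection painless.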
If you're also adding multi-provider support, route each request to the correct provider. Each provider has custom logic to instantiate the model, handle errors, and stream the response using:
```ts
await streamText({ ... })
```
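The routing itself can be a registry keyed by provider name, so the request handler never contains provider-specific branches. Here's a dependency-free sketch of the pattern; in a real app each entry would wrap its SDK client and stream via `streamText`, but the handlers below are stand-in stubs:

```typescript
type Message = { role: string; content: string };
type ProviderHandler = (messages: Message[], apiKey?: string) => Promise<string>;

// Registry of providers. Real entries would instantiate an SDK client
// and stream the response; these stubs just echo for illustration.
const providers: Record<string, ProviderHandler> = {
  groq: async (messages) => `groq reply to: ${messages[messages.length - 1]?.content}`,
  gemini: async (messages) => `gemini reply to: ${messages[messages.length - 1]?.content}`,
};

// The handler only looks up and delegates; it never knows provider internals.
async function routeChat(
  provider: string,
  messages: Message[],
  apiKey?: string
): Promise<string> {
  const handler = providers[provider];
  if (!handler) throw new Error(`Unknown provider: ${provider}`);
  return handler(messages, apiKey);
}
```

With this shape, adding a provider means one new file plus one registration entry, and the core handler never changes.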
Looking for a prompt? Here it is (add it in the same chat session, or create a new one):
### Agentic Backend Architecture Prompt
You are a senior backend engineer designing a **production-ready AI chat backend** with **multi-provider model support**, clean architecture, and long-term maintainability.
Your goal is to build a backend that powers an AI assistant similar to **Radhika**, where users can switch models dynamically while keeping the system simple, extensible, and readable.
## Core Backend Principles
* Clear separation of responsibilities
* No provider-specific logic inside request handlers
* Easy to add or remove AI providers
* Streaming responses by default
* Minimal abstractions, no over-engineering
* Code should read like documentation
## Request Handling Flow
* Use a single POST endpoint for chat interactions
* Parse the incoming JSON request at the boundary
```ts
const body = await req.json();
const { messages, mode = "general", provider = "groq", apiKey } = body;
```
* `messages`: full conversation history from the client
* `mode`: determines which system prompt to apply
* `provider`: selected AI backend
* `apiKey`: optional user-supplied key for providers that require it
Validation should happen immediately after parsing and fail early with clear errors.
## Prompt Management
* System prompts must be centralized
* Store prompts in a dedicated module or folder
* Map prompts by mode name
* Avoid hardcoded prompt strings inside logic
* Allow future expansion into:
* Knowledge base injection
* Context enrichment
* Mode-specific behavior tuning
If no custom knowledge base is attached, default to a generic assistant prompt suitable for broad usage.
## Provider Routing
* Use a provider router layer
* Route requests based on the `provider` field
* Each provider must live in its own isolated module
* No shared state between providers
Each provider module should:
* Instantiate its own client
* Normalize input messages
* Handle provider-specific errors
* Support streaming responses using:
```ts
await streamText({ ... })
```
The request handler should never know how a provider works internally.
## Streaming Architecture
* Streaming must be first-class, not optional
* Keep streaming logic abstracted behind a common interface
* Providers should emit tokens in a unified format
* Handle disconnects and stream cleanup gracefully
## File Organization Guidelines
* Organize by responsibility, not file type
* Suggested structure:
* `/routes` for request handlers
* `/providers` for AI provider implementations
* `/prompts` for system and mode prompts
* `/utils` for shared helpers
* `/types` for request and provider contracts
* One provider per file
* One responsibility per file
* No oversized files
## Error Handling
* Normalize all provider errors
* Never leak raw provider error messages to clients
* Return consistent error shapes
* Log provider-specific details internally
## Extensibility Rules
* Adding a new provider should require:
* One new provider file
* One registration entry in the provider router
* No changes to the core request handler
* No breaking changes to existing providers
## Maintainability Constraints
* Avoid deeply nested conditionals
* Prefer early returns
* Use explicit naming over clever abstractions
* Add comments only where intent is non-obvious
* Keep configuration and logic separate
## Final Goal
The backend should feel like a **reference implementation** that:
* Supports multiple AI providers cleanly
* Streams responses efficiently
* Is easy for other developers to fork
* Serves as a strong template for building AI chat systems
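To make the error-handling rules in the prompt above concrete: every provider failure gets mapped to one consistent shape before it reaches the client, while raw details stay in the server logs. A minimal sketch, with shapes and messages that are my own choices rather than Radhika's actual code:

```typescript
interface ApiError {
  error: string; // safe, user-facing message
  provider: string; // which backend failed
  status: number; // HTTP status to return
}

// Never leak raw provider messages to clients; log them internally instead.
function normalizeProviderError(provider: string, err: unknown): ApiError {
  console.error(`[${provider}]`, err); // provider-specific detail stays in logs
  const message = err instanceof Error ? err.message : String(err);
  if (/rate limit|429/i.test(message)) {
    return {
      error: "The model is busy. Try again shortly or switch providers.",
      provider,
      status: 429,
    };
  }
  if (/api key|401|403/i.test(message)) {
    return {
      error: "Invalid or missing API key for this provider.",
      provider,
      status: 401,
    };
  }
  return { error: "The provider returned an error. Please retry.", provider, status: 502 };
}
```

Returning the same `{ error, provider, status }` shape for every backend is what lets the frontend show one generic error UI instead of per-provider handling.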
Adding a Database (optional)
Though this is totally optional, if you want persistent storage for user chats that can be retrieved later, add a database. The same goes if you want to track your bot's users.
I used it to implement both.
Initially, I was using the Supabase free tier, but I kept hitting its limits, and my app was going stale. So I switched to Appwrite. The two are quite different: Supabase is SQL-based, while Appwrite is NoSQL. Use the node-appwrite package to skip adding the schema manually.
If you want to create something similar to what I've created, then modify or replicate setup_appwrite_schema.
Looking for a prompt? Think through your logic first, write it down, and then ask ChatGPT to convert it into an agentic prompt. You will enjoy this!
You're so done!
That's how simple it is to create a full-fledged chatbot with a professional codebase.
Once done, you can upload it to GitHub and then host it on serverless platforms like Vercel.
If that's not enough for you, start with Radhika and modify it as much as you like.
Radhika
A modern AI assistant that adapts to how you work and think. Multiple modes, multiple models, one seamless chat experience. Features multiple LLM providers, image generation, voice interaction, and persistent chat history.
Try now: https://radhika-sharma.vercel.app
Features
- 6 Chat Modes: General, Productivity, Wellness, Learning, Creative, BFF
- Multi-Provider LLM: Groq, Gemini, OpenAI, Claude
- Image Generation: Pollinations, DALL·E 3, Hugging Face, Free alternatives
- Voice: Speech-to-text input & text-to-speech output
- Auth & Persistence: Appwrite auth with chat history & favorites
- UI: Light/dark themes, modern & pixel UI styles
Quick Start
```bash
git clone https://github.com/RS-labhub/radhika.git
cd radhika
bun install   # or npm install
bun run dev   # or npm run dev
```
Open: http://localhost:3000
License
MIT License - see LICENSE
I have implemented a lot of other features, like voice recognition and synthesis using WebKit and ElevenLabs, and image generation using pollinations.ai, OpenAI, Gemini, Hugging Face models, etc. (Pollinations.ai and Hugging Face models are free for generating images, videos, text, and more.)
Conclusion
Creating a chatbot is super easy and doesn't require much specialized knowledge. It's just like texting with agents.
However, you do need prompting skills to get to a finished product in far less time.
Want some prompting tips? Comment or reach out to me!
Before sharing the ways to reach me, I want you to star Radhika's GitHub repo.
Find the Live Demo here: https://radhika-sharma.vercel.app/
Want to connect with me? Visit my portfolio's contact page.


