Everyone wants an AI assistant these days, but not everyone wants to spend $100+ a month to run one. Let’s change that.
- Introduction
Hook: The rise of AI assistants (ChatGPT, Claude, Gemini, etc.)
Problem: APIs are expensive, and many tutorials assume you have enterprise budgets.
Promise: You’ll learn how to build a working AI chatbot that’s affordable, simple, and customizable.
- Choosing the Right Tools
Explain your tech stack and why it’s cost-effective:
Frontend: Next.js / React / plain HTML + JS
Backend: Node.js + Express or Python + FastAPI
AI API options:
OpenAI GPT-4-turbo (more capable) or GPT-3.5-turbo (cheap + reliable)
Anthropic Claude 3 Haiku (inexpensive and fast)
Ollama + Local LLMs (free but resource-intensive)
🪙 Tip: Mention a pricing comparison; for example, GPT-3.5-turbo works out to roughly $0.001 per short message, since billing is per 1,000 tokens rather than per request.
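To make the comparison concrete, you could include a quick back-of-the-envelope estimate. The sketch below assumes GPT-3.5-turbo's per-token rates at the time of writing (about $0.0005 per 1K input tokens and $0.0015 per 1K output tokens); readers should check the current pricing page before relying on the numbers.

// Rough per-message cost estimate; the rates below are assumptions, verify them on the provider's pricing page
const INPUT_PRICE_PER_1K = 0.0005;   // USD per 1,000 input tokens (assumed GPT-3.5-turbo rate)
const OUTPUT_PRICE_PER_1K = 0.0015;  // USD per 1,000 output tokens (assumed GPT-3.5-turbo rate)

function estimateCost(inputTokens, outputTokens) {
  return (inputTokens / 1000) * INPUT_PRICE_PER_1K + (outputTokens / 1000) * OUTPUT_PRICE_PER_1K;
}

// A short question (~500 tokens in) with a short answer (~300 tokens out):
console.log(estimateCost(500, 300)); // ≈ 0.0007 dollars, i.e. well under a tenth of a cent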
- Setting Up the Backend
Show how to create a simple endpoint:
// A minimal Express backend that forwards chat messages to the OpenAI API
import express from "express";
import fetch from "node-fetch"; // not needed on Node 18+, where fetch is built in

const app = express();
app.use(express.json());

app.post("/chat", async (req, res) => {
  const { message } = req.body;

  // Forward the user's message to the Chat Completions endpoint
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${process.env.OPENAI_API_KEY}` // keep the key in an environment variable, never in code
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: message }]
    })
  });

  const data = await response.json();
  res.json({ reply: data.choices[0].message.content });
});

app.listen(3000, () => console.log("Chatbot running on port 3000"));
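Once the server is running, it's worth sanity-checking the endpoint before touching any UI. A minimal sketch, assuming the backend above is listening on localhost:3000 and you're on Node 18+ (built-in fetch, run as an ES module so top-level await works):

// Hypothetical test script: send one message to the local /chat endpoint and print the reply
const res = await fetch("http://localhost:3000/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ message: "Hello! What can you do?" })
});

const { reply } = await res.json();
console.log(reply);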
- Adding a Simple Frontend
Minimal HTML/JS frontend with a chat box
Use fetch() to call the backend
Add a scrollable message UI for a chat-like experience (sketched after the snippet below)
You could even show a snippet like:
<input id="userInput" placeholder="Ask me anything..." />
<button onclick="sendMessage()">Send</button>
<div id="chat"></div>
<script>
async function sendMessage() {
const message = document.getElementById('userInput').value;
const res = await fetch('/chat', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ message })
});
const data = await res.json();
document.getElementById('chat').innerHTML += `<p>You: ${message}</p><p>Bot: ${data.reply}</p>`;
}
</script>
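For the scrollable message UI mentioned above, the chat container just needs a fixed height, vertical scrolling, and a nudge to the bottom after each new message. A minimal sketch (the sizing could equally live in a stylesheet):

// Run once on page load: make the chat box a fixed-height, scrollable panel
const chat = document.getElementById('chat');
chat.style.height = '300px';
chat.style.overflowY = 'auto';

// Call this after appending a message so the newest reply stays in view
function scrollChatToBottom() {
  chat.scrollTop = chat.scrollHeight;
}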
- Hosting It Cheaply
Use Render, Railway, or Vercel for free or low-cost hosting
Store API keys in environment variables
Optionally connect a custom domain (the free .vercel.app subdomain works too)
💰 Tip: You can run a chatbot for under $5/month using GPT-3.5 and smart caching.
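Smart caching is the main lever behind that number: identical questions shouldn't hit the paid API twice. A minimal sketch of the idea, rewriting the /chat handler from the backend section with an in-memory Map; askOpenAI is a hypothetical helper wrapping the same fetch call shown earlier, and the dotenv import is an optional convenience for loading the API key locally:

import 'dotenv/config'; // optional: loads OPENAI_API_KEY from a local .env file during development

// Cache replies keyed by the exact message text, so repeated questions cost nothing
const cache = new Map();

app.post("/chat", async (req, res) => {
  const { message } = req.body;

  if (cache.has(message)) {
    return res.json({ reply: cache.get(message), cached: true });
  }

  const reply = await askOpenAI(message); // hypothetical helper wrapping the fetch call from the backend section
  cache.set(message, reply);
  res.json({ reply });
});

An in-memory cache resets on every deploy; swapping the Map for Redis or a small database table keeps cache hits across restarts.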
- Enhancing the Chatbot
Optional ideas for future upgrades:
Memory (using a lightweight database or localStorage)
Persona system (add system prompts; memory and persona are both sketched after this list)
File upload and context
Voice integration (Web Speech API)
Embedding-based knowledge base using Supabase or Pinecone
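Memory and a persona are both just changes to the messages array you already send. A minimal sketch of the idea, keeping one in-memory history per process (a real app would store history per user); the system prompt wording is only an example:

// Persona: a system prompt pinned to the front of the conversation
// Memory: keep appending user/assistant turns to the same array across requests
const history = [
  { role: "system", content: "You are a friendly assistant for my personal site. Keep answers short." }
];

async function chatWithMemory(userMessage) {
  history.push({ role: "user", content: userMessage });

  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${process.env.OPENAI_API_KEY}`
    },
    body: JSON.stringify({ model: "gpt-3.5-turbo", messages: history })
  });

  const data = await response.json();
  const reply = data.choices[0].message.content;
  history.push({ role: "assistant", content: reply });
  return reply;
}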
- Conclusion
Wrap up with:
What was built
Cost recap
Possible next steps
Invitation for readers to fork your repo or share their own chatbot versions