How I built a multi-language AI chatbot for $2/month (full code walkthrough)
I needed a chatbot that could handle English, Spanish, Portuguese, and Tagalog for different user segments. My first instinct was to use the OpenAI API directly. Then I saw the bill.
Here's how I ended up building the same thing for about $2/month total — and what I learned along the way.
The problem with rolling your own API access
When you call Claude or GPT directly, you pay per token. That sounds cheap until you realize:
- A single conversation of 10 messages ≈ 2,000-4,000 tokens
- 1,000 conversations/month = 2-4 million tokens
- At $3 per million input tokens for Claude Sonnet (output tokens cost more), that's $6-12/month just in API costs
For a side project with uncertain traffic, that unpredictability is painful.
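To make the arithmetic concrete, here's a tiny cost model. The per-conversation token count and the price are the assumed figures from above, not measured values:

```javascript
// Rough cost model for direct per-token billing.
// Numbers are the assumptions from the bullets above, not measurements.
const TOKENS_PER_CONVERSATION = 3000; // midpoint of the 2,000-4,000 range
const PRICE_PER_MILLION = 3;          // USD per million input tokens

function monthlyApiCost(conversationsPerMonth) {
  const tokens = conversationsPerMonth * TOKENS_PER_CONVERSATION;
  return (tokens / 1_000_000) * PRICE_PER_MILLION;
}

console.log(monthlyApiCost(1000)); // → 9 (USD/month at 1,000 conversations)
```

The point isn't the exact dollar figure; it's that the cost line slopes upward with usage, which is exactly what you don't want for a side project with unpredictable traffic.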
The architecture I used
Instead of managing API keys, rate limits, and billing myself, I proxied everything through SimplyLouie's developer API — a flat $10/month tier that handles all the Claude API access.
Here's the full setup:
1. Backend handler (Node.js)
```javascript
const express = require('express');
const axios = require('axios');

const app = express();
app.use(express.json());

const LOUIE_API_KEY = process.env.LOUIE_API_KEY;
const SUPPORTED_LANGUAGES = ['en', 'es', 'pt', 'tl'];

// Detect language from the user message via a quick model call
async function detectLanguage(text) {
  const response = await axios.post('https://simplylouie.com/api/chat', {
    message: `Detect the language of this text and respond with only the 2-letter ISO code (en, es, pt, tl, etc): "${text}"`
  }, {
    headers: { 'Authorization': `Bearer ${LOUIE_API_KEY}` }
  });
  const lang = response.data.response.trim().toLowerCase();
  // Fall back to English if the model returns something unsupported
  return SUPPORTED_LANGUAGES.includes(lang) ? lang : 'en';
}

// System prompts by language
const SYSTEM_PROMPTS = {
  en: 'You are a helpful assistant. Respond in English.',
  es: 'Eres un asistente útil. Responde en español.',
  pt: 'Você é um assistente prestativo. Responda em português.',
  tl: 'Ikaw ay isang kapaki-pakinabang na katulong. Sumagot sa Filipino.'
};

app.post('/chat', async (req, res) => {
  try {
    const { message, sessionId } = req.body;

    // Auto-detect language
    const lang = await detectLanguage(message);

    // Send to Claude via the SimplyLouie proxy
    const response = await axios.post('https://simplylouie.com/api/chat', {
      message,
      systemPrompt: SYSTEM_PROMPTS[lang],
      sessionId
    }, {
      headers: { 'Authorization': `Bearer ${LOUIE_API_KEY}` }
    });

    res.json({
      response: response.data.response,
      detectedLanguage: lang
    });
  } catch (err) {
    // Without this, a failed upstream call leaves the request hanging
    res.status(502).json({ error: 'Upstream chat request failed' });
  }
});

app.listen(3000);
```
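One optimization worth considering (a sketch, not part of the original server): cache the detected language per session, so the extra detection call runs once per conversation instead of on every message. `getSessionLanguage` is a hypothetical helper layered on top of the `detectLanguage()` function above:

```javascript
// Per-session language cache: detect once, reuse for later messages.
// (Hypothetical helper; `detect` would be the detectLanguage() call above.)
const sessionLanguages = new Map();

async function getSessionLanguage(sessionId, text, detect) {
  if (sessionLanguages.has(sessionId)) {
    return sessionLanguages.get(sessionId); // cache hit: no API call
  }
  const lang = await detect(text); // first message: one detection call
  sessionLanguages.set(sessionId, lang);
  return lang;
}
```

The trade-off: a user who switches languages mid-conversation stays pinned to their first language until the session ends, so it only suits flows where that's acceptable.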
2. Simple frontend (React)
```jsx
import { useState } from 'react';

export default function MultiLanguageChat() {
  const [messages, setMessages] = useState([]);
  const [input, setInput] = useState('');
  const [loading, setLoading] = useState(false);
  // Lazy initializer: the session ID is generated once per mount
  const [sessionId] = useState(() => Math.random().toString(36).slice(2));

  const sendMessage = async () => {
    if (!input.trim()) return;
    const userMessage = input;
    setInput('');
    setMessages(prev => [...prev, { role: 'user', text: userMessage }]);
    setLoading(true);
    try {
      const res = await fetch('/chat', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ message: userMessage, sessionId })
      });
      const data = await res.json();
      setMessages(prev => [...prev, {
        role: 'assistant',
        text: data.response,
        lang: data.detectedLanguage
      }]);
    } finally {
      // Reset the spinner even if the request fails
      setLoading(false);
    }
  };

  return (
    <div className="chat-container">
      <div className="messages">
        {messages.map((msg, i) => (
          <div key={i} className={`message ${msg.role}`}>
            {msg.lang && <span className="lang-badge">{msg.lang}</span>}
            {msg.text}
          </div>
        ))}
        {loading && <div className="loading">Thinking...</div>}
      </div>
      <div className="input-area">
        <input
          value={input}
          onChange={e => setInput(e.target.value)}
          onKeyDown={e => e.key === 'Enter' && sendMessage()}
          placeholder="Type in any language..."
        />
        <button onClick={sendMessage}>Send</button>
      </div>
    </div>
  );
}
```
3. Deploy with Docker
```dockerfile
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
# Install production dependencies only
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```
```yaml
# docker-compose.yml
version: '3.8'
services:
  chatbot:
    build: .
    ports:
      - "3000:3000"
    environment:
      - LOUIE_API_KEY=${LOUIE_API_KEY}
    restart: unless-stopped
```
What this costs at scale
| Users/month | My infra cost | API cost | Total |
|---|---|---|---|
| 100 | $5 (VPS) | $10 (flat) | $15 |
| 1,000 | $5 (VPS) | $10 (flat) | $15 |
| 10,000 | $20 (VPS) | $10 (flat) | $30 |
The flat-rate API means the chatbot cost doesn't scale with usage — only my infrastructure does.
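Under the per-token pricing assumed earlier, you can also estimate the break-even point, i.e. the monthly volume at which a flat $10 tier becomes cheaper than paying per token (again, these are the article's assumed figures, not benchmarks):

```javascript
// Break-even: conversations/month where per-token billing would
// exceed the flat $10 tier. Numbers assumed from earlier in the post.
const TOKENS_PER_CONVERSATION = 3000; // midpoint of 2,000-4,000
const PRICE_PER_MILLION = 3;          // USD per million input tokens
const FLAT_RATE = 10;                 // USD/month

const costPerConversation =
  (TOKENS_PER_CONVERSATION / 1_000_000) * PRICE_PER_MILLION; // ≈ $0.009

const breakEven = Math.ceil(FLAT_RATE / costPerConversation);
console.log(breakEven); // → 1112 conversations/month
```

Below that volume the flat rate is buying predictability rather than raw savings, which was exactly the point for me.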
Language performance notes
After running this for a few weeks:
- English: Near-perfect, as expected
- Spanish: Excellent — Claude handles Latin American variants naturally
- Portuguese (Brazil): Very good — just specify "Brazilian Portuguese" in the system prompt
- Tagalog/Filipino: Good for common phrases, occasionally switches to English for technical terms (which is actually how Filipino speakers naturally talk)
The part that surprised me
The language detection step (the extra API call to identify the language) adds ~200ms of latency. For most use cases this is fine, but if you need real-time responsiveness, you can:
- Use a lightweight local library like `franc` for detection (no API call)
- Let the user select their language upfront
- Detect it from the browser's `Accept-Language` header

I went with option 1 for production: `franc` is a small server-side dependency, and it saves that ~200ms on every message.
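For reference, here's a minimal sketch of how the `franc` route can plug into the existing code. A couple of assumptions worth flagging: `franc` v6+ is ESM-only and returns ISO 639-3 codes (`'eng'`, `'spa'`, `'und'` when it can't decide), so you need a small mapping back to the app's 2-letter codes; `ISO3_TO_APP` and `toAppLanguage` are names I made up for this sketch.

```javascript
// Local language detection, assuming franc v6+ (ESM-only):
// import { franc } from 'franc';

// Map franc's ISO 639-3 output to the app's 2-letter codes
const ISO3_TO_APP = { eng: 'en', spa: 'es', por: 'pt', tgl: 'tl' };

function toAppLanguage(iso3Code) {
  // franc returns 'und' (undetermined) for short or ambiguous text;
  // fall back to English, matching the API-based detector's behavior
  return ISO3_TO_APP[iso3Code] ?? 'en';
}

// Usage with franc: const lang = toAppLanguage(franc(userMessage));
```

One caveat from my testing: statistical detectors like `franc` are unreliable on very short inputs ("hi", "ok"), so the English fallback gets exercised more than you might expect.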
Try it yourself
If you want to build something similar, the developer API is at simplylouie.com/developers. $10/month flat, no per-token surprises.
If you just want the chatbot without the API layer, there's also a personal plan at $2/month — the most affordable AI assistant I've found.
What languages are you building AI features for? Drop a comment — I'm curious what's working for people outside the English-speaking world.