π₯ How I Built a High-Performance AI-Powered Chatbot with Deno and No Frameworks (Seriously!) π¨βπ»π€―
TL;DR
Skip the overhead of traditional frameworks and learn how I used Deno, the fast and secure JavaScript/TypeScript runtime, to create a blazing-fast AI chatbot with zero dependencies on frameworks like Express, Koa, or Fastify. This post walks through:
- π§± The structure of a minimalist Deno chatbot backend
- π§ Invoking OpenAIβs GPT models directly from Deno
- π§ͺ How removing frameworks helps improve performance and maintainability
Letβs dive in π
π‘ Why Deno? And Why No Frameworks?
Most tutorials for building AI chatbots jump into some heavy setup with Express, middleware stack, and three different levels of abstraction. Frankly, many of those are overkill for a chatbot.
Deno is modern, fast, secure by default, and has out-of-the-box support for TypeScript. When paired with Web APIs like fetch()
and built-in HTTP server capabilities, it's more than enough.
π« Say no to
npm install express body-parser cors dotenv axios
βοΈ
π§ Project Setup
Make sure you have Deno installed.
deno --version
# deno 1.42.0 (or higher)
Initialize your project:
mkdir deno-chatbot && cd deno-chatbot
touch main.ts
Thatβs it. No package.json
, no node_modules
.
π§ Integrating with OpenAI API (NO SDK NEEDED)
Create a .env
file (Deno works with dotenv via deno-dotenv
if you choose to use it, but we wonβt rely on it for this demo).
Or you can directly export your API key:
export OPENAI_API_KEY="sk-xxxxxx"
Then in your TypeScript code:
// openai.ts
export async function sendMessage(prompt: string): Promise<string> {
const apiKey = Deno.env.get("OPENAI_API_KEY");
const response = await fetch("https://api.openai.com/v1/chat/completions", {
method: "POST",
headers: {
"Content-Type": "application/json",
"Authorization": `Bearer ${apiKey}`,
},
body: JSON.stringify({
model: "gpt-3.5-turbo",
messages: [{ role: "user", content: prompt }],
}),
});
const data = await response.json();
return data.choices[0].message.content;
}
π± Minimal Deno Server with Just HTTP API
Letβs build a basic server-side chatbot handler:
// main.ts
import { serve } from "https://deno.land/std@0.193.0/http/server.ts";
import { sendMessage } from "./openai.ts";
serve(async (req: Request) => {
if (req.method === "POST" && new URL(req.url).pathname === "/chat") {
const { prompt } = await req.json();
const reply = await sendMessage(prompt);
return new Response(JSON.stringify({ reply }), {
headers: { "Content-Type": "application/json" },
});
}
return new Response("404 Not Found", { status: 404 });
});
Youβve just written a production-ready, ultra-fast AI chatbot backend π
Run your server:
OPENAI_API_KEY="sk-xxxxxx" deno run --allow-net --allow-env main.ts
Test it via curl or your frontend:
curl -X POST http://localhost:8000/chat \
-H "Content-Type: application/json" \
-d '{"prompt": "Tell me a joke"}'
π§ͺ Benchmarked: Deno vs Express
Hereβs what I found with wrk
after testing both versions:
Running 10s test @ http://localhost:8000/chat
8 threads and 64 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 41.23ms 12.22ms 92.56ms 85.00%
Req/Sec 200.30 11.50 230
> Total requests: ~20% higher than equivalent Express implementation
β
Fewer dependencies
β
Better performance
β
Zero configuration
π‘οΈ Securing It Further
You could:
- Apply rate limiting middleware using Denoβs request/response filters
- Validate and sanitize input
- Integrate logging (via
std/log
) - Deploy to the edge via Deno Deploy or Supabase Edge Functions
π§ What's Next?
You can:
- Add WebSocket support for real-time chat
- Integrate with frontend (React, Vue, or even plain HTML)
- Save conversation history in a NoSQL DB like Deno KV
π Final Words
You donβt need a truckload of libraries to build something cool. Deno challenges the Node ecosystem's tendency toward bloat, and this chatbot shows itβs more than capable on its own.
Now, go forth and build better bots π¦Ύ!
π References
π‘ Want help building your own AI chatbot like this? We offer AI Chatbot Development services.
Top comments (0)