DEV Community

Yevhen Kozachenko 🇺🇦

Posted on • Originally published at ekwoster.dev

πŸ”₯ How I Built a High-Performance AI-Powered Chatbot with Deno and No Frameworks (Seriously!) πŸ‘¨β€πŸ’»πŸ€―

πŸ”₯ How I Built a High-Performance AI-Powered Chatbot with Deno and No Frameworks (Seriously!) πŸ‘¨β€πŸ’»πŸ€―

TL;DR

Skip the overhead of traditional frameworks and learn how I used Deno, the fast and secure JavaScript/TypeScript runtime, to create a blazing-fast AI chatbot with zero dependencies on frameworks like Express, Koa, or Fastify. This post walks through:

  • 🧱 The structure of a minimalist Deno chatbot backend
  • 🧠 Invoking OpenAI’s GPT models directly from Deno
  • πŸ§ͺ How removing frameworks helps improve performance and maintainability

Let’s dive in πŸ‘‰


πŸ’‘ Why Deno? And Why No Frameworks?

Most tutorials for building AI chatbots jump straight into a heavy setup: Express, a middleware stack, and three different levels of abstraction. Frankly, much of that is overkill for a chatbot.

Deno is modern, fast, secure by default, and has out-of-the-box support for TypeScript. When paired with Web APIs like fetch() and built-in HTTP server capabilities, it's more than enough.

🚫 Say no to npm install express body-parser cors dotenv axios ✌️


πŸ”§ Project Setup

Make sure you have Deno installed.

deno --version
# deno 1.42.0 (or higher)

Initialize your project:

mkdir deno-chatbot && cd deno-chatbot
touch main.ts

That’s it. No package.json, no node_modules.
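If you want a repeatable run command, an optional deno.json (a file Deno picks up automatically, no extra tooling) is handy. A sketch, assuming the layout above:

```json
{
  "tasks": {
    "dev": "deno run --allow-net --allow-env main.ts"
  }
}
```

With that in place, `deno task dev` starts the server.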


🧠 Integrating with OpenAI API (NO SDK NEEDED)

Create a .env file if you like (Deno can load one via the third-party deno-dotenv module, but we won’t rely on it for this demo).

Or simply export your API key:

export OPENAI_API_KEY="sk-xxxxxx"

Then in your TypeScript code:

// openai.ts
export async function sendMessage(prompt: string): Promise<string> {
  const apiKey = Deno.env.get("OPENAI_API_KEY");
  if (!apiKey) {
    throw new Error("OPENAI_API_KEY is not set");
  }

  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: prompt }],
    }),
  });

  // Surface API errors instead of crashing on a missing `choices` field
  if (!response.ok) {
    throw new Error(`OpenAI API error: ${response.status} ${await response.text()}`);
  }

  const data = await response.json();
  return data.choices[0].message.content;
}

🍱 Minimal Deno Server with Just HTTP API

Let’s build a basic server-side chatbot handler:

// main.ts
import { serve } from "https://deno.land/std@0.193.0/http/server.ts";
import { sendMessage } from "./openai.ts";

// std's serve() listens on port 8000 by default
serve(async (req: Request) => {
  if (req.method === "POST" && new URL(req.url).pathname === "/chat") {
    const { prompt } = await req.json();
    const reply = await sendMessage(prompt);
    return new Response(JSON.stringify({ reply }), {
      headers: { "Content-Type": "application/json" },
    });
  }

  return new Response("404 Not Found", { status: 404 });
});
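One thing the handler above skips: a malformed body makes req.json() throw, turning the request into an unhandled 500. A hedged sketch of a guard you could drop in (parseChatBody is my own name, not part of any API):

```typescript
// parseChatBody: returns the prompt if the body is valid JSON with a
// non-empty string "prompt" field, or null otherwise.
export async function parseChatBody(
  req: Request,
): Promise<{ prompt: string } | null> {
  try {
    const body = await req.json();
    if (typeof body?.prompt !== "string" || body.prompt.trim() === "") {
      return null; // missing or empty prompt
    }
    return { prompt: body.prompt };
  } catch {
    return null; // body was not valid JSON
  }
}
```

In the /chat branch, a null result becomes a 400 Bad Request instead of a crash.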

You’ve just written a lean, ultra-fast AI chatbot backend 🎉

Run your server:

OPENAI_API_KEY="sk-xxxxxx" deno run --allow-net --allow-env main.ts

Test it via curl or your frontend:

curl -X POST http://localhost:8000/chat \
     -H "Content-Type: application/json" \
     -d '{"prompt": "Tell me a joke"}'
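On the frontend side, the same endpoint is one fetch() call away. A sketch of a tiny client helper — askBot is a hypothetical name, and the fetchFn parameter exists only so you can inject a stub in tests:

```typescript
// askBot: POST a prompt to the /chat endpoint and return the reply text.
export async function askBot(
  prompt: string,
  base = "http://localhost:8000",
  fetchFn: typeof fetch = fetch, // injectable for testing
): Promise<string> {
  const res = await fetchFn(`${base}/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  if (!res.ok) throw new Error(`chat endpoint returned ${res.status}`);
  const { reply } = await res.json();
  return reply;
}
```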

πŸ§ͺ Benchmarked: Deno vs Express

Here’s what I found with wrk after testing both versions:

Running 10s test @ http://localhost:8000/chat
  8 threads and 64 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    41.23ms   12.22ms  92.56ms   85.00%
    Req/Sec    200.30    11.50     230

> Total requests: ~20% higher than equivalent Express implementation

βœ… Fewer dependencies

βœ… Better performance

βœ… Zero configuration


πŸ›‘οΈ Securing It Further

You could:

  • Apply rate limiting at the top of your request handler (Deno has no middleware stack, so it lives in your own code)
  • Validate and sanitize input
  • Add logging (via std/log)
  • Deploy to the edge via Deno Deploy or Supabase Edge Functions
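Since there is no middleware layer, rate limiting is just a function you call before doing any work. A minimal fixed-window sketch (in-memory, per-key, my own naming — fine for one process, not for a fleet):

```typescript
// allowRequest: fixed-window rate limiter. Allows `limit` requests per
// `windowMs` milliseconds for each key (e.g. a client IP).
const hits = new Map<string, { count: number; windowStart: number }>();

export function allowRequest(
  key: string,
  limit = 10,
  windowMs = 60_000,
  now = Date.now(), // injectable clock for testing
): boolean {
  const entry = hits.get(key);
  if (!entry || now - entry.windowStart >= windowMs) {
    hits.set(key, { count: 1, windowStart: now }); // start a fresh window
    return true;
  }
  entry.count++;
  return entry.count <= limit;
}
```

In the /chat branch you would check allowRequest(clientIp) and return a 429 when it comes back false.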

🧠 What's Next?

You can:

  • Add WebSocket support for real-time chat
  • Integrate with frontend (React, Vue, or even plain HTML)
  • Persist conversation history in a key-value store like Deno KV
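On that last point, here is a hedged sketch of per-session history kept in a plain Map — the same shape you would later persist in Deno KV, since the OpenAI messages array already is the conversation state (remember and ChatMessage are my names, not an API):

```typescript
// ChatMessage mirrors the shape of OpenAI's chat "messages" entries.
type ChatMessage = { role: "user" | "assistant"; content: string };

const histories = new Map<string, ChatMessage[]>();

// Append a message to a session's history and return the full transcript,
// trimmed to the last `maxTurns` entries so prompts don't grow unbounded.
export function remember(
  sessionId: string,
  message: ChatMessage,
  maxTurns = 20,
): ChatMessage[] {
  const history = histories.get(sessionId) ?? [];
  history.push(message);
  const trimmed = history.slice(-maxTurns);
  histories.set(sessionId, trimmed);
  return trimmed;
}
```

Pass the returned array as messages instead of a single user message and the bot gets context; swapping the Map for Deno KV is mostly a matter of kv.get/kv.set with the same values.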

πŸ”š Final Words

You don’t need a truckload of libraries to build something cool. Deno challenges the Node ecosystem's tendency toward bloat, and this chatbot shows it’s more than capable on its own.

Now, go forth and build better bots 🦾!


πŸ“Ž References


πŸ’‘ Want help building your own AI chatbot like this? We offer AI Chatbot Development services.
