I was building a website builder platform and needed an AI assistant that could talk to users, call backend functions, and drop into any page without coupling to a framework. Everything I found was tied to React or required a backend SDK. So I built my own.
## What it does
aichats renders a floating chat button on your page. Click it, chat opens. The LLM handles conversation and when it needs to do something — search products, create a ticket, query a database — it calls your tools automatically.
The full tool-calling loop runs without manual orchestration. You define tools, the LLM decides when to call them, the plugin executes them, feeds results back, and the LLM responds.
```bash
npm install aichats
```
## Quick start — script tag
Drop this on any HTML page:
```html
<script src="https://unpkg.com/aichats/dist/ai-chat-plugin.js"></script>
<script>
  AIChatPlugin.create({
    apiKey: "sk-...",
    provider: "openai",
    systemPrompt: "You help customers find products.",
    tools: [
      {
        type: "function",
        function: {
          name: "search_products",
          description: "Search the product catalog",
          parameters: {
            type: "object",
            properties: { query: { type: "string" } },
            required: ["query"],
          },
        },
      },
    ],
    actionEndpoint: "/api/chat",
  });
</script>
```
## ES Module
```js
import { create } from "aichats";

create({
  provider: "anthropic",
  apiKey: "sk-ant-...",
  model: "claude-sonnet-4-6",
  systemPrompt: "You are a helpful assistant.",
  tools: TOOLS,
  actionEndpoint: "/api/chat",
});
```
## Next.js
```tsx
"use client";
import { useEffect } from "react";

export default function Chat() {
  useEffect(() => {
    import("aichats").then((mod) => {
      mod.create({
        provider: "openai",
        apiKey: process.env.NEXT_PUBLIC_AI_KEY!,
        tools: TOOLS,
        actionEndpoint: "/api/chat",
      });
    });
  }, []);
  return null;
}
```
## Tool calling — the interesting part
Define tools with the OpenAI function-calling schema. The plugin handles the loop:
1. User sends a message
2. LLM responds with text, tool calls, or both
3. Plugin executes each tool call against your endpoint
4. Results are fed back to the LLM
5. Repeat until the LLM responds with just text
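The loop above can be sketched in a few lines. This is a simplified illustration of the idea, not the plugin's actual source; `runToolLoop`, `callLLM`, and the message shapes are all stand-ins:

```typescript
type ToolCall = { id: string; name: string; args: unknown };
type Msg = {
  role: "user" | "assistant" | "tool";
  content: string;
  toolCalls?: ToolCall[];
};

async function runToolLoop(
  messages: Msg[],
  callLLM: (msgs: Msg[]) => Promise<Msg>,
  execTool: (name: string, args: unknown) => Promise<string>,
  maxIterations = 8,
): Promise<Msg> {
  for (let i = 0; i < maxIterations; i++) {
    const reply = await callLLM(messages);
    messages.push(reply);
    // Plain text answer with no tool calls: the loop is done.
    if (!reply.toolCalls?.length) return reply;
    // Execute each requested tool and append the result for the next round.
    for (const call of reply.toolCalls) {
      messages.push({ role: "tool", content: await execTool(call.name, call.args) });
    }
  }
  throw new Error("maxIterations exceeded without a final answer");
}
```

The iteration cap is what keeps a confused model from calling tools forever.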
Your server receives:
```json
{ "action": "search_products", "args": { "query": "shoes" } }
```
And responds:
```json
{ "result": "[{\"name\": \"Running Shoes\", \"price\": 89.99}]" }
```
The LLM reads the result and summarises it for the user.
Or handle tools client-side — no server needed:
```js
create({
  // ...provider, apiKey, tools as above
  onAction: async (name, args) => {
    if (name === "get_cart") return JSON.stringify(cartStore.getItems());
    return "Unknown action";
  },
});
```
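Once you have more than a couple of client-side tools, a dispatch table keeps the handler tidy. A sketch, where the handler names and data are purely illustrative (`get_cart` stands in for your own store access):

```typescript
type Handler = (args: Record<string, unknown>) => string | Promise<string>;

// One handler per tool name; add entries as you add tool definitions.
const handlers: Record<string, Handler> = {
  // Stand-in for something like cartStore.getItems()
  get_cart: () => JSON.stringify([{ sku: "shoe-1", qty: 2 }]),
  get_page_url: () => JSON.stringify({ url: "https://example.com/shop" }),
};

async function onAction(name: string, args: Record<string, unknown>): Promise<string> {
  const handler = handlers[name];
  return handler ? handler(args) : `Unknown action: ${name}`;
}
```

You would then pass this function straight in as the `onAction` option.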
## Server route examples
Next.js App Router:
```ts
// app/api/chat/route.ts
import { NextResponse } from "next/server";

export async function POST(request: Request) {
  const { action, args } = await request.json();

  switch (action) {
    case "search_products": {
      const products = await db.product.findMany({
        where: { name: { contains: String(args.query) } },
        take: 10,
      });
      return NextResponse.json({ result: JSON.stringify(products) });
    }
    default:
      return NextResponse.json({ result: `Unknown: ${action}` });
  }
}
```
Express:
```js
app.post("/api/chat", async (req, res) => {
  const { action, args } = req.body;

  switch (action) {
    case "search_products": {
      const { rows } = await pool.query(
        "SELECT * FROM products WHERE name ILIKE $1 LIMIT 10",
        [`%${args.query}%`]
      );
      return res.json({ result: JSON.stringify(rows) });
    }
    default:
      return res.json({ result: `Unknown: ${action}` });
  }
});
```
FastAPI:
```python
@app.post("/api/chat")
async def chat(body: ChatAction):
    if body.action == "search_products":
        rows = await pool.fetch(
            "SELECT * FROM products WHERE name ILIKE $1 LIMIT 10",
            f"%{body.args['query']}%",
        )
        return {"result": json.dumps([dict(r) for r in rows])}
    return {"result": f"Unknown: {body.action}"}
```
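Whichever framework you use, the endpoint is reachable by anything on the web, not just the plugin, so it's worth validating the payload shape before dispatching. A minimal guard in TypeScript (a hypothetical helper, not part of aichats):

```typescript
interface ChatAction {
  action: string;
  args: Record<string, unknown>;
}

// Narrow an unknown request body to the { action, args } shape the plugin sends.
function parseChatAction(body: unknown): ChatAction | null {
  if (typeof body !== "object" || body === null) return null;
  const { action, args } = body as Record<string, unknown>;
  if (typeof action !== "string" || action.length === 0) return null;
  if (typeof args !== "object" || args === null || Array.isArray(args)) return null;
  return { action, args: args as Record<string, unknown> };
}
```

Reject with a 400 when it returns `null`, then run your `switch` on the parsed value.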
## Theming
All CSS custom properties. Override on your page — no build step:
```css
/* Dark theme */
:root {
  --acp-primary: #6366f1;
  --acp-bg: #0f172a;
  --acp-text: #e2e8f0;
  --acp-border: #334155;
}

/* Size */
:root {
  --acp-width: 480px;
  --acp-height: 700px;
}

/* Brand colours */
:root {
  --acp-primary: #16a34a;
  --acp-blue: #16a34a;
  --acp-blue-light: #f0fdf4;
}
```
Mobile responsive — goes fullscreen under 480px automatically.
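Because it's all plain CSS variables, you can also key the theme off the user's OS preference with a media query, assuming the same `--acp-*` names and dark values shown above:

```css
/* Follow the OS colour scheme */
:root {
  --acp-bg: #ffffff;
  --acp-text: #0f172a;
}

@media (prefers-color-scheme: dark) {
  :root {
    --acp-bg: #0f172a;
    --acp-text: #e2e8f0;
  }
}
```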
## Under the hood
- Zero dependencies — just HTML, CSS, JS
- ~30KB minified
- Multi-provider: OpenAI, Anthropic, LM Studio, Ollama (any OpenAI-compatible API)
- Expand/collapse panel with keyboard support
- Proxy support for CORS (local models on different ports)
- Max iteration cap to prevent runaway tool loops
- Runtime API: `setTools()`, `setSystemPrompt()`, `send()`, `clear()`, `destroy()`
- TypeScript types included
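For the local-model providers, the same config options apply. A hypothetical LM Studio setup might look like the following; the port is LM Studio's usual default, and the model name is whatever you have loaded, so adjust both to your setup:

```js
// LM Studio serves an OpenAI-compatible API, by default on localhost:1234.
create({
  provider: "lmstudio",
  baseUrl: "http://localhost:1234/v1", // override if your server runs elsewhere
  apiKey: "not-needed",                // local servers typically ignore the key
  model: "qwen2.5-7b-instruct",        // whichever model you have loaded
  proxyUrl: "/llm-proxy",              // optional: sidestep CORS on local ports
  tools: TOOLS,
  actionEndpoint: "/api/chat",
});
```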
## All config options
| Option | Type | Default | Description |
|---|---|---|---|
| `provider` | `"openai" \| "anthropic" \| "lmstudio"` | required | LLM provider |
| `apiKey` | `string` | required | API key (stays client-side) |
| `model` | `string` | provider default | Model override |
| `baseUrl` | `string` | provider default | Base URL override |
| `proxyUrl` | `string` | — | Route all LLM requests through a proxy |
| `systemPrompt` | `string` | — | System prompt |
| `tools` | `ToolDef[]` | `[]` | Tools the LLM can call |
| `actionEndpoint` | `string` | — | Server URL for tool execution |
| `onAction` | `(name, args) => Promise<string>` | — | Client-side tool handler |
| `maxIterations` | `number` | `8` | Max tool round-trips per message |
| `title` | `string` | `"AI Chat"` | Header title |
| `position` | `"bottom-right" \| "bottom-left"` | `"bottom-right"` | Chat button position |
## Links
- npm: npmjs.com/package/aichats
- GitHub: github.com/will-march/aichats
Would love feedback. What's missing? What would make you actually use this?