TL;DR:
- React stack → assistant-ui
- Full agent framework → CopilotKit
- Quick prototype or non-React → Deep Chat
- Python backend → Chainlit (check project status first)
- Multi-framework + composable, early adopter → Loquix (no production deployments yet — read the limitations)
⚠️ Conflict of interest: I'm the author of Loquix, one of the libraries reviewed here. The independent reviews come first. Loquix gets its own clearly-labeled section at the end, written by me about my own work — treat it accordingly. If you want the quick take: for most teams, assistant-ui (React) or Deep Chat (everything else) is the more pragmatic answer.
Before the reviews: four types of lock-in
After evaluating all of these libraries, the biggest insight wasn't about any specific tool — it was that "lock-in" isn't one thing. There are at least four distinct flavors, and understanding which one you're accepting changes how much it matters:
| Type | Example | When it bites |
|---|---|---|
| Framework | assistant-ui (React-only) | When you need to support other frameworks |
| Architecture/runtime | CopilotKit | When your agent infrastructure evolves independently of your frontend |
| Ecosystem | Vercel stack | Creeps up over time; each layer pulls you deeper |
| API-surface | Deep Chat | When your requirements outgrow the component's config options |
Framework lock-in is rarely a problem if your whole team is on React. Architecture lock-in hurts when backend and frontend evolve at different speeds. Ecosystem lock-in you feel slowly — you start with useChat, six months later you're on Vercel hosting, each step made sense. API-surface lock-in only matters when your requirements outgrow the defaults.
I'll reference these types throughout the reviews so you can calibrate the tradeoffs for your situation.
What "chat UI" actually requires
"Messages going up, input at the bottom" — I thought the same thing. Then I started building. Here's the real scope:
Core: Token-by-token streaming without UI glitches, markdown + code blocks with syntax highlighting, a composer that auto-grows and handles Shift+Enter.
Interactive layer: File attachments (drag-and-drop + clipboard paste), feedback buttons for RLHF, a stop-generating button (you'll add this at 2 AM after your agent writes a 4,000-word essay about semicolons), model selectors.
Trust layer: Citations, reasoning traces, tool execution logs, cost estimates. Users increasingly expect to see why the AI said what it said.
The invisible stuff: Accessibility (WCAG, keyboard nav), theming that survives a designer, i18n.
Build all of that from scratch: 2–4 weeks, before a single line of agent logic. That's why this market exists.
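As one example of the hidden complexity: even the composer's Enter key needs a policy (send vs. newline vs. ignore during IME composition). A minimal sketch in TypeScript — `decideKey` and `ComposerAction` are my own illustrative names, not any library's API:

```typescript
// Illustrative composer key policy: Enter sends, Shift+Enter inserts a
// newline, and keystrokes during IME composition are ignored so CJK input
// doesn't fire an accidental send. Hypothetical names, not a real API.
type ComposerAction = "send" | "newline" | "ignore";

function decideKey(key: string, shiftKey: boolean, isComposing: boolean): ComposerAction {
  if (isComposing) return "ignore"; // Enter mid-composition must not submit
  if (key !== "Enter") return "ignore";
  return shiftKey ? "newline" : "send";
}
```

In a real handler you'd feed this from the `KeyboardEvent`'s `key`, `shiftKey`, and `isComposing` fields — and that's one small corner of the composer alone.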
The libraries
assistant-ui ⭐ ~7.9k — React, composable, headless — MIT
YC-backed. The library that shows up in every "build a ChatGPT clone" thread.
Follows the Radix headless pattern — unstyled primitives (Thread, Composer, Message) you compose yourself. State management is thoughtful, streaming is first-class, Vercel AI SDK integration is tight.
Who it's for: React + Next.js teams that want maximum control without implementing the hard parts. If your whole stack is React, this is the safe default — mature ecosystem, responsive maintainers, well-designed composability model.
Where it hurts: React-only (framework lock-in). Vue, Svelte, Angular, vanilla JS — not supported. "Headless" also means you do all the design work; getting a polished UI requires assembling many pieces and the learning curve is real.
Lock-in type: Framework.
CopilotKit ⭐ ~28.6k — React, agent framework with UI — commercial tiers available
Not just a UI library — an agentic application framework that includes UI components. Drop in <CopilotPortal />, get a full copilot experience. Agents can read your app state, call your functions, generate UI dynamically. They co-created AG-UI and partnered with Google on A2UI.
Who it's for: Greenfield projects where you're buying into their whole architecture. If you need deep agent-app state sync and generative UI, nothing else comes close.
Where it hurts: If all you need is a good chat interface, this is a flamethrower for a candle. More importantly: it's not just framework lock-in, it's architecture lock-in. You're adopting their agent execution model, state sync, action system — not just swapping in components. On brownfield projects with existing agent infrastructure, that means rethinking things you've already built.
Lock-in type: Architecture/runtime (the heaviest kind).
Vercel AI SDK + AI Elements — React, hooks + growing component layer — MIT
The AI SDK is everywhere: model-provider abstraction with useChat / useCompletion hooks. Not a UI library at its core, but Vercel has been expanding into actual components with AI Elements — composable React components on top of the SDK.
Worth mentioning separately: the Vercel AI Chatbot is the official open-source reference implementation built on this stack. Many teams use it as a starting point or benchmark for what a production chat UI looks like.
Who it's for: Anyone already in the Vercel ecosystem. AI Elements is worth evaluating alongside assistant-ui if you're on Next.js.
Where it hurts: The Vercel stack has gravitational pull. You start with useChat. Six months later you're on Vercel hosting. Each step made sense individually — the cumulative effect is ecosystem lock-in that's softer than CopilotKit's but real.
Lock-in type: Ecosystem.
TanStack AI — alpha, framework-agnostic hooks — MIT
TanStack — the team behind React Query, React Router, React Table — launched their AI toolkit. Framework-agnostic core, adapters for React, Solid, vanilla JS. Headless, type-safe.
Important context: this is alpha from a team with a strong track record of shipping production-grade, widely-adopted tools. That's different from "alpha from unknowns." If it matures the way React Query did, this will be a serious foundation layer.
Who it's for: Nobody in production yet. Watch this space.
Where it hurts: No pre-built UI components — hooks and utilities, not a chat interface. Vue and Angular adapters aren't there. Not ready for production today.
Google A2UI — declarative format for agent-generated UI
Agents send JSON describing which components to render; your frontend renders them from a trusted catalog. No executable code from the agent — no XSS nightmares. Framework-agnostic by design. Ships initial renderers for Lit, Angular, Flutter.
Who it's for: Teams building agent-generated dynamic UIs. The CopilotKit integration is worth watching, and this is one of the more interesting directions for generative UI / RSC-style patterns outside the React ecosystem.
Where it hurts: It's an infrastructure layer, not a component library. Not something you'd drop into a project and ship today.
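To make the trusted-catalog idea concrete, here's a hedged sketch — my own illustrative types, not A2UI's actual schema. The client walks the agent's JSON tree and rejects anything outside an allowlist, which is what keeps agent-supplied code from ever executing:

```typescript
// Sketch of catalog-based validation for agent-described UI (illustrative
// types, not the real A2UI schema). Unknown components are rejected rather
// than rendered — that's what closes the XSS door.
type UiNode = {
  component: string;
  props?: Record<string, unknown>;
  children?: UiNode[];
};

const TRUSTED_CATALOG = new Set(["card", "text", "button", "list"]);

function isRenderable(node: UiNode): boolean {
  if (!TRUSTED_CATALOG.has(node.component)) return false;
  return (node.children ?? []).every(isRenderable);
}
```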
Deep Chat ⭐ ~3.3k — Web Component, framework-agnostic — MIT
The one that made me sit up. A single Web Component for AI chat — framework-agnostic, built-in connections to OpenAI, HuggingFace, Cohere, Azure, Stability AI, AssemblyAI out of the box. Ships with speech-to-text, text-to-speech, file uploads, image handling.
Time-to-working-chat: under 10 minutes. No backend glue required to get started.
Who it's for: Prototypes, internal tools, non-React projects, or anywhere chat is a feature rather than the core experience. Best option when you need something working fast that doesn't care about your frontend framework.
Where it hurts: One component with configuration options, not a composable system. When you need to restructure layout, inject custom UI between messages, or add domain-specific controls — you're working within what the property API can express. Same tradeoff as any "convention over composition" approach: faster start, less flexibility when requirements diverge from defaults.
Lock-in type: API-surface.
Chainlit ⭐ ~11.4k — Python-first, full-stack
The go-to for Python developers who wanted a chat UI without touching JavaScript. Write your agent in Python, get a polished web UI with reasoning steps, file uploads, markdown.
Caveat: The original team stepped back from active development in May 2025. Community maintainers have picked it up. Worth verifying current project status before building on it.
One more thing: "Chat UI" is becoming "Agent UI"
Streaming is solved. Every library handles it well now — it's no longer a differentiator.
What's not solved is composition for agent-specific UI patterns: approval flows, reasoning traces, tool execution displays, cost transparency. These are becoming table stakes, not features. Libraries that only solve the messaging problem will feel incomplete within a year.
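One way to picture why composition matters here: an agent transcript is no longer a list of text messages but a stream of heterogeneous events, each needing its own renderer. A hedged TypeScript sketch — the types are mine for illustration, not any library's:

```typescript
// Illustrative discriminated union of agent-UI events. A chat library that
// only models { role, content } can't render these without escape hatches.
type AgentEvent =
  | { kind: "text"; content: string }
  | { kind: "reasoning"; steps: string[] }
  | { kind: "tool_call"; tool: string; status: "running" | "done" }
  | { kind: "approval_request"; action: string };

// A real renderer dispatches a component per `kind`; here we just
// produce a summary line to show the shape of the dispatch.
function summarize(event: AgentEvent): string {
  switch (event.kind) {
    case "text": return event.content;
    case "reasoning": return `reasoned in ${event.steps.length} steps`;
    case "tool_call": return `${event.tool} (${event.status})`;
    case "approval_request": return `awaiting approval: ${event.action}`;
  }
}
```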
Generative UI (agents that emit UI, not just text) is the next frontier — watch A2UI + CopilotKit here, and the RSC patterns in the Vercel ecosystem.
Loquix — what I built, and why you probably shouldn't use it yet
Repeat disclosure: this is my library. Everything below is me describing my own work. The same "where it hurts" standard I applied above applies here too.
None of the options above fit my specific situation: a product where the customer's frontend framework isn't fixed, and I wanted composable building blocks rather than a monolithic widget or agent runtime.
So I built Loquix — a Web Components library on Lit, specifically for AI chat interfaces. (npm install @loquix/core)
The approach
35 individual Web Components that compose like this:
```html
<loquix-chat-container>
  <loquix-chat-header slot="header">
    <loquix-model-selector slot="actions"></loquix-model-selector>
  </loquix-chat-header>
  <loquix-message-list>
    <loquix-message-item role="assistant">
      <loquix-message-content streaming></loquix-message-content>
      <loquix-message-actions>
        <loquix-action-copy></loquix-action-copy>
        <loquix-action-feedback></loquix-action-feedback>
      </loquix-message-actions>
    </loquix-message-item>
  </loquix-message-list>
  <loquix-chat-composer>
    <loquix-prompt-input></loquix-prompt-input>
    <loquix-drop-zone></loquix-drop-zone>
  </loquix-chat-composer>
</loquix-chat-container>
```

(Note the explicit closing tags: in plain HTML, custom elements can't be self-closing — the parser ignores the `/` and the elements would mis-nest.)
Verbose, yes. But when your designer says "move feedback above the message" — you move the component. No prop drilling, no checking if feedbackPosition is a valid prop value.
Backend integration is a type-only interface with no runtime dependency on any specific provider:
```typescript
interface AgentProvider {
  send(messages: Message[], options?: SendOptions): Promise<{
    stream: ReadableStream<string>;
  }>;
}
```
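To show what implementing that shape looks like, here's a hedged sketch: an in-memory provider that streams canned tokens, plus a helper that drains the stream the way a UI would render it. `MockProvider` and `collect` are my illustrative names, not part of the library:

```typescript
// Minimal illustrative implementation of the AgentProvider shape above.
// MockProvider streams a canned reply token by token; a real provider
// would adapt fetch/SSE output into the same ReadableStream<string>.
interface Message { role: "user" | "assistant"; content: string; }
interface SendOptions { signal?: AbortSignal; }
interface AgentProvider {
  send(messages: Message[], options?: SendOptions): Promise<{ stream: ReadableStream<string> }>;
}

class MockProvider implements AgentProvider {
  async send(_messages: Message[]): Promise<{ stream: ReadableStream<string> }> {
    const tokens = ["Hello", ", ", "world"];
    return {
      stream: new ReadableStream<string>({
        start(controller) {
          for (const t of tokens) controller.enqueue(t); // one "token" per chunk
          controller.close();
        },
      }),
    };
  }
}

// Drain the stream; a chat UI would append each chunk to the message as it arrives.
async function collect(stream: ReadableStream<string>): Promise<string> {
  const reader = stream.getReader();
  let text = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) return text;
    text += value;
  }
}
```

Because the contract is just "give me a `ReadableStream<string>`", swapping the mock for OpenAI, Anthropic, or your own backend is a change to one adapter, not to the components.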
Works in React, Vue, Svelte, Angular, vanilla JS. @loquix/react ships with proper wrappers for React's synthetic event system. Vue/Svelte dedicated packages are planned, not shipped.
Lock-in type: Platform (Web Components/Shadow DOM model, Lit if you extend components deeply) — just at the browser standards level rather than the library level.
Where it hurts — same standard as above
No production deployments. I can't point you to a production app using Loquix. Apart from alpha-stage TanStack AI, that's an unknown none of the other libraries in this review share.
Small community. No Stack Overflow answers. You'll be reading source code.
Incomplete roadmap. Reasoning traces, citations, cost estimates — Phase 4 and 5 — still in development. If you need those today, build them yourself or wait.
Lit learning curve. Consuming is easy. Deep customization requires understanding Lit's reactive model and Shadow DOM. React developers find this counterintuitive at first.
No built-in provider integrations. Deep Chat connects to OpenAI in 10 minutes. assistant-ui integrates with Vercel AI SDK out of the box. Loquix gives you a TypeScript interface and says "implement it." Better for experienced teams; more friction for everyone else — the same critique I'd apply to any library that takes this approach.
Concrete example where assistant-ui wins: chat UI integrating with Vercel AI SDK's useChat, rendering generative UI streamed from server, thread persistence, all in Next.js. With assistant-ui: well-documented, battle-tested. With Loquix: you'd be wiring streaming yourself, no RSC integration (Web Components and React Server Components don't mix naturally), and you'd be the first person to try this combination in production. Hard to justify.
If your situation is specifically "multi-framework team that wants composable components and is willing to be an early adopter" — evaluate it. Otherwise, the options above are more pragmatic.
The decision table
| Situation | Recommendation | Maturity |
|---|---|---|
| React + Next.js | assistant-ui | ✅ Production-ready |
| Full agent framework, generative UI | CopilotKit | ✅ Production-ready |
| Provider abstraction + React | Vercel AI SDK + AI Elements | ✅ Production-ready |
| Quick prototype, non-React, embedded widget | Deep Chat | ✅ Production-ready |
| Python-first backend | Chainlit | ⚠️ Check project status |
| Multi-framework + composable, early adopter | Loquix | 🚧 No production deployments |
| Agent-generated dynamic UI | A2UI + CopilotKit | 👀 Watch this space |
Greenfield vs. brownfield: heavier frameworks (CopilotKit, Vercel full stack) accelerate you on greenfield when you're choosing everything fresh. Thinner UI layers (assistant-ui, Deep Chat, Loquix) cause less friction on brownfield where the architecture already exists.
The days of spending weeks building chat UI from scratch are over. Pick a library, commit, and spend your engineering time on the thing that actually differentiates your product. Unless you enjoy writing auto-scroll logic at 2 AM — in which case, I have a useEffect cleanup function I'd like to sell you.

If I missed a library you've used in production, drop it in the comments.
