You’ve probably used Gmail’s Smart Reply or Slack’s suggested messages: short, AI-generated responses like “Sounds good 👍” or “Let’s do it!”. With a single tap, you can respond, save time, and keep the conversation flowing.
Because users spend less time typing, conversations stay active, drop-off rates decrease, and chat apps feel more innovative and modern.
Smart replies can be used in customer support apps (agents pick relevant replies and respond faster), marketplaces or communities (buyers and sellers exchange quick transactional messages), or even productivity tools or workplace chat platforms (users save time on repetitive confirmations like “Yes,” “Got it,” or “On it”).
In this tutorial, we’ll use Stream’s React SDK and Synthetic’s inference API to bring smart replies into chat. You’ll build a chat application that generates real-time, context-aware reply suggestions directly within conversations.
Watch a demo of what you’ll build:
Prerequisites
To follow along, make sure you have the following prerequisites:
- Node.js (version 16 or higher) and NPM installed
- A free Stream account with your API key
- A Synthetic API key for accessing the inference API
- Basic knowledge of React, Next.js, and TypeScript
Ready? Let’s begin!
Setting Up Stream
Stream is a platform that provides APIs and SDKs for developers to build real-time chat, video, and activity feeds into their applications.
To get started with Stream, create your free account.
Learn how to set up your Stream account in this guide.
After you’ve signed up, navigate to your dashboard and click on the Create App button in the top right of the page, like so:
This modal then loads, where you enter details of the new app you’re creating. Enter the app name and then select the chat & video data region, feed data storage location, and environment (you can choose 'development'). Click 'Create App' to proceed.
Next, you’ll get your App API Access Keys from the Chat Overview in your dashboard. Copy the Key and Secret and add them to the .env file in your codebase.
NEXT_PUBLIC_STREAM_API_KEY="your_stream_api_key"
STREAM_API_SECRET="your_stream_secret"
We will be using the Stream React Chat SDK for this application. Learn more about it in the documentation.
Setting Up the React Project
Open your terminal and enter this command to create a new app:
npx create-next-app@latest smart-chat --typescript --tailwind --eslint --app --src-dir --import-alias "@/*"
This creates a new Next.js app in a directory called “smart-chat”, with TypeScript and Tailwind configured. It also sets up an import alias.
Note: Import aliases give you cleaner imports and better readability in your codebase. If you later move files around, you update the alias configuration in tsconfig.json or jsconfig.json rather than rewriting import paths throughout your codebase.
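For reference, with the flags above, the generated tsconfig.json maps the alias roughly like this:
{
  "compilerOptions": {
    "paths": {
      "@/*": ["./src/*"]
    }
  }
}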
Next, navigate to the new project directory we just created with this command.
cd smart-chat
Then, install the Stream Chat React SDK with this command:
npm install stream-chat stream-chat-react
This command installs the stream-chat and stream-chat-react packages. We will import the required components, such as Chat, Channel, MessageList, and MessageInput, from stream-chat-react into the main application file, which is the page.tsx file.
Next, create a stream-chat-container.tsx file in the components/ directory and import the necessary components from stream-chat-react to build the chat interface:
import { StreamChat } from "stream-chat"
import {
Chat,
Channel,
ChannelHeader,
MessageList,
MessageInput,
Window,
} from "stream-chat-react"
import "stream-chat-react/dist/css/v2/index.css"
The stream-chat package provides the core Stream Chat client for initialising and managing chat functionality, while stream-chat-react offers pre-built React components like Chat, Channel, MessageList, and MessageInput.
- The Chat component acts as a provider, wrapping the chat interface to manage the Stream client.
- The Channel component renders a specific chat channel.
- MessageList displays the conversation history, and MessageInput lets you compose messages.
The full implementation is available in the GitHub repository.
Next, create a page.tsx file in the /app directory (replacing the one generated by create-next-app) and enter the code below:
"use client"
import { useState } from "react"
import { StreamChatContainer } from "@/components/stream-chat-container"
export default function Home() {
const [isLoggedIn, setIsLoggedIn] = useState(false)
const [userId, setUserId] = useState("")
const [userName, setUserName] = useState("")
const [inputUserId, setInputUserId] = useState("")
const [inputUserName, setInputUserName] = useState("")
const handleLogin = () => {
if (inputUserId && inputUserName) {
setUserId(inputUserId)
setUserName(inputUserName)
setIsLoggedIn(true)
}
}
if (isLoggedIn) {
return <StreamChatContainer userId={userId} userName={userName} />
  }
  // Login form JSX omitted for brevity (a minimal sketch follows below; full version in the repository)
  return null
}
Due to the length of the code file above, we have included only a snippet here. The complete code is available in the GitHub repository.
This sets up the main entry point for the application by prompting you to enter a User ID and Display Name before rendering the chat interface.
It uses React’s useState hook to manage user login state. When you enter a User ID and Display Name and click the Join Chat button, the handleLogin function updates the state and renders the component, passing the userId and userName props. For the UI, we’re adapting a Discord-like interface.
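For context, a minimal version of the omitted login form could look like the sketch below (hypothetical markup; the repository’s styled version differs):
// Hypothetical minimal login form; the repo's styled version differs
return (
  <div className="flex h-screen flex-col items-center justify-center gap-2">
    <input
      value={inputUserId}
      onChange={(e) => setInputUserId(e.target.value)}
      placeholder="User ID"
      className="rounded border px-3 py-2"
    />
    <input
      value={inputUserName}
      onChange={(e) => setInputUserName(e.target.value)}
      placeholder="Display Name"
      className="rounded border px-3 py-2"
    />
    <button onClick={handleLogin} className="rounded bg-indigo-600 px-4 py-2 text-white">
      Join Chat
    </button>
  </div>
)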
To set up the layout for this application, create a layout.tsx file also in the /app directory, and enter this code:
import type React from "react"
import type { Metadata } from "next"
import "./globals.css"
import { Geist, Geist_Mono } from 'next/font/google'
// Initialize fonts
const geist = Geist({
subsets: ["latin"],
variable: "--font-geist",
weight: ["100", "200", "300", "400", "500", "600", "700", "800", "900"],
})
const geistMono = Geist_Mono({
subsets: ["latin"],
variable: "--font-geist-mono",
weight: ["100", "200", "300", "400", "500", "600", "700", "800", "900"],
})
export const metadata: Metadata = {
title: "AI-Powered Live Chat with Intelligent Replies",
description: "Real-time chat with context-aware AI reply suggestions powered by Synthetic",
}
export default function RootLayout({
children,
}: Readonly<{
children: React.ReactNode
}>) {
return (
<html lang="en" className={`${geist.variable} ${geistMono.variable}`}>
<body className="font-sans">{children}</body>
</html>
)
}
Connecting to the Chat App
Next, we initialise the Stream client with your API Key from the app we created earlier in the Stream Dashboard above.
In the app directory, create a file named api/token/route.ts to handle user authentication for Stream’s Chat React SDK. Add the following code:
app/api/token/route.ts:
import { type NextRequest, NextResponse } from "next/server"
import { StreamChat } from "stream-chat"
const apiKey = process.env.NEXT_PUBLIC_STREAM_API_KEY
const apiSecret = process.env.STREAM_API_SECRET
export async function POST(request: NextRequest) {
try {
if (!apiKey || !apiSecret) {
console.error("Missing Stream credentials - NEXT_PUBLIC_STREAM_API_KEY or STREAM_API_SECRET not set")
return NextResponse.json(
{
error:
"Stream credentials not configured. Please add NEXT_PUBLIC_STREAM_API_KEY and STREAM_API_SECRET environment variables.",
},
{ status: 500 },
)
}
const { userId, userName } = await request.json()
if (!userId || !userName) {
return NextResponse.json({ error: "userId and userName are required" }, { status: 400 })
}
console.log("Generating token for user:", userId)
// Use Stream official server client to create a user token
const serverClient = StreamChat.getInstance(apiKey, apiSecret)
// Ensure the user exists/updated on Stream
await serverClient.upsertUser({ id: userId, name: userName })
const token = serverClient.createToken(userId)
console.log("Token generated successfully")
return NextResponse.json({
token,
userId,
userName,
})
} catch (error) {
console.error("Token generation error:", error)
return NextResponse.json(
{ error: error instanceof Error ? error.message : "Failed to generate token" },
{ status: 500 },
)
}
}
This API route authenticates users for the React SDK. It uses the StreamChat.getInstance method to create a server-side client with your NEXT_PUBLIC_STREAM_API_KEY and STREAM_API_SECRET (set these in your .env file from your Stream dashboard).
The createToken method generates a JWT for the user, signed with your API secret, which the client uses to connect to Stream’s chat service. This token is returned to the client for use in components/stream-chat-container.tsx. The route includes error handling for missing credentials or invalid requests to ensure secure authentication.
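Stream user tokens are standard JWTs; decoded, the payload simply identifies the user, along these lines:
{ "user_id": "jane" }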
Then, in components/stream-chat-container.tsx, initialise the Stream client and connect the user like so:
const apiKey = process.env.NEXT_PUBLIC_STREAM_API_KEY
useEffect(() => {
  // Track the instance created in this effect run so cleanup can disconnect it reliably
  let chatClient: StreamChat | null = null
  const init = async () => {
if (!apiKey) {
setError("Stream API key is not configured. Please check your environment variables.")
setIsLoading(false)
return
}
try {
setIsLoading(true)
setError(null)
const res = await fetch("/api/token", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ userId, userName }),
})
if (!res.ok) {
const errorData = await res.json().catch(() => ({}))
throw new Error(errorData.error || "Failed to get authentication token")
}
const { token } = await res.json()
      const c = StreamChat.getInstance(apiKey)
      chatClient = c
      await c.connectUser({ id: userId, name: userName }, token)
if (!c.userID) {
throw new Error("Failed to establish connection with Stream Chat")
}
setClient(c)
} catch (e) {
const errorMessage = e instanceof Error ? e.message : "An unknown error occurred"
console.error("Stream initialization error:", e)
setError(`Failed to initialize chat: ${errorMessage}`)
setClient(null)
} finally {
setIsLoading(false)
}
}
init()
  // Disconnect on unmount or when the user changes; use the tracked instance,
  // not the `client` state, which is stale inside this effect's closure
  return () => {
    chatClient?.disconnectUser()
  }
}, [userId, userName, apiKey])
In components/stream-chat-container.tsx, the useEffect hook initialises the Stream client using StreamChat.getInstance with the NEXT_PUBLIC_STREAM_API_KEY. It fetches a user token from the api/token/route.ts endpoint, passing userId and userName from the login form at app/page.tsx. The connectUser method authenticates the user with the token, establishing a real-time connection to Stream’s chat service.
The useMemo hook creates an AI Smart Replies channel and adds the authenticated user as a member:
const channel = useMemo(() => {
if (!client) return null
return client.channel("team", channelId, {
name: "AI Smart Replies",
members: [userId],
created_by_id: userId
})
}, [client, channelId, userId])
The full initialisation logic, including error handling and loading states, is in the GitHub repository.
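For instance, before rendering the chat, the container guards on those states; a minimal sketch (the repository’s versions are styled):
// Hypothetical guards before rendering the chat UI; the repo styles these states
if (isLoading) return <div className="p-4 text-white">Connecting to Stream chat…</div>
if (error) return <div className="p-4 text-red-400">{error}</div>
if (!client || !channel) return null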
Still in the /app directory, create an api folder containing the token/route.ts file from above and a smart-replies/route.ts file; we’ll add the smart-replies code when we integrate Synthetic later in this guide.
Here is an overview of how the project’s directory looks:
Creating the Chat UI
To set up the chat UI, we start with the <Chat> component we added earlier in the stream-chat-container.tsx file. This wraps the interface, passing the initialised client to provide access to Stream’s chat functionality. The <Channel> component renders the active channel (created in the useMemo hook), and the <Window> organises the layout.
"use client"
import { useEffect, useMemo, useState } from "react"
import { StreamChat } from "stream-chat"
import {
Chat,
Channel,
ChannelHeader,
MessageList,
MessageInput,
Window,
} from "stream-chat-react"
import "stream-chat-react/dist/css/v2/index.css"
import { SmartReplyBar } from "./smart-reply-bar"
// Inside the StreamChatContainer component, once the client and channel are ready:
return (
<Chat client={client} theme="str-chat__theme-dark">
<Channel channel={channel}>
<Window>
<ChannelHeader title="Live Stream Chat" live />
<MessageList />
<SmartReplyBridge />
<MessageInput focus />
</Window>
</Channel>
</Chat>
)
ChannelHeader displays the channel name (Live Stream Chat) with a live indicator, MessageList shows the conversation history, and MessageInput provides a text input for sending messages. For custom styling, we use styles/globals.css to define a dark theme with Tailwind CSS:
.dark {
--background: oklch(0.145 0 0);
--foreground: oklch(0.985 0 0);
--primary: oklch(0.985 0 0);
--primary-foreground: oklch(0.205 0 0);
  /* ... (Other theme variables) */
}
The complete code for the UI is available in the GitHub repository.
To give the chat UI a Discord-like feel, create a discord-chat-layout.tsx file in the /components directory and enter the code below:
"use client"
import { useEffect, useRef } from "react"
import { StreamChat, type Channel as StreamChannel } from "stream-chat"
import { Chat, Channel, MessageList, MessageInput, useChatContext, useChannelStateContext } from "stream-chat-react"
import { SmartReplyBar } from "./smart-reply-bar"

interface DiscordChatLayoutProps {
  userId: string
  userName: string
  client: StreamChat
  channel: StreamChannel | null
}

// Component to bridge Stream messages to SmartReplyBar
function MessageBridge() {
const { messages = [] } = useChannelStateContext()
const { channel } = useChatContext()
const handleSelectReply = (reply: string) => {
if (channel) {
void channel.sendMessage({ text: reply })
}
}
const simplifiedMessages = messages.map((m: any) => ({
id: m.id,
userId: m.user?.id || '',
userName: m.user?.name || m.user?.id || 'User',
text: m.text || '',
timestamp: m.created_at ? new Date(m.created_at) : new Date(),
}))
return <SmartReplyBar messages={simplifiedMessages} onSelectReply={handleSelectReply} />
}
export function DiscordChatLayout({ userId, userName, client, channel }: DiscordChatLayoutProps) {
const messagesEndRef = useRef<HTMLDivElement>(null)
// Auto-scroll to bottom when messages change
useEffect(() => {
messagesEndRef.current?.scrollIntoView({ behavior: "smooth" })
}, [channel?.state?.messages])
  // Wrap the layout in Chat/Channel so MessageList, MessageInput, and the hooks
  // inside MessageBridge receive the Stream context they need
  if (!client || !channel) return null

  return (
    <Chat client={client} theme="str-chat__theme-dark">
      <Channel channel={channel}>
        <div className="flex h-screen bg-[#1e1f22]">
          {/* Sidebar */}
          <div className="w-16 bg-[#2b2d31] flex flex-col items-center py-4">
            {/* Server icon and channels would go here */}
          </div>
          {/* Main Chat Area */}
          <div className="flex-1 flex flex-col">
            {/* Channel Header */}
            <div className="h-14 border-b border-[#1e1f22] bg-[#2b2d31] flex items-center px-4">
              <h2 className="text-white font-semibold"># general</h2>
            </div>
            {/* Messages */}
            <div className="flex-1 overflow-y-auto p-4 bg-[#313338]">
              <MessageList />
              <div ref={messagesEndRef} />
            </div>
            {/* Message Input */}
            <div className="p-4 border-t border-[#1e1f22] bg-[#383a40]">
              <MessageInput />
              <MessageBridge />
            </div>
          </div>
        </div>
      </Channel>
    </Chat>
  )
}
In the code above, we adapted a Discord-style UI for the interface with a dark theme and a message display area, styled with Tailwind. We customised Stream’s built-in components, such as MessageList and MessageInput, to align with Discord’s aesthetic while maintaining their real-time functionality. We also added a connector component that bridges Stream’s message system with our AI Smart Reply feature. This component handles message data transformation to make sure that AI-generated replies appear seamlessly within the chat flow.
Now that we have the chat set up and integrated with Stream’s React SDK, let’s set up Synthetic, which we will use for the AI smart replies.
Setting up Synthetic
Synthetic is an AI inference platform for running open-source LLMs, such as Llama, Mistral, and DeepSeek.
Visit the Synthetic website and click on “Sign up” like so:
The authentication page will then load. You can sign up using your email address or use Google social login.
After signing up, you will be prompted to enter your username like so:
Next, your Synthetic dashboard loads like so:
Click on the icon in the top left to open the sidebar shown below, where you will get your API key.
In the API settings, copy your API key and add it to your environment variable file .env in the project’s codebase.
SYNTHETIC_API_KEY="your_api_key"
NEXT_PUBLIC_STREAM_API_KEY="your_stream_api_key"
STREAM_API_SECRET="your_stream_secret"
For this project, we will utilise Synthetic’s OpenAI-compatible API within the codebase. You can find all the necessary information on the page shown above.
OpenAI API Base URL: https://api.synthetic.new/openai/v1
Note:
These are the available OpenAI-compatible endpoints to use with the Synthetic base URL (a quick usage sketch follows this list):
- /models - Lists all always-on models and any recently used on-demand models
- /chat/completions - Chat-based completions with conversation history
- /completions - Traditional text completions
- /embeddings - Transforms text into vector embeddings
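For example, you can sanity-check your API key by listing the available models; a minimal sketch, assuming the response follows OpenAI’s { data: [...] } shape:
// Sketch: verify your SYNTHETIC_API_KEY by listing models via the
// OpenAI-compatible /models endpoint
async function listModels() {
  const res = await fetch("https://api.synthetic.new/openai/v1/models", {
    headers: { Authorization: `Bearer ${process.env.SYNTHETIC_API_KEY}` },
  })
  const { data } = await res.json()
  console.log(data.map((m: { id: string }) => m.id))
}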
In this guide, we integrate the chat completions endpoint: given a chat conversation’s history, the model creates a response. The Synthetic documentation provides a guide on how to integrate this feature.
In the code, we will be making a POST request to the API base URL endpoint like so:
POST https://api.synthetic.new/openai/v1/chat/completions
When specifying models, prefix the Hugging Face model name with hf:, like so:
hf:openai/gpt-oss-120b
or
hf:meta-llama/Llama-3.1-405B-Instruct
These are the Always-on models available on Synthetic:
You can also use a specific model by copying the model’s name for use in the API. Learn more in the Synthetic API documentation.
Synthetic also supports LoRAs (low-rank adapters): small, efficient fine-tunes that run on top of existing models and can make a model much more effective at specific tasks.
Learn about other models like Embedding Models and On-Demand Models here.
Note: Depending on the length of the content you’re prompting using the AI inference API from Synthetic, you may be required to use a pay-as-you-go service or upgrade to a pro plan on Synthetic.
So, don’t fret if you come across errors like this 🙂
fetch to https://api.synthetic.new/openai/v1/chat/completions failed with status 402 and body: {"error":"Insufficient credits and no active subscription. Due to context lengths, a minimum balance of $1.1179648 is required to run meta-llama/Llama-3.3-70B-Instruct. Your current balance is $0.00. Go to https://synthetic.new/billing to subscribe or purchase credits."}
Synthetic API error (HTTP ${response.status}): {"error":"Insufficient credits and no active subscription. Due to context lengths, a minimum balance of $1.1179648 is required to run meta-llama/Llama-3.3-70B-Instruct. Your current balance is $0.00. Go to https://synthetic.new/billing to subscribe or purchase credits."}
at generateSmartReplies (/lib/synthetic-ai)
Now that we have Synthetic set up, let’s integrate it into the codebase of the smart chat application.
Capturing Messages and Sending to Synthetic
We’ll capture incoming messages from the chat (Stream’s channel state updates in real time as message.new events arrive) and forward the message text and context (recent conversation history) to Synthetic’s inference API.
Create a component that captures messages and sends them to Synthetic’s API by adding this code within components/stream-chat-container.tsx:
// Add useChannelStateContext and useChatContext to the stream-chat-react import at the top of the file
function SmartReplyBridge() {
  const { messages } = useChannelStateContext()
  const { channel } = useChatContext()

  type SRMessage = { id: string; userId: string; userName: string; text: string; timestamp: Date }

  const simplified: SRMessage[] = (messages ?? []).slice(-10).map((m: any) => ({
    id: m.id,
    userId: m.user?.id ?? "",
    userName: m.user?.name ?? m.user?.id ?? "User",
    text: m.text ?? "",
    timestamp: new Date(m.created_at || Date.now()),
  }))

  const handleSelect = async (reply: string) => {
    await channel?.sendMessage({ text: reply })
  }

  return <SmartReplyBar messages={simplified} onSelectReply={handleSelect} />
}
The component uses Stream’s useChannelStateContext hook to access the latest messages in the channel. It captures the last 10 messages, mapping them to a simplified format (SRMessage) with IDs, user IDs, user names, texts, and timestamps.
These messages are passed to the SmartReplyBar component, which sends them to Synthetic’s API via the [api/smart-replies/route.ts](https://github.com/Tabintel/smart-chat/tree/master/app/api/smart-replies) endpoint. The useChatContext hook provides access to the channel object, used later for sending replies.
This setup reacts to Stream’s message.new events implicitly: the reactive messages state updates in real time as new messages arrive.
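If you’d rather subscribe to the event explicitly, Stream’s channel object exposes an event API; a minimal sketch:
// Alternative: listen for message.new events directly on the channel
useEffect(() => {
  if (!channel) return
  const { unsubscribe } = channel.on("message.new", (event) => {
    console.log("New message:", event.message?.text)
  })
  return () => unsubscribe()
}, [channel])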
Generating Smart Reply Suggestions with Synthetic API
We define a function to call Synthetic’s API endpoint with the chat context and parse the API response to extract multiple thoughtful reply options.
In your lib directory, create a file named synthetic-ai.ts to integrate Synthetic’s LLM models, and enter the code below:
interface SimpleMessage {
user: string
text: string
}
export interface SmartReplyResult {
replies: string[]
isAI: boolean
model?: string
}
export async function generateSmartReplies(messages: SimpleMessage[]): Promise<SmartReplyResult> {
try {
const apiKey = process.env.SYNTHETIC_API_KEY
if (!apiKey) {
return {
replies: getContextualReplies(messages),
isAI: false,
}
}
const conversationContext = messages
.slice(-10)
.map((msg) => `${msg.user}: ${msg.text}`)
.join("\n")
const systemPrompt = `You are a smart reply assistant for a live chat application. Your job is to generate 3 short, contextually relevant, and natural reply suggestions based on the conversation.
Guidelines:
- Each reply should be 5-50 characters
- Make them casual, friendly, and conversational
- Match the tone of the conversation
- Use emojis sparingly and naturally
- Avoid generic responses
- Be authentic and human-like
Return ONLY a valid JSON array of 3 strings, nothing else. Example: ["That's awesome!", "Let's go!", "I'm down"]`
const response = await fetch("https://api.synthetic.new/openai/v1/chat/completions", {
method: "POST",
headers: {
"Content-Type": "application/json",
Authorization: `Bearer ${apiKey}`,
},
body: JSON.stringify({
model: "hf:meta-llama/Llama-3.3-70B-Instruct",
messages: [
{
role: "system",
content: systemPrompt,
},
{
role: "user",
content: `Recent conversation:\n${conversationContext}\n\nGenerate 3 smart reply suggestions as a JSON array.`,
},
],
temperature: 0.7,
max_tokens: 200,
}),
})
    // ... (Error handling and response parsing as in the repository: the model's
    // JSON array of replies is parsed and returned as { replies, isAI: true, model })
} catch (error) {
console.error("Synthetic API error:", error)
return {
replies: getContextualReplies(messages),
isAI: false,
}
}
}
The generateSmartReplies function calls Synthetic’s hf:meta-llama/Llama-3.3-70B-Instruct model to generate context-aware smart replies.
It retrieves the last 10 messages, formats them within a conversation context, and sends them to Synthetic’s API endpoint: https://api.synthetic.new/openai/v1/chat/completions, along with a system prompt that defines guidelines for concise, natural replies.
The API returns a JSON array of three replies, which are parsed and returned with isAI: true and the model name. If the API call fails (for example, due to a missing SYNTHETIC_API_KEY or rate limits), the getContextualReplies function provides fallback replies based on the conversation.
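The repository implements getContextualReplies in lib/synthetic-ai.ts; a simplified, hypothetical sketch of such a keyword-based fallback:
// Hypothetical keyword-based fallback; the repository's heuristics may differ
function getContextualReplies(messages: SimpleMessage[]): string[] {
  const lastText = messages[messages.length - 1]?.text.toLowerCase() ?? ""
  if (lastText.includes("?")) return ["Yes!", "Not sure 🤔", "Let me check"]
  if (lastText.includes("thank")) return ["You're welcome!", "Anytime!", "No problem 👍"]
  return ["Sounds good!", "Got it", "👍"]
}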
You can access the full code in this GitHub repository.
Next, in app/api/smart-replies/route.ts, create an API route to handle this logic:
import { type NextRequest, NextResponse } from "next/server"
import { generateSmartReplies } from "@/lib/synthetic-ai"
export async function POST(request: NextRequest) {
try {
const { messages } = await request.json()
if (!messages || !Array.isArray(messages)) {
return NextResponse.json({ error: "messages array is required" }, { status: 400 })
}
const result = await generateSmartReplies(messages)
return NextResponse.json({
replies: result.replies,
isAI: result.isAI,
model: result.model,
})
} catch (error) {
console.error("Smart replies API error:", error)
return NextResponse.json({ error: "Failed to generate smart replies" }, { status: 500 })
}
}
This route receives messages from SmartReplyBar, calls generateSmartReplies, and returns the replies. The integration combines Stream’s real-time message data with Synthetic’s AI inference for dynamic suggestions.
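For reference, a successful response from this route looks like the following (illustrative values):
{
  "replies": ["That's awesome!", "Let's go!", "I'm down"],
  "isAI": true,
  "model": "hf:meta-llama/Llama-3.3-70B-Instruct"
}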
Displaying the Smart Replies in the Chat
Next, we create a custom SmartReplyBar component and render the AI reply suggestions as clickable buttons under the message input.
In your /components directory, create a file named smart-reply-bar.tsx to display AI-generated reply suggestions, then add the following code:
"use client"
import { useState, useEffect } from "react"
interface Message {
id: string
userId: string
userName: string
text: string
timestamp: Date
}
interface SmartReplyBarProps {
onSelectReply: (reply: string) => void
messages: Message[]
}
export function SmartReplyBar({ onSelectReply, messages }: SmartReplyBarProps) {
const [replies, setReplies] = useState<string[]>([])
const [loading, setLoading] = useState(false)
const [isAI, setIsAI] = useState(false)
useEffect(() => {
if (messages.length === 0) return
const t = setTimeout(() => {
void generateReplies()
}, 400)
return () => clearTimeout(t)
}, [messages])
const generateReplies = async () => {
setLoading(true)
try {
const formattedMessages = messages.slice(-5).map((msg) => ({
user: msg.userName,
text: msg.text,
}))
const response = await fetch("/api/smart-replies", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ messages: formattedMessages }),
})
if (response.ok) {
const data = await response.json()
setReplies(data.replies || [])
setIsAI(data.isAI || false)
} else {
setReplies([])
setIsAI(false)
}
} catch (error) {
setReplies([])
setIsAI(false)
} finally {
setLoading(false)
}
  }

  // Render `replies` as clickable buttons below the message input (full JSX in the repository)
}
The SmartReplyBar component displays AI-generated replies as clickable Button components (from components/ui/button.tsx) below the message input. The useEffect hook triggers generateReplies when new messages arrive from Stream via SmartReplyBridge.
The generateReplies function sends the last five messages to the [api/smart-replies/route.ts](https://github.com/Tabintel/smart-chat/tree/master/app/api/smart-replies) endpoint, which calls Synthetic’s LLM to generate replies. The AI replies are then stored in state and rendered as buttons with dynamic styling. The components/ui/button.tsx file defines reusable button styles:
const buttonVariants = cva(
"inline-flex items-center justify-center gap-2 whitespace-nowrap rounded-md text-sm font-medium transition-all disabled:pointer-events-none disabled:opacity-50 [&_svg]:pointer-events-none [&_svg:not([class*='size-'])]:size-4 shrink-0 [&_svg]:shrink-0 outline-none focus-visible:border-ring focus-visible:ring-ring/50 focus-visible:ring-[3px] aria-invalid:ring-destructive/20 dark:aria-invalid:ring-destructive/40 aria-invalid:border-destructive",
{
variants: {
variant: {
default: 'bg-primary text-primary-foreground hover:bg-primary/90',
// ... (Other variants)
},
size: {
default: 'h-9 px-4 py-2 has-[>svg]:px-3',
// ... (Other sizes)
},
},
defaultVariants: {
variant: 'default',
size: 'default',
},
},
)
This ensures consistent, accessible buttons for the AI smart replies from Synthetic’s inference API.
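Inside SmartReplyBar’s render, each suggestion maps to one of these buttons; a sketch (the repo’s exact variants and classes may differ):
{/* Sketch: render each suggestion as a clickable Button */}
{loading ? (
  <span className="text-xs opacity-70">Thinking…</span>
) : (
  replies.map((reply) => (
    <Button key={reply} onClick={() => onSelectReply(reply)}>
      {reply}
    </Button>
  ))
)}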
Sending Suggested Replies
When a suggestion button is clicked, the handler calls channel.sendMessage() with the chosen reply, which is added to the conversation instantly.
In components/stream-chat-container.tsx, the SmartReplyBridge component handles sending selected replies like so:
const { channel } = useChatContext()

const handleSelect = async (reply: string) => {
  await channel?.sendMessage({ text: reply })
}

return <SmartReplyBar messages={simplified} onSelectReply={handleSelect} />
When you click a smart reply button in SmartReplyBar, the onSelectReply callback triggers handleSelect, which uses Stream’s channel.sendMessage method to send the selected reply to the channel. This integrates Stream’s Chat React SDK with Synthetic’s AI-generated replies, instantly adding the reply to the conversation. The useChatContext hook provides access to the channel object, so suggested replies go through the same channel as regular messages.
Running the App
Now that we have set up the chat application and Synthetic API integration, let’s run the application.
To start the application, open your terminal and run npm run dev, like so:
Copy http://localhost:3000 and open it in your browser. The app then loads like so:
Enter a random user ID and Display Name, then click on Join Chat:
After you enter your ID and username, the app shows the “Connecting to Stream chat” loader:
The chat app then loads like so:
We used a Discord-like interface for the frontend UI. Now, send a message in the chat space. As you type, you’ll see quick replies you can send, generated with AI inference via Synthetic.
As you send messages in the app, you can see the chat logs on your Stream dashboard.
Cloning the GitHub repository
To get up to speed, you can also clone the GitHub repository for the project directory, install the required dependencies, and run the chat application.
Open your command terminal and run this command:
git clone https://github.com/Tabintel/smart-chat.git
Then navigate to this directory.
cd smart-chat
After opening the directory, run this command to install all the dependencies needed for the project.
npm install --legacy-peer-deps
The --legacy-peer-deps flag instructs npm to ignore peer dependency conflicts during package installation.
Once the dependencies are installed, run npm run dev to start the application, then open http://localhost:3000 in your browser to test the chat app.
Next Steps
In this guide, we created a chat app using Stream and Synthetic, with AI-powered smart replies that save users time with one-tap responses, keep conversations active, and make your app feel more innovative and modern.
You can extend the features we’ve built to:
- Include syncing responses across different channels.
- Add multiple AI agents to handle diverse use cases, such as customer support, marketplaces, or workplace chat.
Happy coding! 🎉