Alex Spinov
Vercel AI SDK Has a Free AI Toolkit: Stream LLM Responses, Build Chatbots, and Integrate Any AI Model in React

Building an AI chatbot usually means: setting up an OpenAI client, handling streaming, parsing SSE events, managing conversation state, displaying tokens as they arrive, handling errors, and adding retry logic. That's all before you write any UI.

What if one hook gave you streaming AI responses, conversation history, loading states, and error handling?

import { useChat } from "ai/react";

function Chat() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat();

  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          <b>{m.role}:</b> {m.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
        <button disabled={isLoading}>Send</button>
      </form>
    </div>
  );
}

That's the Vercel AI SDK: a complete AI application toolkit for TypeScript.

Server Side

// app/api/chat/route.ts
import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-4-turbo"),
    messages,
    system: "You are a helpful assistant.",
  });

  return result.toDataStreamResponse();
}
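What travels over the wire between `toDataStreamResponse()` and `useChat` is the SDK's data stream protocol: newline-delimited parts, where (by my understanding of the protocol, so treat the exact prefixes as an assumption) text chunks arrive as `0:<JSON string>` lines and metadata uses other prefixes. A minimal sketch of decoding the text parts, which is roughly what the hook does for you:

```typescript
// Minimal sketch of decoding text parts from the AI SDK data stream protocol.
// Assumption: text chunks are lines of the form `0:<JSON-encoded string>`;
// other prefixes (e.g. `d:` for the finish message) carry metadata we skip here.
function decodeTextParts(raw: string): string {
  let text = "";
  for (const line of raw.split("\n")) {
    if (line.startsWith("0:")) {
      // Everything after the prefix is a JSON-encoded string token
      text += JSON.parse(line.slice(2));
    }
  }
  return text;
}

// Example: three streamed lines reassembled into the full reply
const raw = '0:"Hello"\n0:" world"\nd:{"finishReason":"stop"}';
console.log(decodeTextParts(raw)); // "Hello world"
```

You never write this yourself — `useChat` handles decoding, state, and re-rendering — but it explains why the client and server halves must use matching helpers from the same SDK.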

Multi-Provider Support

import { generateText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { google } from "@ai-sdk/google";
import { mistral } from "@ai-sdk/mistral";

// Switch models with one line — same API
const result = await generateText({
  model: anthropic("claude-sonnet-4-20250514"),
  prompt: "Explain quantum computing",
});

const result2 = await generateText({
  model: google("gemini-pro"),
  prompt: "Explain quantum computing",
});

Structured Output (JSON)

import { generateObject } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

const { object } = await generateObject({
  model: openai("gpt-4-turbo"),
  schema: z.object({
    recipe: z.object({
      name: z.string(),
      ingredients: z.array(z.object({
        item: z.string(),
        amount: z.string(),
      })),
      steps: z.array(z.string()),
    }),
  }),
  prompt: "Generate a recipe for chocolate cake",
});
// object.recipe is fully typed!
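What "fully typed" buys you: `object.recipe` has the type inferred from the zod schema, so the compiler catches typos and missing fields in downstream code. A minimal sketch of consuming such a result, using a hypothetical plain-TypeScript `Recipe` type that mirrors the schema above (no zod or API call needed to see the benefit):

```typescript
// Hypothetical type mirroring the zod schema from the generateObject example.
interface Recipe {
  name: string;
  ingredients: { item: string; amount: string }[];
  steps: string[];
}

// The compiler guarantees these fields exist — no runtime shape checks needed.
function shoppingList(recipe: Recipe): string[] {
  return recipe.ingredients.map(i => `${i.amount} ${i.item}`);
}

// A sample object shaped like what generateObject would return
const sample: Recipe = {
  name: "Chocolate Cake",
  ingredients: [
    { item: "flour", amount: "2 cups" },
    { item: "cocoa", amount: "3/4 cup" },
  ],
  steps: ["Mix dry ingredients", "Bake"],
};
console.log(shoppingList(sample)); // ["2 cups flour", "3/4 cup cocoa"]
```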

Tool Calling (Function Calling)

import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

const result = await generateText({
  model: openai("gpt-4-turbo"),
  tools: {
    weather: {
      description: "Get weather for a location",
      parameters: z.object({ city: z.string() }),
      execute: async ({ city }) => {
        const res = await fetch(`https://api.weather.com/${city}`);
        return res.json();
      },
    },
    search: {
      description: "Search the web",
      parameters: z.object({ query: z.string() }),
      execute: async ({ query }) => searchWeb(query),
    },
  },
  prompt: "What's the weather in Tokyo?",
});
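The mechanism here: the model emits a tool call with a name and arguments, and the SDK looks up the matching `execute` function in your tools map and runs it with the parsed, schema-validated arguments. A minimal sketch of that dispatch step in isolation (hypothetical names, stub tool, no model involved):

```typescript
// Simplified shape of a tool entry — the real SDK also validates args
// against the zod `parameters` schema before calling execute.
type Tool = {
  description: string;
  execute: (args: Record<string, string>) => Promise<unknown>;
};

// Dispatch a model-emitted tool call to the matching execute function.
async function runToolCall(
  tools: Record<string, Tool>,
  call: { toolName: string; args: Record<string, string> }
): Promise<unknown> {
  const tool = tools[call.toolName];
  if (!tool) throw new Error(`Unknown tool: ${call.toolName}`);
  return tool.execute(call.args);
}

// Usage with a stub weather tool standing in for a real API
const tools = {
  weather: {
    description: "Get weather for a location",
    execute: async ({ city }: Record<string, string>) => `${city}: 22°C, sunny`,
  },
};
runToolCall(tools, { toolName: "weather", args: { city: "Tokyo" } })
  .then(console.log); // "Tokyo: 22°C, sunny"
```

The SDK then feeds the tool's return value back to the model so it can compose a final answer — the dispatch above is just the middle step you'd otherwise hand-roll.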

AI SDK vs LangChain vs Direct API

| Feature | AI SDK | LangChain | Direct API |
| --- | --- | --- | --- |
| Streaming | Built-in | Manual | Manual SSE |
| React hooks | useChat, useCompletion | None | None |
| Multi-provider | Same API | Adapters | Different SDKs |
| Structured output | Zod schemas | Output parsers | JSON mode |
| Bundle size | ~15 KB | ~200 KB | ~5 KB |
| Learning curve | Low | High | Medium |

Choose AI SDK for React apps with AI features. Choose LangChain for complex AI pipelines and agents.

Start here: sdk.vercel.ai


Need custom data extraction, scraping, or automation? I build tools that collect and process data at scale — 78 actors on Apify Store and 265+ open-source repos. Email me: Spinov001@gmail.com | My Apify Actors
