When Vercel released AI SDK 6 on December 22, 2025, the headline feature was not a new model integration or a faster streaming API. It was a different kind of addition: agents became a first-class primitive in the SDK, not an afterthought patched on top of generateText.
Effloow Lab installed ai@latest (currently 6.0.177) and inspected the package exports, constructor signatures, and available methods directly. This guide covers what actually changed, what the new ToolLoopAgent API looks like in practice, and how to move existing code from SDK 5.x to 6.
Why This Matters: The Shift from Loops to Primitives
In AI SDK 5.x, building an agent meant writing your own loop. You called generateText, checked for tool calls in the response, executed those tools, passed results back, and repeated until done — all in application code that you maintained yourself.
The result was that every team ended up writing a slightly different, slightly buggy version of the same control loop. Edge cases around retries, step limits, streaming, and type safety accumulated per project.
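That hand-written loop, reduced to its skeleton, looks something like this. Note that `callModel`, the message format, and the helper names are stand-ins for illustration, not actual SDK 5.x APIs:

```typescript
// Skeleton of the loop SDK 5.x users wrote by hand. `callModel` stands in for
// generateText; the message format is simplified for illustration.
type ToolCall = { name: string; input: unknown };
type ModelReply = { text: string; toolCalls: ToolCall[] };

async function runLoop(
  callModel: (history: string[]) => Promise<ModelReply>,
  tools: Record<string, (input: unknown) => Promise<string>>,
  prompt: string,
  maxSteps = 20, // matches SDK 6's default stop condition, stepCountIs(20)
): Promise<string> {
  const history = [prompt];
  for (let step = 0; step < maxSteps; step++) {
    const reply = await callModel(history);
    if (reply.toolCalls.length === 0) return reply.text; // no tool calls: done
    for (const call of reply.toolCalls) {
      const run = tools[call.name];
      if (!run) throw new Error(`unknown tool: ${call.name}`); // one of many edge cases
      history.push(`tool:${call.name} -> ${await run(call.input)}`); // feed result back
    }
  }
  throw new Error('step limit reached'); // the limit logic every team re-invented
}
```

Every branch in that sketch (the step cap, the unknown-tool case, the feed-results-back step) is a place where per-project implementations quietly diverged.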
SDK 6 formalizes this pattern. ToolLoopAgent handles the loop. Your code defines what the agent can do and when it should stop. The runtime handles how it executes.
This is not just a convenience wrapper. The Agent interface and ToolLoopAgent class are designed so that:
- The same agent definition works in API routes, background jobs, and UI streaming contexts without modification
- TypeScript type inference flows end-to-end from tool schemas to UI message types
- Human-in-the-loop approval, stop conditions, and structured output are all opt-in per-agent rather than bolt-ons
Installation and Current Version
The package name is still ai. Installing the latest v6 release:
```sh
npm install ai@latest
# or
npm install ai@^6.0.0
```
Effloow Lab confirmed that npm install ai@latest pulls 10 packages with zero vulnerabilities. The install is lean.
Peer dependency: Zod 3.x or Zod 4.x are both supported.
```json
"zod": "^3.25.76 || ^4.1.8"
```
If you are on Zod 3, nothing changes. If you want Zod 4, SDK 6 now accepts it natively.
The Core Change: ToolLoopAgent
The central addition is ToolLoopAgent. In SDK 5.x, an Experimental_Agent class existed (it is still exported in 6.0.177 as a compatibility shim, but it is deprecated). In SDK 6, the production API is:
```ts
import { ToolLoopAgent, tool } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';
import { z } from 'zod';

const searchTool = tool({
  description: 'Search documentation for a given query',
  inputSchema: z.object({ query: z.string() }),
  execute: async ({ query }) => {
    // your search implementation
    return { results: [] };
  },
});

const agent = new ToolLoopAgent({
  model: anthropic('claude-sonnet-4-5'),
  instructions: 'You are a helpful developer assistant.',
  tools: { search: searchTool },
});
```
The ToolLoopAgent constructor accepts:
| Parameter | Type | Default | Purpose |
|---|---|---|---|
| `model` | `LanguageModel` | required | The model to use |
| `instructions` | `string` | optional | System prompt |
| `tools` | `Record<string, Tool>` | optional | Available tools |
| `stopWhen` | `StopCondition \| StopCondition[]` | `stepCountIs(20)` | When the loop should stop |
| `output` | `Output` | optional | Structured output format |
| `toolChoice` | `ToolChoice` | optional | Force/auto/none |
| `temperature`, `topP`, etc. | `number` | optional | Model parameters |
| `onStepFinish` | callback | optional | Hook for each step |
| `onFinish` | callback | optional | Hook on completion |
The instance exposes two methods: generate() (returns Promise<GenerateTextResult>) and stream() (returns a streaming result). Both accept the same parameters:
```ts
const result = await agent.generate({
  prompt: 'What does the useEffect hook do?',
});

console.log(result.text);
```
Agent Is an Interface
ToolLoopAgent implements the Agent interface. This matters because you can create custom agent implementations that slot into the same API surface. If you need a routing agent, a retrieval-augmented agent, or an agent that checks a policy before every step, you implement Agent directly rather than subclassing ToolLoopAgent.
This design rewards dependency injection: pass an Agent type through your application, and the concrete implementation can swap out in tests or staging environments without changing call sites.
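As a sketch of that injection pattern, here is a simplified local stand-in for the interface. The real Agent type in `ai` is generic over tool and output types; `MiniAgent`, `answerTicket`, and `stubAgent` are names invented for this example:

```typescript
// Simplified stand-in for the SDK's Agent interface. The real one is generic
// over tool and output types; this sketch keeps only the call surface.
interface MiniAgent {
  generate(opts: { prompt: string }): Promise<{ text: string }>;
}

// Call sites depend on the interface, never on a concrete ToolLoopAgent.
async function answerTicket(agent: MiniAgent, ticket: string): Promise<string> {
  const { text } = await agent.generate({ prompt: `Draft a reply to: ${ticket}` });
  return text;
}

// In tests or staging, a stub slots in with no change to answerTicket.
const stubAgent: MiniAgent = {
  generate: async ({ prompt }) => ({ text: `[stub] ${prompt}` }),
};
```

In production you would pass a real `ToolLoopAgent` instance where the stub sits; `answerTicket` never knows the difference.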
Human-in-the-Loop: needsApproval
Any tool can be marked as requiring human approval before execution:
```ts
const deployTool = tool({
  description: 'Deploy a new version to production',
  inputSchema: z.object({
    version: z.string(),
    environment: z.enum(['staging', 'production']),
  }),
  needsApproval: true,
  execute: async ({ version, environment }) => {
    // deployment logic
    return { deployed: true, version };
  },
});
```
When needsApproval: true, the tool call pauses and surfaces a pending approval in the UI stream. Your application handles the approval UI. The agent resumes execution only after the user confirms.
You can also pass an async function to needsApproval for conditional approval logic:
```ts
needsApproval: async ({ toolInput }) =>
  toolInput.environment === 'production',
```
This pattern makes it straightforward to build agents that handle low-risk actions automatically (staging deploys, read-only lookups) while requiring human sign-off on high-risk ones (production deploys, account deletions).
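Because the conditional form is just an async predicate, the policy can live in a plain, unit-testable function. Here `isHighRisk` and `DeployInput` are names chosen for this sketch, not SDK exports:

```typescript
// Approval policy extracted into a standalone async predicate (sketch;
// isHighRisk and DeployInput are names invented for this example).
type DeployInput = { version: string; environment: 'staging' | 'production' };

const isHighRisk = async (
  { toolInput }: { toolInput: DeployInput },
): Promise<boolean> => toolInput.environment === 'production';

// Wired up as `needsApproval: isHighRisk` in the tool definition.
```

Keeping the policy separate means you can test approval rules without spinning up an agent or a model.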
Structured Output from Agents
In SDK 5.x, getting structured data back from a multi-step agent required parsing the final text output yourself. SDK 6 introduces the Output helper that works at the agent level:
```ts
import { Output } from 'ai';
import { z } from 'zod';

const reportAgent = new ToolLoopAgent({
  model: anthropic('claude-sonnet-4-5'),
  tools: { fetchData: dataFetchTool },
  output: Output.object({
    schema: z.object({
      summary: z.string(),
      keyFindings: z.array(z.string()),
      confidenceScore: z.number().min(0).max(1),
    }),
  }),
});

const result = await reportAgent.generate({
  prompt: 'Analyze Q1 sales data and provide a structured report.',
});

// result.object is typed as { summary: string; keyFindings: string[]; confidenceScore: number }
console.log(result.object.keyFindings);
```
Supported output types are Output.object(), Output.array(), Output.choice(), Output.json(), and Output.text(). The model calls tools as many times as needed, then produces the final structured output in one pass at the end.
Multi-Agent Patterns: Agents as Tools
The most powerful pattern in SDK 6 is composing agents into multi-agent systems by wrapping them as tools:
```ts
const summarizeAgent = new ToolLoopAgent({
  model: anthropic('claude-haiku-4-5'),
  instructions: 'Summarize the given text in 3 bullet points.',
});

const factCheckAgent = new ToolLoopAgent({
  model: anthropic('claude-sonnet-4-5'),
  instructions: 'Fact-check the claims in the given text against the web.',
  tools: { search: webSearchTool },
});

// Wrap subagents as tools
const summarizeTool = tool({
  description: 'Summarize a long block of text',
  inputSchema: z.object({ text: z.string() }),
  execute: async ({ text }) => {
    const result = await summarizeAgent.generate({ prompt: text });
    return result.text;
  },
});

const factCheckTool = tool({
  description: 'Fact-check claims in a passage',
  inputSchema: z.object({ passage: z.string() }),
  execute: async ({ passage }) => {
    const result = await factCheckAgent.generate({ prompt: passage });
    return result.text;
  },
});

// Orchestrator agent
const orchestrator = new ToolLoopAgent({
  model: anthropic('claude-sonnet-4-5'),
  instructions: 'Research topics thoroughly: summarize sources, then fact-check key claims.',
  tools: { summarize: summarizeTool, factCheck: factCheckTool },
});
```
Subagents call .generate() (which returns the final result) rather than .stream(). The orchestrator calls tools, gets back text results from subagents, and continues reasoning.
This decomposition keeps each agent focused on a narrow task, makes individual agents testable in isolation, and avoids the context bloat that comes from cramming all capabilities into one massive system prompt.
Stop Conditions
By default, ToolLoopAgent stops after 20 steps. You can configure this explicitly or combine multiple conditions:
```ts
import { stepCountIs } from 'ai';

const agent = new ToolLoopAgent({
  model: anthropic('claude-sonnet-4-5'),
  tools: { search: searchTool },
  stopWhen: stepCountIs(10),
});
```
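Conceptually, a stop condition is a predicate over the steps taken so far. A hypothetical reimplementation makes that concrete; this is illustrative only, not the SDK's source, and `anyOf` is invented here (the real `stopWhen` simply accepts an array of conditions):

```typescript
// Hypothetical shape of a declarative stop condition (illustrative only,
// not the SDK's source).
type StopCondition = (ctx: { steps: { text: string }[] }) => boolean;

// "Stop after n steps" as a predicate factory.
const stepCountIs = (n: number): StopCondition =>
  ({ steps }) => steps.length >= n;

// Passing an array to stopWhen means "stop when any condition fires";
// the equivalent combinator over plain predicates:
const anyOf = (conds: StopCondition[]): StopCondition =>
  (ctx) => conds.some((c) => c(ctx));
```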
The prepareStep callback gives you per-step control if you need dynamic stop conditions based on intermediate results:
```ts
const agent = new ToolLoopAgent({
  model: anthropic('claude-sonnet-4-5'),
  tools: { search: searchTool },
  prepareStep: async ({ steps }) => {
    if (steps.some(s => s.text.includes('DONE'))) {
      return { stopCondition: true };
    }
    return {};
  },
});
```
MCP Support
Model Context Protocol support ships in a separate package to keep the core ai bundle lean:
```sh
npm install @ai-sdk/mcp
```
SDK 6 adds OAuth authentication handling for HTTP-based MCP servers, plus discovery of resources and prompts, and support for elicitation (server-initiated user input). The createMCPClient function now handles PKCE, token refresh, and session management transparently:
```ts
import { createMCPClient } from '@ai-sdk/mcp';

const client = await createMCPClient({
  transport: {
    type: 'http',
    url: 'https://your-mcp-server.com/mcp',
    authProvider, // handles OAuth flow automatically
  },
});

const tools = await client.tools();
```
This makes it practical to connect agents to external MCP servers (databases, APIs, enterprise systems) without writing OAuth boilerplate.
Migrating from AI SDK 5.x
Most of the migration is mechanical. Vercel provides a codemod that handles the majority of changes:
```sh
npx @ai-sdk/codemod upgrade v6
```
The key manual changes:
Rename Experimental_Agent to ToolLoopAgent
```ts
// Before
import { Experimental_Agent } from 'ai';

const agent = new Experimental_Agent({
  model: ...,
  system: 'You are a helpful assistant.',
});

// After
import { ToolLoopAgent } from 'ai';

const agent = new ToolLoopAgent({
  model: ...,
  instructions: 'You are a helpful assistant.',
});
```
Note the parameter rename: system becomes instructions. The default stopWhen also changed from stepCountIs(1) to stepCountIs(20) — if your existing agent relied on single-step behavior, set this explicitly.
CoreMessage to ModelMessage
```ts
// Before
import type { CoreMessage } from 'ai';

// After
import type { ModelMessage } from 'ai';
// Use convertToModelMessages() (now async) for conversion
```
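The async change is easy to trip over. A mock stand-in (`mockConvert` here is illustrative, not the real SDK function) shows what forgetting `await` hands you:

```typescript
// Illustrative stand-in for an async conversion step like convertToModelMessages.
async function mockConvert(msgs: string[]): Promise<{ role: string; content: string }[]> {
  return msgs.map((content) => ({ role: 'user', content }));
}

// Without `await`, downstream code receives a Promise, not a message array:
const wrong = mockConvert(['hi']);       // Promise<...>, no .length, no elements
const right = await mockConvert(['hi']); // the actual message array
```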
generateObject / streamObject Deprecation
These functions still work in 6.0.177 but are on the deprecation path. Migrate to generateText/streamText with an output setting:
```ts
// Before
const { object } = await generateObject({
  model: anthropic('claude-sonnet-4-5'),
  schema: z.object({ title: z.string() }),
  prompt: 'Generate a blog post title',
});

// After
const { object } = await generateText({
  model: anthropic('claude-sonnet-4-5'),
  output: Output.object({ schema: z.object({ title: z.string() }) }),
  prompt: 'Generate a blog post title',
});
```
Token Usage Fields
```ts
// Before
usage.cachedInputTokens;
usage.reasoningTokens;

// After
usage.inputTokenDetails.cacheReadTokens;
usage.outputTokenDetails.reasoningTokens;
```
Common Mistakes
- **Ignoring the default step count change.** If you migrated from Experimental_Agent and assumed single-step behavior, your agent now runs up to 20 steps. This affects cost and latency. Set stopWhen: stepCountIs(1) explicitly if you need the old behavior.
- **Calling .stream() on subagents.** In multi-agent patterns, subagents called via tool execution should use .generate(). Streaming only makes sense at the top level, where you pipe output to a UI.
- **Missing the await on convertToModelMessages.** The function became async in v6 to support async Tool.toModelOutput(). Forgetting await produces a Promise instead of a message array, and depending on how the result is used, TypeScript may not flag the mistake at the call site.
- **Assuming MCP is in the core package.** createMCPClient requires @ai-sdk/mcp separately. The ai core package does not include MCP to keep bundle size down.
- **Not using the codemod first.** The automated codemod handles renaming, import changes, and common method signature updates. Running it before manual review saves time and reduces errors.
FAQ
Q: Is AI SDK 6 compatible with Next.js 14 and 15?
Yes. The SDK works with any React framework including Next.js App Router and Pages Router. The ai/react sub-package provides hooks (useChat, useCompletion) that remain unchanged in v6.
Q: Can I use Anthropic Claude models with ToolLoopAgent?
Yes. Any provider supported by the SDK works with ToolLoopAgent. Install the provider package (@ai-sdk/anthropic, @ai-sdk/openai, @ai-sdk/google) and pass the model to the constructor. Provider-specific tools (Anthropic memory tool, OpenAI file patching, Google Maps grounding) are also accessible through the standard tools API.
Q: Does ToolLoopAgent support streaming to the UI?
Yes. Call .stream() instead of .generate(). Use createAgentUIStream and pipeAgentUIStreamToResponse to pipe the stream to a Next.js API route or other HTTP handler. The InferAgentUIMessage<typeof myAgent> TypeScript type infers the correct message shape for type-safe UI components.
Q: What is the difference between stopWhen and prepareStep?
stopWhen is a declarative stop condition — "stop after N steps" or "stop when no tool calls remain." prepareStep is a per-step callback that lets you inspect intermediate results and dynamically modify the agent's behavior (change tools, update instructions, or signal a stop) based on what happened so far.
Q: Are there breaking changes in the OpenAI provider?
Yes. strictJsonSchema now defaults to true for OpenAI. This improves JSON reliability but requires stricter Zod schema compliance (no .optional() on required fields, no .default()). If you see validation errors after upgrading, check your OpenAI-specific schemas first.
Key Takeaways

- `ToolLoopAgent` is the production API. `Experimental_Agent` is still exported in 6.0.177 but is deprecated.
- The Agent interface design lets you swap implementations without changing call sites — valuable for testing.
- `needsApproval` on individual tools gives you granular human-in-the-loop control without restructuring your agent.
- Agents as tools is the idiomatic multi-agent pattern: wrap `agent.generate()` inside a `tool()` definition and compose freely.
- MCP lives in `@ai-sdk/mcp`, separate from core. OAuth handling is now built in.
- The codemod (`npx @ai-sdk/codemod upgrade v6`) handles most migration automatically.
- Zod 4 is now supported alongside Zod 3, so you are not blocked from upgrading Zod independently.

Bottom Line

AI SDK 6 is the version where Vercel's SDK stops being a collection of LLM utility functions and starts being an agent framework. The lean install (10 packages), stable MCP support, and type-safe multi-agent composition make it a credible foundation for production TypeScript agent systems in 2026. Run the codemod first, check your stop conditions, and migrate generateObject calls when you have time — the migration is not urgent but the new APIs are cleaner.