File: announcing-ai-sdk-6-beta.md | Updated: 11/15/2025
AI SDK 6 Beta
======================================================================================================
AI SDK 6 is in beta — while more stable than alpha, AI SDK 6 is still in active development and APIs may still change. Pin to specific versions as breaking changes may occur in patch releases.
AI SDK 6 is a major version due to the introduction of the v3 Language Model Specification that powers new capabilities like agents and tool approval. However, unlike AI SDK 5, this release is not expected to have major breaking changes for most users.
The version bump reflects improvements to the specification, not a complete redesign of the SDK. If you're using AI SDK 5, migrating to v6 should be straightforward with minimal code changes.
The AI SDK 6 Beta is intended for developers who want to evaluate the new capabilities early and help validate them before the stable release. Your feedback during this beta phase directly shapes the final stable release. Share your experiences through GitHub issues.
To install the AI SDK 6 Beta, run the following command:
```bash
npm install ai@beta @ai-sdk/openai@beta @ai-sdk/react@beta
```
AI SDK 6 introduces several features (with more to come soon!):

- Agent abstraction: a new unified interface for building agents with full control over execution flow, tool loops, and state management.
- Tool approval: request user confirmation before executing tools, enabling native human-in-the-loop patterns.
- Structured outputs with tool calling: generate structured data alongside tool calls with generateText and streamText, now stable and production-ready.
- Reranking: improve search relevance by reordering documents based on their relationship to a query using specialized reranking models.
- Image editing: native support for image editing (coming soon).
AI SDK 6 introduces a powerful new Agent interface that provides a standardized way to build agents.
The ToolLoopAgent class provides a default implementation out of the box:
```ts
import { openai } from '@ai-sdk/openai';
import { ToolLoopAgent } from 'ai';
import { weatherTool } from '@/tool/weather';

export const weatherAgent = new ToolLoopAgent({
  model: openai('gpt-4o'),
  instructions: 'You are a helpful weather assistant.',
  tools: {
    weather: weatherTool,
  },
});

// Use the agent
const result = await weatherAgent.generate({
  prompt: 'What is the weather in San Francisco?',
});
```
The agent automatically handles the tool execution loop, running the model and executing tools until a final answer is produced or the stop condition is reached (default: stopWhen: stepCountIs(20)).

Call options let you pass type-safe runtime inputs to dynamically configure your agents. Use them to inject retrieved documents for RAG, select models based on request complexity, customize tool behavior per request, or adjust any agent setting based on context.
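The default stop condition can be overridden when constructing the agent. A minimal sketch, assuming ToolLoopAgent accepts the same stopWhen setting and stepCountIs helper that generateText uses:

```ts
import { openai } from '@ai-sdk/openai';
import { ToolLoopAgent, stepCountIs } from 'ai';
import { weatherTool } from '@/tool/weather';

// Cap the tool loop at 5 steps instead of the default 20
const boundedAgent = new ToolLoopAgent({
  model: openai('gpt-4o'),
  instructions: 'You are a helpful weather assistant.',
  tools: { weather: weatherTool },
  stopWhen: stepCountIs(5),
});
```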
Without call options, you'd need to create multiple agents or handle configuration logic outside the agent. With call options, you define a schema once and modify agent behavior at runtime:
```ts
import { ToolLoopAgent } from 'ai';
import { z } from 'zod';

const supportAgent = new ToolLoopAgent({
  model: 'openai/gpt-4o',
  callOptionsSchema: z.object({
    userId: z.string(),
    accountType: z.enum(['free', 'pro', 'enterprise']),
  }),
  instructions: 'You are a helpful customer support agent.',
  prepareCall: ({ options, ...settings }) => ({
    ...settings,
    instructions:
      settings.instructions +
      `\nUser context:
- Account type: ${options.accountType}
- User ID: ${options.userId}

Adjust your response based on the user's account level.`,
  }),
});

// Pass options when calling the agent
const result = await supportAgent.generate({
  prompt: 'How do I upgrade my account?',
  options: {
    userId: 'user_123',
    accountType: 'free',
  },
});
```
The options parameter is type-safe: TypeScript reports an error if you omit it or pass incorrect types.
Call options enable dynamic agent configuration across a wide range of scenarios. Learn more in the Configuring Call Options documentation.
Agents integrate seamlessly with React and other UI frameworks:
```ts
// Server-side API route
import { createAgentUIStreamResponse } from 'ai';
import { weatherAgent } from '@/agent/weather-agent';

export async function POST(request: Request) {
  const { messages } = await request.json();

  return createAgentUIStreamResponse({
    agent: weatherAgent,
    messages,
  });
}
```

```ts
// Client-side with type safety
import { useChat } from '@ai-sdk/react';
import { InferAgentUIMessage } from 'ai';
import { weatherAgent } from '@/agent/weather-agent';

type WeatherAgentUIMessage = InferAgentUIMessage<typeof weatherAgent>;

const { messages, sendMessage } = useChat<WeatherAgentUIMessage>();
```
In AI SDK 6, Agent is an interface rather than a concrete class. While ToolLoopAgent provides a solid default implementation for most use cases, you can implement the Agent interface to build custom agent architectures:
```ts
import { Agent } from 'ai';

// Build your own multi-agent orchestrator that delegates to specialists
class Orchestrator implements Agent {
  constructor(private subAgents: Record<string, Agent>) {
    /* Implementation */
  }
}

const orchestrator = new Orchestrator({
  // your subagents
});
```
This approach enables you to experiment with orchestrators, memory layers, custom stop conditions, and agent patterns tailored to your specific use case.
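One way such an orchestrator could delegate is by routing each request to a named sub-agent. A minimal sketch, assuming the Agent interface exposes a generate method taking a prompt; the keyword router is a naive illustrative helper, not part of the SDK:

```ts
import { Agent } from 'ai';

// Naive keyword router (illustrative only): pick the first sub-agent
// whose name appears in the prompt, falling back to the first one.
function pickAgent(prompt: string, names: string[]): string {
  return names.find((name) => prompt.toLowerCase().includes(name)) ?? names[0];
}

class Orchestrator implements Agent {
  constructor(private subAgents: Record<string, Agent>) {}

  // Delegate the whole request to the selected specialist
  async generate({ prompt }: { prompt: string }) {
    const name = pickAgent(prompt, Object.keys(this.subAgents));
    return this.subAgents[name].generate({ prompt });
  }
}
```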
AI SDK 6 introduces a tool approval system that gives you control over when tools are executed.
Enable approval for a tool by setting needsApproval:
```ts
import { tool } from 'ai';
import { z } from 'zod';

export const weatherTool = tool({
  description: 'Get the weather in a location',
  inputSchema: z.object({
    city: z.string(),
  }),
  needsApproval: true, // Require user approval
  execute: async ({ city }) => {
    const weather = await fetchWeather(city);
    return weather;
  },
});
```
Make approval decisions based on tool input:
```ts
export const paymentTool = tool({
  description: 'Process a payment',
  inputSchema: z.object({
    amount: z.number(),
    recipient: z.string(),
  }),
  // Only require approval for large transactions
  needsApproval: async ({ amount }) => amount > 1000,
  execute: async ({ amount, recipient }) => {
    return await processPayment(amount, recipient);
  },
});
```
Handle approval requests in your UI:
```tsx
export function WeatherToolView({ invocation, addToolApprovalResponse }) {
  if (invocation.state === 'approval-requested') {
    return (
      <div>
        <p>Can I retrieve the weather for {invocation.input.city}?</p>
        <button
          onClick={() =>
            addToolApprovalResponse({
              id: invocation.approval.id,
              approved: true,
            })
          }
        >
          Approve
        </button>
        <button
          onClick={() =>
            addToolApprovalResponse({
              id: invocation.approval.id,
              approved: false,
            })
          }
        >
          Deny
        </button>
      </div>
    );
  }

  if (invocation.state === 'output-available') {
    return (
      <div>
        Weather: {invocation.output.weather}
        Temperature: {invocation.output.temperature}°F
      </div>
    );
  }

  // Handle other states...
}
```
Automatically continue the conversation once approvals are handled:
```ts
import { useChat } from '@ai-sdk/react';
import { lastAssistantMessageIsCompleteWithApprovalResponses } from 'ai';

const { messages, addToolApprovalResponse } = useChat({
  sendAutomaticallyWhen: lastAssistantMessageIsCompleteWithApprovalResponses,
});
```
AI SDK 6 stabilizes structured output support for agents, enabling you to generate structured data alongside multi-step tool calling.
Previously, you could only generate structured outputs with generateObject and streamObject, which didn't support tool calling. Now ToolLoopAgent (and generateText / streamText) can combine both capabilities using the output parameter:
```ts
import { Output, ToolLoopAgent, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const agent = new ToolLoopAgent({
  model: openai('gpt-4o'),
  tools: {
    weather: tool({
      description: 'Get the weather in a location',
      inputSchema: z.object({
        city: z.string(),
      }),
      execute: async ({ city }) => {
        return { temperature: 72, condition: 'sunny' };
      },
    }),
  },
  output: Output.object({
    schema: z.object({
      summary: z.string(),
      temperature: z.number(),
      recommendation: z.string(),
    }),
  }),
});

const { output } = await agent.generate({
  prompt: 'What is the weather in San Francisco and what should I wear?',
});

// The agent calls the weather tool AND returns structured output
console.log(output);
// {
//   summary: "It's sunny in San Francisco",
//   temperature: 72,
//   recommendation: "Wear light clothing and sunglasses"
// }
```
The Output object provides multiple strategies for structured generation:
- Output.object(): generate structured objects with Zod schemas
- Output.array(): generate arrays of structured objects
- Output.choice(): select from a specific set of options
- Output.text(): generate plain text (default behavior)

Use agent.stream() to stream structured output as it's being generated:
```ts
import { ToolLoopAgent, Output } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const profileAgent = new ToolLoopAgent({
  model: openai('gpt-4o'),
  instructions: 'Generate realistic person profiles.',
  output: Output.object({
    schema: z.object({
      name: z.string(),
      age: z.number(),
      occupation: z.string(),
    }),
  }),
});

const { partialOutputStream } = await profileAgent.stream({
  prompt: 'Generate a person profile.',
});

for await (const partial of partialOutputStream) {
  console.log(partial);
  // { name: "John" }
  // { name: "John", age: 30 }
  // { name: "John", age: 30, occupation: "Engineer" }
}
```
Structured outputs are also supported in the generateText and streamText functions, allowing you to use this feature outside of agents when needed.
When using structured output with generateText or streamText, you must configure multiple steps with stopWhen because generating the structured output is itself a step. For example: stopWhen: stepCountIs(2) to allow tool calling and output generation.
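Putting that note into code, a minimal sketch, assuming generateText accepts the same output parameter and returns an output field as in the agent examples above:

```ts
import { generateText, Output, stepCountIs, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const { output } = await generateText({
  model: openai('gpt-4o'),
  tools: {
    weather: tool({
      description: 'Get the weather in a location',
      inputSchema: z.object({ city: z.string() }),
      execute: async ({ city }) => ({ temperature: 72, condition: 'sunny' }),
    }),
  },
  output: Output.object({
    schema: z.object({ summary: z.string() }),
  }),
  // One step for the tool call, one for generating the structured output
  stopWhen: stepCountIs(2),
  prompt: 'What is the weather in San Francisco?',
});
```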
AI SDK 6 introduces native support for reranking, a technique that improves search relevance by reordering documents based on their relationship to a query.
Unlike embedding-based similarity search, reranking models are specifically trained to understand query-document relationships, producing more accurate relevance scores:
```ts
import { rerank } from 'ai';
import { cohere } from '@ai-sdk/cohere';

const documents = [
  'sunny day at the beach',
  'rainy afternoon in the city',
  'snowy night in the mountains',
];

const { ranking } = await rerank({
  model: cohere.reranking('rerank-v3.5'),
  documents,
  query: 'talk about rain',
  topN: 2,
});

console.log(ranking);
// [
//   { originalIndex: 1, score: 0.9, document: 'rainy afternoon in the city' },
//   { originalIndex: 0, score: 0.3, document: 'sunny day at the beach' }
// ]
```
Reranking also supports structured documents, making it ideal for searching through databases, emails, or other structured content:
```ts
import { rerank } from 'ai';
import { cohere } from '@ai-sdk/cohere';

const documents = [
  {
    from: 'Paul Doe',
    subject: 'Follow-up',
    text: 'We are happy to give you a discount of 20% on your next order.',
  },
  {
    from: 'John McGill',
    subject: 'Missing Info',
    text: 'Sorry, but here is the pricing information from Oracle: $5000/month',
  },
];

const { rerankedDocuments } = await rerank({
  model: cohere.reranking('rerank-v3.5'),
  documents,
  query: 'Which pricing did we get from Oracle?',
  topN: 1,
});

console.log(rerankedDocuments[0]);
// { from: 'John McGill', subject: 'Missing Info', text: '...' }
```
Several providers, including Cohere, offer reranking models.
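A common way to apply reranking is as a second stage after a broader retrieval step: fetch a generous candidate set with embedding similarity or keyword search, then rerank and keep only the best few. A minimal sketch, where searchCandidates is a hypothetical first-stage retrieval helper, not part of the SDK:

```ts
import { rerank } from 'ai';
import { cohere } from '@ai-sdk/cohere';

// Hypothetical first-stage retrieval (e.g. embedding similarity or keyword search)
declare function searchCandidates(query: string, limit: number): Promise<string[]>;

async function retrieve(query: string) {
  // Stage 1: broad, cheap candidate retrieval
  const candidates = await searchCandidates(query, 50);

  // Stage 2: precise reranking, keeping only the top matches
  const { rerankedDocuments } = await rerank({
    model: cohere.reranking('rerank-v3.5'),
    documents: candidates,
    query,
    topN: 5,
  });

  return rerankedDocuments;
}
```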
Native support for image editing and generation workflows is coming soon.
AI SDK 6 is expected to have minimal breaking changes. The version bump is due to the v3 Language Model Specification, but most AI SDK 5 code will work with little or no modification.
- AI SDK 6 Beta: available now
- Stable release: end of 2025