📄 ai-sdk/docs/announcing-ai-sdk-6-beta

File: announcing-ai-sdk-6-beta.md | Updated: 11/15/2025

Source: https://ai-sdk.dev/docs/announcing-ai-sdk-6-beta

Announcing AI SDK 6 Beta

======================================================================================================

AI SDK 6 is in beta. While more stable than the alpha, it is still in active development and APIs may change. Pin to specific versions, as breaking changes may occur in patch releases.

Why AI SDK 6?


AI SDK 6 is a major version due to the introduction of the v3 Language Model Specification that powers new capabilities like agents and tool approval. However, unlike AI SDK 5, this release is not expected to have major breaking changes for most users.

The version bump reflects improvements to the specification, not a complete redesign of the SDK. If you're using AI SDK 5, migrating to v6 should be straightforward with minimal code changes.

Beta Version Guidance


The AI SDK 6 Beta is intended for:

  • Trying out new features and giving us feedback on the developer experience
  • Experimenting with agents and tool approval workflows

Your feedback during this beta phase directly shapes the final stable release. Share your experiences through GitHub issues.

Installation


To install the AI SDK 6 Beta, run the following command:

```bash
npm install ai@beta @ai-sdk/openai@beta @ai-sdk/react@beta
```

APIs may still change during beta. Pin to specific versions as breaking changes may occur in patch releases.
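Since the `beta` dist-tag advances over time, one way to pin is npm's standard `--save-exact` flag, which resolves the tag once and records the exact version in `package.json`:

```shell
# Resolve the current beta versions and write exact versions to package.json
npm install --save-exact ai@beta @ai-sdk/openai@beta @ai-sdk/react@beta
```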

What's New in AI SDK 6?


AI SDK 6 introduces several features (with more to come soon!):

Agent Abstraction

A new unified interface for building agents with full control over execution flow, tool loops, and state management.

Tool Execution Approval

Request user confirmation before executing tools, enabling native human-in-the-loop patterns.

Structured Output (Stable)

Generate structured data alongside tool calling with generateText and streamText, now stable and production-ready.

Reranking Support

Improve search relevance by reordering documents based on their relationship to a query using specialized reranking models.

Image Editing Support

Native support for image editing (coming soon).

Agent Abstraction


AI SDK 6 introduces a powerful new Agent interface that provides a standardized way to build agents.

Default Implementation: ToolLoopAgent

The ToolLoopAgent class provides a default implementation out of the box:

```ts
import { openai } from '@ai-sdk/openai';
import { ToolLoopAgent } from 'ai';
import { weatherTool } from '@/tool/weather';

export const weatherAgent = new ToolLoopAgent({
  model: openai('gpt-4o'),
  instructions: 'You are a helpful weather assistant.',
  tools: {
    weather: weatherTool,
  },
});

// Use the agent
const result = await weatherAgent.generate({
  prompt: 'What is the weather in San Francisco?',
});
```

The agent automatically handles the tool execution loop:

  1. Calls the LLM with your prompt
  2. Executes any requested tool calls
  3. Adds results back to the conversation
  4. Repeats until complete (default stopWhen: stepCountIs(20))
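The steps above can be sketched as a plain-TypeScript loop. Everything here (`runToolLoop`, `callModel`, the tool map) is an illustrative stand-in, not an SDK API:

```typescript
// Illustrative sketch of a tool-execution loop; not the SDK's implementation.
type ToolCall = { name: string; input: unknown };
type ModelReply = { text?: string; toolCalls: ToolCall[] };
type ModelFn = (history: string[]) => Promise<ModelReply>;
type ToolMap = Record<string, (input: unknown) => Promise<unknown>>;

async function runToolLoop(
  callModel: ModelFn,
  tools: ToolMap,
  prompt: string,
  maxSteps = 20, // mirrors the default stopWhen: stepCountIs(20)
): Promise<string> {
  const history = [prompt];
  for (let step = 0; step < maxSteps; step++) {
    const reply = await callModel(history); // 1. call the LLM
    if (reply.toolCalls.length === 0) {
      return reply.text ?? ''; // 4. no tool calls requested: we're done
    }
    for (const call of reply.toolCalls) {
      const result = await tools[call.name](call.input); // 2. execute tool calls
      history.push(JSON.stringify({ tool: call.name, result })); // 3. feed results back
    }
  }
  throw new Error('Step limit reached without a final answer');
}
```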

Configuring Call Options

Call options let you pass type-safe runtime inputs to dynamically configure your agents. Use them to inject retrieved documents for RAG, select models based on request complexity, customize tool behavior per request, or adjust any agent setting based on context.

Without call options, you'd need to create multiple agents or handle configuration logic outside the agent. With call options, you define a schema once and modify agent behavior at runtime:

```ts
import { ToolLoopAgent } from 'ai';
import { z } from 'zod';

const supportAgent = new ToolLoopAgent({
  model: 'openai/gpt-4o',
  callOptionsSchema: z.object({
    userId: z.string(),
    accountType: z.enum(['free', 'pro', 'enterprise']),
  }),
  instructions: 'You are a helpful customer support agent.',
  prepareCall: ({ options, ...settings }) => ({
    ...settings,
    instructions:
      settings.instructions +
      `\nUser context:
- Account type: ${options.accountType}
- User ID: ${options.userId}

Adjust your response based on the user's account level.`,
  }),
});

// Pass options when calling the agent
const result = await supportAgent.generate({
  prompt: 'How do I upgrade my account?',
  options: {
    userId: 'user_123',
    accountType: 'free',
  },
});
```

The options parameter is type-safe: TypeScript will report an error if you omit it or pass values that don't match the schema.

Call options enable dynamic agent configuration for several scenarios:

  • RAG: Fetch relevant documents and inject them into prompts at runtime
  • Dynamic model selection: Choose faster or more capable models based on request complexity
  • Tool configuration: Adjust tools per request
  • Provider options: Set reasoning effort, temperature, or other provider-specific settings dynamically

Learn more in the Configuring Call Options documentation.

UI Integration

Agents integrate seamlessly with React and other UI frameworks:

```ts
// Server-side API route
import { createAgentUIStreamResponse } from 'ai';

export async function POST(request: Request) {
  const { messages } = await request.json();

  return createAgentUIStreamResponse({
    agent: weatherAgent,
    messages,
  });
}
```

```ts
// Client-side with type safety
import { useChat } from '@ai-sdk/react';
import { InferAgentUIMessage } from 'ai';
import { weatherAgent } from '@/agent/weather-agent';

type WeatherAgentUIMessage = InferAgentUIMessage<typeof weatherAgent>;

const { messages, sendMessage } = useChat<WeatherAgentUIMessage>();
```

Custom Agent Implementations

In AI SDK 6, Agent is an interface rather than a concrete class. While ToolLoopAgent provides a solid default implementation for most use cases, you can implement the Agent interface to build custom agent architectures:

```ts
import { Agent } from 'ai';

// Build your own multi-agent orchestrator that delegates to specialists
class Orchestrator implements Agent {
  constructor(private subAgents: Record<string, Agent>) {
    /* Implementation */
  }
}

const orchestrator = new Orchestrator({
  // your sub-agents
});
```

This approach enables you to experiment with orchestrators, memory layers, custom stop conditions, and agent patterns tailored to your specific use case.

Tool Execution Approval


AI SDK 6 introduces a tool approval system that gives you control over when tools are executed.

Enable approval for a tool by setting needsApproval:

```ts
import { tool } from 'ai';
import { z } from 'zod';

export const weatherTool = tool({
  description: 'Get the weather in a location',
  inputSchema: z.object({
    city: z.string(),
  }),
  needsApproval: true, // Require user approval
  execute: async ({ city }) => {
    const weather = await fetchWeather(city);
    return weather;
  },
});
```

Dynamic Approval

Make approval decisions based on tool input:

```ts
export const paymentTool = tool({
  description: 'Process a payment',
  inputSchema: z.object({
    amount: z.number(),
    recipient: z.string(),
  }),
  // Only require approval for large transactions
  needsApproval: async ({ amount }) => amount > 1000,
  execute: async ({ amount, recipient }) => {
    return await processPayment(amount, recipient);
  },
});
```
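Since needsApproval accepts either a boolean or a (possibly async) predicate over the tool input, the decision reduces to a small normalization step. A standalone sketch with illustrative names, not an SDK API:

```typescript
// Standalone sketch: normalize a boolean-or-predicate needsApproval value.
// Names are illustrative; this is not an SDK API.
type NeedsApproval<I> = boolean | ((input: I) => boolean | Promise<boolean>);

async function requiresApproval<I>(
  needsApproval: NeedsApproval<I>,
  input: I,
): Promise<boolean> {
  // A predicate is evaluated against the input; a boolean is used as-is.
  return typeof needsApproval === 'function'
    ? await needsApproval(input)
    : needsApproval;
}
```

With the payment tool above, a small amount resolves to false, so routine payments execute without interrupting the user.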

Client-Side Approval UI

Handle approval requests in your UI:

```tsx
export function WeatherToolView({ invocation, addToolApprovalResponse }) {
  if (invocation.state === 'approval-requested') {
    return (
      <div>
        <p>Can I retrieve the weather for {invocation.input.city}?</p>
        <button
          onClick={() =>
            addToolApprovalResponse({
              id: invocation.approval.id,
              approved: true,
            })
          }
        >
          Approve
        </button>
        <button
          onClick={() =>
            addToolApprovalResponse({
              id: invocation.approval.id,
              approved: false,
            })
          }
        >
          Deny
        </button>
      </div>
    );
  }

  if (invocation.state === 'output-available') {
    return (
      <div>
        Weather: {invocation.output.weather}
        Temperature: {invocation.output.temperature}°F
      </div>
    );
  }

  // Handle other states...
}
```

Auto-Submit After Approvals

Automatically continue the conversation once approvals are handled:

```ts
import { useChat } from '@ai-sdk/react';
import { lastAssistantMessageIsCompleteWithApprovalResponses } from 'ai';

const { messages, addToolApprovalResponse } = useChat({
  sendAutomaticallyWhen: lastAssistantMessageIsCompleteWithApprovalResponses,
});
```

Structured Output (Stable)


AI SDK 6 stabilizes structured output support for agents, enabling you to generate structured data alongside multi-step tool calling.

Previously, you could only generate structured outputs with generateObject and streamObject, which didn't support tool calling. Now ToolLoopAgent (and generateText / streamText) can combine both capabilities using the output parameter:

```ts
import { Output, ToolLoopAgent, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const agent = new ToolLoopAgent({
  model: openai('gpt-4o'),
  tools: {
    weather: tool({
      description: 'Get the weather in a location',
      inputSchema: z.object({
        city: z.string(),
      }),
      execute: async ({ city }) => {
        return { temperature: 72, condition: 'sunny' };
      },
    }),
  },
  output: Output.object({
    schema: z.object({
      summary: z.string(),
      temperature: z.number(),
      recommendation: z.string(),
    }),
  }),
});

const { output } = await agent.generate({
  prompt: 'What is the weather in San Francisco and what should I wear?',
});

// The agent calls the weather tool AND returns structured output
console.log(output);
// {
//   summary: "It's sunny in San Francisco",
//   temperature: 72,
//   recommendation: "Wear light clothing and sunglasses"
// }
```

Output Types

The Output object provides multiple strategies for structured generation:

  • Output.object(): Generate structured objects with Zod schemas
  • Output.array(): Generate arrays of structured objects
  • Output.choice(): Select from a specific set of options
  • Output.text(): Generate plain text (default behavior)

Streaming Structured Output

Use agent.stream() to stream structured output as it's being generated:

```ts
import { ToolLoopAgent, Output } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const profileAgent = new ToolLoopAgent({
  model: openai('gpt-4o'),
  instructions: 'Generate realistic person profiles.',
  output: Output.object({
    schema: z.object({
      name: z.string(),
      age: z.number(),
      occupation: z.string(),
    }),
  }),
});

const { partialOutputStream } = await profileAgent.stream({
  prompt: 'Generate a person profile.',
});

for await (const partial of partialOutputStream) {
  console.log(partial);
  // { name: "John" }
  // { name: "John", age: 30 }
  // { name: "John", age: 30, occupation: "Engineer" }
}
```

Support in generateText and streamText

Structured outputs are also supported in generateText and streamText functions, allowing you to use this feature outside of agents when needed.

When using structured output with generateText or streamText, you must configure multiple steps with stopWhen because generating the structured output is itself a step. For example: stopWhen: stepCountIs(2) to allow tool calling and output generation.

Reranking Support


AI SDK 6 introduces native support for reranking, a technique that improves search relevance by reordering documents based on their relationship to a query.

Unlike embedding-based similarity search, reranking models are specifically trained to understand query-document relationships, producing more accurate relevance scores:

```ts
import { rerank } from 'ai';
import { cohere } from '@ai-sdk/cohere';

const documents = [
  'sunny day at the beach',
  'rainy afternoon in the city',
  'snowy night in the mountains',
];

const { ranking } = await rerank({
  model: cohere.reranking('rerank-v3.5'),
  documents,
  query: 'talk about rain',
  topN: 2,
});

console.log(ranking);
// [
//   { originalIndex: 1, score: 0.9, document: 'rainy afternoon in the city' },
//   { originalIndex: 0, score: 0.3, document: 'sunny day at the beach' }
// ]
```
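Each entry's originalIndex maps back to your input array, so you can reorder any parallel data yourself. A standalone helper (an illustrative sketch, not an SDK API) that applies a ranking of this shape:

```typescript
// Reorder items using ranking entries shaped like rerank()'s output
// ({ originalIndex, score }); standalone helper, not an SDK API.
function applyRanking<T>(
  items: T[],
  ranking: { originalIndex: number; score: number }[],
): T[] {
  return [...ranking]
    .sort((a, b) => b.score - a.score) // highest relevance first
    .map((entry) => items[entry.originalIndex]);
}
```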

Structured Document Reranking

Reranking also supports structured documents, making it ideal for searching through databases, emails, or other structured content:

```ts
import { rerank } from 'ai';
import { cohere } from '@ai-sdk/cohere';

const documents = [
  {
    from: 'Paul Doe',
    subject: 'Follow-up',
    text: 'We are happy to give you a discount of 20% on your next order.',
  },
  {
    from: 'John McGill',
    subject: 'Missing Info',
    text: 'Sorry, but here is the pricing information from Oracle: $5000/month',
  },
];

const { rerankedDocuments } = await rerank({
  model: cohere.reranking('rerank-v3.5'),
  documents,
  query: 'Which pricing did we get from Oracle?',
  topN: 1,
});

console.log(rerankedDocuments[0]);
// { from: 'John McGill', subject: 'Missing Info', text: '...' }
```

Supported Providers

Several providers, including Cohere, offer reranking models.

Image Editing Support


Native support for image editing and generation workflows is coming soon. This will enable:

  • Image-to-image transformations
  • Multi-modal editing with text prompts

Migration from AI SDK 5.x


AI SDK 6 is expected to have minimal breaking changes. The version bump is due to the v3 Language Model Specification, but most AI SDK 5 code will work with little or no modification.

Timeline


AI SDK 6 Beta: Available now

Stable Release: End of 2025
