📄 ai-sdk/docs/migration-guides/migration-guide-5-0

File: migration-guide-5-0.md | Updated: 11/15/2025

Source: https://ai-sdk.dev/docs/migration-guides/migration-guide-5-0


Migrate AI SDK 4.0 to 5.0

==================================================================================================================

Recommended Migration Process


  1. Back up your project. If you use a version control system, make sure all previous changes are committed.
  2. Upgrade to AI SDK 5.0.
  3. Automatically migrate your code using the codemods or the AI SDK 5 Migration MCP Server described below.
  4. Follow the breaking changes guide below.
  5. Verify your project is working as expected.
  6. Commit your changes.

AI SDK 5 Migration MCP Server


The AI SDK 5 Migration Model Context Protocol (MCP) Server provides an automated way to migrate your project using a coding agent. This server has been designed for Cursor, but should work with any coding agent that supports MCP.

To get started, create or edit .cursor/mcp.json in your project:

```json
{
  "mcpServers": {
    "ai-sdk-5-migration": {
      "url": "https://ai-sdk-5-migration-mcp-server.vercel.app/api/mcp"
    }
  }
}
```

After saving, open the command palette (Cmd+Shift+P on macOS, Ctrl+Shift+P on Windows/Linux) and search for "View: Open MCP Settings". Verify the new server appears and is toggled on.

Then use this prompt:

Please migrate this project to AI SDK 5 using the ai-sdk-5-migration mcp server. Start by creating a checklist.

For more information, see the AI SDK 5 Migration MCP Server repository.

AI SDK 5.0 Package Versions


You need to update the following packages to the following versions in your package.json file(s):

  • ai package: 5.0.0
  • @ai-sdk/provider package: 2.0.0
  • @ai-sdk/provider-utils package: 3.0.0
  • @ai-sdk/* packages: 2.0.0 (other @ai-sdk packages)

Additionally, you need to update the following peer dependencies:

  • zod package: 4.1.8 or later (recommended to avoid TypeScript performance issues)

An example upgrade command would be:

```bash
npm install ai @ai-sdk/react @ai-sdk/openai zod@^4.1.8
```

If you encounter TypeScript performance issues after upgrading, ensure you're using Zod 4.1.8 or later. If the issue persists, update your tsconfig.json to use moduleResolution: "nodenext". See the TypeScript performance troubleshooting guide for more details.
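If you go the tsconfig.json route, the relevant compiler options might look like the following sketch. Merge this into your existing configuration rather than replacing it; note that `"moduleResolution": "nodenext"` requires a matching `"module"` setting:

```json
{
  "compilerOptions": {
    "module": "nodenext",
    "moduleResolution": "nodenext"
  }
}
```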

Codemods


The AI SDK provides Codemod transformations to help upgrade your codebase when a feature is deprecated, removed, or otherwise changed.

Codemods are transformations that run on your codebase automatically. They allow you to easily apply many changes without having to manually go through every file.

Codemods are intended as a tool to help you with the upgrade process. They may not cover all of the changes you need to make. You may need to make additional changes manually.

You can run all codemods provided as part of the 5.0 upgrade process by running the following command from the root of your project:

```bash
npx @ai-sdk/codemod upgrade
```

To run only the v5 codemods (v4 → v5 migration):

```bash
npx @ai-sdk/codemod v5
```

Individual codemods can be run by specifying the name of the codemod:

```bash
npx @ai-sdk/codemod <codemod-name> <path>
```

For example, to run a specific v5 codemod:

```bash
npx @ai-sdk/codemod v5/rename-format-stream-part src/
```

See also the table of codemods. In addition, the latest set of codemods can be found in the @ai-sdk/codemod repository.

AI SDK Core Changes


generateText and streamText Changes

Maximum Output Tokens

The maxTokens parameter has been renamed to maxOutputTokens for clarity.

AI SDK 4.0

```ts
const result = await generateText({
  model: openai('gpt-4.1'),
  maxTokens: 1024,
  prompt: 'Hello, world!',
});
```

AI SDK 5.0

```ts
const result = await generateText({
  model: openai('gpt-4.1'),
  maxOutputTokens: 1024,
  prompt: 'Hello, world!',
});
```

Message and Type System Changes

Core Type Renames

CoreMessage → ModelMessage

AI SDK 4.0

```ts
import { CoreMessage } from 'ai';
```

AI SDK 5.0

```ts
import { ModelMessage } from 'ai';
```

Message → UIMessage

AI SDK 4.0

```ts
import { Message, CreateMessage } from 'ai';
```

AI SDK 5.0

```ts
import { UIMessage, CreateUIMessage } from 'ai';
```

convertToCoreMessages → convertToModelMessages

AI SDK 4.0

```ts
import { convertToCoreMessages, streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await streamText({
  model: openai('gpt-4'),
  messages: convertToCoreMessages(messages),
});
```

AI SDK 5.0

```ts
import { convertToModelMessages, streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await streamText({
  model: openai('gpt-4'),
  messages: convertToModelMessages(messages),
});
```

For more information about model messages, see the Model Message reference.

UIMessage Changes

Content → Parts Array

For UIMessages (previously called Message), the .content property has been replaced with a parts array structure.

AI SDK 4.0

```ts
import { type Message } from 'ai'; // v4 Message type

// Messages (useChat) - had content property
const message: Message = {
  id: '1',
  role: 'user',
  content: 'Bonjour!',
};
```

AI SDK 5.0

```ts
import { type UIMessage, type ModelMessage } from 'ai';

// UIMessages (useChat) - now use parts array
const uiMessage: UIMessage = {
  id: '1',
  role: 'user',
  parts: [{ type: 'text', text: 'Bonjour!' }],
};
```
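If you persist chat messages, the mechanical part of this migration can be sketched as a pure function. The types below are simplified stand-ins for illustration, not the real UIMessage type:

```typescript
// Hypothetical shapes for illustration - the real SDK types are richer.
type V4Message = { id: string; role: string; content: string };
type V5Message = {
  id: string;
  role: string;
  parts: Array<{ type: 'text'; text: string }>;
};

// Convert a v4-style message with a string `content` into the
// v5 parts-array shape.
function toPartsMessage(message: V4Message): V5Message {
  return {
    id: message.id,
    role: message.role,
    parts: [{ type: 'text', text: message.content }],
  };
}

const migrated = toPartsMessage({ id: '1', role: 'user', content: 'Bonjour!' });
console.log(migrated.parts[0].text); // 'Bonjour!'
```

For stored data with tool invocations, attachments, or annotations, see the dedicated Migrate Your Data to AI SDK 5.0 guide rather than a simple text-only conversion like this.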

Data Role Removed

The data role has been removed from UI messages.

AI SDK 4.0

```ts
const message = {
  role: 'data',
  content: 'Some content',
  data: { customField: 'value' },
};
```

AI SDK 5.0

```ts
// V5: Use UI message streams with custom data parts
const stream = createUIMessageStream({
  execute({ writer }) {
    // Write custom data instead of message annotations
    writer.write({
      type: 'data-custom',
      id: 'custom-1',
      data: { customField: 'value' },
    });
  },
});
```

UIMessage Reasoning Structure

The reasoning property on UI messages has been moved to parts.

AI SDK 4.0

```ts
const message: Message = {
  role: 'assistant',
  content: 'Hello',
  reasoning: 'I will greet the user',
};
```

AI SDK 5.0

```ts
const message: UIMessage = {
  role: 'assistant',
  parts: [
    {
      type: 'reasoning',
      text: 'I will greet the user',
    },
    {
      type: 'text',
      text: 'Hello',
    },
  ],
};
```

Reasoning Part Property Rename

The reasoning property on reasoning UI parts has been renamed to text.

AI SDK 4.0

```tsx
{
  message.parts.map((part, index) => {
    if (part.type === 'reasoning') {
      return (
        <div key={index} className="reasoning-display">
          {part.reasoning}
        </div>
      );
    }
  });
}
```

AI SDK 5.0

```tsx
{
  message.parts.map((part, index) => {
    if (part.type === 'reasoning') {
      return (
        <div key={index} className="reasoning-display">
          {part.text}
        </div>
      );
    }
  });
}
```

File Part Changes

File parts now use .url instead of .data and .mimeType.

AI SDK 4.0

```tsx
{
  messages.map(message => (
    <div key={message.id}>
      {message.parts.map((part, index) => {
        if (part.type === 'text') {
          return <div key={index}>{part.text}</div>;
        } else if (part.type === 'file' && part.mimeType.startsWith('image/')) {
          return (
            <img
              key={index}
              src={`data:${part.mimeType};base64,${part.data}`}
            />
          );
        }
      })}
    </div>
  ));
}
```

AI SDK 5.0

```tsx
{
  messages.map(message => (
    <div key={message.id}>
      {message.parts.map((part, index) => {
        if (part.type === 'text') {
          return <div key={index}>{part.text}</div>;
        } else if (
          part.type === 'file' &&
          part.mediaType.startsWith('image/')
        ) {
          return <img key={index} src={part.url} />;
        }
      })}
    </div>
  ));
}
```

Stream Data Removal

The StreamData class has been completely removed and replaced with UI message streams for custom data.

AI SDK 4.0

```ts
import { StreamData } from 'ai';

const streamData = new StreamData();
streamData.append('custom-data');
streamData.close();
```

AI SDK 5.0

```ts
import { createUIMessageStream, createUIMessageStreamResponse } from 'ai';

const stream = createUIMessageStream({
  execute({ writer }) {
    // Write custom data parts
    writer.write({
      type: 'data-custom',
      id: 'custom-1',
      data: 'custom-data',
    });

    // Can merge with LLM streams
    const result = streamText({
      model: openai('gpt-4.1'),
      messages,
    });

    writer.merge(result.toUIMessageStream());
  },
});

return createUIMessageStreamResponse({ stream });
```

Custom Data Streaming: writeMessageAnnotation/writeData Removed

The writeMessageAnnotation and writeData methods from DataStreamWriter have been removed. Instead, use custom data parts with the new UIMessage stream architecture.

AI SDK 4.0

```ts
import { openai } from '@ai-sdk/openai';
import { createDataStreamResponse, streamText } from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  return createDataStreamResponse({
    execute: dataStream => {
      // Write general data
      dataStream.writeData('call started');

      const result = streamText({
        model: openai('gpt-4o'),
        messages,
        onChunk() {
          // Write message annotations
          dataStream.writeMessageAnnotation({
            status: 'streaming',
            timestamp: Date.now(),
          });
        },
        onFinish() {
          // Write final annotations
          dataStream.writeMessageAnnotation({
            id: generateId(),
            completed: true,
          });

          dataStream.writeData('call completed');
        },
      });

      result.mergeIntoDataStream(dataStream);
    },
  });
}
```

AI SDK 5.0

```ts
import { openai } from '@ai-sdk/openai';
import {
  createUIMessageStream,
  createUIMessageStreamResponse,
  streamText,
  generateId,
} from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const stream = createUIMessageStream({
    execute: ({ writer }) => {
      const statusId = generateId();

      // Write general data (transient - not added to message history)
      writer.write({
        type: 'data-status',
        id: statusId,
        data: { status: 'call started' },
      });

      const result = streamText({
        model: openai('gpt-4o'),
        messages,
        onChunk() {
          // Write data parts that update during streaming
          writer.write({
            type: 'data-status',
            id: statusId,
            data: {
              status: 'streaming',
              timestamp: Date.now(),
            },
          });
        },
        onFinish() {
          // Write final data parts
          writer.write({
            type: 'data-status',
            id: statusId,
            data: {
              status: 'completed',
            },
          });
        },
      });

      writer.merge(result.toUIMessageStream());
    },
  });

  return createUIMessageStreamResponse({ stream });
}
```

For more detailed information about streaming custom data in v5, see the Streaming Data guide.

Provider Metadata → Provider Options

The providerMetadata input parameter has been renamed to providerOptions. Note that the returned metadata in results is still called providerMetadata.

AI SDK 4.0

```ts
const result = await generateText({
  model: openai('gpt-4'),
  prompt: 'Hello',
  providerMetadata: {
    openai: { store: false },
  },
});
```

AI SDK 5.0

```ts
const result = await generateText({
  model: openai('gpt-4'),
  prompt: 'Hello',
  providerOptions: {
    // Input parameter renamed
    openai: { store: false },
  },
});

// Returned metadata still uses providerMetadata:
console.log(result.providerMetadata?.openai);
```

Tool Definition Changes (parameters → inputSchema)

Tool definitions have been updated to use inputSchema instead of parameters, and the related error classes have been renamed accordingly.

AI SDK 4.0

```ts
import { tool } from 'ai';

const weatherTool = tool({
  description: 'Get the weather for a city',
  parameters: z.object({
    city: z.string(),
  }),
  execute: async ({ city }) => {
    return `Weather in ${city}`;
  },
});
```

AI SDK 5.0

```ts
import { tool } from 'ai';

const weatherTool = tool({
  description: 'Get the weather for a city',
  inputSchema: z.object({
    city: z.string(),
  }),
  execute: async ({ city }) => {
    return `Weather in ${city}`;
  },
});
```

Tool Result Content: experimental_toToolResultContent → toModelOutput

The experimental_toToolResultContent option has been renamed to toModelOutput and is no longer experimental.

AI SDK 4.0

```ts
const screenshotTool = tool({
  description: 'Take a screenshot',
  parameters: z.object({}),
  execute: async () => {
    const imageData = await takeScreenshot();
    return imageData; // base64 string
  },
  experimental_toToolResultContent: result => [{ type: 'image', data: result }],
});
```

AI SDK 5.0

```ts
const screenshotTool = tool({
  description: 'Take a screenshot',
  inputSchema: z.object({}),
  execute: async () => {
    const imageData = await takeScreenshot();
    return imageData;
  },
  toModelOutput: result => ({
    type: 'content',
    value: [{ type: 'media', mediaType: 'image/png', data: result }],
  }),
});
```

Tool Property Changes (args/result → input/output)

Tool call and result properties have been renamed for better consistency with schemas.

AI SDK 4.0

```ts
// Tool calls used "args" and "result"
for await (const part of result.fullStream) {
  switch (part.type) {
    case 'tool-call':
      console.log('Tool args:', part.args);
      break;
    case 'tool-result':
      console.log('Tool result:', part.result);
      break;
  }
}
```

AI SDK 5.0

```ts
// Tool calls now use "input" and "output"
for await (const part of result.fullStream) {
  switch (part.type) {
    case 'tool-call':
      console.log('Tool input:', part.input);
      break;
    case 'tool-result':
      console.log('Tool output:', part.output);
      break;
  }
}
```

Tool Execution Error Handling

The ToolExecutionError class has been removed. Tool execution errors now appear as tool-error content parts in the result steps, enabling automated LLM roundtrips in multi-step scenarios.

AI SDK 4.0

```ts
import { ToolExecutionError } from 'ai';

try {
  const result = await generateText({
    // ...
  });
} catch (error) {
  if (error instanceof ToolExecutionError) {
    console.log('Tool execution failed:', error.message);
    console.log('Tool name:', error.toolName);
    console.log('Tool input:', error.toolInput);
  }
}
```

AI SDK 5.0

```ts
// Tool execution errors now appear in result steps
const { steps } = await generateText({
  // ...
});

// check for tool errors in the steps
const toolErrors = steps.flatMap(step =>
  step.content.filter(part => part.type === 'tool-error'),
);

toolErrors.forEach(toolError => {
  console.log('Tool error:', toolError.error);
  console.log('Tool name:', toolError.toolName);
  console.log('Tool input:', toolError.input);
});
```

For streaming scenarios, tool execution errors appear as tool-error parts in the stream, while other errors appear as error parts.

Tool Call Streaming Now Default (toolCallStreaming Removed)

The toolCallStreaming option has been removed in AI SDK 5.0. Tool call streaming is now always enabled by default.

AI SDK 4.0

```ts
const result = streamText({
  model: openai('gpt-4o'),
  messages,
  toolCallStreaming: true, // Optional parameter to enable streaming
  tools: {
    weatherTool,
    searchTool,
  },
});
```

AI SDK 5.0

```ts
const result = streamText({
  model: openai('gpt-4o'),
  messages: convertToModelMessages(messages),
  // toolCallStreaming removed - streaming is always enabled
  tools: {
    weatherTool,
    searchTool,
  },
});
```

Tool Part Type Changes (UIMessage)

In v5, UI tool parts use typed naming: tool-${toolName} instead of generic types.

AI SDK 4.0

```tsx
// Generic tool-invocation type
{
  message.parts.map(part => {
    if (part.type === 'tool-invocation') {
      return <div>{part.toolInvocation.toolName}</div>;
    }
  });
}
```

AI SDK 5.0

```tsx
// Type-safe tool parts with specific names
{
  message.parts.map(part => {
    switch (part.type) {
      case 'tool-getWeatherInformation':
        return <div>Getting weather...</div>;
      case 'tool-askForConfirmation':
        return <div>Asking for confirmation...</div>;
    }
  });
}
```

Dynamic Tools Support

AI SDK 5.0 introduces dynamic tools for handling tools with unknown types at development time, such as MCP tools without schemas or user-defined functions at runtime.

New dynamicTool Helper

The new dynamicTool helper function allows you to define tools where the input and output types are not known at compile time.

AI SDK 5.0

```ts
import { dynamicTool } from 'ai';
import { z } from 'zod';

// Define a dynamic tool
const runtimeTool = dynamicTool({
  description: 'A tool defined at runtime',
  inputSchema: z.object({}),
  execute: async input => {
    // Input and output are typed as 'unknown'
    return { result: `Processed: ${input.query}` };
  },
});
```

MCP Tools Without Schemas

MCP tools that don't provide schemas are now automatically treated as dynamic tools:

AI SDK 5.0

```ts
import { MCPClient } from 'ai';

const client = new MCPClient({
  /* ... */
});
const tools = await client.getTools();

// Tools without schemas are now 'dynamic' type
// and won't break type inference when mixed with static tools
```

Type-Safe Handling with Mixed Tools

When using both static and dynamic tools together, use the dynamic flag for type narrowing:

AI SDK 5.0

```ts
const result = await generateText({
  model: openai('gpt-4'),
  tools: {
    // Static tool with known types
    weather: weatherTool,
    // Dynamic tool with unknown types
    customDynamicTool: dynamicTool({
      /* ... */
    }),
  },
  onStepFinish: step => {
    // Handle tool calls with type safety
    for (const toolCall of step.toolCalls) {
      if (toolCall.dynamic) {
        // Dynamic tool: input/output are 'unknown'
        console.log('Dynamic tool called:', toolCall.toolName);
        continue;
      }

      // Static tools have full type inference
      switch (toolCall.toolName) {
        case 'weather':
          // TypeScript knows the exact types
          console.log(toolCall.input.location); // string
          break;
      }
    }
  },
});
```

New dynamic-tool UI Part

UI messages now include a dynamic-tool part type for rendering dynamic tool invocations:

AI SDK 5.0

```tsx
{
  message.parts.map((part, index) => {
    switch (part.type) {
      // Static tools use specific types
      case 'tool-weather':
        return <div>Weather: {part.input.city}</div>;

      // Dynamic tools use the generic dynamic-tool type
      case 'dynamic-tool':
        return (
          <div>
            Dynamic tool: {part.toolName}
            <pre>{JSON.stringify(part.input, null, 2)}</pre>
          </div>
        );
    }
  });
}
```

Breaking Change: Type Narrowing Required for Tool Calls and Results

When iterating over toolCalls and toolResults, you now need to check the dynamic flag first for proper type narrowing:

AI SDK 4.0

```ts
// Direct type checking worked without dynamic flag
onStepFinish: step => {
  for (const toolCall of step.toolCalls) {
    switch (toolCall.toolName) {
      case 'weather':
        console.log(toolCall.input.location); // typed as string
        break;
      case 'search':
        console.log(toolCall.input.query); // typed as string
        break;
    }
  }
};
```

AI SDK 5.0

```ts
// Must check dynamic flag first for type narrowing
onStepFinish: step => {
  for (const toolCall of step.toolCalls) {
    // Check if it's a dynamic tool first
    if (toolCall.dynamic) {
      console.log('Dynamic tool:', toolCall.toolName);
      console.log('Input:', toolCall.input); // typed as unknown
      continue;
    }

    // Now TypeScript knows it's a static tool
    switch (toolCall.toolName) {
      case 'weather':
        console.log(toolCall.input.location); // typed as string
        break;
      case 'search':
        console.log(toolCall.input.query); // typed as string
        break;
    }
  }
};
```

Tool UI Part State Changes

Tool UI parts now use more granular states that better represent the streaming lifecycle and error handling.

AI SDK 4.0

```tsx
// Old states
{
  message.parts.map(part => {
    if (part.type === 'tool-invocation') {
      switch (part.toolInvocation.state) {
        case 'partial-call':
          return <div>Loading...</div>;
        case 'call':
          return (
            <div>
              Tool called with {JSON.stringify(part.toolInvocation.args)}
            </div>
          );
        case 'result':
          return <div>Result: {part.toolInvocation.result}</div>;
      }
    }
  });
}
```

AI SDK 5.0

```tsx
// New granular states
{
  message.parts.map(part => {
    switch (part.type) {
      case 'tool-getWeatherInformation':
        switch (part.state) {
          case 'input-streaming':
            return <pre>{JSON.stringify(part.input, null, 2)}</pre>;
          case 'input-available':
            return <div>Getting weather for {part.input.city}...</div>;
          case 'output-available':
            return <div>Weather: {part.output}</div>;
          case 'output-error':
            return <div>Error: {part.errorText}</div>;
        }
    }
  });
}
```

State Changes:

  • partial-call → input-streaming (tool input being streamed)
  • call → input-available (tool input complete, ready to execute)
  • result → output-available (tool execution successful)
  • New: output-error (tool execution failed)
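For migrating any persisted tool part states, the renames above amount to a simple lookup. This is a hypothetical helper for illustration, not part of the SDK:

```typescript
// Map of v4 tool invocation states to their v5 equivalents.
// Note: 'output-error' is new in v5 and has no v4 counterpart.
const toolStateMap = {
  'partial-call': 'input-streaming',
  call: 'input-available',
  result: 'output-available',
} as const;

type V4ToolState = keyof typeof toolStateMap;

function toV5ToolState(state: V4ToolState): string {
  return toolStateMap[state];
}

console.log(toV5ToolState('partial-call')); // 'input-streaming'
```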

Rendering Tool Invocations (Catch-All Pattern)

In v4, you typically rendered tool invocations using a catch-all tool-invocation type. In v5, the recommended approach is to handle each tool specifically using its typed part name (e.g., tool-getWeather). However, if you need a catch-all pattern for rendering all tool invocations the same way, you can use the isToolUIPart and getToolName helper functions as a fallback.

AI SDK 4.0

```tsx
{
  message.parts.map((part, index) => {
    switch (part.type) {
      case 'text':
        return <div key={index}>{part.text}</div>;
      case 'tool-invocation':
        const { toolInvocation } = part;
        return (
          <details key={`tool-${toolInvocation.toolCallId}`}>
            <summary>
              <span>{toolInvocation.toolName}</span>
              {toolInvocation.state === 'result' ? (
                <span>Click to expand</span>
              ) : (
                <span>calling...</span>
              )}
            </summary>
            {toolInvocation.state === 'result' ? (
              <div>
                <pre>{JSON.stringify(toolInvocation.result, null, 2)}</pre>
              </div>
            ) : null}
          </details>
        );
    }
  });
}
```

AI SDK 5.0

```tsx
import { isToolUIPart, getToolName } from 'ai';

{
  message.parts.map((part, index) => {
    switch (part.type) {
      case 'text':
        return <div key={index}>{part.text}</div>;
      default:
        if (isToolUIPart(part)) {
          const toolInvocation = part;
          return (
            <details key={`tool-${toolInvocation.toolCallId}`}>
              <summary>
                <span>{getToolName(toolInvocation)}</span>
                {toolInvocation.state === 'output-available' ? (
                  <span>Click to expand</span>
                ) : (
                  <span>calling...</span>
                )}
              </summary>
              {toolInvocation.state === 'output-available' ? (
                <div>
                  <pre>{JSON.stringify(toolInvocation.output, null, 2)}</pre>
                </div>
              ) : null}
            </details>
          );
        }
    }
  });
}
```

Media Type Standardization

mimeType has been renamed to mediaType for consistency. Both image and file types are supported in model messages.

AI SDK 4.0

```ts
const result = await generateText({
  model: someModel,
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'What do you see?' },
        {
          type: 'image',
          image: new Uint8Array([0, 1, 2, 3]),
          mimeType: 'image/png',
        },
        {
          type: 'file',
          data: contents,
          mimeType: 'application/pdf',
        },
      ],
    },
  ],
});
```

AI SDK 5.0

```ts
const result = await generateText({
  model: someModel,
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'What do you see?' },
        {
          type: 'image',
          image: new Uint8Array([0, 1, 2, 3]),
          mediaType: 'image/png',
        },
        {
          type: 'file',
          data: contents,
          mediaType: 'application/pdf',
        },
      ],
    },
  ],
});
```

Reasoning Support

Reasoning Text Property Rename

The .reasoning property has been renamed to .reasoningText for multi-step generations.

AI SDK 4.0

```ts
for (const step of steps) {
  console.log(step.reasoning);
}
```

AI SDK 5.0

```ts
for (const step of steps) {
  console.log(step.reasoningText);
}
```

Generate Text Reasoning Property Changes

In generateText() and streamText() results, reasoning properties have been renamed.

AI SDK 4.0

```ts
const result = await generateText({
  model: anthropic('claude-sonnet-4-20250514'),
  prompt: 'Explain your reasoning',
});

console.log(result.reasoning); // String reasoning text
console.log(result.reasoningDetails); // Array of reasoning details
```

AI SDK 5.0

```ts
const result = await generateText({
  model: anthropic('claude-sonnet-4-20250514'),
  prompt: 'Explain your reasoning',
});

console.log(result.reasoningText); // String reasoning text
console.log(result.reasoning); // Array of reasoning details
```

Continuation Steps Removal

The experimental_continueSteps option has been removed from generateText().

AI SDK 4.0

```ts
const result = await generateText({
  experimental_continueSteps: true,
  // ...
});
```

AI SDK 5.0

```ts
const result = await generateText({
  // experimental_continueSteps has been removed
  // Use newer models with higher output token limits instead
  // ...
});
```

Image Generation Changes

Image model settings have been moved to providerOptions.

AI SDK 4.0

```ts
await generateImage({
  model: luma.image('photon-flash-1', {
    maxImagesPerCall: 5,
    pollIntervalMillis: 500,
  }),
  prompt,
  n: 10,
});
```

AI SDK 5.0

```ts
await generateImage({
  model: luma.image('photon-flash-1'),
  prompt,
  n: 10,
  maxImagesPerCall: 5,
  providerOptions: {
    luma: { pollIntervalMillis: 500 },
  },
});
```

Step Result Changes

Step Type Removal

The stepType property has been removed from step results.

AI SDK 4.0

```ts
steps.forEach(step => {
  switch (step.stepType) {
    case 'initial':
      console.log('Initial step');
      break;
    case 'tool-result':
      console.log('Tool result step');
      break;
    case 'done':
      console.log('Final step');
      break;
  }
});
```

AI SDK 5.0

```ts
steps.forEach((step, index) => {
  if (index === 0) {
    console.log('Initial step');
  } else if (step.toolResults.length > 0) {
    console.log('Tool result step');
  } else {
    console.log('Final step');
  }
});
```

Step Control: maxSteps → stopWhen

For core functions like generateText and streamText, the maxSteps parameter has been replaced with stopWhen, which provides more flexible control over multi-step execution. The stopWhen parameter defines conditions for stopping the generation when the last step contains tool results. When multiple conditions are provided as an array, the generation stops if any condition is met.

AI SDK 4.0

```ts
// V4: Simple numeric limit
const result = await generateText({
  model: openai('gpt-4'),
  messages,
  maxSteps: 5, // Stop after a maximum of 5 steps
});

// useChat with maxSteps
const { messages } = useChat({
  maxSteps: 3, // Stop after a maximum of 3 steps
});
```

AI SDK 5.0

```ts
import { stepCountIs, hasToolCall } from 'ai';

// V5: Server-side - flexible stopping conditions with stopWhen
const result = await generateText({
  model: openai('gpt-4'),
  messages,
  // Only triggers when last step has tool results
  stopWhen: stepCountIs(5), // Stop at step 5 if tools were called
});

// Server-side - stop when specific tool is called
const result = await generateText({
  model: openai('gpt-4'),
  messages,
  stopWhen: hasToolCall('finalizeTask'), // Stop when finalizeTask tool is called
});
```

Common stopping patterns:

AI SDK 5.0

```ts
// Stop after N steps (equivalent to old maxSteps)
// Note: Only applies when the last step has tool results
stopWhen: stepCountIs(5);

// Stop when specific tool is called
stopWhen: hasToolCall('finalizeTask');

// Multiple conditions (stops if ANY condition is met)
stopWhen: [
  stepCountIs(10), // Maximum 10 steps
  hasToolCall('submitOrder'), // Or when order is submitted
];

// Custom condition based on step content
stopWhen: ({ steps }) => {
  const lastStep = steps[steps.length - 1];
  // Custom logic - only triggers if last step has tool results
  return lastStep?.text?.includes('COMPLETE');
};
```

Important: The stopWhen conditions are only evaluated when the last step contains tool results.
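Conceptually, a stopWhen condition is just a predicate over the steps taken so far, evaluated only when the last step produced tool results. The following is a simplified, hypothetical sketch of that behavior; the real SDK types and evaluation logic are richer:

```typescript
// Simplified stand-in types for illustration - not the SDK's own.
type Step = { toolResults: unknown[] };
type StopCondition = (ctx: { steps: Step[] }) => boolean;

// A stepCountIs-style condition: stop once `count` steps have run.
const stepCountIs =
  (count: number): StopCondition =>
  ({ steps }) =>
    steps.length >= count;

// Multiple conditions stop generation if ANY of them is met,
// but only when the last step contains tool results.
function shouldStop(conditions: StopCondition[], steps: Step[]): boolean {
  const lastStep = steps[steps.length - 1];
  if (!lastStep || lastStep.toolResults.length === 0) return false;
  return conditions.some(condition => condition({ steps }));
}
```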

Usage vs Total Usage

Usage properties now distinguish between single step and total usage.

AI SDK 4.0

```ts
// usage contained total token usage across all steps
console.log(result.usage);
```

AI SDK 5.0

```ts
// usage contains token usage from the final step only
console.log(result.usage);

// totalUsage contains total token usage across all steps
console.log(result.totalUsage);
```
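Conceptually, totalUsage is the per-step usage summed across all steps. A hypothetical sketch with simplified usage objects (the real usage object carries more fields):

```typescript
// Simplified per-step usage shape for illustration.
type Usage = { inputTokens: number; outputTokens: number };

// totalUsage is conceptually the sum of each step's usage,
// while `usage` reflects only the final step.
function sumUsage(stepUsages: Usage[]): Usage {
  return stepUsages.reduce(
    (total, step) => ({
      inputTokens: total.inputTokens + step.inputTokens,
      outputTokens: total.outputTokens + step.outputTokens,
    }),
    { inputTokens: 0, outputTokens: 0 },
  );
}
```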

AI SDK UI Changes


Package Structure Changes

@ai-sdk/rsc Package Extraction

The ai/rsc export has been extracted to a separate package @ai-sdk/rsc.

AI SDK 4.0

```ts
import { createStreamableValue } from 'ai/rsc';
```

AI SDK 5.0

```ts
import { createStreamableValue } from '@ai-sdk/rsc';
```

Don't forget to install the new package: `npm install @ai-sdk/rsc`

React UI Hooks Moved to @ai-sdk/react

The deprecated ai/react export has been removed in favor of @ai-sdk/react.

AI SDK 4.0

```ts
import { useChat } from 'ai/react';
```

AI SDK 5.0

```ts
import { useChat } from '@ai-sdk/react';
```

Don't forget to install the new package: `npm install @ai-sdk/react`

useChat Changes

The useChat hook has undergone significant changes in v5, with new transport architecture, removal of managed input state, and more.

maxSteps Removal

The maxSteps parameter has been removed from useChat. You should now use server-side stopWhen conditions for multi-step tool execution control, and manually submit tool results and trigger new messages for client-side tool calls.

AI SDK 4.0

```ts
const { messages, sendMessage } = useChat({
  maxSteps: 5, // Automatic tool result submission
});
```

AI SDK 5.0

```ts
// Server-side: Use stopWhen for multi-step control
import { streamText, convertToModelMessages, stepCountIs } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await streamText({
  model: openai('gpt-4'),
  messages: convertToModelMessages(messages),
  stopWhen: stepCountIs(5), // Stop after 5 steps with tool calls
});
```

```ts
// Client-side: Configure automatic submission
import { useChat } from '@ai-sdk/react';
import {
  DefaultChatTransport,
  lastAssistantMessageIsCompleteWithToolCalls,
} from 'ai';

const { messages, sendMessage, addToolOutput } = useChat({
  // Automatically submit when all tool results are available
  sendAutomaticallyWhen: lastAssistantMessageIsCompleteWithToolCalls,

  async onToolCall({ toolCall }) {
    const result = await executeToolCall(toolCall);

    // Important: Don't await addToolOutput inside onToolCall to avoid deadlocks
    addToolOutput({
      tool: toolCall.toolName,
      toolCallId: toolCall.toolCallId,
      output: result,
    });
  },
});
```

Important: When using sendAutomaticallyWhen, don't use await with addToolOutput inside onToolCall as it can cause deadlocks. The await is useful when you're not using automatic submission and need to ensure the messages are updated before manually calling sendMessage().

This change provides more flexibility for handling tool calls and aligns client behavior with server-side multi-step execution patterns.

For more details on the new tool submission approach, see the Tool Result Submission Changes section below.

Initial Messages Renamed

The initialMessages option has been renamed to messages.

AI SDK 4.0

```tsx
import { useChat, type Message } from '@ai-sdk/react';

function ChatComponent({ initialMessages }: { initialMessages: Message[] }) {
  const { messages } = useChat({
    initialMessages: initialMessages,
    // ...
  });

  // your component
}
```

AI SDK 5.0

```tsx
import { useChat, type UIMessage } from '@ai-sdk/react';

function ChatComponent({ initialMessages }: { initialMessages: UIMessage[] }) {
  const { messages } = useChat({
    messages: initialMessages,
    // ...
  });

  // your component
}
```

Sharing Chat Instances

In v4, you could share chat state between components by using the same id parameter in multiple useChat hooks.

AI SDK 4.0

```tsx
// Component A
const { messages } = useChat({
  id: 'shared-chat',
  api: '/api/chat',
});

// Component B - would share the same chat state
const { messages } = useChat({
  id: 'shared-chat',
  api: '/api/chat',
});
```

In v5, you need to explicitly share chat instances by passing a shared Chat instance.

AI SDK 5.0

```tsx
// e.g. Store Chat instance in React Context and create a custom hook

// Component A
const { chat } = useSharedChat(); // Custom hook that accesses shared Chat from context

const { messages, sendMessage } = useChat({
  chat, // Pass the shared chat instance
});

// Component B - shares the same chat instance
const { chat } = useSharedChat(); // Same hook to access shared Chat from context

const { messages } = useChat({
  chat, // Same shared chat instance
});
```

For a complete example of sharing chat state across components, see the Share Chat State Across Components recipe.

Chat Transport Architecture

Configuration is now handled through transport objects instead of direct API options.

AI SDK 4.0

```tsx
import { useChat } from '@ai-sdk/react';

const { messages } = useChat({
  api: '/api/chat',
  credentials: 'include',
  headers: { 'Custom-Header': 'value' },
});
```

AI SDK 5.0

```tsx
import { useChat } from '@ai-sdk/react';
import { DefaultChatTransport } from 'ai';

const { messages } = useChat({
  transport: new DefaultChatTransport({
    api: '/api/chat',
    credentials: 'include',
    headers: { 'Custom-Header': 'value' },
  }),
});
```

Removed Managed Input State

The useChat hook no longer manages input state internally. You must now manage input state manually.

AI SDK 4.0

```tsx
import { useChat } from '@ai-sdk/react';

export default function Page() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: '/api/chat',
  });

  return (
    <form onSubmit={handleSubmit}>
      <input value={input} onChange={handleInputChange} />
      <button type="submit">Send</button>
    </form>
  );
}
```

AI SDK 5.0

```tsx
import { useChat } from '@ai-sdk/react';
import { DefaultChatTransport } from 'ai';
import { useState } from 'react';

export default function Page() {
  const [input, setInput] = useState('');
  const { messages, sendMessage } = useChat({
    transport: new DefaultChatTransport({ api: '/api/chat' }),
  });

  const handleSubmit = e => {
    e.preventDefault();
    sendMessage({ text: input });
    setInput('');
  };

  return (
    <form onSubmit={handleSubmit}>
      <input value={input} onChange={e => setInput(e.target.value)} />
      <button type="submit">Send</button>
    </form>
  );
}
```

Message Sending: append → sendMessage

The append function has been replaced with sendMessage and requires structured message format.

AI SDK 4.0

```tsx
const { append } = useChat();

// Simple text message
append({ role: 'user', content: 'Hello' });

// With custom body
append(
  {
    role: 'user',
    content: 'Hello',
  },
  { body: { imageUrl: 'https://...' } },
);
```

AI SDK 5.0

```tsx
const { sendMessage } = useChat();

// Simple text message (most common usage)
sendMessage({ text: 'Hello' });

// Or with explicit parts array
sendMessage({
  parts: [{ type: 'text', text: 'Hello' }],
});

// With custom body (via request options)
sendMessage(
  { role: 'user', parts: [{ type: 'text', text: 'Hello' }] },
  { body: { imageUrl: 'https://...' } },
);
```

Message Regeneration: reload → regenerate

The reload function has been renamed to regenerate with enhanced functionality.

AI SDK 4.0

```tsx
const { reload } = useChat();

// Regenerate last message
reload();
```

AI SDK 5.0

```tsx
const { regenerate } = useChat();

// Regenerate last message
regenerate();

// Regenerate specific message
regenerate({ messageId: 'message-123' });
```

onResponse Removal

The onResponse callback has been removed from useChat and useCompletion.

AI SDK 4.0

```tsx
const { messages } = useChat({
  onResponse(response) {
    // handle response
  },
});
```

AI SDK 5.0

```tsx
const { messages } = useChat({
  // onResponse is no longer available
});
```

Send Extra Message Fields Default

The sendExtraMessageFields option has been removed and is now the default behavior.

AI SDK 4.0

```tsx
const { messages } = useChat({
  sendExtraMessageFields: true,
});
```

AI SDK 5.0

```tsx
const { messages } = useChat({
  // sendExtraMessageFields is now the default
});
```

Keep Last Message on Error Removal

The keepLastMessageOnError option has been removed as it's no longer needed.

AI SDK 4.0

```tsx
const { messages } = useChat({
  keepLastMessageOnError: true,
});
```

AI SDK 5.0

```tsx
const { messages } = useChat({
  // keepLastMessageOnError is no longer needed
});
```

Chat Request Options Changes

The data and allowEmptySubmit options have been removed from ChatRequestOptions.

AI SDK 4.0

```tsx
handleSubmit(e, {
  data: { imageUrl: 'https://...' },
  body: { custom: 'value' },
  allowEmptySubmit: true,
});
```

AI SDK 5.0

```tsx
sendMessage(
  {
    /* yourMessage */
  },
  {
    body: {
      custom: 'value',
      imageUrl: 'https://...', // Move data to body
    },
  },
);
```

Request Options Type Rename

RequestOptions has been renamed to CompletionRequestOptions.

AI SDK 4.0

```ts
import type { RequestOptions } from 'ai';
```

AI SDK 5.0

```ts
import type { CompletionRequestOptions } from 'ai';
```

addToolResult Renamed to addToolOutput

The addToolResult method has been renamed to addToolOutput. Additionally, the result parameter has been renamed to output for consistency with other tool-related APIs.

AI SDK 4.0

```tsx
const { addToolResult } = useChat();

// Add tool result with 'result' parameter
addToolResult({
  toolCallId: 'tool-call-123',
  result: 'Weather: 72°F, sunny',
});
```

AI SDK 5.0

```tsx
const { addToolOutput } = useChat();

// Add tool output with 'output' parameter and 'tool' name for type safety
addToolOutput({
  tool: 'getWeather',
  toolCallId: 'tool-call-123',
  output: 'Weather: 72°F, sunny',
});
```

addToolResult is still available but deprecated. It will be removed in version 6.

Tool Result Submission Changes

The automatic tool result submission behavior has been updated in useChat and the Chat component. You now have more control and flexibility over when tool results are submitted.

  • onToolCall no longer supports returning values to automatically submit tool results
  • You must explicitly call addToolOutput to provide tool results
  • Use sendAutomaticallyWhen with lastAssistantMessageIsCompleteWithToolCalls helper for automatic submission
  • Important: Don't use await with addToolOutput inside onToolCall to avoid deadlocks
  • The maxSteps parameter has been removed from the Chat component and useChat hook
  • For multi-step tool execution, use server-side stopWhen conditions instead (see maxSteps Removal)

AI SDK 4.0

```tsx
const { messages, sendMessage, addToolResult } = useChat({
  maxSteps: 5, // Removed in v5

  // Automatic submission by returning a value
  async onToolCall({ toolCall }) {
    if (toolCall.toolName === 'getLocation') {
      const cities = ['New York', 'Los Angeles', 'Chicago', 'San Francisco'];
      return cities[Math.floor(Math.random() * cities.length)];
    }
  },
});
```

AI SDK 5.0

```tsx
import { useChat } from '@ai-sdk/react';
import {
  DefaultChatTransport,
  lastAssistantMessageIsCompleteWithToolCalls,
} from 'ai';

const { messages, sendMessage, addToolOutput } = useChat({
  // Automatic submission with helper
  sendAutomaticallyWhen: lastAssistantMessageIsCompleteWithToolCalls,

  async onToolCall({ toolCall }) {
    if (toolCall.toolName === 'getLocation') {
      const cities = ['New York', 'Los Angeles', 'Chicago', 'San Francisco'];

      // Important: Don't await inside onToolCall to avoid deadlocks
      addToolOutput({
        tool: 'getLocation',
        toolCallId: toolCall.toolCallId,
        output: cities[Math.floor(Math.random() * cities.length)],
      });
    }
  },
});
```

Loading State Changes

The deprecated isLoading helper has been removed in favor of status.

AI SDK 4.0

```tsx
const { isLoading } = useChat();
```

AI SDK 5.0

```tsx
const { status } = useChat();
// Use status instead of isLoading for more granular control
```

Resume Stream Support

The resume functionality has been moved from experimental_resume to resumeStream.

AI SDK 4.0

```tsx
// Resume was experimental
const { messages } = useChat({
  experimental_resume: true,
});
```

AI SDK 5.0

```tsx
const { messages } = useChat({
  resumeStream: true, // Resume interrupted streams
});
```

Dynamic Body Values

In v4, the body option in useChat configuration would dynamically update with component state changes. In v5, the body value is only captured at the first render and remains static throughout the component lifecycle.

AI SDK 4.0

```tsx
const [temperature, setTemperature] = useState(0.7);

const { messages } = useChat({
  api: '/api/chat',
  body: {
    temperature, // This would update dynamically in v4
  },
});
```

AI SDK 5.0

```tsx
const [temperature, setTemperature] = useState(0.7);

// Option 1: Use request-level configuration (Recommended)
const { messages, sendMessage } = useChat({
  transport: new DefaultChatTransport({ api: '/api/chat' }),
});

// Pass dynamic values at request time
sendMessage(
  { text: input },
  {
    body: {
      temperature, // Current temperature value at request time
    },
  },
);

// Option 2: Use function configuration with useRef
const temperatureRef = useRef(temperature);
temperatureRef.current = temperature;

const { messages } = useChat({
  transport: new DefaultChatTransport({
    api: '/api/chat',
    body: () => ({
      temperature: temperatureRef.current,
    }),
  }),
});
```

For more details on request configuration, see the Chatbot guide.

Usage Information

In v4, usage information was directly accessible through the onFinish callback's options parameter. In v5, usage data is attached as metadata to individual messages using the messageMetadata function in toUIMessageStreamResponse.

AI SDK 4.0

```tsx
const { messages } = useChat({
  onFinish(message, options) {
    const usage = options.usage;
    console.log('Usage:', usage);
  },
});
```

AI SDK 5.0

```ts
import { openai } from '@ai-sdk/openai';
import {
  convertToModelMessages,
  streamText,
  UIMessage,
  type LanguageModelUsage,
} from 'ai';

// Create a new metadata type (optional for type-safety)
type MyMetadata = {
  totalUsage: LanguageModelUsage;
};

// Create a new custom message type with your own metadata
export type MyUIMessage = UIMessage<MyMetadata>;

export async function POST(req: Request) {
  const { messages }: { messages: MyUIMessage[] } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages: convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse({
    originalMessages: messages,
    messageMetadata: ({ part }) => {
      // Send total usage when generation is finished
      if (part.type === 'finish') {
        return { totalUsage: part.totalUsage };
      }
    },
  });
}
```

Then, on the client, you can access the message-level metadata.

AI SDK 5.0 - Client

```tsx
'use client';

import { useChat } from '@ai-sdk/react';
import type { MyUIMessage } from './api/chat/route';
import { DefaultChatTransport } from 'ai';

export default function Chat() {
  // Use custom message type defined on the server (optional for type-safety)
  const { messages } = useChat<MyUIMessage>({
    transport: new DefaultChatTransport({
      api: '/api/chat',
    }),
  });

  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      {messages.map(m => (
        <div key={m.id} className="whitespace-pre-wrap">
          {m.role === 'user' ? 'User: ' : 'AI: '}
          {m.parts.map(part => {
            if (part.type === 'text') {
              return part.text;
            }
          })}
          {/* Render usage via metadata */}
          {m.metadata?.totalUsage && (
            <div>Total usage: {m.metadata?.totalUsage.totalTokens} tokens</div>
          )}
        </div>
      ))}
    </div>
  );
}
```

You can also access your metadata from the onFinish callback of useChat:

AI SDK 5.0 - onFinish

```tsx
'use client';

import { useChat } from '@ai-sdk/react';
import type { MyUIMessage } from './api/chat/route';
import { DefaultChatTransport } from 'ai';

export default function Chat() {
  // Use custom message type defined on the server (optional for type-safety)
  const { messages } = useChat<MyUIMessage>({
    transport: new DefaultChatTransport({
      api: '/api/chat',
    }),
    onFinish: ({ message }) => {
      // Access message metadata via onFinish callback
      console.log(message.metadata?.totalUsage);
    },
  });
}
```

Request Body Preparation: experimental_prepareRequestBody → prepareSendMessagesRequest

The experimental_prepareRequestBody option has been replaced with prepareSendMessagesRequest in the transport configuration.

AI SDK 4.0

```tsx
import { useChat } from '@ai-sdk/react';

const { messages } = useChat({
  api: '/api/chat',
  // Only send the last message to the server:
  experimental_prepareRequestBody({ messages, id }) {
    return { message: messages[messages.length - 1], id };
  },
});
```

AI SDK 5.0

```tsx
import { useChat } from '@ai-sdk/react';
import { DefaultChatTransport } from 'ai';

const { messages } = useChat({
  transport: new DefaultChatTransport({
    api: '/api/chat',
    // Only send the last message to the server:
    prepareSendMessagesRequest({ messages, id }) {
      return { body: { message: messages[messages.length - 1], id } };
    },
  }),
});
```

@ai-sdk/vue Changes

The Vue.js integration has been completely restructured, replacing the useChat composable with a Chat class.

useChat Replaced with Chat Class

@ai-sdk/vue v1

```vue
<script setup>
import { useChat } from '@ai-sdk/vue';

const { messages, input, handleSubmit } = useChat({
  api: '/api/chat',
});
</script>
```

@ai-sdk/vue v2

```vue
<script setup lang="ts">
import { Chat } from '@ai-sdk/vue';
import { DefaultChatTransport } from 'ai';
import { ref } from 'vue';

const input = ref('');
const chat = new Chat({
  transport: new DefaultChatTransport({ api: '/api/chat' }),
});

const handleSubmit = (e: Event) => {
  e.preventDefault();
  chat.sendMessage({ text: input.value });
  input.value = '';
};
</script>
```

Message Structure Changes

Messages now use a parts array instead of a content string.

@ai-sdk/vue v1

```vue
<template>
  <div v-for="message in messages" :key="message.id">
    <div>{{ message.role }}: {{ message.content }}</div>
  </div>
</template>
```

@ai-sdk/vue v2

```vue
<template>
  <div v-for="message in chat.messages" :key="message.id">
    <div>{{ message.role }}:</div>
    <div v-for="part in message.parts" :key="part.type">
      <span v-if="part.type === 'text'">{{ part.text }}</span>
    </div>
  </div>
</template>
```

@ai-sdk/svelte Changes

The Svelte integration has also been updated with new constructor patterns and readonly properties.

Constructor API Changes

@ai-sdk/svelte v1

```ts
import { Chat } from '@ai-sdk/svelte';

const chatInstance = new Chat({
  api: '/api/chat',
});
```

@ai-sdk/svelte v2

```ts
import { Chat } from '@ai-sdk/svelte';
import { DefaultChatTransport } from 'ai';

const chatInstance = new Chat(() => ({
  transport: new DefaultChatTransport({ api: '/api/chat' }),
}));
```

Properties Made Readonly

Properties are now readonly and must be updated using setter methods.

@ai-sdk/svelte v1

```ts
// Direct property mutation was allowed
chatInstance.messages = [...chatInstance.messages, newMessage];
```

@ai-sdk/svelte v2

```ts
// Must use setter methods
chatInstance.setMessages([...chatInstance.messages, newMessage]);
```

Removed Managed Input

Like React and Vue, input management has been removed from the Svelte integration.

@ai-sdk/svelte v1

```ts
// Input was managed internally
const { messages, input, handleSubmit } = chatInstance;
```

@ai-sdk/svelte v2

```ts
// Must manage input state manually
let input = '';
const { messages, sendMessage } = chatInstance;

const handleSubmit = () => {
  sendMessage({ text: input });
  input = '';
};
```

@ai-sdk/ui-utils Package Removal

The @ai-sdk/ui-utils package has been removed and its exports moved to the main ai package.

AI SDK 4.0

```ts
import { getTextFromDataUrl } from '@ai-sdk/ui-utils';
```

AI SDK 5.0

```ts
import { getTextFromDataUrl } from 'ai';
```

Note: processDataStream was removed entirely in v5.0. Use readUIMessageStream instead for processing UI message streams, or use the more configurable Chat/useChat APIs for most use cases.

useCompletion Changes

The data property has been removed from the useCompletion hook.

AI SDK 4.0

```tsx
const {
  completion,
  handleSubmit,
  data, // No longer available
} = useCompletion();
```

AI SDK 5.0

```tsx
const {
  completion,
  handleSubmit,
  // data property removed entirely
} = useCompletion();
```

useAssistant Removal

The useAssistant hook has been removed.

AI SDK 4.0

```ts
import { useAssistant } from '@ai-sdk/react';
```

AI SDK 5.0

```ts
// useAssistant has been removed
// Use useChat with appropriate configuration instead
```

For an implementation of the assistant functionality with AI SDK v5, see this example repository.

Attachments → File Parts

The experimental_attachments property has been replaced with the parts array.

AI SDK 4.0

```tsx
{
  messages.map(message => (
    <div className="flex flex-col gap-2">
      {message.content}

      <div className="flex flex-row gap-2">
        {message.experimental_attachments?.map((attachment, index) =>
          attachment.contentType?.includes('image/') ? (
            <img src={attachment.url} alt={attachment.name} />
          ) : attachment.contentType?.includes('text/') ? (
            <div className="w-32 h-24 p-2 overflow-hidden text-xs border rounded-md ellipsis text-zinc-500">
              {getTextFromDataUrl(attachment.url)}
            </div>
          ) : null,
        )}
      </div>
    </div>
  ));
}
```

AI SDK 5.0

```tsx
{
  messages.map(message => (
    <div>
      {message.parts.map((part, index) => {
        if (part.type === 'text') {
          return <div key={index}>{part.text}</div>;
        }

        if (part.type === 'file' && part.mediaType?.startsWith('image/')) {
          return (
            <div key={index}>
              <img src={part.url} />
            </div>
          );
        }
      })}
    </div>
  ));
}
```

Embedding Changes

Provider Options for Embeddings

Embedding model settings now use provider options instead of model parameters.

AI SDK 4.0

```ts
const { embedding } = await embed({
  model: openai('text-embedding-3-small', {
    dimensions: 10,
  }),
});
```

AI SDK 5.0

```ts
const { embedding } = await embed({
  model: openai('text-embedding-3-small'),
  providerOptions: {
    openai: {
      dimensions: 10,
    },
  },
});
```

Raw Response → Response

The rawResponse property has been renamed to response.

AI SDK 4.0

```ts
const { rawResponse } = await embed(/* */);
```

AI SDK 5.0

```ts
const { response } = await embed(/* */);
```

Parallel Requests in embedMany

embedMany now makes parallel requests with a configurable maxParallelCalls option.

AI SDK 5.0

```ts
const { embeddings, usage } = await embedMany({
  maxParallelCalls: 2, // Limit parallel requests
  model: openai.textEmbeddingModel('text-embedding-3-small'),
  values: [
    'sunny day at the beach',
    'rainy afternoon in the city',
    'snowy night in the mountains',
  ],
});
```
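
Conceptually, maxParallelCalls bounds how many embedding requests are in flight at once. The batching helper below illustrates the idea in plain TypeScript (a sketch only, not the SDK's implementation):

```typescript
// Illustrative only: run tasks batch by batch, with at most `limit` in flight
async function mapWithLimit<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = [];
  for (let i = 0; i < items.length; i += limit) {
    const batch = items.slice(i, i + limit);
    // All tasks in a batch run concurrently; batches run sequentially
    results.push(...(await Promise.all(batch.map(fn))));
  }
  return results;
}

// e.g. process three values two at a time
const result = await mapWithLimit(['a', 'b', 'c'], 2, async v => v.toUpperCase());
console.log(result); // ['A', 'B', 'C']
```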

LangChain Adapter Moved to @ai-sdk/langchain

The LangChainAdapter has been moved to @ai-sdk/langchain and the API has been updated to use UI message streams.

AI SDK 4.0

```ts
import { LangChainAdapter } from 'ai';

const response = LangChainAdapter.toDataStreamResponse(stream);
```

AI SDK 5.0

```ts
import { toUIMessageStream } from '@ai-sdk/langchain';
import { createUIMessageStreamResponse } from 'ai';

const response = createUIMessageStreamResponse({
  stream: toUIMessageStream(stream),
});
```

Don't forget to install the new package:

```shell
npm install @ai-sdk/langchain
```

LlamaIndex Adapter Moved to @ai-sdk/llamaindex

The LlamaIndexAdapter has been extracted to a separate package @ai-sdk/llamaindex and follows the same UI message stream pattern.

AI SDK 4.0

```ts
import { LlamaIndexAdapter } from 'ai';

const response = LlamaIndexAdapter.toDataStreamResponse(stream);
```

AI SDK 5.0

```ts
import { toUIMessageStream } from '@ai-sdk/llamaindex';
import { createUIMessageStreamResponse } from 'ai';

const response = createUIMessageStreamResponse({
  stream: toUIMessageStream(stream),
});
```

Don't forget to install the new package:

```shell
npm install @ai-sdk/llamaindex
```

Streaming Architecture


The streaming architecture has been completely redesigned in v5 to support better content differentiation, concurrent streaming of multiple parts, and improved real-time UX.

Stream Protocol Changes

Stream Protocol: Single Chunks → Start/Delta/End Pattern

The fundamental streaming pattern has changed from single chunks to a three-phase pattern with unique IDs for each content block.

AI SDK 4.0

```ts
for await (const chunk of result.fullStream) {
  switch (chunk.type) {
    case 'text-delta': {
      process.stdout.write(chunk.textDelta);
      break;
    }
  }
}
```

AI SDK 5.0

```ts
for await (const chunk of result.fullStream) {
  switch (chunk.type) {
    case 'text-start': {
      // New: Initialize a text block with unique ID
      console.log(`Starting text block: ${chunk.id}`);
      break;
    }
    case 'text-delta': {
      // Changed: Now includes ID and uses 'delta' property
      process.stdout.write(chunk.delta); // Changed from 'textDelta'
      break;
    }
    case 'text-end': {
      // New: Finalize the text block
      console.log(`Completed text block: ${chunk.id}`);
      break;
    }
  }
}
```
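
The start/delta/end lifecycle can be illustrated with a small framework-free reducer that assembles text blocks keyed by ID. The chunk types below are simplified stand-ins for illustration, not the SDK's actual types:

```typescript
// Simplified chunk shapes for illustration only (not the SDK's types)
type TextChunk =
  | { type: 'text-start'; id: string }
  | { type: 'text-delta'; id: string; delta: string }
  | { type: 'text-end'; id: string };

// Assemble complete text blocks from a stream of start/delta/end chunks
function assembleTextBlocks(chunks: TextChunk[]): Record<string, string> {
  const blocks: Record<string, string> = {};
  for (const chunk of chunks) {
    if (chunk.type === 'text-start') blocks[chunk.id] = '';
    if (chunk.type === 'text-delta')
      blocks[chunk.id] = (blocks[chunk.id] ?? '') + chunk.delta;
  }
  return blocks;
}

const blocks = assembleTextBlocks([
  { type: 'text-start', id: 'a' },
  { type: 'text-delta', id: 'a', delta: 'Hello, ' },
  { type: 'text-delta', id: 'a', delta: 'world' },
  { type: 'text-end', id: 'a' },
]);
console.log(blocks.a); // "Hello, world"
```

Because every chunk carries an ID, multiple blocks can stream concurrently without their deltas getting mixed together.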

Reasoning Streaming Pattern

Reasoning content now follows the same start/delta/end pattern:

AI SDK 4.0

```ts
for await (const chunk of result.fullStream) {
  switch (chunk.type) {
    case 'reasoning': {
      // Single chunk with full reasoning text
      console.log('Reasoning:', chunk.text);
      break;
    }
  }
}
```

AI SDK 5.0

```ts
for await (const chunk of result.fullStream) {
  switch (chunk.type) {
    case 'reasoning-start': {
      console.log(`Starting reasoning block: ${chunk.id}`);
      break;
    }
    case 'reasoning-delta': {
      process.stdout.write(chunk.delta);
      break;
    }
    case 'reasoning-end': {
      console.log(`Completed reasoning block: ${chunk.id}`);
      break;
    }
  }
}
```

Tool Input Streaming

Tool inputs can now be streamed as they're being generated:

AI SDK 5.0

```ts
for await (const chunk of result.fullStream) {
  switch (chunk.type) {
    case 'tool-input-start': {
      console.log(`Starting tool input for ${chunk.toolName}: ${chunk.id}`);
      break;
    }
    case 'tool-input-delta': {
      // Stream the JSON input as it's being generated
      process.stdout.write(chunk.delta);
      break;
    }
    case 'tool-input-end': {
      console.log(`Completed tool input: ${chunk.id}`);
      break;
    }
    case 'tool-call': {
      // Final tool call with complete input
      console.log('Tool call:', chunk.toolName, chunk.input);
      break;
    }
  }
}
```

onChunk Callback Changes

The onChunk callback now receives the new streaming chunk types with IDs and the start/delta/end pattern.

AI SDK 4.0

```ts
const result = streamText({
  model: openai('gpt-4.1'),
  prompt: 'Write a story',
  onChunk({ chunk }) {
    switch (chunk.type) {
      case 'text-delta': {
        // Single property with text content
        console.log('Text delta:', chunk.textDelta);
        break;
      }
    }
  },
});
```

AI SDK 5.0

```ts
const result = streamText({
  model: openai('gpt-4.1'),
  prompt: 'Write a story',
  onChunk({ chunk }) {
    switch (chunk.type) {
      case 'text-delta': {
        // Text chunks now use single 'text' type
        console.log('Text chunk:', chunk.text);
        break;
      }
      case 'reasoning': {
        // Reasoning chunks use single 'reasoning' type
        console.log('Reasoning chunk:', chunk.text);
        break;
      }
      case 'source': {
        console.log('Source chunk:', chunk);
        break;
      }
      case 'tool-call': {
        console.log('Tool call:', chunk.toolName, chunk.input);
        break;
      }
      case 'tool-input-start': {
        console.log(
          `Tool input started for ${chunk.toolName}:`,
          chunk.toolCallId,
        );
        break;
      }
      case 'tool-input-delta': {
        console.log(`Tool input delta for ${chunk.toolCallId}:`, chunk.delta);
        break;
      }
      case 'tool-result': {
        console.log('Tool result:', chunk.output);
        break;
      }
      case 'raw': {
        console.log('Raw chunk:', chunk);
        break;
      }
    }
  },
});
```

File Stream Parts Restructure

File parts in streams have been flattened.

AI SDK 4.0

```ts
for await (const chunk of result.fullStream) {
  switch (chunk.type) {
    case 'file': {
      console.log('Media type:', chunk.file.mediaType);
      console.log('File data:', chunk.file.data);
      break;
    }
  }
}
```

AI SDK 5.0

```ts
for await (const chunk of result.fullStream) {
  switch (chunk.type) {
    case 'file': {
      console.log('Media type:', chunk.mediaType);
      console.log('File data:', chunk.data);
      break;
    }
  }
}
```

Source Stream Parts Restructure

Source stream parts have been flattened.

AI SDK 4.0

```ts
for await (const part of result.fullStream) {
  if (part.type === 'source' && part.source.sourceType === 'url') {
    console.log('ID:', part.source.id);
    console.log('Title:', part.source.title);
    console.log('URL:', part.source.url);
  }
}
```

AI SDK 5.0

```ts
for await (const part of result.fullStream) {
  if (part.type === 'source' && part.sourceType === 'url') {
    console.log('ID:', part.id);
    console.log('Title:', part.title);
    console.log('URL:', part.url);
  }
}
```

Finish Event Changes

Stream finish events have been renamed for consistency.

AI SDK 4.0

```ts
for await (const part of result.fullStream) {
  switch (part.type) {
    case 'step-finish': {
      console.log('Step finished:', part.finishReason);
      break;
    }
    case 'finish': {
      console.log('Usage:', part.usage);
      break;
    }
  }
}
```

AI SDK 5.0

```ts
for await (const part of result.fullStream) {
  switch (part.type) {
    case 'finish-step': {
      // Renamed from 'step-finish'
      console.log('Step finished:', part.finishReason);
      break;
    }
    case 'finish': {
      console.log('Total Usage:', part.totalUsage); // Changed from 'usage'
      break;
    }
  }
}
```

Stream Protocol Changes

Proprietary Protocol → Server-Sent Events

The data stream protocol has been updated to use Server-Sent Events.

AI SDK 4.0

```ts
import { createDataStream, formatDataStreamPart } from 'ai';

const dataStream = createDataStream({
  execute: writer => {
    writer.writeData('initialized call');
    writer.write(formatDataStreamPart('text', 'Hello'));
    writer.writeSource({
      type: 'source',
      sourceType: 'url',
      id: 'source-1',
      url: 'https://example.com',
      title: 'Example Source',
    });
  },
});
```

AI SDK 5.0

```ts
import { createUIMessageStream } from 'ai';

const stream = createUIMessageStream({
  execute: ({ writer }) => {
    writer.write({ type: 'data', value: ['initialized call'] });
    writer.write({ type: 'text', value: 'Hello' });
    writer.write({
      type: 'source-url',
      value: {
        type: 'source',
        id: 'source-1',
        url: 'https://example.com',
        title: 'Example Source',
      },
    });
  },
});
```
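
Because the v5 protocol is standard Server-Sent Events, each chunk arrives on a `data:` line containing JSON. A minimal parser for a single such line might look like this (an illustrative sketch, not the SDK's parser, and assuming JSON payloads per event):

```typescript
// Illustrative SSE data-line parser (not the SDK's implementation)
function parseSseLine(line: string): unknown | null {
  // SSE frames data as lines starting with "data: "; ignore ids, comments, blanks
  if (!line.startsWith('data: ')) return null;
  return JSON.parse(line.slice('data: '.length));
}

console.log(parseSseLine('data: {"type":"text","value":"Hello"}'));
// { type: 'text', value: 'Hello' }
```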

Data Stream Response Helper Functions Renamed

The streaming API has been completely restructured from data streams to UI message streams.

AI SDK 4.0

```ts
// Express/Node.js servers
app.post('/stream', async (req, res) => {
  const result = streamText({
    model: openai('gpt-4.1'),
    prompt: 'Generate content',
  });

  result.pipeDataStreamToResponse(res);
});

// Next.js API routes
const result = streamText({
  model: openai('gpt-4.1'),
  prompt: 'Generate content',
});

return result.toDataStreamResponse();
```

AI SDK 5.0

```ts
// Express/Node.js servers
app.post('/stream', async (req, res) => {
  const result = streamText({
    model: openai('gpt-4.1'),
    prompt: 'Generate content',
  });

  result.pipeUIMessageStreamToResponse(res);
});

// Next.js API routes
const result = streamText({
  model: openai('gpt-4.1'),
  prompt: 'Generate content',
});

return result.toUIMessageStreamResponse();
```

Stream Transform Function Renaming

Various stream-related functions have been renamed for consistency.

AI SDK 4.0

```ts
import { DataStreamToSSETransformStream } from 'ai';
```

AI SDK 5.0

```ts
import { JsonToSseTransformStream } from 'ai';
```

Error Handling: getErrorMessage → onError

The getErrorMessage option in toDataStreamResponse has been replaced with onError in toUIMessageStreamResponse, providing more control over error forwarding to the client.

By default, error messages are NOT sent to the client to prevent leaking sensitive information. The onError callback allows you to explicitly control what error information is forwarded to the client.

AI SDK 4.0

```ts
return result.toDataStreamResponse({
  getErrorMessage: error => {
    // Return sanitized error data to send to client
    // Only return what you want the client to see!
    return {
      errorCode: 'STREAM_ERROR',
      message: 'An error occurred while processing your request',
      // In production, avoid sending error.message directly to prevent information leakage
    };
  },
});
```

AI SDK 5.0

```ts
return result.toUIMessageStreamResponse({
  onError: error => {
    // Return sanitized error data to send to client
    // Only return what you want the client to see!
    return {
      errorCode: 'STREAM_ERROR',
      message: 'An error occurred while processing your request',
      // In production, avoid sending error.message directly to prevent information leakage
    };
  },
});
```

Utility Changes

ID Generation Changes

The createIdGenerator() function now requires a size argument.

AI SDK 4.0

```ts
const generator = createIdGenerator({ prefix: 'msg' });
const id = generator(16); // Custom size at call time
```

AI SDK 5.0

```ts
const generator = createIdGenerator({ prefix: 'msg', size: 16 });
const id = generator(); // Fixed size from creation
```
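
The behavioral difference is that the size is now baked in at creation time rather than chosen per call. The plain helper below mimics that shape (a hypothetical stand-in, not the SDK's createIdGenerator; the alphabet and separator are assumptions):

```typescript
// Hypothetical stand-in: size is fixed at creation, like v5's createIdGenerator
function makeIdGenerator({ prefix, size }: { prefix: string; size: number }) {
  const alphabet = 'abcdefghijklmnopqrstuvwxyz0123456789';
  return () => {
    let id = '';
    for (let i = 0; i < size; i++) {
      id += alphabet[Math.floor(Math.random() * alphabet.length)];
    }
    return `${prefix}-${id}`;
  };
}

const generate = makeIdGenerator({ prefix: 'msg', size: 16 });
console.log(generate()); // e.g. "msg-k3f9x2..." (16 random characters)
```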

IDGenerator → IdGenerator

The type name has been updated.

AI SDK 4.0

```ts
import { IDGenerator } from 'ai';
```

AI SDK 5.0

```ts
import { IdGenerator } from 'ai';
```

Provider Interface Changes

Language Model V2 Import

LanguageModelV2 must now be imported from @ai-sdk/provider.

AI SDK 4.0

```ts
import { LanguageModelV2 } from 'ai';
```

AI SDK 5.0

```ts
import { LanguageModelV2 } from '@ai-sdk/provider';
```

Middleware Rename

LanguageModelV1Middleware has been renamed and moved.

AI SDK 4.0

```ts
import { LanguageModelV1Middleware } from 'ai';
```

AI SDK 5.0

```ts
import { LanguageModelV2Middleware } from '@ai-sdk/provider';
```

Usage Token Properties

Token usage properties have been renamed for consistency.

AI SDK 4.0

```ts
// In language model implementations
{
  usage: {
    promptTokens: 10,
    completionTokens: 20
  }
}
```

AI SDK 5.0

```ts
// In language model implementations
{
  usage: {
    inputTokens: 10,
    outputTokens: 20,
    totalTokens: 30 // Now required
  }
}
```
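
If you maintain a custom provider, mapping the old shape to the new one is mechanical. A sketch of such a mapper, computing totalTokens when the underlying API does not report it (the helper name and types are illustrative):

```typescript
// Illustrative types matching the v4 and v5 usage shapes from this guide
type V4Usage = { promptTokens: number; completionTokens: number };
type V5Usage = { inputTokens: number; outputTokens: number; totalTokens: number };

// Map a v4-style usage object to the required v5 shape
function toV5Usage(usage: V4Usage): V5Usage {
  return {
    inputTokens: usage.promptTokens,
    outputTokens: usage.completionTokens,
    totalTokens: usage.promptTokens + usage.completionTokens,
  };
}

console.log(toV5Usage({ promptTokens: 10, completionTokens: 20 }));
// { inputTokens: 10, outputTokens: 20, totalTokens: 30 }
```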

Stream Part Type Changes

The LanguageModelV2StreamPart type has been expanded to support the new streaming architecture with start/delta/end patterns and IDs.

AI SDK 4.0

```ts
// V4: Simple stream parts
type LanguageModelV2StreamPart =
  | { type: 'text-delta'; textDelta: string }
  | { type: 'reasoning'; text: string }
  | { type: 'tool-call'; toolCallId: string; toolName: string; input: string };
```

AI SDK 5.0

```ts
// V5: Enhanced stream parts with IDs and lifecycle events
type LanguageModelV2StreamPart =
  // Text blocks with start/delta/end pattern
  | {
      type: 'text-start';
      id: string;
      providerMetadata?: SharedV2ProviderMetadata;
    }
  | {
      type: 'text-delta';
      id: string;
      delta: string;
      providerMetadata?: SharedV2ProviderMetadata;
    }
  | {
      type: 'text-end';
      id: string;
      providerMetadata?: SharedV2ProviderMetadata;
    }
  // Reasoning blocks with start/delta/end pattern
  | {
      type: 'reasoning-start';
      id: string;
      providerMetadata?: SharedV2ProviderMetadata;
    }
  | {
      type: 'reasoning-delta';
      id: string;
      delta: string;
      providerMetadata?: SharedV2ProviderMetadata;
    }
  | {
      type: 'reasoning-end';
      id: string;
      providerMetadata?: SharedV2ProviderMetadata;
    }
  // Tool input streaming
  | {
      type: 'tool-input-start';
      id: string;
      toolName: string;
      providerMetadata?: SharedV2ProviderMetadata;
    }
  | {
      type: 'tool-input-delta';
      id: string;
      delta: string;
      providerMetadata?: SharedV2ProviderMetadata;
    }
  | {
      type: 'tool-input-end';
      id: string;
      providerMetadata?: SharedV2ProviderMetadata;
    }
  // Enhanced tool calls
  | {
      type: 'tool-call';
      toolCallId: string;
      toolName: string;
      input: string;
      providerMetadata?: SharedV2ProviderMetadata;
    }
  // Stream lifecycle events
  | { type: 'stream-start'; warnings: Array<LanguageModelV2CallWarning> }
  | {
      type: 'finish';
      usage: LanguageModelV2Usage;
      finishReason: LanguageModelV2FinishReason;
      providerMetadata?: SharedV2ProviderMetadata;
    };
```

Raw Response → Response

Provider response objects have been updated.

AI SDK 4.0

```ts
// In language model implementations
{
  rawResponse: {
    /* ... */
  }
}
```

AI SDK 5.0

```ts
// In language model implementations
{
  response: {
    /* ... */
  }
}
```

wrapLanguageModel now stable

AI SDK 4.0

```ts
import { experimental_wrapLanguageModel } from 'ai';
```

AI SDK 5.0

```ts
import { wrapLanguageModel } from 'ai';
```

activeTools No Longer Experimental

AI SDK 4.0

```ts
const result = await generateText({
  model: openai('gpt-4'),
  messages,
  tools: { weatherTool, locationTool },
  experimental_activeTools: ['weatherTool'],
});
```

AI SDK 5.0

```ts
const result = await generateText({
  model: openai('gpt-4'),
  messages,
  tools: { weatherTool, locationTool },
  activeTools: ['weatherTool'], // No longer experimental
});
```

prepareStep No Longer Experimental

The experimental_prepareStep option has been promoted and no longer requires the experimental prefix.

AI SDK 4.0

```ts
const result = await generateText({
  model: openai('gpt-4'),
  messages,
  tools: { weatherTool, locationTool },
  experimental_prepareStep: ({ steps, stepNumber, model }) => {
    console.log('Preparing step:', stepNumber);
    return {
      activeTools: ['weatherTool'],
      system: 'Be helpful and concise.',
    };
  },
});
```

AI SDK 5.0

```ts
const result = await generateText({
  model: openai('gpt-4'),
  messages,
  tools: { weatherTool, locationTool },
  prepareStep: ({ steps, stepNumber, model }) => {
    console.log('Preparing step:', stepNumber);
    return {
      activeTools: ['weatherTool'],
      system: 'Be helpful and concise.',
      // Can also configure toolChoice, model, etc.
    };
  },
});
```

The prepareStep function receives { steps, stepNumber, model } and can return:

  • model: Different model for this step
  • activeTools: Which tools to make available
  • toolChoice: Tool selection strategy
  • system: System message for this step
  • undefined: Use default settings
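The return options above can be exercised as a standalone function. The following sketch (names and thresholds are illustrative assumptions, not SDK APIs) restricts tools on the first step and falls back to defaults afterwards:

```typescript
// Sketch of a prepareStep-style function, written standalone so the branching
// is easy to test. The return shape mirrors the documented options; the
// step-number logic is an illustrative assumption.
interface StepSettings {
  activeTools?: string[];
  system?: string;
}

function prepareStep({ stepNumber }: { stepNumber: number }): StepSettings | undefined {
  if (stepNumber === 0) {
    // First step: only expose the weather tool with a focused system prompt.
    return {
      activeTools: ['weatherTool'],
      system: 'Answer the weather question first.',
    };
  }
  // Later steps: returning undefined means "use the default settings".
  return undefined;
}
```

You would pass a function with this shape as the `prepareStep` option of `generateText` or `streamText`.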

Temperature Default Removal

Temperature is no longer set to 0 by default.

AI SDK 4.0

```ts
await generateText({
  model: openai('gpt-4'),
  prompt: 'Write a creative story',
  // Implicitly temperature: 0
});
```

AI SDK 5.0

```ts
await generateText({
  model: openai('gpt-4'),
  prompt: 'Write a creative story',
  temperature: 0, // Must explicitly set
});
```

Message Persistence Changes


If you have persisted messages in a database, see the Data Migration Guide for comprehensive guidance on migrating your stored message data to the v5 format.

In v4, you would typically use helper functions like appendResponseMessages or appendClientMessage to format messages in the onFinish callback of streamText:

AI SDK 4.0

```ts
import {
  streamText,
  convertToCoreMessages,
  appendClientMessage,
  appendResponseMessages,
} from 'ai';
import { openai } from '@ai-sdk/openai';

const updatedMessages = appendClientMessage({
  messages,
  message: lastUserMessage,
});

const result = streamText({
  model: openai('gpt-4o'),
  messages: updatedMessages,
  experimental_generateMessageId: () => generateId(), // ID generation on streamText
  onFinish: async ({ responseMessages, usage }) => {
    // Use helper functions to format messages
    const finalMessages = appendResponseMessages({
      messages: updatedMessages,
      responseMessages,
    });

    // Save formatted messages to database
    await saveMessages(finalMessages);
  },
});
```

In v5, message persistence is now handled through the toUIMessageStreamResponse method, which automatically formats response messages in the UIMessage format:

AI SDK 5.0

```ts
import { streamText, convertToModelMessages, UIMessage } from 'ai';
import { openai } from '@ai-sdk/openai';

const messages: UIMessage[] = [
  // Your existing messages in UIMessage format
];

const result = streamText({
  model: openai('gpt-4o'),
  messages: convertToModelMessages(messages),
  // experimental_generateMessageId removed from here
});

return result.toUIMessageStreamResponse({
  originalMessages: messages, // IMPORTANT: Required to prevent duplicate messages
  generateMessageId: () => generateId(), // IMPORTANT: Required for proper message ID generation
  onFinish: ({ messages, responseMessage }) => {
    // messages contains all messages (original + response) in UIMessage format
    saveChat({ chatId, messages });

    // responseMessage contains just the generated message in UIMessage format
    saveMessage({ chatId, message: responseMessage });
  },
});
```

Important: When using toUIMessageStreamResponse, you should always provide both originalMessages and generateMessageId parameters. Without these, you may experience duplicate or repeated assistant messages in your UI. For more details, see Troubleshooting: Repeated Assistant Messages .

Message ID Generation

The experimental_generateMessageId option has been moved from streamText configuration to toUIMessageStreamResponse, as it's designed for use with UIMessages rather than ModelMessages.

AI SDK 4.0

```ts
const result = streamText({
  model: openai('gpt-4o'),
  messages,
  experimental_generateMessageId: () => generateId(),
});
```

AI SDK 5.0

```ts
const result = streamText({
  model: openai('gpt-4o'),
  messages: convertToModelMessages(messages),
});

return result.toUIMessageStreamResponse({
  generateMessageId: () => generateId(), // No longer experimental
  // ...
});
```

For more details on message IDs and persistence, see the Chatbot Message Persistence guide .

Using createUIMessageStream

For more complex scenarios, especially when working with data parts, you can use createUIMessageStream:

AI SDK 5.0 - Advanced

```ts
import {
  createUIMessageStream,
  createUIMessageStreamResponse,
  streamText,
  convertToModelMessages,
  UIMessage,
} from 'ai';
import { openai } from '@ai-sdk/openai';

const stream = createUIMessageStream({
  originalMessages: messages,
  generateId: generateId, // Required for proper message ID generation
  execute: ({ writer }) => {
    // Write custom data parts
    writer.write({
      type: 'data',
      data: { status: 'processing', timestamp: Date.now() },
    });

    // Stream the AI response
    const result = streamText({
      model: openai('gpt-4o'),
      messages: convertToModelMessages(messages),
    });

    writer.merge(result.toUIMessageStream());
  },
  onFinish: ({ messages }) => {
    // messages contains all messages (original + response + data parts) in UIMessage format
    saveChat({ chatId, messages });
  },
});

return createUIMessageStreamResponse({ stream });
```

Provider & Model Changes


OpenAI

Default Provider Instance Uses Responses API

In AI SDK 5, the default OpenAI provider instance uses the Responses API, while AI SDK 4 used the Chat Completions API. The Chat Completions API remains fully supported and you can use it with openai.chat(...).

AI SDK 4.0

```ts
import { openai } from '@ai-sdk/openai';

const defaultModel = openai('gpt-4.1-mini'); // Chat Completions API
```

AI SDK 5.0

```ts
import { openai } from '@ai-sdk/openai';

const defaultModel = openai('gpt-4.1-mini'); // Responses API

// Select a specific API when needed:
const chatCompletionsModel = openai.chat('gpt-4.1-mini');
const responsesModel = openai.responses('gpt-4.1-mini');
```

The Responses and Chat Completions APIs have different behavior and defaults. If you depend on the Chat Completions API, switch your model instance to openai.chat(...) and audit your configuration.

Strict Schemas (strictSchemas) with Responses API

In AI SDK 4.0, you could set the strictSchemas option on Responses models (which defaulted to true). This option has been renamed to strictJsonSchema in AI SDK 5.0 and now defaults to false.

AI SDK 4.0

```ts
import { z } from 'zod';
import { generateObject } from 'ai';
import { openai, type OpenAIResponsesProviderOptions } from '@ai-sdk/openai';

const result = await generateObject({
  model: openai.responses('gpt-4.1'),
  schema: z.object({
    // ...
  }),
  providerOptions: {
    openai: {
      strictSchemas: true, // default behaviour in AI SDK 4
    } satisfies OpenAIResponsesProviderOptions,
  },
});
```

AI SDK 5.0

```ts
import { z } from 'zod';
import { generateObject } from 'ai';
import { openai, type OpenAIResponsesProviderOptions } from '@ai-sdk/openai';

const result = await generateObject({
  model: openai('gpt-4.1-2024'), // uses Responses API
  schema: z.object({
    // ...
  }),
  providerOptions: {
    openai: {
      strictJsonSchema: true, // defaults to false, opt back in to the AI SDK 4 strict behaviour
    } satisfies OpenAIResponsesProviderOptions,
  },
});
```

If you call openai.chat(...) to use the Chat Completions API directly, you can type it with OpenAIChatLanguageModelOptions. AI SDK 5 adds the same strictJsonSchema option there as well.

Structured Outputs

The structuredOutputs option is now configured using provider options rather than as a setting on the model instance.

AI SDK 4.0

```ts
import { z } from 'zod';
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateObject({
  model: openai('gpt-4.1', { structuredOutputs: true }), // use Chat Completions API
  schema: z.object({ name: z.string() }),
});
```

AI SDK 5.0 (Chat Completions API)

```ts
import { z } from 'zod';
import { generateObject } from 'ai';
import { openai, type OpenAIChatLanguageModelOptions } from '@ai-sdk/openai';

const result = await generateObject({
  model: openai.chat('gpt-4.1'), // use Chat Completions API
  schema: z.object({ name: z.string() }),
  providerOptions: {
    openai: {
      structuredOutputs: true,
    } satisfies OpenAIChatLanguageModelOptions,
  },
});
```

Compatibility Option Removal

The compatibility option has been removed; strict compatibility mode is now the default.

AI SDK 4.0

```ts
const openai = createOpenAI({
  compatibility: 'strict',
});
```

AI SDK 5.0

```ts
const openai = createOpenAI({
  // strict compatibility is now the default
});
```

Legacy Function Calls Removal

The useLegacyFunctionCalls option has been removed.

AI SDK 4.0

```ts
const result = streamText({
  model: openai('gpt-4.1', { useLegacyFunctionCalls: true }),
});
```

AI SDK 5.0

```ts
const result = streamText({
  model: openai('gpt-4.1'),
});
```

Simulate Streaming

The simulateStreaming model option has been replaced with middleware.

AI SDK 4.0

```ts
const result = generateText({
  model: openai('gpt-4.1', { simulateStreaming: true }),
  prompt: 'Hello, world!',
});
```

AI SDK 5.0

```ts
import { simulateStreamingMiddleware, wrapLanguageModel } from 'ai';

const model = wrapLanguageModel({
  model: openai('gpt-4.1'),
  middleware: simulateStreamingMiddleware(),
});

const result = generateText({
  model,
  prompt: 'Hello, world!',
});
```

Google

Search Grounding is now a provider-defined tool

Search Grounding is now called "Google Search" and is exposed as a provider-defined tool.

AI SDK 4.0

```ts
const { text, providerMetadata } = await generateText({
  model: google('gemini-1.5-pro', {
    useSearchGrounding: true,
  }),
  prompt: 'List the top 5 San Francisco news from the past week.',
});
```

AI SDK 5.0

```ts
import { google } from '@ai-sdk/google';

const { text, sources, providerMetadata } = await generateText({
  model: google('gemini-1.5-pro'),
  prompt: 'List the top 5 San Francisco news from the past week.',
  tools: {
    google_search: google.tools.googleSearch({}),
  },
});
```

Amazon Bedrock

Snake Case → Camel Case

Provider options have been updated to use camelCase.

AI SDK 4.0

```ts
const result = await generateText({
  model: bedrock('amazon.titan-tg1-large'),
  prompt: 'Hello, world!',
  providerOptions: {
    bedrock: {
      reasoning_config: {
        /* ... */
      },
    },
  },
});
```

AI SDK 5.0

```ts
const result = await generateText({
  model: bedrock('amazon.titan-tg1-large'),
  prompt: 'Hello, world!',
  providerOptions: {
    bedrock: {
      reasoningConfig: {
        /* ... */
      },
    },
  },
});
```
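If you have Bedrock provider options persisted with snake_case keys (in config files or a database), a one-off converter can rename them during migration. This is a hypothetical helper, not part of the AI SDK, and it only renames top-level keys:

```typescript
// Hypothetical migration helper: rename snake_case provider-option keys to
// camelCase. Shallow by design; nested option objects are passed through as-is.
function snakeToCamel(key: string): string {
  return key.replace(/_([a-z])/g, (_, c: string) => c.toUpperCase());
}

function camelizeKeys(obj: Record<string, unknown>): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(obj).map(([k, v]) => [snakeToCamel(k), v]),
  );
}

// camelizeKeys({ reasoning_config: { /* ... */ } })
// → { reasoningConfig: { /* ... */ } }
```

Audit the result against the provider's documented option names rather than trusting a mechanical rename, since some options were removed or restructured in v5.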

Provider-Utils Changes

Deprecated CoreTool* types have been removed.

AI SDK 4.0

```ts
import {
  CoreToolCall,
  CoreToolResult,
  CoreToolResultUnion,
  CoreToolCallUnion,
  CoreToolChoice,
} from '@ai-sdk/provider-utils';
```

AI SDK 5.0

```ts
import {
  ToolCall,
  ToolResult,
  TypedToolResult,
  TypedToolCall,
  ToolChoice,
} from '@ai-sdk/provider-utils';
```

Troubleshooting


TypeScript Performance Issues with Zod

If you experience TypeScript server crashes, slow type checking, or errors like "Type instantiation is excessively deep and possibly infinite" when using Zod with AI SDK 5.0:

  1. First, ensure you're using Zod 4.1.8 or later - this version includes a fix for module resolution issues that cause TypeScript performance problems.

  2. If the issue persists, update your tsconfig.json to use moduleResolution: "nodenext":

```json
{
  "compilerOptions": {
    "moduleResolution": "nodenext"
    // ... other options
  }
}
```

This resolves the TypeScript performance issues while allowing you to continue using the standard Zod import. If this doesn't resolve the issue, you can try using a version-specific import path as an alternative solution. For detailed troubleshooting steps, see TypeScript performance issues with Zod .

Codemod Table


The following table lists available codemods for the AI SDK 5.0 upgrade process. For more information, see the Codemods section.

| Change | Codemod |
| --- | --- |
| AI SDK Core Changes | |
| Flatten streamText file properties | v5/flatten-streamtext-file-properties |
| ID Generation Changes | v5/require-createIdGenerator-size-argument |
| IDGenerator → IdGenerator | v5/rename-IDGenerator-to-IdGenerator |
| Import LanguageModelV2 from provider package | v5/import-LanguageModelV2-from-provider-package |
| Migrate to data stream protocol v2 | v5/migrate-to-data-stream-protocol-v2 |
| Move image model maxImagesPerCall | v5/move-image-model-maxImagesPerCall |
| Move LangChain adapter | v5/move-langchain-adapter |
| Move maxSteps to stopWhen | v5/move-maxsteps-to-stopwhen |
| Move provider options | v5/move-provider-options |
| Move React to AI SDK | v5/move-react-to-ai-sdk |
| Move UI utils to AI | v5/move-ui-utils-to-ai |
| Remove experimental wrap language model | v5/remove-experimental-wrap-language-model |
| Remove experimental activeTools | v5/remove-experimental-activetools |
| Remove experimental prepareStep | v5/remove-experimental-preparestep |
| Remove experimental continueSteps | v5/remove-experimental-continuesteps |
| Remove experimental temperature | v5/remove-experimental-temperature |
| Remove experimental truncate | v5/remove-experimental-truncate |
| Remove experimental OpenAI compatibility | v5/remove-experimental-openai-compatibility |
| Remove experimental OpenAI legacy function calls | v5/remove-experimental-openai-legacy-function-calls |
| Remove experimental OpenAI structured outputs | v5/remove-experimental-openai-structured-outputs |
| Remove experimental OpenAI store | v5/remove-experimental-openai-store |
| Remove experimental OpenAI user | v5/remove-experimental-openai-user |
| Remove experimental OpenAI parallel tool calls | v5/remove-experimental-openai-parallel-tool-calls |
| Remove experimental OpenAI response format | v5/remove-experimental-openai-response-format |
| Remove experimental OpenAI logit bias | v5/remove-experimental-openai-logit-bias |
| Remove experimental OpenAI logprobs | v5/remove-experimental-openai-logprobs |
| Remove experimental OpenAI seed | v5/remove-experimental-openai-seed |
| Remove experimental OpenAI service tier | v5/remove-experimental-openai-service-tier |
| Remove experimental OpenAI top logprobs | v5/remove-experimental-openai-top-logprobs |
| Remove experimental OpenAI transform | v5/remove-experimental-openai-transform |
| Remove experimental OpenAI stream options | v5/remove-experimental-openai-stream-options |
| Remove experimental OpenAI prediction | v5/remove-experimental-openai-prediction |
| Remove experimental Anthropic caching | v5/remove-experimental-anthropic-caching |
| Remove experimental Anthropic computer use | v5/remove-experimental-anthropic-computer-use |
| Remove experimental Anthropic PDF support | v5/remove-experimental-anthropic-pdf-support |
| Remove experimental Anthropic prompt caching | v5/remove-experimental-anthropic-prompt-caching |
| Remove experimental Google search grounding | v5/remove-experimental-google-search-grounding |
| Remove experimental Google code execution | v5/remove-experimental-google-code-execution |
| Remove experimental Google cached content | v5/remove-experimental-google-cached-content |
| Remove experimental Google custom headers | v5/remove-experimental-google-custom-headers |
| Rename format stream part | v5/rename-format-stream-part |
| Rename parse stream part | v5/rename-parse-stream-part |
| Replace image type with file type | v5/replace-image-type-with-file-type |
| Replace LlamaIndex adapter | v5/replace-llamaindex-adapter |
| Replace onCompletion with onFinal | v5/replace-oncompletion-with-onfinal |
| Replace provider metadata with provider options | v5/replace-provider-metadata-with-provider-options |
| Replace rawResponse with response | v5/replace-rawresponse-with-response |
| Replace redacted reasoning type | v5/replace-redacted-reasoning-type |
| Replace simulate streaming | v5/replace-simulate-streaming |
| Replace textDelta with text | v5/replace-textdelta-with-text |
| Replace usage token properties | v5/replace-usage-token-properties |
| Restructure file stream parts | v5/restructure-file-stream-parts |
| Restructure source stream parts | v5/restructure-source-stream-parts |
| RSC package | v5/rsc-package |

Changes Between v5 Beta Versions


This section documents breaking changes between different beta versions of AI SDK 5.0. If you're upgrading from an earlier v5 beta version to a later one, check this section for any changes that might affect your code.

fullStream Type Rename: text/reasoning → text-delta/reasoning-delta

The chunk types in fullStream have been renamed for consistency with UI streams and language model streams.

AI SDK 5.0 (before beta.26)

```ts
for await (const chunk of result.fullStream) {
  switch (chunk.type) {
    case 'text': {
      process.stdout.write(chunk.text);
      break;
    }
    case 'reasoning': {
      console.log('Reasoning:', chunk.text);
      break;
    }
  }
}
```

AI SDK 5.0 (beta.26 and later)

```ts
for await (const chunk of result.fullStream) {
  switch (chunk.type) {
    case 'text-delta': {
      process.stdout.write(chunk.text);
      break;
    }
    case 'reasoning-delta': {
      console.log('Reasoning:', chunk.text);
      break;
    }
  }
}
```
