
File: stream-protocol.md | Updated: 11/15/2025

Source: https://ai-sdk.dev/docs/ai-sdk-ui/stream-protocol


Stream Protocols
================

AI SDK UI functions such as useChat and useCompletion support both text streams and data streams. The stream protocol defines how the data is streamed to the frontend on top of the HTTP protocol.

This page describes both protocols and how to use them in the backend and frontend.

You can use this information to develop custom backends and frontends for your use case, e.g., to provide compatible API endpoints that are implemented in a different language such as Python.

For instance, the AI SDK provides an example that uses FastAPI as a backend.

Text Stream Protocol


A text stream contains plain-text chunks that are streamed to the frontend. The chunks are concatenated on the client to form the full text response.

Text streams are supported by useChat, useCompletion, and useObject. When you use useChat, enable text streaming with the TextStreamChatTransport; when you use useCompletion, set the streamProtocol option to text.
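For useCompletion, a minimal sketch (assuming an /api/completion route that returns a text stream via toTextStreamResponse()) could look like this:

```tsx
'use client';

import { useCompletion } from '@ai-sdk/react';

export default function Page() {
  // streamProtocol: 'text' tells the hook to treat the response body as
  // plain text chunks instead of the default data stream protocol.
  const { completion, input, handleInputChange, handleSubmit } = useCompletion({
    api: '/api/completion', // assumed endpoint
    streamProtocol: 'text',
  });

  return (
    <form onSubmit={handleSubmit}>
      <input value={input} onChange={handleInputChange} />
      <div>{completion}</div>
    </form>
  );
}
```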

You can generate text streams with streamText in the backend. When you call toTextStreamResponse() on the result object, a streaming HTTP response is returned.

Text streams only support basic text data. If you need to stream other types of data such as tool calls, use data streams.

Text Stream Example

Here is a Next.js example that uses the text stream protocol:

app/page.tsx

```tsx
'use client';

import { useChat } from '@ai-sdk/react';
import { TextStreamChatTransport } from 'ai';
import { useState } from 'react';

export default function Chat() {
  const [input, setInput] = useState('');
  const { messages, sendMessage } = useChat({
    transport: new TextStreamChatTransport({ api: '/api/chat' }),
  });

  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      {messages.map(message => (
        <div key={message.id} className="whitespace-pre-wrap">
          {message.role === 'user' ? 'User: ' : 'AI: '}
          {message.parts.map((part, i) => {
            switch (part.type) {
              case 'text':
                return <div key={`${message.id}-${i}`}>{part.text}</div>;
            }
          })}
        </div>
      ))}

      <form
        onSubmit={e => {
          e.preventDefault();
          sendMessage({ text: input });
          setInput('');
        }}
      >
        <input
          className="fixed dark:bg-zinc-900 bottom-0 w-full max-w-md p-2 mb-8 border border-zinc-300 dark:border-zinc-800 rounded shadow-xl"
          value={input}
          placeholder="Say something..."
          onChange={e => setInput(e.currentTarget.value)}
        />
      </form>
    </div>
  );
}
```

app/api/chat/route.ts

```ts
import { streamText, UIMessage, convertToModelMessages } from 'ai';
import { openai } from '@ai-sdk/openai';

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages: convertToModelMessages(messages),
  });

  return result.toTextStreamResponse();
}
```
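Because the response body is plain text, you can also consume it directly with the Fetch API. Here is a minimal sketch; the message payload shape is an assumption based on the UIMessage format:

```ts
// Sketch: reading the raw text stream from the route above.
const res = await fetch('/api/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    messages: [{ id: '1', role: 'user', parts: [{ type: 'text', text: 'Hi' }] }],
  }),
});

const reader = res.body!.getReader();
const decoder = new TextDecoder();
let text = '';

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  text += decoder.decode(value, { stream: true }); // chunks are plain text
}

console.log(text); // the full concatenated response
```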

Data Stream Protocol


A data stream follows a special protocol that the AI SDK provides to send information to the frontend.

The data stream protocol uses the Server-Sent Events (SSE) format, which provides better standardization, keep-alive through pings, reconnection capabilities, and better cache handling.

When you provide data streams from a custom backend, you need to set the x-vercel-ai-ui-message-stream header to v1.
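For example, a custom route handler might attach the header like this (a minimal sketch; createSSEBody is a placeholder for code that produces the stream parts described below):

```ts
// Sketch: a custom backend route that returns a data stream with the
// required protocol header. `createSSEBody` is a hypothetical helper that
// produces a ReadableStream emitting the SSE parts described below.
declare function createSSEBody(): ReadableStream<Uint8Array>;

export async function POST() {
  return new Response(createSSEBody(), {
    headers: {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      'x-vercel-ai-ui-message-stream': 'v1',
    },
  });
}
```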

The following stream parts are currently supported:

Message Start Part

Indicates the beginning of a new message with metadata.

Format: Server-Sent Event with JSON object

Example:

data: {"type":"start","messageId":"..."}

Text Parts

Text content is streamed using a start/delta/end pattern with unique IDs for each text block.

Text Start Part

Indicates the beginning of a text block.

Format: Server-Sent Event with JSON object

Example:

data: {"type":"text-start","id":"msg_68679a454370819ca74c8eb3d04379630dd1afb72306ca5d"}

Text Delta Part

Contains incremental text content for the text block.

Format: Server-Sent Event with JSON object

Example:

data: {"type":"text-delta","id":"msg_68679a454370819ca74c8eb3d04379630dd1afb72306ca5d","delta":"Hello"}

Text End Part

Indicates the completion of a text block.

Format: Server-Sent Event with JSON object

Example:

data: {"type":"text-end","id":"msg_68679a454370819ca74c8eb3d04379630dd1afb72306ca5d"}

Reasoning Parts

Reasoning content is streamed using a start/delta/end pattern with unique IDs for each reasoning block.

Reasoning Start Part

Indicates the beginning of a reasoning block.

Format: Server-Sent Event with JSON object

Example:

data: {"type":"reasoning-start","id":"reasoning_123"}

Reasoning Delta Part

Contains incremental reasoning content for the reasoning block.

Format: Server-Sent Event with JSON object

Example:

data: {"type":"reasoning-delta","id":"reasoning_123","delta":"This is some reasoning"}

Reasoning End Part

Indicates the completion of a reasoning block.

Format: Server-Sent Event with JSON object

Example:

data: {"type":"reasoning-end","id":"reasoning_123"}

Source Parts

Source parts provide references to external content sources.

Source URL Part

References to external URLs.

Format: Server-Sent Event with JSON object

Example:

data: {"type":"source-url","sourceId":"https://example.com","url":"https://example.com"}

Source Document Part

References to documents or files.

Format: Server-Sent Event with JSON object

Example:

data: {"type":"source-document","sourceId":"https://example.com","mediaType":"file","title":"Title"}

File Part

File parts contain references to files along with their media type.

Format: Server-Sent Event with JSON object

Example:

data: {"type":"file","url":"https://example.com/file.png","mediaType":"image/png"}

Data Parts

Custom data parts allow streaming of arbitrary structured data with type-specific handling.

Format: Server-Sent Event with JSON object where the type includes a custom suffix

Example:

data: {"type":"data-weather","data":{"location":"SF","temperature":100}}

The data-* type pattern allows you to define custom data types that your frontend can handle specifically.
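On the frontend, these parts appear in message.parts with the same data-* type, so you can branch on the type when rendering. A minimal sketch, assuming the data-weather payload shown above:

```tsx
import type { UIMessage } from 'ai';

// Assumed payload shape, matching the data-weather example above.
type WeatherData = { location: string; temperature: number };

// Typing the data part makes part.data type-safe in the switch below.
type MyUIMessage = UIMessage<never, { weather: WeatherData }>;

function MessageParts({ message }: { message: MyUIMessage }) {
  return (
    <>
      {message.parts.map((part, i) => {
        switch (part.type) {
          case 'text':
            return <div key={`${message.id}-${i}`}>{part.text}</div>;
          case 'data-weather':
            return (
              <div key={`${message.id}-${i}`}>
                Weather in {part.data.location}: {part.data.temperature}
              </div>
            );
          default:
            return null;
        }
      })}
    </>
  );
}
```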

Error Part

The error parts are appended to the message as they are received.

Format: Server-Sent Event with JSON object

Example:

data: {"type":"error","errorText":"error message"}

Tool Input Start Part

Indicates the beginning of tool input streaming.

Format: Server-Sent Event with JSON object

Example:

data: {"type":"tool-input-start","toolCallId":"call_fJdQDqnXeGxTmr4E3YPSR7Ar","toolName":"getWeatherInformation"}

Tool Input Delta Part

Incremental chunks of tool input as it's being generated.

Format: Server-Sent Event with JSON object

Example:

data: {"type":"tool-input-delta","toolCallId":"call_fJdQDqnXeGxTmr4E3YPSR7Ar","inputTextDelta":"San Francisco"}

Tool Input Available Part

Indicates that tool input is complete and ready for execution.

Format: Server-Sent Event with JSON object

Example:

data: {"type":"tool-input-available","toolCallId":"call_fJdQDqnXeGxTmr4E3YPSR7Ar","toolName":"getWeatherInformation","input":{"city":"San Francisco"}}

Tool Output Available Part

Contains the result of tool execution.

Format: Server-Sent Event with JSON object

Example:

data: {"type":"tool-output-available","toolCallId":"call_fJdQDqnXeGxTmr4E3YPSR7Ar","output":{"city":"San Francisco","weather":"sunny"}}

Start Step Part

A part indicating the start of a step.

Format: Server-Sent Event with JSON object

Example:

data: {"type":"start-step"}

Finish Step Part

A part indicating that a step (i.e., one LLM API call in the backend) has been completed.

This part is necessary to correctly process multiple stitched assistant calls, e.g., when tools are called in the backend and steps are used in useChat at the same time.

Format: Server-Sent Event with JSON object

Example:

data: {"type":"finish-step"}

Finish Message Part

A part indicating the completion of a message.

Format: Server-Sent Event with JSON object

Example:

data: {"type":"finish"}

Stream Termination

The stream ends with a special [DONE] marker.

Format: Server-Sent Event with literal [DONE]

Example:

data: [DONE]
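Putting it all together, an illustrative minimal stream for a single assistant text message (assembled from the parts above) looks like this:

data: {"type":"start","messageId":"msg-1"}

data: {"type":"start-step"}

data: {"type":"text-start","id":"text-1"}

data: {"type":"text-delta","id":"text-1","delta":"Hello"}

data: {"type":"text-delta","id":"text-1","delta":" world!"}

data: {"type":"text-end","id":"text-1"}

data: {"type":"finish-step"}

data: {"type":"finish"}

data: [DONE]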

The data stream protocol is supported by useChat and useCompletion on the frontend and used by default. useCompletion only supports the text and data stream parts.

On the backend, you can use toUIMessageStreamResponse() from the streamText result object to return a streaming HTTP response.

UI Message Stream Example

Here is a Next.js example that uses the UI message stream protocol:

app/page.tsx

```tsx
'use client';

import { useChat } from '@ai-sdk/react';
import { useState } from 'react';

export default function Chat() {
  const [input, setInput] = useState('');
  const { messages, sendMessage } = useChat();

  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      {messages.map(message => (
        <div key={message.id} className="whitespace-pre-wrap">
          {message.role === 'user' ? 'User: ' : 'AI: '}
          {message.parts.map((part, i) => {
            switch (part.type) {
              case 'text':
                return <div key={`${message.id}-${i}`}>{part.text}</div>;
            }
          })}
        </div>
      ))}

      <form
        onSubmit={e => {
          e.preventDefault();
          sendMessage({ text: input });
          setInput('');
        }}
      >
        <input
          className="fixed dark:bg-zinc-900 bottom-0 w-full max-w-md p-2 mb-8 border border-zinc-300 dark:border-zinc-800 rounded shadow-xl"
          value={input}
          placeholder="Say something..."
          onChange={e => setInput(e.currentTarget.value)}
        />
      </form>
    </div>
  );
}
```

app/api/chat/route.ts

```ts
import { openai } from '@ai-sdk/openai';
import { streamText, UIMessage, convertToModelMessages } from 'ai';

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages: convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse();
}
```

