📄 ai-sdk/docs/ai-sdk-ui/streaming-data

File: streaming-data.md | Updated: 11/15/2025

Source: https://ai-sdk.dev/docs/ai-sdk-ui/streaming-data


Streaming Custom Data
================================================================================================

It is often useful to send additional data alongside the model's response. For example, you may want to send status information, the IDs of messages after storing them, or references to content that the language model used.

The AI SDK provides several helpers that allow you to stream additional data to the client and attach it to the UIMessage parts array:

  • createUIMessageStream: creates a data stream
  • createUIMessageStreamResponse: creates a response object that streams data
  • pipeUIMessageStreamToResponse: pipes a data stream to a server response object

The data is streamed as part of the response stream using Server-Sent Events.
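On the wire, each part is one SSE event whose `data:` field carries a JSON payload. The framing below is a simplified sketch for intuition only; `toSseEvent` is a hypothetical helper, not the SDK's actual wire format, which is specified on the Stream Protocols page.

```typescript
// Simplified sketch of SSE framing (assumed payload shape, for
// illustration only; the SDK emits the real protocol for you).
function toSseEvent(part: { type: string; data: unknown }): string {
  // An SSE event is a `data: <payload>` line followed by a blank line.
  return `data: ${JSON.stringify(part)}\n\n`;
}

const event = toSseEvent({
  type: 'data-notification',
  data: { message: 'Processing...', level: 'info' },
});
// event is a single `data: {...}` line terminated by a blank line
```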

Setting Up Type-Safe Data Streaming


First, define your custom message type with data part schemas for type safety:

ai/types.ts

```ts
import { UIMessage } from 'ai';

// Define your custom message type with data part schemas
export type MyUIMessage = UIMessage<
  never, // metadata type
  {
    weather: {
      city: string;
      weather?: string;
      status: 'loading' | 'success';
    };
    notification: {
      message: string;
      level: 'info' | 'warning' | 'error';
    };
  } // data parts type
>;
```

Streaming Data from the Server


In your server-side route handler, you can create a UIMessageStream and then pass it to createUIMessageStreamResponse:

route.ts

```ts
import { openai } from '@ai-sdk/openai';
import {
  createUIMessageStream,
  createUIMessageStreamResponse,
  streamText,
  convertToModelMessages,
} from 'ai';
import type { MyUIMessage } from '@/ai/types';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const stream = createUIMessageStream<MyUIMessage>({
    execute: ({ writer }) => {
      // 1. Send initial status (transient - won't be added to message history)
      writer.write({
        type: 'data-notification',
        data: { message: 'Processing your request...', level: 'info' },
        transient: true, // This part won't be added to message history
      });

      // 2. Send sources (useful for RAG use cases)
      writer.write({
        type: 'source',
        value: {
          type: 'source',
          sourceType: 'url',
          id: 'source-1',
          url: 'https://weather.com',
          title: 'Weather Data Source',
        },
      });

      // 3. Send data parts with loading state
      writer.write({
        type: 'data-weather',
        id: 'weather-1',
        data: { city: 'San Francisco', status: 'loading' },
      });

      const result = streamText({
        model: openai('gpt-4.1'),
        messages: convertToModelMessages(messages),
        onFinish() {
          // 4. Update the same data part (reconciliation)
          writer.write({
            type: 'data-weather',
            id: 'weather-1', // Same ID = update existing part
            data: {
              city: 'San Francisco',
              weather: 'sunny',
              status: 'success',
            },
          });

          // 5. Send completion notification (transient)
          writer.write({
            type: 'data-notification',
            data: { message: 'Request completed', level: 'info' },
            transient: true, // Won't be added to message history
          });
        },
      });

      writer.merge(result.toUIMessageStream());
    },
  });

  return createUIMessageStreamResponse({ stream });
}
```

You can also stream data from custom backends, e.g. Python/FastAPI, using the UI Message Stream Protocol.

Types of Streamable Data


Data Parts (Persistent)

Regular data parts are added to the message history and appear in message.parts:

```ts
writer.write({
  type: 'data-weather',
  id: 'weather-1', // Optional: enables reconciliation
  data: { city: 'San Francisco', status: 'loading' },
});
```

Sources

Sources are useful for RAG implementations where you want to show which documents or URLs were referenced:

```ts
writer.write({
  type: 'source',
  value: {
    type: 'source',
    sourceType: 'url',
    id: 'source-1',
    url: 'https://example.com',
    title: 'Example Source',
  },
});
```

Transient Data Parts (Ephemeral)

Transient parts are sent to the client but not added to the message history. They are only accessible via the onData useChat handler:

```ts
// server
writer.write({
  type: 'data-notification',
  data: { message: 'Processing...', level: 'info' },
  transient: true, // Won't be added to message history
});
```

```ts
// client
const [notification, setNotification] = useState();

const { messages } = useChat({
  onData: ({ data, type }) => {
    if (type === 'data-notification') {
      setNotification({ message: data.message, level: data.level });
    }
  },
});
```

Data Part Reconciliation


When you write to a data part with the same ID, the client automatically reconciles and updates that part. This enables powerful dynamic experiences like:

  • Collaborative artifacts - Update code, documents, or designs in real-time
  • Progressive data loading - Show loading states that transform into final results
  • Live status updates - Update progress bars, counters, or status indicators
  • Interactive components - Build UI elements that evolve based on user interaction

The reconciliation happens automatically: simply use the same id when writing to the stream.
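In pseudocode, the client-side upsert amounts to the following sketch. The part shape is simplified and `reconcile` is a hypothetical helper, not the SDK's actual implementation, but it captures the observable behavior: a part with a matching id replaces the existing entry in place, anything else is appended.

```typescript
// Hypothetical, simplified part shape; the SDK's internal types are richer.
type DataPart = { type: string; id?: string; data: unknown };

// Upsert an incoming data part: same type + same id replaces the existing
// part in place, otherwise the part is appended to the array.
function reconcile(parts: DataPart[], incoming: DataPart): DataPart[] {
  if (incoming.id !== undefined) {
    const index = parts.findIndex(
      p => p.type === incoming.type && p.id === incoming.id,
    );
    if (index !== -1) {
      const next = parts.slice();
      next[index] = incoming;
      return next;
    }
  }
  return [...parts, incoming];
}

// Two writes with the same id yield a single, updated part.
let parts: DataPart[] = [];
parts = reconcile(parts, {
  type: 'data-weather',
  id: 'weather-1',
  data: { city: 'San Francisco', status: 'loading' },
});
parts = reconcile(parts, {
  type: 'data-weather',
  id: 'weather-1',
  data: { city: 'San Francisco', weather: 'sunny', status: 'success' },
});
// parts now holds one part, in the 'success' state
```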

Processing Data on the Client


Using the onData Callback

The onData callback is essential for handling streaming data, especially transient parts:

page.tsx

```tsx
import { useChat } from '@ai-sdk/react';
import type { MyUIMessage } from '@/ai/types';

const { messages } = useChat<MyUIMessage>({
  api: '/api/chat',
  onData: dataPart => {
    // Handle all data parts as they arrive (including transient parts)
    console.log('Received data part:', dataPart);

    // Handle different data part types
    if (dataPart.type === 'data-weather') {
      console.log('Weather update:', dataPart.data);
    }

    // Handle transient notifications (ONLY available here, not in message.parts)
    if (dataPart.type === 'data-notification') {
      showToast(dataPart.data.message, dataPart.data.level);
    }
  },
});
```

Important: Transient data parts are only available through the onData callback. They will not appear in the message.parts array since they're not added to message history.
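The routing rule can be summarized in a few lines. This is a deliberately simplified model of the client's behavior (`dispatch` is a hypothetical helper, not SDK code): every part reaches onData, but only non-transient parts join the message history.

```typescript
// Hypothetical simplified shape for illustration.
type StreamPart = { type: string; data: unknown; transient?: boolean };

// Hand every incoming part to onData; append only non-transient
// parts to the persistent history.
function dispatch(
  history: StreamPart[],
  part: StreamPart,
  onData: (p: StreamPart) => void,
): StreamPart[] {
  onData(part);
  return part.transient ? history : [...history, part];
}

const seen: string[] = [];
let history: StreamPart[] = [];
history = dispatch(history, { type: 'data-weather', data: {} }, p =>
  seen.push(p.type),
);
history = dispatch(
  history,
  { type: 'data-notification', data: {}, transient: true },
  p => seen.push(p.type),
);
// seen contains both types; history contains only the weather part
```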

Rendering Persistent Data Parts

You can filter and render data parts from the message parts array:

page.tsx

```tsx
const result = (
  <>
    {messages?.map(message => (
      <div key={message.id}>
        {/* Render weather data parts */}
        {message.parts
          .filter(part => part.type === 'data-weather')
          .map((part, index) => (
            <div key={index} className="weather-widget">
              {part.data.status === 'loading' ? (
                <>Getting weather for {part.data.city}...</>
              ) : (
                <>
                  Weather in {part.data.city}: {part.data.weather}
                </>
              )}
            </div>
          ))}

        {/* Render text content */}
        {message.parts
          .filter(part => part.type === 'text')
          .map((part, index) => (
            <div key={index}>{part.text}</div>
          ))}

        {/* Render sources */}
        {message.parts
          .filter(part => part.type === 'source')
          .map((part, index) => (
            <div key={index} className="source">
              Source: <a href={part.url}>{part.title}</a>
            </div>
          ))}
      </div>
    ))}
  </>
);
```

Complete Example

page.tsx

```tsx
'use client';

import { useChat } from '@ai-sdk/react';
import { useState } from 'react';
import type { MyUIMessage } from '@/ai/types';

export default function Chat() {
  const [input, setInput] = useState('');

  const { messages, sendMessage } = useChat<MyUIMessage>({
    api: '/api/chat',
    onData: dataPart => {
      // Handle transient notifications
      if (dataPart.type === 'data-notification') {
        console.log('Notification:', dataPart.data.message);
      }
    },
  });

  const handleSubmit = (e: React.FormEvent) => {
    e.preventDefault();
    sendMessage({ text: input });
    setInput('');
  };

  return (
    <>
      {messages?.map(message => (
        <div key={message.id}>
          {message.role === 'user' ? 'User: ' : 'AI: '}

          {/* Render weather data */}
          {message.parts
            .filter(part => part.type === 'data-weather')
            .map((part, index) => (
              <span key={index} className="weather-update">
                {part.data.status === 'loading' ? (
                  <>Getting weather for {part.data.city}...</>
                ) : (
                  <>
                    Weather in {part.data.city}: {part.data.weather}
                  </>
                )}
              </span>
            ))}

          {/* Render text content */}
          {message.parts
            .filter(part => part.type === 'text')
            .map((part, index) => (
              <div key={index}>{part.text}</div>
            ))}
        </div>
      ))}

      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={e => setInput(e.target.value)}
          placeholder="Ask about the weather..."
        />
        <button type="submit">Send</button>
      </form>
    </>
  );
}
```

Use Cases


  • RAG Applications - Stream sources and retrieved documents
  • Real-time Status - Show loading states and progress updates
  • Collaborative Tools - Stream live updates to shared artifacts
  • Analytics - Send usage data without cluttering message history
  • Notifications - Display temporary alerts and status messages

Message Metadata vs Data Parts


Both message metadata and data parts allow you to send additional information alongside messages, but they serve different purposes:

Message Metadata

Message metadata is best for message-level information that describes the message as a whole:

  • Attached at the message level via message.metadata

  • Sent using the messageMetadata callback in toUIMessageStreamResponse

  • Ideal for: timestamps, model info, token usage, user context

  • Type-safe with custom metadata types

```ts
// Server: Send metadata about the message
return result.toUIMessageStreamResponse({
  messageMetadata: ({ part }) => {
    if (part.type === 'finish') {
      return {
        model: part.response.modelId,
        totalTokens: part.totalUsage.totalTokens,
        createdAt: Date.now(),
      };
    }
  },
});
```

Data Parts

Data parts are best for streaming dynamic arbitrary data:

  • Added to the message parts array via message.parts

  • Streamed using createUIMessageStream and writer.write()

  • Can be reconciled/updated using the same ID

  • Support transient parts that don't persist

  • Ideal for: dynamic content, loading states, interactive components

```ts
// Server: Stream data as part of message content
writer.write({
  type: 'data-weather',
  id: 'weather-1',
  data: { city: 'San Francisco', status: 'loading' },
});
```

For more details on message metadata, see the Message Metadata documentation.
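To make the distinction concrete, here is where each kind of data ends up on the client, using a deliberately simplified message shape (the real UIMessage type is richer than this sketch):

```typescript
// Hypothetical simplified message shape, for illustration only.
type Message = {
  id: string;
  metadata?: { totalTokens?: number };
  parts: Array<{ type: string; data?: unknown; text?: string }>;
};

const message: Message = {
  id: 'msg-1',
  metadata: { totalTokens: 42 }, // message-level info
  parts: [
    { type: 'text', text: 'Sunny today.' },
    { type: 'data-weather', data: { city: 'SF', status: 'success' } },
  ],
};

// Metadata describes the message as a whole...
const tokens = message.metadata?.totalTokens;
// ...while data parts are content, interleaved with text in parts.
const weatherParts = message.parts.filter(p => p.type === 'data-weather');
// tokens is 42; weatherParts holds one part
```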
