📄 ai-sdk/cookbook/next/stream-object

File: stream-object.md | Updated: 11/15/2025

Source: https://ai-sdk.dev/cookbook/next/stream-object


Stream Object

==============================================================================

Object generation can sometimes take a long time to complete, especially when you're generating a large schema. In such cases, it is useful to stream the object generation process to the client in real time. This allows the client to display the object as it is being generated, rather than having users wait for the full result before anything appears.


Object Mode


The streamObject function allows you to specify different output strategies using the output parameter. By default, the output mode is set to object, which will generate exactly the structured object that you specify in the schema option.

Schema

It is helpful to set up the schema in a separate file that is imported on both the client and server.

app/api/use-object/schema.ts

```ts
import { z } from 'zod';

// define a schema for the notifications
export const notificationSchema = z.object({
  notifications: z.array(
    z.object({
      name: z.string().describe('Name of a fictional person.'),
      message: z.string().describe('Message. Do not use emojis or links.'),
    }),
  ),
});
```

Client

The client uses useObject to stream the object generation process.

The results are partial and are displayed as they are received. Note how the JSX uses optional chaining to handle values that are still undefined while the stream is in flight.

app/page.tsx

```tsx
'use client';

import { experimental_useObject as useObject } from '@ai-sdk/react';
import { notificationSchema } from './api/use-object/schema';

export default function Page() {
  const { object, submit } = useObject({
    api: '/api/use-object',
    schema: notificationSchema,
  });

  return (
    <div>
      <button onClick={() => submit('Messages during finals week.')}>
        Generate notifications
      </button>

      {object?.notifications?.map((notification, index) => (
        <div key={index}>
          <p>{notification?.name}</p>
          <p>{notification?.message}</p>
        </div>
      ))}
    </div>
  );
}
```
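To make the undefined handling concrete, here is a dependency-free sketch (the frame values are invented for illustration, not actual SDK output) of the kind of partial objects the hook may surface while the stream is in flight:

```ts
// Simulated partial objects, in the order they might be surfaced while the
// model streams: fields can be missing at every level of nesting.
type PartialNotification = { name?: string; message?: string };
type PartialResult = { notifications?: PartialNotification[] };

const frames: PartialResult[] = [
  {},
  { notifications: [{ name: 'Ali' }] }, // message not generated yet
  { notifications: [{ name: 'Alice', message: 'Good luck on finals!' }] },
];

// Rendering must tolerate missing values, mirroring the optional chaining in
// the JSX above (object?.notifications?.map, notification?.name, ...).
function render(frame: PartialResult): string[] {
  return (frame.notifications ?? []).map(
    n => `${n?.name ?? ''}: ${n?.message ?? ''}`,
  );
}

console.log(frames.map(render));
```

Each frame renders without throwing, even though earlier frames are incomplete.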

Server

On the server, we use streamObject to stream the object generation process.

app/api/use-object/route.ts

```ts
import { openai } from '@ai-sdk/openai';
import { streamObject } from 'ai';
import { notificationSchema } from './schema';

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  const context = await req.json();

  const result = streamObject({
    model: openai('gpt-4.1'),
    schema: notificationSchema,
    prompt:
      `Generate 3 notifications for a messages app in this context:` + context,
  });

  return result.toTextStreamResponse();
}
```
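As a sketch of the request round-trip (an assumption based on the examples above, not SDK source): submit() sends its argument JSON-encoded as the request body, so await req.json() on the server recovers the original string, which is then concatenated into the prompt:

```ts
// What submit('Messages during finals week.') puts on the wire (assumed):
const body = JSON.stringify('Messages during finals week.');

// What `await req.json()` returns on the server:
const context = JSON.parse(body);

// The prompt the route hands to streamObject:
const prompt =
  `Generate 3 notifications for a messages app in this context:` + context;
console.log(prompt);
```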

Loading State and Stopping the Stream


You can use the isLoading state to display a loading indicator while the object is being generated. You can also use the stop function to abort the object generation process.

app/page.tsx

```tsx
'use client';

import { experimental_useObject as useObject } from '@ai-sdk/react';
import { notificationSchema } from './api/use-object/schema';

export default function Page() {
  const { object, submit, isLoading, stop } = useObject({
    api: '/api/use-object',
    schema: notificationSchema,
  });

  return (
    <div>
      <button
        onClick={() => submit('Messages during finals week.')}
        disabled={isLoading}
      >
        Generate notifications
      </button>

      {isLoading && (
        <div>
          <div>Loading...</div>
          <button type="button" onClick={() => stop()}>
            Stop
          </button>
        </div>
      )}

      {object?.notifications?.map((notification, index) => (
        <div key={index}>
          <p>{notification?.name}</p>
          <p>{notification?.message}</p>
        </div>
      ))}
    </div>
  );
}
```

Array Mode


The "array" output mode allows you to stream an array of objects one element at a time. This is particularly useful when generating lists of items.

Schema

First, update the schema to describe a single notification (remove the z.array() wrapper).

app/api/use-object/schema.ts

```ts
import { z } from 'zod';

// define a schema for a single notification
export const notificationSchema = z.object({
  name: z.string().describe('Name of a fictional person.'),
  message: z.string().describe('Message. Do not use emojis or links.'),
});
```

Client

On the client, you wrap the schema in z.array() to generate an array of objects.

app/page.tsx

```tsx
'use client';

import { experimental_useObject as useObject } from '@ai-sdk/react';
import { notificationSchema } from '../api/use-object/schema';
import z from 'zod';

export default function Page() {
  const { object, submit, isLoading, stop } = useObject({
    api: '/api/use-object',
    schema: z.array(notificationSchema),
  });

  return (
    <div>
      <button
        onClick={() => submit('Messages during finals week.')}
        disabled={isLoading}
      >
        Generate notifications
      </button>

      {isLoading && (
        <div>
          <div>Loading...</div>
          <button type="button" onClick={() => stop()}>
            Stop
          </button>
        </div>
      )}

      {object?.map((notification, index) => (
        <div key={index}>
          <p>{notification?.name}</p>
          <p>{notification?.message}</p>
        </div>
      ))}
    </div>
  );
}
```

Server

On the server, specify output: 'array' to generate an array of objects.

app/api/use-object/route.ts

```ts
import { openai } from '@ai-sdk/openai';
import { streamObject } from 'ai';
import { notificationSchema } from './schema';

export const maxDuration = 30;

export async function POST(req: Request) {
  const context = await req.json();

  const result = streamObject({
    model: openai('gpt-4.1'),
    output: 'array',
    schema: notificationSchema,
    prompt:
      `Generate 3 notifications for a messages app in this context:` + context,
  });

  return result.toTextStreamResponse();
}
```
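On the server, the result of streamObject with output: 'array' also exposes an elementStream that yields each array element once it is complete. The sketch below simulates that stream with a plain async generator (the notification values are made up) so the consumption pattern is runnable without a model call:

```ts
// Stand-in for result.elementStream; the real one comes from streamObject.
type Notification = { name: string; message: string };

async function* fakeElementStream(): AsyncGenerator<Notification> {
  yield { name: 'Alice', message: 'The library closes at midnight this week.' };
  yield { name: 'Bob', message: 'Study group moved to room 204.' };
}

// Same shape as: for await (const element of result.elementStream) { ... }
async function collect(
  stream: AsyncIterable<Notification>,
): Promise<Notification[]> {
  const elements: Notification[] = [];
  for await (const element of stream) {
    elements.push(element); // each element arrives whole, one at a time
  }
  return elements;
}

collect(fakeElementStream()).then(elements => {
  console.log(elements.map(e => e.name));
});
```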

No Schema Mode


The "no-schema" output mode can be used when you don't want to specify a schema, for example when the data structure is defined by a dynamic user request. When using this mode, omit the schema parameter and set output: 'no-schema'. The model will still attempt to generate JSON data based on the prompt.

Client

On the client, there is no fixed structure to validate against, so pass a permissive schema such as z.unknown() and render the raw JSON.

app/page.tsx

```tsx
'use client';

import { experimental_useObject as useObject } from '@ai-sdk/react';
import { z } from 'zod';

export default function Page() {
  const { object, submit, isLoading, stop } = useObject({
    api: '/api/use-object',
    schema: z.unknown(),
  });

  return (
    <div>
      <button
        onClick={() => submit('Messages during finals week.')}
        disabled={isLoading}
      >
        Generate notifications
      </button>

      {isLoading && (
        <div>
          <div>Loading...</div>
          <button type="button" onClick={() => stop()}>
            Stop
          </button>
        </div>
      )}

      {JSON.stringify(object, null, 2)}
    </div>
  );
}
```

Server

On the server, specify output: 'no-schema'.

app/api/use-object/route.ts

```ts
import { openai } from '@ai-sdk/openai';
import { streamObject } from 'ai';

export const maxDuration = 30;

export async function POST(req: Request) {
  const context = await req.json();

  const result = streamObject({
    model: openai('gpt-4o'),
    output: 'no-schema',
    prompt:
      `Generate 3 notifications (in JSON) for a messages app in this context:` +
      context,
  });

  return result.toTextStreamResponse();
}
```
