File: stream-object.md | Updated: 11/15/2025
Object generation can sometimes take a long time to complete, especially when you are generating a large schema. In such cases, it is useful to stream the object generation process to the client in real time. This allows the client to display the object as it is being generated, rather than having users wait for the full result before anything is shown.
The `streamObject` function allows you to specify different output strategies using the `output` parameter. By default, the output mode is set to `object`, which generates exactly the structured object that you specify in the `schema` option.
It is helpful to set up the schema in a separate file that is imported on both the client and server.
app/api/use-object/schema.ts

```ts
import { z } from 'zod';

// define a schema for the notifications
export const notificationSchema = z.object({
  notifications: z.array(
    z.object({
      name: z.string().describe('Name of a fictional person.'),
      message: z.string().describe('Message. Do not use emojis or links.'),
    }),
  ),
});
```
The client uses `useObject` to stream the object generation process. The results are partial and are displayed as they are received. Note the optional chaining in the JSX, which handles the `undefined` values that occur mid-stream.
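Because the object arrives incrementally, any field can still be `undefined` at a given moment. The following sketch is plain TypeScript, independent of the SDK, with made-up snapshot data; it illustrates why the rendering code needs optional chaining and fallbacks:

```typescript
// Shape of a fully generated notification.
type Notification = { name: string; message: string };

// During streaming, every level of the object may still be missing,
// so treat each snapshot as deeply partial.
type DeepPartial<T> = { [K in keyof T]?: DeepPartial<T[K]> };
type Snapshot = DeepPartial<{ notifications: Notification[] }>;

// Optional chaining and fallbacks keep rendering safe at every snapshot.
export function render(object: Snapshot): string[] {
  return (object?.notifications ?? []).map(
    n => `${n?.name ?? ''}: ${n?.message ?? ''}`,
  );
}

// Successive snapshots as chunks arrive — each one renders without errors.
const snapshots: Snapshot[] = [
  {},
  { notifications: [{ name: 'Alice' }] },
  { notifications: [{ name: 'Alice', message: 'Good luck on finals!' }] },
];
export const rendered = snapshots.map(s => render(s));
```

The same pattern applies to the JSX below: every access on `object` is guarded, because any part of it may not have arrived yet.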
app/page.tsx

```tsx
'use client';

import { experimental_useObject as useObject } from '@ai-sdk/react';
import { notificationSchema } from './api/use-object/schema';

export default function Page() {
  const { object, submit } = useObject({
    api: '/api/use-object',
    schema: notificationSchema,
  });

  return (
    <div>
      <button onClick={() => submit('Messages during finals week.')}>
        Generate notifications
      </button>

      {object?.notifications?.map((notification, index) => (
        <div key={index}>
          <p>{notification?.name}</p>
          <p>{notification?.message}</p>
        </div>
      ))}
    </div>
  );
}
```
On the server, we use `streamObject` to stream the object generation process.
app/api/use-object/route.ts

```ts
import { openai } from '@ai-sdk/openai';
import { streamObject } from 'ai';
import { notificationSchema } from './schema';

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  const context = await req.json();

  const result = streamObject({
    model: openai('gpt-4.1'),
    schema: notificationSchema,
    prompt:
      `Generate 3 notifications for a messages app in this context:` + context,
  });

  return result.toTextStreamResponse();
}
```
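`toTextStreamResponse()` returns a standard streaming `Response`, so any consumer can read it incrementally with plain web APIs. A minimal, SDK-independent sketch of draining such a response (the helper name here is our own):

```typescript
// Reads a streaming Response body chunk by chunk, invoking a callback
// per chunk and returning the accumulated text.
export async function readTextStream(
  response: Response,
  onChunk: (text: string) => void,
): Promise<string> {
  if (!response.body) return '';
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let full = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    const text = decoder.decode(value, { stream: true });
    full += text;
    onChunk(text);
  }
  return full;
}
```

In the app above you don't write this yourself; `useObject` consumes the stream for you and re-parses the accumulated text into a partial object on each chunk.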
Loading State and Stopping the Stream
You can use the `isLoading` state to display a loading indicator while the object is being generated, and the `stop` function to stop the object generation process.
app/page.tsx

```tsx
'use client';

import { experimental_useObject as useObject } from '@ai-sdk/react';
import { notificationSchema } from './api/use-object/schema';

export default function Page() {
  const { object, submit, isLoading, stop } = useObject({
    api: '/api/use-object',
    schema: notificationSchema,
  });

  return (
    <div>
      <button
        onClick={() => submit('Messages during finals week.')}
        disabled={isLoading}
      >
        Generate notifications
      </button>

      {isLoading && (
        <div>
          <div>Loading...</div>
          <button type="button" onClick={() => stop()}>
            Stop
          </button>
        </div>
      )}

      {object?.notifications?.map((notification, index) => (
        <div key={index}>
          <p>{notification?.name}</p>
          <p>{notification?.message}</p>
        </div>
      ))}
    </div>
  );
}
```
The `array` output mode allows you to stream an array of objects one element at a time. This is particularly useful when generating lists of items.
First, update the schema to describe a single object (remove the `z.array()` wrapper).
app/api/use-object/schema.ts

```ts
import { z } from 'zod';

// define a schema for a single notification
export const notificationSchema = z.object({
  name: z.string().describe('Name of a fictional person.'),
  message: z.string().describe('Message. Do not use emojis or links.'),
});
```
On the client, you wrap the schema in `z.array()` to generate an array of objects.
app/page.tsx

```tsx
'use client';

import { experimental_useObject as useObject } from '@ai-sdk/react';
import { notificationSchema } from './api/use-object/schema';
import { z } from 'zod';

export default function Page() {
  const { object, submit, isLoading, stop } = useObject({
    api: '/api/use-object',
    schema: z.array(notificationSchema),
  });

  return (
    <div>
      <button
        onClick={() => submit('Messages during finals week.')}
        disabled={isLoading}
      >
        Generate notifications
      </button>

      {isLoading && (
        <div>
          <div>Loading...</div>
          <button type="button" onClick={() => stop()}>
            Stop
          </button>
        </div>
      )}

      {object?.map((notification, index) => (
        <div key={index}>
          <p>{notification?.name}</p>
          <p>{notification?.message}</p>
        </div>
      ))}
    </div>
  );
}
```
On the server, specify `output: 'array'` to generate an array of objects.
app/api/use-object/route.ts

```ts
import { openai } from '@ai-sdk/openai';
import { streamObject } from 'ai';
import { notificationSchema } from './schema';

export const maxDuration = 30;

export async function POST(req: Request) {
  const context = await req.json();

  const result = streamObject({
    model: openai('gpt-4.1'),
    output: 'array',
    schema: notificationSchema,
    prompt:
      `Generate 3 notifications for a messages app in this context:` + context,
  });

  return result.toTextStreamResponse();
}
```
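In array mode, the client-side `object` is an array that grows one element at a time, with the newest element possibly still partial. The accumulation can be sketched as follows; note that the `Chunk` shape here is invented for illustration and is not the SDK's actual wire format:

```typescript
type PartialNotification = { name?: string; message?: string };

// Hypothetical chunk: which element it targets and which fields it fills in.
type Chunk = { index: number; patch: PartialNotification };

// Applies a chunk immutably: appends a new element or merges into an existing one.
export function applyChunk(
  current: PartialNotification[],
  chunk: Chunk,
): PartialNotification[] {
  const next = current.slice();
  next[chunk.index] = { ...next[chunk.index], ...chunk.patch };
  return next;
}

// The array grows element by element as chunks stream in.
export const progression = [
  { index: 0, patch: { name: 'Bob' } },
  { index: 0, patch: { message: 'Study group at 7pm' } },
  { index: 1, patch: { name: 'Cara' } },
].reduce(applyChunk, [] as PartialNotification[]);
```

This is why the JSX in the array-mode client still uses optional chaining on each element: the last element of the list may not be complete yet.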
The `no-schema` output mode can be used when you don't want to specify a schema, for example when the data structure is defined by a dynamic user request. When using this mode, omit the `schema` parameter and set `output: 'no-schema'`. The model will still attempt to generate JSON data based on the prompt.
On the client, pass `z.unknown()` as the schema, since the shape of the generated data is not known ahead of time.
app/page.tsx

```tsx
'use client';

import { experimental_useObject as useObject } from '@ai-sdk/react';
import { z } from 'zod';

export default function Page() {
  const { object, submit, isLoading, stop } = useObject({
    api: '/api/use-object',
    schema: z.unknown(),
  });

  return (
    <div>
      <button
        onClick={() => submit('Messages during finals week.')}
        disabled={isLoading}
      >
        Generate notifications
      </button>

      {isLoading && (
        <div>
          <div>Loading...</div>
          <button type="button" onClick={() => stop()}>
            Stop
          </button>
        </div>
      )}

      {JSON.stringify(object, null, 2)}
    </div>
  );
}
```
On the server, specify `output: 'no-schema'`.
app/api/use-object/route.ts

```ts
import { openai } from '@ai-sdk/openai';
import { streamObject } from 'ai';

export const maxDuration = 30;

export async function POST(req: Request) {
  const context = await req.json();

  const result = streamObject({
    model: openai('gpt-4o'),
    output: 'no-schema',
    prompt:
      `Generate 3 notifications (in JSON) for a messages app in this context:` +
      context,
  });

  return result.toTextStreamResponse();
}
```
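With `no-schema`, the client receives untyped JSON, so a small runtime check before rendering specific fields can help. A hedged sketch using a hand-rolled type guard (this helper is our own, not part of the SDK):

```typescript
type Notification = { name: string; message: string };

// Narrow an unknown value to a Notification.
function isNotification(value: unknown): value is Notification {
  if (typeof value !== 'object' || value === null) return false;
  const v = value as Record<string, unknown>;
  return typeof v.name === 'string' && typeof v.message === 'string';
}

// Keep only well-formed notifications from an unknown payload.
export function extractNotifications(data: unknown): Notification[] {
  if (!Array.isArray(data)) return [];
  return data.filter(isNotification);
}
```

Alternatively, you can validate the finished payload with a Zod schema on the client even when the server runs in `no-schema` mode.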