File: stream-text.md | Updated: 11/15/2025
Stream Text
=======================================================================
This example uses React Server Components (RSC). If you want client-side rendering and hooks instead, check out the "stream text" example with `useCompletion`.
Text generation can sometimes take a long time to complete, especially when you're generating several paragraphs. In such cases, it is useful to stream the text generation process to the client in real time. This allows the client to display the generated text as it is being generated, rather than having users wait for generation to complete before seeing the result.
Let's create a simple React component that calls a `generate` server action when a button is clicked. The `generate` action calls the `streamText` function, which generates text based on the input prompt. To consume the stream of text on the client, we use the `readStreamableValue` function from the `@ai-sdk/rsc` module.
app/page.tsx
```tsx
'use client';

import { useState } from 'react';
import { generate } from './actions';
import { readStreamableValue } from '@ai-sdk/rsc';

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export default function Home() {
  const [generation, setGeneration] = useState<string>('');

  return (
    <div>
      <button
        onClick={async () => {
          const { output } = await generate('Why is the sky blue?');

          for await (const delta of readStreamableValue(output)) {
            setGeneration(currentGeneration => `${currentGeneration}${delta}`);
          }
        }}
      >
        Ask
      </button>

      <div>{generation}</div>
    </div>
  );
}
```
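The accumulation pattern inside the `onClick` handler can be sketched in plain TypeScript, assuming a hypothetical `fakeTextStream` async generator that stands in for the streamed model output:

```typescript
// Hypothetical stand-in for the streamed model output: an async
// generator that yields text deltas one at a time.
async function* fakeTextStream(): AsyncGenerator<string> {
  for (const delta of ['The sky ', 'is blue ', 'because...']) {
    yield delta;
  }
}

// Mirrors the client component: append each delta to the current
// generation as it arrives, just like the setGeneration updater above.
async function accumulate(): Promise<string> {
  let generation = '';
  for await (const delta of fakeTextStream()) {
    generation = `${generation}${delta}`;
  }
  return generation;
}
```

Each delta extends the text shown so far, which is why the UI appears to "type out" the answer as the model produces it.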
On the server side, we need to implement the `generate` action, which calls the `streamText` function to generate text based on the input prompt. To stream the text generation to the client, we use `createStreamableValue`, which can wrap any changeable value and stream it to the client.
Using DevTools, we can see the text generation being streamed to the client in real time.
app/actions.ts
```ts
'use server';

import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { createStreamableValue } from '@ai-sdk/rsc';

export async function generate(input: string) {
  const stream = createStreamableValue('');

  (async () => {
    const { textStream } = streamText({
      model: openai('gpt-3.5-turbo'),
      prompt: input,
    });

    for await (const delta of textStream) {
      stream.update(delta);
    }

    stream.done();
  })();

  return { output: stream.value };
}
```
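The relationship between the server-side `update`/`done` calls and the client-side `for await` loop can be illustrated with a plain async channel. This is a conceptual sketch only, not the SDK's actual implementation; `createChannel` and its members are hypothetical names:

```typescript
// A minimal producer/consumer channel: the producer pushes deltas with
// update() and signals completion with done(); the consumer iterates
// them with an async generator, waking up whenever new data arrives.
function createChannel<T>() {
  const buffer: T[] = [];
  let closed = false;
  let wake: (() => void) | null = null;

  return {
    update(value: T) {
      buffer.push(value);
      wake?.();
    },
    done() {
      closed = true;
      wake?.();
    },
    async *read(): AsyncGenerator<T> {
      while (true) {
        // Drain everything buffered so far.
        while (buffer.length > 0) yield buffer.shift()!;
        if (closed) return;
        // Sleep until the producer calls update() or done().
        await new Promise<void>(resolve => (wake = resolve));
      }
    },
  };
}
```

Like the real `generate` action, the producer can run in a detached async block while the consumer iterates concurrently, which is why the server can return the stream handle immediately and keep filling it afterwards.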