File: generate-text.md | Updated: 11/15/2025
Generate Text
=============
This example uses React Server Components (RSC). If you want client-side rendering and hooks instead, check out the "generate text" example that uses `useState`.
A situation may arise where you need to generate text based on a prompt. For example, you may want to generate an answer to a question or summarize a body of text. The `generateText` function can be used to generate text based on the input prompt.
Let's create a simple React component that calls the `getAnswer` function when a button is clicked. The `getAnswer` function will call the `generateText` function from the `ai` module, which will then generate text based on the input prompt.
app/page.tsx
```tsx
'use client';

import { useState } from 'react';
import { getAnswer } from './actions';

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export default function Home() {
  const [generation, setGeneration] = useState<string>('');

  return (
    <div>
      <button
        onClick={async () => {
          const { text } = await getAnswer('Why is the sky blue?');
          setGeneration(text);
        }}
      >
        Answer
      </button>
      <div>{generation}</div>
    </div>
  );
}
```
On the server side, we implement the `getAnswer` function as a server action. It calls `generateText` from the `ai` module, which generates text based on the input prompt.
app/actions.ts
```ts
'use server';

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

export async function getAnswer(question: string) {
  const { text, finishReason, usage } = await generateText({
    model: openai('gpt-3.5-turbo'),
    prompt: question,
  });

  return { text, finishReason, usage };
}
```
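Besides `text`, the result also exposes `finishReason` (why generation stopped) and `usage` (token counts). As a minimal sketch of how you might log that usage on the server, here is a hypothetical `formatUsage` helper — it is not part of the AI SDK, and the field names (`promptTokens`, `completionTokens`, `totalTokens`) assume the usage shape returned by this version of the SDK, which may differ in later releases:

```typescript
// Hypothetical helper for logging token usage.
// Field names assume the AI SDK's usage shape; adjust to your SDK version.
type Usage = {
  promptTokens: number;
  completionTokens: number;
  totalTokens: number;
};

function formatUsage(usage: Usage): string {
  return `prompt=${usage.promptTokens} completion=${usage.completionTokens} total=${usage.totalTokens}`;
}

// Example with made-up numbers:
console.log(formatUsage({ promptTokens: 12, completionTokens: 34, totalTokens: 46 }));
// → prompt=12 completion=34 total=46
```

In `getAnswer`, you could call such a helper before returning, e.g. to write usage to your server logs for cost monitoring.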