File: send-custom-body-from-use-chat.md | Updated: 11/15/2025
If you are looking to send custom values alongside each message, check out the chatbot request configuration documentation.

By default, useChat sends all messages along with request information to the server. However, it is often desirable to control the entire body content that is sent to the server, e.g. to send only the last message or to otherwise reduce the payload size.
The prepareSendMessagesRequest option allows you to customize the entire body content that is sent to the server. The function receives the message list, the request data, and the request body from the sendMessage call. It should return the body content that will be sent to the server.

This example shows how to send only the last message to the server. This can be useful if you want to reduce the amount of data sent with each request.
app/page.tsx
```tsx
'use client';

import { useChat } from '@ai-sdk/react';
import { DefaultChatTransport } from 'ai';
import { useState } from 'react';

export default function Chat() {
  const [input, setInput] = useState('');
  const { messages, sendMessage } = useChat({
    transport: new DefaultChatTransport({
      prepareSendMessagesRequest: ({ id, messages }) => {
        return {
          body: {
            id,
            message: messages[messages.length - 1],
          },
        };
      },
    }),
  });

  return (
    <div>
      {messages.map((message) => (
        <div key={message.id}>
          {message.role === 'user' ? 'User: ' : 'AI: '}
          {message.parts.map((part, index) => {
            switch (part.type) {
              case 'text':
                return <div key={`${message.id}-${index}`}>{part.text}</div>;
            }
          })}
        </div>
      ))}

      <form
        onSubmit={(e) => {
          e.preventDefault();
          sendMessage({ text: input });
          setInput('');
        }}
      >
        <input value={input} onChange={(e) => setInput(e.currentTarget.value)} />
      </form>
    </div>
  );
}
```
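To make the resulting payload concrete, here is a standalone sketch of what this prepareSendMessagesRequest logic produces for a two-message history. The `prepareBody` helper and the simplified `UIMessage` type are hypothetical, written only to mirror the body construction above:

```typescript
// Simplified message shape for illustration; the real UIMessage type
// from the AI SDK carries additional fields.
type UIMessage = {
  id: string;
  role: 'user' | 'assistant';
  parts: { type: 'text'; text: string }[];
};

// Mirrors the prepareSendMessagesRequest logic: keep only the last message.
function prepareBody(id: string, messages: UIMessage[]) {
  return { id, message: messages[messages.length - 1] };
}

const history: UIMessage[] = [
  { id: 'm1', role: 'user', parts: [{ type: 'text', text: 'Hello' }] },
  { id: 'm2', role: 'assistant', parts: [{ type: 'text', text: 'Hi! How can I help?' }] },
];

// Only the final message crosses the wire; earlier history stays client-side.
const body = prepareBody('chat-123', history);
```

The server therefore receives `{ id, message }` rather than the full `messages` array, which is why the route below reloads the earlier history from storage.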
We need to adjust the server to receive the custom request format with the chat ID and last message. The rest of the message history can be loaded from storage.
app/api/chat/route.ts
```ts
import { openai } from '@ai-sdk/openai';
import { convertToModelMessages, streamText } from 'ai';

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  const { id, message } = await req.json();

  // Load existing messages and add the new one
  const messages = await loadMessages(id);
  messages.push(message);

  // Call the language model
  const result = streamText({
    model: openai('gpt-4.1'),
    messages: convertToModelMessages(messages),
  });

  // Respond with the stream
  return result.toUIMessageStreamResponse({
    originalMessages: messages,
    onFinish: ({ messages: newMessages }) => {
      saveMessages(id, newMessages);
    },
  });
}
```
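The route assumes `loadMessages` and `saveMessages` helpers that back the chat with storage. As a minimal sketch only, here is a hypothetical in-memory version (the function names come from the route above; the message shape and the Map-based store are assumptions — a real application would persist to a database):

```typescript
// Simplified stored-message shape for illustration.
type StoredMessage = {
  id: string;
  role: string;
  parts: { type: string; text?: string }[];
};

// Hypothetical in-memory store keyed by chat id. Data is lost on restart;
// swap this for a database in production.
const chatStore = new Map<string, StoredMessage[]>();

async function loadMessages(chatId: string): Promise<StoredMessage[]> {
  // Return a copy so the route can push onto it without mutating the store.
  return [...(chatStore.get(chatId) ?? [])];
}

async function saveMessages(chatId: string, messages: StoredMessage[]): Promise<void> {
  chatStore.set(chatId, messages);
}
```

Because `toUIMessageStreamResponse` is given `originalMessages`, its `onFinish` callback receives the full updated message list, so `saveMessages` can overwrite the stored history in one write.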