File: prune-messages.md | Updated: 11/15/2025
pruneMessages
=============
The `pruneMessages` function prunes or filters an array of `ModelMessage` objects. This is useful for reducing message context (to save tokens), removing intermediate reasoning, or trimming tool calls and empty messages before sending a conversation to an LLM.
app/api/chat/route.ts

```ts
import { pruneMessages, streamText } from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const prunedMessages = pruneMessages({
    messages,
    reasoning: 'before-last-message',
    toolCalls: 'before-last-2-messages',
    emptyMessages: 'remove',
  });

  const result = streamText({
    model: 'openai/gpt-4o',
    messages: prunedMessages,
  });

  return result.toUIMessageStreamResponse();
}
```
Import
------

```ts
import { pruneMessages } from 'ai';
```
Parameters
----------

- `messages` (`ModelMessage[]`): An array of `ModelMessage` objects to prune.
- `reasoning` (`'all' | 'before-last-message' | 'none'`): How to remove reasoning content from assistant messages. Default: `'none'`.
- `toolCalls` (`'all' | 'before-last-message' | 'before-last-${number}-messages' | 'none' | PruneToolCallsOption[]`): How to prune tool call, tool result, and tool approval content. Accepts a single strategy, or a list of per-tool options.
- `emptyMessages` (`'keep' | 'remove'`): Whether to keep or remove messages whose content is empty after pruning. Default: `'remove'`.

Returns
-------

An array of `ModelMessage` objects, pruned according to the provided options.
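To make the option semantics concrete, here is a self-contained sketch of how `reasoning` and `emptyMessages` interact. The `Part` and `Msg` shapes below are simplified, hypothetical stand-ins for the real `ModelMessage` type, and the function is an illustration of the documented behavior, not the library's implementation:

```typescript
// Simplified, hypothetical message shapes (the real ModelMessage is richer).
type Part = { type: 'text' | 'reasoning'; text: string };
type Msg = { role: 'user' | 'assistant'; content: Part[] };

function pruneReasoningSketch(
  messages: Msg[],
  reasoning: 'all' | 'before-last-message' | 'none',
  emptyMessages: 'keep' | 'remove' = 'remove',
): Msg[] {
  const pruned = messages.map((msg, i) => {
    const isLast = i === messages.length - 1;
    const keepReasoning =
      reasoning === 'none' || (reasoning === 'before-last-message' && isLast);
    return {
      ...msg,
      // Strip reasoning parts from messages outside the keep-window.
      content: keepReasoning
        ? msg.content
        : msg.content.filter((p) => p.type !== 'reasoning'),
    };
  });
  // 'remove' drops messages whose content became empty after pruning.
  return emptyMessages === 'remove'
    ? pruned.filter((m) => m.content.length > 0)
    : pruned;
}

const history: Msg[] = [
  { role: 'user', content: [{ type: 'text', text: 'Hi' }] },
  { role: 'assistant', content: [{ type: 'reasoning', text: 'thinking...' }] },
  {
    role: 'assistant',
    content: [
      { type: 'reasoning', text: 'more thinking' },
      { type: 'text', text: 'Hello!' },
    ],
  },
];

// Reasoning is stripped everywhere except the last message; the
// now-empty middle message is dropped entirely.
console.log(pruneReasoningSketch(history, 'before-last-message').length); // 2
```

Note how the two options compose: pruning a reasoning-only message leaves it empty, and `emptyMessages: 'remove'` then deletes it from the history.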
```ts
import { pruneMessages } from 'ai';

const pruned = pruneMessages({
  messages,
  reasoning: 'all', // Remove all reasoning parts
  toolCalls: 'before-last-message', // Remove tool calls except those in the last message
});
```
For `reasoning`, pass `'all'` to remove all reasoning, `'before-last-message'` to keep reasoning only in the last message, or `'none'` to retain all reasoning. In general, the pruning strategies behave as follows:

- `'all'`: Prune all such content.
- `'before-last-message'`: Prune everywhere except in the last message.
- `'before-last-${number}-messages'`: Prune everywhere except in the last N messages.
- `'none'`: Do not prune.

For `emptyMessages`, `'remove'` (the default) excludes messages that have no content left after pruning.

Tip: `pruneMessages` is typically used before sending a context window to an LLM to reduce the message and token count, especially after a series of tool calls and approvals.
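The `'before-last-${number}-messages'` strategies can be read as a keep-window over the tail of the conversation. The `keepWindow` helper below is a hypothetical illustration of that reading, not part of the AI SDK:

```typescript
// Hypothetical helper: translate a pruning strategy into the number of
// trailing messages whose content is kept. Illustrative only.
function keepWindow(strategy: string, messageCount: number): number {
  if (strategy === 'none') return messageCount; // keep everything
  if (strategy === 'all') return 0; // prune everywhere
  if (strategy === 'before-last-message') return 1;
  const match = /^before-last-(\d+)-messages$/.exec(strategy);
  if (match) return Math.min(Number(match[1]), messageCount);
  throw new Error(`Unknown strategy: ${strategy}`);
}

// With 10 messages and 'before-last-2-messages', content is pruned from
// the first 8 messages and kept in the final 2.
console.log(keepWindow('before-last-2-messages', 10)); // 2
```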
For advanced usage and the full list of possible message parts, see `ModelMessage` and the `pruneMessages` implementation.