File: stream-abort-handling.md | Updated: 11/15/2025
onFinish not called when stream is aborted
=======================================================================================================================================================
When using `toUIMessageStreamResponse` with an `onFinish` callback, the callback may not execute when the stream is aborted. This happens because the abort handler immediately terminates the response, so the `onFinish` callback is never triggered.
```ts
import { streamText, convertToModelMessages } from 'ai';
import { openai } from '@ai-sdk/openai';

// Server-side code where onFinish isn't called on abort
export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages: convertToModelMessages(messages),
    abortSignal: req.signal,
  });

  return result.toUIMessageStreamResponse({
    onFinish: async ({ isAborted }) => {
      // This isn't called when the stream is aborted!
      if (isAborted) {
        console.log('Stream was aborted');
        // Handle abort-specific cleanup
      } else {
        console.log('Stream completed normally');
        // Handle normal completion
      }
    },
  });
}
```
When a stream is aborted, the response is terminated immediately. Without proper handling, the `onFinish` callback never gets a chance to run, which prevents important cleanup operations such as saving partial results or logging abort events.
Pass the `consumeStream` helper as the `consumeSseStream` option of `toUIMessageStreamResponse`. This keeps consuming the SSE stream to completion even after the client disconnects, so abort events are properly captured and forwarded to the `onFinish` callback, allowing it to execute even when the stream is aborted.
```ts
// other imports...
import { consumeStream } from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages: convertToModelMessages(messages),
    abortSignal: req.signal,
  });

  return result.toUIMessageStreamResponse({
    onFinish: async ({ isAborted }) => {
      // Now this WILL be called even when aborted!
      if (isAborted) {
        console.log('Stream was aborted');
        // Handle abort-specific cleanup
      } else {
        console.log('Stream completed normally');
        // Handle normal completion
      }
    },
    consumeSseStream: consumeStream, // This enables onFinish to be called on abort
  });
}
```
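The underlying mechanism can be illustrated without the AI SDK at all. The sketch below uses plain web streams (available in Node 18+): a hypothetical `streamWithCleanup` helper keeps reading an endless stream until it observes the abort signal, which is why its finish callback fires on abort instead of being skipped. The function and its names are illustrative assumptions, not part of the SDK.

```typescript
// Minimal sketch (plain web streams, NOT the AI SDK API) of why
// consuming the stream matters: code that stops reading on abort and
// returns early never reaches its cleanup; draining until the abort
// is observed lets the finish callback run either way.
async function streamWithCleanup(
  signal: AbortSignal,
  onFinish: (info: { isAborted: boolean }) => void,
): Promise<void> {
  // An endless stream standing in for model output chunks.
  const stream = new ReadableStream<string>({
    async pull(controller) {
      await new Promise((resolve) => setTimeout(resolve, 5));
      controller.enqueue('chunk');
    },
  });

  const reader = stream.getReader();
  // Keep consuming chunks until the abort is observed (or the
  // stream ends on its own).
  while (!signal.aborted) {
    const { done } = await reader.read();
    if (done) break;
  }
  await reader.cancel();
  // Because the read loop checked the signal instead of abandoning
  // the response, cleanup still runs when the caller aborts.
  onFinish({ isAborted: signal.aborted });
}
```

This mirrors what `consumeSseStream: consumeStream` arranges for you: something keeps draining the response stream after the client goes away, so the finishing logic is still reached.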