File: stopping-streams.md | Updated: 11/15/2025
Stopping Streams
================
You often need to cancel an ongoing stream, for example when a user realizes that the response is not what they want.
The different parts of the AI SDK support cancelling streams in different ways.
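All of these approaches build on the standard `AbortController`/`AbortSignal` web API. As a frame of reference, here is a minimal standalone sketch of cooperative cancellation — no AI SDK calls are involved, and the task and chunk names are purely illustrative:

```typescript
// Minimal sketch of AbortController-based cancellation - the same
// underlying web API the AI SDK builds on. No AI SDK APIs involved.
async function produceChunks(signal: AbortSignal): Promise<string[]> {
  const chunks: string[] = [];
  for (let i = 0; i < 10; i++) {
    // Cooperatively check the signal and stop early when aborted,
    // returning whatever partial output was produced so far.
    if (signal.aborted) break;
    chunks.push(`chunk-${i}`);
    await new Promise((resolve) => setTimeout(resolve, 5));
  }
  return chunks;
}

async function main() {
  const controller = new AbortController();
  // Simulate the user pressing "Stop" shortly after streaming starts.
  setTimeout(() => controller.abort(), 12);
  const partial = await produceChunks(controller.signal);
  console.log('aborted:', controller.signal.aborted);
  console.log('chunks produced before abort:', partial.length);
}

main();
```

The producer checks `signal.aborted` between chunks and returns its partial output, which mirrors how the SDK surfaces completed steps to `onAbort`.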
AI SDK Core
-----------

The AI SDK Core functions accept an `abortSignal` argument that you can use to cancel a stream. Use this when you want to cancel the stream from the server side to the LLM API, e.g. by forwarding the `abortSignal` from the incoming request:
```ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const result = streamText({
    model: openai('gpt-4.1'),
    prompt,
    // forward the abort signal:
    abortSignal: req.signal,
    onAbort: ({ steps }) => {
      // Handle cleanup when stream is aborted
      console.log('Stream aborted after', steps.length, 'steps');
      // Persist partial results to database
    },
  });

  return result.toTextStreamResponse();
}
```
AI SDK UI
---------

The hooks, e.g. `useChat` or `useCompletion`, provide a `stop` helper function that cancels the stream from the client side to the server.
Stream abort is not compatible with stream resumption: if you use `resume: true` in `useChat`, aborting the stream will break the resumption mechanism. Choose either abort or resume functionality, but not both.
```tsx
'use client';

import { useCompletion } from '@ai-sdk/react';

export default function Chat() {
  const { input, completion, stop, status, handleSubmit, handleInputChange } =
    useCompletion();

  return (
    <div>
      {(status === 'submitted' || status === 'streaming') && (
        <button type="button" onClick={() => stop()}>
          Stop
        </button>
      )}
      {completion}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
      </form>
    </div>
  );
}
```
Handling Stream Aborts
----------------------

When a stream is aborted, you may need to perform cleanup operations such as persisting partial results or releasing resources. The `onAbort` callback provides a way to handle these scenarios on the server side.
Unlike `onFinish`, which is called when a stream completes normally, `onAbort` is specifically called when a stream is aborted via `AbortSignal`. This distinction lets you handle normal completion and aborted streams differently.

For UI message streams (`toUIMessageStreamResponse`), the `onFinish` callback also receives an `isAborted` parameter that indicates whether the stream was aborted, so you can handle both completion and abort scenarios in a single callback.
```ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

// `controller` is an AbortController created elsewhere;
// savePartialResults, logAbortEvent, and saveFinalResults are your own helpers.
const result = streamText({
  model: openai('gpt-4.1'),
  prompt: 'Write a long story...',
  abortSignal: controller.signal,
  onAbort: async ({ steps }) => {
    // Called when the stream is aborted - persist partial results
    await savePartialResults(steps);
    await logAbortEvent(steps.length);
  },
  onFinish: async ({ steps, totalUsage }) => {
    // Called when the stream completes normally
    await saveFinalResults(steps, totalUsage);
  },
});
```
The `onAbort` callback receives:

- `steps`: an array of all steps that completed before the abort occurred

This is particularly useful for persisting partial results and cleaning up server-side resources when a stream is cancelled.
You can also handle abort events directly in the stream using the `abort` stream part:
```ts
for await (const part of result.fullStream) {
  switch (part.type) {
    case 'text-delta':
      // Handle text delta content
      break;
    case 'abort':
      // Handle abort event directly in stream
      console.log('Stream was aborted');
      break;
    // ... other cases
  }
}
```
UI Message Streams
------------------

When using `toUIMessageStreamResponse`, you need to handle stream abortion slightly differently. The `onFinish` callback receives an `isAborted` parameter, and you should pass the `consumeStream` function to ensure proper abort handling:
```ts
import { openai } from '@ai-sdk/openai';
import {
  consumeStream,
  convertToModelMessages,
  streamText,
  UIMessage,
} from 'ai';

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages: convertToModelMessages(messages),
    abortSignal: req.signal,
  });

  return result.toUIMessageStreamResponse({
    onFinish: async ({ isAborted }) => {
      if (isAborted) {
        console.log('Stream was aborted');
        // Handle abort-specific cleanup
      } else {
        console.log('Stream completed normally');
        // Handle normal completion
      }
    },
    consumeSseStream: consumeStream,
  });
}
```
The `consumeStream` function is necessary for proper abort handling in UI message streams. It ensures that the stream is fully consumed even when aborted, preventing potential memory leaks or hanging connections.
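To illustrate what "consuming" means here: the stream is read to its end so the underlying reader and connection can be released. Here is a minimal sketch using the standard web streams API — this is not the AI SDK's internal implementation, and `drainStream` is an illustrative helper name:

```typescript
// Illustrative helper: read a web ReadableStream to completion and
// report how many bytes were consumed. Fully draining a stream lets
// the runtime release the underlying connection and resources.
async function drainStream(stream: ReadableStream<Uint8Array>): Promise<number> {
  const reader = stream.getReader();
  let bytes = 0;
  try {
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      bytes += value.byteLength;
    }
  } finally {
    reader.releaseLock();
  }
  return bytes;
}

// Example: a 3-chunk stream of 4 + 2 + 2 = 8 bytes.
const stream = new ReadableStream<Uint8Array>({
  start(controller) {
    controller.enqueue(new Uint8Array(4));
    controller.enqueue(new Uint8Array(2));
    controller.enqueue(new Uint8Array(2));
    controller.close();
  },
});

drainStream(stream).then((bytes) => console.log('consumed bytes:', bytes)); // 8
```

A stream that is never read to `done` keeps its source alive, which is the hanging-connection problem `consumeStream` guards against.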
AI SDK RSC
----------

The AI SDK RSC does not currently support stopping streams.