📄 ai-sdk/docs/advanced/stopping-streams

File: stopping-streams.md | Updated: 11/15/2025

Source: https://ai-sdk.dev/docs/advanced/stopping-streams

Stopping Streams
================

Cancelling ongoing streams is a common requirement. For example, a user might want to stop a stream once they realize the response is not what they want.

The different parts of the AI SDK support cancelling streams in different ways.

AI SDK Core
-----------

The AI SDK Core functions accept an `abortSignal` argument that you can use to cancel a stream. Use this when you want to cancel the stream from the server side to the LLM API, e.g. by forwarding the `abortSignal` from the incoming request.

```ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const result = streamText({
    model: openai('gpt-4.1'),
    prompt,
    // forward the abort signal:
    abortSignal: req.signal,
    onAbort: ({ steps }) => {
      // Handle cleanup when stream is aborted
      console.log('Stream aborted after', steps.length, 'steps');
      // Persist partial results to database
    },
  });

  return result.toTextStreamResponse();
}
```
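For the abort signal to reach the route handler, the client request itself has to be aborted. As a minimal sketch (the `/api/completion` route path is an assumption for illustration), a plain `fetch` call can be wired up to an `AbortController`:

```ts
// Hypothetical client-side trigger for a route like the one above;
// the route path is an assumption.
const controller = new AbortController();

const response = await fetch('/api/completion', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ prompt: 'Write a long story...' }),
  // Aborting this controller cancels the request, which fires
  // req.signal in the route handler on the server.
  signal: controller.signal,
});

// Later, e.g. when the user clicks a stop button:
controller.abort();
```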

AI SDK UI
---------

The AI SDK UI hooks, e.g. `useChat` and `useCompletion`, provide a `stop` helper function that you can use to cancel a stream. This cancels the stream from the client side to the server.

Stream abort functionality is not compatible with stream resumption. If you're using `resume: true` in `useChat`, the abort functionality will break the resumption mechanism. Choose either abort or resume functionality, but not both.

```tsx
'use client';

import { useCompletion } from '@ai-sdk/react';

export default function Chat() {
  const { input, completion, stop, status, handleSubmit, handleInputChange } =
    useCompletion();

  return (
    <div>
      {(status === 'submitted' || status === 'streaming') && (
        <button type="button" onClick={() => stop()}>
          Stop
        </button>
      )}
      {completion}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
      </form>
    </div>
  );
}
```
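The same `stop` helper is available from `useChat`. A minimal sketch, assuming AI SDK 5's `useChat` API with the default transport (the UI is trimmed down to the stop control and a hard-coded message):

```tsx
'use client';

import { useChat } from '@ai-sdk/react';

export default function Chat() {
  const { messages, sendMessage, status, stop } = useChat();

  return (
    <div>
      {(status === 'submitted' || status === 'streaming') && (
        <button type="button" onClick={() => stop()}>
          Stop
        </button>
      )}
      {messages.map(message => (
        <div key={message.id}>
          {message.parts.map((part, i) =>
            part.type === 'text' ? <span key={i}>{part.text}</span> : null,
          )}
        </div>
      ))}
      <button type="button" onClick={() => sendMessage({ text: 'Hello' })}>
        Send
      </button>
    </div>
  );
}
```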

Handling stream abort cleanup
-----------------------------

When streams are aborted, you may need to perform cleanup operations such as persisting partial results or cleaning up resources. The `onAbort` callback provides a way to handle these scenarios on the server side.

Unlike `onFinish`, which is called when a stream completes normally, `onAbort` is specifically called when a stream is aborted via `AbortSignal`. This distinction allows you to handle normal completion and aborted streams differently.

For UI message streams (`toUIMessageStreamResponse`), the `onFinish` callback also receives an `isAborted` parameter that indicates whether the stream was aborted. This allows you to handle both completion and abort scenarios in a single callback.

```ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

// The controller owns the abort signal passed to streamText below.
const controller = new AbortController();

const result = streamText({
  model: openai('gpt-4.1'),
  prompt: 'Write a long story...',
  abortSignal: controller.signal,
  onAbort: async ({ steps }) => {
    // Called when stream is aborted - persist partial results
    await savePartialResults(steps);
    await logAbortEvent(steps.length);
  },
  onFinish: async ({ steps, totalUsage }) => {
    // Called when stream completes normally
    await saveFinalResults(steps, totalUsage);
  },
});
```

The `onAbort` callback receives:

- `steps`: an array of all completed steps before the abort occurred

This is particularly useful for:

- Persisting partial conversation history to a database (see the sketch after this list)
- Saving partial progress for later continuation
- Cleaning up server-side resources or connections
- Logging abort events for analytics
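As an example of the persistence case, here is a minimal sketch of an abort-time save. The in-memory `drafts` map stands in for a real database, the `chatId` parameter is hypothetical, and it assumes each completed step exposes the text it generated via `step.text`:

```ts
// In-memory stand-in for a real database table of partial drafts.
const drafts = new Map<string, string>();

async function savePartialResults(
  chatId: string,
  steps: Array<{ text: string }>,
) {
  // Concatenate the text of every step that finished before the abort.
  const partialText = steps.map(step => step.text).join('');
  drafts.set(chatId, partialText);
  console.log(`Saved ${partialText.length} chars of partial output for ${chatId}`);
}
```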

You can also handle abort events directly in the stream by listening for the `abort` stream part:

```ts
for await (const part of result.fullStream) {
  switch (part.type) {
    case 'text-delta':
      // Handle text delta content
      break;
    case 'abort':
      // Handle abort event directly in stream
      console.log('Stream was aborted');
      break;
    // ... other cases
  }
}
```

UI Message Streams
------------------

When using `toUIMessageStreamResponse`, you need to handle stream abortion slightly differently. The `onFinish` callback receives an `isAborted` parameter, and you should pass the `consumeStream` function to ensure proper abort handling:

```ts
import { openai } from '@ai-sdk/openai';
import {
  consumeStream,
  convertToModelMessages,
  streamText,
  UIMessage,
} from 'ai';

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages: convertToModelMessages(messages),
    abortSignal: req.signal,
  });

  return result.toUIMessageStreamResponse({
    onFinish: async ({ isAborted }) => {
      if (isAborted) {
        console.log('Stream was aborted');
        // Handle abort-specific cleanup
      } else {
        console.log('Stream completed normally');
        // Handle normal completion
      }
    },
    consumeSseStream: consumeStream,
  });
}
```

The `consumeStream` function is necessary for proper abort handling in UI message streams. It ensures that the stream is properly consumed even when aborted, preventing potential memory leaks or hanging connections.
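Conceptually, consuming a stream just means reading it to completion and discarding the chunks, so that server-side work (like the `onFinish` callback) still runs after the client goes away. A minimal sketch of that idea over a standard `ReadableStream` follows; this is an illustration of the concept, not the library's implementation:

```ts
// Read every chunk of a stream to completion and discard it, so any
// side effects tied to the stream finishing still fire after an abort.
async function drainStream(stream: ReadableStream): Promise<void> {
  const reader = stream.getReader();
  try {
    while (true) {
      const { done } = await reader.read();
      if (done) break; // stream fully consumed
    }
  } finally {
    reader.releaseLock();
  }
}
```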

AI SDK RSC
----------

The AI SDK RSC does not currently support stopping streams.
