📄 ai-sdk/docs/agents/loop-control

File: loop-control.md | Updated: 11/15/2025

Source: https://ai-sdk.dev/docs/agents/loop-control


Loop Control

=========================================================================

You can control both the execution flow and the settings at each step of the agent loop. The AI SDK provides built-in loop control through two parameters: `stopWhen` for defining stopping conditions and `prepareStep` for modifying settings (model, tools, messages, and more) between steps.

Stop Conditions
---------------


The `stopWhen` parameter controls when to stop execution when the last step contains tool results. By default, agents stop after a single step (`stepCountIs(1)`).

When you provide `stopWhen`, the agent continues executing after tool calls until a stopping condition is met. When the condition is an array, execution stops as soon as any of the conditions is met.

Use Built-in Conditions

The AI SDK provides several built-in stopping conditions:

```ts
import { Experimental_Agent as Agent, stepCountIs } from 'ai';

const agent = new Agent({
  model: 'openai/gpt-4o',
  tools: {
    // your tools
  },
  stopWhen: stepCountIs(20), // Stop after 20 steps maximum
});

const result = await agent.generate({
  prompt: 'Analyze this dataset and create a summary report',
});
```

Combine Multiple Conditions

Combine multiple stopping conditions. The loop stops when it meets any condition:

```ts
import { Experimental_Agent as Agent, stepCountIs, hasToolCall } from 'ai';

const agent = new Agent({
  model: 'openai/gpt-4o',
  tools: {
    // your tools
  },
  stopWhen: [
    stepCountIs(20), // Maximum 20 steps
    hasToolCall('someTool'), // Stop after calling 'someTool'
  ],
});

const result = await agent.generate({
  prompt: 'Research and analyze the topic',
});
```

Create Custom Conditions

Build custom stopping conditions for specific requirements:

```ts
import { Experimental_Agent as Agent, StopCondition, ToolSet } from 'ai';

const tools = {
  // your tools
} satisfies ToolSet;

const hasAnswer: StopCondition<typeof tools> = ({ steps }) => {
  // Stop when the model generates text containing "ANSWER:"
  return steps.some(step => step.text?.includes('ANSWER:'));
};

const agent = new Agent({
  model: 'openai/gpt-4o',
  tools,
  stopWhen: hasAnswer,
});

const result = await agent.generate({
  prompt: 'Find the answer and respond with "ANSWER: [your answer]"',
});
```

Custom conditions receive step information across all steps:

```ts
const budgetExceeded: StopCondition<typeof tools> = ({ steps }) => {
  const totalUsage = steps.reduce(
    (acc, step) => ({
      inputTokens: acc.inputTokens + (step.usage?.inputTokens ?? 0),
      outputTokens: acc.outputTokens + (step.usage?.outputTokens ?? 0),
    }),
    { inputTokens: 0, outputTokens: 0 },
  );

  const costEstimate =
    (totalUsage.inputTokens * 0.01 + totalUsage.outputTokens * 0.03) / 1000;

  return costEstimate > 0.5; // Stop if cost exceeds $0.50
};
```
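To sanity-check the cost arithmetic in isolation, the same estimate can be written as a standalone function. The step shape here is simplified to just the usage numbers, and the $0.01/$0.03 per-1K-token rates are the example's assumption, not real pricing:

```ts
// Standalone sketch of the cost estimate used by budgetExceeded above.
// Step shape is simplified to the usage numbers only; the rates are the
// example's assumed $0.01 / $0.03 per 1K tokens, not real pricing.
type UsageStep = { usage?: { inputTokens?: number; outputTokens?: number } };

function estimateCost(steps: UsageStep[]): number {
  const totals = steps.reduce(
    (acc, step) => ({
      inputTokens: acc.inputTokens + (step.usage?.inputTokens ?? 0),
      outputTokens: acc.outputTokens + (step.usage?.outputTokens ?? 0),
    }),
    { inputTokens: 0, outputTokens: 0 },
  );
  return (totals.inputTokens * 0.01 + totals.outputTokens * 0.03) / 1000;
}

// 10,000 input + 10,000 output tokens: (100 + 300) / 1000 ≈ $0.40,
// which is under the $0.50 budget, so the loop would continue.
const cost = estimateCost([
  { usage: { inputTokens: 10_000, outputTokens: 10_000 } },
]);
console.log(cost > 0.5 ? 'stop' : 'continue');
```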

Prepare Step
------------


The `prepareStep` callback runs before each step in the loop; if you return no changes, the step uses the initial settings. Use it to modify settings, manage context, or implement dynamic behavior based on execution history.

Dynamic Model Selection

Switch models based on step requirements:

```ts
import { Experimental_Agent as Agent } from 'ai';

const agent = new Agent({
  model: 'openai/gpt-4o-mini', // Default model
  tools: {
    // your tools
  },
  prepareStep: async ({ stepNumber, messages }) => {
    // Use a stronger model for complex reasoning after initial steps
    if (stepNumber > 2 && messages.length > 10) {
      return {
        model: 'openai/gpt-4o',
      };
    }
    // Continue with default settings
    return {};
  },
});

const result = await agent.generate({
  prompt: '...',
});
```

Context Management

Manage growing conversation history in long-running loops:

```ts
import { Experimental_Agent as Agent } from 'ai';

const agent = new Agent({
  model: 'openai/gpt-4o',
  tools: {
    // your tools
  },
  prepareStep: async ({ messages }) => {
    // Keep only recent messages to stay within context limits
    if (messages.length > 20) {
      return {
        messages: [
          messages[0], // Keep system message
          ...messages.slice(-10), // Keep last 10 messages
        ],
      };
    }
    return {};
  },
});

const result = await agent.generate({
  prompt: '...',
});
```
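The trimming rule itself (keep the first message plus the last ten once the history passes twenty) is plain array logic, so it can be pulled out and checked on its own. A sketch with a simplified message shape; the 20/10 thresholds are the example's choice:

```ts
// Standalone sketch of the trimming rule used in prepareStep above:
// keep the first (system) message plus the most recent `keepRecent`
// messages once the history exceeds `max` entries.
type Msg = { role: string; content: string };

function trimHistory(messages: Msg[], max = 20, keepRecent = 10): Msg[] {
  if (messages.length <= max) return messages;
  return [messages[0], ...messages.slice(-keepRecent)];
}

const history: Msg[] = Array.from({ length: 25 }, (_, i) => ({
  role: i === 0 ? 'system' : 'user',
  content: `message ${i}`,
}));

const trimmed = trimHistory(history);
console.log(trimmed.length); // 11: the system message plus the last 10
```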

Tool Selection

Control which tools are available at each step:

```ts
import { Experimental_Agent as Agent } from 'ai';

const agent = new Agent({
  model: 'openai/gpt-4o',
  tools: {
    search: searchTool,
    analyze: analyzeTool,
    summarize: summarizeTool,
  },
  prepareStep: async ({ stepNumber, steps }) => {
    // Search phase (steps 0-2)
    if (stepNumber <= 2) {
      return {
        activeTools: ['search'],
        toolChoice: 'required',
      };
    }

    // Analysis phase (steps 3-5)
    if (stepNumber <= 5) {
      return {
        activeTools: ['analyze'],
      };
    }

    // Summary phase (step 6+)
    return {
      activeTools: ['summarize'],
      toolChoice: 'required',
    };
  },
});

const result = await agent.generate({
  prompt: '...',
});
```

You can also force a specific tool to be used:

```ts
prepareStep: async ({ stepNumber }) => {
  if (stepNumber === 0) {
    // Force the search tool to be used first
    return {
      toolChoice: { type: 'tool', toolName: 'search' },
    };
  }

  if (stepNumber === 5) {
    // Force the summarize tool after analysis
    return {
      toolChoice: { type: 'tool', toolName: 'summarize' },
    };
  }

  return {};
};
```

Message Modification

Transform messages before sending them to the model:

```ts
import { Experimental_Agent as Agent } from 'ai';

const agent = new Agent({
  model: 'openai/gpt-4o',
  tools: {
    // your tools
  },
  prepareStep: async ({ messages, stepNumber }) => {
    // Summarize tool results to reduce token usage
    const processedMessages = messages.map(msg => {
      if (msg.role === 'tool' && msg.content.length > 1000) {
        return {
          ...msg,
          content: summarizeToolResult(msg.content),
        };
      }
      return msg;
    });

    return { messages: processedMessages };
  },
});

const result = await agent.generate({
  prompt: '...',
});
```
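`summarizeToolResult` above is left for you to implement. One trivial stand-in, sketched here under the assumption that bounding size is all you need, is plain truncation; a real implementation might instead call a cheap model to produce an actual summary:

```ts
// A minimal stand-in for a summarizeToolResult-style helper: hard
// truncation with a marker. This only bounds size; it does not preserve
// meaning the way a model-generated summary would.
function truncateToolResult(content: string, maxLength = 1000): string {
  if (content.length <= maxLength) return content;
  return content.slice(0, maxLength) + ' [truncated]';
}

console.log(truncateToolResult('ok')); // short content passes through
console.log(truncateToolResult('x'.repeat(5000)).length); // 1000 chars + marker
```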

Access Step Information
-----------------------


Both `stopWhen` and `prepareStep` receive detailed information about the current execution:

```ts
prepareStep: async ({
  model, // Current model configuration
  stepNumber, // Current step number (0-indexed)
  steps, // All previous steps with their results
  messages, // Messages to be sent to the model
}) => {
  // Access previous tool calls and results
  const previousToolCalls = steps.flatMap(step => step.toolCalls);
  const previousResults = steps.flatMap(step => step.toolResults);

  // Make decisions based on execution history
  if (previousToolCalls.some(call => call.toolName === 'dataAnalysis')) {
    return {
      toolChoice: { type: 'tool', toolName: 'reportGenerator' },
    };
  }

  return {};
},
```

Manual Loop Control
-------------------


For scenarios requiring complete control over the agent loop, you can use AI SDK Core functions (`generateText` and `streamText`) to implement your own loop management instead of using `stopWhen` and `prepareStep`. This approach provides maximum flexibility for complex workflows.

Implementing a Manual Loop

Build your own agent loop when you need full control over execution:

```ts
import { generateText, ModelMessage } from 'ai';

const messages: ModelMessage[] = [{ role: 'user', content: '...' }];

let step = 0;
const maxSteps = 10;

while (step < maxSteps) {
  const result = await generateText({
    model: 'openai/gpt-4o',
    messages,
    tools: {
      // your tools here
    },
  });

  messages.push(...result.response.messages);

  if (result.text) {
    break; // Stop when model generates text
  }

  step++;
}
```
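Because the stopping logic is now yours, it helps to keep it in a pure helper that can be exercised without a model call. The sketch below is illustrative only: the shapes and names (`StepResult`, `shouldStop`) are this example's, not the SDK's:

```ts
// Illustrative stopping policy for a manual loop: stop when the model
// produced final text, a named tool was called, or the step budget is
// spent. Shapes are simplified stand-ins for the SDK's result types.
type StepResult = { text?: string; toolCalls?: { toolName: string }[] };

function shouldStop(
  result: StepResult,
  step: number,
  opts: { maxSteps: number; stopOnTool?: string },
): boolean {
  if (step + 1 >= opts.maxSteps) return true; // budget exhausted
  if (result.text) return true; // model answered in text
  return (result.toolCalls ?? []).some(c => c.toolName === opts.stopOnTool);
}

console.log(shouldStop({ text: 'done' }, 0, { maxSteps: 10 })); // true
console.log(
  shouldStop({ toolCalls: [{ toolName: 'search' }] }, 0, { maxSteps: 10 }),
); // false
```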

This manual approach gives you complete control over:

  • Message history management
  • Step-by-step decision making
  • Custom stopping conditions
  • Dynamic tool and model selection
  • Error handling and recovery

Learn more about manual agent loops in the cookbook.
