📄 ai-sdk/docs/troubleshooting/use-chat-custom-request-options

File: use-chat-custom-request-options.md | Updated: 11/15/2025

Source: https://ai-sdk.dev/docs/troubleshooting/use-chat-custom-request-options


Custom headers, body, and credentials not working with useChat

==============================================================

Issue
-----

When using the `useChat` hook, custom request options such as `headers`, `body` fields, and `credentials` configured directly on the hook are not sent with the request:

```ts
// These options are not sent with the request
const { messages, sendMessage } = useChat({
  headers: {
    Authorization: 'Bearer token123',
  },
  body: {
    user_id: '123',
  },
  credentials: 'include',
});
```

Background
----------

The `useChat` hook's API for configuring request options has changed. Direct options such as `headers`, `body`, and `credentials` on the hook itself are no longer supported. Instead, use the `transport` configuration with `DefaultChatTransport`, or pass options at the request level.

Solution
--------

There are three ways to configure request options with `useChat`:

### Option 1: Request-Level Configuration (Recommended for Dynamic Values)

For dynamic values that change over time, the recommended approach is to pass options when calling `sendMessage`:

```ts
const { messages, sendMessage } = useChat();

// Send options with each message
sendMessage(
  { text: input },
  {
    headers: {
      Authorization: `Bearer ${getAuthToken()}`, // Dynamic auth token
      'X-Request-ID': generateRequestId(),
    },
    body: {
      temperature: 0.7,
      max_tokens: 100,
      user_id: getCurrentUserId(), // Dynamic user ID
      sessionId: getCurrentSessionId(), // Dynamic session
    },
  },
);
```

This approach ensures that the most up-to-date values are always sent with each request.
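On the server, request-level body fields arrive merged into the same JSON payload as the messages array, so a route handler can read them straight off the parsed body. A minimal, framework-agnostic sketch (the `user_id` and `sessionId` fields are just the examples above; `parseChatRequest` is a hypothetical helper, not an SDK export):

```typescript
// Hypothetical sketch: the extra body fields sent by useChat arrive in the
// same JSON payload as the messages array, so a route handler can read them
// directly from the parsed request body.
type ChatRequestBody = {
  messages: unknown[];
  user_id?: string;
  sessionId?: string;
};

function parseChatRequest(rawJson: string): ChatRequestBody {
  return JSON.parse(rawJson) as ChatRequestBody;
}

const body = parseChatRequest(
  JSON.stringify({ messages: [], user_id: '123', sessionId: 'abc' }),
);
// body.user_id is '123' and body.sessionId is 'abc', alongside body.messages
```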

### Option 2: Hook-Level Configuration with Static Values

For static values that don't change during the component lifecycle, use the `DefaultChatTransport`:

```ts
import { useChat } from '@ai-sdk/react';
import { DefaultChatTransport } from 'ai';

const { messages, sendMessage } = useChat({
  transport: new DefaultChatTransport({
    api: '/api/chat',
    headers: {
      'X-API-Version': 'v1', // Static API version
      'X-App-ID': 'my-app', // Static app identifier
    },
    body: {
      model: 'gpt-4o', // Default model
      stream: true, // Static configuration
    },
    credentials: 'include', // Static credentials policy
  }),
});
```

### Option 3: Hook-Level Configuration with Resolvable Functions

If you need dynamic values at the hook level, you can use functions that return configuration values. However, request-level configuration is generally preferred for better reliability:

```ts
import { useChat } from '@ai-sdk/react';
import { DefaultChatTransport } from 'ai';

const { messages, sendMessage } = useChat({
  transport: new DefaultChatTransport({
    api: '/api/chat',
    headers: () => ({
      Authorization: `Bearer ${getAuthToken()}`,
      'X-User-ID': getCurrentUserId(),
    }),
    body: () => ({
      sessionId: getCurrentSessionId(),
      preferences: getUserPreferences(),
    }),
    credentials: () => (isAuthenticated() ? 'include' : 'same-origin'),
  }),
});
```

For component state that changes over time, request-level configuration (Option 1) is recommended. If you do use hook-level functions, consider storing current values with `useRef` and reading `ref.current` inside the configuration function.
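The stale-closure pitfall that the `useRef` advice addresses can be sketched without React: each render captures its own snapshot of state, while a shared ref object always exposes the latest value (`Ref` and `render` below are hypothetical stand-ins for React's behavior, not SDK APIs):

```typescript
// Hypothetical stand-in for React's behavior: each "render" closes over that
// render's state snapshot, while a mutable ref object is shared across renders.
type Ref<T> = { current: T };

function render(token: string, tokenRef: Ref<string>) {
  tokenRef.current = token; // keep the ref in sync on every render
  return {
    staleHeaders: () => ({ Authorization: `Bearer ${token}` }), // captures snapshot
    freshHeaders: () => ({ Authorization: `Bearer ${tokenRef.current}` }), // reads latest
  };
}

const tokenRef: Ref<string> = { current: '' };
const firstRender = render('token-1', tokenRef);
render('token-2', tokenRef); // state updated on a later render

firstRender.staleHeaders(); // { Authorization: 'Bearer token-1' } -- stale snapshot
firstRender.freshHeaders(); // { Authorization: 'Bearer token-2' } -- current value
```

This is why a plain closure configured once at hook setup can keep sending outdated values, while a function that reads `ref.current` (or a request-level option computed at send time) stays current.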

### Combining Hook and Request Level Options

Request-level options take precedence over hook-level options:

```ts
// Hook-level default configuration
const { messages, sendMessage } = useChat({
  transport: new DefaultChatTransport({
    api: '/api/chat',
    headers: {
      'X-API-Version': 'v1',
    },
    body: {
      model: 'gpt-4o',
    },
  }),
});

// Override or add options per request
sendMessage(
  { text: input },
  {
    headers: {
      'X-API-Version': 'v2', // This overrides the hook-level header
      'X-Request-ID': '123', // This is added
    },
    body: {
      model: 'gpt-4o-mini', // This overrides the hook-level body field
      temperature: 0.5, // This is added
    },
  },
);
```
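The precedence rule amounts to a shallow merge with request-level options applied last: matching keys are overridden, new keys are added. A small illustrative sketch (`mergeOptions` is hypothetical, not an SDK export):

```typescript
// Hypothetical sketch of the precedence rule: request-level options are
// spread after hook-level defaults, so matching keys are overridden and
// new keys are added.
type RequestOptions = {
  headers?: Record<string, string>;
  body?: Record<string, unknown>;
};

function mergeOptions(
  hookLevel: RequestOptions,
  requestLevel: RequestOptions,
): RequestOptions {
  return {
    headers: { ...hookLevel.headers, ...requestLevel.headers },
    body: { ...hookLevel.body, ...requestLevel.body },
  };
}

const merged = mergeOptions(
  { headers: { 'X-API-Version': 'v1' }, body: { model: 'gpt-4o' } },
  {
    headers: { 'X-API-Version': 'v2', 'X-Request-ID': '123' },
    body: { model: 'gpt-4o-mini', temperature: 0.5 },
  },
);
// merged.headers: 'X-API-Version' is 'v2' (overridden), 'X-Request-ID' added
// merged.body: model is 'gpt-4o-mini' (overridden), temperature added
```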

For more details on request configuration, see the Request Configuration documentation.
