File: chatbot-tool-usage.md | Updated: 11/15/2025

Chatbot Tool Usage
==============================================================================================
With `useChat` and `streamText`, you can use tools in your chatbot application. The AI SDK supports three types of tools in this context:

1. Automatically executed server-side tools
2. Automatically executed client-side tools
3. Tools that require user interaction, such as confirmation dialogs
The flow is as follows:
1. The user enters a message in the chat UI.
2. The message is sent to the API route.
3. In your server-side route, the language model generates tool calls during the `streamText` call.
4. All tool calls are forwarded to the client.
5. Server-side tools are executed using their `execute` method and their results are forwarded to the client.
6. Client-side tools that should be automatically executed are handled with the `onToolCall` callback. You must call `addToolOutput` to provide the tool result.
7. Client-side tools that require user interaction can be displayed in the UI. The tool calls and results are available as tool parts in the `parts` property of the last assistant message.
8. When the user interaction is done, `addToolOutput` can be used to add the tool result to the chat.
9. Once all tool results are available, the chat can be automatically submitted using `sendAutomaticallyWhen`. This triggers another iteration of this flow.

The tool calls and tool executions are integrated into the assistant message as typed tool parts. A tool part is at first a tool call, and then it becomes a tool result when the tool is executed. The tool result contains all information about the tool call as well as the result of the tool execution.
Tool result submission can be configured using the sendAutomaticallyWhen option. You can use the lastAssistantMessageIsCompleteWithToolCalls helper to automatically submit when all tool results are available. This simplifies the client-side code while still allowing full control when needed.
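If you need more control, you can also pass your own predicate. Here is a minimal sketch, assuming `sendAutomaticallyWhen` receives the current messages and returns a boolean; `autoSubmitEnabled` is a hypothetical app-specific flag and not part of the AI SDK:

```tsx
'use client';

import { useChat } from '@ai-sdk/react';
import { lastAssistantMessageIsCompleteWithToolCalls } from 'ai';

export default function Chat() {
  // Hypothetical app-specific flag (e.g. driven by a settings toggle).
  const autoSubmitEnabled = true;

  const { messages, sendMessage, addToolOutput } = useChat({
    // Auto-submit only when the helper reports that every tool call in the
    // last assistant message has a result AND the app allows it.
    sendAutomaticallyWhen: ({ messages }) =>
      autoSubmitEnabled &&
      lastAssistantMessageIsCompleteWithToolCalls({ messages }),
  });

  // ... render messages and the input form as shown in the example below
}
```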
In this example, we'll use three tools:
- `getWeatherInformation`: An automatically executed server-side tool that returns the weather in a given city.
- `askForConfirmation`: A user-interaction client-side tool that asks the user for confirmation.
- `getLocation`: An automatically executed client-side tool that returns a random city.

app/api/chat/route.ts
```ts
import { openai } from '@ai-sdk/openai';
import { convertToModelMessages, streamText, UIMessage } from 'ai';
import { z } from 'zod';

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages: convertToModelMessages(messages),
    tools: {
      // server-side tool with execute function:
      getWeatherInformation: {
        description: 'show the weather in a given city to the user',
        inputSchema: z.object({ city: z.string() }),
        execute: async ({}: { city: string }) => {
          const weatherOptions = ['sunny', 'cloudy', 'rainy', 'snowy', 'windy'];
          return weatherOptions[
            Math.floor(Math.random() * weatherOptions.length)
          ];
        },
      },
      // client-side tool that starts user interaction:
      askForConfirmation: {
        description: 'Ask the user for confirmation.',
        inputSchema: z.object({
          message: z.string().describe('The message to ask for confirmation.'),
        }),
      },
      // client-side tool that is automatically executed on the client:
      getLocation: {
        description:
          'Get the user location. Always ask for confirmation before using this tool.',
        inputSchema: z.object({}),
      },
    },
  });

  return result.toUIMessageStreamResponse();
}
```
The client-side page uses the useChat hook to create a chatbot application with real-time message streaming. Tool calls are displayed in the chat UI as typed tool parts. Please make sure to render the messages using the parts property of the message.
There are three things worth mentioning:
1. The `onToolCall` callback is used to handle client-side tools that should be automatically executed. In this example, the `getLocation` tool is a client-side tool that returns a random city. You call `addToolOutput` to provide the result (without `await`, to avoid potential deadlocks).

   Always check `if (toolCall.dynamic)` first in your `onToolCall` handler. Without this check, TypeScript will throw an error like `Type 'string' is not assignable to type '"toolName1" | "toolName2"'` when you try to use `toolCall.toolName` in `addToolOutput`.

2. The `sendAutomaticallyWhen` option with the `lastAssistantMessageIsCompleteWithToolCalls` helper automatically submits when all tool results are available.

3. The `parts` array of assistant messages contains tool parts with typed names such as `tool-askForConfirmation`. The client-side tool `askForConfirmation` is displayed in the UI. It asks the user for confirmation and displays the result once the user confirms or denies the execution. The result is added to the chat using `addToolOutput` with the `tool` parameter for type safety.
app/page.tsx
```tsx
'use client';

import { useChat } from '@ai-sdk/react';
import {
  DefaultChatTransport,
  lastAssistantMessageIsCompleteWithToolCalls,
} from 'ai';
import { useState } from 'react';

export default function Chat() {
  const { messages, sendMessage, addToolOutput } = useChat({
    transport: new DefaultChatTransport({
      api: '/api/chat',
    }),

    sendAutomaticallyWhen: lastAssistantMessageIsCompleteWithToolCalls,

    // run client-side tools that are automatically executed:
    async onToolCall({ toolCall }) {
      // Check if it's a dynamic tool first for proper type narrowing
      if (toolCall.dynamic) {
        return;
      }

      if (toolCall.toolName === 'getLocation') {
        const cities = ['New York', 'Los Angeles', 'Chicago', 'San Francisco'];

        // No await - avoids potential deadlocks
        addToolOutput({
          tool: 'getLocation',
          toolCallId: toolCall.toolCallId,
          output: cities[Math.floor(Math.random() * cities.length)],
        });
      }
    },
  });
  const [input, setInput] = useState('');

  return (
    <>
      {messages?.map(message => (
        <div key={message.id}>
          <strong>{`${message.role}: `}</strong>
          {message.parts.map(part => {
            switch (part.type) {
              // render text parts as simple text:
              case 'text':
                return part.text;

              // for tool parts, use the typed tool part names:
              case 'tool-askForConfirmation': {
                const callId = part.toolCallId;

                switch (part.state) {
                  case 'input-streaming':
                    return (
                      <div key={callId}>Loading confirmation request...</div>
                    );
                  case 'input-available':
                    return (
                      <div key={callId}>
                        {part.input.message}
                        <div>
                          <button
                            onClick={() =>
                              addToolOutput({
                                tool: 'askForConfirmation',
                                toolCallId: callId,
                                output: 'Yes, confirmed.',
                              })
                            }
                          >
                            Yes
                          </button>
                          <button
                            onClick={() =>
                              addToolOutput({
                                tool: 'askForConfirmation',
                                toolCallId: callId,
                                output: 'No, denied',
                              })
                            }
                          >
                            No
                          </button>
                        </div>
                      </div>
                    );
                  case 'output-available':
                    return (
                      <div key={callId}>
                        Location access allowed: {part.output}
                      </div>
                    );
                  case 'output-error':
                    return <div key={callId}>Error: {part.errorText}</div>;
                }
                break;
              }

              case 'tool-getLocation': {
                const callId = part.toolCallId;

                switch (part.state) {
                  case 'input-streaming':
                    return (
                      <div key={callId}>Preparing location request...</div>
                    );
                  case 'input-available':
                    return <div key={callId}>Getting location...</div>;
                  case 'output-available':
                    return <div key={callId}>Location: {part.output}</div>;
                  case 'output-error':
                    return (
                      <div key={callId}>
                        Error getting location: {part.errorText}
                      </div>
                    );
                }
                break;
              }

              case 'tool-getWeatherInformation': {
                const callId = part.toolCallId;

                switch (part.state) {
                  // example of pre-rendering streaming tool inputs:
                  case 'input-streaming':
                    return (
                      <pre key={callId}>{JSON.stringify(part, null, 2)}</pre>
                    );
                  case 'input-available':
                    return (
                      <div key={callId}>
                        Getting weather information for {part.input.city}...
                      </div>
                    );
                  case 'output-available':
                    return (
                      <div key={callId}>
                        Weather in {part.input.city}: {part.output}
                      </div>
                    );
                  case 'output-error':
                    return (
                      <div key={callId}>
                        Error getting weather for {part.input.city}:{' '}
                        {part.errorText}
                      </div>
                    );
                }
                break;
              }
            }
          })}
          <br />
        </div>
      ))}

      <form
        onSubmit={e => {
          e.preventDefault();
          if (input.trim()) {
            sendMessage({ text: input });
            setInput('');
          }
        }}
      >
        <input value={input} onChange={e => setInput(e.target.value)} />
      </form>
    </>
  );
}
```
Sometimes an error may occur during client-side tool execution. Use the `addToolOutput` method with a `state` of `output-error` and an `errorText` value (instead of `output`) to record the error.
app/page.tsx
```tsx
'use client';

import { useChat } from '@ai-sdk/react';
import {
  DefaultChatTransport,
  lastAssistantMessageIsCompleteWithToolCalls,
} from 'ai';
import { useState } from 'react';

export default function Chat() {
  const { messages, sendMessage, addToolOutput } = useChat({
    transport: new DefaultChatTransport({
      api: '/api/chat',
    }),

    sendAutomaticallyWhen: lastAssistantMessageIsCompleteWithToolCalls,

    // run client-side tools that are automatically executed:
    async onToolCall({ toolCall }) {
      // Check if it's a dynamic tool first for proper type narrowing
      if (toolCall.dynamic) {
        return;
      }

      if (toolCall.toolName === 'getWeatherInformation') {
        try {
          const weather = await getWeatherInformation(toolCall.input);

          // No await - avoids potential deadlocks
          addToolOutput({
            tool: 'getWeatherInformation',
            toolCallId: toolCall.toolCallId,
            output: weather,
          });
        } catch (err) {
          addToolOutput({
            tool: 'getWeatherInformation',
            toolCallId: toolCall.toolCallId,
            state: 'output-error',
            errorText: 'Unable to get the weather information',
          });
        }
      }
    },
  });
}
```
When using dynamic tools (tools with unknown types at compile time), the UI parts use a generic dynamic-tool type instead of specific tool types:
app/page.tsx
```tsx
{
  message.parts.map((part, index) => {
    switch (part.type) {
      // Static tools with specific (`tool-${toolName}`) types
      case 'tool-getWeatherInformation':
        return <WeatherDisplay part={part} />;

      // Dynamic tools use generic `dynamic-tool` type
      case 'dynamic-tool':
        return (
          <div key={index}>
            <h4>Tool: {part.toolName}</h4>
            {part.state === 'input-streaming' && (
              <pre>{JSON.stringify(part.input, null, 2)}</pre>
            )}
            {part.state === 'output-available' && (
              <pre>{JSON.stringify(part.output, null, 2)}</pre>
            )}
            {part.state === 'output-error' && (
              <div>Error: {part.errorText}</div>
            )}
          </div>
        );
    }
  });
}
```
Dynamic tools are useful when integrating with:

- MCP (Model Context Protocol) tools without schemas
- User-defined functions loaded at runtime
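On the server, a dynamic tool can be defined with the `dynamicTool` helper, which types the tool input as `unknown` rather than inferring it from the schema. Below is a minimal sketch under that assumption; `runCustomFunction` is a hypothetical stand-in for whatever runtime-loaded function your app executes:

```ts
import { dynamicTool } from 'ai';
import { z } from 'zod';

// Hypothetical stand-in for a user-defined function loaded at runtime.
async function runCustomFunction(input: unknown) {
  return { echoed: input };
}

// With `dynamicTool`, `input` is typed as `unknown`, and the client receives
// a generic `dynamic-tool` part instead of a typed `tool-${name}` part.
export const customFunction = dynamicTool({
  description: 'Execute a user-defined function',
  inputSchema: z.object({}),
  execute: async input => {
    return runCustomFunction(input);
  },
});
```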
Tool call streaming is enabled by default in AI SDK 5.0, allowing you to stream tool calls while they are being generated. This provides a better user experience by showing tool inputs as they are generated in real-time.
app/api/chat/route.ts
```ts
export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages: convertToModelMessages(messages),
    // toolCallStreaming is enabled by default in v5
    // ...
  });

  return result.toUIMessageStreamResponse();
}
```
With tool call streaming enabled, partial tool calls are streamed as part of the data stream. They are available through the useChat hook. The typed tool parts of assistant messages will also contain partial tool calls. You can use the state property of the tool part to render the correct UI.
app/page.tsx
```tsx
export default function Chat() {
  // ...
  return (
    <>
      {messages?.map(message => (
        <div key={message.id}>
          {message.parts.map(part => {
            switch (part.type) {
              case 'tool-askForConfirmation':
              case 'tool-getLocation':
              case 'tool-getWeatherInformation':
                switch (part.state) {
                  case 'input-streaming':
                    return <pre>{JSON.stringify(part.input, null, 2)}</pre>;
                  case 'input-available':
                    return <pre>{JSON.stringify(part.input, null, 2)}</pre>;
                  case 'output-available':
                    return <pre>{JSON.stringify(part.output, null, 2)}</pre>;
                  case 'output-error':
                    return <div>Error: {part.errorText}</div>;
                }
            }
          })}
        </div>
      ))}
    </>
  );
}
```
When you are using multi-step tool calls, the AI SDK will add step start parts to the assistant messages. If you want to display boundaries between tool calls, you can use the step-start parts as follows:
app/page.tsx
```tsx
// ...
// where you render the message parts:
message.parts.map((part, index) => {
  switch (part.type) {
    case 'step-start':
      // show step boundaries as horizontal lines:
      return index > 0 ? (
        <div key={index} className="text-gray-500">
          <hr className="my-2 border-gray-300" />
        </div>
      ) : null;
    case 'text':
    // ...
    case 'tool-askForConfirmation':
    case 'tool-getLocation':
    case 'tool-getWeatherInformation':
    // ...
  }
});
// ...
```
You can also use multi-step calls on the server-side with streamText. This works when all invoked tools have an execute function on the server side.
app/api/chat/route.ts
```ts
import { openai } from '@ai-sdk/openai';
import {
  convertToModelMessages,
  streamText,
  UIMessage,
  stepCountIs,
} from 'ai';
import { z } from 'zod';

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages: convertToModelMessages(messages),
    tools: {
      getWeatherInformation: {
        description: 'show the weather in a given city to the user',
        inputSchema: z.object({ city: z.string() }),
        // tool has execute function:
        execute: async ({}: { city: string }) => {
          const weatherOptions = ['sunny', 'cloudy', 'rainy', 'snowy', 'windy'];
          return weatherOptions[
            Math.floor(Math.random() * weatherOptions.length)
          ];
        },
      },
    },
    stopWhen: stepCountIs(5),
  });

  return result.toUIMessageStreamResponse();
}
```
Language models can make errors when calling tools. By default, these errors are masked for security reasons, and show up as "An error occurred" in the UI.
To surface the errors, you can use the onError function when calling toUIMessageStreamResponse.
```ts
export function errorHandler(error: unknown) {
  if (error == null) {
    return 'unknown error';
  }

  if (typeof error === 'string') {
    return error;
  }

  if (error instanceof Error) {
    return error.message;
  }

  return JSON.stringify(error);
}
```

```ts
const result = streamText({
  // ...
});

return result.toUIMessageStreamResponse({
  onError: errorHandler,
});
```
In case you are using createUIMessageResponse, you can provide the onError function in the same way:
```ts
const response = createUIMessageResponse({
  // ...
  async execute(dataStream) {
    // ...
  },
  onError: error => `Custom error: ${error.message}`,
});
```