
File: mcp-tools.md | Updated: 11/15/2025

Source: https://ai-sdk.dev/docs/ai-sdk-core/mcp-tools


Model Context Protocol (MCP) Tools

=====================================================================================================================

The MCP tools feature is experimental and may change in the future.

The AI SDK supports connecting to Model Context Protocol (MCP) servers to access their tools, resources, and prompts. This enables your AI applications to discover and use capabilities across various services through a standardized interface.

Initializing an MCP Client


We recommend using HTTP transport (like StreamableHTTPClientTransport) for production deployments. The stdio transport should only be used for connecting to local servers as it cannot be deployed to production environments.

Create an MCP client using one of the following transport options:

  • HTTP transport (Recommended): Either configure HTTP directly via the client using transport: { type: 'http', ... }, or use MCP's official TypeScript SDK StreamableHTTPClientTransport
  • SSE (Server-Sent Events): An alternative HTTP-based transport
  • stdio: For local development only. Uses standard input/output streams for local MCP servers

HTTP Transport (Recommended)

For production deployments, we recommend using the HTTP transport. You can configure it directly on the client:

```ts
import { experimental_createMCPClient as createMCPClient } from '@ai-sdk/mcp';

const mcpClient = await createMCPClient({
  transport: {
    type: 'http',
    url: 'https://your-server.com/mcp',
    // optional: configure HTTP headers
    headers: { Authorization: 'Bearer my-api-key' },
    // optional: provide an OAuth client provider for automatic authorization
    authProvider: myOAuthClientProvider,
  },
});
```

Alternatively, you can use StreamableHTTPClientTransport from MCP's official TypeScript SDK:

```ts
import { experimental_createMCPClient as createMCPClient } from '@ai-sdk/mcp';
import { StreamableHTTPClientTransport } from '@modelcontextprotocol/sdk/client/streamableHttp.js';

const url = new URL('https://your-server.com/mcp');
const mcpClient = await createMCPClient({
  transport: new StreamableHTTPClientTransport(url, {
    sessionId: 'session_123',
  }),
});
```

SSE Transport

SSE provides an alternative HTTP-based transport option. Configure it with a type and url property. You can also provide an authProvider for OAuth:

```ts
import { experimental_createMCPClient as createMCPClient } from '@ai-sdk/mcp';

const mcpClient = await createMCPClient({
  transport: {
    type: 'sse',
    url: 'https://my-server.com/sse',
    // optional: configure HTTP headers
    headers: { Authorization: 'Bearer my-api-key' },
    // optional: provide an OAuth client provider for automatic authorization
    authProvider: myOAuthClientProvider,
  },
});
```

Stdio Transport (Local Servers)

The stdio transport should only be used for local servers.

The Stdio transport can be imported from either the MCP SDK or the AI SDK:

```ts
import { experimental_createMCPClient as createMCPClient } from '@ai-sdk/mcp';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';
// Or use the AI SDK's stdio transport:
// import { Experimental_StdioMCPTransport as StdioClientTransport } from '@ai-sdk/mcp/mcp-stdio';

const mcpClient = await createMCPClient({
  transport: new StdioClientTransport({
    command: 'node',
    args: ['src/stdio/dist/server.js'],
  }),
});
```

Custom Transport

You can also bring your own transport by implementing the MCPTransport interface for specific requirements not covered by the standard transports.
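As a rough illustration, the sketch below implements a minimal in-memory loopback transport. The MCPTransport and JSONRPCMessage shapes shown here are assumptions modeled on the MCP SDK's Transport contract, not the exact AI SDK type definitions, so check the AI SDK's exported types before implementing; a real custom transport would bridge these callbacks to an actual server connection rather than echoing messages back.

```ts
// Assumed shape of an MCP JSON-RPC message; the real SDK types are richer.
interface JSONRPCMessage {
  jsonrpc: '2.0';
  id?: number | string;
  method?: string;
  params?: unknown;
  result?: unknown;
}

// Assumed shape of the MCPTransport interface, modeled on the MCP SDK's
// Transport contract (start/send/close plus event callbacks).
interface MCPTransport {
  start(): Promise<void>;
  send(message: JSONRPCMessage): Promise<void>;
  close(): Promise<void>;
  onmessage?: (message: JSONRPCMessage) => void;
  onerror?: (error: Error) => void;
  onclose?: () => void;
}

// In-memory loopback transport: replies to each request with an empty
// result. Useful only as a skeleton for wiring in real I/O.
class LoopbackTransport implements MCPTransport {
  onmessage?: (message: JSONRPCMessage) => void;
  onerror?: (error: Error) => void;
  onclose?: () => void;

  async start(): Promise<void> {
    // A real transport would open its connection here.
  }

  async send(message: JSONRPCMessage): Promise<void> {
    // Reply asynchronously, as a network transport would.
    queueMicrotask(() => {
      this.onmessage?.({ jsonrpc: '2.0', id: message.id, result: {} });
    });
  }

  async close(): Promise<void> {
    this.onclose?.();
  }
}
```

An instance of such a class could then be passed as the `transport` option when creating the client, the same way the SDK-provided transports are.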

The client returned by the experimental_createMCPClient function is a lightweight client intended for use in tool conversion. It currently does not support all features of the full MCP client, such as session management, resumable streams, and receiving notifications.

Authorization via OAuth is supported when using the AI SDK MCP HTTP or SSE transports by providing an authProvider.

Closing the MCP Client

After initialization, you should close the MCP client based on your usage pattern:

  • For short-lived usage (e.g., single requests), close the client when the response is finished
  • For long-running clients (e.g., command line apps), keep the client open but ensure it's closed when the application terminates

When streaming responses, you can close the client when the LLM response has finished. For example, when using streamText, you should use the onFinish callback:

```ts
const mcpClient = await experimental_createMCPClient({
  // ...
});

const tools = await mcpClient.tools();

const result = await streamText({
  model: 'openai/gpt-4.1',
  tools,
  prompt: 'What is the weather in Brooklyn, New York?',
  onFinish: async () => {
    await mcpClient.close();
  },
});
```

When generating responses without streaming, you can use try/finally or cleanup functions in your framework:

```ts
let mcpClient: MCPClient | undefined;

try {
  mcpClient = await experimental_createMCPClient({
    // ...
  });
} finally {
  await mcpClient?.close();
}
```

Using MCP Tools


The client's tools method acts as an adapter between MCP tools and AI SDK tools. It supports two approaches for working with tool schemas:

Schema Discovery

With schema discovery, all tools offered by the server are automatically listed, and input parameter types are inferred based on the schemas provided by the server:

```ts
const tools = await mcpClient.tools();
```

This approach is simpler to implement and automatically stays in sync with server changes. However, you won't have TypeScript type safety during development, and every tool the server exposes will be loaded.

Schema Definition

For better type safety and control, you can define the tools and their input schemas explicitly in your client code:

import { z } from 'zod';
const tools = await mcpClient.tools({  schemas: {    'get-data': {      inputSchema: z.object({        query: z.string().describe('The data query'),        format: z.enum(['json', 'text']).optional(),      }),    },    // For tools with zero inputs, you should use an empty object:    'tool-with-no-args': {      inputSchema: z.object({}),    },  },});

This approach provides full TypeScript type safety and IDE autocompletion, letting you catch parameter mismatches during development. When you define schemas, the client only pulls the explicitly defined tools, keeping your application focused on the tools it needs.

Using MCP Resources


According to the MCP specification, resources are application-driven data sources that provide context to the model. Unlike tools (which are model-controlled), your application decides when to fetch and pass resources as context.

The MCP client provides three methods for working with resources:

Listing Resources

List all available resources from the MCP server:

```ts
const resources = await mcpClient.listResources();
```

Reading Resource Contents

Read the contents of a specific resource by its URI:

```ts
const resourceData = await mcpClient.readResource({
  uri: 'file:///example/document.txt',
});
```

Listing Resource Templates

Resource templates are dynamic URI patterns that allow flexible queries. List all available templates:

```ts
const templates = await mcpClient.listResourceTemplates();
```
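For illustration, here is a minimal client-side expansion of a {name}-style URI template such as those a server might return from listResourceTemplates. Real MCP resource templates follow RFC 6570, which is considerably richer; this sketch handles only plain {name} substitutions, and the template string and variable names below are made-up examples.

```ts
// Expand simple {name} placeholders in a URI template. Placeholders with
// no matching value are left untouched; values are percent-encoded.
function expandTemplate(
  uriTemplate: string,
  values: Record<string, string>,
): string {
  return uriTemplate.replace(/\{(\w+)\}/g, (match, name) =>
    name in values ? encodeURIComponent(values[name]) : match,
  );
}

const uri = expandTemplate('file:///logs/{date}.txt', { date: '2025-01-01' });
```

The expanded URI can then be passed to readResource, e.g. `mcpClient.readResource({ uri })`.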

Using MCP Prompts


According to the MCP specification, prompts are user-controlled templates that servers expose for clients to list and retrieve with optional arguments.

Listing Prompts

```ts
const prompts = await mcpClient.listPrompts();
```

Getting a Prompt

Retrieve prompt messages, optionally passing arguments defined by the server:

```ts
const prompt = await mcpClient.getPrompt({
  name: 'code_review',
  arguments: { code: 'function add(a, b) { return a + b; }' },
});
```

Examples


You can see MCP tools in action in the following example:

Learn to use MCP tools in Node.js

