📄 ai-sdk/cookbook/api-servers/fastify

File: fastify.md | Updated: 11/15/2025

Source: https://ai-sdk.dev/cookbook/api-servers/fastify


Fastify
===================================================================

You can use the AI SDK in a Fastify server to generate and stream text and objects to the client.

Examples
--------


The examples start a simple HTTP server that listens on port 8080. You can test it with curl, for example:

```sh
curl -X POST http://localhost:8080
```

The examples use the OpenAI `gpt-4o` model. Ensure that the OpenAI API key is set in the `OPENAI_API_KEY` environment variable.
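If you want a clearer startup error when the key is missing, a minimal sketch (not part of the original example) is to check `process.env` before registering routes; the `@ai-sdk/openai` provider reads `OPENAI_API_KEY` from the environment by default:

```ts
// Optional fail-fast check (a sketch, not part of the original example).
// The @ai-sdk/openai provider reads OPENAI_API_KEY from the environment by
// default, so verifying it up front avoids a confusing error mid-request.
if (!process.env.OPENAI_API_KEY) {
  throw new Error('OPENAI_API_KEY environment variable is not set');
}
```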

Full example: github.com/vercel/ai/examples/fastify

Data Stream
-----------

You can use the `toDataStream` method to get a data stream from the result and then pipe it to the response.

index.ts

```ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import Fastify from 'fastify';

const fastify = Fastify({ logger: true });

fastify.post('/', async function (request, reply) {
  const result = streamText({
    model: openai('gpt-4o'),
    prompt: 'Invent a new holiday and describe its traditions.',
  });

  // Mark the response as a v1 data stream:
  reply.header('X-Vercel-AI-Data-Stream', 'v1');
  reply.header('Content-Type', 'text/plain; charset=utf-8');

  return reply.send(result.toDataStream());
});

fastify.listen({ port: 8080 });
```
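To get a feel for what the client receives, you can read the raw response directly. A minimal consumer sketch (assuming Node 18+ with built-in `fetch`, an ESM module for top-level `await`, and the server above running on port 8080; in a UI you would typically let `useChat` from `@ai-sdk/react` parse this format instead):

```ts
// Minimal sketch of a data stream consumer (assumption: Node 18+, ESM).
const response = await fetch('http://localhost:8080', { method: 'POST' });

// The body arrives as data stream protocol lines, e.g. 0:"..." for text
// parts; print them as they stream in.
const decoder = new TextDecoder();
for await (const chunk of response.body!) {
  process.stdout.write(decoder.decode(chunk, { stream: true }));
}
```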

Sending Custom Data
-------------------

You can use `createDataStream` to send custom data to the client.

index.ts

```ts
import { openai } from '@ai-sdk/openai';
import { createDataStream, streamText } from 'ai';
import Fastify from 'fastify';

const fastify = Fastify({ logger: true });

fastify.post('/stream-data', async function (request, reply) {
  // immediately start streaming the response
  const dataStream = createDataStream({
    execute: async dataStreamWriter => {
      dataStreamWriter.writeData('initialized call');

      const result = streamText({
        model: openai('gpt-4o'),
        prompt: 'Invent a new holiday and describe its traditions.',
      });

      result.mergeIntoDataStream(dataStreamWriter);
    },
    onError: error => {
      // Error messages are masked by default for security reasons.
      // If you want to expose the error message to the client, you can do so here:
      return error instanceof Error ? error.message : String(error);
    },
  });

  // Mark the response as a v1 data stream:
  reply.header('X-Vercel-AI-Data-Stream', 'v1');
  reply.header('Content-Type', 'text/plain; charset=utf-8');

  return reply.send(dataStream);
});

fastify.listen({ port: 8080 });
```
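On the client, values written with `writeData` arrive separately from the model text. A minimal React sketch (an assumption, not part of the original example) using `useChat` from `@ai-sdk/react`:

```tsx
'use client';

import { useChat } from '@ai-sdk/react';

export default function Chat() {
  // `data` collects the values written with dataStreamWriter.writeData,
  // e.g. 'initialized call'; `messages` holds the streamed model text.
  const { messages, data } = useChat({ api: '/stream-data' });

  return (
    <>
      {data && <pre>{JSON.stringify(data, null, 2)}</pre>}
      {messages.map(message => (
        <div key={message.id}>{message.content}</div>
      ))}
    </>
  );
}
```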

Text Stream
-----------

You can use the `textStream` property to get a text stream from the result and then pipe it to the response.

index.ts

```ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import Fastify from 'fastify';

const fastify = Fastify({ logger: true });

fastify.post('/', async function (request, reply) {
  const result = streamText({
    model: openai('gpt-4o'),
    prompt: 'Invent a new holiday and describe its traditions.',
  });

  reply.header('Content-Type', 'text/plain; charset=utf-8');

  return reply.send(result.textStream);
});

fastify.listen({ port: 8080 });
```
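Because this endpoint streams plain text rather than the data stream protocol, a `useChat`-based client needs to opt into the text protocol. A minimal sketch (an assumption, not part of the original example):

```tsx
import { useChat } from '@ai-sdk/react';

export default function Chat() {
  // streamProtocol: 'text' tells useChat to treat the response body as
  // raw text chunks instead of the default data stream protocol.
  const { messages } = useChat({ streamProtocol: 'text' });

  return messages.map(message => (
    <div key={message.id}>{message.content}</div>
  ));
}
```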

Troubleshooting
---------------


  • Streaming not working when proxied
