File: embeddings.md | Updated: 11/15/2025
Embeddings
==========
Embeddings are a way to represent words, phrases, or images as vectors in a high-dimensional space. In this space, semantically similar values are close to each other, and the distance between vectors can be used to measure their similarity.
The AI SDK provides the `embed` function to embed single values, which is useful for tasks such as finding similar words or phrases or clustering text. You can use it with embedding models, e.g. `openai.textEmbeddingModel('text-embedding-3-large')` or `mistral.textEmbeddingModel('mistral-embed')`.
```ts
import { embed } from 'ai';
import { openai } from '@ai-sdk/openai';

// 'embedding' is a single embedding object (number[])
const { embedding } = await embed({
  model: openai.textEmbeddingModel('text-embedding-3-small'),
  value: 'sunny day at the beach',
});
```
When loading data, e.g. when preparing a data store for retrieval-augmented generation (RAG), it is often useful to embed many values at once (batch embedding).
The AI SDK provides the `embedMany` function for this purpose. As with `embed`, you can use it with embedding models, e.g. `openai.textEmbeddingModel('text-embedding-3-large')` or `mistral.textEmbeddingModel('mistral-embed')`.
```ts
import { openai } from '@ai-sdk/openai';
import { embedMany } from 'ai';

// 'embeddings' is an array of embedding objects (number[][]).
// It is sorted in the same order as the input values.
const { embeddings } = await embedMany({
  model: openai.textEmbeddingModel('text-embedding-3-small'),
  values: [
    'sunny day at the beach',
    'rainy afternoon in the city',
    'snowy night in the mountains',
  ],
});
```
After embedding values, you can calculate the similarity between them using the `cosineSimilarity` function. This is useful, for example, to find similar words or phrases in a dataset. You can also rank and filter related items based on their similarity.
```ts
import { openai } from '@ai-sdk/openai';
import { cosineSimilarity, embedMany } from 'ai';

const { embeddings } = await embedMany({
  model: openai.textEmbeddingModel('text-embedding-3-small'),
  values: ['sunny day at the beach', 'rainy afternoon in the city'],
});

console.log(
  `cosine similarity: ${cosineSimilarity(embeddings[0], embeddings[1])}`,
);
```
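Conceptually, cosine similarity is the dot product of two vectors divided by the product of their magnitudes, yielding a value between -1 and 1. A minimal sketch of the underlying math (not the SDK's actual implementation):

```typescript
// Cosine similarity: dot(a, b) / (|a| * |b|).
// Returns 1 for identical directions, 0 for orthogonal vectors, -1 for opposite.
function cosineSim(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSim([1, 0], [1, 0])); // 1
console.log(cosineSim([1, 0], [0, 1])); // 0
```

Because the measure depends only on direction, not magnitude, it works well for comparing embeddings of texts with very different lengths.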
Many providers charge based on the number of tokens used to generate embeddings. Both `embed` and `embedMany` provide token usage information in the `usage` property of the result object:
```ts
import { openai } from '@ai-sdk/openai';
import { embed } from 'ai';

const { embedding, usage } = await embed({
  model: openai.textEmbeddingModel('text-embedding-3-small'),
  value: 'sunny day at the beach',
});

console.log(usage); // { tokens: 10 }
```
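Token counts make it straightforward to estimate spend before embedding a large corpus. A sketch using a hypothetical price per million tokens (check your provider's current pricing; the figure below is not a real quote):

```typescript
// Hypothetical price per million tokens -- for illustration only.
const usdPerMillionTokens = 0.02;

function estimateCostUsd(tokens: number): number {
  return (tokens / 1_000_000) * usdPerMillionTokens;
}

// e.g. for the usage above ({ tokens: 10 }):
console.log(estimateCostUsd(10));
```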
Embedding model settings can be configured using `providerOptions` for provider-specific parameters:
```ts
import { openai } from '@ai-sdk/openai';
import { embed } from 'ai';

const { embedding } = await embed({
  model: openai.textEmbeddingModel('text-embedding-3-small'),
  value: 'sunny day at the beach',
  providerOptions: {
    openai: {
      dimensions: 512, // Reduce embedding dimensions
    },
  },
});
```
The `embedMany` function supports parallel processing; you can configure `maxParallelCalls` to limit the number of concurrent requests:
```ts
import { openai } from '@ai-sdk/openai';
import { embedMany } from 'ai';

const { embeddings, usage } = await embedMany({
  maxParallelCalls: 2, // Limit parallel requests
  model: openai.textEmbeddingModel('text-embedding-3-small'),
  values: [
    'sunny day at the beach',
    'rainy afternoon in the city',
    'snowy night in the mountains',
  ],
});
```
Both `embed` and `embedMany` accept an optional `maxRetries` parameter of type `number` that you can use to set the maximum number of retries for the embedding process. It defaults to 2 retries (3 attempts in total). You can set it to 0 to disable retries.
```ts
import { openai } from '@ai-sdk/openai';
import { embed } from 'ai';

const { embedding } = await embed({
  model: openai.textEmbeddingModel('text-embedding-3-small'),
  value: 'sunny day at the beach',
  maxRetries: 0, // Disable retries
});
```
Both `embed` and `embedMany` accept an optional `abortSignal` parameter of type `AbortSignal` that you can use to abort the embedding process or to set a timeout.
```ts
import { openai } from '@ai-sdk/openai';
import { embed } from 'ai';

const { embedding } = await embed({
  model: openai.textEmbeddingModel('text-embedding-3-small'),
  value: 'sunny day at the beach',
  abortSignal: AbortSignal.timeout(1000), // Abort after 1 second
});
```
Both `embed` and `embedMany` accept an optional `headers` parameter of type `Record<string, string>` that you can use to add custom headers to the embedding request.
```ts
import { openai } from '@ai-sdk/openai';
import { embed } from 'ai';

const { embedding } = await embed({
  model: openai.textEmbeddingModel('text-embedding-3-small'),
  value: 'sunny day at the beach',
  headers: { 'X-Custom-Header': 'custom-value' },
});
```
Both `embed` and `embedMany` return response information that includes the raw provider response:
```ts
import { openai } from '@ai-sdk/openai';
import { embed } from 'ai';

const { embedding, response } = await embed({
  model: openai.textEmbeddingModel('text-embedding-3-small'),
  value: 'sunny day at the beach',
});

console.log(response); // Raw provider response
```
Several providers offer embedding models:
| Provider | Model | Embedding Dimensions |
| --- | --- | --- |
| OpenAI | text-embedding-3-large | 3072 |
| OpenAI | text-embedding-3-small | 1536 |
| OpenAI | text-embedding-ada-002 | 1536 |
| Google Generative AI | gemini-embedding-001 | 3072 |
| Google Generative AI | text-embedding-004 | 768 |
| Mistral | mistral-embed | 1024 |
| Cohere | embed-english-v3.0 | 1024 |
| Cohere | embed-multilingual-v3.0 | 1024 |
| Cohere | embed-english-light-v3.0 | 384 |
| Cohere | embed-multilingual-light-v3.0 | 384 |
| Cohere | embed-english-v2.0 | 4096 |
| Cohere | embed-english-light-v2.0 | 1024 |
| Cohere | embed-multilingual-v2.0 | 768 |
| Amazon Bedrock | amazon.titan-embed-text-v1 | 1536 |
| Amazon Bedrock | amazon.titan-embed-text-v2:0 | 1024 |
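Putting the pieces together, a RAG-style lookup embeds a query and ranks stored documents by similarity to it. A minimal sketch over precomputed embeddings; the `rankBySimilarity` helper and the local `cosine` function below are illustrative, not part of the AI SDK:

```typescript
// Illustrative helper, not part of the AI SDK: rank documents by
// cosine similarity to a query embedding (highest score first).
type Doc = { text: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0;
  let na = 0;
  let nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function rankBySimilarity(queryEmbedding: number[], docs: Doc[]): Doc[] {
  return [...docs].sort(
    (x, y) =>
      cosine(queryEmbedding, y.embedding) - cosine(queryEmbedding, x.embedding),
  );
}

// Toy 2D vectors stand in for real embeddings from embedMany:
const docs: Doc[] = [
  { text: 'beach day', embedding: [0.9, 0.1] },
  { text: 'city rain', embedding: [0.1, 0.9] },
];
console.log(rankBySimilarity([1, 0], docs)[0].text); // 'beach day'
```

In practice you would embed the documents once with `embedMany`, store the vectors (often in a vector database), and embed each incoming query with `embed` before ranking.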