---
title: AI
description: AI-powered writing assistance.
---
- Stream responses into the document with `streamInsertChunk`.
- Group AI edits into history batches with `withAIBatch` and revert them with `tf.ai.undo()`.
- Review responses with `tf.aiChat.replaceSelection` and `tf.aiChat.insertBelow`.
- Built on `@ai-sdk/react`, so `api.aiChat.submit` can stream responses from Vercel AI SDK helpers.

The fastest way to add AI functionality is with the `AIKit`. It ships the configured `AIPlugin`, `AIChatPlugin`, Markdown streaming helpers, cursor overlay, and their Plate UI components.
- `AIMenu`: Floating command surface for prompts, tool shortcuts, and chat review.
- `AILoadingBar`: Displays streaming status at the editor container.
- `AIAnchorElement`: Invisible anchor node used to position the floating menu during streaming.
- `AILeaf`: Renders AI-marked text with subtle styling.

```tsx
import { createPlateEditor } from 'platejs/react';

import { AIKit } from '@/components/editor/plugins/ai-kit';

const editor = createPlateEditor({
  plugins: [
    // ...otherPlugins,
    ...AIKit,
  ],
});
```
Expose a streaming command endpoint that proxies your model provider:
<ComponentSource name="ai-api" />

Set your AI Gateway key locally (replace with your provider secret if you are not using a gateway):

```bash
AI_GATEWAY_API_KEY="your-api-key"
```
</Steps>
```bash
npm install @platejs/ai @platejs/markdown @platejs/selection @ai-sdk/react ai
```
`@platejs/suggestion` is optional overall, but it is required for diff-based edit suggestions.
```tsx
import { createPlateEditor } from 'platejs/react';
import { AIChatPlugin, AIPlugin } from '@platejs/ai/react';
import { BlockSelectionPlugin } from '@platejs/selection/react';
import { MarkdownPlugin } from '@platejs/markdown';

export const editor = createPlateEditor({
  plugins: [
    BlockSelectionPlugin,
    MarkdownPlugin,
    AIPlugin,
    AIChatPlugin, // extended in the next step
  ],
});
```
- `BlockSelectionPlugin`: Enables the multi-block selections that `AIChatPlugin` relies on for insert/replace transforms.
- `MarkdownPlugin`: Provides the Markdown serialization used by streaming utilities.
- `AIPlugin`: Adds the AI mark and transforms for undoing AI batches.
- `AIChatPlugin`: Supplies the AI combobox, API helpers, and transforms.

Use `AIPlugin.withComponent` with your own element (or `AILeaf`) to highlight AI-generated text.
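For example, a minimal registration sketch (the `@/components/ui/ai-leaf` import path is an assumption; point it at wherever your `AILeaf` component lives):

```tsx
import { AIPlugin } from '@platejs/ai/react';

import { AILeaf } from '@/components/ui/ai-leaf';

// Render AI-marked text with AILeaf, or swap in any custom leaf component.
export const aiPlugin = AIPlugin.withComponent(AILeaf);
```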
Extend `AIChatPlugin` to hook into streaming and edits. The example mirrors the core logic from `AIKit` while keeping the UI headless.
```tsx
import { AIChatPlugin, applyAISuggestions, streamInsertChunk, useChatChunk } from '@platejs/ai/react';
import { withAIBatch } from '@platejs/ai';
import { getPluginType, KEYS, PathApi } from 'platejs';
import { usePluginOption } from 'platejs/react';

export const aiChatPlugin = AIChatPlugin.extend({
  options: {
    chatOptions: {
      api: '/api/ai/command',
      body: {
        model: 'openai/gpt-4o-mini',
      },
    },
    trigger: ' ',
    triggerPreviousCharPattern: /^\s?$/,
  },
  useHooks: ({ editor, getOption }) => {
    const mode = usePluginOption(AIChatPlugin, 'mode');
    const toolName = usePluginOption(AIChatPlugin, 'toolName');

    useChatChunk({
      onChunk: ({ chunk, isFirst, text }) => {
        if (mode === 'insert') {
          if (isFirst) {
            editor.setOption(AIChatPlugin, 'streaming', true);
            editor.tf.insertNodes(
              {
                children: [{ text: '' }],
                type: getPluginType(editor, KEYS.aiChat),
              },
              {
                at: PathApi.next(editor.selection!.focus.path.slice(0, 1)),
              }
            );
          }

          if (!getOption('streaming')) return;

          withAIBatch(
            editor,
            () => {
              streamInsertChunk(editor, chunk, {
                textProps: {
                  [getPluginType(editor, KEYS.ai)]: true,
                },
              });
            },
            { split: isFirst }
          );
        }

        if (toolName === 'edit' && mode === 'chat') {
          withAIBatch(
            editor,
            () => {
              applyAISuggestions(editor, text);
            },
            { split: isFirst }
          );
        }
      },
      onFinish: () => {
        editor.setOption(AIChatPlugin, 'streaming', false);
        editor.setOption(AIChatPlugin, '_blockChunks', '');
        editor.setOption(AIChatPlugin, '_blockPath', null);
        editor.setOption(AIChatPlugin, '_mdxName', null);
      },
    });
  },
});
```
- `useChatChunk`: Watches `UseChatHelpers` status and yields incremental chunks.
- `streamInsertChunk`: Streams Markdown/MDX into the document, reusing the existing block when possible.
- `applyAISuggestions`: Converts responses into transient suggestion nodes when `toolName === 'edit'`.
- `withAIBatch`: Marks history batches so `tf.ai.undo()` only reverts the last AI-generated change.

Provide your own render components (toolbar button, floating menu, etc.) when you extend the plugin.
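As a sketch, a hypothetical toolbar button (the component name and markup are ours) could expose the batch-undo behavior, assuming transforms are reached through Plate's `getTransforms` as elsewhere in these docs:

```tsx
import { AIPlugin } from '@platejs/ai/react';
import { useEditorRef } from 'platejs/react';

// Hypothetical button that reverts only the most recent AI-generated batch,
// leaving the user's own edits untouched.
export function UndoAIButton() {
  const editor = useEditorRef();

  return (
    <button type="button" onClick={() => editor.getTransforms(AIPlugin).ai.undo()}>
      Undo AI
    </button>
  );
}
```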
Handle `api.aiChat.submit` requests on the server. Each request includes the chat messages from `@ai-sdk/react` and a `ctx` payload that contains the editor children, current selection, and last `toolName`.
Complete API example
```ts
import { createGateway } from '@ai-sdk/gateway';
import { convertToCoreMessages, streamText } from 'ai';
import { createSlateEditor } from 'platejs';

import { BaseEditorKit } from '@/registry/components/editor/editor-base-kit';
import { markdownJoinerTransform } from '@/registry/lib/markdown-joiner-transform';

export async function POST(req: Request) {
  const { apiKey, ctx, messages, model } = await req.json();

  const editor = createSlateEditor({
    plugins: BaseEditorKit,
    selection: ctx.selection,
    value: ctx.children,
  });

  const gateway = createGateway({
    apiKey: apiKey ?? process.env.AI_GATEWAY_API_KEY!,
  });

  const result = streamText({
    experimental_transform: markdownJoinerTransform(),
    messages: convertToCoreMessages(messages),
    model: gateway(model ?? 'openai/gpt-4o-mini'),
    system: ctx.toolName === 'edit' ? 'You are an editor that rewrites user text.' : undefined,
  });

  return result.toDataStreamResponse();
}
```
- `ctx.children` and `ctx.selection` are rehydrated into a Slate editor so you can build rich prompts (see Prompt Templates).
- Everything you add to `chatOptions.body` is passed verbatim in the JSON payload and can be read before calling `createGateway`.
- `useChat` and `useChatChunk` can process tokens incrementally.

Bridge the editor and your model endpoint with `useChat` from `@ai-sdk/react`. Store the helpers on the plugin so transforms can reload, stop, or show chat state.
```tsx
import { useEffect } from 'react';
import { type UIMessage, DefaultChatTransport } from 'ai';
import { type UseChatHelpers, useChat } from '@ai-sdk/react';
import { AIChatPlugin } from '@platejs/ai/react';
import { useEditorPlugin } from 'platejs/react';

type ChatMessage = UIMessage<{}, { toolName: 'comment' | 'edit' | 'generate'; comment?: unknown }>;

export const useEditorAIChat = () => {
  const { editor, setOption } = useEditorPlugin(AIChatPlugin);

  const chat = useChat<ChatMessage>({
    id: 'editor',
    api: '/api/ai/command',
    transport: new DefaultChatTransport(),
    onData(data) {
      if (data.type === 'data-toolName') {
        editor.setOption(AIChatPlugin, 'toolName', data.data);
      }
    },
  });

  useEffect(() => {
    setOption('chat', chat as UseChatHelpers<ChatMessage>);
  }, [chat, setOption]);

  return chat;
};
```
Combine the helper with useEditorChat to keep the floating menu anchored correctly:
```tsx
import { useEditorChat } from '@platejs/ai/react';

useEditorChat({
  chat,
  onOpenChange: (open) => {
    if (!open) chat.stop?.();
  },
});
```
Now you can submit prompts programmatically:
```tsx
import { AIChatPlugin } from '@platejs/ai/react';

editor.getApi(AIChatPlugin).aiChat.submit('', {
  prompt: {
    default: 'Continue the document after {block}',
    selecting: 'Rewrite {selection} with a clearer tone',
  },
  toolName: 'generate',
});
```
</Steps>
`api.aiChat.submit` accepts an `EditorPrompt`. Provide a string, an object with `default`/`selecting`/`blockSelecting`, or a function that receives `{ editor, isSelecting, isBlockSelecting }`. The helper `getEditorPrompt` on the client turns that value into the final string. Use `replacePlaceholders(editor, template, { prompt })` to expand `{editor}`, `{block}`, `{blockSelection}`, and `{prompt}` using Markdown generated by `@platejs/ai`.

```tsx
import { replacePlaceholders } from '@platejs/ai';

editor.getApi(AIChatPlugin).aiChat.submit('Improve tone', {
  prompt: ({ isSelecting }) =>
    isSelecting
      ? replacePlaceholders(editor, 'Rewrite {blockSelection} using a friendly tone.')
      : replacePlaceholders(editor, 'Continue {block} with two more sentences.'),
  toolName: 'generate',
});
```
The demo backend in `apps/www/src/app/api/ai/command` reconstructs the editor from `ctx` and builds structured prompts:
- `getChooseToolPrompt` decides whether the request is `generate`, `edit`, or `comment`.
- `getGeneratePrompt`, `getEditPrompt`, and `getCommentPrompt` transform the current editor state into instructions tailored to each mode.
- `getMarkdown`, `getMarkdownWithSelection`, and `buildStructuredPrompt` (see `apps/www/src/app/api/ai/command/prompts.ts`) make it easy to embed block ids, selections, and MDX tags into the LLM request.

Augment the payload you send from the client to fine-tune server prompts:
```tsx
editor.setOption(aiChatPlugin, 'chatOptions', {
  api: '/api/ai/command',
  body: {
    model: 'openai/gpt-4o-mini',
    tone: 'playful',
    temperature: 0.4,
  },
});
```
Everything under chatOptions.body arrives in the route handler, letting you swap providers, pass user-specific metadata, or branch into different prompt templates.
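For instance, a hypothetical server-side helper (the `tone` field mirrors the `chatOptions.body` example above; the prompt strings are illustrative, not part of the Plate API) could branch on those body fields before calling the model:

```typescript
// Hypothetical shape of the extra fields the client sent in chatOptions.body.
interface CommandBody {
  tone?: string;
  toolName?: 'comment' | 'edit' | 'generate';
}

// Build a system prompt from the request body. The wording is illustrative;
// adapt it to your own prompt templates.
export function buildSystemPrompt(body: CommandBody): string {
  const parts: string[] = [];

  if (body.toolName === 'edit') {
    parts.push('You are an editor that rewrites user text.');
  } else {
    parts.push('You are a helpful writing assistant.');
  }

  if (body.tone) {
    parts.push(`Respond in a ${body.tone} tone.`);
  }

  return parts.join(' ');
}
```

You would then pass the result as the `system` option of `streamText` in the route handler.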
The streaming utilities keep complex layouts intact while responses arrive:

- `streamInsertChunk(editor, chunk, options)` deserializes Markdown chunks, updates the current block in place, and appends new blocks as needed. Use `textProps`/`elementProps` to tag streamed nodes (e.g., mark AI text).
- `streamDeserializeMd` and `streamDeserializeInlineMd` provide lower-level access if you need to control streaming for custom node types.
- `streamSerializeMd` mirrors the editor state so you can detect drift between streamed content and the response buffer.
- Reset the internal `_blockChunks`, `_blockPath`, and `_mdxName` options when streaming finishes to start the next response from a clean slate.
### useAIChatEditor

Registers an auxiliary editor for chat previews and deserializes Markdown with block-level memoization.

<API name="useAIChatEditor"> <APIParameters> <APIItem name="editor" type="SlateEditor">Editor instance dedicated to the chat preview.</APIItem> <APIItem name="content" type="string">Markdown content returned by the model.</APIItem> <APIItem name="options" type="DeserializeMdOptions" optional>Pass `parser` to filter tokens before deserialization.</APIItem> </APIParameters> </API>

```tsx
import { usePlateEditor } from 'platejs/react';
import { MarkdownPlugin } from '@platejs/markdown';
import { AIChatPlugin, useAIChatEditor } from '@platejs/ai/react';

const aiPreviewEditor = usePlateEditor({
  plugins: [MarkdownPlugin, AIChatPlugin],
});

useAIChatEditor(aiPreviewEditor, responseMarkdown, {
  parser: { exclude: ['space'] },
});
```
### useEditorChat

Connects `UseChatHelpers` to editor state so the AI menu knows whether to anchor to the cursor, selection, or block selection.

### useChatChunk

Streams chat responses chunk-by-chunk and gives you full control over insertion.

<API name="useChatChunk"> <APIParameters> <APIItem name="onChunk" type="(chunk: { chunk: string; isFirst: boolean; nodes: TText[]; text: string }) => void">Handle each streamed chunk.</APIItem> <APIItem name="onFinish" type="({ content }: { content: string }) => void" optional>Called when streaming finishes.</APIItem> </APIParameters> </API>

### withAIBatch

Groups editor operations into a single history batch and flags it as AI-generated so `tf.ai.undo()` removes it safely.

### applyAISuggestions

Diffs AI output against stored `chatNodes` and writes transient suggestion nodes. Requires `@platejs/suggestion`.
Complementary helpers let you finalize or discard the diff:

- `acceptAISuggestions(editor)`: Converts transient suggestion nodes into permanent suggestions.
- `rejectAISuggestions(editor)`: Removes transient suggestion nodes and clears suggestion marks.

### aiCommentToRange

Maps streamed comment metadata back to document ranges so comments can be inserted automatically.
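The accept/reject helpers above can be wired straight into a review UI. A hypothetical sketch (the component name and markup are ours; it assumes both helpers are exported alongside `applyAISuggestions` from `@platejs/ai/react`):

```tsx
import { acceptAISuggestions, rejectAISuggestions } from '@platejs/ai/react';
import { useEditorRef } from 'platejs/react';

// Hypothetical review footer: finalize or discard the pending AI diff.
export function SuggestionReviewButtons() {
  const editor = useEditorRef();

  return (
    <div>
      <button type="button" onClick={() => acceptAISuggestions(editor)}>
        Accept
      </button>
      <button type="button" onClick={() => rejectAISuggestions(editor)}>
        Reject
      </button>
    </div>
  );
}
```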
<API name="aiCommentToRange"> <APIParameters> <APIItem name="editor" type="PlateEditor">Editor instance.</APIItem> <APIItem name="options" type="{ blockId: string; comment: string; content: string }">Block id and text used to locate the range.</APIItem> </APIParameters> <APIReturns type="{ start: BasePoint; end: BasePoint } | null">Range matching the comment or `null` if it cannot be found.</APIReturns> </API>

### findTextRangeInBlock

Fuzzy-search helper that uses LCS to find the closest match inside a block.

<API name="findTextRangeInBlock"> <APIParameters> <APIItem name="node" type="TNode">Block node to search.</APIItem> <APIItem name="searchText" type="string">Text snippet to locate.</APIItem> </APIParameters> <APIReturns type="{ start: { path: Path; offset: number }; end: { path: Path; offset: number } } | null">Matched range or `null`.</APIReturns> </API>

### getEditorPrompt

Generates prompts that respect cursor, selection, or block selection states.

<API name="getEditorPrompt"> <APIParameters> <APIItem name="editor" type="SlateEditor">Editor providing context.</APIItem> <APIItem name="options" type="{ prompt?: EditorPrompt }">String, config, or function describing the prompt.</APIItem> </APIParameters> <APIReturns type="string">Contextualized prompt string.</APIReturns> </API>

### replacePlaceholders

Replaces placeholders like `{editor}`, `{blockSelection}`, and `{prompt}` with serialized Markdown.
### AIPlugin

Adds an `ai` mark to streamed text and exposes transforms to remove AI nodes or undo the last AI batch. Use `.withComponent` to render AI-marked text with a custom component.

### AIChatPlugin

Main plugin that powers the AI menu, chat state, and transforms.

<API name="AIChatPlugin"> <APIOptions> <APIItem name="trigger" type="RegExp | string | string[]" optional>Character(s) that open the command menu. Defaults to `' '`.</APIItem> <APIItem name="triggerPreviousCharPattern" type="RegExp" optional>Pattern that must match the character before the trigger. Defaults to `/^\s?$/`.</APIItem> <APIItem name="triggerQuery" type="(editor: SlateEditor) => boolean" optional>Return `false` to cancel opening in specific contexts.</APIItem> <APIItem name="chat" type="UseChatHelpers<ChatMessage>" optional>Store helpers from `useChat` so API calls can access them.</APIItem> <APIItem name="chatNodes" type="TIdElement[]" optional>Snapshot of nodes used to diff edit suggestions (managed internally).</APIItem> <APIItem name="chatSelection" type="TRange | null" optional>Selection captured before submitting a prompt (managed internally).</APIItem> <APIItem name="mode" type="'chat' | 'insert'">Controls whether responses stream directly into the document or open a review panel. Defaults to `'insert'`.</APIItem> <APIItem name="open" type="boolean" optional>Whether the AI menu is visible. Defaults to `false`.</APIItem> <APIItem name="streaming" type="boolean" optional>True while a response is streaming. Defaults to `false`.</APIItem> <APIItem name="toolName" type="'comment' | 'edit' | 'generate' | null" optional>Active tool used to interpret the response.</APIItem> </APIOptions> </API>

### api.aiChat.submit

`api.aiChat.submit(input, options?)` submits a prompt to your model provider. When `mode` is omitted it defaults to `'insert'` for a collapsed cursor and `'chat'` otherwise.

### api.aiChat.reset

`api.aiChat.reset(options?)` clears chat state, removes AI nodes, and optionally undoes the last AI batch.
<API name="reset"> <APIParameters> <APIItem name="options" type="{ undo?: boolean }" optional>Pass `undo: false` to keep streamed content.</APIItem> </APIParameters> </API>

### api.aiChat.node

`api.aiChat.node(options?)` retrieves the first AI node that matches the specified criteria.

<API name="node"> <APIParameters> <APIItem name="options" type="EditorNodesOptions & { anchor?: boolean; streaming?: boolean }" optional>Set `anchor: true` to get the anchor node or `streaming: true` to retrieve the node currently being streamed into.</APIItem> </APIParameters> <APIReturns type="NodeEntry | undefined">Matching node entry, if found.</APIReturns> </API>

### api.aiChat.reload

`api.aiChat.reload()` replays the last prompt using the stored `UseChatHelpers`, restoring the original selection or block selection before resubmitting.

### api.aiChat.stop

`api.aiChat.stop()` stops streaming and calls `chat.stop`.

### api.aiChat.show

`api.aiChat.show()` opens the AI menu, clears previous chat messages, and resets tool state.

### api.aiChat.hide

`api.aiChat.hide(options?)` closes the AI menu, optionally undoing the last AI batch and refocusing the editor.
<API name="hide"> <APIParameters> <APIItem name="options" type="{ focus?: boolean; undo?: boolean }" optional>Set `focus: false` to keep focus outside the editor or `undo: false` to preserve inserted content.</APIItem> </APIParameters> </API>

### tf.aiChat.accept

`tf.aiChat.accept()` accepts the latest response. In insert mode it removes AI marks and places the caret at the end of the streamed content. In chat mode it applies the pending suggestions.

### tf.aiChat.insertBelow

`tf.aiChat.insertBelow(sourceEditor, options?)` inserts the chat preview (`sourceEditor`) below the current selection or block selection.

### tf.aiChat.replaceSelection

`tf.aiChat.replaceSelection(sourceEditor, options?)` replaces the current selection or block selection with the chat preview.

<API name="replaceSelection"> <APIParameters> <APIItem name="sourceEditor" type="SlateEditor">Editor containing the generated content.</APIItem> <APIItem name="options" type="{ format?: 'all' | 'none' | 'single' }" optional>Controls how much formatting from the original selection should be applied.</APIItem> </APIParameters> </API>

### tf.aiChat.removeAnchor

`tf.aiChat.removeAnchor(options?)` removes the temporary anchor node used to position the AI menu.

<API name="removeAnchor"> <APIParameters> <APIItem name="options" type="EditorNodesOptions" optional>Filters the nodes to remove.</APIItem> </APIParameters> </API>

### tf.ai.insertNodes

`tf.ai.insertNodes(nodes, options?)` inserts nodes tagged with the AI mark at the current selection (or `options.target`).

### tf.ai.removeMarks

`tf.ai.removeMarks(options?)` clears the AI mark from matching nodes.

### tf.ai.removeNodes

`tf.ai.removeNodes(options?)` removes text nodes that are marked as AI-generated.

### tf.ai.undo

`tf.ai.undo()` undoes the latest history entry if it was created by `withAIBatch` and contained AI content. It clears the paired redo entry to avoid re-applying AI output.
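A minimal sketch of the batch lifecycle these transforms enable (assumptions: `editor` is configured with `AIPlugin`, and the `ai` transforms are reached through Plate's `getTransforms` as elsewhere in these docs):

```tsx
import { withAIBatch } from '@platejs/ai';
import { AIPlugin } from '@platejs/ai/react';

// Insert AI-tagged text as a single history batch flagged as AI-generated...
withAIBatch(editor, () => {
  editor.getTransforms(AIPlugin).ai.insertNodes([{ text: 'Drafted by AI' }]);
});

// ...so one call reverts exactly that batch and clears the paired redo entry.
editor.getTransforms(AIPlugin).ai.undo();
```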
Extend the `aiChatItems` map to add new commands. Each command receives `{ aiEditor, editor, input }` and can dispatch `api.aiChat.submit` with custom prompts or transforms.

```tsx
summarizeInBullets: {
  icon: <ListIcon />,
  label: 'Summarize in bullets',
  value: 'summarizeInBullets',
  onSelect: ({ editor }) => {
    void editor.getApi(AIChatPlugin).aiChat.submit('', {
      prompt: 'Summarize the current selection using bullet points',
      toolName: 'generate',
    });
  },
},
generateTOC: {
  icon: <BookIcon />,
  label: 'Generate table of contents',
  value: 'generateTOC',
  onSelect: ({ editor }) => {
    // editor.api.nodes returns a generator; materialize it before checking length.
    const headings = Array.from(
      editor.api.nodes({
        match: (n) => ['h1', 'h2', 'h3'].includes(n.type as string),
      })
    );
    const prompt =
      headings.length === 0
        ? 'Create a realistic table of contents for this document'
        : 'Generate a table of contents that reflects the existing headings';

    void editor.getApi(AIChatPlugin).aiChat.submit('', {
      mode: 'insert',
      prompt,
      toolName: 'generate',
    });
  },
},
```
The menu automatically switches between command and suggestion states:

- `cursorCommand`: Cursor is collapsed and no response yet.
- `selectionCommand`: Text is selected and no response yet.
- `cursorSuggestion` / `selectionSuggestion`: A response exists, so actions like Accept, Try Again, or Insert Below are shown.

Use `toolName` (`'generate' | 'edit' | 'comment'`) to control how streaming hooks process the response. For example, `'edit'` enables diff-based suggestions, and `'comment'` lets you convert streamed comments into discussion threads with `aiCommentToRange`.