šŸ“ Sign Up | šŸ” Log In

← Root | ↑ Up

ā”Œā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā” │ šŸ“„ shadcn/directory/udecode/plate/(plugins)/(ai)/ai │ ā””ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”˜

╔══════════════════════════════════════════════════════════════════════════════════════════════╗
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘
ā•‘

---
title: AI
description: AI-powered writing assistance.
docs:
  - route: https://pro.platejs.org/docs/examples/ai
    title: Plus
---

<ComponentPreview name="ai-demo" />

<PackageInfo>

## Features

- Context-aware command menu that adapts to cursor, text selection, and block selection workflows.
- Streaming Markdown/MDX insertion with table, column, and code block support, powered by `streamInsertChunk`.
- Insert and chat review modes with undo-safe batching via `withAIBatch` and `tf.ai.undo()`.
- Block-selection-aware transforms that replace or append entire sections using `tf.aiChat.replaceSelection` and `tf.aiChat.insertBelow`.
- Direct integration with `@ai-sdk/react`, so `api.aiChat.submit` can stream responses from Vercel AI SDK helpers.
- Suggestion and comment utilities that diff AI edits, accept or reject changes, and map AI feedback back to document ranges.

</PackageInfo>

## Kit Usage

<Steps>

### Installation

The fastest way to add AI functionality is with the `AIKit`. It ships the configured `AIPlugin`, `AIChatPlugin`, Markdown streaming helpers, cursor overlay, and their Plate UI components.

<ComponentSource name="ai-kit" />

- `AIMenu`: Floating command surface for prompts, tool shortcuts, and chat review.
- `AILoadingBar`: Displays streaming status at the editor container.
- `AIAnchorElement`: Invisible anchor node used to position the floating menu during streaming.
- `AILeaf`: Renders AI-marked text with subtle styling.

### Add Kit

```tsx
import { createPlateEditor } from 'platejs/react';
import { AIKit } from '@/components/editor/plugins/ai-kit';

const editor = createPlateEditor({
  plugins: [
    // ...otherPlugins,
    ...AIKit,
  ],
});
```

### Add API Route

Expose a streaming command endpoint that proxies your model provider:

<ComponentSource name="ai-api" />

### Configure Environment

Set your AI Gateway key locally (replace it with your provider secret if you are not using a gateway):

```bash
AI_GATEWAY_API_KEY="your-api-key"
```
</Steps>

## Manual Usage

<Steps>

### Installation

```bash
npm install @platejs/ai @platejs/markdown @platejs/selection @ai-sdk/react ai
```

Install `@platejs/suggestion` as well if you want diff-based edit suggestions.

### Add Plugins

```tsx
import { createPlateEditor } from 'platejs/react';
import { AIChatPlugin, AIPlugin } from '@platejs/ai/react';
import { BlockSelectionPlugin } from '@platejs/selection/react';
import { MarkdownPlugin } from '@platejs/markdown';

export const editor = createPlateEditor({
  plugins: [
    BlockSelectionPlugin,
    MarkdownPlugin,
    AIPlugin,
    AIChatPlugin, // extended in the next step
  ],
});
```
- `BlockSelectionPlugin`: Enables the multi-block selections that `AIChatPlugin` relies on for insert/replace transforms.
- `MarkdownPlugin`: Provides the Markdown serialization used by the streaming utilities.
- `AIPlugin`: Adds the AI mark and transforms for undoing AI batches.
- `AIChatPlugin`: Supplies the AI combobox, API helpers, and transforms.

Use `AIPlugin.withComponent` with your own element (or `AILeaf`) to highlight AI-generated text.

### Configure AIChatPlugin

Extend `AIChatPlugin` to hook into streaming and edits. This example mirrors the core logic from `AIKit` while keeping the UI headless.

```tsx
import { AIChatPlugin, applyAISuggestions, streamInsertChunk, useChatChunk } from '@platejs/ai/react';
import { withAIBatch } from '@platejs/ai';
import { getPluginType, KEYS, PathApi } from 'platejs';
import { usePluginOption } from 'platejs/react';

export const aiChatPlugin = AIChatPlugin.extend({
  options: {
    chatOptions: {
      api: '/api/ai/command',
      body: {
        model: 'openai/gpt-4o-mini',
      },
    },
    trigger: ' ',
    triggerPreviousCharPattern: /^\s?$/,
  },
  useHooks: ({ editor, getOption }) => {
    const mode = usePluginOption(AIChatPlugin, 'mode');
    const toolName = usePluginOption(AIChatPlugin, 'toolName');

    useChatChunk({
      onChunk: ({ chunk, isFirst, text }) => {
        if (mode === 'insert') {
          if (isFirst) {
            editor.setOption(AIChatPlugin, 'streaming', true);

            editor.tf.insertNodes(
              {
                children: [{ text: '' }],
                type: getPluginType(editor, KEYS.aiChat),
              },
              {
                at: PathApi.next(editor.selection!.focus.path.slice(0, 1)),
              }
            );
          }

          if (!getOption('streaming')) return;

          withAIBatch(
            editor,
            () => {
              streamInsertChunk(editor, chunk, {
                textProps: {
                  [getPluginType(editor, KEYS.ai)]: true,
                },
              });
            },
            { split: isFirst }
          );
        }

        if (toolName === 'edit' && mode === 'chat') {
          withAIBatch(
            editor,
            () => {
              applyAISuggestions(editor, text);
            },
            { split: isFirst }
          );
        }
      },
      onFinish: () => {
        editor.setOption(AIChatPlugin, 'streaming', false);
        editor.setOption(AIChatPlugin, '_blockChunks', '');
        editor.setOption(AIChatPlugin, '_blockPath', null);
        editor.setOption(AIChatPlugin, '_mdxName', null);
      },
    });
  },
});
```
- `useChatChunk`: Watches `UseChatHelpers` status and yields incremental chunks.
- `streamInsertChunk`: Streams Markdown/MDX into the document, reusing the existing block when possible.
- `applyAISuggestions`: Converts responses into transient suggestion nodes when `toolName === 'edit'`.
- `withAIBatch`: Marks history batches so `tf.ai.undo()` only reverts the last AI-generated change.

Provide your own render components (toolbar button, floating menu, etc.) when you extend the plugin.

### Build API Route

Handle `api.aiChat.submit` requests on the server. Each request includes the chat messages from `@ai-sdk/react` and a `ctx` payload that contains the editor children, the current selection, and the last `toolName`.

```tsx
import { createGateway } from '@ai-sdk/gateway';
import { convertToModelMessages, streamText } from 'ai';
import { createSlateEditor } from 'platejs';

import { BaseEditorKit } from '@/registry/components/editor/editor-base-kit';
import { markdownJoinerTransform } from '@/registry/lib/markdown-joiner-transform';

export async function POST(req: Request) {
  const { apiKey, ctx, messages, model } = await req.json();

  // Rehydrated editor, available for building prompts (see Prompt Templates).
  const editor = createSlateEditor({
    plugins: BaseEditorKit,
    selection: ctx.selection,
    value: ctx.children,
  });

  const gateway = createGateway({
    apiKey: apiKey ?? process.env.AI_GATEWAY_API_KEY!,
  });

  const result = streamText({
    experimental_transform: markdownJoinerTransform(),
    messages: convertToModelMessages(messages),
    model: gateway(model ?? 'openai/gpt-4o-mini'),
    system: ctx.toolName === 'edit' ? 'You are an editor that rewrites user text.' : undefined,
  });

  return result.toUIMessageStreamResponse();
}
```
- `ctx.children` and `ctx.selection` are rehydrated into a Slate editor so you can build rich prompts (see Prompt Templates).
- Forward provider settings (`model`, `apiKey`, `temperature`, gateway flags, etc.) through `chatOptions.body`; everything you add is passed verbatim in the JSON payload and can be read before calling `createGateway`.
- Always read secrets on the server. The client should only send opaque identifiers or short-lived tokens.
- Return a streaming response so `useChat` and `useChatChunk` can process tokens incrementally.

### Connect useChat

Bridge the editor and your model endpoint with `@ai-sdk/react`. Store the helpers on the plugin so transforms can reload, stop, or inspect chat state.

```tsx
import { useEffect } from 'react';

import { type UIMessage, DefaultChatTransport } from 'ai';
import { type UseChatHelpers, useChat } from '@ai-sdk/react';
import { AIChatPlugin } from '@platejs/ai/react';
import { useEditorPlugin } from 'platejs/react';

type ChatMessage = UIMessage<{}, { toolName: 'comment' | 'edit' | 'generate'; comment?: unknown }>;

export const useEditorAIChat = () => {
  const { editor, setOption } = useEditorPlugin(AIChatPlugin);

  const chat = useChat<ChatMessage>({
    id: 'editor',
    transport: new DefaultChatTransport({ api: '/api/ai/command' }),
    onData(data) {
      if (data.type === 'data-toolName') {
        editor.setOption(AIChatPlugin, 'toolName', data.data);
      }
    },
  });

  useEffect(() => {
    setOption('chat', chat as UseChatHelpers<ChatMessage>);
  }, [chat, setOption]);

  return chat;
};
```

Combine the helper with `useEditorChat` to keep the floating menu anchored correctly:

```tsx
import { useEditorChat } from '@platejs/ai/react';

useEditorChat({
  chat,
  onOpenChange: (open) => {
    if (!open) chat.stop?.();
  },
});
```

Now you can submit prompts programmatically:

```tsx
import { AIChatPlugin } from '@platejs/ai/react';

editor.getApi(AIChatPlugin).aiChat.submit('', {
  prompt: {
    default: 'Continue the document after {block}',
    selecting: 'Rewrite {selection} with a clearer tone',
  },
  toolName: 'generate',
});
```
</Steps>

## Prompt Templates

### Client Prompting

- `api.aiChat.submit` accepts an `EditorPrompt`: a string, an object with `default`/`selecting`/`blockSelecting` keys, or a function that receives `{ editor, isSelecting, isBlockSelecting }`. The `getEditorPrompt` helper turns that value into the final string on the client.
- Combine it with `replacePlaceholders(editor, template, { prompt })` to expand `{editor}`, `{block}`, `{blockSelection}`, and `{prompt}` using Markdown generated by `@platejs/ai`.

```tsx
import { replacePlaceholders } from '@platejs/ai';

editor.getApi(AIChatPlugin).aiChat.submit('Improve tone', {
  prompt: ({ isSelecting }) =>
    isSelecting
      ? replacePlaceholders(editor, 'Rewrite {blockSelection} using a friendly tone.')
      : replacePlaceholders(editor, 'Continue {block} with two more sentences.'),
  toolName: 'generate',
});
```

### Server Prompting

The demo backend in `apps/www/src/app/api/ai/command` reconstructs the editor from `ctx` and builds structured prompts:

- `getChooseToolPrompt` decides whether the request is `generate`, `edit`, or `comment`.
- `getGeneratePrompt`, `getEditPrompt`, and `getCommentPrompt` transform the current editor state into instructions tailored to each mode.
- Utility helpers such as `getMarkdown`, `getMarkdownWithSelection`, and `buildStructuredPrompt` (see `apps/www/src/app/api/ai/command/prompts.ts`) make it easy to embed block ids, selections, and MDX tags in the LLM request.

Augment the payload you send from the client to fine-tune server prompts:

```tsx
editor.setOption(aiChatPlugin, 'chatOptions', {
  api: '/api/ai/command',
  body: {
    model: 'openai/gpt-4o-mini',
    tone: 'playful',
    temperature: 0.4,
  },
});
```

Everything under `chatOptions.body` arrives in the route handler, letting you swap providers, pass user-specific metadata, or branch into different prompt templates.

## Keyboard Shortcuts

<KeyTable> <KeyTableItem hotkey="Space">Open the AI menu in an empty block (cursor mode)</KeyTableItem> <KeyTableItem hotkey="Cmd + J">Show the AI menu (set via `shortcuts.show`)</KeyTableItem> <KeyTableItem hotkey="Escape">Hide the AI menu and stop streaming</KeyTableItem> </KeyTable>

## Streaming

The streaming utilities keep complex layouts intact while responses arrive:

- `streamInsertChunk(editor, chunk, options)` deserializes Markdown chunks, updates the current block in place, and appends new blocks as needed. Use `textProps`/`elementProps` to tag streamed nodes (e.g., to mark AI text).
- `streamDeserializeMd` and `streamDeserializeInlineMd` provide lower-level access if you need to control streaming for custom node types.
- `streamSerializeMd` mirrors the editor state so you can detect drift between streamed content and the response buffer.

Reset the internal `_blockChunks`, `_blockPath`, and `_mdxName` options when streaming finishes so the next response starts from a clean slate.
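As a standalone illustration of the buffering idea these utilities rely on, here is a hypothetical sketch (not the library implementation) that accumulates partial chunks and emits only completed Markdown blocks, deferring incomplete trailing content until more text arrives:

```typescript
// Accumulate streamed chunks and yield a block each time a blank-line
// separator ("\n\n") completes one; flush the remainder at the end.
function* completedBlocks(chunks: Iterable<string>): Generator<string> {
  let buffer = '';
  for (const chunk of chunks) {
    buffer += chunk;
    let idx: number;
    while ((idx = buffer.indexOf('\n\n')) !== -1) {
      yield buffer.slice(0, idx);
      buffer = buffer.slice(idx + 2);
    }
  }
  if (buffer) yield buffer;
}

const blocks = [...completedBlocks(['# Ti', 'tle\n\nPara', 'graph.'])];
// blocks === ['# Title', 'Paragraph.']
```

The real helpers additionally reuse the current block path so a partially streamed block is updated in place rather than re-inserted.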

### Streaming Example

<ComponentPreview name="markdown-streaming-demo" />

## Plate Plus

<ComponentPreviewPro name="ai-pro" />

## Hooks

### useAIChatEditor

Registers an auxiliary editor for chat previews and deserializes Markdown with block-level memoization.

<API name="useAIChatEditor"> <APIParameters> <APIItem name="editor" type="SlateEditor">Editor instance dedicated to the chat preview.</APIItem> <APIItem name="content" type="string">Markdown content returned by the model.</APIItem> <APIItem name="options" type="DeserializeMdOptions" optional>Pass `parser` to filter tokens before deserialization.</APIItem> </APIParameters> </API>

```tsx
import { usePlateEditor } from 'platejs/react';
import { MarkdownPlugin } from '@platejs/markdown';
import { AIChatPlugin, useAIChatEditor } from '@platejs/ai/react';

const aiPreviewEditor = usePlateEditor({
  plugins: [MarkdownPlugin, AIChatPlugin],
});

useAIChatEditor(aiPreviewEditor, responseMarkdown, {
  parser: { exclude: ['space'] },
});
```

### useEditorChat

Connects UseChatHelpers to editor state so the AI menu knows whether to anchor to cursor, selection, or block selection.

<API name="useEditorChat"> <APIParameters> <APIItem name="chat" type="UseChatHelpers&lt;ChatMessage&gt;">Helpers returned by `useChat`.</APIItem> <APIItem name="onOpenBlockSelection" type="(blocks: NodeEntry[]) => void" optional>Called when the menu opens on block selection.</APIItem> <APIItem name="onOpenChange" type="(open: boolean) => void" optional>Called whenever the menu opens or closes.</APIItem> <APIItem name="onOpenCursor" type="() => void" optional>Called when the menu opens at the cursor.</APIItem> <APIItem name="onOpenSelection" type="() => void" optional>Called when the menu opens on a text selection.</APIItem> </APIParameters> </API>

### useChatChunk

Streams chat responses chunk-by-chunk and gives you full control over insertion.

<API name="useChatChunk"> <APIParameters> <APIItem name="onChunk" type="(chunk: { chunk: string; isFirst: boolean; nodes: TText[]; text: string }) => void">Handle each streamed chunk.</APIItem> <APIItem name="onFinish" type="({ content }: { content: string }) => void" optional>Called when streaming finishes.</APIItem> </APIParameters> </API>

## Utilities

### withAIBatch

Groups editor operations into a single history batch and flags it as AI-generated so `tf.ai.undo()` removes it safely.

<API name="withAIBatch"> <APIParameters> <APIItem name="editor" type="SlateEditor">Target editor.</APIItem> <APIItem name="fn" type="() => void">Operations to run.</APIItem> <APIItem name="options" type="{ split?: boolean }" optional>Set `split: true` to start a new history batch.</APIItem> </APIParameters> </API>
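To make the batching contract concrete, here is a standalone sketch (illustrative only, not the library code) of flagging a history batch as AI-generated so that an "AI undo" reverts it only when the newest batch carries the flag:

```typescript
// Minimal history model: each batch records its ops and whether it
// came from an AI response.
type Batch = { ai: boolean; ops: string[] };

const undos: Batch[] = [];

// Run fn inside a new batch, tagging it as AI or user work.
function withBatch(ai: boolean, fn: (ops: string[]) => void): void {
  const batch: Batch = { ai, ops: [] };
  fn(batch.ops);
  undos.push(batch);
}

// Revert the newest batch only if it is AI-flagged, mirroring the
// "undo the last AI change without touching user edits" behavior.
function aiUndo(): string[] | null {
  const last = undos[undos.length - 1];
  if (!last?.ai) return null;
  undos.pop();
  return last.ops;
}

withBatch(false, (ops) => ops.push('user types'));
withBatch(true, (ops) => ops.push('AI inserts paragraph'));
// aiUndo() reverts only the AI batch; a second call returns null
// because the newest remaining batch is user work.
```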

### applyAISuggestions

Diffs AI output against the stored `chatNodes` and writes transient suggestion nodes. Requires `@platejs/suggestion`.

<API name="applyAISuggestions"> <APIParameters> <APIItem name="editor" type="SlateEditor">Editor to apply suggestions to.</APIItem> <APIItem name="content" type="string">Markdown response from the model.</APIItem> </APIParameters> </API>

Complementary helpers let you finalize or discard the diff:

- `acceptAISuggestions(editor)`: Converts transient suggestion nodes into permanent suggestions.
- `rejectAISuggestions(editor)`: Removes transient suggestion nodes and clears suggestion marks.

### aiCommentToRange

Maps streamed comment metadata back to document ranges so comments can be inserted automatically.

<API name="aiCommentToRange"> <APIParameters> <APIItem name="editor" type="PlateEditor">Editor instance.</APIItem> <APIItem name="options" type="{ blockId: string; comment: string; content: string }">Block id and text used to locate the range.</APIItem> </APIParameters> <APIReturns type="{ start: BasePoint; end: BasePoint } | null">Range matching the comment or `null` if it cannot be found.</APIReturns> </API>

### findTextRangeInBlock

Fuzzy-search helper that uses the longest common subsequence (LCS) to find the closest match inside a block.

<API name="findTextRangeInBlock"> <APIParameters> <APIItem name="node" type="TNode">Block node to search.</APIItem> <APIItem name="searchText" type="string">Text snippet to locate.</APIItem> </APIParameters> <APIReturns type="{ start: { path: Path; offset: number }; end: { path: Path; offset: number } } | null">Matched range or `null`.</APIReturns> </API>
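For intuition, here is a minimal longest-common-subsequence length function, the kind of similarity measure such a fuzzy matcher can use to rank candidate spans (a sketch, not the library's implementation):

```typescript
// Classic dynamic-programming LCS length: dp[i][j] holds the LCS
// length of a[0..i) and b[0..j).
function lcsLength(a: string, b: string): number {
  const dp = Array.from({ length: a.length + 1 }, () =>
    new Array<number>(b.length + 1).fill(0)
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] =
        a[i - 1] === b[j - 1]
          ? dp[i - 1][j - 1] + 1
          : Math.max(dp[i - 1][j], dp[i][j - 1]);
    }
  }
  return dp[a.length][b.length];
}

// lcsLength('kitten', 'sitting') === 4 ("ittn")
```

A matcher can slide a window over the block's text, score each window against the search snippet with `lcsLength`, and return the best-scoring range.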

### getEditorPrompt

Generates prompts that respect cursor, selection, or block selection states.

<API name="getEditorPrompt"> <APIParameters> <APIItem name="editor" type="SlateEditor">Editor providing context.</APIItem> <APIItem name="options" type="{ prompt?: EditorPrompt }">String, config, or function describing the prompt.</APIItem> </APIParameters> <APIReturns type="string">Contextualized prompt string.</APIReturns> </API>

### replacePlaceholders

Replaces placeholders such as `{editor}`, `{blockSelection}`, and `{prompt}` with serialized Markdown.

<API name="replacePlaceholders"> <APIParameters> <APIItem name="editor" type="SlateEditor">Editor providing content.</APIItem> <APIItem name="text" type="string">Template text.</APIItem> <APIItem name="options" type="{ prompt?: string }" optional>Prompt value injected into `{prompt}`.</APIItem> </APIParameters> <APIReturns type="string">Template with placeholders replaced by Markdown.</APIReturns> </API>
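To illustrate the template contract, here is a hypothetical standalone expander that mimics the placeholder syntax (the real helper serializes editor content to Markdown; this sketch just substitutes plain strings):

```typescript
// Replace {name} placeholders with supplied values, leaving unknown
// placeholders untouched.
function expandTemplate(
  template: string,
  values: Record<string, string>
): string {
  return template.replace(/\{(\w+)\}/g, (match, key: string) =>
    key in values ? values[key] : match
  );
}

const out = expandTemplate('Rewrite {selection} using a {tone} tone.', {
  selection: 'the intro paragraph',
  tone: 'friendly',
});
// out === 'Rewrite the intro paragraph using a friendly tone.'
```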

## Plugins

### AIPlugin

Adds an `ai` mark to streamed text and exposes transforms to remove AI nodes or undo the last AI batch. Use `.withComponent` to render AI-marked text with a custom component.

<API name="AIPlugin"> <APIOptions> <APIItem name="node.isLeaf" type="true">AI content is stored on text nodes.</APIItem> <APIItem name="node.isDecoration" type="false">AI marks are regular text properties, not decorations.</APIItem> </APIOptions> </API>

### AIChatPlugin

Main plugin that powers the AI menu, chat state, and transforms.

<API name="AIChatPlugin"> <APIOptions> <APIItem name="trigger" type="RegExp | string | string[]" optional>Character(s) that open the command menu. Defaults to `' '`.</APIItem> <APIItem name="triggerPreviousCharPattern" type="RegExp" optional>Pattern that must match the character before the trigger. Defaults to `/^\s?$/`.</APIItem> <APIItem name="triggerQuery" type="(editor: SlateEditor) => boolean" optional>Return `false` to cancel opening in specific contexts.</APIItem> <APIItem name="chat" type="UseChatHelpers&lt;ChatMessage&gt;" optional>Store helpers from `useChat` so API calls can access them.</APIItem> <APIItem name="chatNodes" type="TIdElement[]" optional>Snapshot of nodes used to diff edit suggestions (managed internally).</APIItem> <APIItem name="chatSelection" type="TRange | null" optional>Selection captured before submitting a prompt (managed internally).</APIItem> <APIItem name="mode" type="'chat' | 'insert'">Controls whether responses stream directly into the document or open a review panel. Defaults to `'insert'`.</APIItem> <APIItem name="open" type="boolean" optional>Whether the AI menu is visible. Defaults to `false`.</APIItem> <APIItem name="streaming" type="boolean" optional>True while a response is streaming. Defaults to `false`.</APIItem> <APIItem name="toolName" type="'comment' | 'edit' | 'generate' | null" optional>Active tool used to interpret the response.</APIItem> </APIOptions> </API>

## API

### api.aiChat.submit(input, options?)

Submits a prompt to your model provider. When `mode` is omitted, it defaults to `'insert'` for a collapsed cursor and `'chat'` otherwise.

<API name="submit"> <APIParameters> <APIItem name="input" type="string">Raw input from the user.</APIItem> <APIItem name="options" type="object" optional>Fine-tune submission behaviour.</APIItem> </APIParameters> <APIOptions type="object"> <APIItem name="mode" type="'chat' | 'insert'" optional>Override the response mode.</APIItem> <APIItem name="options" type="ChatRequestOptions" optional>Forwarded to `chat.sendMessage` (model, headers, etc.).</APIItem> <APIItem name="prompt" type="EditorPrompt" optional>String, config, or function processed by `getEditorPrompt`.</APIItem> <APIItem name="toolName" type="'comment' | 'edit' | 'generate' | null" optional>Tags the submission so hooks can react differently.</APIItem> </APIOptions> </API>

### api.aiChat.reset(options?)

Clears chat state, removes AI nodes, and optionally undoes the last AI batch.

<API name="reset"> <APIParameters> <APIItem name="options" type="{ undo?: boolean }" optional>Pass `undo: false` to keep streamed content.</APIItem> </APIParameters> </API>

### api.aiChat.node(options?)

Retrieves the first AI node that matches the specified criteria.

<API name="node"> <APIParameters> <APIItem name="options" type="EditorNodesOptions &amp; { anchor?: boolean; streaming?: boolean }" optional>Set `anchor: true` to get the anchor node or `streaming: true` to retrieve the node currently being streamed into.</APIItem> </APIParameters> <APIReturns type="NodeEntry | undefined">Matching node entry, if found.</APIReturns> </API>

### api.aiChat.reload()

Replays the last prompt using the stored `UseChatHelpers`, restoring the original selection or block selection before resubmitting.

### api.aiChat.stop()

Stops streaming and calls `chat.stop`.

### api.aiChat.show()

Opens the AI menu, clears previous chat messages, and resets tool state.

### api.aiChat.hide(options?)

Closes the AI menu, optionally undoing the last AI batch and refocusing the editor.

<API name="hide"> <APIParameters> <APIItem name="options" type="{ focus?: boolean; undo?: boolean }" optional>Set `focus: false` to keep focus outside the editor or `undo: false` to preserve inserted content.</APIItem> </APIParameters> </API>

## Transforms

### tf.aiChat.accept()

Accepts the latest response. In insert mode, it removes AI marks and places the caret at the end of the streamed content. In chat mode, it applies the pending suggestions.

### tf.aiChat.insertBelow(sourceEditor, options?)

Inserts the chat preview (`sourceEditor`) below the current selection or block selection.

<API name="insertBelow"> <APIParameters> <APIItem name="sourceEditor" type="SlateEditor">Editor containing the generated content.</APIItem> <APIItem name="options" type="{ format?: 'all' | 'none' | 'single' }" optional>Copy formatting from the source selection. Defaults to `'single'`.</APIItem> </APIParameters> </API>

### tf.aiChat.replaceSelection(sourceEditor, options?)

Replaces the current selection or block selection with the chat preview.

<API name="replaceSelection"> <APIParameters> <APIItem name="sourceEditor" type="SlateEditor">Editor containing the generated content.</APIItem> <APIItem name="options" type="{ format?: 'all' | 'none' | 'single' }" optional>Controls how much formatting from the original selection should be applied.</APIItem> </APIParameters> </API>

### tf.aiChat.removeAnchor(options?)

Removes the temporary anchor node used to position the AI menu.

<API name="removeAnchor"> <APIParameters> <APIItem name="options" type="EditorNodesOptions" optional>Filters the nodes to remove.</APIItem> </APIParameters> </API>

### tf.ai.insertNodes(nodes, options?)

Inserts nodes tagged with the AI mark at the current selection (or at `options.target`).

### tf.ai.removeMarks(options?)

Clears the AI mark from matching nodes.

### tf.ai.removeNodes(options?)

Removes text nodes that are marked as AI-generated.

### tf.ai.undo()

Undoes the latest history entry if it was created by `withAIBatch` and contained AI content, and clears the paired redo entry to avoid re-applying AI output.

## Customization

### Adding Custom AI Commands

<ComponentSource name="ai-menu" />

Extend the `aiChatItems` map to add new commands. Each command receives `{ aiEditor, editor, input }` and can dispatch `api.aiChat.submit` with custom prompts or transforms.

#### Simple Custom Command

```tsx
summarizeInBullets: {
  icon: <ListIcon />,
  label: 'Summarize in bullets',
  value: 'summarizeInBullets',
  onSelect: ({ editor }) => {
    void editor.getApi(AIChatPlugin).aiChat.submit('', {
      prompt: 'Summarize the current selection using bullet points',
      toolName: 'generate',
    });
  },
},
```

#### Command with Complex Logic

```tsx
generateTOC: {
  icon: <BookIcon />,
  label: 'Generate table of contents',
  value: 'generateTOC',
  onSelect: ({ editor }) => {
    // editor.api.nodes returns an iterable, so spread it before
    // checking length.
    const headings = [
      ...editor.api.nodes({
        match: (n) => ['h1', 'h2', 'h3'].includes(n.type as string),
      }),
    ];

    const prompt =
      headings.length === 0
        ? 'Create a realistic table of contents for this document'
        : 'Generate a table of contents that reflects the existing headings';

    void editor.getApi(AIChatPlugin).aiChat.submit('', {
      mode: 'insert',
      prompt,
      toolName: 'generate',
    });
  },
},
```

The menu automatically switches between command and suggestion states:

- `cursorCommand`: Cursor is collapsed and there is no response yet.
- `selectionCommand`: Text is selected and there is no response yet.
- `cursorSuggestion` / `selectionSuggestion`: A response exists, so actions such as Accept, Try Again, or Insert Below are shown.

Use `toolName` (`'generate' | 'edit' | 'comment'`) to control how the streaming hooks process the response. For example, `'edit'` enables diff-based suggestions, and `'comment'` lets you convert streamed comments into discussion threads with `aiCommentToRange`.
