File: prompt-caching.md | Updated: 11/15/2025
Prompt caching
Prompt caching is a powerful feature that optimizes your API usage by letting you resume from specific prefixes in your prompts. This approach significantly reduces processing time and costs for repetitive tasks or prompts with consistent elements. Here's an example of how to implement prompt caching with the Messages API using a cache_control block:
```bash
curl https://api.anthropic.com/v1/messages \
  -H "content-type: application/json" \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -d '{
    "model": "claude-sonnet-4-5",
    "max_tokens": 1024,
    "system": [
      {
        "type": "text",
        "text": "You are an AI assistant tasked with analyzing literary works. Your goal is to provide insightful commentary on themes, characters, and writing style.\n"
      },
      {
        "type": "text",
        "text": "<the entire contents of Pride and Prejudice>",
        "cache_control": {"type": "ephemeral"}
      }
    ],
    "messages": [
      {
        "role": "user",
        "content": "Analyze the major themes in Pride and Prejudice."
      }
    ]
  }'

# Call the model again with the same inputs up to the cache checkpoint
curl https://api.anthropic.com/v1/messages # rest of input
```
{"cache_creation_input_tokens":188086,"cache_read_input_tokens":0,"input_tokens":21,"output_tokens":393}
{"cache_creation_input_tokens":0,"cache_read_input_tokens":188086,"input_tokens":21,"output_tokens":393}
In this example, the entire text of “Pride and Prejudice” is cached using the cache_control parameter. This enables reuse of this large text across multiple API calls without reprocessing it each time. Changing only the user message allows you to ask various questions about the book while utilizing the cached content, leading to faster responses and improved efficiency.
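If you are using the Python SDK, the same request looks roughly like the sketch below. This is a minimal illustration rather than a full program: the client reads ANTHROPIC_API_KEY from the environment, and book_text is a placeholder for the real document.

```python
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

book_text = "<the entire contents of Pride and Prejudice>"  # placeholder for the real text

response = client.messages.create(
    model="claude-sonnet-4-5",
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": "You are an AI assistant tasked with analyzing literary works.",
        },
        {
            "type": "text",
            "text": book_text,
            # everything up to and including this block becomes the cached prefix
            "cache_control": {"type": "ephemeral"},
        },
    ],
    messages=[{"role": "user", "content": "Analyze the major themes in Pride and Prejudice."}],
)

# On the first call cache_creation_input_tokens is large and cache_read_input_tokens is 0;
# on an identical follow-up within the TTL, the numbers are reversed.
print(response.usage.cache_creation_input_tokens, response.usage.cache_read_input_tokens)
```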
When you send a request with prompt caching enabled:
1. The system checks whether the prompt prefix, up to a specified cache breakpoint, is already cached from a recent query.
2. If it is, the cached version is used, reducing processing time and costs.
3. Otherwise, the full prompt is processed and the prefix is cached once the response begins.
This is especially useful for:
- Prompts with many examples
- Large amounts of context or background information
- Repetitive tasks with consistent instructions
- Long multi-turn conversations
By default, the cache has a 5-minute lifetime. The cache is refreshed at no additional cost each time the cached content is used.
If you find that 5 minutes is too short, Anthropic also offers a 1-hour cache duration at additional cost. For more information, see 1-hour cache duration.
**Prompt caching caches the full prefix.** Prompt caching references the entire prompt - tools, system, and messages (in that order) up to and including the block designated with cache_control.
Prompt caching introduces a new pricing structure. The table below shows the price per million tokens for each supported model:
| Model | Base Input Tokens | 5m Cache Writes | 1h Cache Writes | Cache Hits & Refreshes | Output Tokens |
| --- | --- | --- | --- | --- | --- |
| Claude Opus 4.1 | $15 / MTok | $18.75 / MTok | $30 / MTok | $1.50 / MTok | $75 / MTok |
| Claude Opus 4 | $15 / MTok | $18.75 / MTok | $30 / MTok | $1.50 / MTok | $75 / MTok |
| Claude Sonnet 4.5 | $3 / MTok | $3.75 / MTok | $6 / MTok | $0.30 / MTok | $15 / MTok |
| Claude Sonnet 4 | $3 / MTok | $3.75 / MTok | $6 / MTok | $0.30 / MTok | $15 / MTok |
| Claude Sonnet 3.7 (deprecated) | $3 / MTok | $3.75 / MTok | $6 / MTok | $0.30 / MTok | $15 / MTok |
| Claude Haiku 4.5 | $1 / MTok | $1.25 / MTok | $2 / MTok | $0.10 / MTok | $5 / MTok |
| Claude Haiku 3.5 | $0.80 / MTok | $1 / MTok | $1.60 / MTok | $0.08 / MTok | $4 / MTok |
| Claude Opus 3 (deprecated) | $15 / MTok | $18.75 / MTok | $30 / MTok | $1.50 / MTok | $75 / MTok |
| Claude Haiku 3 | $0.25 / MTok | $0.30 / MTok | $0.50 / MTok | $0.03 / MTok | $1.25 / MTok |
The table above reflects the following pricing multipliers for prompt caching:
- 5-minute cache write tokens are 1.25 times the base input token price
- 1-hour cache write tokens are 2 times the base input token price
- Cache read tokens are 0.1 times the base input token price
- Regular input and output tokens are priced at standard rates
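To see how these multipliers translate into a per-request cost, the sketch below applies them to the usage fields returned by the API. The prices passed in come from the table above; the function is only an illustration, not an official billing formula. (1-hour cache writes, which bill at 2x, are not broken out in these fields; see the cache_creation breakdown later on this page.)

```python
def request_cost_usd(usage: dict, base_input_per_mtok: float, output_per_mtok: float) -> float:
    """Estimate one request's cost from its usage fields and the table's multipliers."""
    per_tok = base_input_per_mtok / 1_000_000
    return (
        usage.get("input_tokens", 0) * per_tok                          # uncached input at 1x
        + usage.get("cache_creation_input_tokens", 0) * per_tok * 1.25  # 5-minute cache writes at 1.25x
        + usage.get("cache_read_input_tokens", 0) * per_tok * 0.10      # cache hits at 0.1x
        + usage.get("output_tokens", 0) * output_per_mtok / 1_000_000
    )

# Second call from the earlier example: the whole book is read from the cache.
usage = {"cache_creation_input_tokens": 0, "cache_read_input_tokens": 188086,
         "input_tokens": 21, "output_tokens": 393}
print(f"${request_cost_usd(usage, base_input_per_mtok=3.00, output_per_mtok=15.00):.4f}")
```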
Supported models
Prompt caching is currently supported on:
- Claude Opus 4.1
- Claude Opus 4
- Claude Sonnet 4.5
- Claude Sonnet 4
- Claude Sonnet 3.7 (deprecated)
- Claude Haiku 4.5
- Claude Haiku 3.5
- Claude Haiku 3
- Claude Opus 3 (deprecated)
Structuring your prompt
Place static content (tool definitions, system instructions, context, examples) at the beginning of your prompt. Mark the end of the reusable content for caching using the cache_control parameter. Cache prefixes are created in the following order: tools, system, then messages. This order forms a hierarchy where each level builds upon the previous ones.
How automatic prefix checking works
You can use just one cache breakpoint at the end of your static content, and the system will automatically find the longest matching sequence of cached blocks. Understanding how this works helps you optimize your caching strategy.

Three core principles:

1. Cache keys are cumulative. When you explicitly cache a block with cache_control, the cache hash key is generated by hashing all previous blocks in the conversation sequentially. This means the cache for each block depends on all content that came before it.
2. Backward sequential checking. The system checks for cache hits by working backwards from your explicit breakpoint, checking previous blocks in reverse order.
3. 20-block lookback window. The system only checks up to 20 blocks before each explicit cache_control breakpoint. After checking 20 blocks without a match, it stops checking and moves to the next explicit breakpoint (if any).

Example: Understanding the lookback window

Consider a conversation with 30 content blocks where you set cache_control only on block 30. If you modify an early block (for example, block 5) and resend the request, the system checks backwards from block 30 but stops after 20 blocks (at block 11), so it never reaches the unchanged content and there is no cache hit. If you had also set a cache_control breakpoint on block 5, the system would continue checking from that breakpoint: block 5 (no match) → block 4 (match!). This allows a cache hit at block 4, demonstrating why you should place breakpoints before editable content.

Key takeaway: Always set an explicit cache breakpoint at the end of your conversation to maximize your chances of cache hits. Additionally, set breakpoints just before content blocks that might be editable to ensure those sections can be cached independently.
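The principles above can be illustrated with a small, self-contained simulation. This is not the API's actual implementation; it is only a sketch of cumulative hashing, backward checking, and the 20-block lookback window, using made-up block strings.

```python
import hashlib

LOOKBACK = 20  # blocks checked before each explicit breakpoint

def prefix_hashes(blocks: list[str]) -> list[str]:
    """Cumulative hash per block: each entry depends on all blocks before it."""
    h, out = hashlib.sha256(), []
    for block in blocks:
        h.update(block.encode())
        out.append(h.hexdigest())
    return out

def longest_cache_hit(blocks: list[str], breakpoints: list[int], cache: set[str]) -> int | None:
    """Return the 1-based index of the longest cached prefix found, or None."""
    hashes = prefix_hashes(blocks)
    for bp in sorted(breakpoints, reverse=True):        # start from the last breakpoint
        for i in range(bp, max(bp - LOOKBACK, 0), -1):  # walk backwards, at most 20 blocks
            if hashes[i - 1] in cache:
                return i
    return None

# 30-block conversation, previously cached in full; block 5 is then edited.
original = [f"block {i}" for i in range(1, 31)]
cache = set(prefix_hashes(original))
edited = original.copy()
edited[4] = "block 5 (edited)"

print(longest_cache_hit(edited, breakpoints=[30], cache=cache))     # None: hit lies outside the window
print(longest_cache_hit(edited, breakpoints=[5, 30], cache=cache))  # 4: the extra breakpoint finds block 4
```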
When to use multiple breakpoints
You can define up to 4 cache breakpoints if you want to:
- Cache sections of your prompt that change at different frequencies (e.g., tools and system instructions rarely change, while RAG context may change daily)
- Create multiple independently cacheable sections within your conversation
- Ensure content more than 20 blocks before your final breakpoint can still be cached
Important limitation: If your prompt has more than 20 content blocks before your cache breakpoint, and you modify content earlier than those 20 blocks, you won’t get a cache hit unless you add additional explicit breakpoints closer to that content.
Cache limitations
The minimum cacheable prompt length varies by model: 1024 tokens for Claude Opus and Sonnet models, and 2048 tokens or more for Claude Haiku models.
Shorter prompts cannot be cached, even if marked with cache_control. Any request to cache fewer than this number of tokens will be processed without caching. To see whether a prompt was cached, check the response usage fields. For concurrent requests, note that a cache entry only becomes available after the first response begins. If you need cache hits for parallel requests, wait for the first response before sending subsequent requests. Currently, "ephemeral" is the only supported cache type, which by default has a 5-minute lifetime.
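One way to work within the concurrent-request limitation is to "warm" the cache with a single request first, then fan out the rest in parallel. The sketch below assumes the Python SDK; the document text and questions are placeholders.

```python
from concurrent.futures import ThreadPoolExecutor
import anthropic

client = anthropic.Anthropic()
long_document = "<insert a large shared document here>"  # placeholder shared context
questions = ["Summarize the introduction.", "List the key definitions.", "What are the main conclusions?"]

def ask(question: str):
    return client.messages.create(
        model="claude-sonnet-4-5",
        max_tokens=1024,
        system=[{"type": "text", "text": long_document,
                 "cache_control": {"type": "ephemeral"}}],
        messages=[{"role": "user", "content": question}],
    )

first = ask(questions[0])           # the cache entry is written once this response begins
with ThreadPoolExecutor() as pool:  # the remaining requests can now read from the cache
    rest = list(pool.map(ask, questions[1:]))
```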
Understanding cache breakpoint costs
Cache breakpoints themselves don’t add any cost. You are only charged for:
- Cache write tokens when new content is written to the cache
- Cache read tokens when cached content is reused
- Regular input tokens for any uncached content
Adding more cache_control breakpoints doesn’t increase your costs - you still pay the same amount based on what content is actually cached and read. The breakpoints simply give you control over what sections can be cached independently.
What can be cached
Most blocks in the request can be designated for caching with cache_control. This includes:
- Tools: tool definitions in the tools array
- System messages: content blocks in the system array
- Text messages: content blocks in the messages.content array, for both user and assistant turns
- Images and documents: content blocks in the messages.content array, in user turns
- Tool use and tool results: content blocks in the messages.content array, in both user and assistant turns

Each of these elements can be marked with cache_control to enable caching for that portion of the request.
What cannot be cached
While most request blocks can be cached, there are some exceptions:
- Thinking blocks cannot be cached directly with cache_control. However, thinking blocks CAN be cached alongside other content when they appear in previous assistant turns. When cached this way, they DO count as input tokens when read from cache.

What invalidates the cache
Modifications to cached content can invalidate some or all of the cache. As described in Structuring your prompt, the cache follows the hierarchy: tools → system → messages. Changes at each level invalidate that level and all subsequent levels. The following table shows which parts of the cache are invalidated by different types of changes. ✘ indicates that the cache is invalidated, while ✓ indicates that the cache remains valid.
| What changes | Tools cache | System cache | Messages cache | Impact |
| --- | --- | --- | --- | --- |
| Tool definitions | ✘ | ✘ | ✘ | Modifying tool definitions (names, descriptions, parameters) invalidates the entire cache |
| Web search toggle | ✓ | ✘ | ✘ | Enabling/disabling web search modifies the system prompt |
| Citations toggle | ✓ | ✘ | ✘ | Enabling/disabling citations modifies the system prompt |
| Tool choice | ✓ | ✓ | ✘ | Changes to tool_choice parameter only affect message blocks |
| Images | ✓ | ✓ | ✘ | Adding/removing images anywhere in the prompt affects message blocks |
| Thinking parameters | ✓ | ✓ | ✘ | Changes to extended thinking settings (enable/disable, budget) affect message blocks |
| Non-tool results passed to extended thinking requests | ✓ | ✓ | ✘ | When non-tool results are passed in requests while extended thinking is enabled, all previously-cached thinking blocks are stripped from context, and any messages in context that follow those thinking blocks are removed from the cache. For more details, see Caching with thinking blocks. |
Tracking cache performance
Monitor cache performance using these API response fields, within usage in the response (or the message_start event if streaming):
- cache_creation_input_tokens: Number of tokens written to the cache when creating a new entry.
- cache_read_input_tokens: Number of tokens retrieved from the cache for this request.
- input_tokens: Number of input tokens which were not read from or used to create a cache (i.e., tokens after the last cache breakpoint).

Understanding the token breakdown: the input_tokens field represents only the tokens that come after the last cache breakpoint in your request - not all the input tokens you sent. To calculate total input tokens:
```
total_input_tokens = cache_read_input_tokens + cache_creation_input_tokens + input_tokens
```
Spatial explanation:
- cache_read_input_tokens = tokens before the breakpoint that were already cached (reads)
- cache_creation_input_tokens = tokens before the breakpoint being cached now (writes)
- input_tokens = tokens after your last breakpoint (not eligible for caching)

Example: If you have a request with 100,000 tokens of cached content (read from cache), 0 tokens of new content being cached, and 50 tokens in your user message (after the cache breakpoint):
- cache_read_input_tokens: 100,000
- cache_creation_input_tokens: 0
- input_tokens: 50

This is important for understanding both costs and rate limits, as input_tokens will typically be much smaller than your total input when using caching effectively.
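As a convenience, the formula above can be wrapped in a small helper that works on the SDK's usage object (or any object exposing those three attributes):

```python
def total_input_tokens(usage) -> int:
    """All input tokens for a request, cached or not (the formula above)."""
    return (usage.cache_read_input_tokens
            + usage.cache_creation_input_tokens
            + usage.input_tokens)

class ExampleUsage:  # stand-in for response.usage from the example above
    cache_read_input_tokens = 100_000
    cache_creation_input_tokens = 0
    input_tokens = 50

assert total_input_tokens(ExampleUsage) == 100_050
```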
Best practices for effective caching
To optimize prompt caching performance:
- Cache stable, reusable content such as system instructions, background information, large contexts, and frequently used tool definitions.
- Place cached content at the beginning of your prompt for best performance.
- Use cache breakpoints strategically to separate sections that change at different frequencies.
- Regularly review your cache hit rate (via cache_read_input_tokens) and adjust your strategy as needed.
Optimizing for different use cases
Tailor your prompt caching strategy to your scenario:
- Conversational agents: reduce cost and latency for extended conversations, especially those with long instructions or uploaded documents.
- Coding assistants: improve autocomplete and codebase Q&A by keeping relevant sections or a summarized version of the codebase in the prompt.
- Large document processing: incorporate complete long-form material in your prompt without increasing response latency.
- Detailed instruction sets: share extensive lists of instructions, procedures, and examples to fine-tune Claude's responses.
- Agentic tool use: enhance performance for scenarios involving multiple tool calls and iterative changes, where each step typically requires a new API call.
- Long-form content Q&A: embed entire books, papers, documentation, or transcripts in the prompt and let users ask questions about them.
Troubleshooting common issues
If experiencing unexpected behavior:
- Ensure cached sections are identical and marked with cache_control in the same locations across calls
- Check that calls are made within the cache lifetime (5 minutes by default)
- Verify that tool_choice and image usage remain consistent between calls
- Validate that you are caching at least the minimum number of tokens
- The system automatically checks for cache hits at previous content block boundaries (up to roughly 20 blocks before your breakpoint). For prompts with more than 20 content blocks, you may need additional cache_control parameters earlier in the prompt to ensure all content can be cached
- Verify that the keys in your tool_use content blocks have stable ordering, as some languages (e.g. Swift, Go) randomize key order during JSON conversion, breaking caches (see the sketch below)

Changes to tool_choice or the presence/absence of images anywhere in the prompt will invalidate the cache, requiring a new cache entry to be created. For more details on cache invalidation, see What invalidates the cache.
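Key-order instability mainly affects languages whose JSON encoders reorder dictionary keys; Python dicts preserve insertion order, but the same canonicalization idea applies anywhere you build tool_use or tool_result blocks programmatically. A small sketch:

```python
import json

def canonical(block: dict) -> dict:
    """Round-trip through JSON with sorted keys so the block serializes identically on every call."""
    return json.loads(json.dumps(block, sort_keys=True, ensure_ascii=False))

tool_result_block = canonical({
    "type": "tool_result",
    "tool_use_id": "tool_1",
    "content": "Found 3 relevant documents",
})
```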
Caching with thinking blocks
When using extended thinking with prompt caching, thinking blocks have special behavior:

Automatic caching alongside other content: While thinking blocks cannot be explicitly marked with cache_control, they get cached as part of the request content when you make subsequent API calls with tool results. This commonly happens during tool use when you pass thinking blocks back to continue the conversation.

Input token counting: When thinking blocks are read from cache, they count as input tokens in your usage metrics. This is important for cost calculation and token budgeting.

Cache invalidation patterns:
- Changes to thinking parameters (enabling/disabling extended thinking or changing the budget) invalidate message cache breakpoints
- Passing non-tool-result user content causes previous thinking blocks to be stripped from context (see the example below)
- Thinking blocks cached alongside other content do not need their own cache_control markers

For more details on cache invalidation, see What invalidates the cache. Example with tool use:
```
Request 1: User: "What's the weather in Paris?"
Response: [thinking_block_1] + [tool_use block 1]

Request 2:
User: ["What's the weather in Paris?"],
Assistant: [thinking_block_1] + [tool_use block 1],
User: [tool_result_1, cache=True]
Response: [thinking_block_2] + [text block 2]
# Request 2 caches its request content (not the response)
# The cache includes: user message, thinking_block_1, tool_use block 1, and tool_result_1

Request 3:
User: ["What's the weather in Paris?"],
Assistant: [thinking_block_1] + [tool_use block 1],
User: [tool_result_1, cache=True],
Assistant: [thinking_block_2] + [text block 2],
User: [Text response, cache=True]
# Non-tool-result user block causes all thinking blocks to be ignored
# This request is processed as if thinking blocks were never present
```
When a non-tool-result user block is included, it designates a new assistant loop and all previous thinking blocks are removed from context. For more detailed information, see the extended thinking documentation .
If you find that 5 minutes is too short, Anthropic also offers a 1-hour cache duration at additional cost. To use the extended cache, include ttl in the cache_control definition like this:
```json
"cache_control": {
    "type": "ephemeral",
    "ttl": "5m" | "1h"
}
```
The response will include detailed cache information like the following:
```json
{
    "usage": {
        "input_tokens": ...,
        "cache_read_input_tokens": ...,
        "cache_creation_input_tokens": ...,
        "output_tokens": ...,
        "cache_creation": {
            "ephemeral_5m_input_tokens": 456,
            "ephemeral_1h_input_tokens": 100
        }
    }
}
```
Note that the current cache_creation_input_tokens field equals the sum of the values in the cache_creation object.
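With the Python SDK, requesting the 1-hour TTL is a matter of adding ttl to the cache_control block; the breakdown can then be read from the response. A minimal sketch: the field names follow the JSON shown above, and reference_material is a placeholder.

```python
import anthropic

client = anthropic.Anthropic()
reference_material = "<large, stable reference text>"  # placeholder

response = client.messages.create(
    model="claude-sonnet-4-5",
    max_tokens=1024,
    system=[
        {"type": "text", "text": reference_material,
         "cache_control": {"type": "ephemeral", "ttl": "1h"}},  # 1-hour cache entry
    ],
    messages=[{"role": "user", "content": "Summarize the key points."}],
)

# cache_creation splits the write between the 5-minute and 1-hour caches
print(response.usage.cache_creation)
```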
When to use the 1-hour cache
If you have prompts that are used at a regular cadence (i.e., system prompts that are used more frequently than every 5 minutes), continue to use the 5-minute cache, since this will continue to be refreshed at no additional charge. The 1-hour cache is best used in the following scenarios:
- Prompts that are reused on a cadence longer than 5 minutes but shorter than an hour (for example, long-running agentic workflows or batch jobs)
- Workflows where latency matters and follow-up requests may arrive more than 5 minutes after the original
The 5-minute and 1-hour cache behave the same with respect to latency. You will generally see improved time-to-first-token for long documents.
Mixing different TTLs
You can use both 1-hour and 5-minute cache controls in the same request, but with an important constraint: Cache entries with longer TTL must appear before shorter TTLs (i.e., a 1-hour cache entry must appear before any 5-minute cache entries). When mixing TTLs, we determine three billing locations in your prompt:
- A: The token count at the highest cache hit (or 0 if there is no hit).
- B: The token count at the highest 1-hour cache_control block after A (or equal to A if none exist).
- C: The token count at the last cache_control block.

If B and/or C are larger than A, they will necessarily be cache misses, because A is the highest cache hit.
You’ll be charged for:
- cache read tokens for A
- 1-hour cache write tokens for (B - A)
- 5-minute cache write tokens for (C - B)

[Diagram: input tokens of three example requests, each with different cache hits and cache misses and therefore a different calculated price.]
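The A/B/C rule can be written down directly. The sketch below assumes the multipliers from the pricing section (0.1x for reads, 2x for 1-hour writes, 1.25x for 5-minute writes); the token counts in the example call are made up.

```python
def mixed_ttl_input_cost(a: int, b: int, c: int, base_input_per_mtok: float) -> float:
    """Input-side cost of a prompt that mixes 1-hour and 5-minute cache entries.

    a: tokens up to the highest cache hit (billed as cache reads)
    b: tokens up to the highest 1-hour breakpoint after A (A..B is a 1-hour write)
    c: tokens up to the last breakpoint (B..C is a 5-minute write)
    """
    per_tok = base_input_per_mtok / 1_000_000
    return (a * per_tok * 0.10            # cache read tokens for A
            + (b - a) * per_tok * 2.00    # 1-hour cache write tokens for B - A
            + (c - b) * per_tok * 1.25)   # 5-minute cache write tokens for C - B

# e.g. 50K tokens already cached, a 1-hour breakpoint at 120K tokens, a 5-minute breakpoint at 150K
print(f"${mixed_ttl_input_cost(50_000, 120_000, 150_000, base_input_per_mtok=3.00):.2f}")
```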
To help you get started with prompt caching, we’ve prepared a prompt caching cookbook with detailed examples and best practices. Below, we’ve included several code snippets that showcase various prompt caching patterns. These examples demonstrate how to implement caching in different scenarios, helping you understand the practical applications of this feature:
Large context caching example
```bash
curl https://api.anthropic.com/v1/messages \
  --header "x-api-key: $ANTHROPIC_API_KEY" \
  --header "anthropic-version: 2023-06-01" \
  --header "content-type: application/json" \
  --data \
'{
  "model": "claude-sonnet-4-5",
  "max_tokens": 1024,
  "system": [
    {
      "type": "text",
      "text": "You are an AI assistant tasked with analyzing legal documents."
    },
    {
      "type": "text",
      "text": "Here is the full text of a complex legal agreement: [Insert full text of a 50-page legal agreement here]",
      "cache_control": {"type": "ephemeral"}
    }
  ],
  "messages": [
    {
      "role": "user",
      "content": "What are the key terms and conditions in this agreement?"
    }
  ]
}'
```
This example demonstrates basic prompt caching usage, caching the full text of the legal agreement as a prefix while keeping the user instruction uncached.

For the first request:
- input_tokens: Number of tokens in the user message only
- cache_creation_input_tokens: Number of tokens in the entire system message, including the legal document
- cache_read_input_tokens: 0 (no cache hit on first request)

For subsequent requests within the cache lifetime:
- input_tokens: Number of tokens in the user message only
- cache_creation_input_tokens: 0 (no new cache creation)
- cache_read_input_tokens: Number of tokens in the entire cached system message

Caching tool definitions
```bash
curl https://api.anthropic.com/v1/messages \
  --header "x-api-key: $ANTHROPIC_API_KEY" \
  --header "anthropic-version: 2023-06-01" \
  --header "content-type: application/json" \
  --data \
'{
  "model": "claude-sonnet-4-5",
  "max_tokens": 1024,
  "tools": [
    {
      "name": "get_weather",
      "description": "Get the current weather in a given location",
      "input_schema": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "The city and state, e.g. San Francisco, CA"
          },
          "unit": {
            "type": "string",
            "enum": ["celsius", "fahrenheit"],
            "description": "The unit of temperature, either celsius or fahrenheit"
          }
        },
        "required": ["location"]
      }
    },
    # many more tools
    {
      "name": "get_time",
      "description": "Get the current time in a given time zone",
      "input_schema": {
        "type": "object",
        "properties": {
          "timezone": {
            "type": "string",
            "description": "The IANA time zone name, e.g. America/Los_Angeles"
          }
        },
        "required": ["timezone"]
      },
      "cache_control": {"type": "ephemeral"}
    }
  ],
  "messages": [
    {
      "role": "user",
      "content": "What is the weather and time in New York?"
    }
  ]
}'
```
In this example, we demonstrate caching tool definitions. The cache_control parameter is placed on the final tool (get_time) to designate all of the tools as part of the static prefix. This means that all tool definitions, including get_weather and any other tools defined before get_time, will be cached as a single prefix. This approach is useful when you have a consistent set of tools that you want to reuse across multiple requests without re-processing them each time.

For the first request:
- input_tokens: Number of tokens in the user message
- cache_creation_input_tokens: Number of tokens in all tool definitions and system prompt
- cache_read_input_tokens: 0 (no cache hit on first request)

For subsequent requests within the cache lifetime:
- input_tokens: Number of tokens in the user message
- cache_creation_input_tokens: 0 (no new cache creation)
- cache_read_input_tokens: Number of tokens in all cached tool definitions and system prompt

Continuing a multi-turn conversation
```bash
curl https://api.anthropic.com/v1/messages \
  --header "x-api-key: $ANTHROPIC_API_KEY" \
  --header "anthropic-version: 2023-06-01" \
  --header "content-type: application/json" \
  --data \
'{
  "model": "claude-sonnet-4-5",
  "max_tokens": 1024,
  "system": [
    {
      "type": "text",
      "text": "...long system prompt",
      "cache_control": {"type": "ephemeral"}
    }
  ],
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "Hello, can you tell me more about the solar system?"
        }
      ]
    },
    {
      "role": "assistant",
      "content": "Certainly! The solar system is the collection of celestial bodies that orbit our Sun. It consists of eight planets, numerous moons, asteroids, comets, and other objects. The planets, in order from closest to farthest from the Sun, are: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, and Neptune. Each planet has its own unique characteristics and features. Is there a specific aspect of the solar system you would like to know more about?"
    },
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "Good to know."
        },
        {
          "type": "text",
          "text": "Tell me more about Mars.",
          "cache_control": {"type": "ephemeral"}
        }
      ]
    }
  ]
}'
```
In this example, we demonstrate how to use prompt caching in a multi-turn conversation. During each turn, we mark the final block of the final message with cache_control so the conversation can be incrementally cached. The system will automatically look up and use the longest previously cached sequence of blocks for follow-up messages. That is, blocks that were marked with cache_control in an earlier request are not marked again later, but they will still count as a cache hit (and also a cache refresh!) if they are reused within 5 minutes. In addition, note that the cache_control parameter is also placed on the system message. This ensures that if it is evicted from the cache (after not being used for more than 5 minutes), it will be added back to the cache on the next request. This approach is useful for maintaining context in ongoing conversations without repeatedly processing the same information.

When this is set up properly, you should see the following in the usage response of each request:
- input_tokens: Number of tokens in the new user message (will be minimal)
- cache_creation_input_tokens: Number of tokens in the new assistant and user turns
- cache_read_input_tokens: Number of tokens in the conversation up to the previous turn

Putting it all together: Multiple cache breakpoints
```bash
curl https://api.anthropic.com/v1/messages \
  --header "x-api-key: $ANTHROPIC_API_KEY" \
  --header "anthropic-version: 2023-06-01" \
  --header "content-type: application/json" \
  --data \
'{
  "model": "claude-sonnet-4-5",
  "max_tokens": 1024,
  "tools": [
    {
      "name": "search_documents",
      "description": "Search through the knowledge base",
      "input_schema": {
        "type": "object",
        "properties": {
          "query": {
            "type": "string",
            "description": "Search query"
          }
        },
        "required": ["query"]
      }
    },
    {
      "name": "get_document",
      "description": "Retrieve a specific document by ID",
      "input_schema": {
        "type": "object",
        "properties": {
          "doc_id": {
            "type": "string",
            "description": "Document ID"
          }
        },
        "required": ["doc_id"]
      },
      "cache_control": {"type": "ephemeral"}
    }
  ],
  "system": [
    {
      "type": "text",
      "text": "You are a helpful research assistant with access to a document knowledge base.\n\n# Instructions\n- Always search for relevant documents before answering\n- Provide citations for your sources\n- Be objective and accurate in your responses\n- If multiple documents contain relevant information, synthesize them\n- Acknowledge when information is not available in the knowledge base",
      "cache_control": {"type": "ephemeral"}
    },
    {
      "type": "text",
      "text": "# Knowledge Base Context\n\nHere are the relevant documents for this conversation:\n\n## Document 1: Solar System Overview\nThe solar system consists of the Sun and all objects that orbit it...\n\n## Document 2: Planetary Characteristics\nEach planet has unique features. Mercury is the smallest planet...\n\n## Document 3: Mars Exploration\nMars has been a target of exploration for decades...\n\n[Additional documents...]",
      "cache_control": {"type": "ephemeral"}
    }
  ],
  "messages": [
    {
      "role": "user",
      "content": "Can you search for information about Mars rovers?"
    },
    {
      "role": "assistant",
      "content": [
        {
          "type": "tool_use",
          "id": "tool_1",
          "name": "search_documents",
          "input": {"query": "Mars rovers"}
        }
      ]
    },
    {
      "role": "user",
      "content": [
        {
          "type": "tool_result",
          "tool_use_id": "tool_1",
          "content": "Found 3 relevant documents: Document 3 (Mars Exploration), Document 7 (Rover Technology), Document 9 (Mission History)"
        }
      ]
    },
    {
      "role": "assistant",
      "content": [
        {
          "type": "text",
          "text": "I found 3 relevant documents about Mars rovers. Let me get more details from the Mars Exploration document."
        }
      ]
    },
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "Yes, please tell me about the Perseverance rover specifically.",
          "cache_control": {"type": "ephemeral"}
        }
      ]
    }
  ]
}'
```
This comprehensive example demonstrates how to use all 4 available cache breakpoints to optimize different parts of your prompt:
1. Tools cache: the cache_control parameter on the last tool definition caches all tool definitions.
2. Reusable instructions cache: the static system instructions are cached separately so they can be reused even when the knowledge base changes.
3. RAG context cache: the knowledge base documents are cached independently, allowing them to be updated without invalidating the tools or instructions cache.
4. Conversation history cache: the final block of the last user message is marked with cache_control to enable incremental caching of the conversation as it progresses.

This approach provides maximum flexibility:
- If you only update the final user message, all cached segments are reused
- If you update the RAG documents but keep the same tools and instructions, the earlier segments are reused
- If you change the conversation but keep the same tools, instructions, and documents, the first three segments are reused
- Each cache breakpoint can be invalidated independently based on what changes in your application
For the first request:
- input_tokens: Tokens in the final user message
- cache_creation_input_tokens: Tokens in all cached segments (tools + instructions + RAG documents + conversation history)
- cache_read_input_tokens: 0 (no cache hits)

For subsequent requests with only a new user message:
- input_tokens: Tokens in the new user message only
- cache_creation_input_tokens: Any new tokens added to the conversation history
- cache_read_input_tokens: All previously cached tokens (tools + instructions + RAG documents + previous conversation)

This pattern is especially powerful for:
- RAG applications with large document contexts
- Agent systems that use multiple tools
- Long-running conversations that need to maintain context
- Applications that need to optimize different parts of the prompt independently
Do I need multiple cache breakpoints or is one at the end sufficient?
In most cases, a single cache breakpoint at the end of your static content is sufficient. The system automatically checks for cache hits at all previous content block boundaries (up to 20 blocks before your breakpoint) and uses the longest matching sequence of cached blocks.

You only need multiple breakpoints if:
- Sections of your prompt change at different frequencies and should be cacheable independently
- There are more than 20 content blocks between your breakpoint and content you want to reuse
Example: If you have system instructions (rarely change) and RAG context (changes daily), you might use two breakpoints to cache them separately.
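For instance, a minimal sketch of that two-breakpoint layout (todays_rag_context is a placeholder your application would refresh daily):

```python
import anthropic

client = anthropic.Anthropic()
todays_rag_context = "<today's retrieved documents>"  # placeholder, regenerated daily

response = client.messages.create(
    model="claude-sonnet-4-5",
    max_tokens=1024,
    system=[
        {   # breakpoint 1: instructions that rarely change
            "type": "text",
            "text": "You are a research assistant. Always cite your sources.",
            "cache_control": {"type": "ephemeral"},
        },
        {   # breakpoint 2: context that changes daily, cached independently
            "type": "text",
            "text": todays_rag_context,
            "cache_control": {"type": "ephemeral"},
        },
    ],
    messages=[{"role": "user", "content": "What changed in today's documents?"}],
)
```

When the daily context refresh invalidates the second segment, the instruction segment can still be read from cache.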
Do cache breakpoints add extra cost?
No, cache breakpoints themselves are free. You only pay for:
- Cache write tokens when new content is written to the cache
- Cache read tokens when cached content is used
- Regular input tokens for uncached content
The number of breakpoints doesn’t affect pricing - only the amount of content cached and read matters.
How do I calculate total input tokens from the usage fields?
The usage response includes three separate input token fields that together represent your total input:
```
total_input_tokens = cache_read_input_tokens + cache_creation_input_tokens + input_tokens
```
- cache_read_input_tokens: Tokens retrieved from cache (everything before the cache breakpoint that was already cached)
- cache_creation_input_tokens: New tokens being written to cache (at cache breakpoints)
- input_tokens: Tokens after the last cache breakpoint that aren't cached

Important: input_tokens does NOT represent all input tokens - only the portion after your last cache breakpoint. If you have cached content, input_tokens will typically be much smaller than your total input.

Example: With a 200K token document cached and a 50 token user question:
- cache_read_input_tokens: 200,000
- cache_creation_input_tokens: 0
- input_tokens: 50

This breakdown is critical for understanding both your costs and rate limit usage. See Tracking cache performance for more details.
What is the cache lifetime?
The cache’s default minimum lifetime (TTL) is 5 minutes. This lifetime is refreshed each time the cached content is used. If you find that 5 minutes is too short, Anthropic also offers a 1-hour cache TTL.
How many cache breakpoints can I use?
You can define up to 4 cache breakpoints (using cache_control parameters) in your prompt.
Is prompt caching available for all models?
No, prompt caching is currently only available for Claude Opus 4.1, Claude Opus 4, Claude Sonnet 4.5, Claude Sonnet 4, Claude Sonnet 3.7 (deprecated), Claude Haiku 4.5, Claude Haiku 3.5, Claude Haiku 3, and Claude Opus 3 (deprecated).
How does prompt caching work with extended thinking?
Cached system prompts and tools will be reused when thinking parameters change. However, thinking changes (enabling/disabling extended thinking or changing the budget) will invalidate previously cached prompt prefixes with messages content. For more details on cache invalidation, see What invalidates the cache. For more on extended thinking, including its interaction with tool use and prompt caching, see the extended thinking documentation.
How do I enable prompt caching?
To enable prompt caching, include at least one cache_control breakpoint in your API request.
Can I use prompt caching with other API features?
Yes, prompt caching can be used alongside other API features like tool use and vision capabilities. However, changing whether there are images in a prompt or modifying tool use settings will break the cache. For more details on cache invalidation, see What invalidates the cache.
How does prompt caching affect pricing?
Prompt caching introduces a new pricing structure: 5-minute cache writes cost 25% more than base input tokens, 1-hour cache writes cost twice the base input token price, and cache hits cost only 10% of the base input token price.
Can I manually clear the cache?
Currently, there’s no way to manually clear the cache. Cached prefixes automatically expire after a minimum of 5 minutes of inactivity.
How can I track the effectiveness of my caching strategy?
You can monitor cache performance using the cache_creation_input_tokens and cache_read_input_tokens fields in the API response.
What can break the cache?
See What invalidates the cache for more details on cache invalidation, including a list of changes that require creating a new cache entry.
How does prompt caching handle privacy and data separation?
Prompt caching is designed with strong privacy and data separation measures:
- Cache keys are generated from a cryptographic hash of the prompt up to the cache control point, so only requests with identical prompts can access a specific cache entry.
- Caches are organization-specific and are never shared across organizations, even for identical prompts.
- You can safely use cache_control anywhere in your prompts. For cost efficiency, it's better to exclude highly variable parts (e.g., a user's arbitrary input) from caching.

These measures ensure that prompt caching maintains data privacy and security while offering performance benefits.
Can I use prompt caching with the Batches API?
Yes, it is possible to use prompt caching with your Batches API requests. However, because asynchronous batch requests can be processed concurrently and in any order, cache hits are provided on a best-effort basis. The 1-hour cache can help improve your cache hits. The most cost-effective way of using it is the following:
- Gather a set of message requests that share a common prefix.
- Send a batch containing just a single request with that shared prefix and a 1-hour cache block, so the prefix is written to the 1-hour cache.
- Once that batch completes, submit the rest of the requests; they can then read the shared prefix from the cache. (See the sketch below.)

This is typically better than using the 5-minute cache simply because it's common for batch requests to take between 5 minutes and 1 hour to complete. We're considering ways to improve these cache hit rates and to make this process more straightforward.
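A sketch of that workflow with the Python SDK's Message Batches API. The request shapes are plain dictionaries here and polling is elided; shared_context and questions are placeholders, and exact parameter types may differ by SDK version.

```python
import anthropic

client = anthropic.Anthropic()
shared_context = "<large shared prefix used by every request>"  # placeholder
questions = ["Question 1...", "Question 2...", "Question 3..."]  # placeholders

def params(question: str) -> dict:
    return {
        "model": "claude-sonnet-4-5",
        "max_tokens": 1024,
        "system": [{"type": "text", "text": shared_context,
                    "cache_control": {"type": "ephemeral", "ttl": "1h"}}],
        "messages": [{"role": "user", "content": question}],
    }

# 1. Warm the 1-hour cache with a single-request batch and wait for it to finish.
warm = client.messages.batches.create(
    requests=[{"custom_id": "warm-cache", "params": params(questions[0])}]
)
# ... poll client.messages.batches.retrieve(warm.id) until processing has ended ...

# 2. Submit the remaining requests; they can now hit the 1-hour cached prefix.
client.messages.batches.create(
    requests=[{"custom_id": f"q-{i}", "params": params(q)}
              for i, q in enumerate(questions[1:], start=1)]
)
```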
Why am I seeing the error `AttributeError: 'Beta' object has no attribute 'prompt_caching'` in Python?
This error typically appears when you have upgraded your SDK or you are using outdated code examples. Prompt caching is now generally available, so you no longer need the beta prefix. Instead of:
```python
client.beta.prompt_caching.messages.create(...)
```
Simply use:
```python
client.messages.create(...)
```
Why am I seeing 'TypeError: Cannot read properties of undefined (reading 'messages')'?
This error typically appears when you have upgraded your SDK or you are using outdated code examples. Prompt caching is now generally available, so you no longer need the beta prefix. Instead of:
```typescript
client.beta.promptCaching.messages.create(...)
```
Simply use:
```typescript
client.messages.create(...)
```