interface ChatAnthropicCallOptions extends BaseChatModelCallOptions, Pick<AnthropicInput, "streamUsage">

Optional array of beta features to enable for the Anthropic API. Beta features are experimental capabilities that may change or be removed. See https://docs.anthropic.com/en/api/versioning for available beta features.
Cache control configuration for prompt caching. When provided, applies cache_control to the last content block of the last message, enabling Anthropic's prompt caching feature.
This is the recommended way to enable prompt caching as it applies cache_control at the final message formatting layer, avoiding issues with message content block manipulation during earlier processing stages.
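A minimal sketch of turning this on per call, assuming the option is exposed as cacheControl (the exact key may differ across @langchain/anthropic versions) and using Anthropic's ephemeral cache type:

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({ model: "claude-3-5-sonnet-latest" });

// A long, stable prefix benefits most from prompt caching.
const systemPrompt = "You are a support agent. <...long policy text...>";

// cacheControl (assumed name) is applied to the last content block of
// the last message by the final message-formatting layer.
const response = await model.invoke(
  [
    ["system", systemPrompt],
    ["human", "What is the refund policy?"],
  ],
  { cacheControl: { type: "ephemeral" } }
);
```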
Callbacks for this call and any sub-calls (e.g. a Chain calling an LLM). Tags are passed to all callbacks; metadata is passed to handle*Start callbacks.
Runtime values for attributes previously made configurable on this Runnable, or sub-Runnables.
Container ID for file persistence across turns with code execution. Used with the code_execution_20250825 tool.
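A hedged sketch of carrying one container across turns; the code_execution tool pass-through, the containerId option name, and the response-metadata path used to read the id back are all assumptions to verify against your package version:

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({ model: "claude-sonnet-4-5" }).bindTools([
  // Anthropic server-side tool definition, passed through as-is (assumed).
  { type: "code_execution_20250825", name: "code_execution" },
]);

const first = await model.invoke("Create a CSV file with sample data.");

// Hypothetical: read the container id from provider response metadata.
const containerId = (first.response_metadata as any)?.container?.id;

// Reusing the id lets the next turn see the same filesystem.
const second = await model.invoke("Now plot the CSV you created.", {
  containerId,
});
```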
Custom headers to pass to the Anthropic API when making a request.
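For example, to attach a tracing header to a single request (assuming the call option is named customHeaders):

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({ model: "claude-3-5-sonnet-latest" });

// Headers set here apply only to this request.
const res = await model.invoke("Hello!", {
  customHeaders: { "X-Request-Id": "abc-123" },
});
```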
Specifies the geographic region for inference processing. US-only inference is available at 1.1x pricing for models released after February 1, 2026.
Describes the format of structured outputs. This should be provided if the output is expected to be structured.
Maximum number of parallel calls to make.
Array of MCP server URLs to use for the request.
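A sketch of pointing a call at a remote MCP server; the entry shape follows Anthropic's MCP connector (type/url/name), and the beta flag name is an assumption to check against the Anthropic docs:

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({ model: "claude-3-5-sonnet-latest" });

const res = await model.invoke("Search my knowledge base for 'onboarding'.", {
  mcpServers: [
    { type: "url", url: "https://mcp.example.com/sse", name: "kb" },
  ],
  // The MCP connector may require a beta flag; verify the exact name.
  betas: ["mcp-client-2025-04-04"],
});
```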
Configuration options for the model's output, such as effort level and output format.
Version of AIMessage output format to store in message content.
AIMessage.contentBlocks will lazily parse the message's content into a
standard format. This flag can be used to additionally store the standard format
as the message content, e.g., for serialization purposes.
You can also set LC_OUTPUT_VERSION as an environment variable to "v1" to
enable this by default.
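For example (assuming the call option is named outputVersion):

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({ model: "claude-3-5-sonnet-latest" });

// Store the standardized block format directly on the message content,
// e.g. so serialized messages already carry the "v1" shape.
const msg = await model.invoke("Hi!", { outputVersion: "v1" });
console.log(msg.content);

// Equivalent process-wide default:
// process.env.LC_OUTPUT_VERSION = "v1";
```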
Maximum number of times a call can recurse. If not provided, defaults to 25.
Unique identifier for the tracer run for this call. If not provided, a new UUID will be generated.
Name for the tracer run for this call. Defaults to the name of the class.
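Both tracer-run options can be set per call, e.g. to make a run easy to find later in a tracing backend:

```typescript
import { ChatAnthropic } from "@langchain/anthropic";
import { v4 as uuidv4 } from "uuid";

const model = new ChatAnthropic({ model: "claude-3-5-sonnet-latest" });

const runId = uuidv4(); // keep this to look the run up later
const res = await model.invoke("Hello!", {
  runId,
  runName: "greeting-call", // defaults to the class name if omitted
});
```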
Abort signal for this call. If provided, the call will be aborted when the signal is aborted.
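For example, cancelling a request after five seconds with an AbortController:

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({ model: "claude-3-5-sonnet-latest" });

const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), 5000);

try {
  const res = await model.invoke("Write a long essay.", {
    signal: controller.signal,
  });
  console.log(res.content);
} finally {
  clearTimeout(timer);
}
```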
Stop tokens to use for this call. If not provided, the default stop tokens for the model will be used.
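For example, generation halts as soon as any stop sequence appears in the output:

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({ model: "claude-3-5-sonnet-latest" });

// The model stops before emitting "4".
const res = await model.invoke("Count upward: 1, 2, 3, ...", {
  stop: ["4"],
});
```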
Whether or not to include token usage data in streamed chunks.
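When enabled, usage metadata appears on streamed chunks (typically the final one):

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({ model: "claude-3-5-sonnet-latest" });

const stream = await model.stream("Tell me a joke.", { streamUsage: true });
for await (const chunk of stream) {
  if (chunk.usage_metadata) {
    // { input_tokens, output_tokens, total_tokens }
    console.log(chunk.usage_metadata);
  }
}
```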
Timeout for this call in milliseconds.
Whether or not to specify which tool the model should use.
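For example, forcing a specific tool via bindTools; tool_choice also accepts values such as "auto", "any", or "none":

```typescript
import { ChatAnthropic } from "@langchain/anthropic";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const getWeather = tool(async ({ city }) => `Sunny in ${city}`, {
  name: "get_weather",
  description: "Get the weather for a city",
  schema: z.object({ city: z.string() }),
});

const model = new ChatAnthropic({ model: "claude-3-5-sonnet-latest" });

// Force the model to call get_weather rather than answering directly.
const bound = model.bindTools([getWeather], { tool_choice: "get_weather" });

const res = await bound.invoke("What's the weather in Paris?");
console.log(res.tool_calls);
```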