interface ChatBedrockConverseCallOptions extends BaseChatModelCallOptions, Pick<ChatBedrockConverseInput, "additionalModelRequestFields" | "streamUsage" | "guardrailConfig" | "performanceConfig" | "serviceTier">

additionalModelRequestFields: Additional inference parameters that the model supports, beyond the base set of inference parameters that the Converse API supports in the inferenceConfig field. For more information, see the model parameters documentation for your model.
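A minimal sketch of what passing such a parameter might look like. The field name top_k is an Anthropic-style sampling parameter used purely for illustration; consult your model's documentation for its actual parameter names:

```typescript
// Hypothetical call options passing a model-specific inference parameter
// that the Converse API's inferenceConfig field does not cover.
const callOptions = {
  additionalModelRequestFields: {
    top_k: 200, // model-specific parameter; placeholder for illustration
  },
};
```

In real use this object would typically be passed as the second argument to a method like invoke or stream.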
callbacks: Callbacks for this call and any sub-calls (e.g. a Chain calling an LLM). Tags are passed to all callbacks; metadata is passed to handle*Start callbacks.
configurable: Runtime values for attributes previously made configurable on this Runnable or its sub-Runnables.
guardrailConfig: Configuration information for a guardrail that you want to use in the request.
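A minimal sketch of a guardrail configuration, assuming the Bedrock Converse API's guardrail fields (guardrailIdentifier, guardrailVersion, trace); the identifier and version values are placeholders for a guardrail created in your AWS account:

```typescript
// Hypothetical guardrail configuration; values are placeholders.
const guardrailConfig = {
  guardrailIdentifier: "my-guardrail-id", // placeholder identifier
  guardrailVersion: "1",                  // placeholder version
  trace: "enabled",                       // include guardrail trace output
};

const callOptions = { guardrailConfig };
```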
ls_structured_output_format: Describes the format of structured outputs. This should be provided if an output is considered to be structured.
maxConcurrency: Maximum number of parallel calls to make.
metadata: Metadata for this call and any sub-calls (e.g. a Chain calling an LLM). Keys should be strings; values should be JSON-serializable.
outputVersion: Version of the AIMessage output format to store in message content. AIMessage.contentBlocks lazily parses the contents of content into a standard format. This flag can be used to additionally store the standard format as the message content, e.g., for serialization purposes. You can also set the LC_OUTPUT_VERSION environment variable to "v1" to enable this by default.
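Both mechanisms described above can be sketched as follows (option and variable names are taken from the description; this is illustrative, not a complete invocation):

```typescript
// Per-call: request the standard "v1" format as message content.
const callOptions = { outputVersion: "v1" };

// Process-wide default via the environment variable mentioned above.
process.env.LC_OUTPUT_VERSION = "v1";
```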
performanceConfig: Model performance configuration. See https://docs.aws.amazon.com/bedrock/latest/userguide/latency-optimized-inference.html
recursionLimit: Maximum number of times a call can recurse. If not provided, defaults to 25.
requestMetadata: Key-value pairs that you can use to filter invocation logs.
runId: Unique identifier for the tracer run for this call. If not provided, a new UUID will be generated.
runName: Name for the tracer run for this call. Defaults to the name of the class.
serviceTier: Service tier for model invocation. Specifies the processing tier used for serving the request. Supported values are 'priority', 'default', 'flex', and 'reserved'. If not provided, AWS uses the default tier. For more information, see: https://docs.aws.amazon.com/bedrock/latest/userguide/service-tiers-inference.html
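Selecting one of the documented tiers can be sketched as follows ('flex' is chosen arbitrarily for illustration):

```typescript
// The supported tier values, per the description above.
type ServiceTier = "priority" | "default" | "flex" | "reserved";

// Hypothetical call options requesting the flex tier.
const callOptions: { serviceTier: ServiceTier } = { serviceTier: "flex" };
```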
signal: Abort signal for this call. If provided, the call will be aborted when the signal is aborted.
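A sketch of wiring up an abort signal using the standard AbortController API (the option name `signal` comes from the description above):

```typescript
// An AbortController whose signal is passed as the `signal` call option.
const controller = new AbortController();
const callOptions = { signal: controller.signal };

// Later, e.g. on user cancellation, abort the in-flight call:
controller.abort();
```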
stop: A list of stop sequences. A stop sequence is a sequence of characters that causes the model to stop generating the response.
streamUsage: Whether or not to include usage data, like token counts, in the streamed response chunks. Passing this as a call option takes precedence over the class-level setting.
tags: Tags for this call and any sub-calls (e.g. a Chain calling an LLM). You can use these to filter calls.
timeout: Timeout for this call in milliseconds.
toolChoice: Tool choice for the model. If passing a string, it must be "any", "auto", or the name of the tool to use; alternatively, pass a BedrockToolChoice object. If "any" is passed, the model must request at least one tool. If "auto" is passed, the model decides whether to call a tool or to generate text instead. If a tool name is passed, the model is forced to call that specific tool.
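The three string forms described above can be sketched as follows (the tool name "get_weather" is a hypothetical tool that would have been registered via bindTools):

```typescript
// "any": the model must request at least one tool.
const mustUseSomeTool = { toolChoice: "any" };

// "auto": the model decides between calling a tool and generating text.
const modelDecides = { toolChoice: "auto" };

// A tool name: forces the model to call that specific tool.
const forceSpecificTool = { toolChoice: "get_weather" };
```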