Deployed params
interface WatsonxDeployedParams
Runtime values for attributes previously made configurable on this Runnable, or sub-Runnables.
Penalizes repeated tokens according to frequency.
The id_or_name can be either the deployment_id that identifies the deployment or a serving_name that
allows a predefined URL to be used to post a prediction. The deployment must reference a prompt template with
input_mode chat.
The WML instance that is associated with the deployment will be used for limits and billing (if on a paid plan).
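Since id_or_name accepts either identifier, the scoring URL is formed the same way in both cases. The sketch below illustrates this; the exact path shape is an assumption based on watsonx.ai REST conventions, not a statement about the real client's internals.

```typescript
// Hedged sketch: building a deployment chat endpoint URL from id_or_name.
// The path shape is assumed from watsonx.ai REST conventions.
function deploymentChatUrl(
  serviceUrl: string,
  idOrName: string, // either a deployment_id or a serving_name
  version: string,
): string {
  return `${serviceUrl}/ml/v4/deployments/${idOrName}/text/chat?version=${version}`;
}

const url = deploymentChatUrl(
  "https://us-south.ml.cloud.ibm.com",
  "my-serving-name",
  "2024-05-31",
);
```

Because a serving_name is just substituted into the same path segment as a deployment_id, it gives the deployment a predictable, predefined URL.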
Whether to include reasoning_content in the response. Default is true.
Dictionary used to adjust the probability of specific tokens being generated.
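Conceptually, each entry in that dictionary is added to the raw logit of the corresponding token id before the sampling distribution is computed. This is a simplified illustration of that mechanism, not the model's actual implementation; the token ids and the usual [-100, 100] bias range are assumptions.

```typescript
// Hedged sketch of how a logit-bias map is typically applied:
// the bias for a token id is added to that token's raw logit.
function applyLogitBias(
  logits: Record<number, number>, // tokenId -> raw logit
  bias: Record<number, number>,   // tokenId -> bias, roughly in [-100, 100]
): Record<number, number> {
  const out: Record<number, number> = { ...logits };
  for (const [tokenId, b] of Object.entries(bias)) {
    const id = Number(tokenId);
    if (id in out) out[id] += b;
  }
  return out;
}

// Token 50256 is effectively suppressed; token 11 is boosted.
const biased = applyLogitBias({ 11: 0.2, 50256: 1.5 }, { 11: 5, 50256: -100 });
```

A large negative bias acts as a near-ban on a token, while a large positive bias makes it strongly preferred.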
Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token in the content of the message.
Describes the format of structured outputs. This should be provided if the output is expected to be structured.
The maximum number of tokens that can be generated in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length. Set to 0 for the model's configured max generated tokens.
The maximum number of concurrent calls that can be made.
Defaults to Infinity, which means no limit.
The maximum number of retries that can be made for a single call, with an exponential backoff between each attempt. Defaults to 6.
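With exponential backoff, the wait before each retry roughly doubles. The schedule below is a sketch of that idea only; the base delay and cap are illustrative assumptions, not the client's actual values.

```typescript
// Hedged sketch of an exponential backoff schedule for maxRetries.
// baseMs and capMs are assumed values for illustration.
function backoffDelaysMs(
  maxRetries: number,
  baseMs = 1000,
  capMs = 32000,
): number[] {
  const delays: number[] = [];
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    // Delay doubles each attempt, up to the cap.
    delays.push(Math.min(baseMs * 2 ** attempt, capMs));
  }
  return delays;
}

const delays = backoffDelaysMs(6); // [1000, 2000, 4000, 8000, 16000, 32000]
```

With the default of 6 retries, a persistently failing call is abandoned after roughly a minute of cumulative waiting under these assumed delays.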
Number of completions to generate for each prompt.
Penalizes repeated tokens.
A lower reasoning effort can result in faster responses, fewer tokens used, and shorter reasoning_content in the responses. Supported values are low, medium, and high.
Maximum number of times a call can recurse. If not provided, defaults to 25.
Penalizes repeated tokens according to frequency. Ranges from 1.0 to 2.0. Defaults to 1.0.
The tool response format.
If "content" then the output of the tool is interpreted as the contents of a ToolMessage. If "content_and_artifact" then the output is expected to be a two-tuple corresponding to the (content, artifact) of a ToolMessage.
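The two formats can be sketched as a small dispatch on the tool's raw output. ToolMessageLike here is a simplified stand-in for LangChain's ToolMessage, not its real type.

```typescript
// Hedged sketch of the two tool response formats described above.
// ToolMessageLike is a simplified stand-in for LangChain's ToolMessage.
type ToolMessageLike = { content: string; artifact?: unknown };

function toToolMessage(
  responseFormat: "content" | "content_and_artifact",
  toolOutput: unknown,
): ToolMessageLike {
  if (responseFormat === "content_and_artifact") {
    // Output is expected to be a two-tuple: [content, artifact].
    const [content, artifact] = toolOutput as [string, unknown];
    return { content, artifact };
  }
  // Output itself is interpreted as the message content.
  return { content: String(toolOutput) };
}

const msg = toToolMessage("content_and_artifact", ["3 results", [1, 2, 3]]);
```

The "content_and_artifact" form lets a tool hand back a human-readable summary for the model alongside a raw artifact (rows, bytes, a dataframe) for downstream code.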
Unique identifier for the tracer run for this call. If not provided, a new UUID will be generated.
Name for the tracer run for this call. Defaults to the name of the class.
Abort signal for this call. If provided, the call will be aborted when the signal is aborted.
Stop tokens to use for this call. If not provided, the default stop tokens for the model will be used.
Whether to stream the results or not. Defaults to false.
Amount of randomness injected into the response. Ranges from 0 to 1 (0 is not included). Use a temperature closer to 0 for analytical or multiple-choice tasks, and closer to 1 for creative and generative tasks. Defaults to 0.95.
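The effect of temperature can be illustrated with a plain softmax: logits are divided by the temperature before normalizing, so values near 0 sharpen the distribution toward the top token, while values near 1 leave it flatter. This is a generic illustration of temperature scaling, not the model's exact sampler.

```typescript
// Hedged illustration of temperature scaling on a softmax distribution.
function softmaxWithTemperature(logits: number[], temperature: number): number[] {
  const scaled = logits.map((l) => l / temperature);
  const max = Math.max(...scaled);           // subtract max for numerical stability
  const exps = scaled.map((s) => Math.exp(s - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

const sharp = softmaxWithTemperature([2, 1, 0], 0.2); // heavily favors the top logit
const soft = softmaxWithTemperature([2, 1, 0], 1.0);  // flatter distribution
```

With the same logits, the low-temperature distribution concentrates almost all probability mass on the top token, which is why near-0 temperatures suit deterministic, analytical tasks.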
Time limit in milliseconds - if not completed within this time, generation will stop. The text generated so far will be returned along with the `TIME_LIMIT` stop reason. Depending on the user's plan, and on the model being used, there may be an enforced maximum time limit.
Timeout for this call in milliseconds.
Specifies how the chat model should use tools.
An integer between 0 and 5 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true if this parameter is used.
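Post-processing the result amounts to ranking candidate tokens at a position by log probability and keeping the top k. The candidate shape below is a simplified stand-in for the response structure, and the tokens are made up for illustration.

```typescript
// Hedged sketch of top_logprobs post-processing at one token position.
function topKLogprobs(
  candidates: Array<{ token: string; logprob: number }>,
  k: number,
): Array<{ token: string; logprob: number }> {
  // Sort a copy by descending log probability and keep the k most likely.
  return [...candidates].sort((a, b) => b.logprob - a.logprob).slice(0, k);
}

const top2 = topKLogprobs(
  [
    { token: "cat", logprob: -0.2 },
    { token: "dog", logprob: -1.6 },
    { token: "car", logprob: -3.1 },
  ],
  2,
);
```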
Total probability mass of tokens to consider at each step. Ranges from 0 to 1.0. Defaults to 0.8.
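Nucleus (top-p) filtering keeps the smallest set of tokens, taken in descending probability order, whose cumulative probability reaches the threshold. The sketch below illustrates the idea on a toy distribution; it is not the model's actual sampler.

```typescript
// Hedged sketch of nucleus (top-p) filtering on a toy distribution.
function topPFilter(probs: number[], topP: number): number[] {
  const indexed = probs
    .map((p, i) => ({ p, i }))
    .sort((a, b) => b.p - a.p); // descending by probability
  const kept: number[] = [];
  let cumulative = 0;
  for (const { p, i } of indexed) {
    kept.push(i);
    cumulative += p;
    if (cumulative >= topP) break; // nucleus reached
  }
  return kept; // indices of tokens that survive filtering
}

const kept = topPFilter([0.5, 0.3, 0.15, 0.05], 0.75);
```

Here the two most likely tokens already cover the threshold, so the low-probability tail is cut off; sampling then happens only among the survivors.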