class ChatWatsonx extends BaseChatModel<CallOptions>

The async caller should be used by subclasses to make any async calls, which will thus benefit from the concurrency and retry logic.
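A minimal instantiation sketch, assuming the ChatWatsonx export from @langchain/community/chat_models/ibm; the service URL, project id, and model id are illustrative placeholders:

```typescript
import { ChatWatsonx } from "@langchain/community/chat_models/ibm";

// Placeholder endpoint, project, and model; substitute your own values.
const model = new ChatWatsonx({
  version: "2024-05-31",
  serviceUrl: "https://us-south.ml.cloud.ibm.com",
  projectId: "<PROJECT_ID>",
  model: "ibm/granite-3-8b-instruct",
});

const response = await model.invoke("Hello!");
console.log(response.content);
```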
Penalizes repeated tokens according to frequency.
The id_or_name can be either the deployment_id that identifies the deployment or a serving_name that allows a predefined URL to be used to post a prediction. The deployment must reference a prompt template with input_mode chat. The WML instance associated with the deployment will be used for limits and billing (if on a paid plan).
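A sketch of the deployment path, assuming the field is exposed as idOrName on the constructor; <DEPLOYMENT_ID> is a placeholder:

```typescript
import { ChatWatsonx } from "@langchain/community/chat_models/ibm";

// Target a deployment rather than a project or space. The deployment's
// prompt template must use input_mode chat.
const deployed = new ChatWatsonx({
  version: "2024-05-31",
  serviceUrl: "https://us-south.ml.cloud.ibm.com",
  idOrName: "<DEPLOYMENT_ID>",
});
```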
Whether to include reasoning_content in the response. Defaults to true.
Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the content of the message.
The maximum number of tokens that can be generated in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length. Set to 0 for the model's configured max generated tokens.
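A hedged sketch of these options together, assuming logprobs, topLogprobs, and maxTokens are accepted as camelCase constructor fields mirroring the properties documented here:

```typescript
import { ChatWatsonx } from "@langchain/community/chat_models/ibm";

// logprobs must be true for topLogprobs to take effect (see topLogprobs below).
const withLogprobs = new ChatWatsonx({
  version: "2024-05-31",
  serviceUrl: "https://us-south.ml.cloud.ibm.com",
  projectId: "<PROJECT_ID>",
  model: "ibm/granite-3-8b-instruct",
  maxTokens: 512,  // input + output must fit within the model's context length
  logprobs: true,  // attach per-token log probabilities to the message content
  topLogprobs: 3,  // 0-5 most likely alternatives per token position
});
```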
The maximum number of concurrent calls that can be made. Defaults to Infinity, which means no limit.
The maximum number of retries that can be made for a single call, with an exponential backoff between each attempt. Defaults to 6.
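A sketch of both caller options; the values are illustrative:

```typescript
import { ChatWatsonx } from "@langchain/community/chat_models/ibm";

const throttled = new ChatWatsonx({
  version: "2024-05-31",
  serviceUrl: "https://us-south.ml.cloud.ibm.com",
  projectId: "<PROJECT_ID>",
  model: "ibm/granite-3-8b-instruct",
  maxConcurrency: 2, // at most two requests in flight at a time
  maxRetries: 3,     // retry with exponential backoff, instead of the default 6
});
```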
Model name to use. Available options are: qwen-turbo, qwen-plus, qwen-max, or other compatible models.
Number of completions to generate for each prompt.
Penalizes repeated tokens.
The project that contains the resource. Either space_id or project_id has to be given.
A lower reasoning effort can result in faster responses, fewer tokens used, and shorter reasoning_content in the responses. Supported values are low, medium, and high.
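A sketch, assuming reasoningEffort is accepted as a constructor field and that the chosen model produces reasoning_content; the model id is a placeholder:

```typescript
import { ChatWatsonx } from "@langchain/community/chat_models/ibm";

const lowEffort = new ChatWatsonx({
  version: "2024-05-31",
  serviceUrl: "https://us-south.ml.cloud.ibm.com",
  projectId: "<PROJECT_ID>",
  model: "<REASONING_MODEL_ID>", // placeholder for a reasoning-capable model
  reasoningEffort: "low",        // "low" | "medium" | "high"
});
```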
The tool response format. If "content", the output of the tool is interpreted as the contents of a ToolMessage. If "content_and_artifact", the output is expected to be a two-tuple corresponding to the (content, artifact) of a ToolMessage.
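A sketch using LangChain's tool helper to illustrate the content_and_artifact format; the tool name and data are made up:

```typescript
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// With responseFormat "content_and_artifact", the function returns a
// [content, artifact] two-tuple that maps onto the resulting ToolMessage.
const search = tool(
  async ({ query }) => {
    const rows = [{ id: 1, title: `Result for ${query}` }]; // placeholder data
    return [`Found ${rows.length} result(s).`, rows];
  },
  {
    name: "search",
    description: "Search a toy index and return matching rows.",
    schema: z.object({ query: z.string() }),
    responseFormat: "content_and_artifact",
  }
);
```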
The space that contains the resource. Either space_id or project_id has to be given.
Whether to stream the results or not. Defaults to false.
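A streaming sketch; stream() yields message chunks incrementally (placeholders as above):

```typescript
import { ChatWatsonx } from "@langchain/community/chat_models/ibm";

const model = new ChatWatsonx({
  version: "2024-05-31",
  serviceUrl: "https://us-south.ml.cloud.ibm.com",
  projectId: "<PROJECT_ID>",
  model: "ibm/granite-3-8b-instruct",
});

// Iterate over chunks as they arrive instead of waiting for the full reply.
const stream = await model.stream("Why is the sky blue?");
for await (const chunk of stream) {
  process.stdout.write(String(chunk.content));
}
```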
Amount of randomness injected into the response. Ranges from 0 to 1 (0 is not included). Use a temperature closer to 0 for analytical / multiple-choice tasks, and a temperature closer to 1 for creative and generative tasks. Defaults to 0.95.
Time limit in milliseconds - if generation is not completed within this time, it will stop. The text generated so far will be returned along with the `TIME_LIMIT` stop reason. Depending on the user's plan, and on the model being used, there may be an enforced maximum time limit.
An integer between 0 and 5 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true if this parameter is used.
Total probability mass of tokens to consider at each step. Range from 0 to 1.0. Defaults to 0.8.
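A sketch combining the sampling and time-limit options, assuming camelCase constructor fields temperature, topP, and timeLimit; the values are illustrative:

```typescript
import { ChatWatsonx } from "@langchain/community/chat_models/ibm";

const creative = new ChatWatsonx({
  version: "2024-05-31",
  serviceUrl: "https://us-south.ml.cloud.ibm.com",
  projectId: "<PROJECT_ID>",
  model: "ibm/granite-3-8b-instruct",
  temperature: 0.9,  // closer to 1 for creative, generative output
  topP: 0.8,         // nucleus-sampling probability mass per step
  timeLimit: 10_000, // stop after 10 s with the TIME_LIMIT stop reason
});
```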
Whether to print out response text.