interface TogetherAICallOptions extends BaseLLMCallOptions, Pick<TogetherAIInputs, "modelName" | "model" | "temperature" | "topP" | "topK" | "repetitionPenalty" | "logprobs" | "safetyModel" | "maxTokens" | "stop">
Runtime values for attributes previously made configurable on this Runnable, or sub-Runnables.
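A minimal sketch of how these call options are used, assuming the TogetherAI LLM from @langchain/community; the model identifier and prompt are chosen purely for illustration. Constructor arguments set defaults, while TogetherAICallOptions are per-call overrides passed as the second argument to invoke.

```typescript
import { TogetherAI } from "@langchain/community/llms/togetherai";

// Constructor-level defaults come from TogetherAIInputs; the API key is read
// from the TOGETHER_AI_API_KEY environment variable if not passed explicitly.
const llm = new TogetherAI({
  model: "mistralai/Mixtral-8x7B-Instruct-v0.1", // example model, for illustration only
  maxTokens: 256,
});

// TogetherAICallOptions are supplied per call and override the defaults above.
const answer = await llm.invoke("Name three uses of embeddings.", {
  temperature: 0.3,
  stop: ["\n\n"],
});
console.log(answer);
```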
Whether to return log probabilities of the output tokens. If true, the log probabilities of each output token are returned in the content of the message.
Describes the format of structured outputs. This should be provided if the output is expected to be structured.
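As a sketch only: requesting log probabilities on a single call. The Together completions API accepts logprobs as an integer count of top log probabilities per token, while the description above reads as a boolean flag, so the value type used here is an assumption about the installed version of @langchain/community.

```typescript
import { TogetherAI } from "@langchain/community/llms/togetherai";

const llm = new TogetherAI({
  model: "mistralai/Mixtral-8x7B-Instruct-v0.1", // example model
});

// Assumption: `logprobs` follows the Together completions API and takes the
// number of top log probabilities to return per token; some versions may
// instead type this as a boolean flag.
const res = await llm.invoke("Finish the sentence: the sky is", {
  logprobs: 1,
  maxTokens: 16,
});
console.log(res);
```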
The maximum number of concurrent calls that can be made. Defaults to Infinity, which means no limit.
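A sketch of capping concurrency when fanning out several prompts with batch; the prompts and the limit of 2 are illustrative.

```typescript
import { TogetherAI } from "@langchain/community/llms/togetherai";

const llm = new TogetherAI({
  model: "mistralai/Mixtral-8x7B-Instruct-v0.1", // example model
});

const prompts = [
  "Summarize TCP in one sentence.",
  "Summarize UDP in one sentence.",
  "Summarize QUIC in one sentence.",
];

// maxConcurrency caps how many of these calls run at the same time;
// without it the default is Infinity (no limit).
const results = await llm.batch(prompts, { maxConcurrency: 2 });
console.log(results);
```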
Model name to use. See the Together AI model listing for available models.
Model name to use. Alias for model.
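Because model appears in the call options, the typings allow it to be overridden on an individual call; a sketch of that, with both model identifiers assumed purely for illustration.

```typescript
import { TogetherAI } from "@langchain/community/llms/togetherai";

const llm = new TogetherAI({
  model: "meta-llama/Llama-3-8b-chat-hf", // example default model
});

// Since "model" is part of TogetherAICallOptions, a single call can (per the
// interface) target a different hosted model without rebuilding the client.
const res = await llm.invoke("Translate 'good morning' to French.", {
  model: "mistralai/Mixtral-8x7B-Instruct-v0.1", // example override
});
console.log(res);
```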
Maximum number of times a call can recurse. If not provided, defaults to 25.
Penalizes repeated tokens according to frequency. Ranges from 1.0 to 2.0. Defaults to 1.0.
Unique identifier for the tracer run for this call. If not provided, a new UUID will be generated.
Name for the tracer run for this call. Defaults to the name of the class.
Abort signal for this call. If provided, the call will be aborted when the signal is aborted.
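A sketch combining the tracing and cancellation options on one call; the run name and the 5-second cutoff are arbitrary, and runId support depends on the installed langchain-core version.

```typescript
import { randomUUID } from "node:crypto";
import { TogetherAI } from "@langchain/community/llms/togetherai";

const llm = new TogetherAI({
  model: "mistralai/Mixtral-8x7B-Instruct-v0.1", // example model
});

// Abort the request if it has not finished within 5 seconds.
const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), 5_000);

try {
  const res = await llm.invoke("Write a haiku about latency.", {
    runName: "haiku-demo",      // name shown for the tracer run
    runId: randomUUID(),        // otherwise a new UUID is generated
    signal: controller.signal,  // aborts the call when the signal fires
  });
  console.log(res);
} finally {
  clearTimeout(timer);
}
```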
Stop tokens to use for this call. If not provided, the default stop tokens for the model will be used.
Amount of randomness injected into the response. Ranges from 0 (exclusive) to 1. Use a temperature closer to 0 for analytical or multiple-choice tasks, and closer to 1 for creative and generative tasks. Defaults to 0.95.
Timeout for this call in milliseconds.
Total probability mass of tokens to consider at each step. Ranges from 0 to 1.0. Defaults to 0.8.
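Finally, a sketch that tunes the sampling-related options together on a single call; the specific values are illustrative rather than recommended.

```typescript
import { TogetherAI } from "@langchain/community/llms/togetherai";

const llm = new TogetherAI({
  model: "mistralai/Mixtral-8x7B-Instruct-v0.1", // example model
});

// Lower temperature and topP narrow the sampling distribution; a mild
// repetitionPenalty discourages repeated tokens; stop and timeout bound
// the output length and the wall-clock time of the call.
const res = await llm.invoke("List three properties of prime numbers.", {
  temperature: 0.2,
  topP: 0.9,
  repetitionPenalty: 1.1,
  stop: ["\n\n"],
  timeout: 30_000, // milliseconds
});
console.log(res);
```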