Optional fields: any

Optional apiKey
Secret key to use when making requests. Defaults to the value of the MINIMAX_API_KEY environment variable.
Optional basePath

Optional beamWidth
How many results to generate; the default is 1 and the maximum is 4. Because beamWidth generates multiple results, it consumes more tokens.
Optional botSetting
Settings for each bot; only available in the pro version.
Optional continueLastMessage
If true, the current request is set to continuation mode, and the response is a continuation of the last sentence in the incoming messages; in this case the last sender is not limited to USER, it can also be BOT. For example, if the last sentence of the incoming messages is {"sender_type": "USER", "text": "天生我材"} ("I was born with talent"), the completion may be "必有用" ("it must be useful").
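A minimal sketch of continuation mode, assuming the @langchain/community import path (older releases exported ChatMinimax from langchain/chat_models/minimax):

import { HumanMessage } from "@langchain/core/messages";
import { ChatMinimax } from "@langchain/community/chat_models/minimax";

// Continuation mode: the model completes the last message rather than
// replying to it, so it should extend "天生我材" ("I was born with talent").
const model = new ChatMinimax({ continueLastMessage: true });
const res = await model.invoke([new HumanMessage({ content: "天生我材" })]);
console.log(res.content);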
Optional defaultBotName
Default bot name.
Optional defaultUserName
Default user name.
Optional headers

Optional maskSensitiveInfo
Masks text in the output that may involve privacy issues, currently including but not limited to emails, domains, links, ID numbers, home addresses, etc. Defaults to true, i.e. masking is enabled.
Optional minimaxApiKey
Secret key to use when making requests. Defaults to the value of the MINIMAX_API_KEY environment variable. Alias for apiKey.
Optional minimaxGroupId
Group ID to use when making requests. Defaults to the value of the MINIMAX_GROUP_ID environment variable.
model
Model name to use.

modelName
Model name to use. Alias for model.
Optional prefixMessages

Optional prompt
Dialogue setting, characters, or functionality setting.
Optional proVersion
Enable ChatCompletion Pro.
Optional replyConstraints

Optional roleMeta
Dialogue metadata.
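replyConstraints pins down who the reply is sent as on the pro endpoint. A sketch, assuming the sender_type / sender_name shape of the Minimax pro API:

import { ChatMinimax } from "@langchain/community/chat_models/minimax";

// Pro-version setup: botSetting declares the bot, and replyConstraints
// forces replies to be sent as that bot.
const model = new ChatMinimax({
  proVersion: true,
  botSetting: [
    {
      bot_name: "MM Assistant",
      content: "MM Assistant is an AI assistant developed by Minimax.",
    },
  ],
  replyConstraints: { sender_type: "BOT", sender_name: "MM Assistant" },
});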
Optional skipInfoMask
Whether to skip masking of text in the output that may involve privacy issues, currently including but not limited to emails, domain names, links, ID numbers, home addresses, etc. Defaults to false, i.e. masking is enabled.
streaming
Whether to stream the results or not. Defaults to false.
Optional temperature
Amount of randomness injected into the response. Ranges from 0 to 1 (0 is not included). Use a temperature closer to 0 for analytical / multiple-choice tasks, and closer to 1 for creative and generative tasks. Defaults to 0.95.
Optional tokensToGenerate
The maximum number of tokens to generate. This parameter does not affect the model's generation itself; it only truncates output tokens that exceed the limit. The number of tokens in the input context plus this value must be less than 6144 or 16384 (depending on the model), otherwise the request will fail.
Optional topP
Sampling method: the smaller the value, the more deterministic the result; the larger the value, the more random the result.
Optional useStandardSse
Whether to use the standard SSE format; when set to true, streamed results are separated by two line breaks. This parameter only takes effect when stream is set to true.
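A streaming sketch; useStandardSse only matters once streaming is enabled (import path assumed as above):

import { HumanMessage } from "@langchain/core/messages";
import { ChatMinimax } from "@langchain/community/chat_models/minimax";

const model = new ChatMinimax({ streaming: true, useStandardSse: true });

// .stream() is the standard LangChain.js streaming entry point.
const stream = await model.stream([new HumanMessage({ content: "Hello" })]);
for await (const chunk of stream) {
  process.stdout.write(String(chunk.content));
}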
identifyingParams(options?: unknown)
Get the identifying parameters for the model.

invocationParams(options?: unknown)
Get the parameters used to invoke the model. Returns an object with the following fields:

beam_width?: number
bot_setting?: BotSetting[]
functions?: Function[]
  A list of functions the model may generate JSON inputs for.
mask_sensitive_info?: boolean
plugins?: string[]
prompt?: string
reply_constraints?: ReplyConstraints
role_meta?: RoleMeta
sample_messages?: MinimaxChatCompletionRequestMessage[]
skip_info_mask?: boolean
stream?: boolean
temperature?: number
tokens_to_generate?: number
top_p?: number
use_standard_sse?: boolean
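The camelCase constructor fields map onto these snake_case wire parameters. A quick way to inspect the mapping (output shape is indicative, not exhaustive):

import { ChatMinimax } from "@langchain/community/chat_models/minimax";

const model = new ChatMinimax({ temperature: 0.5, topP: 0.8, beamWidth: 2 });
console.log(model.invocationParams());
// Includes e.g. { temperature: 0.5, top_p: 0.8, beam_width: 2, stream: false, ... }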
messageToMinimaxMessage(messages?: BaseMessage[], options?: unknown)
Convert a list of messages to the format expected by the model.
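A usage sketch; the { sender_type, ... } output shape is an assumption based on the Minimax message format:

import { AIMessage, HumanMessage } from "@langchain/core/messages";
import { ChatMinimax } from "@langchain/community/chat_models/minimax";

const model = new ChatMinimax({});
const minimaxMessages = model.messageToMinimaxMessage([
  new HumanMessage({ content: "Hello" }),
  new AIMessage({ content: "Hi, how can I help?" }),
]);
console.log(minimaxMessages); // e.g. [{ sender_type: "USER", ... }, { sender_type: "BOT", ... }]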
Static lc_name()
Wrapper around Minimax large language models that use the Chat endpoint.

To use, you should have the MINIMAX_GROUP_ID and MINIMAX_API_KEY environment variables set.

Example
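The example code block did not survive extraction; a representative sketch, with the model name and bot settings as assumptions:

import { HumanMessage } from "@langchain/core/messages";
import { ChatMinimax } from "@langchain/community/chat_models/minimax";

// MINIMAX_GROUP_ID and MINIMAX_API_KEY are read from the environment.
const model = new ChatMinimax({
  modelName: "abab5.5-chat",
  botSetting: [
    {
      bot_name: "MM Assistant",
      content: "MM Assistant is an AI assistant developed by Minimax.",
    },
  ],
});

const res = await model.invoke([new HumanMessage({ content: "Hello" })]);
console.log(res.content);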