langchain.js

    Wrapper around Minimax large language models that use the Chat endpoint.

    To use, you should have the MINIMAX_GROUP_ID and MINIMAX_API_KEY environment variables set.
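    Alternatively, the group ID and API key can be passed explicitly via the minimaxGroupId and minimaxApiKey fields documented below. A minimal sketch (the import path assumes the @langchain/community package and may differ by version):

    import { ChatMinimax } from "@langchain/community/chat_models/minimax";

    // Reads MINIMAX_GROUP_ID / MINIMAX_API_KEY from the environment by default;
    // both can also be supplied directly as constructor fields.
    const model = new ChatMinimax({
      minimaxGroupId: process.env.MINIMAX_GROUP_ID,
      minimaxApiKey: process.env.MINIMAX_API_KEY,
    });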

    // NOTE: import paths assume the @langchain/community package and may vary by version.
    import { LLMChain } from "langchain/chains";
    import {
      ChatPromptTemplate,
      HumanMessagePromptTemplate,
      SystemMessagePromptTemplate,
    } from "@langchain/core/prompts";
    import { ChatMinimax } from "@langchain/community/chat_models/minimax";

    // Define a chat prompt with a system message setting the context for translation
    const chatPrompt = ChatPromptTemplate.fromMessages([
      SystemMessagePromptTemplate.fromTemplate(
        "You are a helpful assistant that translates {input_language} to {output_language}.",
      ),
      HumanMessagePromptTemplate.fromTemplate("{text}"),
    ]);

    // Create a new LLMChain with the chat model and the defined prompt
    const chainB = new LLMChain({
      prompt: chatPrompt,
      llm: new ChatMinimax({ temperature: 0.01 }),
    });

    // Call the chain with the input language, output language, and the text to translate
    const resB = await chainB.call({
      input_language: "English",
      output_language: "Chinese",
      text: "I love programming.",
    });

    // Log the result
    console.log({ resB });
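    The chat model can also be invoked directly with a list of messages, using the standard LangChain chat-model interface (a minimal sketch; the prompt text is illustrative):

    import { ChatMinimax } from "@langchain/community/chat_models/minimax";
    import { HumanMessage } from "@langchain/core/messages";

    const minimax = new ChatMinimax({ temperature: 0.01 });
    const result = await minimax.invoke([
      new HumanMessage("Translate 'I love programming.' into Chinese."),
    ]);
    console.log(result.content);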


    Implements

    • MinimaxChatInput

    Constructors

    • new ChatMinimax(fields?): ChatMinimax

      Parameters

      • Optional fields: any

      Returns ChatMinimax

    Properties

    apiKey?: string

    Secret key to use when making requests. Defaults to the value of the MINIMAX_API_KEY environment variable.

    apiUrl: string
    basePath?: string = "https://api.minimax.chat/v1"
    beamWidth?: number

    How many results to generate; the default is 1 and the maximum is 4. Because beamWidth generates multiple results, it consumes more tokens.

    botSetting?: BotSetting[]

    Settings for each bot; only available in the pro version.
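    With the pro endpoint, a bot persona is typically configured together with replyConstraints. A minimal sketch (assuming BotSetting carries bot_name/content and ReplyConstraints carries sender_type/sender_name; the persona text is illustrative):

    import { ChatMinimax } from "@langchain/community/chat_models/minimax";

    const proModel = new ChatMinimax({
      proVersion: true,
      botSetting: [
        {
          bot_name: "MM Assistant",
          content: "MM Assistant is an AI assistant developed by Minimax.",
        },
      ],
      replyConstraints: {
        sender_type: "BOT",
        sender_name: "MM Assistant",
      },
    });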

    continueLastMessage?: boolean

    If true, the current request is set to continuation mode, and the response continues the last message in the incoming messages; in this case the last sender is not limited to USER, it can also be BOT. For example, if the last message in the incoming messages is {"sender_type": "USER", "text": "天生我材"} (the opening of a Li Bai line), the reply may complete the sentence, e.g. "必有用" ("it must be useful").
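    A minimal sketch of continuation mode (assuming continueLastMessage is accepted as a constructor field, as listed here, and that the trailing assistant message is the one continued):

    import { ChatMinimax } from "@langchain/community/chat_models/minimax";
    import { AIMessage, HumanMessage } from "@langchain/core/messages";

    const continuer = new ChatMinimax({
      continueLastMessage: true,
    });

    // The model continues the trailing BOT message instead of answering anew.
    const continuation = await continuer.invoke([
      new HumanMessage("Recite the Li Bai line."),
      new AIMessage("天生我材"),
    ]);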

    defaultBotName?: string = "Assistant"

    Default bot name

    defaultUserName?: string = "I"

    Default user name

    headers?: Record<string, string>
    lc_serializable: boolean = true
    maskSensitiveInfo?: boolean

    Whether to mask text in the output that may involve privacy issues; masking currently includes but is not limited to emails, domains, links, ID numbers, home addresses, etc. Defaults to true, i.e. masking is enabled.

    minimaxApiKey?: string

    Secret key to use when making requests. Defaults to the value of the MINIMAX_API_KEY environment variable. Alias for apiKey.

    minimaxGroupId?: string

    Group ID to use when making requests. Defaults to the value of the MINIMAX_GROUP_ID environment variable.

    model: string = "abab5.5-chat"

    Model name to use

    "abab5.5-chat"
    
    modelName: string = "abab5.5-chat"

    Model name to use. Alias for model.
    
    prefixMessages?: MinimaxChatCompletionRequestMessage[]
    prompt?: string

    Dialogue setting: character or functionality configuration for the session.

    proVersion?: boolean = true

    Whether to use the ChatCompletion Pro endpoint.

    replyConstraints?: ReplyConstraints
    roleMeta?: RoleMeta

    Dialogue Metadata

    skipInfoMask?: boolean

    Whether to skip masking of text in the output that may involve privacy issues, currently including but not limited to emails, domain names, links, ID numbers, home addresses, etc. Defaults to false, i.e. masking is enabled.
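    A minimal sketch combining the two masking-related options (both field names are taken from this page; the values shown simply restate the defaults):

    import { ChatMinimax } from "@langchain/community/chat_models/minimax";

    const maskedModel = new ChatMinimax({
      maskSensitiveInfo: true, // mask emails, links, IDs, addresses, etc.
      skipInfoMask: false, // do not skip masking
    });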

    streaming: boolean = false

    Whether to stream the results. Defaults to false.
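    When streaming is enabled, tokens can be consumed through the generic LangChain callback interface rather than anything Minimax-specific (a minimal sketch):

    import { ChatMinimax } from "@langchain/community/chat_models/minimax";
    import { HumanMessage } from "@langchain/core/messages";

    const streamingModel = new ChatMinimax({
      streaming: true,
      callbacks: [
        {
          handleLLMNewToken(token: string) {
            process.stdout.write(token);
          },
        },
      ],
    });

    await streamingModel.invoke([new HumanMessage("Tell me a short story.")]);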

    temperature?: number = 0.9

    Amount of randomness injected into the response. Ranges from 0 to 1 (0 is not included). Use a temperature closer to 0 for analytical / multiple-choice tasks, and closer to 1 for creative and generative tasks. Defaults to 0.9.

    tokensToGenerate?: number

    The maximum number of tokens to generate. This parameter does not affect the model's generation itself; it only truncates tokens beyond the limit. Ensure that the number of tokens in the input context plus this value is less than 6144 or 16384 (depending on the model), otherwise the request will fail.

    topP?: number = 0.8

    Sampling parameter (top-p): the smaller the value, the more deterministic the result; the larger the value, the more random the result.

    useStandardSse?: boolean

    Whether to use the standard SSE format; when set to true, the streaming results will be separated by two line breaks. This parameter only takes effect when streaming is set to true.

    Accessors

    • get lc_secrets(): undefined | { [key: string]: string }

      Returns undefined | { [key: string]: string }

    Methods

    • Returns string

    • Parameters

      • Optional options: unknown

      Returns any

    • Parameters

      • Optional options: unknown

      Returns any

    • Get the identifying parameters for the model

      Returns {
          beam_width?: number;
          bot_setting?: BotSetting[];
          functions?: Function[];
          mask_sensitive_info?: boolean;
          model: string;
          plugins?: string[];
          prompt?: string;
          reply_constraints?: ReplyConstraints;
          role_meta?: RoleMeta;
          sample_messages?: MinimaxChatCompletionRequestMessage[];
          skip_info_mask?: boolean;
          stream?: boolean;
          temperature?: number;
          tokens_to_generate?: number;
          top_p?: number;
          use_standard_sse?: boolean;
      }

      • Optional beam_width?: number
      • Optional bot_setting?: BotSetting[]
      • Optional functions?: Function[]

        A list of functions the model may generate JSON inputs for (see the sketch after this list).

      • Optional mask_sensitive_info?: boolean
      • model: string
      • Optional plugins?: string[]
      • Optional prompt?: string
      • Optional reply_constraints?: ReplyConstraints
      • Optional role_meta?: RoleMeta
      • Optional sample_messages?: MinimaxChatCompletionRequestMessage[]
      • Optional skip_info_mask?: boolean
      • Optional stream?: boolean
      • Optional temperature?: number
      • Optional tokens_to_generate?: number
      • Optional top_p?: number
      • Optional use_standard_sse?: boolean
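    Because the request may carry a functions list, function calling can be sketched by binding functions as call options via the generic .bind mechanism (a minimal sketch; the get_weather function and its JSON schema are illustrative, and whether functions is accepted this way may depend on the package version):

    import { ChatMinimax } from "@langchain/community/chat_models/minimax";
    import { HumanMessage } from "@langchain/core/messages";

    const functionModel = new ChatMinimax({ proVersion: true }).bind({
      functions: [
        {
          name: "get_weather",
          description: "Get the current weather for a city.",
          parameters: {
            type: "object",
            properties: {
              city: { type: "string", description: "The city name" },
            },
            required: ["city"],
          },
        },
      ],
    });

    const weatherResult = await functionModel.invoke([
      new HumanMessage("What is the weather like in Shanghai?"),
    ]);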
    • Get the parameters used to invoke the model

      Parameters

      • Optional options: unknown

      Returns Omit<MinimaxChatCompletionRequest, "messages">

    • Convert a list of messages to the format expected by the model.

      Parameters

      • Optional messages: BaseMessage[]
      • Optional options: unknown

      Returns undefined | MinimaxChatCompletionRequestMessage[]

    • Returns string