langchain.js

Class ChatAlibabaTongyi

    Wrapper around Alibaba Tongyi large language models that use the Chat endpoint.

    To use, you should have the ALIBABA_API_KEY environment variable set.

    import { ChatAlibabaTongyi } from "@langchain/community/chat_models/alibaba_tongyi";
    import { HumanMessage } from "@langchain/core/messages";

    // Omit alibabaApiKey to fall back to the ALIBABA_API_KEY environment variable.
    const qwen = new ChatAlibabaTongyi({
      alibabaApiKey: "YOUR-API-KEY",
    });

    // The same client with an explicit model and sampling temperature.
    const qwenTurbo = new ChatAlibabaTongyi({
      model: "qwen-turbo",
      temperature: 1,
      alibabaApiKey: "YOUR-API-KEY",
    });

    const messages = [new HumanMessage("Hello")];

    await qwen.call(messages);


    Implements

    • AlibabaTongyiChatInput

    Constructors

    • new ChatAlibabaTongyi(fields?: any): ChatAlibabaTongyi

      • Parameters

        • fields: any = {}

      Returns ChatAlibabaTongyi

    Properties

    alibabaApiKey?: string

    API key to use when making requests. Defaults to the value of ALIBABA_API_KEY environment variable.
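
    A minimal sketch of passing the key explicitly instead of relying on the environment fallback described above:

    import { ChatAlibabaTongyi } from "@langchain/community/chat_models/alibaba_tongyi";

    // Explicit pass-through; omitting alibabaApiKey entirely has the same effect.
    const qwen = new ChatAlibabaTongyi({
      alibabaApiKey: process.env.ALIBABA_API_KEY,
    });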

    apiUrl: string
    enableSearch?: boolean
    lc_serializable: boolean
    maxTokens?: number
    model:
        | string & {}
        | "qwen-turbo"
        | "qwen-plus"
        | "qwen-max"
        | "qwen-max-1201"
        | "qwen-max-longcontext"
        | "qwen-7b-chat"
        | "qwen-14b-chat"
        | "qwen-72b-chat"
        | "llama2-7b-chat-v2"
        | "llama2-13b-chat-v2"
        | "baichuan-7b-v1"
        | "baichuan2-13b-chat-v1"
        | "baichuan2-7b-chat-v1"
        | "chatglm3-6b"
        | "chatglm-6b-v2"

    Model name to use. Available options include qwen-turbo, qwen-plus, qwen-max, and other compatible models.

    Default: "qwen-turbo"
    
    modelName:
        | string & {}
        | "qwen-turbo"
        | "qwen-plus"
        | "qwen-max"
        | "qwen-max-1201"
        | "qwen-max-longcontext"
        | "qwen-7b-chat"
        | "qwen-14b-chat"
        | "qwen-72b-chat"
        | "llama2-7b-chat-v2"
        | "llama2-13b-chat-v2"
        | "baichuan-7b-v1"
        | "baichuan2-13b-chat-v1"
        | "baichuan2-7b-chat-v1"
        | "chatglm3-6b"
        | "chatglm-6b-v2"

    Model name to use. Available options include qwen-turbo, qwen-plus, qwen-max, and other compatible models. Alias for model.

    Default: "qwen-turbo"
    
    prefixMessages?: TongyiMessage[]

    Messages to pass as a prefix to the prompt.
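
    A minimal sketch, assuming a TongyiMessage is a plain { role, content } record accepted by the Tongyi chat endpoint:

    import { ChatAlibabaTongyi } from "@langchain/community/chat_models/alibaba_tongyi";

    const qwen = new ChatAlibabaTongyi({
      alibabaApiKey: "YOUR-API-KEY",
      // Assumed shape; these are sent ahead of the runtime messages on every call.
      prefixMessages: [{ role: "system", content: "You are a helpful assistant." }],
    });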

    repetitionPenalty?: number

    Penalizes repeated tokens according to frequency. Range from 1.0 to 2.0. Defaults to 1.0.

    seed?: number
    streaming: boolean

    Whether to stream the results or not. Defaults to false.
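
    A sketch of consuming a token stream through .stream(), the standard method shared by LangChain chat models (the API key is a placeholder):

    import { ChatAlibabaTongyi } from "@langchain/community/chat_models/alibaba_tongyi";
    import { HumanMessage } from "@langchain/core/messages";

    const qwen = new ChatAlibabaTongyi({
      streaming: true,
      alibabaApiKey: "YOUR-API-KEY",
    });

    // Each chunk carries an incremental piece of the reply.
    const stream = await qwen.stream([new HumanMessage("Hello")]);
    for await (const chunk of stream) {
      console.log(chunk.content);
    }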

    temperature?: number

    Amount of randomness injected into the response. Ranges from 0 to 1 (0 is not included). Use a temperature closer to 0 for analytical or multiple-choice tasks, and closer to 1 for creative and generative tasks. Defaults to 0.95.

    topK?: number
    topP?: number

    Total probability mass of tokens to consider at each step. Range from 0 to 1.0. Defaults to 0.8.
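
    A sketch combining the sampling controls documented above; the values are illustrative and sit inside the stated ranges:

    import { ChatAlibabaTongyi } from "@langchain/community/chat_models/alibaba_tongyi";

    const creativeQwen = new ChatAlibabaTongyi({
      alibabaApiKey: "YOUR-API-KEY",
      temperature: 0.95,      // (0, 1]; higher means more random output
      topP: 0.8,              // probability mass of tokens considered per step
      topK: 50,               // illustrative value; no documented default above
      repetitionPenalty: 1.1, // 1.0 to 2.0; discourages repeated tokens
    });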

    Accessors

    • get callKeys(): string[]

      Returns string[]

    • get lc_aliases(): undefined

      Returns undefined

    • get lc_secrets(): { alibabaApiKey: string }

      Returns { alibabaApiKey: string }

    Methods

    • _llmType(): string

      Returns string

    • _streamResponseChunks(messages: BaseMessage[], options?: unknown, runManager?: any): AsyncGenerator<ChatGenerationChunk>

      • Parameters

        • messages: BaseMessage[]
        • Optional options: unknown
        • Optional runManager: any

      Returns AsyncGenerator<ChatGenerationChunk>

    • identifyingParams()

      Get the identifying parameters for the model.

      Returns {
          enable_search?: null | boolean;
          incremental_output?: null | boolean;
          max_tokens?: null | number;
          repetition_penalty?: null | number;
          result_format?: "message" | "text";
          seed?: null | number;
          stream?: boolean;
          temperature?: null | number;
          top_k?: null | number;
          top_p?: null | number;
      } & Pick<ChatCompletionRequest, "model">

    • invocationParams()

      Get the parameters used to invoke the model.

      Returns {
          enable_search?: null | boolean;
          incremental_output?: null | boolean;
          max_tokens?: null | number;
          repetition_penalty?: null | number;
          result_format?: "message" | "text";
          seed?: null | number;
          stream?: boolean;
          temperature?: null | number;
          top_k?: null | number;
          top_p?: null | number;
      }
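
    As a sketch, the returned object can be inspected to see how the camelCase constructor fields map onto the endpoint's snake_case parameters:

    import { ChatAlibabaTongyi } from "@langchain/community/chat_models/alibaba_tongyi";

    const qwen = new ChatAlibabaTongyi({
      alibabaApiKey: "YOUR-API-KEY",
      temperature: 0.7,
      maxTokens: 256,
    });

    const params = qwen.invocationParams();
    console.log(params.temperature, params.max_tokens); // 0.7 256, per the mapping above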

    • Static lc_name(): string

      Returns string