langchain.js

    Class ChatBaiduWenxin

    Deprecated: install and import from @langchain/baidu-qianfan instead.

    Wrapper around Baidu ERNIE large language models that use the Chat endpoint.
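
    A minimal migration sketch, assuming the replacement package exports a ChatBaiduQianfan class (check the @langchain/baidu-qianfan docs for the exact class name and credential options):

    import { ChatBaiduQianfan } from "@langchain/baidu-qianfan";

    // Hypothetical equivalent setup using the replacement package
    const chat = new ChatBaiduQianfan({
        model: "ERNIE-Bot-turbo",
    });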

    To use this model, you should have the BAIDU_API_KEY and BAIDU_SECRET_KEY environment variables set.

    import { ChatBaiduWenxin } from "@langchain/community/chat_models/baiduwenxin";
    import { HumanMessage } from "@langchain/core/messages";

    // Uses the default model, ERNIE-Bot-turbo
    const ernieTurbo = new ChatBaiduWenxin({
        apiKey: "YOUR-API-KEY",
        baiduSecretKey: "YOUR-SECRET-KEY",
    });

    const ernie = new ChatBaiduWenxin({
        model: "ERNIE-Bot",
        temperature: 1,
        apiKey: "YOUR-API-KEY",
        baiduSecretKey: "YOUR-SECRET-KEY",
    });

    const messages = [new HumanMessage("Hello")];

    let res = await ernieTurbo.call(messages);

    res = await ernie.call(messages);
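
    Because the class exposes a streaming chunk generator (the AsyncGenerator<ChatGenerationChunk> method under Methods below), token-level streaming should also work through the standard .stream() method on langchain.js chat models. A hedged sketch:

    const streamingErnie = new ChatBaiduWenxin({
        streaming: true,
        apiKey: "YOUR-API-KEY",
        baiduSecretKey: "YOUR-SECRET-KEY",
    });

    // Each chunk carries an incremental piece of the response content
    const stream = await streamingErnie.stream([new HumanMessage("Hello")]);
    for await (const chunk of stream) {
        process.stdout.write(chunk.content as string);
    }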

    Hierarchy

    Implements

    • BaiduWenxinChatInput

    Constructors

    • new ChatBaiduWenxin(fields?: any): ChatBaiduWenxin

      Parameters

      • Optional fields: any

      Returns ChatBaiduWenxin

    Properties

    accessToken: string
    apiKey?: string

    API key to use when making requests. Defaults to the value of BAIDU_API_KEY environment variable.

    apiUrl: string
    baiduApiKey?: string

    API key to use when making requests. Defaults to the value of BAIDU_API_KEY environment variable. Alias for apiKey

    baiduSecretKey?: string

    Secret key to use when making requests. Defaults to the value of BAIDU_SECRET_KEY environment variable.

    lc_serializable: boolean = true
    model: string = "ERNIE-Bot-turbo"

    Model name to use. Available options are: ERNIE-Bot, ERNIE-Bot-turbo, ERNIE-Bot-4.

    modelName: string = "ERNIE-Bot-turbo"

    Model name to use. Available options are: ERNIE-Bot, ERNIE-Bot-turbo, ERNIE-Bot-4. Alias for model.
    
    penaltyScore?: number

    Penalizes repeated tokens according to frequency. Ranges from 1.0 to 2.0. Defaults to 1.0.

    prefixMessages?: WenxinMessage[]

    Messages to pass as a prefix to the prompt.

    streaming: boolean = false

    Whether to stream the results or not. Defaults to false.

    temperature?: number

    Amount of randomness injected into the response. Ranges from 0 to 1 (0 is not included). Use a temperature closer to 0 for analytical or multiple-choice tasks, and closer to 1 for creative and generative tasks. Defaults to 0.95.

    topP?: number

    Total probability mass of tokens to consider at each step. Ranges from 0 to 1.0. Defaults to 0.8.

    userId?: string

    ID of the end user making the requests.
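
    A hedged sketch combining the sampling-related options above (the values are illustrative, not recommendations):

    const tunedErnie = new ChatBaiduWenxin({
        model: "ERNIE-Bot-4",
        temperature: 0.95, // randomness, range (0, 1]
        topP: 0.8, // nucleus sampling mass, range [0, 1.0]
        penaltyScore: 1.2, // repetition penalty, range [1.0, 2.0]
        userId: "end-user-42", // hypothetical end-user ID
        apiKey: "YOUR-API-KEY",
        baiduSecretKey: "YOUR-SECRET-KEY",
    });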

    Accessors

    • get callKeys(): string[]

      Returns string[]

    • get lc_aliases(): undefined | { [key: string]: string }

      Returns undefined | { [key: string]: string }

    • get lc_secrets(): undefined | { [key: string]: string }

      Returns undefined | { [key: string]: string }

    Methods

    • Returns string

    • Parameters

      • messages: BaseMessage[]
      • Optional options: unknown
      • Optional runManager: any

      Returns AsyncGenerator<ChatGenerationChunk>

    • Method that retrieves the access token for making requests to the Baidu API.

      Parameters

      • Optional options: unknown

        Optional parsed call options.

      Returns Promise<any>

      The access token for making requests to the Baidu API.

    • Get the identifying parameters for the model (see the sketch after this list)

      Returns {
          model_name: string;
          penalty_score?: number;
          stream?: boolean;
          system?: string;
          temperature?: number;
          top_p?: number;
          user_id?: string;
      }

    • Get the parameters used to invoke the model

      Returns Omit<ChatCompletionRequest, "messages">

    • Returns string
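
    A hedged sketch exercising the methods above. The method names (getAccessToken, identifyingParams, invocationParams) are assumed from the standard langchain.js chat-model interface, since this page elides them; verify against the source before relying on them.

    // Fetch the OAuth access token used for Baidu API requests
    const token = await ernie.getAccessToken();

    // Snake-cased parameters identifying this model configuration
    const idParams = ernie.identifyingParams();
    console.log(idParams.model_name); // "ERNIE-Bot"

    // The full request payload, minus the messages themselves
    const invocation = ernie.invocationParams();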