langchain.js

    ChatZhipuAI

    Chat model class for the ZhipuAI API. Implements the ZhipuAIChatInput interface.

    Hierarchy

    Implements

      • ZhipuAIChatInput

    Constructors

    • Parameters

      • fields: any = {}

      Returns ChatZhipuAI

    Properties

    apiKey?: string

    API key to use when making requests. Defaults to the value of ZHIPUAI_API_KEY environment variable.

    apiUrl: string
    doSample?: boolean

    Turns on the sampling strategy when true. When doSample is false, temperature and topP have no effect.

    maxTokens?: number

    Maximum number of tokens to generate. The maximum value is 8192; defaults to 1024.

    messages?: ZhipuMessage[]

    Messages to pass as a prefix to the prompt
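    The ZhipuMessage shape is not reproduced on this page. As a sketch, the role/content structure below is an assumption modeled on the ZhipuAI chat API, not copied from langchain.js:

```typescript
// Assumed shape for ZhipuMessage (role + content), modeled on the
// ZhipuAI chat API; not copied from the library's source.
type ZhipuMessage = {
  role: "system" | "user" | "assistant";
  content: string;
};

// Build a prefix that pins down the assistant's behavior before the prompt.
// systemPrefix is a hypothetical helper for illustration only.
function systemPrefix(instructions: string): ZhipuMessage[] {
  return [{ role: "system", content: instructions }];
}

const prefix = systemPrefix("Answer concisely in English.");
```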

    model: ModelName

    Model to use. Defaults to "glm-3-turbo".

    modelName: ModelName

    Model to use. Defaults to "glm-3-turbo". Alias for model.

    requestId?: string

    Unique identifier for the request. Defaults to a random UUID.

    stop?: string[]
    streaming: boolean

    Whether to stream the results or not. Defaults to false.

    temperature?: number

    Amount of randomness injected into the response. Ranges from 0 to 1 (0 excluded). Use a temperature closer to 0 for analytical / multiple-choice tasks, and closer to 1 for creative and generative tasks. Defaults to 0.95.

    topP?: number

    Total probability mass of tokens to consider at each step. Ranges from 0 to 1. Defaults to 0.7.
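    The defaults quoted above can be collected in one place. The helper below is illustrative only (applyZhipuDefaults is a hypothetical name, not part of langchain.js); the values come from the descriptions on this page:

```typescript
import { randomUUID } from "node:crypto";

// Illustrative only: gathers the documented defaults for the sampling-related
// fields. applyZhipuDefaults is a hypothetical helper, not library code.
interface SamplingOptions {
  temperature?: number;
  topP?: number;
  maxTokens?: number;
  requestId?: string;
}

function applyZhipuDefaults(opts: SamplingOptions): Required<SamplingOptions> {
  return {
    temperature: opts.temperature ?? 0.95, // documented default
    topP: opts.topP ?? 0.7,                // documented default
    maxTokens: opts.maxTokens ?? 1024,     // documented default; max is 8192
    requestId: opts.requestId ?? randomUUID(), // random UUID by default
  };
}
```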

    zhipuAIApiKey?: string

    API key to use when making requests. Defaults to the value of the ZHIPUAI_API_KEY environment variable. Alias for apiKey.
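    The fields above are camelCase on the TypeScript side, while the doSample description hints at snake_case names (do_sample) in the underlying REST API. The mapping below is an illustrative assumption, not the library's actual serialization code:

```typescript
// Illustrative mapping (an assumption, not langchain.js source): the camelCase
// fields documented above rendered as snake_case ZhipuAI REST parameters.
interface WireParams {
  model: string;
  do_sample?: boolean;
  temperature?: number;
  top_p?: number;
  max_tokens?: number;
  request_id?: string;
}

// toWireParams is a hypothetical helper for illustration only.
function toWireParams(fields: {
  model: string;
  doSample?: boolean;
  temperature?: number;
  topP?: number;
  maxTokens?: number;
  requestId?: string;
}): WireParams {
  return {
    model: fields.model,
    do_sample: fields.doSample,
    temperature: fields.temperature,
    top_p: fields.topP,
    max_tokens: fields.maxTokens,
    request_id: fields.requestId,
  };
}

const wire = toWireParams({ model: "glm-3-turbo", topP: 0.7, maxTokens: 1024 });
```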

    Accessors

    • get callKeys(): string[]

      Returns string[]

    • get lc_aliases(): undefined

      Returns undefined

    • get lc_secrets(): { apiKey: string; zhipuAIApiKey: string }

      Returns { apiKey: string; zhipuAIApiKey: string }

    Methods

    • Returns string

    • Parameters

      • messages: BaseMessage[]
      • Optional options: unknown
      • Optional runManager: any

      Returns AsyncGenerator<ChatGenerationChunk>

    • Get the identifying parameters for the model

      Returns Omit<ChatCompletionRequest, "messages">

    • Get the parameters used to invoke the model

      Returns Omit<ChatCompletionRequest, "messages">

    • Returns string