langchain.js

    Interface OpenAIInput

    Input to the OpenAI class.

    interface OpenAIInput {
        apiKey?: string;
        batchSize: number;
        bestOf?: number;
        frequencyPenalty: number;
        logitBias?: Record<string, number>;
        maxCompletionTokens?: number;
        maxTokens?: number;
        model: OpenAIChatModelId;
        modelKwargs?: Record<string, any>;
        modelName: string;
        n: number;
        openAIApiKey?: string;
        presencePenalty: number;
        stop?: string[];
        stopSequences?: string[];
        streaming: boolean;
        streamUsage?: boolean;
        temperature: number;
        timeout?: number;
        topP: number;
        user?: string;
        verbosity?: OpenAIVerbosityParam;
    }
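
    For orientation, a minimal usage sketch (not part of the generated reference), assuming the OpenAI class exported from @langchain/openai and a few of the fields above:

        import { OpenAI } from "@langchain/openai";

        // apiKey falls back to the OPENAI_API_KEY environment variable when omitted.
        const llm = new OpenAI({
            model: "gpt-3.5-turbo-instruct",
            temperature: 0.7,
            maxTokens: 256,
            n: 1,
        });

        const text = await llm.invoke("Write a haiku about TypeScript.");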


    Properties

    apiKey?: string

    API key to use when making requests to OpenAI. Defaults to the value of OPENAI_API_KEY environment variable.

    batchSize: number

    Batch size to use when passing multiple documents to generate

    bestOf?: number

    Generates bestOf completions server-side and returns the "best"

    frequencyPenalty: number

    Penalizes repeated tokens according to frequency

    logitBias?: Record<string, number>

    Dictionary used to adjust the probability of specific tokens being generated
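
    An illustrative sketch of biasing specific token IDs; token ID 50256 (the GPT end-of-text token) is only an example:

        const llm = new OpenAI({
            model: "gpt-3.5-turbo-instruct",
            // Keys are token IDs as strings; values range from -100 (ban) to 100 (force).
            logitBias: { "50256": -100 },
        });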

    maxCompletionTokens?: number

    Maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the model's maximum context size. Alias for maxTokens for reasoning models.

    maxTokens?: number

    Maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the model's maximum context size.

    model: OpenAIChatModelId

    Model name to use

    modelKwargs?: Record<string, any>

    Holds any additional parameters that are valid to pass to openai.createCompletion that are not explicitly specified on this class.
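
    As an illustration, extra request parameters with no dedicated field can be forwarded through modelKwargs; the logprobs parameter below is only an example:

        const llm = new OpenAI({
            model: "gpt-3.5-turbo-instruct",
            // Forwarded verbatim to the underlying completions request.
            modelKwargs: { logprobs: 5 },
        });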

    modelName: string

    Model name to use. Alias for model.

    Deprecated: Use "model" instead.

    n: number

    Number of completions to generate for each prompt

    openAIApiKey?: string

    API key to use when making requests to OpenAI. Defaults to the value of OPENAI_API_KEY environment variable. Alias for apiKey

    presencePenalty: number

    Penalizes repeated tokens

    stop?: string[]

    List of stop words to use when generating. Alias for stopSequences

    stopSequences?: string[]

    List of stop words to use when generating
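
    A brief sketch of stop sequences; generation halts before the first occurrence of any listed string (the strings below are arbitrary examples):

        const llm = new OpenAI({
            model: "gpt-3.5-turbo-instruct",
            stopSequences: ["\n\n", "Observation:"],
        });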

    streaming: boolean

    Whether to stream the results or not. Enabling streaming disables tokenUsage reporting.
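
    A sketch of streaming output via the stream() method available on the class; for this completions model each yielded chunk is a string of generated text:

        const llm = new OpenAI({ model: "gpt-3.5-turbo-instruct", streaming: true });

        const stream = await llm.stream("Tell me a short story.");
        for await (const chunk of stream) {
            process.stdout.write(chunk);
        }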

    streamUsage?: boolean

    Whether or not to include token usage data in streamed chunks.

    Default: true
    
    temperature: number

    Sampling temperature to use

    timeout?: number

    Timeout to use when making requests to OpenAI.

    topP: number

    Total probability mass of tokens to consider at each step

    user?: string

    Unique string identifier representing your end-user, which can help OpenAI to monitor and detect abuse.

    verbosity?: OpenAIVerbosityParam

    The verbosity of the model's response.