langchain.js

    Class OpenAI<CallOptions>

    Wrapper around OpenAI large language models.

    To use, you should have the openai package installed, with the OPENAI_API_KEY environment variable set.

    To use with Azure, import the AzureOpenAI class.

    Any parameters that are valid to be passed to openai.createCompletion can be passed through modelKwargs, even if not explicitly available on this class.

    const model = new OpenAI({
      model: "gpt-4",
      temperature: 0.7,
      maxTokens: 1000,
      maxRetries: 5,
    });

    const res = await model.invoke(
      "Question: What would be a good company name for a company that makes colorful socks?\nAnswer:"
    );
    console.log({ res });


    Properties

    apiKey?: string

    API key to use when making requests to OpenAI. Defaults to the value of OPENAI_API_KEY environment variable.

    batchSize: number = 20

    Batch size to use when passing multiple documents to generate
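
    As a sketch of what batching means here: prompts are split into groups of at most batchSize before each underlying API call. The chunkPrompts helper below is a hypothetical illustration, not part of the library.

```typescript
// Hypothetical sketch: split prompts into sub-batches of at most
// `batchSize` items, as a batchSize of 20 would before each API call.
function chunkPrompts(prompts: string[], batchSize: number): string[][] {
  const batches: string[][] = [];
  for (let i = 0; i < prompts.length; i += batchSize) {
    batches.push(prompts.slice(i, i + batchSize));
  }
  return batches;
}

const prompts = Array.from({ length: 45 }, (_, i) => `prompt ${i}`);
const batches = chunkPrompts(prompts, 20);
console.log(batches.map((b) => b.length)); // → [ 20, 20, 5 ]
```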

    bestOf?: number

    Generates bestOf completions server-side and returns the "best" one.

    client: OpenAI
    clientConfig: ClientOptions
    frequencyPenalty?: number

    Penalizes repeated tokens according to frequency

    lc_serializable: boolean = true
    logitBias?: Record<string, number>

    Dictionary used to adjust the probability of specific tokens being generated
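
    Conceptually, each entry maps a token ID (as a string key) to an additive offset on the model's raw logits before sampling; large negative or positive values effectively ban or force a token. The softmax demonstration below is an illustrative sketch of that effect, not library code:

```typescript
// Sketch (not library code): a logit bias adds a per-token offset to the
// raw logits before the softmax that produces sampling probabilities.
function softmax(logits: number[]): number[] {
  const max = Math.max(...logits);
  const exps = logits.map((l) => Math.exp(l - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

const logits = [2.0, 1.0, 0.5]; // illustrative raw scores for token IDs 0..2
const bias: Record<string, number> = { "1": 5 }; // boost token ID 1
const biased = logits.map((l, id) => l + (bias[String(id)] ?? 0));

const before = softmax(logits);
const after = softmax(biased);
console.log(before[1] < after[1]); // → true (token 1 became more likely)
```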

    maxTokens?: number

    Maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the model's maximum context size.
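
    The documented -1 behavior can be sketched as follows; contextSize and promptTokens are assumed inputs for illustration, not fields on this class:

```typescript
// Sketch of the documented -1 behavior: request as many tokens as the
// model's context window allows once the prompt's tokens are subtracted.
function resolveMaxTokens(
  maxTokens: number,
  contextSize: number,
  promptTokens: number
): number {
  if (maxTokens === -1) return contextSize - promptTokens;
  return maxTokens;
}

console.log(resolveMaxTokens(-1, 4097, 100)); // → 3997
console.log(resolveMaxTokens(1000, 4097, 100)); // → 1000
```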

    model: string = "gpt-3.5-turbo-instruct"

    Model name to use

    modelKwargs?: Record<string, any>

    Holds any additional parameters that are valid to pass to openai.createCompletion that are not explicitly specified on this class.
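
    A minimal sketch of the idea: extra createCompletion parameters are merged into the request body alongside the explicitly modeled fields. The merge below is an illustration of that shape, not the library's exact implementation:

```typescript
// Illustrative only: explicit class fields and modelKwargs end up
// side by side in the request body sent to the completions endpoint.
const explicitParams = { model: "gpt-3.5-turbo-instruct", temperature: 0.7 };
const modelKwargs = { logprobs: 5, suffix: " The end." };

const requestBody = { ...explicitParams, ...modelKwargs };
console.log(requestBody); // merged body holds both explicit fields and kwargs
```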

    modelName: string

    Deprecated: use model instead.

    n: number = 1

    Number of completions to generate for each prompt

    openAIApiKey?: string

    API key to use when making requests to OpenAI. Defaults to the value of OPENAI_API_KEY environment variable. Alias for apiKey

    organization?: string
    presencePenalty?: number

    Penalizes repeated tokens

    stop?: string[]

    List of stop words to use when generating. Alias for stopSequences

    stopSequences?: string[]

    List of stop words to use when generating
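
    The effect of stop sequences can be sketched as follows: generation halts at the first occurrence of any stop string, which is excluded from the returned text. The truncateAtStop helper is a hypothetical illustration, not library code:

```typescript
// Sketch (not library code): cut the generated text at the earliest
// occurrence of any stop sequence, excluding the stop string itself.
function truncateAtStop(text: string, stopSequences: string[]): string {
  let cut = text.length;
  for (const stop of stopSequences) {
    const idx = text.indexOf(stop);
    if (idx !== -1 && idx < cut) cut = idx;
  }
  return text.slice(0, cut);
}

console.log(truncateAtStop("Answer: socks\nQuestion: next", ["\nQuestion:"]));
// → "Answer: socks"
```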

    streaming: boolean = false

    Whether to stream the results or not. Enabling streaming disables tokenUsage reporting
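
    With streaming enabled the caller consumes text chunks as they arrive and assembles the output itself (since aggregate token usage is not reported). The fakeStream generator below stands in for the model's stream, so the sketch runs without an API key:

```typescript
// Sketch: consuming a token stream chunk by chunk. fakeStream stands in
// for the model's streamed output; each yield is a partial text chunk.
async function* fakeStream(): AsyncGenerator<string> {
  for (const chunk of ["Rain", "bow", " Socks", " Co."]) yield chunk;
}

let output = "";
for await (const chunk of fakeStream()) {
  output += chunk; // append each partial chunk as it arrives
}
console.log(output); // → "Rainbow Socks Co."
```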

    temperature?: number

    Sampling temperature to use

    timeout?: number

    Timeout to use when making requests to OpenAI.

    topP?: number

    Total probability mass of tokens to consider at each step

    user?: string

    Unique string identifier representing your end-user, which can help OpenAI to monitor and detect abuse.

    Accessors

    • get callKeys(): any[]

      Returns any[]

    • get lc_aliases(): Record<string, string>

      Returns Record<string, string>

    • get lc_secrets(): undefined | { [key: string]: string }

      Returns undefined | { [key: string]: string }

    Methods

    • Call out to OpenAI's endpoint with k unique prompts

      Parameters

      • Optional prompts: string[]

        The prompts to pass into the model.

      • Optional options: unknown

        Optional list of stop words to use when generating.

      • Optional runManager: any

        Optional callback manager to use when generating.

      Returns Promise<LLMResult>

      The full LLM output.

      import { OpenAI } from "langchain/llms/openai";

      const openai = new OpenAI();
      const response = await openai.generate(["Tell me a joke."]);

    • Calls the OpenAI API with retry logic in case of failures.

      Parameters

      • options: undefined | RequestOptions

        Optional configuration for the API call.

      Returns RequestOptions

      The response from the OpenAI API.
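
      The documented retry behavior can be sketched as a generic retry-with-backoff wrapper around the API call. The delays, attempt counts, and withRetry helper below are illustrative assumptions, not the library's actual policy:

```typescript
// Sketch of retry-with-exponential-backoff around a failing async call.
// baseDelayMs is 0 here so the sketch runs instantly; real code waits.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxRetries: number,
  baseDelayMs = 0
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Exponential backoff: 1x, 2x, 4x ... the base delay.
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}

// A flaky call that fails twice, then succeeds on the third attempt.
let calls = 0;
const flaky = async () => {
  calls++;
  if (calls < 3) throw new Error("transient failure");
  return "ok";
};

const result = await withRetry(flaky, 5);
console.log(result, calls); // → ok 3
```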

    • Returns string

    • Parameters

      • input: string
      • options: unknown
      • Optional runManager: any

      Returns AsyncGenerator<GenerationChunk>

    • Calls the OpenAI API with retry logic in case of failures.

      Parameters

      • request: CompletionCreateParamsStreaming

        The request to send to the OpenAI API.

      • Optional options: RequestOptions

        Optional configuration for the API call.

      Returns Promise<AsyncIterable<Completion, any, any>>

      The response from the OpenAI API.

    • Calls the OpenAI API with retry logic in case of failures.

      Parameters

      • request: CompletionCreateParamsNonStreaming

        The request to send to the OpenAI API.

      • Optional options: RequestOptions

        Optional configuration for the API call.

      Returns Promise<Completion>

      The response from the OpenAI API.

    • Get the identifying parameters for the model

      Returns Omit<CompletionCreateParams, "prompt"> & { model_name: string } & ClientOptions

    • Get the parameters used to invoke the model

      Parameters

      • Optional options: unknown

      Returns Omit<OpenAIClient.CompletionCreateParams, "prompt">

    • Returns string