Wrapper around OpenAI large language models.
To use, you should have the openai package installed and the
OPENAI_API_KEY environment variable set.
To use with Azure, import the AzureOpenAI class.
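For example, a minimal Azure setup might look like the following sketch; the import path, instance name, deployment name, and API version shown here are placeholders that depend on your langchain version and Azure resource:

import { AzureOpenAI } from "@langchain/openai";

const azureModel = new AzureOpenAI({
  azureOpenAIApiKey: "YOUR_AZURE_KEY", // or set the AZURE_OPENAI_API_KEY env var
  azureOpenAIApiInstanceName: "my-instance", // placeholder resource name
  azureOpenAIApiDeploymentName: "my-deployment", // placeholder deployment name
  azureOpenAIApiVersion: "2024-02-01", // example API version
  temperature: 0.7,
});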
class AzureOpenAI
Internal method that handles batching and configuration for a runnable.
Create a unique cache key for a specific call to a specific language model.
Default streaming implementation.
Assigns new fields to the dict output of this runnable. Returns a new runnable.
Convert a runnable to a tool. Returns a new instance of RunnableToolLike.
Default implementation of batch, which calls invoke N times.
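For example, a minimal sketch of batching several prompts at once, assuming the model instance from the usage example below:

// Each input is invoked separately; results come back in the same order.
const completions = await model.batch([
  "Suggest a name for a sock company:",
  "Suggest a name for a hat company:",
]);
console.log(completions); // one string completion per input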
Generates chat based on the input messages.
Generates a prompt based on the input prompt values.
Get the number of tokens in the content.
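For example, to count tokens before sending a prompt (a sketch assuming the model instance from the usage example below):

const numTokens = await model.getNumTokens("How many tokens is this text?");
console.log(numTokens);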
Get the identifying parameters for the model
Get the parameters used to invoke the model
Invokes the chat model with a single input.
Pick keys from the dict output of this runnable. Returns a new runnable.
Create a new runnable sequence that runs each individual runnable in series,
piping the output of one runnable into another runnable or runnable-like.
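For example, a minimal sketch of piping a prompt template into the model; the import path assumes a recent @langchain/core package:

import { PromptTemplate } from "@langchain/core/prompts";

const prompt = PromptTemplate.fromTemplate(
  "What is a good name for a company that makes {product}?"
);
const chain = prompt.pipe(model); // the formatted prompt feeds the model
const companyName = await chain.invoke({ product: "colorful socks" });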
Stream output in chunks.
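For example, a sketch of consuming the stream, assuming the model instance from the usage example below; for this LLM each chunk is a string:

const stream = await model.stream("Write a haiku about socks.");
for await (const chunk of stream) {
  process.stdout.write(chunk);
}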
Generate a stream of events emitted by the internal steps of the runnable.
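For example, a sketch of filtering for token events; the event schema version ("v2") and the event names depend on your langchain version:

const events = model.streamEvents("Tell me a joke.", { version: "v2" });
for await (const event of events) {
  if (event.event === "on_llm_stream") {
    console.log(event.data.chunk);
  }
}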
Stream all output from a runnable, as reported to the callback system.
Default implementation of transform, which buffers input and then calls stream.
Bind config to a Runnable, returning a new Runnable.
Create a new runnable from the current one that will try invoking the
current runnable, and if it fails, invoke the provided fallback runnables in order.
Bind lifecycle listeners to a Runnable, returning a new Runnable.
Add retry logic to an existing runnable.
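For example, a sketch combining retries with a fallback model; the exact withFallbacks signature varies across langchain versions, and the fallback model name here is only illustrative:

const resilientModel = model
  .withRetry({ stopAfterAttempt: 3 })
  .withFallbacks({
    fallbacks: [new OpenAI({ modelName: "gpt-3.5-turbo-instruct" })],
  });
const slogan = await resilientModel.invoke("Suggest a sock slogan:");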
Add structured output to the model.
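Structured output is most commonly used with chat models rather than this completions wrapper; a hedged sketch using ChatOpenAI and a zod schema:

import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

const chatModel = new ChatOpenAI({ modelName: "gpt-4" });
const structured = chatModel.withStructuredOutput(
  z.object({
    name: z.string().describe("suggested company name"),
    reason: z.string().describe("why the name fits"),
  })
);
const parsed = await structured.invoke("Name a company that makes socks.");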
The name of the serializable. Override to provide an alias or to preserve
the serialized module name in minified environments.
import { OpenAI } from "@langchain/openai"; // older versions: "langchain/llms/openai"

const model = new OpenAI({
  modelName: "gpt-4",
  temperature: 0.7,
  maxTokens: 1000,
  maxRetries: 5,
});
const res = await model.invoke(
  "Question: What would be a good company name for a company that makes colorful socks?\nAnswer:"
);
console.log({ res });

API key to use when making requests to OpenAI. Defaults to the value of the
OPENAI_API_KEY environment variable.
Batch size to use when passing multiple documents to generate.
Generates bestOf completions server side and returns the "best" (the one
with the highest log probability per token).
The async caller should be used by subclasses to make any async calls, which will thus benefit from the concurrency and retry logic.
Penalizes repeated tokens according to frequency.
A path to the module that contains the class, e.g., ["langchain", "llms"]. Usually should be the same as the entrypoint the class is exported from.
Dictionary used to adjust the probability of specific tokens being generated
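For example, a sketch of biasing generation away from a particular token; the token id below is a placeholder, since real ids come from the model's tokenizer:

const biasedModel = new OpenAI({
  logitBias: { "50256": -100 }, // strongly discourage this (placeholder) token id
});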
Maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the model's maximum context size.
Model name to use
Holds any additional parameters that are valid to pass to openai.createCompletion that are not explicitly specified on this class.
Number of completions to generate for each prompt
API key to use when making requests to OpenAI. Defaults to the value of the
OPENAI_API_KEY environment variable. Alias for apiKey.
Penalizes repeated tokens
List of stop words to use when generating. Alias for stopSequences.
List of stop words to use when generating
Whether to stream the results or not. Enabling disables tokenUsage reporting
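For example, a sketch of streaming with a token callback; with streaming enabled, tokens arrive through handleLLMNewToken and tokenUsage is not reported:

const streamingModel = new OpenAI({
  streaming: true,
  callbacks: [
    {
      handleLLMNewToken(token: string) {
        process.stdout.write(token); // tokens arrive as they are generated
      },
    },
  ],
});
await streamingModel.invoke("Tell me a story about a brave sock.");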
Sampling temperature to use
Timeout to use when making requests to OpenAI.
Total probability mass of tokens to consider at each step
Unique string identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
Whether to print out response text.
Any parameters that are valid to be passed to openai.createCompletion can be
passed through modelKwargs, even if not explicitly available on this class.
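For example, a sketch of forwarding an extra completion parameter; logprobs is a valid openai.createCompletion parameter that is not surfaced on this class:

const modelWithExtras = new OpenAI({
  modelName: "gpt-3.5-turbo-instruct",
  modelKwargs: { logprobs: 1 }, // forwarded untouched to createCompletion
});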