Azure OpenAI chat model integration.
Setup:
Install @langchain/openai and set the following environment variables:
npm install @langchain/openai
export AZURE_OPENAI_API_KEY="your-api-key"
export AZURE_OPENAI_API_DEPLOYMENT_NAME="your-deployment-name"
export AZURE_OPENAI_API_VERSION="your-version"
export AZURE_OPENAI_BASE_PATH="your-base-path"
Runtime args can be passed as the second argument to any of the base runnable methods: .invoke, .stream, .batch, etc.
They can also be passed via .withConfig, or as the second argument to .bindTools, as shown in the examples below:
// When calling `.withConfig`, call options should be passed via the first argument
const llmWithArgsBound = llm.withConfig({
stop: ["\n"],
tools: [...],
});
// When calling `.bindTools`, call options should be passed via the second argument
const llmWithTools = llm.bindTools(
[...],
{
tool_choice: "auto",
}
);
import { AzureChatOpenAI } from '@langchain/openai';
const llm = new AzureChatOpenAI({
azureOpenAIApiKey: process.env.AZURE_OPENAI_API_KEY, // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY
azureOpenAIApiInstanceName: process.env.AZURE_OPENAI_API_INSTANCE_NAME, // In Node.js defaults to process.env.AZURE_OPENAI_API_INSTANCE_NAME
azureOpenAIApiDeploymentName: process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME, // In Node.js defaults to process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME
azureOpenAIApiVersion: process.env.AZURE_OPENAI_API_VERSION, // In Node.js defaults to process.env.AZURE_OPENAI_API_VERSION
temperature: 0,
maxTokens: undefined,
timeout: undefined,
maxRetries: 2,
// apiKey: "...",
// baseUrl: "...",
// other params...
});
const input = `Translate "I love programming" into French.`;
// Models also accept a list of chat messages or a formatted prompt
const result = await llm.invoke(input);
console.log(result);
AIMessage {
"id": "chatcmpl-9u4Mpu44CbPjwYFkTbeoZgvzB00Tz",
"content": "J'adore la programmation.",
"response_metadata": {
"tokenUsage": {
"completionTokens": 5,
"promptTokens": 28,
"totalTokens": 33
},
"finish_reason": "stop",
"system_fingerprint": "fp_3aa7262c27"
},
"usage_metadata": {
"input_tokens": 28,
"output_tokens": 5,
"total_tokens": 33
}
}
for await (const chunk of await llm.stream(input)) {
console.log(chunk);
}
AIMessageChunk {
"id": "chatcmpl-9u4NWB7yUeHCKdLr6jP3HpaOYHTqs",
"content": ""
}
AIMessageChunk {
"content": "J"
}
AIMessageChunk {
"content": "'adore"
}
AIMessageChunk {
"content": " la"
}
AIMessageChunk {
"content": " programmation"
}
AIMessageChunk {
"content": "."
}
AIMessageChunk {
"content": "",
"response_metadata": {
"finish_reason": "stop",
"system_fingerprint": "fp_c9aa9c0491"
},
}
AIMessageChunk {
"content": "",
"usage_metadata": {
"input_tokens": 28,
"output_tokens": 5,
"total_tokens": 33
}
}
import { AIMessageChunk } from '@langchain/core/messages';
import { concat } from '@langchain/core/utils/stream';
const stream = await llm.stream(input);
let full: AIMessageChunk | undefined;
for await (const chunk of stream) {
full = !full ? chunk : concat(full, chunk);
}
console.log(full);
AIMessageChunk {
"id": "chatcmpl-9u4PnX6Fy7OmK46DASy0bH6cxn5Xu",
"content": "J'adore la programmation.",
"response_metadata": {
"prompt": 0,
"completion": 0,
"finish_reason": "stop",
},
"usage_metadata": {
"input_tokens": 28,
"output_tokens": 5,
"total_tokens": 33
}
}
import { z } from 'zod';
const GetWeather = {
name: "GetWeather",
description: "Get the current weather in a given location",
schema: z.object({
location: z.string().describe("The city and state, e.g. San Francisco, CA")
}),
}
const GetPopulation = {
name: "GetPopulation",
description: "Get the current population in a given location",
schema: z.object({
location: z.string().describe("The city and state, e.g. San Francisco, CA")
}),
}
const llmWithTools = llm.bindTools([GetWeather, GetPopulation]);
const aiMsg = await llmWithTools.invoke(
"Which city is hotter today and which is bigger: LA or NY?"
);
console.log(aiMsg.tool_calls);
[
{
name: 'GetWeather',
args: { location: 'Los Angeles, CA' },
type: 'tool_call',
id: 'call_uPU4FiFzoKAtMxfmPnfQL6UK'
},
{
name: 'GetWeather',
args: { location: 'New York, NY' },
type: 'tool_call',
id: 'call_UNkEwuQsHrGYqgDQuH9nPAtX'
},
{
name: 'GetPopulation',
args: { location: 'Los Angeles, CA' },
type: 'tool_call',
id: 'call_kL3OXxaq9OjIKqRTpvjaCH14'
},
{
name: 'GetPopulation',
args: { location: 'New York, NY' },
type: 'tool_call',
id: 'call_s9KQB1UWj45LLGaEnjz0179q'
}
]
import { z } from 'zod';
const Joke = z.object({
setup: z.string().describe("The setup of the joke"),
punchline: z.string().describe("The punchline to the joke"),
rating: z.number().nullable().describe("How funny the joke is, from 1 to 10")
}).describe('Joke to tell user.');
const structuredLlm = llm.withStructuredOutput(Joke, { name: "Joke" });
const jokeResult = await structuredLlm.invoke("Tell me a joke about cats");
console.log(jokeResult);
{
setup: 'Why was the cat sitting on the computer?',
punchline: 'Because it wanted to keep an eye on the mouse!',
rating: 7
}
const jsonLlm = llm.withConfig({ response_format: { type: "json_object" } });
const jsonLlmAiMsg = await jsonLlm.invoke(
"Return a JSON object with key 'randomInts' and a value of 10 random ints in [0-99]"
);
console.log(jsonLlmAiMsg.content);
{
"randomInts": [23, 87, 45, 12, 78, 34, 56, 90, 11, 67]
}
import { HumanMessage } from '@langchain/core/messages';
const imageUrl = "https://example.com/image.jpg";
const imageData = await fetch(imageUrl).then(res => res.arrayBuffer());
const base64Image = Buffer.from(imageData).toString('base64');
const message = new HumanMessage({
content: [
{ type: "text", text: "describe the weather in this image" },
{
type: "image_url",
image_url: { url: `data:image/jpeg;base64,${base64Image}` },
},
]
});
const imageDescriptionAiMsg = await llm.invoke([message]);
console.log(imageDescriptionAiMsg.content);
The weather in the image appears to be clear and sunny. The sky is mostly blue with a few scattered white clouds, indicating fair weather. The bright sunlight is casting shadows on the green, grassy hill, suggesting it is a pleasant day with good visibility. There are no signs of rain or stormy conditions.
const aiMsgForMetadata = await llm.invoke(input);
console.log(aiMsgForMetadata.usage_metadata);
{ input_tokens: 28, output_tokens: 5, total_tokens: 33 }
const logprobsLlm = new AzureChatOpenAI({ logprobs: true });
const aiMsgForLogprobs = await logprobsLlm.invoke(input);
console.log(aiMsgForLogprobs.response_metadata.logprobs);
{
content: [
{
token: 'J',
logprob: -0.000050616763,
bytes: [Array],
top_logprobs: []
},
{
token: "'",
logprob: -0.01868736,
bytes: [Array],
top_logprobs: []
},
{
token: 'ad',
logprob: -0.0000030545007,
bytes: [Array],
top_logprobs: []
},
{ token: 'ore', logprob: 0, bytes: [Array], top_logprobs: [] },
{
token: ' la',
logprob: -0.515404,
bytes: [Array],
top_logprobs: []
},
{
token: ' programm',
logprob: -0.0000118755715,
bytes: [Array],
top_logprobs: []
},
{ token: 'ation', logprob: 0, bytes: [Array], top_logprobs: [] },
{
token: '.',
logprob: -0.0000037697225,
bytes: [Array],
top_logprobs: []
}
],
refusal: null
}
const aiMsgForResponseMetadata = await llm.invoke(input);
console.log(aiMsgForResponseMetadata.response_metadata);
{
tokenUsage: { completionTokens: 5, promptTokens: 28, totalTokens: 33 },
finish_reason: 'stop',
system_fingerprint: 'fp_3aa7262c27'
}
Whether to include the raw OpenAI response in the output message's "additional_kwargs" field. Currently in experimental beta.
API key to use when making requests to OpenAI. Defaults to the value of
the OPENAI_API_KEY environment variable.
Parameters for audio output. Required when audio output is requested with
modalities: ["audio"].
A function that returns an access token for Microsoft Entra (formerly known as Azure Active Directory), which will be invoked on every request.
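For example, a minimal sketch of token-based auth using @azure/identity (the instance, deployment, and API version values are placeholders):
import { DefaultAzureCredential, getBearerTokenProvider } from "@azure/identity";
import { AzureChatOpenAI } from "@langchain/openai";
// Acquire a Microsoft Entra token for the Cognitive Services scope on each request
const credential = new DefaultAzureCredential();
const azureADTokenProvider = getBearerTokenProvider(
  credential,
  "https://cognitiveservices.azure.com/.default"
);
const entraLlm = new AzureChatOpenAI({
  azureADTokenProvider,
  azureOpenAIApiInstanceName: "your-instance-name",
  azureOpenAIApiDeploymentName: "your-deployment-name",
  azureOpenAIApiVersion: "2024-08-01-preview", // placeholder version
});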
Azure OpenAI API deployment name to use for completions when making requests to Azure OpenAI. This is the name of the deployment you created in the Azure portal, e.g. "my-openai-deployment". It will be used in the endpoint URL: https://{InstanceName}.openai.azure.com/openai/deployments/my-openai-deployment/
Azure OpenAI API instance name to use when making requests to Azure OpenAI. This is the name of the instance you created in the Azure portal, e.g. "my-openai-instance". It will be used in the endpoint URL: https://my-openai-instance.openai.azure.com/openai/deployments/{DeploymentName}/
API key to use when making requests to Azure OpenAI.
API version to use when making requests to Azure OpenAI.
Custom base URL for the Azure OpenAI API. This is useful if you have a deployment in another region, e.g. setting this value to "https://westeurope.api.cognitive.microsoft.com/openai/deployments" will result in the endpoint URL: https://westeurope.api.cognitive.microsoft.com/openai/deployments/{DeploymentName}/
Custom endpoint for the Azure OpenAI API. This is useful if you have a deployment in another region, e.g. setting this value to "https://westeurope.api.cognitive.microsoft.com/" will result in the endpoint URL: https://westeurope.api.cognitive.microsoft.com/openai/deployments/{DeploymentName}/
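For instance, a regional deployment could be configured like this (a sketch; the region, key, deployment name, and API version are placeholders):
const regionalLlm = new AzureChatOpenAI({
  azureOpenAIApiKey: "your-api-key",
  azureOpenAIBasePath:
    "https://westeurope.api.cognitive.microsoft.com/openai/deployments",
  azureOpenAIApiDeploymentName: "your-deployment-name",
  azureOpenAIApiVersion: "2024-08-01-preview", // placeholder version
});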
Penalizes repeated tokens according to frequency
Dictionary used to adjust the probability of specific tokens being generated
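As an illustration (token IDs are tokenizer-specific, so treat these as placeholders):
// Values range from -100 (effectively bans a token) to 100 (strongly prefers it)
const biasedLlm = new AzureChatOpenAI({
  logitBias: { "1234": -100, "5678": 10 },
});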
Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the content of the message.
Maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the model's maximum context size.
Output types that you would like the model to generate for this request. Most models are capable of generating text, which is the default: ["text"]. The gpt-4o-audio-preview model can also be used to generate audio. To request that this model generate both text and audio responses, you can use: ["text", "audio"].
Model name to use
Holds any additional parameters that are valid to pass to openai.createCompletion that are not explicitly specified on this class.
Number of completions to generate for each prompt
Penalizes repeated tokens
Used by OpenAI to cache responses for similar requests to optimize your cache hit rates.
Used by OpenAI to set cache retention time
Options for reasoning models.
Note that some options, like reasoning summaries, are only available when using the responses API. This option is ignored when not using a reasoning model.
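For example, a sketch with a reasoning-model deployment (the deployment name is a placeholder; per the note above, reasoning summaries require the responses API):
const reasoningLlm = new AzureChatOpenAI({
  azureOpenAIApiDeploymentName: "o4-mini", // assumed reasoning-model deployment
  reasoning: { effort: "medium", summary: "auto" },
  useResponsesApi: true,
});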
Service tier to use for this request. Can be "auto", "default", "flex", or "priority". Specifies the service tier for prioritization and latency optimization.
List of stop words to use when generating
Alias for stopSequences
List of stop words to use when generating
Whether to stream the results or not. Enabling disables tokenUsage reporting
Whether or not to include token usage data in streamed chunks.
Whether the model supports the strict argument when passing in tools.
If undefined, the strict argument will not be passed to OpenAI.
Sampling temperature to use
Timeout to use when making requests to OpenAI.
An integer between 0 and 5 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true if this parameter is used.
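For instance, pairing topLogprobs with logprobs (a small sketch):
// Return the 3 most likely alternatives at each token position
const topLogprobsLlm = new AzureChatOpenAI({ logprobs: true, topLogprobs: 3 });
const msg = await topLogprobsLlm.invoke(input);
// Each entry of response_metadata.logprobs.content now carries up to 3 top_logprobs candidates
console.log(msg.response_metadata.logprobs.content[0].top_logprobs);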
Total probability mass of tokens to consider at each step
Unique string identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
Whether to use the responses API for all requests. If false, the responses API will be used
only when required in order to fulfill the request.
The verbosity of the model's response.
Must be set to true in tenancies with Zero Data Retention. Setting to true will disable
output storage in the Responses API, but this DOES NOT enable Zero Data Retention in your
OpenAI organization or project. This must be configured directly with OpenAI.
See: https://platform.openai.com/docs/guides/your-data and https://platform.openai.com/docs/api-reference/responses/create#responses-create-store
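A sketch of this configuration, assuming the option is exposed as the zdrEnabled constructor field (verify the exact field name against the current API reference):
const zdrLlm = new AzureChatOpenAI({
  zdrEnabled: true, // assumed field name; disables output storage in Responses API requests
  useResponsesApi: true,
});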
Get the identifying parameters for the model
Moderate content using OpenAI's Moderation API.
This method checks whether content violates OpenAI's content policy by analyzing text for categories such as hate, harassment, self-harm, sexual content, violence, and more.
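A hypothetical usage sketch; the method name and signature below are assumptions, so consult the method listing for the exact API:
// Hypothetical: method name assumed for illustration only
const moderationResult = await llm.moderateContent("some user-provided text");
console.log(moderationResult);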
Add structured output to the model.
The OpenAI model family supports the following structured output methods:
jsonSchema: Use the response_format field in the request to return output matching a JSON schema. Only supported with the gpt-4o-mini, gpt-4o-mini-2024-07-18, and gpt-4o-2024-08-06 model snapshots and later.
functionCalling: Function calling is useful when you are building an application that bridges the models and functionality of your application.
jsonMode: JSON mode is a more basic version of the Structured Outputs feature. While JSON mode ensures that model output is valid JSON, Structured Outputs reliably matches the model's output to the schema you specify.
We recommend you use functionCalling or jsonSchema if it is supported for your use case. The default method is functionCalling.
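For example, to opt into the jsonSchema method explicitly (a sketch reusing the Joke schema from the structured output example above):
const jsonSchemaLlm = llm.withStructuredOutput(Joke, {
  name: "Joke",
  method: "jsonSchema", // instead of the default functionCalling
});
const jsonSchemaJoke = await jsonSchemaLlm.invoke("Tell me a joke about dogs");
console.log(jsonSchemaJoke);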