langchain.js

    Interface ChatOpenAIResponsesCallOptions

    interface ChatOpenAIResponsesCallOptions {
        audio?: ChatCompletionAudioParam;
        include?: null | ResponseIncludable[];
        modalities?: ChatCompletionModality[];
        options?: RequestOptions;
        parallel_tool_calls?: boolean;
        prediction?: ChatCompletionPredictionContent;
        previous_response_id?: null | string;
        promptCacheKey?: string;
        promptIndex?: number;
        reasoning?: Reasoning;
        response_format?: ChatOpenAIResponseFormat;
        seed?: number;
        service_tier?: null | "auto" | "default" | "flex" | "scale" | "priority";
        stream_options?: ChatCompletionStreamOptions;
        strict?: boolean;
        text?: ResponseTextConfig;
        tool_choice?:
            | string
            | ChatCompletionAllowedToolChoice
            | ChatCompletionNamedToolChoice
            | ChatCompletionNamedToolChoiceCustom
            | ToolChoiceAllowed
            | ToolChoiceTypes
            | ToolChoiceFunction
            | ToolChoiceMcp
            | ToolChoiceCustom;
        tools?: any[];
        truncation?: null | "auto" | "disabled";
        verbosity?: OpenAIVerbosityParam;
    }


    Properties

    audio?: ChatCompletionAudioParam

    Parameters for audio output. Required when audio output is requested with modalities: ["audio"].

    include?: null | ResponseIncludable[]

    Specify additional output data to include in the model response.

    modalities?: ChatCompletionModality[]

    Output types that you would like the model to generate for this request. Most models are capable of generating text, which is the default:

    ["text"]

    The gpt-4o-audio-preview model can also be used to generate audio. To request that this model generate both text and audio responses, you can use:

    ["text", "audio"]
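
    A hedged sketch of requesting text plus audio output. The voice and format values, and the placeholder API key fallback, are illustrative; check the OpenAI audio guide for the currently supported options.

    ```typescript
    import { ChatOpenAI } from "@langchain/openai";

    // Illustrative sketch: request both text and audio output.
    // The `audio` parameter is required whenever modalities includes "audio".
    const model = new ChatOpenAI({
      model: "gpt-4o-audio-preview",
      // Placeholder fallback so the example constructs without env setup.
      apiKey: process.env.OPENAI_API_KEY ?? "sk-placeholder",
    });

    const audioOptions = {
      modalities: ["text", "audio"],
      audio: { voice: "alloy", format: "wav" },
    };

    // Actual network call left to the caller:
    // const res = await model.invoke("Say hello out loud.", audioOptions);
    ```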

    options?: RequestOptions

    Additional options to pass to the underlying axios request.

    parallel_tool_calls?: boolean

    Whether the model may call multiple functions in a single turn. Set parallel_tool_calls to false to ensure that at most one tool is called per turn.

    prediction?: ChatCompletionPredictionContent

    Static predicted output content, such as the content of a text file that is being regenerated.

    previous_response_id?: null | string

    The unique ID of the previous response to the model. Use this to create multi-turn conversations.
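
    A hedged sketch of a multi-turn exchange. The `useResponsesApi` constructor flag and the `response_metadata.id` field are assumptions based on current @langchain/openai releases; verify them against your installed version.

    ```typescript
    import { ChatOpenAI } from "@langchain/openai";

    // Sketch: chain turns over the Responses API by passing the previous
    // response's ID instead of resending the full message history.
    const model = new ChatOpenAI({
      model: "gpt-4o",
      useResponsesApi: true, // assumed flag; routes requests to the responses API
      apiKey: process.env.OPENAI_API_KEY ?? "sk-placeholder",
    });

    // const first = await model.invoke("My name is Ada.");
    // const followUp = await model.invoke("What is my name?", {
    //   // Link this turn to the previous response:
    //   previous_response_id: first.response_metadata.id,
    // });
    ```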

    promptCacheKey?: string

    Used by OpenAI to cache responses for similar requests and optimize your cache hit rates. Replaces the user field.

    promptIndex?: number

    Adds an index to prompts passed to the model, to track which prompt is being used for a given generation.

    reasoning?: Reasoning

    Options for reasoning models.

    Note that some options, like reasoning summaries, are only available when using the responses API. If these options are set, the responses API will be used to fulfill the request.

    These options will be ignored when not using a reasoning model.
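
    A hedged sketch of requesting a reasoning summary from a reasoning model. The model name and the effort/summary values are illustrative; since summaries are only available on the responses API, setting these options routes the request through it.

    ```typescript
    import { ChatOpenAI } from "@langchain/openai";

    // Sketch: ask an o-series reasoning model for a reasoning summary.
    const model = new ChatOpenAI({
      model: "o4-mini", // illustrative reasoning model
      apiKey: process.env.OPENAI_API_KEY ?? "sk-placeholder",
    });

    const reasoningOptions = {
      reasoning: { effort: "medium", summary: "auto" },
    };

    // const res = await model.invoke("Outline a 3-step proof.", reasoningOptions);
    ```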

    response_format?: ChatOpenAIResponseFormat

    An object specifying the format that the model must output.

    seed?: number

    When provided, the completions API will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result.
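
    A hedged sketch of pinning a seed for best-effort reproducibility. The prompt and seed value are arbitrary, and determinism is best-effort rather than guaranteed.

    ```typescript
    import { ChatOpenAI } from "@langchain/openai";

    // Sketch: repeated requests with the same seed and identical parameters
    // should usually return the same result.
    const model = new ChatOpenAI({
      model: "gpt-4o",
      temperature: 0,
      apiKey: process.env.OPENAI_API_KEY ?? "sk-placeholder",
    });

    // const a = await model.invoke("Pick a random word.", { seed: 42 });
    // const b = await model.invoke("Pick a random word.", { seed: 42 });
    // a.content and b.content should usually match, though not guaranteed.
    ```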

    service_tier?: null | "auto" | "default" | "flex" | "scale" | "priority"

    The service tier to use for this request, which controls prioritization and latency optimization. One of "auto", "default", "flex", "scale", or "priority".

    stream_options?: ChatCompletionStreamOptions

    Additional options to pass to streamed completions. If provided, this takes precedence over "streamUsage" set at initialization time.

    strict?: boolean

    If true, model output is guaranteed to exactly match the JSON Schema provided in the tool definition, and the input schema will also be validated according to https://platform.openai.com/docs/guides/structured-outputs/supported-schemas.

    If false, the input schema will not be validated and model output will not be validated.

    If undefined, the strict argument will not be passed to the model.

    text?: ResponseTextConfig

    Configuration options for a text response from the model. Can be plain text or structured JSON data.

    tool_choice?:
        | string
        | ChatCompletionAllowedToolChoice
        | ChatCompletionNamedToolChoice
        | ChatCompletionNamedToolChoiceCustom
        | ToolChoiceAllowed
        | ToolChoiceTypes
        | ToolChoiceFunction
        | ToolChoiceMcp
        | ToolChoiceCustom

    Specifies which tool the model should use to respond. Can be an OpenAIToolChoice or a ResponsesToolChoice. If not set, the model will decide which tool to use automatically.
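
    A hedged sketch of forcing a specific tool by name. The get_weather tool is hypothetical; tool_choice can also be "auto", "none", or "required".

    ```typescript
    import { ChatOpenAI } from "@langchain/openai";
    import { tool } from "@langchain/core/tools";
    import { z } from "zod";

    // Hypothetical tool for illustration.
    const getWeather = tool(async ({ city }) => `Sunny in ${city}`, {
      name: "get_weather",
      description: "Look up the weather for a city.",
      schema: z.object({ city: z.string() }),
    });

    const model = new ChatOpenAI({
      model: "gpt-4o",
      apiKey: process.env.OPENAI_API_KEY ?? "sk-placeholder",
    });

    // Force the model to call get_weather rather than choosing freely.
    const withTools = model.bindTools([getWeather], {
      tool_choice: "get_weather",
    });

    // const res = await withTools.invoke("What's the weather in Paris?");
    // res.tool_calls should then contain a get_weather call.
    ```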

    tools?: any[]

    A list of tools that the model may use to generate responses. Each tool can be a function, a built-in tool, or a custom tool definition. If not provided, the model will not use any tools.

    truncation?: null | "auto" | "disabled"

    The truncation strategy to use for the model response.

    verbosity?: OpenAIVerbosityParam

    The verbosity of the model's response.