langchain.js

    Interface XAIResponsesCreateParamsNonStreaming

    Non-streaming variant of the request params.

    interface XAIResponsesCreateParamsNonStreaming {
        background?: boolean;
        include?: "reasoning.encrypted_content"[];
        input: XAIResponsesInput;
        instructions?: string;
        logprobs?: boolean;
        max_output_tokens?: number;
        metadata?: Record<string, unknown>;
        model?: string;
        parallel_tool_calls?: boolean;
        previous_response_id?: string;
        reasoning?: XAIResponsesReasoning;
        search_parameters?: XAIResponsesSearchParameters;
        service_tier?: string;
        store?: boolean;
        stream?: false;
        temperature?: number;
        text?: XAIResponsesText;
        tool_choice?: XAIResponsesToolChoice;
        tools?: XAIResponsesTool[];
        top_logprobs?: number;
        top_p?: number;
        truncation?: string;
        user?: string;
    }
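A minimal usage sketch of the shape above. The local type aliases and the model name are simplified stand-ins (the real `XAIResponsesInput` and related types are not reproduced here); only the property names and the `stream: false` constraint come from the interface.

```typescript
// Simplified local stand-in for XAIResponsesInput (assumption, not the real type).
type XAIResponsesInput = string | Array<{ role: string; content: string }>;

// Subset of XAIResponsesCreateParamsNonStreaming used in this sketch.
interface CreateParamsNonStreaming {
  input: XAIResponsesInput;
  model?: string;
  temperature?: number;
  max_output_tokens?: number;
  stream?: false; // non-streaming variant: stream may only be false
}

const params: CreateParamsNonStreaming = {
  model: "grok-3", // hypothetical model name from the xAI console
  input: [{ role: "user", content: "Hello!" }],
  temperature: 0.7,
  max_output_tokens: 256,
  stream: false,
};
```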


    Properties

    background?: boolean

    Whether to process the response asynchronously in the background. Note: Unsupported.

    Default: false
    
    include?: "reasoning.encrypted_content"[]

    What additional output data to include in the response. Currently supported: reasoning.encrypted_content.

    input: XAIResponsesInput

    The input passed to the model. Can be text (string) or an array of message objects.

    instructions?: string

    An alternate way to specify the system prompt. Cannot be used with previous_response_id.
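A hypothetical client-side guard encoding the documented constraint that `instructions` cannot be combined with `previous_response_id` (the function name and inline type are illustrative, not part of the library):

```typescript
// Throws if the mutually exclusive parameters are both present.
function validateInstructions(params: {
  instructions?: string;
  previous_response_id?: string;
}): void {
  if (
    params.instructions !== undefined &&
    params.previous_response_id !== undefined
  ) {
    throw new Error(
      "`instructions` cannot be used with `previous_response_id`",
    );
  }
}
```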

    logprobs?: boolean

    Whether to return log probabilities of the output tokens.

    Default: false
    
    max_output_tokens?: number

    Maximum number of tokens that can be generated. Includes both output and reasoning tokens.

    metadata?: Record<string, unknown>

    Metadata for the request. Note: Not supported. Maintained for compatibility.

    model?: string

    Name of the model to use (e.g., as listed in the xAI console).

    parallel_tool_calls?: boolean

    Whether to allow the model to run parallel tool calls.

    Default: true
    
    previous_response_id?: string

    The ID of the previous response from the model. Use this to create multi-turn conversations.
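A sketch of the multi-turn chaining pattern. The response ID and the shape of the objects are hypothetical; only the use of `store` and `previous_response_id` follows the documented behavior.

```typescript
// First turn: store the response so its ID can be referenced later.
const firstTurn = {
  input: "What is the capital of France?",
  store: true, // the response must be stored for its ID to be reusable
};

// Suppose the first call returned a response whose id is "resp_abc123"
// (hypothetical value). The second turn chains onto it:
const secondTurn = {
  input: "And its population?",
  previous_response_id: "resp_abc123",
};
```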

    reasoning?: XAIResponsesReasoning

    Reasoning configuration. Only for reasoning models.

    search_parameters?: XAIResponsesSearchParameters

    Set parameters for searched data. Takes precedence over web_search_preview tool.

    service_tier?: string

    Service tier for the request. Note: Not supported. Maintained for compatibility.

    store?: boolean

    Whether to store the input message(s) and response.

    Default: true
    
    stream?: false

    If set to true, partial message deltas would be sent as server-sent events. In this non-streaming variant, stream must be false or omitted.

    Default: false
    
    temperature?: number

    Sampling temperature between 0 and 2. Higher values make output more random, lower values more deterministic.

    Default: 1
    

    text?: XAIResponsesText

    Settings for customizing a text response.

    tool_choice?: XAIResponsesToolChoice

    Controls which tool is called by the model.

    tools?: XAIResponsesTool[]

    A list of tools the model may call. Maximum of 128 tools.

    top_logprobs?: number

    Number of most likely tokens to return at each token position. Range: 0-8. Requires logprobs to be true.
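A hypothetical guard encoding the two documented rules for this parameter: it requires `logprobs` to be true, and its value must lie in the range 0-8 (the function name is illustrative, not part of the library):

```typescript
// Validates top_logprobs against the documented constraints.
function validateTopLogprobs(params: {
  logprobs?: boolean;
  top_logprobs?: number;
}): void {
  if (params.top_logprobs === undefined) return;
  if (params.logprobs !== true) {
    throw new Error("top_logprobs requires logprobs to be true");
  }
  if (params.top_logprobs < 0 || params.top_logprobs > 8) {
    throw new Error("top_logprobs must be between 0 and 8");
  }
}
```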

    top_p?: number

    Nucleus sampling probability mass. The model considers only the tokens comprising the top_p probability mass.

    Default: 1
    
    truncation?: string

    Truncation strategy. Note: Not supported. Maintained for compatibility.

    user?: string

    Unique identifier representing your end-user. Used for monitoring and abuse detection.