langchain.js

    Interface XAIResponse

    xAI Responses API response body.

    interface XAIResponse {
        background?: null | boolean;
        created_at: number;
        debug_output?: null | XAIResponsesDebugOutput;
        id: string;
        incomplete_details?: null | XAIResponsesIncompleteDetails;
        max_output_tokens?: null | number;
        metadata?: null | Record<string, unknown>;
        model: string;
        object: "response";
        output: XAIResponsesOutputItem[];
        parallel_tool_calls?: boolean;
        previous_response_id?: null | string;
        reasoning?: null | XAIResponsesReasoningResponse;
        status: XAIResponsesStatus;
        store?: boolean;
        temperature?: null | number;
        text?: XAIResponsesText;
        tool_choice?: XAIResponsesToolChoice;
        tools?: XAIResponsesTool[];
        top_p?: null | number;
        usage?: null | XAIResponsesUsage;
        user?: null | string;
    }
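    For illustration, the sketch below reads only the fields declared above to build a short summary of a parsed response body. It is a minimal sketch, not part of the library: the import path is a hypothetical assumption, and no assumptions are made about the shape of individual output items.

    import type { XAIResponse } from "@langchain/xai"; // hypothetical import path

    // Summarize the top-level fields of a parsed xAI Responses API body.
    function summarizeResponse(response: XAIResponse): string {
        // created_at is a Unix timestamp in seconds, so convert to milliseconds.
        const created = new Date(response.created_at * 1000).toISOString();
        const lines = [
            `id: ${response.id}`,
            `model: ${response.model}`,
            `status: ${response.status}`,
            `created: ${created}`,
            `output items: ${response.output.length}`,
        ];
        // incomplete_details is only populated when the response did not finish.
        if (response.incomplete_details != null) {
            lines.push(`incomplete: ${JSON.stringify(response.incomplete_details)}`);
        }
        // usage may be null or omitted depending on the request.
        if (response.usage != null) {
            lines.push(`usage: ${JSON.stringify(response.usage)}`);
        }
        return lines.join("\n");
    }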

    Properties

    background?: null | boolean

    Whether to process the response asynchronously in the background. Note: Unsupported.

    Default: false

    created_at: number

    The Unix timestamp (in seconds) for the response creation time.

    debug_output?: null | XAIResponsesDebugOutput

    Debug output information (when available).

    id: string

    Unique ID of the response.

    incomplete_details?: null | XAIResponsesIncompleteDetails

    Details about why the response is incomplete (if status is "incomplete").

    max_output_tokens?: null | number

    Maximum number of tokens that can be generated, including both output and reasoning tokens.

    metadata?: null | Record<string, unknown>

    Only included for compatibility.

    model: string

    Model name used to generate the response.

    object: "response"

    The object type of this resource. Always set to "response".

    output: XAIResponsesOutputItem[]

    The response generated by the model.

    parallel_tool_calls?: boolean

    Whether to allow the model to run parallel tool calls.

    previous_response_id?: null | string

    The ID of the previous response from the model.

    reasoning?: null | XAIResponsesReasoningResponse

    Reasoning configuration used for the response.

    status: XAIResponsesStatus

    Status of the response.

    store?: boolean

    Whether to store the input message(s) and response.

    Default: true

    temperature?: null | number

    Sampling temperature used (between 0 and 2).

    Default: 1

    text?: XAIResponsesText

    Settings for customizing a text response.

    tool_choice?: XAIResponsesToolChoice

    Controls which tool is called by the model.

    tools?: XAIResponsesTool[]

    A list of tools the model may call. Maximum of 128 tools.

    top_p?: null | number

    Nucleus sampling probability mass used.

    Default: 1

    usage?: null | XAIResponsesUsage

    Token usage information.

    user?: null | string

    Unique identifier representing your end-user. Used for monitoring and abuse detection.
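
    As a follow-up to the interface above, the sketch below guards against consuming an unfinished response. It relies only on behaviour documented on this page (incomplete_details is populated when status is "incomplete"); the import path is again a hypothetical assumption.

    import type { XAIResponse } from "@langchain/xai"; // hypothetical import path

    // Return the output items, throwing if the model stopped early.
    function getCompletedOutput(response: XAIResponse) {
        if (response.status === "incomplete" || response.incomplete_details != null) {
            const details = response.incomplete_details ?? { status: response.status };
            throw new Error(
                `Response ${response.id} did not complete: ${JSON.stringify(details)}`
            );
        }
        return response.output;
    }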