langchain.js

    Variable convertResponsesUsageToUsageMetadata (Const)

    convertResponsesUsageToUsageMetadata: (
        usage: OpenAIClient.Responses.ResponseUsage | undefined,
    ) => UsageMetadata = ...

    Converts OpenAI Responses API usage statistics to LangChain's UsageMetadata format.

    This converter transforms token usage information from OpenAI's Responses API into the standardized UsageMetadata format used throughout LangChain. It handles both basic token counts and detailed token breakdowns including cached tokens and reasoning tokens.

    Parameters

    usage: The usage statistics object from OpenAI's Responses API containing token counts and optional detailed breakdowns.
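
    For orientation, here is a rough sketch of the input shape, inferred only from the example and field descriptions on this page; the authoritative type is OpenAIClient.Responses.ResponseUsage from the openai SDK and may contain additional fields.

    // Approximate shape of the Responses API usage object (sketch only,
    // inferred from the fields documented on this page).
    interface ResponsesUsageLike {
      input_tokens?: number;
      output_tokens?: number;
      total_tokens?: number;
      input_tokens_details?: { cached_tokens?: number };
      output_tokens_details?: { reasoning_tokens?: number };
    }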

    Returns a UsageMetadata object with the following fields (a sketch of this mapping follows the list):

    • input_tokens: Total number of tokens in the input/prompt (defaults to 0 if not provided)
    • output_tokens: Total number of tokens in the model's output (defaults to 0 if not provided)
    • total_tokens: Combined total of input and output tokens (defaults to 0 if not provided)
    • input_token_details: Object containing detailed input token information:
      • cache_read: Number of tokens read from cache (only included if available)
    • output_token_details: Object containing detailed output token information:
      • reasoning: Number of tokens used for reasoning (only included if available)
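
    Below is a minimal sketch of that mapping, written against the ResponsesUsageLike shape sketched above. The name convertUsageSketch is a placeholder; the code illustrates the documented behaviour rather than reproducing the package's actual source, and it assumes UsageMetadata is imported from @langchain/core.

    import type { UsageMetadata } from "@langchain/core/messages";

    // Sketch only: counts default to 0, and the detail fields are spread in
    // only when the corresponding value is present on the input.
    const convertUsageSketch = (
      usage: ResponsesUsageLike | undefined
    ): UsageMetadata => {
      const cacheRead = usage?.input_tokens_details?.cached_tokens;
      const reasoning = usage?.output_tokens_details?.reasoning_tokens;
      return {
        input_tokens: usage?.input_tokens ?? 0,
        output_tokens: usage?.output_tokens ?? 0,
        total_tokens: usage?.total_tokens ?? 0,
        input_token_details: {
          ...(cacheRead != null ? { cache_read: cacheRead } : {}),
        },
        output_token_details: {
          ...(reasoning != null ? { reasoning } : {}),
        },
      };
    };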

    Example

    const usage = {
      input_tokens: 100,
      output_tokens: 50,
      total_tokens: 150,
      input_tokens_details: { cached_tokens: 20 },
      output_tokens_details: { reasoning_tokens: 10 }
    };

    const metadata = convertResponsesUsageToUsageMetadata(usage);
    // Returns:
    // {
    //   input_tokens: 100,
    //   output_tokens: 50,
    //   total_tokens: 150,
    //   input_token_details: { cache_read: 20 },
    //   output_token_details: { reasoning: 10 }
    // }

    Remarks

    • The function safely handles undefined or null values by using optional chaining and nullish coalescing operators
    • Detailed token information (cache_read, reasoning) is only included in the result if the corresponding values are present in the input
    • Token counts default to 0 if not provided in the usage object
    • This converter is designed specifically for the usage format of OpenAI's Responses API, which may differ from the usage formats returned by other OpenAI API endpoints
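
    To illustrate those notes, here is a hedged example of the fallback behaviour; the exact result shape is assumed from the defaults described above, with the detail objects present but empty when no breakdowns are reported.

    // Usage object without detail breakdowns (in typed code this would come
    // from an SDK response): counts map through, detail objects stay empty.
    const partialMetadata = convertResponsesUsageToUsageMetadata({
      input_tokens: 12,
      output_tokens: 3,
      total_tokens: 15,
    });
    // Expected:
    // {
    //   input_tokens: 12,
    //   output_tokens: 3,
    //   total_tokens: 15,
    //   input_token_details: {},
    //   output_token_details: {}
    // }

    // Called with undefined (for example, a streamed chunk that carries no
    // usage), every count falls back to 0.
    const emptyMetadata = convertResponsesUsageToUsageMetadata(undefined);
    // Expected:
    // { input_tokens: 0, output_tokens: 0, total_tokens: 0,
    //   input_token_details: {}, output_token_details: {} }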