The usage statistics object from OpenAI's Responses API containing token counts and optional detailed breakdowns.

Returns a UsageMetadata object containing:

- input_tokens: Total number of tokens in the input/prompt (defaults to 0 if not provided)
- output_tokens: Total number of tokens in the model's output (defaults to 0 if not provided)
- total_tokens: Combined total of input and output tokens (defaults to 0 if not provided)
- input_token_details: Object containing detailed input token information:
  - cache_read: Number of tokens read from cache (only included if available)
- output_token_details: Object containing detailed output token information:
  - reasoning: Number of tokens used for reasoning (only included if available)

Example:

const usage = {
input_tokens: 100,
output_tokens: 50,
total_tokens: 150,
input_tokens_details: { cached_tokens: 20 },
output_tokens_details: { reasoning_tokens: 10 }
};
const metadata = convertResponsesUsageToUsageMetadata(usage);
// Returns:
// {
// input_tokens: 100,
// output_tokens: 50,
// total_tokens: 150,
// input_token_details: { cache_read: 20 },
// output_token_details: { reasoning: 10 }
// }
Converts OpenAI Responses API usage statistics to LangChain's UsageMetadata format.
This converter transforms token usage information from OpenAI's Responses API into the standardized UsageMetadata format used throughout LangChain. It handles both basic token counts and detailed token breakdowns including cached tokens and reasoning tokens.
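The conversion described above can be sketched as follows. This is a minimal illustration, not LangChain's actual implementation; the ResponsesUsage and UsageMetadata interfaces below are simplified stand-ins for the real types.

```typescript
// Simplified shape of OpenAI's Responses API usage object (assumption for
// illustration; the real type lives in the openai package).
interface ResponsesUsage {
  input_tokens?: number;
  output_tokens?: number;
  total_tokens?: number;
  input_tokens_details?: { cached_tokens?: number };
  output_tokens_details?: { reasoning_tokens?: number };
}

// Simplified shape of LangChain's UsageMetadata (assumption for illustration).
interface UsageMetadata {
  input_tokens: number;
  output_tokens: number;
  total_tokens: number;
  input_token_details?: { cache_read?: number };
  output_token_details?: { reasoning?: number };
}

// Sketch of the converter: basic counts default to 0, and the detail
// objects are only included when the API actually reported them.
function convertResponsesUsageToUsageMetadata(usage: ResponsesUsage): UsageMetadata {
  const metadata: UsageMetadata = {
    input_tokens: usage.input_tokens ?? 0,
    output_tokens: usage.output_tokens ?? 0,
    total_tokens: usage.total_tokens ?? 0,
  };
  if (usage.input_tokens_details?.cached_tokens !== undefined) {
    metadata.input_token_details = {
      cache_read: usage.input_tokens_details.cached_tokens,
    };
  }
  if (usage.output_tokens_details?.reasoning_tokens !== undefined) {
    metadata.output_token_details = {
      reasoning: usage.output_tokens_details.reasoning_tokens,
    };
  }
  return metadata;
}
```

Note the field renaming: OpenAI reports `cached_tokens` and `reasoning_tokens`, which map to LangChain's `cache_read` and `reasoning` keys.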