langchain.js

    Variable convertResponsesDeltaToChatGenerationChunk (const)

    convertResponsesDeltaToChatGenerationChunk: (
        event: OpenAIClient.Responses.ResponseStreamEvent,
    ) => ChatGenerationChunk | null = ...

    Converts OpenAI Responses API stream events to LangChain ChatGenerationChunk objects.

    This converter processes streaming events from OpenAI's Responses API and transforms them into LangChain ChatGenerationChunk objects that can be used in streaming chat applications. It handles various event types including text deltas, tool calls, reasoning, and metadata updates.

    Parameters: a streaming event from OpenAI's Responses API.

    Returns a ChatGenerationChunk containing:

    • text: Concatenated text content from all text parts in the event
    • message: An AIMessageChunk with:
      • id: Message ID (set when a message output item is added)
      • content: Array of content blocks (text with optional annotations)
      • tool_call_chunks: Incremental tool call data (name, args, id)
      • usage_metadata: Token usage information (only in completion events)
      • additional_kwargs: Extra data including:
        • refusal: Refusal text if the model refused to respond
        • reasoning: Reasoning output for reasoning models (id, type, summary)
        • tool_outputs: Results from built-in tools (web search, file search, etc.)
        • parsed: Parsed structured output when using json_schema format
        • Function call ID mappings (call_id → item id) for tracking
      • response_metadata: Metadata about the response (model, id, etc.)
    • generationInfo: Additional generation information (e.g., tool output status)

    Returns null for events that don't produce meaningful chunks:

    • Partial image generation events (to avoid storing all partial images in history)
    • Unrecognized event types
    Example:

        // client is an instantiated OpenAI client
        const stream = await client.responses.create({
          model: "gpt-4",
          input: [{ type: "message", role: "user", content: "Hello" }],
          stream: true,
        });

        for await (const event of stream) {
          const chunk = convertResponsesDeltaToChatGenerationChunk(event);
          if (chunk) {
            console.log(chunk.text); // Incremental text
            console.log(chunk.message.tool_call_chunks); // Tool call updates
          }
        }
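    Since text arrives as incremental deltas and usage metadata only arrives at completion, a consumer typically folds the stream into a final result. A minimal sketch (not the library's API; the `StreamedChunk` shape below is an assumption standing in for the `text` and `usage_metadata` fields described above):

    ```typescript
    // Assumed token-usage shape, mirroring the usage_metadata fields described above.
    interface UsageMetadata {
      input_tokens: number;
      output_tokens: number;
      total_tokens: number;
    }

    // Hypothetical stand-in for the ChatGenerationChunk fields used here.
    interface StreamedChunk {
      text: string;
      usage_metadata?: UsageMetadata; // only present on response.completed events
    }

    // Fold a sequence of chunks into the full text plus final usage totals.
    function foldStream(chunks: StreamedChunk[]): { text: string; usage?: UsageMetadata } {
      let text = "";
      let usage: UsageMetadata | undefined;
      for (const c of chunks) {
        text += c.text; // text deltas concatenate in arrival order
        if (c.usage_metadata) usage = c.usage_metadata; // arrives once, at completion
      }
      return { text, usage };
    }
    ```

    In practice a consumer would fold chunks as they arrive rather than buffering them all, but the merge logic is the same.
    
    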
    Notes:

    • Text content is accumulated in an array with index tracking for proper ordering
    • Tool call chunks include incremental arguments that need to be concatenated by the consumer
    • Reasoning summaries are built incrementally across multiple events
    • Function call IDs are tracked in additional_kwargs to map call_id to item id
    • The text field is provided for legacy compatibility with onLLMNewToken callbacks
    • Usage metadata is only available in response.completed events
    • Partial images are intentionally ignored to prevent memory bloat in conversation history
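    The note above that incremental tool-call arguments must be concatenated by the consumer can be sketched as follows. The `ToolCallChunk` shape is an assumption modeled on the fields listed earlier (name, args, id) plus an index for ordering, not the library's exact type:

    ```typescript
    // Assumed minimal shape of a tool_call_chunks entry; args is an incremental
    // JSON fragment that must be concatenated across chunks.
    interface ToolCallChunk {
      index: number;
      id?: string;
      name?: string;
      args?: string;
    }

    // Merge incremental chunks into complete tool calls, keyed by index.
    function accumulateToolCalls(chunks: ToolCallChunk[]): Map<number, ToolCallChunk> {
      const calls = new Map<number, ToolCallChunk>();
      for (const c of chunks) {
        const prev = calls.get(c.index) ?? { index: c.index, args: "" };
        calls.set(c.index, {
          index: c.index,
          id: c.id ?? prev.id,       // id and name appear once, on the first chunk
          name: c.name ?? prev.name,
          args: (prev.args ?? "") + (c.args ?? ""), // args fragments concatenate
        });
      }
      return calls;
    }
    ```

    Once a call's `args` string is complete, it can be passed to `JSON.parse` to recover the tool's arguments.
    
    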