A ChatGenerationChunk containing:
- text: Concatenated text content from all text parts in the event
- message: An AIMessageChunk with:
  - id: Message ID (set when a message output item is added)
  - content: Array of content blocks (text with optional annotations)
  - tool_call_chunks: Incremental tool call data (name, args, id)
  - usage_metadata: Token usage information (only in completion events)
  - additional_kwargs: Extra data including:
    - refusal: Refusal text if the model refused to respond
    - reasoning: Reasoning output for reasoning models (id, type, summary)
    - tool_outputs: Results from built-in tools (web search, file search, etc.)
    - parsed: Parsed structured output when using the json_schema format
  - response_metadata: Metadata about the response (model, id, etc.)
- generationInfo: Additional generation information (e.g., tool output status)

Returns null for events that don't produce meaningful chunks.
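The returned structure can be pictured as a plain object. The interface and field values below are a hypothetical sketch for illustration only; real chunks are produced by convertResponsesDeltaToChatGenerationChunk and carry LangChain's actual class instances, not literals.

```typescript
// Hypothetical sketch of the chunk shape described above (not LangChain's types).
interface ToolCallChunk {
  name?: string;
  args?: string; // incremental JSON fragment
  id?: string;
  index?: number;
}

interface ChunkShape {
  text: string;
  message: {
    id?: string;
    content: Array<{ type: "text"; text: string; annotations?: unknown[] }>;
    tool_call_chunks: ToolCallChunk[];
    usage_metadata?: { input_tokens: number; output_tokens: number; total_tokens: number };
    additional_kwargs: Record<string, unknown>;
    response_metadata: Record<string, unknown>;
  };
  generationInfo?: Record<string, unknown>;
}

// An illustrative text-delta chunk: one content block, no tool calls yet.
const example: ChunkShape = {
  text: "Hello",
  message: {
    id: "msg_123",
    content: [{ type: "text", text: "Hello" }],
    tool_call_chunks: [],
    additional_kwargs: {},
    response_metadata: {},
  },
};

console.log(example.text);
```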
import OpenAI from "openai";

const client = new OpenAI();

const stream = await client.responses.create({
  model: "gpt-4",
  input: [{ type: "message", role: "user", content: "Hello" }],
  stream: true,
});

for await (const event of stream) {
  const chunk = convertResponsesDeltaToChatGenerationChunk(event);
  if (chunk) {
    console.log(chunk.text); // Incremental text
    console.log(chunk.message.tool_call_chunks); // Tool call updates
  }
}
Notes:
- A mapping from call_id to item id is stored in additional_kwargs.
- The text field is provided for legacy compatibility with onLLMNewToken callbacks.
- Usage metadata is only populated on response.completed events.
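Since tool call data arrives as incremental tool_call_chunks, a consumer must stitch the fragments back together. In LangChain this happens when chunks are concatenated; the standalone sketch below (plain objects, no LangChain imports, hypothetical "get_weather" tool) only illustrates the merging mechanics.

```typescript
// Hypothetical standalone merge of incremental tool call chunks by index.
// Concatenating AIMessageChunks performs an equivalent merge in real code.
interface ToolCallChunk {
  name?: string;
  args?: string; // partial JSON string
  id?: string;
  index: number;
}

function mergeToolCallChunks(chunks: ToolCallChunk[]) {
  const byIndex = new Map<number, { name: string; args: string; id?: string }>();
  for (const c of chunks) {
    const entry = byIndex.get(c.index) ?? { name: "", args: "" };
    if (c.name) entry.name = c.name; // name arrives once, on the first chunk
    if (c.id) entry.id = c.id;       // id arrives once as well
    if (c.args) entry.args += c.args; // args stream in as JSON fragments
    byIndex.set(c.index, entry);
  }
  return [...byIndex.values()].map((e) => ({ ...e, args: JSON.parse(e.args) }));
}

// Example: two chunks belonging to one tool call (index 0).
const calls = mergeToolCallChunks([
  { name: "get_weather", id: "call_1", args: '{"city":', index: 0 },
  { args: '"Paris"}', index: 0 },
]);
console.log(calls[0].args.city);
```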
Converts OpenAI Responses API stream events to LangChain ChatGenerationChunk objects.
This converter processes streaming events from OpenAI's Responses API and transforms them into LangChain ChatGenerationChunk objects that can be used in streaming chat applications. It handles various event types including text deltas, tool calls, reasoning, and metadata updates.
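The per-event dispatch can be sketched as a switch over event types. The event names below come from the Responses API; the handling shown is a simplified assumption for illustration, not the converter's actual implementation.

```typescript
// Simplified dispatch over Responses API stream event types. Only the text
// path is fleshed out; a full converter would also build tool_call_chunks,
// reasoning, and metadata, and unhandled events yield null.
type StreamEvent =
  | { type: "response.output_text.delta"; delta: string }
  | { type: "response.output_item.added"; item: { id: string; type: string } }
  | { type: "response.completed"; response: { usage?: unknown } }
  | { type: string };

function toChunkText(event: StreamEvent): string | null {
  switch (event.type) {
    case "response.output_text.delta":
      // Text deltas become the chunk's text field.
      return (event as { delta: string }).delta;
    case "response.output_item.added":
    case "response.completed":
      // These carry ids and usage metadata rather than text; a full converter
      // still emits a chunk, but there is no text to surface here.
      return "";
    default:
      // Events with nothing meaningful to convert map to null.
      return null;
  }
}

console.log(toChunkText({ type: "response.output_text.delta", delta: "Hi" }));
```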