_ChatModelStreamBase — synchronous per-message streaming object for a single LLM response.
Returned by BaseChatModel.stream_v2(). Content-block protocol
events are fed into this object and accumulated into typed projections.
Projections (always return the same cached object):

- .text — iterable of str deltas; str() yields the full text.
- .reasoning — same interface as .text, for reasoning content.
- .tool_calls — iterable of ToolCallChunk deltas; .get() returns the finalized list[ToolCall].
- .output — blocking property; returns the assembled AIMessage.

Usage info is available on .output.usage_metadata once the stream
has finished.
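The delta-accumulating behavior of these projections can be sketched as a small self-contained class. This is an illustrative stand-in, not the real implementation; the name `TextProjection` and the `feed()` method are assumptions.

```python
from collections.abc import Iterator


class TextProjection:
    """Accumulates str deltas; iterable of deltas, str() for the full text.

    Hypothetical sketch of the .text projection described above; the
    real class name and internals are assumptions.
    """

    def __init__(self) -> None:
        self._deltas: list[str] = []

    def feed(self, delta: str) -> None:
        # Called by the stream as content-block events arrive.
        self._deltas.append(delta)

    def __iter__(self) -> Iterator[str]:
        # Yield deltas in arrival order.
        return iter(self._deltas)

    def __str__(self) -> str:
        # str() returns the full accumulated text.
        return "".join(self._deltas)


proj = TextProjection()
for chunk in ("Hel", "lo, ", "world"):
    proj.feed(chunk)

full_text = str(proj)   # full text so far
deltas = list(proj)     # individual deltas
```

Because the stream caches and returns the same projection object on every access, iterating twice or mixing iteration with str() observes one shared accumulation.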
.output.content is always a list of v1 protocol blocks
(text, reasoning, tool_call, image, …), regardless of the
underlying model's output_version setting. That setting
only controls the legacy stream() / astream() / invoke()
paths; ChatModelStream is built on the content-block
protocol and emits v1 shapes by construction.
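For illustration, a v1 content list might look like the following. The exact keys shown here ("type", "text", "reasoning", "name", "args", "id") are assumptions modelled on typed content blocks, not a verbatim schema.

```python
# Illustrative shape of .output.content under the v1 content-block
# protocol; the block keys are assumptions, not a verbatim schema.
content = [
    {"type": "reasoning", "reasoning": "The user greeted me."},
    {"type": "text", "text": "Hello!"},
    {
        "type": "tool_call",
        "name": "get_weather",
        "args": {"city": "Paris"},
        "id": "call_1",
    },
]

# Consumers can filter by block type instead of branching on
# output_version, since the shape is v1 by construction.
text_blocks = [b for b in content if b["type"] == "text"]
```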
Raw event iteration::

    for event in stream:
        print(event)  # MessagesData dicts
Text content — iterable of str deltas; str() for the full text.
Reasoning content — same interface as :attr:`text`.
Tool calls — iterable of ToolCallChunk deltas;
.get() returns the finalized list[ToolCall].
Assembled AIMessage — blocks until the stream finishes.
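What .get() conceptually does — merging streamed tool-call deltas into finalized calls — can be sketched as below. The chunk shape ({"index", "name", "args", "id"} with args as a JSON string fragment) is an assumption modelled on streamed tool-call deltas, not the library's exact types.

```python
import json


def finalize_tool_calls(chunks: list[dict]) -> list[dict]:
    """Merge ToolCallChunk-style deltas into finalized tool calls.

    Hypothetical sketch of .tool_calls.get(); chunk and result shapes
    are assumptions, not the library's exact types.
    """
    merged: dict[int, dict] = {}
    for chunk in chunks:
        # Chunks for the same call share an index; args arrive as
        # JSON string fragments that must be concatenated.
        slot = merged.setdefault(chunk["index"], {"name": "", "args": "", "id": None})
        if chunk.get("name"):
            slot["name"] = chunk["name"]
        if chunk.get("id"):
            slot["id"] = chunk["id"]
        slot["args"] += chunk.get("args") or ""
    return [
        {"name": s["name"], "args": json.loads(s["args"] or "{}"), "id": s["id"]}
        for _, s in sorted(merged.items())
    ]


calls = finalize_tool_calls([
    {"index": 0, "name": "get_weather", "args": '{"city": ', "id": "call_1"},
    {"index": 0, "name": None, "args": '"Paris"}', "id": None},
])
```

The key design point is that arguments stream as partial JSON text, so they can only be parsed once the stream finishes, which is why .get() finalizes rather than yielding parsed calls incrementally.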
Bind a pump for standalone streaming.
Delegates to set_request_more. Used by
BaseChatModel.stream_v2().
Install a lazy-start callback on this stream and its projections.
Set the pull callback on this stream and all its projections.
Used by langgraph's GraphRunStream._wire_request_more to
connect the shared graph pump.
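The pump-wiring pattern described above — installing one pull callback on the stream and fanning it out to every projection so any consumer can lazily drive the producer — can be sketched as follows. All names here (`Stream`, `Projection`, `next_delta`) are illustrative, not the real API.

```python
class Projection:
    """Minimal projection that pulls from a shared pump when it runs dry."""

    def __init__(self) -> None:
        self._buffer: list[str] = []
        self._request_more = None

    def set_request_more(self, callback) -> None:
        self._request_more = callback

    def next_delta(self):
        if not self._buffer and self._request_more is not None:
            # Lazily ask the shared pump to produce more events.
            self._request_more()
        return self._buffer.pop(0) if self._buffer else None


class Stream:
    """Fans the pull callback out to all projections.

    Hypothetical sketch of set_request_more on the real stream;
    class and method names are assumptions.
    """

    def __init__(self) -> None:
        self.text = Projection()
        self.reasoning = Projection()

    def set_request_more(self, callback) -> None:
        # Install the same pump on every projection so whichever one
        # a consumer iterates can drive the underlying producer.
        for proj in (self.text, self.reasoning):
            proj.set_request_more(callback)


stream = Stream()
pulled = []


def pump() -> None:
    # Stand-in for the shared graph pump: one delta per pull.
    pulled.append("tick")
    stream.text._buffer.append("hello")


stream.set_request_more(pump)
first = stream.text.next_delta()  # triggers exactly one pump pull
```

Sharing one callback means the producer runs only when some consumer actually needs data, which is the lazy-start behavior the docstrings describe.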