Python › langchain-core › language_models › chat_model_stream › ChatModelStream
Class · Since v1.3

    ChatModelStream

ChatModelStream(
    self,
    *,
    namespace: list[str] | None = None,
    node: str | None = None,
    message_id: str | None = None,
)

Bases

_ChatModelStreamBase

Constructors

__init__

Name          Type
namespace     list[str] | None
node          str | None
message_id    str | None

Attributes

text: SyncTextProjection
reasoning: SyncTextProjection
tool_calls: SyncProjection
output: AIMessage

Methods

bind_pump
set_start
set_request_more

    Synchronous per-message streaming object for a single LLM response.

    Returned by BaseChatModel.stream_v2(). Content-block protocol events are fed into this object and accumulated into typed projections.

    Projections (always return the same cached object):

    • .text — iterable of str deltas; str() for full text
    • .reasoning — same as .text for reasoning content
    • .tool_calls — iterable of ToolCallChunk deltas; .get() returns list[ToolCall]
    • .output — blocking property, returns assembled AIMessage

    Usage info is available on .output.usage_metadata once the stream has finished.
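The consumption pattern above can be sketched with a toy stand-in. `FakeTextProjection` and `FakeChatModelStream` are invented names that mimic only the `.text` behavior described here (iterable deltas, `str()` for full text, same cached object on every access); the real `ChatModelStream` is produced by `BaseChatModel.stream_v2()` and is richer than this.

```python
# Toy stand-ins for the projection pattern described above.
# FakeTextProjection / FakeChatModelStream are hypothetical names,
# not the real langchain-core classes.

class FakeTextProjection:
    """Mimics SyncTextProjection: iterate str deltas, str() for the full text."""

    def __init__(self, deltas):
        self._deltas = list(deltas)

    def __iter__(self):
        return iter(self._deltas)

    def __str__(self):
        return "".join(self._deltas)


class FakeChatModelStream:
    """Mimics the cached-projection behavior: .text always returns the same object."""

    def __init__(self, deltas):
        self._text = FakeTextProjection(deltas)

    @property
    def text(self):
        return self._text


stream = FakeChatModelStream(["Hel", "lo, ", "world"])
for delta in stream.text:          # str deltas, as they would arrive
    print(delta, end="")
print()

assert str(stream.text) == "Hello, world"   # full accumulated text
assert stream.text is stream.text           # same cached projection object
```

The cached-object guarantee is what makes it safe to grab `stream.text` several times (e.g. once to iterate, once to stringify) without losing deltas.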

    Output shape is always v1 content blocks

    .output.content is always a list of v1 protocol blocks (text, reasoning, tool_call, image, …), regardless of the underlying model's output_version setting. That attribute only controls the legacy stream() / astream() / invoke() paths; ChatModelStream is built on the content-block protocol and emits v1 shapes by construction.
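To make the "always v1 blocks" point concrete, here is what such a content list might look like and how a consumer would pick out the text. The exact keys inside each block are assumptions for illustration; only the block type names (text, reasoning, tool_call) come from this page.

```python
# Hypothetical v1 content-block list, shaped like the block types
# named above; the per-block keys are assumptions.
content = [
    {"type": "reasoning", "reasoning": "User greets; reply politely."},
    {"type": "text", "text": "Hello! How can I help?"},
]

# Filter to the text blocks and join their payloads.
full_text = "".join(block["text"] for block in content if block["type"] == "text")
assert full_text == "Hello! How can I help?"
```

Because the shape is fixed by the protocol rather than by output_version, a consumer can branch on block["type"] without first checking how the underlying model was configured.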

Raw event iteration:

    for event in stream:
        print(event)  # MessagesData dicts
    

text: iterable of str deltas; str() returns the full text.

reasoning: same interface as text, for reasoning content.

tool_calls: iterable of ToolCallChunk deltas; .get() returns the finalized list[ToolCall].
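The chunk-then-finalize flow for tool calls can be sketched the same way. The chunk and call dict shapes and the merge-by-id rule below are assumptions, shown only to make the .get() semantics concrete; the real projection works on ToolCallChunk / ToolCall objects.

```python
# Hypothetical stand-in for the tool-call projection: iterate partial
# chunks, then .get() the finalized, merged calls. Dict shapes are
# assumptions for illustration.

class FakeToolCallsProjection:
    def __init__(self, chunks):
        self._chunks = list(chunks)

    def __iter__(self):
        return iter(self._chunks)

    def get(self):
        # Merge partial argument strings per call id into finalized calls.
        merged = {}
        for c in self._chunks:
            entry = merged.setdefault(
                c["id"], {"id": c["id"], "name": c["name"], "args": ""}
            )
            entry["args"] += c["args"]
        return list(merged.values())


chunks = [
    {"id": "call_1", "name": "get_weather", "args": '{"city": '},
    {"id": "call_1", "name": "get_weather", "args": '"Paris"}'},
]
proj = FakeToolCallsProjection(chunks)
calls = proj.get()

assert len(calls) == 1
assert calls[0]["args"] == '{"city": "Paris"}'
```

The key point is the two-phase contract: iterating yields raw deltas as they stream, while .get() waits for the merged, parse-ready result.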

output: the assembled AIMessage; blocks until the stream finishes.

bind_pump: bind a pump for standalone streaming. Delegates to set_request_more; used by BaseChatModel.stream_v2().

set_start: install a lazy-start callback on this stream and its projections.

set_request_more: set the pull callback on this stream and all its projections. Used by langgraph's GraphRunStream._wire_request_more to connect the shared graph pump.