LangChain Reference
Python · langgraph · pregel_messages · StreamMessagesHandlerV2
Class · Since v1.1

StreamMessagesHandlerV2

StreamMessagesHandlerV2(
  self,
  stream: Callable[[StreamChunk], None],
  subgraphs: bool,
  *,
  parent_ns: tuple[str, ...] | None = None,
)

Bases

StreamMessagesHandler, _V2StreamingCallbackHandler

    Constructors

    Methods

Inherited from StreamMessagesHandler

    Attributes

run_inline: bool
    —

    We want this callback to run in the main thread to avoid order/locking issues.
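The `run_inline` flag above can be sketched as follows. This is a hypothetical illustration of the contract, not langchain_core's actual dispatch code: inline handlers fire in the calling thread so event order is preserved, while non-inline handlers may be handed to an executor.

```python
from concurrent.futures import ThreadPoolExecutor

class InlineHandler:
    # run_inline=True asks the dispatcher to call us on the main thread,
    # avoiding the order/locking issues described above.
    run_inline = True

    def __init__(self):
        self.events = []

    def on_event(self, payload):
        self.events.append(payload)

def dispatch(handlers, payload, executor):
    for h in handlers:
        if getattr(h, "run_inline", False):
            h.on_event(payload)                    # same thread: stable ordering
        else:
            executor.submit(h.on_event, payload)   # may interleave across threads

h = InlineHandler()
with ThreadPoolExecutor() as pool:
    for i in range(3):
        dispatch([h], i, pool)
```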

stream: stream
subgraphs: subgraphs
metadata: dict[UUID, Meta]
seen: set[int | str]
parent_ns: parent_ns

Methods

tap_output_aiter
tap_output_iter
on_chat_model_start
on_chain_start
on_chain_end
on_chain_error

Inherited from BaseCallbackHandler (langchain_core)

Attributes

raise_error
run_inline
ignore_llm
ignore_retry
ignore_chain
ignore_agent
ignore_retriever
ignore_chat_model
ignore_custom_event

Inherited from ChainManagerMixin (langchain_core)

Methods

on_chain_end
on_chain_error
on_agent_action
on_agent_finish

Inherited from ToolManagerMixin (langchain_core)

Methods

on_tool_end
on_tool_error

Inherited from RetrieverManagerMixin (langchain_core)

Methods

on_retriever_error
on_retriever_end

Inherited from CallbackManagerMixin (langchain_core)

Methods

on_llm_start
on_chat_model_start
on_retriever_start
on_chain_start
on_tool_start

Inherited from RunManagerMixin (langchain_core)

Methods

on_text
on_retry
on_custom_event
constructor
__init__

Name        Type
stream      Callable[[StreamChunk], None]
subgraphs   bool
parent_ns   tuple[str, ...] | None
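The `stream` argument is simply a sink for emitted stream parts. A minimal sketch of that contract, using a stand-in `StreamChunk` (the real type is internal to langgraph and not imported here; the tuple layout below is illustrative only):

```python
from typing import Callable

# Hypothetical stand-in for langgraph's StreamChunk: the handler calls
# stream(chunk) once per messages-stream part it emits.
StreamChunk = tuple

def make_collector() -> "tuple[list, Callable[[StreamChunk], None]]":
    parts: list = []

    def stream(chunk: StreamChunk) -> None:
        parts.append(chunk)

    return parts, stream

parts, stream = make_collector()
# A handler would then be constructed roughly as:
#   StreamMessagesHandlerV2(stream=stream, subgraphs=True, parent_ns=None)
stream((("agent",), "messages", ("chunk", {"run_id": "r1"})))
```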
    method
    on_llm_new_token

    Intentional no-op — v1 chunks are not used on v2-flagged runs.

    The v2 marker already steers invoke to the event generator, so on_llm_new_token should not fire under normal routing. This override stays a pass-through (no call to super()) to make the intent explicit and to guard against any caller (e.g. a node that calls model.stream() directly, which still fires the v1 callback) leaking AIMessageChunks onto a v2-flagged messages stream.
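The guard described above amounts to overriding the v1 token hook with a pass-through. A hedged sketch with illustrative class names (not langgraph's actual classes):

```python
class V1Handler:
    """Stand-in for the v1 base: forwards each token as a v1 chunk."""

    def __init__(self, emit):
        self.emit = emit

    def on_llm_new_token(self, token, **kwargs):
        self.emit(("v1-chunk", token))

class V2Handler(V1Handler):
    def on_llm_new_token(self, token, **kwargs):
        # Intentional no-op, no super() call: even if a caller fires the v1
        # callback (e.g. via a direct model.stream()), nothing leaks onto
        # the v2-flagged messages stream.
        pass

out = []
V2Handler(out.append).on_llm_new_token("hi")
```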

    method
    on_llm_end
    method
    on_llm_error
    method
    on_stream_event

    Forward a protocol event from stream_v2 as a messages stream part.

    Fires once per MessagesData event (message-start, per-block content-block-*, message-finish). The transformer layer correlates events back to a single ChatModelStream via metadata["run_id"] — attached here so the v1 stream_mode="messages" output (which emits (AIMessageChunk, metadata) via on_llm_new_token) keeps its original metadata shape.

    Lives on the v2 handler rather than the v1 base: content-block events are a v2-only concept, and forwarding them only when the v2 handler is attached keeps the message channel's shape predictable for v1 callers.
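The forwarding step above can be sketched as follows. This is a simplified illustration, not langgraph's implementation: each protocol event is emitted as an `(event, metadata)` stream part with `run_id` attached so the transformer layer can correlate events back to one model call.

```python
from uuid import uuid4

def make_forwarder(stream, metadata_by_run):
    # metadata_by_run maps run_id -> the run's metadata dict; run_id is
    # merged in so the v1 (chunk, metadata) output shape is preserved.
    def on_stream_event(event, *, run_id):
        meta = dict(metadata_by_run.get(run_id, {}), run_id=run_id)
        stream((event, meta))
    return on_stream_event

parts = []
run_id = uuid4()
forward = make_forwarder(parts.append, {run_id: {"langgraph_node": "agent"}})
for ev in ("message-start", "content-block-delta", "message-finish"):
    forward(ev, run_id=run_id)
```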

    v2 variant of StreamMessagesHandler.

    Declaring _V2StreamingCallbackHandler as a base flips BaseChatModel.invoke to route through _stream_chat_model_events (firing on_stream_event) instead of _stream (firing on_llm_new_token). Inherits on_stream_event from the parent, which forwards protocol events onto the messages stream channel.

    Pregel attaches this class instead of the v1 handler only when StreamingHandler opts in via the internal CONFIG_KEY_STREAM_MESSAGES_V2 config key; direct graph.stream(stream_mode="messages") callers keep the v1 AIMessageChunk shape.
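The opt-in routing described above can be sketched like this. The key name `CONFIG_KEY_STREAM_MESSAGES_V2` comes from the text, but its string value and the selection helper below are assumptions for illustration; the handler classes are placeholders, not langgraph's:

```python
# Illustrative value only; the real key is internal to langgraph.
CONFIG_KEY_STREAM_MESSAGES_V2 = "__pregel_stream_messages_v2"

class StreamMessagesHandler:          # v1: AIMessageChunk via on_llm_new_token
    pass

class StreamMessagesHandlerV2(StreamMessagesHandler):  # v2: on_stream_event
    pass

def pick_messages_handler(config: dict) -> type:
    # Only an explicit opt-in through the config key selects the v2 handler;
    # plain graph.stream(stream_mode="messages") callers keep the v1 shape.
    flag = config.get("configurable", {}).get(CONFIG_KEY_STREAM_MESSAGES_V2, False)
    return StreamMessagesHandlerV2 if flag else StreamMessagesHandler
```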