    langgraph.stream.transformers.MessagesTransformer
    Class — Since v1.1

    MessagesTransformer

    MessagesTransformer(
        self,
        scope: tuple[str, ...] = (),
    )

    Bases

    StreamTransformer

    Constructors

    constructor __init__
        scope: tuple[str, ...]

    Attributes

    attribute required_stream_modes

    Methods

    method init
    method process
    method finalize
    method fail

    Inherited from StreamTransformer

    Attributes

    attribute requires_async: bool
    attribute supports_sync: bool
    attribute scope: tuple[str, ...]

    Methods

    method aprocess — Handle an event on the async lane.

    method afinalize — Called when the run ends normally (async lane).

    method afail — Called when the run ends with an error (async lane).

    method schedule — Schedule a coroutine tied to this transformer's lifecycle.


    Capture messages events as ChatModelStream objects.

    The messages projection yields one ChatModelStream (or AsyncChatModelStream) per LLM call. Consumers iterate run.messages to get stream handles, then use each handle's typed projections (.text, .reasoning, .tool_calls, .usage, .output) for per-message content.
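
    A minimal consumer sketch, assuming astream_v2() exposes a run object whose .messages attribute can be iterated as shown; the exact shapes of the per-handle projections are not confirmed by this page, so treat the code below as illustrative rather than the library's verbatim API:

    # Hedged sketch: `run` and the projection types are assumptions. One
    # AsyncChatModelStream handle is yielded per LLM call in the run.
    async def show_llm_calls(run):
        async for msg in run.messages:
            async for delta in msg.text:       # streamed text for this call only
                print(delta, end="", flush=True)
            final = await msg.output           # finalized message for this call
            print("\n--", type(final).__name__)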

    Two input shapes are handled (via params["data"] = (payload, metadata) from StreamMessagesHandler):

    1. Protocol event (dict with "event" key) — emitted by stream_v2() / astream_v2() via the on_stream_event callback. Routed to an existing ChatModelStream by metadata["run_id"]. A message-start event creates a new stream; message-finish closes it.
    2. Whole AIMessage — emitted from on_chain_end when a node returns a finalized message. Replayed as a synthetic protocol event lifecycle via message_to_events, then the already-complete stream is pushed to the log.
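
    An illustrative sketch of that dispatch, where _streams, _open_stream, push, and close are hypothetical internal names (message_to_events is the helper named above):

    # Illustrative only -- not the library's actual implementation.
    def process(self, params: dict) -> None:
        payload, metadata = params["data"]      # set by StreamMessagesHandler
        if isinstance(payload, dict) and "event" in payload:
            # 1. Protocol event from stream_v2()/astream_v2(): route by run_id.
            run_id = metadata["run_id"]
            if payload["event"] == "message-start":
                self._streams[run_id] = self._open_stream(metadata)
            self._streams[run_id].push(payload)
            if payload["event"] == "message-finish":
                self._streams.pop(run_id).close()
        else:
            # 2. Whole AIMessage from on_chain_end: replay a synthetic event
            #    lifecycle, then push the already-complete stream to the log.
            stream = self._open_stream(metadata)
            for event in message_to_events(payload):
                stream.push(event)
            stream.close()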

    V1 AIMessageChunk tuples (from on_llm_new_token) are not streamed into this projection: chat models that want to populate run.messages with content-block streaming must use stream_v2() / astream_v2(). Models called via the legacy stream() method still surface their final AIMessage via on_chain_end when a node returns it as state.
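
    A hedged example of the legacy node shape this refers to (the model object and state keys are placeholders):

    # Placeholder node: model.invoke() (or the v1 stream()) produces no live
    # token deltas on run.messages; the returned AIMessage is replayed whole
    # from on_chain_end once the node finishes.
    def call_model(state):
        reply = model.invoke(state["messages"])
        return {"messages": [reply]}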

    Only events at the run's own level are projected; tokens from deeper subgraphs are left in the main event log but excluded from .messages. "Own level" is defined by scope, which stream_v2 / astream_v2 populate from the caller's checkpoint namespace so that a stream_v2 call inside a node still sees its own root chat model streams on .messages. Consumers that need subgraph tokens should iterate the raw event stream or register a custom transformer.
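
    Constructing the transformer with an explicit scope; the import path follows this page's breadcrumb and the namespace tuple is illustrative, since stream_v2 / astream_v2 normally fill scope in from the caller's checkpoint namespace automatically:

    from langgraph.stream.transformers import MessagesTransformer  # path per breadcrumb; verify locally

    # Illustrative namespace: only chat-model streams at this checkpoint
    # namespace are projected onto .messages; deeper subgraph tokens stay
    # in the raw event log.
    transformer = MessagesTransformer(scope=("my_subgraph_node",))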

    method init — Native transformer: the messages projection is exposed as a direct attribute on the run stream.

    method finalize — Clear any routing state; streams close themselves via message-finish.

    method fail — Propagate the run error to any streams still open when the graph fails.