LangChain Reference
    Python › langgraph › stream › run_stream
    Module · Since v1.1

    run_stream

    Attributes

        SubgraphStatus: Literal['started', 'completed', 'failed', 'interrupted']

    Functions

        convert_to_protocol_event

    Classes

        StreamMux
        ProtocolEvent
        ValuesTransformer
        GraphRunStream
        AsyncGraphRunStream
        SubgraphRunStream
        AsyncSubgraphRunStream

    convert_to_protocol_event

    Convert a v2 StreamPart to a ProtocolEvent.

    StreamMux

    Central event dispatcher for the streaming infrastructure.

    Owns the main event log and routes events through a transformer pipeline. StreamChannels with a name discovered in transformer projections are auto-wired so that every push() also injects a ProtocolEvent into the main log. StreamChannels without a name are local-only.

    Pass is_async=True when the mux will be consumed via async iteration (handler.astream()). All StreamChannel instances discovered during registration are automatically bound to the matching mode.
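The named-versus-unnamed channel behavior can be illustrated with a toy model. This is a hypothetical sketch (`Channel`, `ToyMux`, and `register` are invented names, not the real StreamMux API); it shows only the contract described above — a named channel's push is mirrored into the main log, while an unnamed channel stays local-only:

```python
class Channel:
    """Toy stand-in for a StreamChannel: optionally named."""
    def __init__(self, name=None):
        self.name = name
        self.items = []      # the channel's own local buffer
        self._sink = None    # set by the mux when auto-wired

    def push(self, item):
        self.items.append(item)
        if self._sink is not None:
            # auto-wired: every push also lands in the main log
            self._sink.append((self.name, item))

class ToyMux:
    """Toy dispatcher: wires named channels into its main event log."""
    def __init__(self):
        self.main_log = []

    def register(self, channel):
        if channel.name is not None:
            channel._sink = self.main_log
        return channel

mux = ToyMux()
named = mux.register(Channel("values"))
local = mux.register(Channel())

named.push({"state": 1})
local.push("debug")

assert mux.main_log == [("values", {"state": 1})]  # unnamed push stayed local
```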

    ProtocolEvent

    A protocol event emitted by the streaming infrastructure.

    Wraps a raw stream part (values, messages, custom, etc.) in a uniform envelope with a monotonic sequence number assigned by the root StreamMux. Consumers that need a total order across root events should use seq, not params.timestamp (which is wall-clock and not monotonic).
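The seq-over-timestamp guidance can be made concrete with a toy dispatcher. A minimal sketch with invented names (`MiniMux`, `Event` — not the real classes), showing only that a counter owned by the root mux yields a monotonic total order regardless of wall-clock behavior:

```python
import itertools
from dataclasses import dataclass
from typing import Any

@dataclass
class Event:
    """Toy envelope: a raw stream part plus a root-assigned sequence number."""
    seq: int
    part: Any

class MiniMux:
    """Toy root dispatcher: every logged event gets the next counter value."""
    def __init__(self):
        self._counter = itertools.count()
        self.log = []

    def push(self, part):
        self.log.append(Event(seq=next(self._counter), part=part))

mux = MiniMux()
for part in ("values", "messages", "custom"):
    mux.push(part)

# seq gives a total order over root events; timestamps could tie or go backwards
assert [e.seq for e in mux.log] == [0, 1, 2]
```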

    ValuesTransformer

    Capture values events as a drainable stream of state snapshots.

    Keeps _latest / _interrupted / _interrupts as scalar state regardless of whether the log has a subscriber — so run.output() and run.interrupted work without forcing the caller to iterate run.values. Log pushes are silent no-ops when unsubscribed.

    Native transformer — projection keys are exposed as direct attributes on the run stream (e.g. run.values).

    Only values events at the run's own level are captured; snapshots from deeper subgraphs are left in the main event log but excluded from the projection. "Own level" is defined by scope, which stream_v2 / astream_v2 populate from the caller's checkpoint namespace so that a nested stream_v2 call still sees its own root snapshots.
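The keep-latest-without-a-subscriber behavior can be sketched in isolation. `ValuesCapture` below is a hypothetical simplification (not the real transformer): scalar state is always updated on push, while buffering into the log only happens when someone has subscribed, so the final snapshot is available without iterating:

```python
class ValuesCapture:
    """Toy values capture: latest-snapshot state is tracked unconditionally,
    log buffering only when subscribed."""
    def __init__(self):
        self._latest = None
        self._subscribed = False
        self._buffer = []

    def push(self, snapshot):
        self._latest = snapshot        # scalar state: always updated
        if self._subscribed:
            self._buffer.append(snapshot)
        # otherwise the log push is a silent no-op

    def output(self):
        return self._latest

cap = ValuesCapture()
cap.push({"step": 1})
cap.push({"step": 2})

assert cap.output() == {"step": 2}  # works without any subscriber
assert cap._buffer == []            # unsubscribed pushes were silent no-ops
```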

    GraphRunStream

    Sync run stream with caller-driven pumping.

    The caller's iteration on any projection (values, messages, raw events, or output) drives the graph forward. No background thread is used — the caller's for loop is the pump.

    Projections are single-consumer — iterating run.values twice raises. Use projection.tee(n) if you genuinely need fan-out.

    All transformer projections live in extensions. Native transformer projections (those with _native = True) are also set as direct attributes on this instance (e.g. run.values, run.messages).
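The single-consumer contract and the tee(n) escape hatch can be modeled with plain iterators. A toy sketch under invented names (`Projection` here is not the real projection class): a second attempt to iterate raises, and tee(n) claims the stream once and fans it out:

```python
import itertools

class Projection:
    """Toy single-consumer stream: iterating twice raises."""
    def __init__(self, items):
        self._items = iter(items)
        self._claimed = False

    def __iter__(self):
        if self._claimed:
            raise RuntimeError("projection already consumed; use tee(n) for fan-out")
        self._claimed = True
        return self._items

    def tee(self, n):
        if self._claimed:
            raise RuntimeError("projection already consumed")
        self._claimed = True
        # itertools.tee buffers so each branch sees the full stream
        return [Projection(it) for it in itertools.tee(self._items, n)]

p = Projection([1, 2, 3])
a, b = p.tee(2)
assert list(a) == [1, 2, 3] and list(b) == [1, 2, 3]

try:
    iter(p)  # p was already claimed by tee()
except RuntimeError as err:
    assert "consumed" in str(err)
```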

    SubgraphRunStream

    Sync handle for a discovered subgraph (extends GraphRunStream).

    AsyncSubgraphRunStream

    Async handle for a discovered subgraph (extends AsyncGraphRunStream).

    AsyncGraphRunStream

    Async run stream with caller-driven pumping.

    Async iteration on any projection drives the graph forward — there is no background task. Concurrent consumers share a single-flight pump via an asyncio.Lock, so each awaiting cursor contributes one event per acquisition. Backpressure comes from the logs: when a subscribed log's buffer reaches maxlen, apush waits for the subscriber to drain it, which holds back the pump and paces the graph.

    Projections are single-consumer — a second aiter(run.values) raises. Use projection.tee(n) for fan-out.

    Use as an async context manager to guarantee clean shutdown on early exit:

    async with await handler.astream(input) as run:
        async for msg in run.messages:
            ...
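The single-flight pump described above can be modeled in a few lines of asyncio. This is a hypothetical sketch (`SingleFlightPump`, `pump_one`, and `cursor` are invented names, not the real AsyncGraphRunStream internals): concurrent consumers share one asyncio.Lock, each acquisition advances the underlying source by exactly one event, and event order is preserved because the lock is held across the advance:

```python
import asyncio

class SingleFlightPump:
    """Toy single-flight pump: one lock, one event advanced per acquisition."""
    def __init__(self, source):
        self._source = source
        self._lock = asyncio.Lock()
        self.events = []

    async def pump_one(self):
        async with self._lock:
            try:
                self.events.append(next(self._source))
                return True
            except StopIteration:
                return False

async def main():
    pump = SingleFlightPump(iter(range(6)))

    async def cursor():
        # each awaiting cursor contributes one event per lock acquisition
        while await pump.pump_one():
            await asyncio.sleep(0)  # yield so cursors interleave

    await asyncio.gather(cursor(), cursor())
    return pump.events

events = asyncio.run(main())
assert events == [0, 1, 2, 3, 4, 5]  # total order holds despite two consumers
```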