LangChain Reference


langgraph.stream
Module · Since v1.1

    stream

    Streaming infrastructure for LangGraph.

    Compile a graph with transformers=[...] and call graph.stream_v2() / graph.astream_v2() to drive a transformer pipeline that projects the graph's raw events into ergonomic per-channel streams.
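The core projection idea can be sketched standalone. The tuple-based event shape and the `project` helper below are illustrative stand-ins, not the library's actual types:

```python
def project(raw_events):
    """Fan a flat stream of (mode, payload) parts out into
    per-channel projections, keyed by mode."""
    channels = {}
    for mode, payload in raw_events:
        channels.setdefault(mode, []).append(payload)
    return channels

# A raw stream interleaves modes; the projection separates them.
raw = [("values", {"x": 1}), ("messages", "hi"), ("values", {"x": 2})]
channels = project(raw)
assert channels["values"] == [{"x": 1}, {"x": 2}]
assert channels["messages"] == ["hi"]
```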

    Attributes

    attribute
    SubgraphStatus: Literal['started', 'completed', 'failed', 'interrupted']

    Classes

    class
    ProtocolEvent

    A protocol event emitted by the streaming infrastructure.

    Wraps a raw stream part (values, messages, custom, etc.) in a uniform envelope with a monotonic sequence number assigned by the root StreamMux. Consumers that need a total order across root events should use seq, not params.timestamp (which is wall-clock and not monotonic).
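A minimal sketch of the envelope-with-sequence-number pattern; `Envelope` and the module-level counter are illustrative assumptions, not the real `ProtocolEvent`:

```python
from dataclasses import dataclass, field
from itertools import count
from typing import Any

_seq = count()  # in the real design the root mux owns the counter (assumption)

@dataclass
class Envelope:
    """Illustrative stand-in for ProtocolEvent: a raw part plus a
    monotonic sequence number assigned at creation."""
    method: str
    payload: Any
    seq: int = field(default_factory=lambda: next(_seq))

a = Envelope("values", {"x": 1})
b = Envelope("messages", "hi")
# seq gives a total order across events; wall-clock timestamps cannot.
assert a.seq < b.seq
```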

    class
    StreamTransformer

    Extension point for custom stream projections.

    Transformers observe protocol events flowing through the StreamMux and build typed derived projections (StreamChannels, promises, etc.).

    Set _native = True on a transformer to have its projection keys exposed as direct attributes on the run stream (in addition to appearing in run.extensions).

    Subclasses must implement init and override at least one of process / aprocess. The finalize / afinalize and fail / afail hooks are optional — the default implementations are no-ops. StreamChannel instances in the projection dict are auto-closed / auto-failed by the mux, so most transformers don't need finalize or fail at all.

    Transformers that need async work pick the async lane by:

    1. Overriding aprocess (and optionally afinalize / afail), or
    2. Calling self.schedule(coro) from inside a sync process, or
    3. Setting requires_async = True explicitly.

    The mux detects these cases at registration and raises if they're used under sync stream() — they only work under astream().

    Use aprocess when the pump must wait for async work before the next transformer sees the event (e.g. PII redaction that mutates event in place). Use schedule() for decoupled async work whose result lands on an independent projection (e.g. async moderation scoring, cost lookup, external tracing).
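The registration-time check can be sketched as follows. `BaseTransformer`, `register`, and `SyncLaneError` are illustrative names, not the library's API:

```python
class SyncLaneError(RuntimeError):
    pass

class BaseTransformer:
    """Illustrative transformer base (not the real class)."""
    requires_async = False
    aprocess = None  # async-lane subclasses define a coroutine here

    def process(self, event):
        pass

def register(transformer, *, is_async):
    # Mirror of the documented check: transformers that picked the
    # async lane are rejected when the stream is driven synchronously.
    if (transformer.requires_async or transformer.aprocess is not None) and not is_async:
        raise SyncLaneError("this transformer only works under astream()")
    return transformer

class Moderation(BaseTransformer):
    async def aprocess(self, event):  # overriding aprocess picks the async lane
        pass

register(BaseTransformer(), is_async=False)  # sync-only transformer is fine
try:
    register(Moderation(), is_async=False)   # async lane under sync: rejected
    rejected = False
except SyncLaneError:
    rejected = True
assert rejected
```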

    class
    AsyncGraphRunStream

    Async run stream with caller-driven pumping.

    Async iteration on any projection drives the graph forward — there is no background task. Concurrent consumers share a single-flight pump via an asyncio.Lock, so each awaiting cursor contributes one event per acquisition. Backpressure comes from the logs: when a subscribed log's buffer reaches maxlen, apush awaits the subscriber to drain, which holds back the pump and paces the graph.

    Projections are single-consumer — a second aiter(run.values) raises. Use projection.tee(n) for fan-out.

    Use as an async context manager to guarantee clean shutdown on early exit:

    async with await handler.astream(input) as run:
        async for msg in run.messages:
            ...
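The single-flight pump can be illustrated with plain asyncio primitives. Everything below (the lock, queues, and consumer loop) is a self-contained sketch of the mechanism, not the actual implementation:

```python
import asyncio

async def demo():
    events = iter(range(6))            # stands in for the graph's raw events
    lock = asyncio.Lock()              # single-flight pump
    queues = [asyncio.Queue(), asyncio.Queue()]

    async def pump_once():
        # Only one awaiting cursor advances the source at a time;
        # each acquisition produces at most one event, fanned out
        # to every subscribed queue.
        async with lock:
            try:
                ev = next(events)
            except StopIteration:
                return False
            for q in queues:
                q.put_nowait(ev)
            return True

    async def consume(q, out):
        while True:
            if q.empty() and not await pump_once():
                if q.empty():
                    return             # source exhausted and fully drained
            out.append(q.get_nowait())

    a, b = [], []
    await asyncio.gather(consume(queues[0], a), consume(queues[1], b))
    return a, b

a, b = asyncio.run(demo())
# Both concurrent consumers see every event, in order, with no
# background task: iteration itself is the pump.
assert a == b == list(range(6))
```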
    class
    AsyncSubgraphRunStream

    Async handle for a discovered subgraph (extends AsyncGraphRunStream).

    class
    GraphRunStream

    Sync run stream with caller-driven pumping.

    The caller's iteration on any projection (values, messages, raw events, or output) drives the graph forward. No background thread is used — the caller's for loop is the pump.

    Projections are single-consumer — iterating run.values twice raises. Use projection.tee(n) if you genuinely need fan-out.

    All transformer projections live in extensions. Native transformer projections (those with _native = True) are also set as direct attributes on this instance (e.g. run.values, run.messages).

    class
    SubgraphRunStream

    Sync handle for a discovered subgraph (extends GraphRunStream).

    class
    StreamChannel

    Single-consumer drainable queue for streaming events, with optional protocol auto-forwarding.

    When constructed with a name, the StreamMux auto-wires every push() to also inject a ProtocolEvent into the main event stream using the channel's name as the method. When constructed without a name, the channel is local-only — items are only visible to in-process consumers that iterate the channel directly.

    Items are popped off the front as the consumer advances — there is no retention beyond what's currently queued. A channel accepts exactly one subscriber; a second __iter__ / __aiter__ call raises. Use tee(n) / atee(n) for fan-out.

    Starts unbound — neither __iter__ nor __aiter__ is available until the StreamMux calls _bind(is_async). After binding, only the matching iteration protocol works; the other raises TypeError.

Pump wiring (set by the run stream, not by _bind):
- _request_more: sync pump callable, returns True if a new event was produced.
- _arequest_more: async pump coroutine factory, same contract.

Memory is bounded by caller pace: both sync and async use caller-driven pumps, so each cursor advance produces at most one event.

    Lazy-subscribe: push appends to the local buffer only when a subscriber has registered. Auto-forward via _wire_fn always fires regardless of subscription state.

    Lifecycle (close / fail) is managed by the mux — transformers don't need to close their channels manually.
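The named vs. local-only distinction can be sketched in a few lines. `Channel` and the list-based main log are illustrative assumptions, not the real StreamChannel or mux:

```python
class Channel:
    """Illustrative drainable channel; a name enables auto-forwarding."""
    def __init__(self, mux_log, name=None):
        self._buf = []
        self._name = name
        self._log = mux_log

    def push(self, item):
        self._buf.append(item)        # visible to the in-process consumer
        if self._name is not None:
            # Auto-forward into the main event stream, using the
            # channel's name as the protocol method.
            self._log.append({"method": self._name, "payload": item})

log = []
named = Channel(log, name="lifecycle")
local = Channel(log)                  # no name: local-only
named.push("started")
local.push("debug-only")
# Only the named channel's item reached the main event log.
assert log == [{"method": "lifecycle", "payload": "started"}]
```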

    class
    LifecyclePayload

    Payload of a lifecycle event surfaced on the lifecycle channel.

Auto-forwarded as lifecycle protocol events (no custom: prefix, because LifecycleTransformer is a native transformer), so remote SDK clients receive the same data that in-process consumers see via run.lifecycle.

    class
    LifecycleTransformer

    Surface subgraph lifecycle as lifecycle protocol events.

    Pushes LifecyclePayload to a StreamChannel named lifecycle. The channel is auto-forwarded by the mux so payloads land in the main event log under method = "lifecycle" (native transformer — no custom: prefix) — visible to remote SDK clients over the wire and to in-process consumers via run.lifecycle.

    Tracks subgraphs at every depth strictly below the transformer's scope, so a graph → subgraph → subgraph chain produces lifecycle events for both nested levels in a flat stream.

    Native transformer — projection key lifecycle is exposed as run.lifecycle.
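The "every depth strictly below the scope" rule can be sketched as a namespace filter; the tuple-namespace event shape here is an assumption for illustration:

```python
def lifecycle_events(raw_events, scope=()):
    """Illustrative filter: keep events whose namespace is strictly
    below `scope`, producing one flat lifecycle stream."""
    out = []
    for ns, status in raw_events:
        if len(ns) > len(scope) and ns[:len(scope)] == tuple(scope):
            out.append({"ns": ns, "status": status})
    return out

events = [
    (("child",), "started"),
    (("child", "grand"), "started"),
    (("child", "grand"), "completed"),
    (("child",), "completed"),
]
flat = lifecycle_events(events)
# Both nesting levels appear in the same flat stream.
assert len(flat) == 4
```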

    class
    SubgraphTransformer

    Discover subgraph invocations as in-process navigation handles.

    Per discovered direct-child subgraph, builds a SubgraphRunStream (or AsyncSubgraphRunStream) wrapping a child mini-mux scoped to the subgraph's namespace. Consumers iterate run.subgraphs to receive handles, then drill into handle.values / handle.messages / handle.subgraphs (recursive grandchildren) / handle.lifecycle.

    Each mini-mux owns its own scope and uses its own SubgraphTransformer to discover its direct children, so grandchildren live on the child handle — never on the root's subgraphs log. Forwarding events into the matching child mini-mux is what keeps the child's projections populated.

    Native transformer — subgraphs is exposed as run.subgraphs.
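The recursive routing that keeps grandchildren on the child handle can be sketched with nested dicts; `route` and the handle shape are illustrative, not the real mini-mux machinery:

```python
def route(events, handles=None):
    """Illustrative forwarding: each event's namespace head selects a
    direct-child handle; the namespace tail is forwarded recursively,
    so grandchildren live on the child handle, never on the root."""
    handles = {} if handles is None else handles
    for ns, payload in events:
        if not ns:
            continue                  # root-level event, not a subgraph event
        head, tail = ns[0], ns[1:]
        child = handles.setdefault(head, {"events": [], "children": {}})
        if tail:
            route([(tail, payload)], child["children"])
        else:
            child["events"].append(payload)
    return handles

events = [(("child",), "a"), (("child", "grand"), "b")]
h = route(events)
assert h["child"]["events"] == ["a"]
# The grandchild is reachable only through the child handle.
assert h["child"]["children"]["grand"]["events"] == ["b"]
assert "grand" not in h
```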

    Modules

    module
    transformers
    module
    stream_channel
    module
    run_stream