LangChain Reference
langchain-core / language_models / chat_models / BaseChatModel / stream_events
    Methodā—Since v1.4

    stream_events

stream_events(
    self,
    input: LanguageModelInput,
    config: RunnableConfig | None = None,
    *,
    version: Literal['v1', 'v2', 'v3'] = 'v2',
    stop: list[str] | None = None,
    **kwargs: Any,
) -> Iterator[StreamEvent] | ChatModelStream

View source on GitHub

Stream events from this chat model.

For version="v1" / "v2", yields StreamEvent dicts (see Runnable.stream_events). For version="v3", returns a ChatModelStream exposing typed projections (.text, .reasoning, .tool_calls, .output).

Beta

version="v3" is in beta. The protocol shape, return type, and surface area may change in future releases. Calling it emits a LangChainBetaWarning at runtime.

v3 always produces v1-shaped content

ChatModelStream.output.content is always a list of v1 content blocks (text / reasoning / tool_call / image / …), regardless of the model's output_version attribute. That setting only affects the legacy stream() / astream() / invoke() paths. If you're mixing stream_events(version="v3") with those paths in the same pipeline and need a consistent output shape across them, set output_version="v1" on the model.

Parameters

input : LanguageModelInput (required)
    The model input.

config : RunnableConfig | None
    Default: None
    Optional runnable config.

version : Literal['v1', 'v2', 'v3']
    Default: 'v2'
    Streaming-event schema version. "v3" selects the content-block-centric streaming protocol.

stop : list[str] | None
    Default: None
    Optional stop sequences. Only used for version="v3"; ignored otherwise.

**kwargs : Any
    Default: {}
    Additional keyword arguments. For version="v3", forwarded to the model.
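A minimal sketch of consuming the v1/v2 event-dict path. The event names and dict keys follow the Runnable.stream_events schema ("event", "data"); the sample events below are hand-written stand-ins (real chunks are message-chunk objects, not plain strings), so treat the shapes as illustrative assumptions rather than captured output.

```python
# Sketch: filtering v1/v2 StreamEvent dicts for streamed text.
# Assumption: chunks are represented as plain strings here for
# self-containment; a real stream yields message-chunk objects.

def collect_text(events):
    """Concatenate text from on_chat_model_stream events."""
    parts = []
    for event in events:
        if event["event"] == "on_chat_model_stream":
            parts.append(event["data"]["chunk"])
    return "".join(parts)

# Hand-written stand-in events mimicking the v2 schema.
sample_events = [
    {"event": "on_chat_model_start", "name": "chat", "data": {}},
    {"event": "on_chat_model_stream", "name": "chat", "data": {"chunk": "Hello"}},
    {"event": "on_chat_model_stream", "name": "chat", "data": {"chunk": ", world"}},
    {"event": "on_chat_model_end", "name": "chat", "data": {}},
]

print(collect_text(sample_events))  # -> Hello, world
```

In a real pipeline the loop would be `for event in model.stream_events(prompt):`, with the same filtering applied to each event dict.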
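Since the v3 path always yields v1 content blocks, downstream code can branch on each block's "type". The block shapes below are assumptions modeled on the types listed above (text / reasoning / tool_call); only the "type" discriminator is taken from the page, the other fields are illustrative.

```python
# Sketch: reading a v1-shaped content-block list, as produced on
# ChatModelStream.output.content. The concrete field names inside
# each block ("text", "reasoning", "name", "args") are assumptions.

content = [
    {"type": "reasoning", "reasoning": "The user greeted me."},
    {"type": "text", "text": "Hi there!"},
    {"type": "tool_call", "name": "lookup", "args": {"q": "weather"}},
]

# Project out just the text, the way .text would on the typed stream.
text = "".join(b["text"] for b in content if b["type"] == "text")
tool_calls = [b for b in content if b["type"] == "tool_call"]

print(text)             # -> Hi there!
print(len(tool_calls))  # -> 1
```

Because the shape is uniform regardless of output_version, this filtering works unchanged whether the model emits reasoning or image blocks alongside the text.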