LangChain Reference
langchain-core.language_models.chat_models.BaseChatModel.astream_v2
Method · Since v1.3

    astream_v2

astream_v2(
    self,
    input: LanguageModelInput,
    config: RunnableConfig | None = None,
    *,
    stop: list[str] | None = None,
    **kwargs: Any,
) -> AsyncChatModelStream

View source on GitHub

Async variant of stream_v2.

Returns an AsyncChatModelStream whose projections are async-iterable and awaitable.

Warning

This API is experimental and may change.

Always produces v1-shaped content

The assembled message's content is always a list of v1 content blocks, regardless of the model's output_version attribute; see stream_v2 for the full rationale.

Parameters

input* (LanguageModelInput)
    The model input.

config (RunnableConfig | None), default: None
    Optional runnable config.

stop (list[str] | None), default: None
    Optional list of stop words.

**kwargs (Any), default: {}
    Additional keyword arguments passed to the model.
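The "async-iterable and awaitable" contract mentioned above can be illustrated with a minimal, self-contained sketch. AsyncStreamLike here is a hypothetical stand-in for AsyncChatModelStream (not the real class): consuming it with `async for` yields chunks one at a time, while `await`-ing it drains the stream and returns the assembled result.

```python
import asyncio


class AsyncStreamLike:
    """Hypothetical stand-in for an object that is both async-iterable
    and awaitable, mirroring the contract described for
    AsyncChatModelStream. Illustrative only, not the real class."""

    def __init__(self, chunks):
        self._chunks = chunks

    def __aiter__(self):
        # Async iteration yields one chunk at a time.
        async def gen():
            for chunk in self._chunks:
                yield chunk
        return gen()

    def __await__(self):
        # Awaiting drains the stream and returns the joined result.
        async def collect():
            parts = []
            async for chunk in self:
                parts.append(chunk)
            return "".join(parts)
        return collect().__await__()


async def demo():
    # Mode 1: stream chunk by chunk.
    pieces = []
    async for chunk in AsyncStreamLike(["Hel", "lo"]):
        pieces.append(chunk)
    # Mode 2: await the whole thing at once.
    full = await AsyncStreamLike(["Hel", "lo"])
    return pieces, full


print(asyncio.run(demo()))  # (['Hel', 'lo'], 'Hello')
```

The same dual-protocol shape is what lets callers choose between incremental streaming and a single awaited result from one return value.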