    langchain_core.runnables.base.Runnable.batch_as_completed
    Method ● Since v0.1

    batch_as_completed

    Run invoke in parallel on a list of inputs.

    Yields results as they complete.

    batch_as_completed(
      self,
      inputs: Sequence[Input],
      config: RunnableConfig | Sequence[RunnableConfig] | None = None,
      *,
      return_exceptions: bool = False,
      **kwargs: Any | None
    ) -> Iterator[tuple[int, Output | Exception]]

    Parameters

    inputs (Sequence[Input], required)
        A list of inputs to the Runnable.

    config (RunnableConfig | Sequence[RunnableConfig] | None, default: None)
        A config to use when invoking the Runnable. The config supports standard keys such as
        'tags' and 'metadata' for tracing purposes, 'max_concurrency' for controlling how much
        work to do in parallel, and other keys. Please refer to RunnableConfig for more details.

    return_exceptions (bool, default: False)
        Whether to return exceptions instead of raising them.

    **kwargs (Any | None)
        Additional keyword arguments to pass to the Runnable.
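
    A minimal usage sketch (assuming langchain-core is installed, with RunnableLambda as a
    stand-in for any Runnable): each yielded item is a tuple of the input's index and its
    output, and results may arrive out of input order.

    from langchain_core.runnables import RunnableLambda

    # A simple Runnable that squares its input (stand-in for any Runnable).
    square = RunnableLambda(lambda x: x * x)

    # batch_as_completed yields (input index, output) pairs as each
    # invocation finishes, not necessarily in input order.
    for idx, result in square.batch_as_completed(
        [1, 2, 3, 4],
        config={"max_concurrency": 2},
        return_exceptions=True,
    ):
        print(idx, result)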
