LangChain Reference
    langchain_core.language_models.chat_models.BaseChatModel.generate
    Method · Since v0.1

    generate

    Pass a sequence of prompts to the model and return model generations.

    This method should make use of batched calls for models that expose a batched API.

    Use this method when you:

    1. Want to take advantage of batched calls,
    2. Need more output from the model than just the top generated value, or
    3. Are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).
    generate(
      self,
      messages: list[list[BaseMessage]],
      stop: list[str] | None = None,
      callbacks: Callbacks = None,
      *,
      tags: list[str] | None = None,
      metadata: dict[str, Any] | None = None,
      run_name: str | None = None,
      run_id: uuid.UUID | None = None,
      **kwargs: Any
    ) -> LLMResult
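
    A minimal usage sketch, not part of the reference itself: it assumes langchain-openai is installed and OPENAI_API_KEY is set; the model name is illustrative, and any other BaseChatModel subclass works the same way.

    from langchain_core.messages import HumanMessage
    from langchain_openai import ChatOpenAI  # assumed provider package

    model = ChatOpenAI(model="gpt-4o-mini")  # illustrative model name

    # One inner list of messages per prompt; the whole batch is sent together
    # for providers that expose a batched API.
    batch = [
        [HumanMessage(content="Translate 'hello world' into French.")],
        [HumanMessage(content="Translate 'hello world' into German.")],
    ]

    result = model.generate(batch)  # returns an LLMResult

    # result.generations holds one list of candidate generations per input prompt.
    for candidates in result.generations:
        print(candidates[0].text)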

    Parameters

    messages (list[list[BaseMessage]], required)
        List of lists of messages.

    stop (list[str] | None, default: None)
        Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

    callbacks (Callbacks, default: None)
        Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.

    tags (list[str] | None, default: None)
        The tags to apply.

    metadata (dict[str, Any] | None, default: None)
        The metadata to apply.

    run_name (str | None, default: None)
        The name of the run.

    run_id (uuid.UUID | None, default: None)
        The ID of the run.

    **kwargs (Any)
        Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
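
    To see the per-run parameters in context, a hedged sketch that continues the example above: the stop sequence, tags, metadata, and run name are illustrative, and the callback handler is assumed to be importable from langchain_core.callbacks.

    import uuid

    from langchain_core.callbacks import StdOutCallbackHandler
    from langchain_core.messages import HumanMessage

    result = model.generate(
        [[HumanMessage(content="List three uses for a paperclip.")]],
        stop=["\n4."],                        # output is cut at the first occurrence of this substring
        callbacks=[StdOutCallbackHandler()],  # extra functionality (e.g. logging) during generation
        tags=["reference-example"],           # tags applied to the run
        metadata={"source": "docs"},          # metadata applied to the run
        run_name="generate-demo",             # name of the run
        run_id=uuid.uuid4(),                  # explicit run ID
    )
    print(result.generations[0][0].text)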
