langchain_core.outputs.llm_result.LLMResult
Class · Since v0.1

    LLMResult

    A container for results of an LLM call.

    Both chat models and LLMs generate an LLMResult object. This object contains the generated outputs and any additional information that the model provider wants to return.

LLMResult(
    generations: list[list[Generation | ChatGeneration | GenerationChunk | ChatGenerationChunk]],
    llm_output: dict | None = None,
    run: list[RunInfo] | None = None,
)

    Bases

    BaseModel

    Used in Docs

    • ChatAnthropicVertex integration
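
A minimal construction sketch (in practice the model client builds this object for you; the values here are illustrative):

from langchain_core.outputs import Generation, LLMResult

result = LLMResult(
    generations=[[Generation(text="Paris is the capital of France.")]],
    llm_output={"model_name": "example-model"},  # free-form, provider-specific
)
print(result.generations[0][0].text)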

    Attributes

    attribute
    generations: list[list[Generation | ChatGeneration | GenerationChunk | ChatGenerationChunk]]

    Generated outputs.

    The first dimension of the list represents completions for different input prompts.

    The second dimension of the list represents different candidate generations for a given prompt.

    • When returned from an LLM, the type is list[list[Generation]].
    • When returned from a chat model, the type is list[list[ChatGeneration]].

    ChatGeneration is a subclass of Generation that has a field for a structured chat message.
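
To make the two dimensions concrete, a small hand-built example (a sketch; in practice these lists come from the model call):

from langchain_core.outputs import Generation, LLMResult

# Two input prompts; the first prompt has two candidate generations.
result = LLMResult(
    generations=[
        [Generation(text="candidate A"), Generation(text="candidate B")],
        [Generation(text="only candidate")],
    ]
)

result.generations[0][0].text  # top candidate for the first prompt
result.generations[0][1].text  # second candidate for the first prompt
result.generations[1][0].text  # top (only) candidate for the second prompt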

    attribute
    llm_output: dict | None

Arbitrary, provider-specific output.

This is a free-form dictionary that can contain any information the provider wants to return. It is not standardized.

Users should generally avoid relying on this field and instead access the relevant information from standardized fields on AIMessage.
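
Because the field is free-form and may be None, guard any access. The "token_usage" key below is an assumed provider convention, not part of the schema:

from langchain_core.outputs import Generation, LLMResult

result = LLMResult(
    generations=[[Generation(text="hi")]],
    llm_output={"token_usage": {"total_tokens": 7}},  # provider-dependent shape
)

# Guard the access: llm_output may be None and keys vary by provider.
usage = (result.llm_output or {}).get("token_usage", {})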

    attribute
    run: list[RunInfo] | None

A list of metadata about the model call, one entry per input.

    See langchain_core.outputs.run_info.RunInfo for details.

    attribute
    type: Literal['LLMResult']

    Type is used exclusively for serialization purposes.

    Methods

    method
    flatten

    Flatten generations into a single list.

    Unpack list[list[Generation]] -> list[LLMResult] where each returned LLMResult contains only a single Generation. If token usage information is available, it is kept only for the LLMResult corresponding to the top-choice Generation, to avoid over-counting of token usage downstream.
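
A short sketch of the behavior, assuming one candidate generation per prompt:

from langchain_core.outputs import Generation, LLMResult

result = LLMResult(
    generations=[[Generation(text="answer 1")], [Generation(text="answer 2")]],
    llm_output={"token_usage": {"total_tokens": 42}},  # provider-dependent shape
)

flat = result.flatten()          # two LLMResults, one Generation each
flat[0].generations[0][0].text   # "answer 1"
flat[1].generations[0][0].text   # "answer 2"
# Token usage is kept only on the first result, per the note above.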
