    Python · langchain-core · outputs
    Module · Since v0.1

    outputs

    Output classes.

    Used to represent the output of a language model call and the output of a chat model call.

    The top container for information is the LLMResult object. LLMResult is used by both chat models and LLMs. This object contains the output of the language model and any additional information that the model provider wants to return.

    When invoking models via the standard runnable methods (e.g. invoke, batch, etc.):

    • Chat models will return AIMessage objects.
    • LLMs will return regular text strings.

    In addition, users can access the raw output of either LLMs or chat models via callbacks. The on_chat_model_end and on_llm_end callbacks will return an LLMResult object containing the generated outputs and any additional information returned by the model provider.

    In general, if information is already available in the AIMessage object, it is recommended to access it from there rather than from the LLMResult object.

    Functions

    function
    import_attr

    Import an attribute from a module located in a package.

    This utility function is used in custom __getattr__ methods within __init__.py files to dynamically import attributes.

    Classes

    class
    ChatGeneration

    A single chat generation output.

    A subclass of Generation that represents the response from a chat model that generates chat messages.

    The message attribute is a structured representation of the chat message. Most of the time, the message will be of type AIMessage.

    Users working with chat models will usually access information via either AIMessage (returned from runnable interfaces) or LLMResult (available via callbacks).

    class
    ChatGenerationChunk

    ChatGeneration chunk.

    ChatGeneration chunks can be concatenated with other ChatGeneration chunks.

    class
    ChatResult

    Used to represent the result of a chat model call with a single prompt.

    This container is used internally by some chat model implementations; it is eventually mapped to the more general LLMResult object and then projected into an AIMessage object.

    LangChain users working with chat models will usually access information via AIMessage (returned from runnable interfaces) or LLMResult (available via callbacks). Please refer to the AIMessage and LLMResult schema documentation for more information.

    class
    Generation

    A single text generation output.

    Generation represents the response from an "old-fashioned" LLM (string-in, string-out) that generates regular text (not chat messages).

    This class is used internally by chat models and is eventually mapped to the more general LLMResult object, and then projected into an AIMessage object.

    LangChain users working with chat models will usually access information via AIMessage (returned from runnable interfaces) or LLMResult (available via callbacks). Please refer to AIMessage and LLMResult for more information.

    class
    GenerationChunk

    A Generation chunk, which can be concatenated with other Generation chunks.

    class
    LLMResult

    A container for results of an LLM call.

    Both chat models and LLMs generate an LLMResult object. This object contains the generated outputs and any additional information that the model provider wants to return.

    class
    RunInfo

    Class that contains metadata for a single execution of a chain or model.

    Defined for backwards compatibility with older versions of langchain_core.

    Users can acquire the run_id from callbacks or from the run_id information present in the astream_events API (depending on the use case).

    Modules

    module
    generation

    Generation output schema.

    module
    llm_result

    LLMResult class.

    module
    run_info

    RunInfo class.

    module
    chat_generation

    Chat generation output classes.

    module
    chat_result

    Chat result schema.
