    Python › langchain-core › output_parsers › transform
    Module · Since v0.1

    transform

    Base classes for output parsers that can handle streaming input.

    Attributes

    T

    Functions

    run_in_executor

    Run a function in an executor.
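
    A minimal sketch of keeping a blocking call off the event loop with run_in_executor; the blocking_work function is hypothetical, and passing None selects the default executor.

    import asyncio

    from langchain_core.runnables.config import run_in_executor


    def blocking_work(x: int) -> int:
        # A synchronous, potentially blocking function (hypothetical example).
        return x * 2


    async def main() -> None:
        # With None as the executor/config, the call runs in the default
        # thread pool so the event loop stays responsive.
        result = await run_in_executor(None, blocking_work, 21)
        print(result)  # 42


    asyncio.run(main())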

    Classes

    BaseMessage

    Base abstract message class.

    Messages are the inputs and outputs of a chat model.

    Examples include HumanMessage, AIMessage, and SystemMessage.
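
    A short illustration of these message types in use; the conversation content is made up for the example.

    from langchain_core.messages import AIMessage, HumanMessage, SystemMessage

    # A chat transcript expressed as message objects.
    messages = [
        SystemMessage(content="You are a terse assistant."),
        HumanMessage(content="What is the capital of France?"),
        AIMessage(content="Paris."),
    ]

    for message in messages:
        print(type(message).__name__, message.content)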

    BaseMessageChunk

    Message chunk, which can be concatenated with other message chunks.

    BaseOutputParser

    Base class to parse the output of an LLM call.

    Output parsers help structure language model responses.
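
    A minimal sketch of a custom parser, assuming only the abstract parse method needs to be implemented; the CommaListParser name is hypothetical. Because parsers are Runnables, the instance can be invoked directly or composed into a chain.

    from langchain_core.output_parsers import BaseOutputParser


    class CommaListParser(BaseOutputParser[list[str]]):
        """Parse a comma-separated model response into a list of strings."""

        def parse(self, text: str) -> list[str]:
            return [item.strip() for item in text.split(",")]


    parser = CommaListParser()
    print(parser.invoke("red, green, blue"))  # ['red', 'green', 'blue']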

    ChatGeneration

    A single chat generation output.

    A subclass of Generation that represents the response from a chat model that generates chat messages.

    The message attribute is a structured representation of the chat message. Most of the time, the message will be of type AIMessage.

    Users working with chat models will usually access information via either AIMessage (returned from runnable interfaces) or LLMResult (available via callbacks).
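
    A sketch of the callback path mentioned above, assuming a handler that subclasses BaseCallbackHandler; the GenerationLogger name is hypothetical.

    from langchain_core.callbacks import BaseCallbackHandler
    from langchain_core.outputs import ChatGeneration, LLMResult


    class GenerationLogger(BaseCallbackHandler):
        """Inspect generations when an LLM or chat model call finishes."""

        def on_llm_end(self, response: LLMResult, **kwargs) -> None:
            # response.generations holds one list of generations per prompt.
            for generation in response.generations[0]:
                if isinstance(generation, ChatGeneration):
                    # Chat models produce ChatGeneration; the structured
                    # message (usually an AIMessage) is on .message.
                    print(generation.message.content)
                else:
                    # Plain text LLMs produce Generation with a .text field.
                    print(generation.text)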

    ChatGenerationChunk

    ChatGeneration chunk.

    ChatGeneration chunks can be concatenated with other ChatGeneration chunks.

    Generation

    A single text generation output.

    Generation represents the response from an "old-fashioned" LLM (string-in, string-out) that generates regular text (not chat messages).

    This model is used internally by chat models and will eventually be mapped to a more general LLMResult object, and then projected into an AIMessage object.

    LangChain users working with chat models will usually access information via AIMessage (returned from runnable interfaces) or LLMResult (available via callbacks). Please refer to AIMessage and LLMResult for more information.

    GenerationChunk

    Generation chunk, which can be concatenated with other Generation chunks.
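
    A small sketch of chunk concatenation, assuming text-only chunks such as those produced by a streaming LLM.

    from langchain_core.outputs import GenerationChunk

    # Streaming LLMs emit GenerationChunk objects that can be merged with "+".
    chunks = [
        GenerationChunk(text="Hel"),
        GenerationChunk(text="lo, "),
        GenerationChunk(text="world"),
    ]

    full = chunks[0]
    for chunk in chunks[1:]:
        full += chunk

    print(full.text)  # Hello, world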

    BaseTransformOutputParser

    Base class for an output parser that can handle streaming input.

    BaseCumulativeTransformOutputParser

    Base class for an output parser that can handle streaming input.
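
    A minimal sketch of a streaming parser built on BaseTransformOutputParser, assuming parse is applied to each chunk as it arrives; the ShoutParser name and the hard-coded chunk iterator stand in for real streamed model output.

    from langchain_core.output_parsers.transform import BaseTransformOutputParser


    class ShoutParser(BaseTransformOutputParser[str]):
        """Upper-case each piece of streamed output as it is produced."""

        def parse(self, text: str) -> str:
            return text.upper()


    parser = ShoutParser()

    # transform() consumes a stream of chunks and yields parsed chunks,
    # so downstream consumers see output without waiting for the full text.
    for piece in parser.transform(iter(["hel", "lo ", "there"])):
        print(piece, end="")  # prints HELLO THERE piece by piece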

    RunnableConfig

    Configuration for a Runnable.

    Note: Custom values

    The TypedDict has total=False set intentionally to:

    • Allow partial configs to be created and merged together via merge_configs
    • Support config propagation from parent to child runnables via var_child_runnable_config (a ContextVar that automatically passes config down the call stack without explicit parameter passing), where configs are merged rather than replaced
    Example
    # Parent sets tags
    chain.invoke(input, config={"tags": ["parent"]})
    # Child automatically inherits and can add:
    # ensure_config({"tags": ["child"]}) -> {"tags": ["parent", "child"]}
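
    A runnable sketch of merging partial configs directly with ensure_config and merge_configs; the tag, metadata, and run_name values are made up for the example.

    from langchain_core.runnables.config import ensure_config, merge_configs

    parent = ensure_config({"tags": ["parent"], "metadata": {"run": "demo"}})
    child = {"tags": ["child"], "run_name": "child-step"}

    # merge_configs combines partial configs: list-valued keys such as "tags"
    # end up containing the values from both configs, while plain keys from
    # later configs take precedence.
    merged = merge_configs(parent, child)
    print(merged["tags"])      # contains both 'parent' and 'child'
    print(merged["run_name"])  # 'child-step'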