ReduceDocumentsChain

Module: langchain_classic.chains.combine_documents.reduce
Class · Since v1.0 · Deprecated

ReduceDocumentsChain()

Bases: BaseCombineDocumentsChain

Attributes
attribute combine_documents_chain: BaseCombineDocumentsChain

Final chain to call to combine documents. This is typically a StuffDocumentsChain.

attribute collapse_documents_chain: BaseCombineDocumentsChain | None

Chain to use to collapse documents, if needed, until they can all fit. If None, the combine_documents_chain is used. This is typically a StuffDocumentsChain.

attribute token_max: int

The maximum number of tokens to group documents into. For example, if set to 3000, documents are grouped into chunks of no more than 3000 tokens before being combined into a smaller chunk.

attribute collapse_max_retries: int | None

The maximum number of retries when collapsing documents to fit token_max. If None, the chain keeps trying to collapse documents to fit token_max; otherwise, once the maximum number of retries is reached, an error is raised.

attribute model_config
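
The construction sketch below shows how token_max and collapse_max_retries fit together with the two chains. The prompts, model, and parameter values are illustrative placeholders, not part of this reference.

    from langchain_classic.chains import LLMChain, ReduceDocumentsChain, StuffDocumentsChain
    from langchain_core.prompts import PromptTemplate
    from langchain_openai import OpenAI

    model = OpenAI()

    # Final chain that merges the (possibly collapsed) documents into one answer.
    combine_chain = StuffDocumentsChain(
        llm_chain=LLMChain(llm=model, prompt=PromptTemplate.from_template("Summarize: {context}")),
        document_variable_name="context",
    )
    # Chain used to shrink groups of documents that exceed token_max.
    collapse_chain = StuffDocumentsChain(
        llm_chain=LLMChain(llm=model, prompt=PromptTemplate.from_template("Condense: {context}")),
        document_variable_name="context",
    )

    reduce_chain = ReduceDocumentsChain(
        combine_documents_chain=combine_chain,
        collapse_documents_chain=collapse_chain,
        token_max=3000,          # group documents into chunks of at most ~3000 tokens
        collapse_max_retries=3,  # assumed value: give up with an error after 3 collapse passes
    )
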
Methods

method combine_docs

Combine multiple documents recursively.

method acombine_docs

Async combine multiple documents recursively.

Description

Combine documents by recursively reducing them.

This involves two chains:

• combine_documents_chain
• collapse_documents_chain

combine_documents_chain is ALWAYS provided. This is the final chain that is called: all previous results are passed to it, and its output is returned as the final result.

collapse_documents_chain is used if there are too many documents to pass to combine_documents_chain in one go. In that case, collapse_documents_chain is called recursively on groups of documents as large as allowed.

    Example:

    from langchain_classic.chains import (
        StuffDocumentsChain,
        LLMChain,
        ReduceDocumentsChain,
    )
    from langchain_core.prompts import PromptTemplate
    from langchain_openai import OpenAI
    
    # This controls how each document will be formatted. Specifically,
    # it will be passed to `format_document` - see that function for more
    # details.
    document_prompt = PromptTemplate(
        input_variables=["page_content"], template="{page_content}"
    )
    document_variable_name = "context"
    model = OpenAI()
    # The prompt here should take as an input variable the
    # `document_variable_name`
    prompt = PromptTemplate.from_template("Summarize this content: {context}")
    llm_chain = LLMChain(llm=model, prompt=prompt)
    combine_documents_chain = StuffDocumentsChain(
        llm_chain=llm_chain,
        document_prompt=document_prompt,
        document_variable_name=document_variable_name,
    )
    chain = ReduceDocumentsChain(
        combine_documents_chain=combine_documents_chain,
    )
    # If we wanted to, we could also pass in collapse_documents_chain
    # which is specifically aimed at collapsing documents BEFORE
    # the final call.
    prompt = PromptTemplate.from_template("Collapse this content: {context}")
    llm_chain = LLMChain(llm=model, prompt=prompt)
    collapse_documents_chain = StuffDocumentsChain(
        llm_chain=llm_chain,
        document_prompt=document_prompt,
        document_variable_name=document_variable_name,
    )
    chain = ReduceDocumentsChain(
        combine_documents_chain=combine_documents_chain,
        collapse_documents_chain=collapse_documents_chain,
    )
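
Continuing from the chain constructed above, a minimal usage sketch is shown below. The "input_documents" input key and "output_text" output key are the combine-documents chain defaults; treat the exact invocation shape as an assumption rather than part of this page.

    from langchain_core.documents import Document

    docs = [
        Document(page_content="LangChain provides building blocks for LLM applications."),
        Document(page_content="ReduceDocumentsChain recursively collapses and combines documents."),
        Document(page_content="StuffDocumentsChain stuffs documents into a single prompt."),
    ]

    # Invoke through the standard Chain/Runnable interface.
    result = chain.invoke({"input_documents": docs})
    print(result["output_text"])

    # Or call combine_docs directly; it returns the combined text plus any
    # extra outputs as a (str, dict) tuple.
    text, _extra = chain.combine_docs(docs)
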
Inherited from BaseCombineDocumentsChain

Attributes

• input_key: str
• output_key: str
• input_keys: list[str]
• output_keys: list[str] - The keys to use for the output.

Methods

• get_input_schema
• get_output_schema
• prompt_length - Return the prompt length given the documents passed in.

Inherited from Chain

Attributes

• memory: BaseMemory | None - Optional memory object.
• callbacks: Callbacks
• verbose: bool
• tags: list[str] | None
• metadata: dict[str, Any] | None
• callback_manager: BaseCallbackManager | None - [DEPRECATED] Use callbacks instead.
• input_keys: list[str]
• output_keys: list[str] - The keys to use for the output.

Methods

• get_input_schema
• get_output_schema
• invoke
• ainvoke
• raise_callback_manager_deprecation - Raise deprecation warning if callback_manager is used.
• set_verbose - Set the chain verbosity.
• acall - Asynchronously execute the chain.
• prep_outputs - Validate and prepare chain outputs, and save info about this run to memory.
• aprep_outputs - Validate and prepare chain outputs, and save info about this run to memory.
• prep_inputs - Prepare chain inputs, including adding inputs from memory.
• aprep_inputs - Prepare chain inputs, including adding inputs from memory.
• run - Convenience method for executing chain.
• arun - Convenience method for executing chain.
• dict - Return dictionary representation of chain.
• save - Save the chain.
• apply - Utilize the LLM generate method for speed gains.

Inherited from RunnableSerializable (langchain_core)

Attributes

• name

Methods

• to_json
• configurable_fields
• configurable_alternatives

Inherited from Serializable (langchain_core)

Attributes

• lc_secrets
• lc_attributes

Methods

• is_lc_serializable
• get_lc_namespace
• lc_id
• to_json
• to_json_not_implemented

Inherited from Runnable (langchain_core)

Attributes

• name
• InputType
• OutputType
• input_schema
• output_schema
• config_specs

Methods

• get_name
• get_input_schema
• get_input_jsonschema
• get_output_schema
• get_output_jsonschema
• config_schema
• get_config_jsonschema
• get_graph
• get_prompts
• pipe
• pick
• assign
• invoke
• ainvoke
• batch
• batch_as_completed
• abatch
• abatch_as_completed
• stream
• astream
• astream_log
• astream_events
• transform
• atransform
• bind
• with_config
• with_listeners
• with_alisteners
• with_types
• with_retry
• map
• with_fallbacks
• as_tool