LangChain Reference
Python · langchain_classic.chains.combine_documents.map_reduce.MapReduceDocumentsChain
Class · Since v1.0 · Deprecated

    MapReduceDocumentsChain

    MapReduceDocumentsChain()

    Bases

    BaseCombineDocumentsChain

    Attributes

    Methods

Inherited from BaseCombineDocumentsChain

Attributes

  • input_key: str | None — The key from the model Run's inputs to use as the eval input.
  • output_key: str
  • input_keys: list[str]

Methods

  • get_input_schema
  • prompt_length — Return the prompt length given the documents passed in.

Inherited from Chain

Attributes

  • memory: BaseMemory | None — Optional memory object.
  • callbacks: Callbacks
  • verbose: bool
  • tags: list[str] | None
  • metadata: dict[str, Any] | None
  • callback_manager: BaseCallbackManager | None — [DEPRECATED] Use callbacks instead.
  • input_keys: list[str]

Methods

  • get_input_schema
  • invoke
  • ainvoke
  • raise_callback_manager_deprecation — Raise a deprecation warning if callback_manager is used.
  • set_verbose — Set the chain verbosity.

Inherited from RunnableSerializable (langchain_core)

Attributes

  • name

Methods

  • to_json
  • configurable_fields
  • configurable_alternatives

Inherited from Serializable (langchain_core)

Attributes

  • lc_secrets
  • lc_attributes

Methods

  • is_lc_serializable
  • get_lc_namespace
  • lc_id
  • to_json
  • to_json_not_implemented

Inherited from Runnable (langchain_core)

Attributes

  • name
  • InputType
  • OutputType
  • input_schema
  • output_schema
  • config_specs

Methods

  • get_name
  • get_input_schema
  • get_input_jsonschema
  • get_output_jsonschema
  • config_schema

Attributes
    attribute
    llm_chain: LLMChain

    Chain to apply to each document individually.

    attribute
    reduce_documents_chain: BaseCombineDocumentsChain

Chain to use to reduce the results of applying llm_chain to each document. This is typically either a ReduceDocumentsChain or a StuffDocumentsChain.

    attribute
    document_variable_name: str

The variable name in the llm_chain under which to pass the documents. If the llm_chain has only one input variable, this need not be provided.
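As an illustration (a hypothetical sketch, not the library's internals), the document text is substituted into the llm_chain prompt under this variable name:

```python
# Hypothetical sketch of how document_variable_name slots a document's
# text into the llm_chain prompt; not the library's actual code.
def fill_prompt(template: str, document_variable_name: str, text: str) -> str:
    # The document text is passed under the configured variable name,
    # alongside any other prompt variables.
    return template.format(**{document_variable_name: text})

prompt = fill_prompt(
    "Summarize this content: {context}", "context", "LangChain is a framework."
)
# prompt == "Summarize this content: LangChain is a framework."
```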

    attribute
    return_intermediate_steps: bool

    Return the results of the map steps in the output.
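For illustration, a sketch of the output shape when this flag is set (values are invented; the key names assume the chain's default output_text key):

```python
# Illustrative output shape with return_intermediate_steps=True:
# the per-document map results are returned alongside the final text.
result = {
    "output_text": "Combined summary of all documents.",
    "intermediate_steps": [
        "Summary of document 1.",
        "Summary of document 2.",
    ],
}
```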

    attribute
    output_keys: list[str]

The expected output keys.

    attribute
    model_config
    attribute
    collapse_document_chain: BaseCombineDocumentsChain

    Kept for backward compatibility.

    attribute
    combine_document_chain: BaseCombineDocumentsChain

    Kept for backward compatibility.

    method
    get_output_schema
    method
    get_reduce_chain

    For backwards compatibility.

    method
    get_return_intermediate_steps

    For backwards compatibility.

    method
    get_default_document_variable_name

    Get default document variable name, if not provided.

    method
    combine_docs

    Combine documents in a map reduce manner.

    Combine by mapping first chain over all documents, then reducing the results. This reducing can be done recursively if needed (if there are many documents).
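The flow can be sketched in plain Python (a schematic only: map_fn and reduce_fn stand in for llm_chain and reduce_documents_chain, and a simple length check stands in for the token count that triggers recursive collapsing):

```python
def combine_docs(docs, map_fn, reduce_fn, token_max=1000):
    # Map step: run map_fn (the per-document chain) on each document.
    mapped = [map_fn(doc) for doc in docs]
    # Collapse step: if the mapped results are too long to reduce in one
    # call, reduce them in halves until they fit under token_max. Here
    # reduce_fn is assumed to shorten its inputs, as a summarizing chain
    # would; otherwise this loop would not terminate.
    while len(" ".join(mapped)) > token_max and len(mapped) > 1:
        mid = len(mapped) // 2
        mapped = [reduce_fn(mapped[:mid]), reduce_fn(mapped[mid:])]
    # Reduce step: one final call combines whatever is left.
    return reduce_fn(mapped)

summary = combine_docs(
    ["doc one text", "doc two text"],
    map_fn=lambda d: d.upper(),                 # stand-in for llm_chain
    reduce_fn=lambda parts: " | ".join(parts),  # stand-in for reduce chain
)
# summary == "DOC ONE TEXT | DOC TWO TEXT"
```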

    method
    acombine_docs

    Combine documents in a map reduce manner.

    Combine by mapping first chain over all documents, then reducing the results. This reducing can be done recursively if needed (if there are many documents).

    Combining documents by mapping a chain over them, then combining results.

    We first call llm_chain on each document individually, passing in the page_content and any other kwargs. This is the map step.

    We then process the results of that map step in a reduce step. This should likely be a ReduceDocumentsChain.

    Example:

    from langchain_classic.chains import (
        StuffDocumentsChain,
        LLMChain,
        ReduceDocumentsChain,
        MapReduceDocumentsChain,
    )
    from langchain_core.prompts import PromptTemplate
    from langchain_openai import OpenAI
    
    # This controls how each document will be formatted. Specifically,
    # it will be passed to `format_document` - see that function for more
    # details.
    document_prompt = PromptTemplate(
        input_variables=["page_content"], template="{page_content}"
    )
    document_variable_name = "context"
    model = OpenAI()
    # The prompt here should take as an input variable the
    # `document_variable_name`
    prompt = PromptTemplate.from_template("Summarize this content: {context}")
    llm_chain = LLMChain(llm=model, prompt=prompt)
    # We now define how to combine these summaries
    reduce_prompt = PromptTemplate.from_template(
        "Combine these summaries: {context}"
    )
    reduce_llm_chain = LLMChain(llm=model, prompt=reduce_prompt)
    combine_documents_chain = StuffDocumentsChain(
        llm_chain=reduce_llm_chain,
        document_prompt=document_prompt,
        document_variable_name=document_variable_name,
    )
    reduce_documents_chain = ReduceDocumentsChain(
        combine_documents_chain=combine_documents_chain,
    )
    chain = MapReduceDocumentsChain(
        llm_chain=llm_chain,
        reduce_documents_chain=reduce_documents_chain,
    )
    # If we wanted to, we could also pass in collapse_documents_chain
    # which is specifically aimed at collapsing documents BEFORE
    # the final call.
    prompt = PromptTemplate.from_template("Collapse this content: {context}")
    llm_chain = LLMChain(llm=model, prompt=prompt)
    collapse_documents_chain = StuffDocumentsChain(
        llm_chain=llm_chain,
        document_prompt=document_prompt,
        document_variable_name=document_variable_name,
    )
    reduce_documents_chain = ReduceDocumentsChain(
        combine_documents_chain=combine_documents_chain,
        collapse_documents_chain=collapse_documents_chain,
    )
    chain = MapReduceDocumentsChain(
        llm_chain=llm_chain,
        reduce_documents_chain=reduce_documents_chain,
    )
  • acall — Asynchronously execute the chain.
  • prep_outputs — Validate and prepare chain outputs, and save info about this run to memory.
  • aprep_outputs — Validate and prepare chain outputs, and save info about this run to memory.
  • prep_inputs — Prepare chain inputs, including adding inputs from memory.
  • aprep_inputs — Prepare chain inputs, including adding inputs from memory.
  • run — Convenience method for executing the chain.
  • arun — Convenience method for executing the chain.
  • dict — Return a dictionary representation of the chain.
  • save — Save the chain.
  • apply — Utilize the LLM generate method for speed gains.

  • get_config_jsonschema
  • get_graph
  • get_prompts
  • pipe
  • pick
  • assign
  • invoke
  • ainvoke
  • batch
  • batch_as_completed
  • abatch
  • abatch_as_completed
  • stream
  • astream
  • astream_log
  • astream_events
  • transform
  • atransform
  • bind
  • with_config
  • with_listeners
  • with_alisteners
  • with_types
  • with_retry
  • map
  • with_fallbacks
  • as_tool