langchain_classic.chains.combine_documents.map_rerank.MapRerankDocumentsChain

Class ● Since v1.0 ● Deprecated

MapRerankDocumentsChain

MapRerankDocumentsChain()

Bases: BaseCombineDocumentsChain


Inherited from BaseCombineDocumentsChain

Attributes

input_key: str | None
    The key from the model Run's inputs to use as the eval input.
output_key: str
input_keys: list[str]

Methods

get_input_schema
prompt_length
    Return the prompt length given the documents passed in.

Inherited from Chain

Attributes

memory: BaseMemory | None
    Optional memory object.
callbacks: Callbacks
verbose: bool
tags: list[str] | None
metadata: dict[str, Any] | None
callback_manager: BaseCallbackManager | None
    [DEPRECATED] Use callbacks instead.
input_keys: list[str]

Methods

get_input_schema
invoke
ainvoke
raise_callback_manager_deprecation
    Raise deprecation warning if callback_manager is used.
set_verbose
    Set the chain verbosity.
acall
    Asynchronously execute the chain.
prep_outputs
    Validate and prepare chain outputs, and save info about this run to memory.
aprep_outputs
    Validate and prepare chain outputs, and save info about this run to memory.
prep_inputs
    Prepare chain inputs, including adding inputs from memory.
aprep_inputs
    Prepare chain inputs, including adding inputs from memory.
run
    Convenience method for executing the chain.
arun
    Convenience method for executing the chain.
dict
    Return a dictionary representation of the chain.
save
    Save the chain.
apply
    Utilize the LLM generate method for speed gains.

Inherited from RunnableSerializable (langchain_core)

Attributes

name

Methods

to_json
configurable_fields
configurable_alternatives

Inherited from Serializable (langchain_core)

Attributes

lc_secrets
lc_attributes

Methods

is_lc_serializable
get_lc_namespace
lc_id
to_json
to_json_not_implemented

Inherited from Runnable (langchain_core)

Attributes

name
InputType
OutputType
input_schema
output_schema
config_specs

Methods

get_name
get_input_schema
get_input_jsonschema
get_output_jsonschema
config_schema
get_config_jsonschema
get_graph
get_prompts
pipe
pick
assign
invoke
ainvoke
batch
batch_as_completed
abatch
abatch_as_completed
stream
astream
astream_log
astream_events
transform
atransform
bind
with_config
with_listeners
with_alisteners
with_types
with_retry
map
with_fallbacks
as_tool
Attributes

llm_chain: LLMChain
    Chain to apply to each document individually.
document_variable_name: str
    The variable name in the llm_chain to put the documents in. If the llm_chain has only one input variable, this need not be provided.
rank_key: str
    Key in the output of llm_chain to rank on.
answer_key: str
    Key in the output of llm_chain to return as the answer.
metadata_keys: list[str] | None
    Additional metadata from the chosen document to return.
return_intermediate_steps: bool
    Return intermediate steps. Intermediate steps include the results of calling llm_chain on each document.
model_config
output_keys: list[str]
    Expect input key.
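
The two optional attributes above are easiest to see in code. A minimal sketch (not from the upstream docs), assuming an `llm_chain` built as in the example later on this page and documents that carry a hypothetical "source" metadata field:

    # Hedged sketch: `llm_chain` is assumed to come from the example below;
    # "source" is a hypothetical metadata field on the input documents.
    chain_with_extras = MapRerankDocumentsChain(
        llm_chain=llm_chain,
        document_variable_name="context",
        rank_key="score",
        answer_key="answer",
        metadata_keys=["source"],        # copy "source" from the chosen document
        return_intermediate_steps=True,  # keep every per-document llm_chain result
    )
    # The extras appear as additional output keys alongside the answer text.
    print(chain_with_extras.output_keys)
    # e.g. ['output_text', 'intermediate_steps', 'source']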

Methods

get_output_schema
validate_llm_output
    Validate that the combine chain outputs a dictionary.
get_default_document_variable_name
    Get the default document variable name, if not provided.
combine_docs
    Combine documents in a map-rerank manner: map llm_chain over all documents, then rerank the results.
acombine_docs
    Combine documents in a map-rerank manner: map llm_chain over all documents, then rerank the results.

Combining documents by mapping a chain over them, then reranking results.

This algorithm calls an LLMChain on each input document. The LLMChain is expected to have an OutputParser that parses the result into both an answer (answer_key) and a score (rank_key). The answer with the highest score is then returned.

Example:

    from langchain_classic.chains import MapRerankDocumentsChain, LLMChain
    from langchain_core.prompts import PromptTemplate
    from langchain_openai import OpenAI
    from langchain_classic.output_parsers.regex import RegexParser
    
    document_variable_name = "context"
    model = OpenAI()
    # The prompt here should take as an input variable the
    # `document_variable_name`
    # The actual prompt will need to be a lot more complex, this is just
    # an example.
    prompt_template = (
        "Use the following context to tell me the chemical formula "
        "for water. Output both your answer and a score of how confident "
        "you are. Context: {context}"
    )
    output_parser = RegexParser(
        regex=r"(.*?)\nScore: (.*)",
        output_keys=["answer", "score"],
    )
    prompt = PromptTemplate(
        template=prompt_template,
        input_variables=["context"],
        output_parser=output_parser,
    )
    llm_chain = LLMChain(llm=model, prompt=prompt)
    chain = MapRerankDocumentsChain(
        llm_chain=llm_chain,
        document_variable_name=document_variable_name,
        rank_key="score",
        answer_key="answer",
    )
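
A hedged usage sketch (not part of the original example): it reuses the `chain` built above and runs it over two toy documents. combine_docs maps llm_chain over each document, parses an answer and a score from each result, and returns the highest-scoring answer together with a dict of any extra outputs; acombine_docs is the async equivalent.

    from langchain_core.documents import Document

    docs = [
        Document(page_content="Water is commonly written as H2O."),
        Document(page_content="Table salt is NaCl."),
    ]
    # Returns (best_answer_text, extra_outputs) after reranking by "score".
    answer, extra = chain.combine_docs(docs)
    print(answer)
    # Async twin: answer, extra = await chain.acombine_docs(docs)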
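
Because the chain is also a Runnable, the inherited invoke / ainvoke / batch / stream methods listed under "Inherited from Runnable" above apply as well. A minimal sketch, assuming the `chain` and `docs` from the snippets above, and relying on "input_documents" and "output_text" being the chain's default input and output keys:

    # Hedged sketch of the Runnable-style interface, reusing `chain` and `docs`.
    result = chain.invoke({"input_documents": docs})
    print(result["output_text"])  # the highest-scoring answer
    # If return_intermediate_steps=True was set, the per-document results are
    # available as result["intermediate_steps"].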