Python · langchain-classic · chains · combine_documents · refine · RefineDocumentsChain

Class · Since v1.0 · Deprecated

    RefineDocumentsChain

    RefineDocumentsChain()

Bases: BaseCombineDocumentsChain

Inherited from BaseCombineDocumentsChain

Attributes

    input_key: str | None — The input key under which the documents are passed to the chain.
    output_key: str
    input_keys: list[str]

Methods

    get_input_schema
    get_output_schema
    prompt_length — Return the prompt length given the documents passed in.
    —

    Return the prompt length given the documents passed in.

Inherited from Chain

Attributes

    memory: BaseMemory | None — Optional memory object.
    callbacks: Callbacks
    verbose: bool
    tags: list[str] | None
    metadata: dict[str, Any] | None
    callback_manager: BaseCallbackManager | None — [DEPRECATED] Use callbacks instead.
    input_keys: list[str]

Methods

    get_input_schema
    get_output_schema
    invoke
    ainvoke
    raise_callback_manager_deprecation — Raise deprecation warning if callback_manager is used.

Inherited from RunnableSerializable (langchain_core)

Attributes

    name

Methods

    to_json
    configurable_fields
    configurable_alternatives

Inherited from Serializable (langchain_core)

Attributes

    lc_secrets
    lc_attributes

Methods

    is_lc_serializable
    get_lc_namespace
    lc_id
    to_json
    to_json_not_implemented

Inherited from Runnable (langchain_core)

Attributes

    name
    InputType
    OutputType
    input_schema
    output_schema
    config_specs

Methods

    get_name
    get_input_schema
    get_input_jsonschema
    get_output_schema
    get_output_jsonschema

Attributes

attribute
    initial_llm_chain: LLMChain

    LLM chain to use on initial document.

    attribute
    refine_llm_chain: LLMChain

    LLM chain to use when refining.

    attribute
    document_variable_name: str

The variable name in the initial_llm_chain in which to put the documents. If the initial_llm_chain has only one input variable, this need not be provided.

    attribute
    initial_response_name: str

    The variable name to format the initial response in when refining.

    attribute
    document_prompt: BasePromptTemplate

Prompt used to format each document; it is passed to format_document.

    attribute
    return_intermediate_steps: bool

    Return the results of the refine steps in the output.

attribute
output_keys: list[str]

The output keys returned by the chain.

    attribute
    model_config
    method
    get_return_intermediate_steps

    For backwards compatibility.

    method
    get_default_document_variable_name

    Get default document variable name, if not provided.

method
combine_docs

Combine documents by running the initial chain on the first document, then refining the result with each remaining document.

method
acombine_docs

Async version of combine_docs: an initial pass followed by sequential refine steps.

    Combine documents by doing a first pass and then refining on more documents.

    This algorithm first calls initial_llm_chain on the first document, passing that first document in with the variable name document_variable_name, and produces a new variable with the variable name initial_response_name.

    Then, it loops over every remaining document. This is called the "refine" step. It calls refine_llm_chain, passing in that document with the variable name document_variable_name as well as the previous response with the variable name initial_response_name.

    Example:

    from langchain_classic.chains import RefineDocumentsChain, LLMChain
    from langchain_core.prompts import PromptTemplate
    from langchain_openai import OpenAI
    
    # This controls how each document will be formatted. Specifically,
    # it will be passed to `format_document` - see that function for more
    # details.
    document_prompt = PromptTemplate(
        input_variables=["page_content"], template="{page_content}"
    )
    document_variable_name = "context"
    model = OpenAI()
    # The prompt here should take as an input variable the
    # `document_variable_name`
    prompt = PromptTemplate.from_template("Summarize this content: {context}")
    initial_llm_chain = LLMChain(llm=model, prompt=prompt)
    initial_response_name = "prev_response"
    # The prompt here should take as an input variable the
    # `document_variable_name` as well as `initial_response_name`
    prompt_refine = PromptTemplate.from_template(
        "Here's your first summary: {prev_response}. "
        "Now add to it based on the following context: {context}"
    )
    refine_llm_chain = LLMChain(llm=model, prompt=prompt_refine)
    chain = RefineDocumentsChain(
        initial_llm_chain=initial_llm_chain,
        refine_llm_chain=refine_llm_chain,
        document_prompt=document_prompt,
        document_variable_name=document_variable_name,
        initial_response_name=initial_response_name,
    )
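Stripped of the chain machinery, the combine step reduces to a fold over the documents. A minimal plain-Python sketch of that loop, where `call_llm` is a hypothetical stand-in for the two LLM chains and the prompt strings mirror the templates in the example above:

```python
# Minimal sketch of the refine algorithm. `call_llm` is a hypothetical
# stand-in for initial_llm_chain / refine_llm_chain, not a LangChain API.

def refine_combine(docs: list[str], call_llm) -> str:
    """First pass on the first document, then refine with each remaining one."""
    # Initial step: run the "initial" prompt on the first document only.
    response = call_llm(f"Summarize this content: {docs[0]}")
    # Refine steps: fold each remaining document into the running response,
    # passing the previous response under `initial_response_name`.
    for doc in docs[1:]:
        response = call_llm(
            f"Here's your first summary: {response}. "
            f"Now add to it based on the following context: {doc}"
        )
    return response
```

With return_intermediate_steps enabled, the real chain additionally records each intermediate `response` produced by this loop.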
Inherited methods (continued)

    set_verbose — Set the chain verbosity.
    acall — Asynchronously execute the chain.
    prep_outputs — Validate and prepare chain outputs, and save info about this run to memory.
    aprep_outputs — Validate and prepare chain outputs, and save info about this run to memory.
    prep_inputs — Prepare chain inputs, including adding inputs from memory.
    aprep_inputs — Prepare chain inputs, including adding inputs from memory.
    run — Convenience method for executing the chain.
    arun — Convenience method for executing the chain.
    dict — Return a dictionary representation of the chain.
    save — Save the chain.
    apply — Utilize the LLM generate method for speed gains.
    config_schema
    get_config_jsonschema
    get_graph
    get_prompts
    pipe
    pick
    assign
    invoke
    ainvoke
    batch
    batch_as_completed
    abatch
    abatch_as_completed
    stream
    astream
    astream_log
    astream_events
    transform
    atransform
    bind
    with_config
    with_listeners
    with_alisteners
    with_types
    with_retry
    map
    with_fallbacks
    as_tool