# RefineDocumentsChain

> **Class** in `langchain_classic`

📖 [View in docs](https://reference.langchain.com/python/langchain-classic/chains/combine_documents/refine/RefineDocumentsChain)

Combine documents by running an initial pass on the first document and then iteratively refining the result with each remaining document.

This algorithm first calls `initial_llm_chain` on the first document, passing it in
under the variable name `document_variable_name`, and stores the result under the
variable name `initial_response_name`.

It then loops over every remaining document in a "refine" step: for each one, it
calls `refine_llm_chain`, passing in that document under `document_variable_name`
together with the previous response under `initial_response_name`.
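The two-step flow above can be sketched in plain Python. This is a minimal illustration with stub callables standing in for the LLM calls; the real chain additionally formats each document through `document_prompt` and delegates to the configured `LLMChain`s:

```python
def refine_combine(docs, initial_call, refine_call):
    """Sketch of the refine strategy: process the first document,
    then fold each remaining document into the running response."""
    # Initial pass: only the first document is passed in.
    response = initial_call({"context": docs[0]})
    # Refine step: each later document is combined with the prior response.
    for doc in docs[1:]:
        response = refine_call({"context": doc, "prev_response": response})
    return response


# Stub "LLM" calls that just echo their inputs, for illustration only.
initial = lambda v: f"summary({v['context']})"
refine = lambda v: f"{v['prev_response']} + {v['context']}"

result = refine_combine(["doc1", "doc2", "doc3"], initial, refine)
# result == "summary(doc1) + doc2 + doc3"
```

Note that the chain is sequential by construction: each refine call depends on the previous response, so documents cannot be processed in parallel.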

## Signature

```python
RefineDocumentsChain()
```

## Description

**Example:**

```python
from langchain_classic.chains import RefineDocumentsChain, LLMChain
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI

# This controls how each document will be formatted. Specifically,
# it will be passed to `format_document` - see that function for more
# details.
document_prompt = PromptTemplate(
    input_variables=["page_content"], template="{page_content}"
)
document_variable_name = "context"
model = OpenAI()
# The prompt here should take as an input variable the
# `document_variable_name`
prompt = PromptTemplate.from_template("Summarize this content: {context}")
initial_llm_chain = LLMChain(llm=model, prompt=prompt)
initial_response_name = "prev_response"
# The prompt here should take as an input variable the
# `document_variable_name` as well as `initial_response_name`
prompt_refine = PromptTemplate.from_template(
    "Here's your first summary: {prev_response}. "
    "Now add to it based on the following context: {context}"
)
refine_llm_chain = LLMChain(llm=model, prompt=prompt_refine)
chain = RefineDocumentsChain(
    initial_llm_chain=initial_llm_chain,
    refine_llm_chain=refine_llm_chain,
    document_prompt=document_prompt,
    document_variable_name=document_variable_name,
    initial_response_name=initial_response_name,
)
```

## Extends

- `BaseCombineDocumentsChain`

## Properties

- `initial_llm_chain`
- `refine_llm_chain`
- `document_variable_name`
- `initial_response_name`
- `document_prompt`
- `return_intermediate_steps`
- `output_keys`
- `model_config`

## Methods

- [`get_return_intermediate_steps()`](https://reference.langchain.com/python/langchain-classic/chains/combine_documents/refine/RefineDocumentsChain/get_return_intermediate_steps)
- [`get_default_document_variable_name()`](https://reference.langchain.com/python/langchain-classic/chains/combine_documents/refine/RefineDocumentsChain/get_default_document_variable_name)
- [`combine_docs()`](https://reference.langchain.com/python/langchain-classic/chains/combine_documents/refine/RefineDocumentsChain/combine_docs)
- [`acombine_docs()`](https://reference.langchain.com/python/langchain-classic/chains/combine_documents/refine/RefineDocumentsChain/acombine_docs)

## ⚠️ Deprecated

Deprecated since version 0.3.1.

---

[View source on GitHub](https://github.com/langchain-ai/langchain/blob/ee95ad6907f5eab94644183393a20aa2a032bb19/libs/langchain/langchain_classic/chains/combine_documents/refine.py#L24)