LangChain Reference
    Python › langchain-classic › chains › summarize › chain
    Module · Since v1.0

    chain

    Load summarizing chains.

    Used in Docs

    • Build a semantic search engine with LangChain
    • Trace LangChain applications (Python and JS/TS)
    • AgentQL integration
    • Anchor browser integration
    • Dappier integration

    Functions

    function
    load_summarize_chain

    Load summarizing chain.
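    The function selects one of three strategies via its chain_type argument: "stuff", "map_reduce", or "refine". A minimal pure-Python sketch of that dispatch, with a plain str -> str callable standing in for the LLM (all names here are illustrative, not the real API):

```python
# Illustrative sketch only: the real load_summarize_chain returns a
# BaseCombineDocumentsChain; here "llm" is any str -> str callable.
def load_summarize_chain_sketch(llm, chain_type="stuff"):
    def stuff(docs):
        # Stuff: join every document into one prompt, summarize once.
        return llm("Summarize:\n" + "\n\n".join(docs))

    def map_reduce(docs):
        # Map: summarize each document; reduce: summarize the summaries.
        partials = [llm("Summarize:\n" + d) for d in docs]
        return llm("Combine:\n" + "\n".join(partials))

    def refine(docs):
        # Refine: start from the first document, fold in the rest.
        summary = llm("Summarize:\n" + docs[0])
        for d in docs[1:]:
            summary = llm("Refine " + repr(summary) + " with:\n" + d)
        return summary

    strategies = {"stuff": stuff, "map_reduce": map_reduce, "refine": refine}
    if chain_type not in strategies:
        raise ValueError(f"Got unsupported chain type: {chain_type}")
    return strategies[chain_type]
```

    Each strategy trades context usage against call count, which is why the classes below exist as separate chains.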

    Classes

    class
    BaseCombineDocumentsChain

    Base interface for chains combining documents.

    Subclasses of this chain combine documents in a variety of ways. This base class exists to add some uniformity to the interface these chains expose. Namely, they expect an input key for the documents to use (default input_documents), and they expose a method to calculate the length of a prompt from a list of documents, which lets outside callers determine whether it is safe to pass the documents into this chain or whether they would exceed the context length.
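    As a rough pure-Python analogue of that contract (illustrative names; the real class is a pydantic Chain and measures prompt length in tokens, not characters):

```python
from abc import ABC, abstractmethod

class CombineDocumentsSketch(ABC):
    """Sketch of the combine-documents contract described above."""

    input_key = "input_documents"  # the default input key

    @abstractmethod
    def combine_docs(self, docs: list[str]) -> str:
        """Combine the documents into a single output string."""

    def prompt_length(self, docs: list[str]) -> int:
        # Lets callers check a document list against the model's context
        # window before invoking the chain. Characters stand in for tokens.
        return sum(len(d) for d in docs)

class JoinDocsSketch(CombineDocumentsSketch):
    def combine_docs(self, docs):
        return "\n\n".join(docs)
```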

    class
    LoadingCallable

    Interface for loading the combine documents chain.

    deprecated class
    MapReduceDocumentsChain

    Combine documents by mapping a chain over them, then combining the results.

    We first call llm_chain on each document individually, passing in the page_content and any other kwargs. This is the map step.

    We then process the results of that map step in a reduce step. This should likely be a ReduceDocumentsChain.
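    The two steps can be sketched in plain Python (illustrative names; the real chain composes LLMChain and ReduceDocumentsChain objects):

```python
# Sketch of the map-then-reduce flow, with plain callables standing in
# for the wrapped chains.
def map_reduce_sketch(llm_chain, reduce_chain, docs, **kwargs):
    # Map step: run llm_chain on each document's page_content, plus any
    # extra inputs shared by all documents.
    mapped = [llm_chain(page_content=d, **kwargs) for d in docs]
    # Reduce step: combine the per-document results into one output.
    return reduce_chain(mapped)
```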

    deprecated class
    ReduceDocumentsChain

    Combine documents by recursively reducing them.

    This involves two chains:

    • combine_documents_chain
    • collapse_documents_chain

    combine_documents_chain is ALWAYS provided. This is the final chain that is called.

    We pass all previous results to this chain, and the output of this chain is returned as a final result.

    collapse_documents_chain is used if there are too many documents to pass to combine_documents_chain in one go. In that case, collapse_documents_chain is called recursively on the largest groups of documents that fit.
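    A sketch of that recursive collapse under simple assumptions: length is measured in characters, grouping is greedy, and each collapse call must shrink its group or the loop would never terminate. All names are illustrative.

```python
# combine: called once at the end on documents that fit within max_len.
# collapse: called on each oversized group to shrink it to one document.
def reduce_docs_sketch(combine, collapse, docs, max_len,
                       length=lambda ds: sum(map(len, ds))):
    while length(docs) > max_len:
        # Greedily build groups that each fit within max_len, collapse
        # each group to a single shorter document, repeat if still too long.
        groups, current = [], []
        for d in docs:
            if current and length(current + [d]) > max_len:
                groups.append(current)
                current = []
            current.append(d)
        groups.append(current)
        docs = [collapse(g) for g in groups]
    return combine(docs)  # combine_documents_chain is always the final call
```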

    deprecated class
    RefineDocumentsChain

    Combine documents by doing a first pass and then refining on more documents.

    This algorithm first calls initial_llm_chain on the first document, passing that first document in with the variable name document_variable_name, and produces a new variable with the variable name initial_response_name.

    Then, it loops over every remaining document. This is called the "refine" step. It calls refine_llm_chain, passing in that document with the variable name document_variable_name as well as the previous response with the variable name initial_response_name.
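    The loop itself is simple; a sketch with plain callables standing in for the two LLM chains (the keyword names here are illustrative, not the configurable document_variable_name / initial_response_name):

```python
# initial_chain: produces the first response from the first document.
# refine_chain: folds each remaining document into the running response.
def refine_sketch(initial_chain, refine_chain, docs):
    response = initial_chain(document=docs[0])
    for doc in docs[1:]:
        response = refine_chain(document=doc, existing_answer=response)
    return response
```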

    deprecated class
    StuffDocumentsChain

    Chain that combines documents by stuffing into context.

    This chain takes a list of documents and first combines them into a single string. It does this by formatting each document into a string with the document_prompt and then joining them together with document_separator. It then adds that new string to the inputs with the variable name set by document_variable_name. Those inputs are then passed to the llm_chain.
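    A sketch of that flow, with a callable standing in for llm_chain (the parameter defaults here are illustrative):

```python
# Format each document, join with a separator, and pass the single
# "stuffed" string to the LLM under document_variable_name.
def stuff_sketch(llm_chain, docs, document_prompt="{page_content}",
                 document_separator="\n\n", document_variable_name="text"):
    formatted = [document_prompt.format(page_content=d) for d in docs]
    stuffed = document_separator.join(formatted)
    # The real chain adds `stuffed` to its other inputs under this key.
    return llm_chain(**{document_variable_name: stuffed})
```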

    deprecated class
    LLMChain

    Chain to run queries against LLMs.

    This class is deprecated. See below for an example implementation using LangChain runnables:

    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import PromptTemplate
    from langchain_openai import OpenAI
    
    prompt_template = "Tell me a {adjective} joke"
    prompt = PromptTemplate(input_variables=["adjective"], template=prompt_template)
    model = OpenAI()
    chain = prompt | model | StrOutputParser()
    
    # A dict keyed by the prompt's input variable is the canonical input;
    # a bare string is also accepted when the prompt has exactly one variable.
    chain.invoke({"adjective": "funny"})

    Modules

    module
    map_reduce_prompt
    module
    refine_prompts
    module
    stuff_prompt