LangChain Reference
langchain_classic.chains.conversational_retrieval.base.ConversationalRetrievalChain.from_llm
Method · Since v1.0

    from_llm

Convenience method to load a chain from an LLM and a retriever.

This provides the logic to create the question_generator chain as well as the combine_docs_chain.

from_llm(
  cls,
  llm: BaseLanguageModel,
  retriever: BaseRetriever,
  condense_question_prompt: BasePromptTemplate = CONDENSE_QUESTION_PROMPT,
  chain_type: str = 'stuff',
  verbose: bool = False,
  condense_question_llm: BaseLanguageModel | None = None,
  combine_docs_chain_kwargs: dict | None = None,
  callbacks: Callbacks = None,
  **kwargs: Any
) -> BaseConversationalRetrievalChain

Parameters

llm: BaseLanguageModel (required)

The default language model to use at every part of this chain (e.g. in both the question generation and the answering).

retriever: BaseRetriever (required)

The retriever used to fetch relevant documents.

condense_question_prompt: BasePromptTemplate (default: CONDENSE_QUESTION_PROMPT)

The prompt used to condense the chat history and a new question into a standalone question.

chain_type: str (default: 'stuff')

The chain type used to create the combine_docs_chain; passed through to load_qa_chain.

verbose: bool (default: False)

Verbosity flag for logging to stdout.

condense_question_llm: BaseLanguageModel | None (default: None)

The language model used to condense the chat history and a new question into a standalone question. If none is provided, defaults to llm.

combine_docs_chain_kwargs: dict | None (default: None)

Keyword arguments passed to load_qa_chain when constructing the combine_docs_chain.

callbacks: Callbacks (default: None)

Callbacks passed to all subchains.

**kwargs: Any

Additional parameters passed when initializing ConversationalRetrievalChain.
