Convenience method to load a chain from an LLM and retriever.
This provides some logic to create the question_generator chain
as well as the combine_docs_chain.
from_llm(
cls,
llm: BaseLanguageModel,
retriever: BaseRetriever,
condense_question_prompt: BasePromptTemplate = CONDENSE_QUESTION_PROMPT,
chain_type: str = 'stuff',
verbose: bool = False,
condense_question_llm: BaseLanguageModel | None = None,
combine_docs_chain_kwargs: dict | None = None,
callbacks: Callbacks = None,
**kwargs: Any,
) -> BaseConversationalRetrievalChain

| Name | Type | Description |
|---|---|---|
| llm* | BaseLanguageModel | The default language model to use at every part of this chain (e.g. in both the question generation and the answering). |
| retriever* | BaseRetriever | The retriever to use to fetch relevant documents. |
| condense_question_prompt | BasePromptTemplate | Default: CONDENSE_QUESTION_PROMPT. The prompt to use to condense the chat history and new question into a standalone question. |
| chain_type | str | Default: 'stuff'. The chain type to use to create the combine_docs_chain; will be sent to load_qa_chain. |
| verbose | bool | Default: False. Verbosity flag for logging to stdout. |
| condense_question_llm | BaseLanguageModel \| None | Default: None. The language model to use for condensing the chat history and new question into a standalone question. If none is provided, defaults to llm. |
| combine_docs_chain_kwargs | dict \| None | Default: None. Parameters to pass as kwargs to load_qa_chain when constructing the combine_docs_chain. |
| callbacks | Callbacks | Default: None. Callbacks to pass to all subchains. |
| kwargs | Any | Default: {}. Additional parameters to pass when initializing ConversationalRetrievalChain. |
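To make the wiring concrete, here is a simplified, hypothetical sketch of the logic this factory method applies: the question_generator chain is built from condense_question_llm, falling back to the main llm when none is given, while chain_type and combine_docs_chain_kwargs drive construction of the combine_docs_chain. The function and key names below are illustrative stand-ins, not the actual LangChain internals.

```python
# Hypothetical sketch (not the real LangChain implementation): shows how
# the from_llm arguments are combined. LLMs and the retriever are
# represented by plain strings for illustration.

def build_chain(llm, retriever, condense_question_llm=None,
                chain_type="stuff", combine_docs_chain_kwargs=None,
                **kwargs):
    combine_docs_chain_kwargs = combine_docs_chain_kwargs or {}
    # If no dedicated condensing model is provided, reuse the main llm
    # for the question_generator chain as well.
    question_llm = condense_question_llm or llm
    return {
        "question_generator_llm": question_llm,
        # chain_type and combine_docs_chain_kwargs together determine
        # how the combine_docs_chain is constructed.
        "combine_docs_chain": (chain_type, combine_docs_chain_kwargs),
        "retriever": retriever,
        # Any extra kwargs are passed through to chain initialization.
        **kwargs,
    }

# With no condense_question_llm, the main llm condenses the question.
chain = build_chain("main-llm", "my-retriever")

# A smaller model can be dedicated to question condensing.
cheap = build_chain("main-llm", "my-retriever",
                    condense_question_llm="small-llm")
```

Using a cheaper model for condense_question_llm is a common cost optimization, since rewriting the chat history into a standalone question is a simpler task than answering over retrieved documents.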