Load QA Eval Chain from LLM.
```python
from_llm(
    cls,
    llm: BaseLanguageModel,
    prompt: PromptTemplate | None = None,
    **kwargs: Any,
) -> ContextQAEvalChain
```

| Name | Type | Description |
|---|---|---|
| llm* | BaseLanguageModel | The base language model to use. |
| prompt | PromptTemplate \| None | A prompt template containing the input variables `query`, `context`, and `result`, used as the prompt for evaluation. Defaults to None, in which case the chain's built-in evaluation prompt is used. |
| **kwargs | Any | Additional keyword arguments. |
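
A minimal usage sketch, assuming `langchain` and `langchain-community` are installed. `FakeListLLM` stands in for a real model so the snippet runs without API keys; any `BaseLanguageModel` can be used instead, and the example/prediction keys shown follow the chain's default key names (`query`, `context`, `result`).

```python
from langchain_community.llms import FakeListLLM
from langchain.evaluation.qa import ContextQAEvalChain

# FakeListLLM replays canned responses; swap in a real BaseLanguageModel in practice.
llm = FakeListLLM(responses=["CORRECT"])

# No prompt is passed, so the chain falls back to its built-in evaluation prompt.
eval_chain = ContextQAEvalChain.from_llm(llm=llm)

examples = [{"query": "What color is the sky?", "context": "The sky is blue."}]
predictions = [{"result": "The sky appears blue."}]

# Grades each prediction against its context; returns one grading dict per example.
graded = eval_chain.evaluate(examples, predictions)
print(graded)
```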