QAEvalChain()
Whether to return only the final parsed result.
Generate LLM result from inputs.
Prepare prompts from inputs.
Utilize the LLM generate method for speed gains.
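The speed gain from the generate method comes from batching: one call over N prompts instead of N separate calls, amortizing per-call overhead. A minimal sketch with a stand-in batch function (not LangChain's actual implementation):

```python
# Sketch of batched generation: a single call processes every prompt.
# generate_batch is a toy stand-in for a real LLM backend, which could
# handle the whole batch in one round trip.

def generate_batch(prompts: list[str]) -> list[str]:
    # One invocation returns a completion per prompt, in order.
    return [f"answer to: {p}" for p in prompts]

prompts = ["q1", "q2", "q3"]
outputs = generate_batch(prompts)  # single call covers all three prompts
print(outputs)  # ['answer to: q1', 'answer to: q2', 'answer to: q3']
```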
Create outputs from response.
Format prompt with kwargs and pass to LLM.
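"Format prompt with kwargs and pass to LLM" can be sketched with a stub model; the `StubLLM` class and standalone `predict` function below are illustrative stand-ins, not LangChain's interface:

```python
# Minimal sketch of predict(): fill a prompt template with keyword
# arguments, then pass the formatted string to the LLM.

class StubLLM:
    """Toy LLM that echoes its prompt; stands in for a real model."""
    def __call__(self, prompt: str) -> str:
        return f"echo: {prompt}"

def predict(llm, template: str, **kwargs) -> str:
    # e.g. "Tell me a joke about {topic}" + topic="cats"
    prompt = template.format(**kwargs)
    return llm(prompt)

print(predict(StubLLM(), "Tell me a joke about {topic}", topic="cats"))
# -> echo: Tell me a joke about cats
```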
Call predict and then parse the results.
Call apredict and then parse the results.
Call apply and then parse the results.
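Apply-then-parse runs the chain over a list of input dicts and feeds each raw completion through an output parser. A sketch under stated assumptions: the LLM and the comma-separated-list parser below are stand-ins, not LangChain's classes:

```python
# Sketch of apply-and-parse: map the chain over many inputs, then parse
# each raw completion into a structured value.

def fake_llm(prompt: str) -> str:
    # Stand-in model: pretends every answer is a comma-separated list.
    return "red, green, blue"

def parse_csv(text: str) -> list[str]:
    return [item.strip() for item in text.split(",")]

def apply_and_parse(llm, template, input_list, parser):
    results = []
    for inputs in input_list:
        raw = llm(template.format(**inputs))
        results.append(parser(raw))
    return results

out = apply_and_parse(
    fake_llm, "List colors of {thing}", [{"thing": "pixels"}], parse_csv
)
print(out)  # [['red', 'green', 'blue']]
```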
Create LLMChain from LLM and template.
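The one-step constructor described above (in LangChain, `LLMChain.from_string(llm, template)`) builds a chain directly from an LLM and a template string. The `SimpleChain` class below is an illustrative sketch of that pattern, not the library's code:

```python
# Sketch of from_string-style construction: a classmethod that wires an
# LLM and a template string into a ready-to-use chain.

class SimpleChain:
    def __init__(self, llm, template: str):
        self.llm = llm
        self.template = template

    @classmethod
    def from_string(cls, llm, template: str) -> "SimpleChain":
        # Convenience constructor mirroring the one-liner described above.
        return cls(llm, template)

    def predict(self, **kwargs) -> str:
        return self.llm(self.template.format(**kwargs))

chain = SimpleChain.from_string(lambda p: p.upper(), "hello {name}")
print(chain.predict(name="ada"))  # HELLO ADA
```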
Optional memory object.
Optional list of callback handlers (or callback manager).
Whether or not run in verbose mode. In verbose mode, some intermediate logs will be printed to the console.
Optional list of tags associated with the chain.
Optional metadata associated with the chain.
[DEPRECATED] Use callbacks instead.
Keys expected to be in the chain input.
Keys expected to be in the chain output.
LLM Chain for evaluating question answering.
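QA evaluation of this kind asks a grader LLM whether each predicted answer matches the reference answer for its question. A minimal sketch of the idea, assuming a deterministic stub grader in place of a real model (the template wording and grading rule are illustrative, not QAEvalChain's):

```python
# Sketch of QA evaluation: format question / true answer / student answer
# into a grading prompt and have a grader decide CORRECT or INCORRECT.

GRADE_TEMPLATE = (
    "Question: {query}\nTrue answer: {answer}\n"
    "Student answer: {result}\nGrade (CORRECT/INCORRECT):"
)

def stub_grader(prompt: str) -> str:
    # Toy rule: correct if the true answer appears in the student answer.
    fields = dict(
        line.split(": ", 1) for line in prompt.splitlines() if ": " in line
    )
    truth = fields["True answer"].lower()
    student = fields["Student answer"].lower()
    return "CORRECT" if truth in student else "INCORRECT"

def evaluate(examples, predictions):
    grades = []
    for ex, pred in zip(examples, predictions):
        prompt = GRADE_TEMPLATE.format(
            query=ex["query"], answer=ex["answer"], result=pred["result"]
        )
        grades.append(stub_grader(prompt))
    return grades

grades = evaluate(
    [{"query": "Capital of France?", "answer": "Paris"}],
    [{"result": "It is Paris."}],
)
print(grades)  # ['CORRECT']
```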
Set the chain verbosity.