Configuration to load a "LabeledCriteriaEvalChain" evaluator, which prompts an LLM to determine whether the model's prediction complies with the provided criteria, and which also supplies a "ground truth" label for the evaluator to incorporate into its evaluation.

Optional criteria?: Criteria | Record<string, string>

The criteria to insert into the prompt template used for evaluation. See the prompt at https://smith.langchain.com/hub/langchain-ai/labeled-criteria for more information.

Optional llm?: BaseLanguageModelInterface

The language model to use as the evaluator; defaults to GPT-4.

@example
```ts
const evalConfig = {
  evaluators: [LabeledCriteria("correctness")],
};
```

@example
```ts
const evalConfig = {
  evaluators: [
    LabeledCriteria({
      "mentionsAllFacts": "Does the submission include all facts provided in the reference?"
    })
  ],
};
```
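The custom-criteria form of the `criteria` option is a plain map from a criterion name to the question the evaluator LLM is asked about each prediction. A minimal sketch of that shape (standalone TypeScript, no LangChain imports; the criterion names here are illustrative, not a fixed list of built-ins):

```typescript
// Each key names a criterion; each value is the question the evaluator
// LLM answers about the prediction (and, for labeled criteria, the
// reference output).
const customCriteria: Record<string, string> = {
  conciseness: "Is the submission concise and to the point?",
  mentionsAllFacts:
    "Does the submission include all facts provided in the reference?",
};

// Each entry becomes one criterion inserted into the evaluation prompt.
console.log(Object.keys(customCriteria).length);
```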