# CriteriaEvalChain

> **Class** in `langchain_classic`

📖 [View in docs](https://reference.langchain.com/python/langchain-classic/evaluation/criteria/eval_chain/CriteriaEvalChain)

LLM Chain for evaluating runs against criteria.

Parameters
----------
llm : BaseLanguageModel
    The language model to use for evaluation.
criteria : Union[Mapping[str, str], str]
    The criteria or rubric to evaluate the runs against. It can be a mapping of
    criterion name to its description, or a single criterion name.
prompt : Optional[BasePromptTemplate], default=None
    The prompt template to use for evaluation. If not provided, a default
    template is selected based on the value of `requires_reference`; see the
    custom-prompt sketch under Examples.
requires_reference : bool, default=False
    Whether the evaluation requires a reference text. If `True`, the
    `PROMPT_WITH_REFERENCES` template will be used, which includes the
    reference labels in the prompt. Otherwise, the `PROMPT` template will be
    used, which is a reference-free prompt.
**kwargs : Any
    Additional keyword arguments to pass to the `LLMChain` constructor.

Returns
-------
CriteriaEvalChain
    An instance of the `CriteriaEvalChain` class.

Examples
--------
>>> from langchain_anthropic import ChatAnthropic
>>> from langchain_classic.evaluation.criteria import CriteriaEvalChain
>>> model = ChatAnthropic(model="claude-3-opus-20240229", temperature=0)
>>> criteria = {"my-custom-criterion": "Is the submission the most amazing ever?"}
>>> evaluator = CriteriaEvalChain.from_llm(llm=model, criteria=criteria)
>>> evaluator.evaluate_strings(
...     prediction="Imagine an ice cream flavor for the color aquamarine",
...     input="Tell me an idea",
... )
{
    'reasoning': 'Here is my step-by-step reasoning for the given criteria:\n\nThe criterion is: "Is the submission the most amazing ever?" This is a subjective criterion and open to interpretation. The submission suggests an aquamarine-colored ice cream flavor which is creative but may or may not be considered the most amazing idea ever conceived. There are many possible amazing ideas and this one ice cream flavor suggestion may or may not rise to that level for every person. \n\nN',
    'value': 'N',
    'score': 0,
}

>>> from langchain_openai import ChatOpenAI
>>> from langchain_classic.evaluation.criteria import LabeledCriteriaEvalChain
>>> model = ChatOpenAI(model="gpt-4", temperature=0)
>>> criteria = "correctness"
>>> evaluator = LabeledCriteriaEvalChain.from_llm(
...     llm=model,
...     criteria=criteria,
... )
>>> evaluator.evaluate_strings(
...     prediction="The answer is 4",
...     input="How many apples are there?",
...     reference="There are 3 apples",
... )
{
    'score': 0,
    'reasoning': 'The criterion for this task is the correctness of the submission. The submission states that there are 4 apples, but the reference indicates that there are actually 3 apples. Therefore, the submission is not correct, accurate, or factual according to the given criterion.\n\nN',
    'value': 'N',
}
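
A custom `prompt` can also be supplied. The following sketch is illustrative
rather than part of the documented examples; it assumes the template must
expose the same input variables as the default `PROMPT` (`input`, `output`,
and `criteria`) and end with a `Y` or `N` verdict on the last line for the
output parser to read.

>>> from langchain_core.prompts import PromptTemplate
>>> custom_prompt = PromptTemplate.from_template(
...     "Grade the submission against the criteria. Explain your "
...     "reasoning, then answer Y or N on the last line.\n"
...     "Criteria: {criteria}\nInput: {input}\nSubmission: {output}"
... )
>>> evaluator = CriteriaEvalChain.from_llm(
...     llm=model,
...     criteria={"concise": "Is the submission concise?"},
...     prompt=custom_prompt,
... )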

## Signature

```python
CriteriaEvalChain()
```

## Extends

- `StringEvaluator`
- `LLMEvalChain`
- `LLMChain`

## Properties

- `output_parser`
- `criterion_name`
- `output_key`
- `model_config`
- `requires_reference`
- `requires_input`
- `evaluation_name`

## Methods

- [`is_lc_serializable()`](https://reference.langchain.com/python/langchain-classic/evaluation/criteria/eval_chain/CriteriaEvalChain/is_lc_serializable)
- [`resolve_criteria()`](https://reference.langchain.com/python/langchain-classic/evaluation/criteria/eval_chain/CriteriaEvalChain/resolve_criteria) (usage sketch below)
- [`from_llm()`](https://reference.langchain.com/python/langchain-classic/evaluation/criteria/eval_chain/CriteriaEvalChain/from_llm)
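
Of these, `resolve_criteria` normalizes the `criteria` argument into a mapping
of criterion name to description. A minimal sketch, assuming it accepts the
same criteria values as `from_llm`:

```python
from langchain_classic.evaluation.criteria import CriteriaEvalChain

# A bare string is resolved to a built-in criterion and its description.
CriteriaEvalChain.resolve_criteria("helpfulness")
# -> {'helpfulness': '<built-in description of the helpfulness criterion>'}

# A mapping of name to description is returned unchanged.
CriteriaEvalChain.resolve_criteria({"concise": "Is the submission concise?"})
# -> {'concise': 'Is the submission concise?'}
```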

---

[View source on GitHub](https://github.com/langchain-ai/langchain/blob/fb6ab993a73180538f6cca876b3c85d46c08845f/libs/langchain/langchain_classic/evaluation/criteria/eval_chain.py#L162)