# LLMEvaluator

> **Class** in `langsmith`

📖 [View in docs](https://reference.langchain.com/python/langsmith/evaluation/llm_evaluator/LLMEvaluator)

A class for building LLM-as-a-judge evaluators.

> **Deprecated since 0.5.0:** `LLMEvaluator` is deprecated. Use [openevals](https://github.com/langchain-ai/openevals) instead.
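
A minimal migration sketch, following the quickstart in the openevals README (`create_llm_as_judge` and `CORRECTNESS_PROMPT` come from that repository, not from `langsmith`):

```python
# Sketch of the suggested replacement, per the openevals README.
# Requires: pip install openevals
from openevals.llm import create_llm_as_judge
from openevals.prompts import CORRECTNESS_PROMPT

# Build an LLM-as-a-judge evaluator from a prebuilt correctness prompt.
correctness_evaluator = create_llm_as_judge(
    prompt=CORRECTNESS_PROMPT,
    feedback_key="correctness",
    model="openai:o3-mini",
)

result = correctness_evaluator(
    inputs="How has the world's population changed since 2010?",
    outputs="It has grown by roughly a billion people since 2010.",
    reference_outputs="World population grew by about one billion from 2010 to 2024.",
)
```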

## Signature

```python
LLMEvaluator(
    self,
    *,
    prompt_template: Union[str, list[tuple[str, str]]],
    score_config: Union[CategoricalScoreConfig, ContinuousScoreConfig],
    map_variables: Optional[Callable[[Run, Optional[Example]], dict]] = None,
    model_name: str = 'gpt-4o',
    model_provider: str = 'openai',
    **kwargs,
)
```

## Parameters

| Name | Type | Required | Description |
|------|------|----------|-------------|
| `prompt_template` | `Union[str, list[tuple[str, str]]]` | Yes | The prompt template to use for the evaluation. If a string is provided, it is treated as a human/user message. |
| `score_config` | `Union[CategoricalScoreConfig, ContinuousScoreConfig]` | Yes | The configuration for the score, either categorical or continuous. |
| `map_variables` | `Optional[Callable[[Run, Optional[Example]], dict]]` | No | A function that maps the run and example to the variables in the prompt. If `None`, the prompt is assumed to require only `input`, `output`, and `expected`. (default: `None`) |
| `model_name` | `str` | No | The model to use for the evaluation. (default: `'gpt-4o'`) |
| `model_provider` | `str` | No | The model provider to use for the evaluation. (default: `'openai'`) |
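
To make the parameters concrete, here is a hedged usage sketch. It assumes `LLMEvaluator` and `CategoricalScoreConfig` are importable from `langsmith.evaluation.llm_evaluator`, and that the default variable mapping supplies `input`, `output`, and `expected` to the template, as described above:

```python
# A minimal sketch; the CategoricalScoreConfig fields used here
# (key, choices, description) are assumptions, not verified against
# this page.
from langsmith.evaluation.llm_evaluator import (
    CategoricalScoreConfig,
    LLMEvaluator,
)

evaluator = LLMEvaluator(
    # With no map_variables, the template may only use the default
    # variables: input, output, and expected.
    prompt_template=(
        "Given the question:\n{input}\n\n"
        "and the reference answer:\n{expected}\n\n"
        "grade this response:\n{output}"
    ),
    score_config=CategoricalScoreConfig(
        key="correctness",
        choices=["correct", "incorrect"],
        description="Whether the response matches the reference answer.",
    ),
    model_name="gpt-4o",      # default
    model_provider="openai",  # default
)
```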

## Extends

- `RunEvaluator`

## Constructors

```python
__init__(
    self,
    *,
    prompt_template: Union[str, list[tuple[str, str]]],
    score_config: Union[CategoricalScoreConfig, ContinuousScoreConfig],
    map_variables: Optional[Callable[[Run, Optional[Example]], dict]] = None,
    model_name: str = 'gpt-4o',
    model_provider: str = 'openai',
    **kwargs,
)
```

| Name | Type |
|------|------|
| `prompt_template` | `Union[str, list[tuple[str, str]]]` |
| `score_config` | `Union[CategoricalScoreConfig, ContinuousScoreConfig]` |
| `map_variables` | `Optional[Callable[[Run, Optional[Example]], dict]]` |
| `model_name` | `str` |
| `model_provider` | `str` |
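
When the prompt uses variable names other than the defaults, `map_variables` bridges the gap. A hedged sketch follows; the attribute access on `Run`/`Example` reflects common `langsmith` usage, and the `"question"`/`"answer"` payload keys are hypothetical:

```python
# Sketch: map a Run/Example pair onto custom prompt variable names.
from typing import Optional

from langsmith.evaluation.llm_evaluator import (
    CategoricalScoreConfig,
    LLMEvaluator,
)
from langsmith.schemas import Example, Run


def to_prompt_variables(run: Run, example: Optional[Example]) -> dict:
    # run.inputs / run.outputs and example.outputs are dict payloads;
    # the "question" / "answer" keys below are hypothetical.
    return {
        "question": run.inputs.get("question", ""),
        "prediction": (run.outputs or {}).get("answer", ""),
        "reference": (example.outputs or {}).get("answer", "") if example else "",
    }


evaluator = LLMEvaluator(
    prompt_template=(
        "Question: {question}\n"
        "Prediction: {prediction}\n"
        "Reference: {reference}\n"
        "Is the prediction correct?"
    ),
    score_config=CategoricalScoreConfig(
        key="correctness",
        choices=["correct", "incorrect"],
        description="Whether the prediction matches the reference.",
    ),
    map_variables=to_prompt_variables,
)
```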


## Methods

- [`from_model()`](https://reference.langchain.com/python/langsmith/evaluation/llm_evaluator/LLMEvaluator/from_model)
- [`evaluate_run()`](https://reference.langchain.com/python/langsmith/evaluation/llm_evaluator/LLMEvaluator/evaluate_run)
- [`aevaluate_run()`](https://reference.langchain.com/python/langsmith/evaluation/llm_evaluator/LLMEvaluator/aevaluate_run)
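
Because `LLMEvaluator` extends `RunEvaluator`, an instance can be passed to `langsmith.evaluation.evaluate` like any other run evaluator. A hedged sketch, where `evaluator` is an instance from the sketches above and the target function and dataset name are hypothetical:

```python
# Sketch: run the evaluator against a dataset experiment.
from langsmith.evaluation import evaluate

results = evaluate(
    lambda inputs: {"output": "stub answer"},  # hypothetical target system
    data="my-dataset",                         # hypothetical dataset name
    evaluators=[evaluator],                    # an LLMEvaluator instance
)
```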

---

[View source on GitHub](https://github.com/langchain-ai/langsmith-sdk/blob/fcda9320ff067c3d3857e9e3d088fc1eb0643fc4/python/langsmith/evaluation/llm_evaluator.py#L76)