# evaluate_existing

> **Function** in `langsmith`

📖 [View in docs](https://reference.langchain.com/python/langsmith/evaluation/_runner/evaluate_existing)

Evaluate existing experiment runs.

## Signature

```python
evaluate_existing(
    experiment: Union[str, uuid.UUID, schemas.TracerSession],
    /,
    evaluators: Optional[Sequence[EVALUATOR_T]] = None,
    summary_evaluators: Optional[Sequence[SUMMARY_EVALUATOR_T]] = None,
    metadata: Optional[dict] = None,
    max_concurrency: Optional[int] = 0,
    client: Optional[langsmith.Client] = None,
    load_nested: bool = False,
    blocking: bool = True,
) -> ExperimentResults
```
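A minimal usage sketch (the evaluator logic and the experiment name `"my-experiment"` are illustrative assumptions, not part of this reference):

```python
from langsmith.evaluation import evaluate_existing

def correct(run, example) -> dict:
    # Hypothetical row-level evaluator: compare the run's output
    # to the example's reference output and emit feedback.
    score = run.outputs.get("output") == example.outputs.get("answer")
    return {"key": "correct", "score": int(score)}

# "my-experiment" stands in for an existing experiment name or UUID.
results = evaluate_existing("my-experiment", evaluators=[correct])
```

Because the experiment already exists, only the evaluators run; the target system itself is not re-invoked.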

## Description

Evaluates the runs of an existing experiment against one or more evaluators, attaching the resulting feedback without re-running the target system.

**Environment:**

- `LANGSMITH_TEST_CACHE`: If set, API calls are cached to disk to save time and cost during testing. Committing the cache files to your repository is recommended for faster CI/CD runs. This requires the `langsmith[vcr]` extra to be installed.
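A minimal sketch of enabling the cache (the directory path is an assumption; any repo-local path works):

```python
import os

# Hypothetical cache location; commit this directory for faster CI/CD runs.
# Requires: pip install "langsmith[vcr]"
os.environ["LANGSMITH_TEST_CACHE"] = "tests/cassettes"
```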

## Parameters

| Name | Type | Required | Description |
|------|------|----------|-------------|
| `experiment` | `Union[str, uuid.UUID, schemas.TracerSession]` | Yes | The identifier of the experiment to evaluate: an experiment name, experiment ID, or `TracerSession` object. |
| `evaluators` | `Optional[Sequence[EVALUATOR_T]]` | No | Optional sequence of evaluators to use for individual run evaluation. (default: `None`) |
| `summary_evaluators` | `Optional[Sequence[SUMMARY_EVALUATOR_T]]` | No | Optional sequence of evaluators to apply over the entire dataset. (default: `None`) |
| `metadata` | `Optional[dict]` | No | Optional metadata to include in the evaluation results. (default: `None`) |
| `max_concurrency` | `Optional[int]` | No | The maximum number of concurrent evaluations to run. If `None`, no limit is set; if `0`, evaluations run sequentially. (default: `0`) |
| `client` | `Optional[langsmith.Client]` | No | Optional LangSmith client to use for evaluation. (default: `None`) |
| `load_nested` | `bool` | No | Whether to load all child runs for the experiment. By default, only the top-level root runs are loaded. (default: `False`) |
| `blocking` | `bool` | No | Whether to block until evaluation is complete. (default: `True`) |
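To illustrate `summary_evaluators`, here is a hedged sketch of an aggregate metric computed over all runs and examples; the specific `Run`/`Example` attributes accessed are assumptions for illustration:

```python
from langsmith.evaluation import evaluate_existing
from langsmith.schemas import Example, Run

def pass_rate(runs: list[Run], examples: list[Example]) -> dict:
    # Hypothetical dataset-level metric: fraction of runs whose output
    # equals the paired example's reference answer.
    matches = sum(
        run.outputs.get("output") == example.outputs.get("answer")
        for run, example in zip(runs, examples)
    )
    return {"key": "pass_rate", "score": matches / len(runs)}

results = evaluate_existing(
    "my-experiment",            # assumed existing experiment name
    summary_evaluators=[pass_rate],
    max_concurrency=4,          # at most 4 evaluations in flight
)
```

Unlike row-level evaluators, a summary evaluator receives the full sequences of runs and examples and returns a single dataset-level feedback score.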

## Returns

`ExperimentResults`

The evaluation results.
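A hedged sketch of consuming the results; iteration yields per-row dicts on current SDK versions, but treat the exact row shape as an assumption and check the linked source:

```python
from langsmith.evaluation import evaluate_existing

results = evaluate_existing("my-experiment")  # assumed existing experiment

# Assumed row shape: each item pairs the original run, its dataset
# example, and any newly attached evaluation feedback.
for row in results:
    print(row["run"].id, row["evaluation_results"])
```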

---

[View source on GitHub](https://github.com/langchain-ai/langsmith-sdk/blob/6a74bf5af9e542d8065af8edca54b2448f430916/python/langsmith/evaluation/_runner.py#L418)