# aevaluate_existing

> **Function** in `langsmith`

📖 [View in docs](https://reference.langchain.com/python/langsmith/evaluation/_arunner/aevaluate_existing)

Evaluate existing experiment runs asynchronously.

## Signature

```python
aevaluate_existing(
    experiment: Union[str, uuid.UUID, schemas.TracerSession],
    /,
    evaluators: Optional[Sequence[Union[EVALUATOR_T, AEVALUATOR_T]]] = None,
    summary_evaluators: Optional[Sequence[SUMMARY_EVALUATOR_T]] = None,
    metadata: Optional[dict] = None,
    max_concurrency: Optional[int] = 0,
    client: Optional[langsmith.Client] = None,
    load_nested: bool = False,
    blocking: bool = True,
) -> AsyncExperimentResults
```

## Parameters

| Name | Type | Required | Description |
|------|------|----------|-------------|
| `experiment` | `Union[str, uuid.UUID, schemas.TracerSession]` | Yes | The identifier of the experiment to evaluate: a name, UUID, or `TracerSession`. |
| `evaluators` | `Optional[Sequence[Union[EVALUATOR_T, AEVALUATOR_T]]]` | No | Optional sequence of evaluators to use for individual run evaluation. (default: `None`) |
| `summary_evaluators` | `Optional[Sequence[SUMMARY_EVALUATOR_T]]` | No | Optional sequence of evaluators to apply over the entire dataset. (default: `None`) |
| `metadata` | `Optional[dict]` | No | Optional metadata to include in the evaluation results. (default: `None`) |
| `max_concurrency` | `Optional[int]` | No | The maximum number of concurrent evaluations to run. If `None`, no limit is set; if `0`, evaluations run sequentially with no concurrency. (default: `0`) |
| `client` | `Optional[langsmith.Client]` | No | Optional LangSmith client to use for evaluation. (default: `None`) |
| `load_nested` | `bool` | No | Whether to load all child runs for the experiment. Default is to only load the top-level root runs. (default: `False`) |
| `blocking` | `bool` | No | Whether to block until evaluation is complete. (default: `True`) |

## Returns

`AsyncExperimentResults`

An async iterator over the experiment results.
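
## Example

A minimal usage sketch. The experiment name `"my-experiment"` and the `has_output` evaluator are placeholders for illustration; evaluators receive a run (and, where available, the reference example) and return a dict with a metric `key` and a `score`.

```python
import asyncio


def has_output(run, example):
    # Hypothetical row-level evaluator: scores 1 if the run
    # produced any output at all, 0 otherwise.
    produced = bool(run.outputs)
    return {"key": "has_output", "score": int(produced)}


async def main():
    # Requires `pip install langsmith` and LangSmith credentials.
    from langsmith import aevaluate_existing

    # "my-experiment" is a placeholder: pass your experiment's
    # name, UUID, or TracerSession.
    results = await aevaluate_existing(
        "my-experiment",
        evaluators=[has_output],
        max_concurrency=4,  # cap concurrent evaluations
    )
    # AsyncExperimentResults is an async iterator over per-run results.
    async for result in results:
        print(result)


# Run with: asyncio.run(main())
```

Because `blocking` defaults to `True`, the call returns only after evaluation completes; pass `blocking=False` to start iterating over results as they arrive.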

---

[View source on GitHub](https://github.com/langchain-ai/langsmith-sdk/blob/791701a304a72495d108669ef11c194983fd0e95/python/langsmith/evaluation/_arunner.py#L341)