Get results for an experiment, including aggregated statistics for the experiment session and the experiment runs for each dataset example.
Experiment results may not be available immediately after the experiment is created.
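Because results can lag behind experiment creation, callers may want to poll before reading them. The following is a minimal sketch, not part of the SDK reference itself; it assumes that an empty `examples_with_runs` list means the results are not ready yet (the exact not-ready behavior is not specified here) and reuses the `project_id` from the example below.

```python
import time

from langsmith import Client

client = Client()

# Results can lag behind experiment creation, so poll until data appears.
# Assumption (not specified by this reference): an empty
# "examples_with_runs" list is treated here as "not ready yet".
results = None
for _ in range(10):
    results = client.get_experiment_results(
        project_id="037ae90f-f297-4926-b93c-37d8abf6899f",
    )
    if results["examples_with_runs"]:
        break
    time.sleep(5)  # wait a few seconds before retrying
```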
```python
get_experiment_results(
    self,
    name: Optional[str] = None,
    project_id: Optional[uuid.UUID] = None,
    preview: bool = False,
    comparative_experiment_id: Optional[uuid.UUID] = None,
    filters: dict[uuid.UUID, list[str]] | None = None,
    limit: Optional[int] = None,
) -> ls_schemas.ExperimentResultsExample:
```
```python
from langsmith import Client

client = Client()
results = client.get_experiment_results(
    project_id="037ae90f-f297-4926-b93c-37d8abf6899f",
)

# Inspect each dataset example together with its experiment runs
for example_with_runs in results["examples_with_runs"]:
    print(example_with_runs.model_dump())

# Access aggregated experiment statistics
print(f"Total runs: {results['run_stats']['run_count']}")
print(f"Total cost: {results['run_stats']['total_cost']}")
print(f"P50 latency: {results['run_stats']['latency_p50']}")

# Access feedback statistics
print(f"Feedback stats: {results['feedback_stats']}")
```

| Name | Type | Description |
|---|---|---|
| `name` | `Optional[str]` | Default: `None`. The experiment name. |
| `project_id` | `Optional[uuid.UUID]` | Default: `None`. The experiment's tracing project ID (also called `session_id`), which can be found in the URL of the LangSmith experiment page. |
| `preview` | `bool` | Default: `False`. Whether to return lightweight preview data only. When `True`, fetches `inputs_preview`/`outputs_preview` summaries instead of full inputs/outputs from S3 storage, which is faster and uses less bandwidth. See the sketch after this table. |
| `comparative_experiment_id` | `Optional[uuid.UUID]` | Default: `None`. Optional comparative experiment UUID for pairwise comparison experiment results. |
| `filters` | `dict[uuid.UUID, list[str]] \| None` | Default: `None`. Optional filters to apply to the results. |
| `limit` | `Optional[int]` | Default: `None`. Maximum number of results to return. |
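For large experiments, `preview=True` combined with `limit` keeps responses small. The following is a minimal sketch using the parameters documented above; the experiment name is a hypothetical placeholder, so substitute your own experiment name or pass `project_id` instead.

```python
from langsmith import Client

client = Client()

# Fetch lightweight previews for at most 25 examples.
# "my-experiment" is a placeholder name for illustration only.
preview_results = client.get_experiment_results(
    name="my-experiment",
    preview=True,  # inputs_preview/outputs_preview instead of full payloads
    limit=25,      # cap the number of examples returned
)
for example_with_runs in preview_results["examples_with_runs"]:
    print(example_with_runs.model_dump())
```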