langsmith.client.Client.get_experiment_results

Method · Since v0.4

get_experiment_results

Get results for an experiment, including aggregated session statistics and the experiment runs for each dataset example.

Experiment results may not be available immediately after the experiment is created.

get_experiment_results(
  self,
  name: Optional[str] = None,
  project_id: Optional[uuid.UUID] = None,
  preview: bool = False,
  comparative_experiment_id: Optional[uuid.UUID] = None,
  filters: dict[uuid.UUID, list[str]] | None = None,
  limit: Optional[int] = None
) -> ls_schemas.ExperimentResults

Example:

from langsmith import Client

client = Client()
results = client.get_experiment_results(
    project_id="037ae90f-f297-4926-b93c-37d8abf6899f",
)
for example_with_runs in results["examples_with_runs"]:
    print(example_with_runs.model_dump())

# Access aggregated experiment statistics
print(f"Total runs: {results['run_stats']['run_count']}")
print(f"Total cost: {results['run_stats']['total_cost']}")
print(f"P50 latency: {results['run_stats']['latency_p50']}")

# Access feedback statistics
print(f"Feedback stats: {results['feedback_stats']}")

Parameters

name : Optional[str]
    Default: None
    The experiment name.

project_id : Optional[uuid.UUID]
    Default: None
    The experiment's tracing project ID (also called session_id). It can be found in the URL of the LangSmith experiment page.

preview : bool
    Default: False
    Whether to return lightweight preview data only. When True, fetches inputs_preview/outputs_preview summaries instead of full inputs/outputs from S3 storage, which is faster and uses less bandwidth. See the sketch after this list for an example.

comparative_experiment_id : Optional[uuid.UUID]
    Default: None
    Optional comparative experiment UUID for pairwise comparison experiment results.

filters : dict[uuid.UUID, list[str]] | None
    Default: None
    Optional filters to apply to the results.

limit : Optional[int]
    Default: None
    The maximum number of results to return.
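
To illustrate the parameters above, the sketch below looks an experiment up by name rather than by project ID and requests a lightweight preview capped at 10 examples. The experiment name "my-experiment" is a placeholder; substitute one of your own.

from langsmith import Client

client = Client()

# Look the experiment up by name, fetching only inputs_preview/
# outputs_preview summaries and at most 10 examples.
preview_results = client.get_experiment_results(
    name="my-experiment",  # placeholder experiment name
    preview=True,
    limit=10,
)
for example_with_runs in preview_results["examples_with_runs"]:
    print(example_with_runs.model_dump())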
