LangChain Reference
Python · langsmith.evaluation.evaluator.DynamicComparisonRunEvaluator

Method · Since v0.1

compare_runs

Compare runs to score preferences.

compare_runs(
  self,
  runs: Sequence[Run],
  example: Optional[Example] = None
) -> ComparisonEvaluationResult

Parameters

runs: Sequence[Run] (required)
    A list of runs to compare.

example: Optional[Example] = None
    An optional example to be used in the evaluation.
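To illustrate the shape of a pairwise-preference comparison, here is a minimal, self-contained sketch. It uses stand-in dataclasses rather than the real `Run` and `Example` types (which in actual code come from `langsmith.schemas`), and the longer-answer-wins scoring rule is purely hypothetical; the real `compare_runs` returns a `ComparisonEvaluationResult` produced by the evaluator function wrapped by `DynamicComparisonRunEvaluator`.

```python
from dataclasses import dataclass, field
from typing import Optional, Sequence

# Stand-ins for langsmith.schemas.Run and Example (assumption: real code
# would import the actual types from the langsmith package).
@dataclass
class Run:
    id: str
    outputs: dict = field(default_factory=dict)

@dataclass
class Example:
    inputs: dict = field(default_factory=dict)

def compare_runs(runs: Sequence[Run], example: Optional[Example] = None) -> dict:
    """Hypothetical comparison: prefer the run with the longer answer.

    Mirrors the call shape of compare_runs, which scores a preference
    across a group of runs over the same input.
    """
    lengths = {r.id: len(r.outputs.get("answer", "")) for r in runs}
    best = max(lengths, key=lengths.get)
    # Score 1.0 for the preferred run, 0.0 for the others.
    return {"key": "preference",
            "scores": {rid: float(rid == best) for rid in lengths}}

runs = [Run(id="a", outputs={"answer": "short"}),
        Run(id="b", outputs={"answer": "a much longer answer"})]
result = compare_runs(runs)
```

In the real API, the per-run scores are keyed by run id in the same way, so a downstream aggregator can tally preferences across many examples.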
