langsmith.evaluation.evaluator.DynamicComparisonRunEvaluator.compare_runs
Method · Since v0.1

compare_runs

compare_runs(
  self,
  runs: Sequence[Run],
  example: Optional[Example] = None
) -> ComparisonEvaluationResult

Compare runs to score preferences.

Parameters

Name      Type                Description
runs*     Sequence[Run]       A list of runs to compare.
example   Optional[Example]   An optional example to be used in the evaluation. Default: None
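Example

A minimal sketch of calling compare_runs, not the library's documented quickstart. It assumes DynamicComparisonRunEvaluator can wrap a plain comparison function of the form (runs, example) -> dict, that such a dict is coerced to a ComparisonEvaluationResult, and that the runs come from Client.list_runs; the project name and the scoring logic are hypothetical.

    from langsmith import Client
    from langsmith.evaluation.evaluator import DynamicComparisonRunEvaluator

    def prefer_shorter_output(runs, example=None):
        # Hypothetical preference logic: favor the run with the shorter output.
        scores = {str(run.id): -len(str(run.outputs or "")) for run in runs}
        return {"key": "prefer_shorter_output", "scores": scores}

    # Assumption: the evaluator wraps a plain comparison function.
    evaluator = DynamicComparisonRunEvaluator(prefer_shorter_output)

    client = Client()
    runs = list(client.list_runs(project_name="my-project"))[:2]  # hypothetical project
    example = None  # or the dataset Example the runs correspond to, if any

    result = evaluator.compare_runs(runs, example=example)
    print(result)  # ComparisonEvaluationResult with per-run scores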