langsmith.evaluation.evaluator.DynamicComparisonRunEvaluator
Class · Since v0.1

DynamicComparisonRunEvaluator

Compare predictions (as traces) from 2 or more runs.

DynamicComparisonRunEvaluator(
  func: Callable[
    [Sequence[Run], Optional[Example]],
    Union[_COMPARISON_OUTPUT, Awaitable[_COMPARISON_OUTPUT]],
  ],
  afunc: Optional[
    Callable[[Sequence[Run], Optional[Example]], Awaitable[_COMPARISON_OUTPUT]]
  ] = None,
)

Parameters

func* : Callable[[Sequence[Run], Optional[Example]], Union[_COMPARISON_OUTPUT, Awaitable[_COMPARISON_OUTPUT]]]

A function that takes a sequence of Runs and an optional Example as arguments, and returns a comparison result (or an awaitable of one).

afunc : Optional[Callable[[Sequence[Run], Optional[Example]], Awaitable[_COMPARISON_OUTPUT]]] = None

An optional asynchronous version of func, used by acompare_runs. Defaults to None.
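
For orientation, a minimal construction sketch. The prefer_shorter function and its scoring rule are invented for illustration, and the returned dict assumes the ComparisonEvaluationResult shape (a feedback key plus a per-run scores mapping):

from typing import Optional, Sequence

from langsmith.evaluation.evaluator import DynamicComparisonRunEvaluator
from langsmith.schemas import Example, Run


def prefer_shorter(runs: Sequence[Run], example: Optional[Example] = None) -> dict:
    # Toy preference: favor the run with the shortest serialized output.
    lengths = {run.id: len(str(run.outputs or "")) for run in runs}
    shortest = min(lengths, key=lengths.get)
    # Assumed output shape, mirroring ComparisonEvaluationResult(key=..., scores=...).
    return {
        "key": "prefer_shorter",
        "scores": {run_id: 1.0 if run_id == shortest else 0.0 for run_id in lengths},
    }


evaluator = DynamicComparisonRunEvaluator(prefer_shorter)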

Constructors

constructor
__init__

func : Callable[[Sequence[Run], Optional[Example]], Union[_COMPARISON_OUTPUT, Awaitable[_COMPARISON_OUTPUT]]]
afunc : Optional[Callable[[Sequence[Run], Optional[Example]], Awaitable[_COMPARISON_OUTPUT]]]

Attributes

attribute
afunc

The wrapped asynchronous comparison function, if one was provided.

attribute
func

The wrapped comparison function.

attribute
is_async: bool

Whether the wrapped evaluator function is asynchronous.
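
Continuing the sketch above, and assuming is_async simply reflects whether an asynchronous path is available on the evaluator:

sync_only = DynamicComparisonRunEvaluator(prefer_shorter)
print(sync_only.is_async)  # False: only a plain sync function was wrapped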

Methods

method
compare_runs

Compare runs to score preferences.

method
acompare_runs

Compare runs asynchronously using the wrapped async function.

This method directly invokes the wrapped async function with the provided arguments.
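
To tie the pieces together, a hedged usage sketch continuing the earlier example. The runs and example arguments are placeholders for sibling Run objects and their shared dataset Example, obtained elsewhere (for instance via the LangSmith Client); aprefer_shorter is an invented async variant, and the result is assumed to expose key and scores as on ComparisonEvaluationResult:

async def aprefer_shorter(runs: Sequence[Run], example: Optional[Example] = None) -> dict:
    # Async variant of the toy comparator; a real evaluator might await an LLM judge here.
    return prefer_shorter(runs, example)


evaluator = DynamicComparisonRunEvaluator(prefer_shorter, aprefer_shorter)


async def main(runs: Sequence[Run], example: Optional[Example]) -> None:
    result = evaluator.compare_runs(runs, example)          # sync path via func
    result = await evaluator.acompare_runs(runs, example)   # async path via afunc
    print(result.key, result.scores)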
