langsmith.evaluation.evaluator
Class

EvaluationResult

The result of running an evaluator on a run: a feedback payload with a key, an optional numeric score or non-numeric value, and supporting metadata such as a comment and the evaluator's own trace ID.

EvaluationResult()

Bases

BaseModel
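
An EvaluationResult is commonly constructed and returned from a custom evaluator function. A minimal sketch, assuming a run/example pair whose outputs live under an "output" key (the evaluator name and output key are illustrative assumptions, not part of this class):

Example

from langsmith.evaluation import EvaluationResult

def exact_match(run, example):
    # Illustrative evaluator: compare the run's output to the reference.
    # The "output" key is an assumption about the dataset's shape.
    predicted = (run.outputs or {}).get("output")
    expected = (example.outputs or {}).get("output")
    return EvaluationResult(
        key="exact_match",  # metric name shown in LangSmith
        score=1.0 if predicted == expected else 0.0,  # numeric result -> score
        comment=f"predicted={predicted!r}, expected={expected!r}",
    )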

Attributes

key: str
The aspect, metric name, or label for this evaluation.

score: SCORE_TYPE
The numeric score for this evaluation.

value: VALUE_TYPE
The value for this evaluation, if not numeric.

metadata: Optional[dict]
Arbitrary metadata attached to the evaluation.

comment: Optional[str]
An explanation for the evaluation.

correction: Optional[dict]
What the correct value should be, if applicable.

evaluator_info: dict
Additional information about the evaluator.

feedback_config: Optional[Union[FeedbackConfig, dict]]
The configuration used to generate this feedback.

source_run_id: Optional[Union[uuid.UUID, str]]
The ID of the trace of the evaluator itself.

target_run_id: Optional[Union[uuid.UUID, str]]
The ID of the trace this evaluation is applied to. If none is provided, the feedback is applied to the root trace being evaluated.

extra: Optional[dict]
Metadata for the evaluator run.

model_config
Pydantic model configuration for this class.
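
To illustrate the split between the fields above, a short sketch: numeric results go in score, non-numeric results in value, and feedback_config can describe the feedback schema. The plain-dict feedback_config keys below ("type", "categories") are an assumption about the FeedbackConfig shape, not confirmed by this page:

Example

from langsmith.evaluation import EvaluationResult

# Numeric metric: use `score`.
accuracy = EvaluationResult(key="accuracy", score=0.87)

# Non-numeric (categorical) metric: use `value`, optionally with a
# feedback_config describing the allowed categories. The dict keys here
# are assumed, not taken from this page.
tone = EvaluationResult(
    key="tone",
    value="formal",
    comment="Response uses a formal register throughout.",
    feedback_config={
        "type": "categorical",
        "categories": [
            {"value": 0, "label": "casual"},
            {"value": 1, "label": "formal"},
        ],
    },
)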

Methods

check_value_non_numeric
Warns when a numeric value is passed via the value field instead of score.
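
A minimal sketch of what this validator catches (the warning text itself comes from the SDK and is not reproduced here):

Example

from langsmith.evaluation import EvaluationResult

# Fine: a numeric result passed via `score`.
ok = EvaluationResult(key="accuracy", score=0.9)

# Triggers the check_value_non_numeric warning: a numeric result was
# passed via `value`, which is reserved for non-numeric results.
warned = EvaluationResult(key="accuracy", value=0.9)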
