langsmith.evaluation.evaluator
Module · Since v0.0

evaluator

This module contains the evaluator classes for evaluating runs.

Attributes

attribute
logger

Functions

function
run_evaluator

Create a run evaluator from a function.

Decorator that transforms a function into a RunEvaluator.
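To illustrate the decorator pattern described above, here is a minimal stdlib-only sketch. The EvaluationResult and RunEvaluator stand-ins below are simplified stand-ins for the real langsmith classes, and the exact_match evaluator is a hypothetical example, not part of the library:

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class EvaluationResult:
    # Simplified stand-in for langsmith's EvaluationResult (illustration only).
    key: str
    score: Optional[float] = None


class RunEvaluator:
    # Simplified stand-in for the RunEvaluator interface.
    def evaluate_run(self, run, example=None):
        raise NotImplementedError


def run_evaluator(func: Callable) -> RunEvaluator:
    """Wrap a (run, example) -> EvaluationResult function as a RunEvaluator."""
    class _Wrapped(RunEvaluator):
        def evaluate_run(self, run, example=None):
            return func(run, example)
    return _Wrapped()


@run_evaluator
def exact_match(run, example):
    # Hypothetical evaluator: compare the run's output to the expected output.
    score = float(run["output"] == example["output"])
    return EvaluationResult(key="exact_match", score=score)


result = exact_match.evaluate_run({"output": "42"}, {"output": "42"})
print(result.score)  # 1.0
```

The point of the decorator is that a plain function becomes an object satisfying the RunEvaluator interface, so it can be passed anywhere the evaluation framework expects one.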

function
comparison_evaluator

Create a comparison evaluator from a function.

Classes

class
Example

Example model.

class
Run

Run schema when loading from the DB.

class
Category

A category for categorical feedback.

class
FeedbackConfig

Configuration to define a type of feedback.

Applied on the first creation of a feedback_key.

class
EvaluationResult

Evaluation result.

class
EvaluationResults

Batch evaluation results.

This makes it easy for your evaluator to return multiple metrics at once.
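As a sketch of what "multiple metrics at once" can look like, the hypothetical evaluator below returns a dict with a "results" list, each entry carrying its own key and score. The function name and exact dict shape are illustrative assumptions, not the library's verbatim schema:

```python
# Hypothetical evaluator returning several metrics in one call,
# mirroring the batched {"results": [...]} shape that
# EvaluationResults describes (sketch, not the real class).
def length_and_nonempty(run, example=None):
    text = run["output"]
    return {
        "results": [
            {"key": "length", "score": len(text)},
            {"key": "nonempty", "score": bool(text)},
        ]
    }


batch = length_and_nonempty({"output": "hello"})
print([r["key"] for r in batch["results"]])  # ['length', 'nonempty']
```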

class
RunEvaluator

Evaluator interface class.

class
ComparisonEvaluationResult

Feedback scores for the results of comparative evaluations.

These are generated by functions that compare two or more runs, returning a ranking or other feedback.
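A comparison evaluator of this kind takes several runs and produces a ranking. The sketch below is a hypothetical example under assumed input/output shapes (run dicts with "id" and "output" keys, per-run scores keyed by run id), not the real ComparisonEvaluationResult schema:

```python
# Hypothetical comparison evaluator: rank two or more runs by output
# length and give the best run a score of 1.0, the rest 0.0
# (illustration of the idea, not the library's schema).
def prefer_shorter(runs):
    ranked = sorted(runs, key=lambda r: len(r["output"]))
    return {
        "key": "prefer_shorter",
        "scores": {r["id"]: float(i == 0) for i, r in enumerate(ranked)},
    }


result = prefer_shorter([
    {"id": "run-a", "output": "a long verbose answer"},
    {"id": "run-b", "output": "short"},
])
print(result["scores"]["run-b"])  # 1.0
```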

class
DynamicRunEvaluator

A dynamic evaluator that wraps a function and transforms it into a RunEvaluator.

This class is designed to be used with the @run_evaluator decorator, allowing functions that take a Run and an optional Example as arguments, and return an EvaluationResult or EvaluationResults, to be used as instances of RunEvaluator.

class
DynamicComparisonRunEvaluator

Compare predictions (as traces) from 2 or more runs.

Type Aliases

typeAlias
SCORE_TYPE: Union[StrictBool, StrictInt, StrictFloat, None]
typeAlias
VALUE_TYPE: Union[dict, str, StrictBool, StrictInt, StrictFloat, None]
typeAlias
SUMMARY_EVALUATOR_T

Modules

module
rh

Decorator for creating a run tree from functions.

module
schemas

Schemas for the LangSmith API.
