langsmith.evaluation.llm_evaluator
Module · Since v0.1

llm_evaluator

Contains the LLMEvaluator class for building LLM-as-a-judge evaluators.

Classes

class EvaluationResult

Evaluation result.

class EvaluationResults

Batch evaluation results.

This makes it easy for your evaluator to return multiple metrics at once.
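For example, a single evaluator call can bundle several metrics together. A minimal sketch, assuming EvaluationResult exposes key, score, and comment fields and EvaluationResults is a dict with a "results" list:

from langsmith.evaluation import EvaluationResult, EvaluationResults

# Two metrics returned from one evaluation pass.
results: EvaluationResults = {
    "results": [
        EvaluationResult(key="correctness", score=1, comment="Matches the reference."),
        EvaluationResult(key="conciseness", score=0.5),
    ]
}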

class RunEvaluator

Evaluator interface class.
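A custom evaluator implements this interface by overriding evaluate_run, which receives the Run and optional Example schemas documented below. A minimal sketch; the "output" key used here is hypothetical and depends on how your runs name their fields:

from typing import Optional

from langsmith.evaluation import EvaluationResult, RunEvaluator
from langsmith.schemas import Example, Run


class ExactMatchEvaluator(RunEvaluator):
    """Score 1 when the run's output exactly matches the example's output."""

    def evaluate_run(
        self, run: Run, example: Optional[Example] = None
    ) -> EvaluationResult:
        # "output" is an assumed key; adjust to your runs' actual output schema.
        predicted = (run.outputs or {}).get("output")
        expected = (example.outputs or {}).get("output") if example else None
        return EvaluationResult(key="exact_match", score=int(predicted == expected))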

class Example

Example model.

class Run

Run schema when loading from the DB.

class CategoricalScoreConfig

Configuration for a categorical score.

class ContinuousScoreConfig

Configuration for a continuous score.
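A sketch of the two score shapes, assuming key, choices, min/max, and description are the config fields; the keys and value ranges here are illustrative:

from langsmith.evaluation.llm_evaluator import (
    CategoricalScoreConfig,
    ContinuousScoreConfig,
)

# Categorical: the judge must pick one of a fixed set of labels.
sentiment = CategoricalScoreConfig(
    key="sentiment",
    choices=["positive", "neutral", "negative"],
    description="The sentiment of the model's response.",
)

# Continuous: the judge returns a number within a bounded range.
relevance = ContinuousScoreConfig(
    key="relevance",
    min=0,
    max=1,
    description="How relevant the response is to the question.",
)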

class LLMEvaluator

A class for building LLM-as-a-judge evaluators.

Deprecated since version 0.5.0: LLMEvaluator is deprecated. Use openevals instead: https://github.com/langchain-ai/openevals
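Before its deprecation, the class was typically built from a prompt template plus one of the score configs above. A minimal sketch, assuming the constructor accepts prompt_template and score_config keyword arguments and that {input}/{output} template variables are filled from the run's inputs and outputs:

from langsmith.evaluation.llm_evaluator import (
    CategoricalScoreConfig,
    LLMEvaluator,
)

correctness_judge = LLMEvaluator(
    # {input} and {output} are assumed to map onto the run's inputs/outputs;
    # the judge model is left at the constructor's default.
    prompt_template=(
        "Given the question and the response, decide whether the response "
        "is correct.\nQuestion: {input}\nResponse: {output}"
    ),
    score_config=CategoricalScoreConfig(
        key="correctness",
        choices=["correct", "incorrect"],
        description="Whether the response answers the question correctly.",
    ),
)

Because LLMEvaluator is a RunEvaluator, the resulting object can be passed in an evaluators list (for example to langsmith.evaluation.evaluate); new projects should use openevals instead, per the deprecation notice above.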
