Python · langsmith.evaluation.llm_evaluator
Module · Since v0.1

llm_evaluator

Contains the LLMEvaluator class for building LLM-as-a-judge evaluators.

Classes

class EvaluationResult
    Evaluation result.

class EvaluationResults
    Batch evaluation results. This makes it easy for your evaluator to return multiple metrics at once.

class RunEvaluator
    Evaluator interface class.

class Example
    Example model.

class Run
    Run schema when loading from the DB.

class CategoricalScoreConfig
    Configuration for a categorical score.

class ContinuousScoreConfig
    Configuration for a continuous score.

class LLMEvaluator
    A class for building LLM-as-a-judge evaluators.

.. deprecated:: 0.5.0

   LLMEvaluator is deprecated. Use openevals instead: https://github.com/langchain-ai/openevals
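The classes above combine into a common pattern: a score configuration names a metric and its allowed values, and an evaluator asks a model to grade a run's output against it, returning an evaluation result. The sketch below illustrates that pattern in plain Python with toy stand-ins; the class and field names (`key`, `choices`, `description`) are hypothetical and chosen to mirror the index above, not the actual langsmith signatures, and a fake judge function replaces the real LLM call so it runs offline.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Toy stand-ins for the classes listed above -- a sketch of the
# LLM-as-a-judge pattern, NOT the real langsmith API.

@dataclass
class CategoricalScoreConfig:
    key: str            # metric name, e.g. "correctness"
    choices: List[str]  # labels the judge is allowed to return
    description: str = ""

@dataclass
class EvaluationResult:
    key: str    # which metric this result belongs to
    score: str  # the label the judge picked

def make_judge(
    config: CategoricalScoreConfig,
    llm: Callable[[str], str],
) -> Callable[[Dict], EvaluationResult]:
    """Build an evaluator that asks `llm` to grade a run's output."""
    def evaluate(run: Dict) -> EvaluationResult:
        prompt = (
            f"Grade the answer below as one of {config.choices}.\n"
            f"{config.description}\n"
            f"Answer: {run['outputs']['answer']}"
        )
        label = llm(prompt).strip()
        # Enforce the categorical config: reject out-of-vocabulary labels.
        if label not in config.choices:
            raise ValueError(f"judge returned unknown label {label!r}")
        return EvaluationResult(key=config.key, score=label)
    return evaluate

# A fake "LLM" that always answers "correct", so the sketch is runnable.
def fake_llm(prompt: str) -> str:
    return "correct"

judge = make_judge(
    CategoricalScoreConfig(
        key="correctness",
        choices=["correct", "incorrect"],
        description="Is the answer factually right?",
    ),
    fake_llm,
)
result = judge({"outputs": {"answer": "Paris is the capital of France."}})
print(result.key, result.score)  # -> correctness correct
```

A continuous score would follow the same shape, with min/max bounds in place of the `choices` list and a numeric parse of the judge's reply.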