langsmith.evaluation.llm_evaluator.LLMEvaluator.from_model

Method · Since v0.1

from_model

Create an LLMEvaluator from a BaseChatModel instance.

from_model(
  cls,
  model: Any,
  *,
  prompt_template: Union[str, list[tuple[str, str]]],
  score_config: Union[CategoricalScoreConfig, ContinuousScoreConfig],
  map_variables: Optional[Callable[[Run, Optional[Example]], dict]] = None
)
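
For example, a minimal sketch of creating an evaluator from a chat model might look like the following. The ChatOpenAI model, the import of CategoricalScoreConfig and LLMEvaluator from langsmith.evaluation.llm_evaluator, and the score-config field names (key, choices, description) are illustrative assumptions, not guarantees taken from this reference.

# A minimal sketch, assuming ChatOpenAI is installed and that
# CategoricalScoreConfig accepts key/choices/description fields
# (those field names are assumptions, not taken from this page).
from langchain_openai import ChatOpenAI

from langsmith.evaluation.llm_evaluator import CategoricalScoreConfig, LLMEvaluator

evaluator = LLMEvaluator.from_model(
    ChatOpenAI(model="gpt-4o-mini"),
    # A plain string is treated as a system message; it can reference the
    # default variables 'input', 'output', and 'expected'.
    prompt_template=(
        "Given the question {input} and the submitted answer {output}, "
        "decide whether the answer is helpful."
    ),
    score_config=CategoricalScoreConfig(
        key="helpfulness",
        choices=["helpful", "unhelpful"],
        description="Whether the answer helpfully addresses the question.",
    ),
)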

Parameters

model* (BaseChatModel)
The chat model instance to use for the evaluation.

prompt_template* (Union[str, list[tuple[str, str]]])
The prompt template to use for the evaluation. If a string is provided, it is treated as a system message.

score_config* (Union[CategoricalScoreConfig, ContinuousScoreConfig])
The configuration for the score, either categorical or continuous.

map_variables (Optional[Callable[[Run, Optional[Example]], dict]], default: None)
A function that maps the run and example to the variables in the prompt (see the sketch below). If None, the prompt is assumed to require only 'input', 'output', and 'expected'.
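
When the prompt template uses variable names other than 'input', 'output', and 'expected', a map_variables callable can pull them out of the run and example. The sketch below is illustrative only: the 'question'/'answer' keys read from run.inputs, run.outputs, and example.outputs depend entirely on the application being evaluated, and the ContinuousScoreConfig field names are assumed rather than documented here.

# A sketch of a custom map_variables callable. The placeholder names
# ({question}, {answer}, {reference}) and the keys read from the run and
# example are assumptions about the target application, not this API.
from typing import Optional

from langchain_openai import ChatOpenAI

from langsmith.evaluation.llm_evaluator import ContinuousScoreConfig, LLMEvaluator
from langsmith.schemas import Example, Run


def map_variables(run: Run, example: Optional[Example]) -> dict:
    # Runs always have inputs; outputs and the reference example may be absent.
    return {
        "question": run.inputs.get("question", ""),
        "answer": (run.outputs or {}).get("answer", ""),
        "reference": (example.outputs or {}).get("answer", "") if example else "",
    }


evaluator = LLMEvaluator.from_model(
    ChatOpenAI(model="gpt-4o-mini"),
    # A list of (role, template) tuples instead of a single system string.
    prompt_template=[
        ("system", "Grade how well the answer matches the reference."),
        ("user", "Question: {question}\nAnswer: {answer}\nReference: {reference}"),
    ],
    score_config=ContinuousScoreConfig(
        key="correctness",
        description="How closely the answer matches the reference, from 0 to 1.",
    ),
    map_variables=map_variables,
)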
