LangChain Reference
JavaScript / langsmith / evaluation / EvaluateOptions
Interface · Since v0.1

EvaluateOptions

interface EvaluateOptions

Bases: BaseEvaluateOptions

Properties

client: Client
    The LangSmith client to use.

data: DataT
    The dataset to evaluate on. Can be a dataset name, a list of examples, or a generator of examples.

description: string

evaluationConcurrency: number
    The maximum number of concurrent evaluators to run. If not provided, defaults to maxConcurrency when set.

evaluators: EvaluatorT[]
    A list of evaluators to run on each example.

experimentPrefix: string
    A prefix to provide for your experiment name.

includeAttachments: boolean
    Whether to use attachments for the experiment.

maxConcurrency: number
    The maximum concurrency to use for predictions/evaluations when a more specific concurrency option is not provided.

metadata: KVMap
    Additional metadata associated with the experiment.

numRepetitions: number
    The number of repetitions to perform. Each example will be run this many times.

summaryEvaluators: SummaryEvaluatorT[]
    A list of summary evaluators to run on the entire dataset.

targetConcurrency: number
    The maximum number of concurrent predictions to run. If not provided, defaults to maxConcurrency when set.
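The three concurrency options interact: targetConcurrency and evaluationConcurrency each fall back to maxConcurrency when they are not set. The sketch below illustrates that documented fallback with a local mirror of the relevant fields — EvaluateOptionsSketch and resolveConcurrency are illustrative names, not part of the langsmith package.

```typescript
// Hypothetical sketch: a local mirror of the documented concurrency fields
// (not the real langsmith types), showing how the options relate.
interface EvaluateOptionsSketch {
  maxConcurrency?: number;        // general fallback for predictions/evaluations
  targetConcurrency?: number;     // max concurrent predictions
  evaluationConcurrency?: number; // max concurrent evaluators
}

// Per the docs, each specific option defaults to maxConcurrency when set.
function resolveConcurrency(opts: EvaluateOptionsSketch) {
  return {
    target: opts.targetConcurrency ?? opts.maxConcurrency,
    evaluation: opts.evaluationConcurrency ?? opts.maxConcurrency,
  };
}

console.log(resolveConcurrency({ maxConcurrency: 4 }));
// → { target: 4, evaluation: 4 }
console.log(resolveConcurrency({ maxConcurrency: 4, evaluationConcurrency: 1 }));
// → { target: 4, evaluation: 1 }
```

Setting only maxConcurrency is usually enough; reach for the specific options when predictions and evaluators have different throughput limits (e.g. a rate-limited model target but cheap local evaluators).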