config

Module langchain_classic.smith.evaluation.config · Since v1.0

    Configuration for run evaluators.

    Attributes

    attribute
    RUN_EVALUATOR_LIKE: Callable[[Run, Example | None], EvaluationResult | EvaluationResults | dict]
    attribute
    BATCH_EVALUATOR_LIKE: Callable[[Sequence[Run], Sequence[Example] | None], EvaluationResult | EvaluationResults | dict]
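A function matching RUN_EVALUATOR_LIKE receives a single Run (and the dataset Example, if one exists) and returns a result. A minimal sketch, assuming the Run and Example schemas from the langsmith package and an illustrative "output" key in the run and example outputs:

```python
from langsmith.schemas import Example, Run


def exact_match(run: Run, example: Example | None) -> dict:
    """A RUN_EVALUATOR_LIKE callable that returns a plain result dict."""
    # The "output" key is an assumption for illustration; use whatever
    # key your chain actually writes its answer under.
    prediction = (run.outputs or {}).get("output")
    reference = (example.outputs or {}).get("output") if example else None
    return {"key": "exact_match", "score": int(prediction == reference)}
```

A BATCH_EVALUATOR_LIKE callable is the same idea applied to aligned sequences of runs and examples, producing one aggregate result.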

    Classes

    class
    EmbeddingDistanceEnum

    Embedding distance metric.

    class
    EvaluatorType

    The types of evaluators available.

    class
    StringEvaluator

    String evaluator interface.

    Grade, tag, or otherwise evaluate predictions relative to their inputs and/or reference labels.
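A minimal sketch of a custom subclass, assuming StringEvaluator is importable from langchain_classic.evaluation.schema (the import path is not shown on this page) and using an illustrative keyword check:

```python
from langchain_classic.evaluation.schema import StringEvaluator


class KeywordEvaluator(StringEvaluator):
    """Grade a prediction by whether it mentions the reference keyword."""

    @property
    def evaluation_name(self) -> str:
        return "mentions_keyword"

    @property
    def requires_reference(self) -> bool:
        return True

    def _evaluate_strings(
        self,
        *,
        prediction: str,
        reference: str | None = None,
        input: str | None = None,
        **kwargs,
    ) -> dict:
        # Score 1 when the reference keyword appears in the prediction.
        keyword = (reference or "").lower()
        return {"score": int(bool(keyword) and keyword in prediction.lower())}
```

Callers then use the public evaluate_strings(prediction=..., reference=...) entry point rather than calling _evaluate_strings directly.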

    class
    StringDistanceEnum

    String distance metric to use.

    class
    EvalConfig

    Configuration for a given run evaluator.

    class
    SingleKeyEvalConfig

    Configuration for a run evaluator that only requires a single key.

    class
    RunEvalConfig

    Configuration for a run evaluation.
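A minimal configuration sketch combining a built-in evaluator name with a criteria evaluator; the evaluator choice and criterion wording below are illustrative, not prescribed by this module:

```python
from langchain_classic.smith.evaluation.config import RunEvalConfig

eval_config = RunEvalConfig(
    evaluators=[
        # A built-in evaluator referenced by its EvaluatorType string.
        "qa",
        # A criteria evaluator configured through the nested config class.
        RunEvalConfig.Criteria(
            {"conciseness": "Is the answer concise and to the point?"}
        ),
    ],
)
```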

    Type Aliases

    typeAlias
    CRITERIA_TYPE: Mapping[str, str] | Criteria | ConstitutionalPrinciple
    typeAlias
    CUSTOM_EVALUATOR_TYPE
    typeAlias
    SINGLE_EVAL_CONFIG_TYPE
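Of these, CRITERIA_TYPE is the one most often written by hand; its simplest form is a plain mapping from criterion name to the question the grader should answer. Both entries below are illustrative:

```python
criteria: dict[str, str] = {
    # Criterion name -> grading question (illustrative examples).
    "conciseness": "Is the submission concise and to the point?",
    "grounded": "Is the answer supported by the provided context?",
}
```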