langchain_classic.evaluation.schema.EvaluatorType
Class · Since v1.0

    EvaluatorType

    The types of evaluators available.

    EvaluatorType()

    Bases

    str, Enum
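
    Because the class subclasses both str and Enum, each member is itself a string and can be passed anywhere a plain evaluator-name string is accepted. A minimal sketch of that pattern (the member and its value below are illustrative, not taken from this page):

    ```python
    from enum import Enum

    # Minimal sketch of the str + Enum pattern that EvaluatorType follows.
    # The member and its value are illustrative; see the attribute list
    # below for the real members.
    class ExampleEvaluatorType(str, Enum):
        QA = "qa"

    # A member compares equal to its underlying string and is itself a str,
    # so it works wherever a plain evaluator-name string is expected.
    assert ExampleEvaluatorType.QA == "qa"
    assert isinstance(ExampleEvaluatorType.QA, str)
    assert ExampleEvaluatorType("qa") is ExampleEvaluatorType.QA  # lookup by value
    ```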

    Attributes

    attribute
    QA: str

    Question answering evaluator, which grades answers to questions directly using an LLM.

    attribute
    COT_QA: str

    Chain-of-thought question answering evaluator, which grades answers to questions using chain-of-thought reasoning.

    attribute
    CONTEXT_QA: str

    Question answering evaluator that incorporates context (such as supporting documents) when grading the response.

    attribute
    PAIRWISE_STRING: str

    The pairwise string evaluator, which predicts the preferred prediction between the outputs of two models.

    attribute
    SCORE_STRING: str

    The scored string evaluator, which gives a score between 1 and 10 to a prediction.

    attribute
    LABELED_PAIRWISE_STRING: str

    The labeled pairwise string evaluator, which predicts the preferred prediction between the outputs of two models, based on a ground truth reference label.

    attribute
    LABELED_SCORE_STRING: str

    The labeled scored string evaluator, which gives a score between 1 and 10 to a prediction based on a ground truth reference label.

    attribute
    AGENT_TRAJECTORY: str

    The agent trajectory evaluator, which grades the agent's intermediate steps.

    attribute
    CRITERIA: str

    The criteria evaluator, which evaluates a model based on a custom set of criteria without any reference labels.

    attribute
    LABELED_CRITERIA: str

    The labeled criteria evaluator, which evaluates a model based on a custom set of criteria, with a reference label.

    attribute
    STRING_DISTANCE: str

    Compare predictions to a reference answer using string edit distances.

    attribute
    EXACT_MATCH: str

    Compare predictions to a reference answer using exact matching.

    attribute
    REGEX_MATCH: str

    Compare predictions to a reference answer using regular expressions.

    attribute
    PAIRWISE_STRING_DISTANCE: str

    Compare predictions based on string edit distances.

    attribute
    EMBEDDING_DISTANCE: str

    Compare a prediction to a reference label using embedding distance.

    attribute
    PAIRWISE_EMBEDDING_DISTANCE: str

    Compare two predictions using embedding distance.

    attribute
    JSON_VALIDITY: str

    Check if a prediction is valid JSON.

    attribute
    JSON_EQUALITY: str

    Check if a prediction is equal to a reference JSON.

    attribute
    JSON_EDIT_DISTANCE: str

    Compute the edit distance between two JSON strings after canonicalization.

    attribute
    JSON_SCHEMA_VALIDATION: str

    Check if a prediction is valid JSON according to a JSON schema.
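
    As a usage sketch: in earlier langchain releases these members (or their string values) are passed to load_evaluator to construct the matching evaluator. The langchain_classic import path below is an assumption based on this page's module location, not something this page confirms.

    ```python
    # Hedged usage sketch: load_evaluator and evaluate_strings come from the
    # long-standing langchain.evaluation API; importing them from
    # langchain_classic.evaluation is an assumption based on this page's path.
    from langchain_classic.evaluation import EvaluatorType, load_evaluator

    # Non-LLM evaluators such as EXACT_MATCH load without a model.
    exact_match = load_evaluator(EvaluatorType.EXACT_MATCH)
    print(exact_match.evaluate_strings(prediction="42", reference="42"))
    # Typically returns a dict with a "score" key (1 for a match, 0 otherwise).

    # LLM-graded evaluators such as QA take an llm argument; commented out so
    # the sketch runs without credentials.
    # from langchain_openai import ChatOpenAI
    # qa = load_evaluator(EvaluatorType.QA, llm=ChatOpenAI(model="gpt-4o-mini"))
    # print(qa.evaluate_strings(
    #     prediction="Paris",
    #     input="What is the capital of France?",
    #     reference="Paris",
    # ))
    ```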
