LangChain Reference
Python › langchain-classic › evaluation › embedding_distance › base
Module · Since v1.0

    base

    A chain for comparing the output of two models using embeddings.

    Attributes

attribute RUN_KEY: str
attribute logger

    Classes

class Chain

    Abstract base class for creating structured sequences of calls to components.

    Chains should be used to encode a sequence of calls to components like models, document retrievers, other chains, etc., and provide a simple interface to this sequence.
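The "sequence of calls" idea can be illustrated without the real Chain API. The sketch below composes plain callables into a single pipeline; it is an illustration of the concept only, and `make_chain` is a hypothetical helper, not part of the library.

```python
# A minimal sketch of the "sequence of calls" idea behind Chain,
# using plain Python callables rather than the real Chain API.
def make_chain(*steps):
    """Compose steps left to right into one callable."""
    def run(value):
        for step in steps:
            value = step(value)
        return value
    return run

# Each step's output feeds the next step's input.
pipeline = make_chain(str.strip, str.lower, lambda s: s.split())
print(pipeline("  Hello World  "))  # ['hello', 'world']
```

The real Chain class adds structure on top of this idea: named inputs and outputs, callbacks, and memory.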

class PairwiseStringEvaluator

    Compare the output of two models (or two outputs of the same model).

class StringEvaluator

    String evaluator interface.

    Grade, tag, or otherwise evaluate predictions relative to their inputs and/or reference labels.

class EmbeddingDistance

Enumeration of supported embedding distance metrics.
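As a rough guide to what these metrics compute, here are pure-Python versions of the common distance functions an embedding-distance evaluator can use (the library is expected to offer cosine, Euclidean, Manhattan, Chebyshev, and Hamming variants, computed with numpy internally; these stand-ins are for illustration only).

```python
from math import sqrt

def cosine_distance(a, b):
    # 1 - cosine similarity: 0.0 for same direction, up to 2.0 for opposite.
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def euclidean_distance(a, b):
    # Straight-line (L2) distance.
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan_distance(a, b):
    # Sum of absolute coordinate differences (L1).
    return sum(abs(x - y) for x, y in zip(a, b))

def chebyshev_distance(a, b):
    # Largest single coordinate difference (L-infinity).
    return max(abs(x - y) for x, y in zip(a, b))

print(cosine_distance([1.0, 0.0], [1.0, 0.0]))  # 0.0 for identical directions
```

Lower scores mean the two embeddings, and hence the two texts, are semantically closer.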

class EmbeddingDistanceEvalChain

    Embedding distance evaluation chain.

    Use embedding distances to score semantic difference between a prediction and reference.
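A minimal sketch of what this scoring does, assuming the default cosine metric: embed both strings, then return the cosine distance as the score. `toy_embed` is a hypothetical stand-in for a real embeddings model (the actual chain is configured with an `embeddings` object), so the numbers below are illustrative only.

```python
from math import sqrt

def toy_embed(text):
    # Hypothetical bag-of-letters embedding, for illustration only;
    # a real embeddings model would return a dense semantic vector.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def embedding_distance_score(prediction, reference):
    # Embed both strings and score their cosine distance (default metric).
    a, b = toy_embed(prediction), toy_embed(reference)
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return {"score": 1.0 - dot / norm}

print(embedding_distance_score("Hello", "Hello"))  # score close to 0.0
```

Identical texts score near 0.0; unrelated texts score higher, so a lower score means the prediction is semantically closer to the reference.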

class PairwiseEmbeddingDistanceEvalChain

    Use embedding distances to score semantic difference between two predictions.

Examples:

    chain = PairwiseEmbeddingDistanceEvalChain()
    result = chain.evaluate_string_pairs(prediction="Hello", prediction_b="Hi")
    print(result)
    # {'score': 0.5}
