LangChain Reference
Python · langchain-classic · evaluation · comparison · eval_chain
Module · Since v1.0

    eval_chain

    Base classes for comparing the output of two models.

    Attributes

attribute COMPARISON_TEMPLATE
attribute COMPARISON_TEMPLATE_WITH_REFERENCE
attribute CRITERIA_INSTRUCTIONS: str
attribute RUN_KEY: str
attribute logger

    Functions

function resolve_pairwise_criteria

    Resolve the criteria for the pairwise evaluator.
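"Resolving" criteria means normalizing the different accepted specs (a name, a mapping, or a list mixing both) into a single mapping of criterion name to description. The sketch below is a hypothetical illustration of that idea, not the library's implementation; the default criterion and its wording are invented for the example.

```python
# Hypothetical default criteria; the real library ships its own set.
_DEFAULT_CRITERIA = {
    "helpfulness": "Which response better addresses the user's question?",
}


def resolve_criteria_sketch(criteria):
    """Normalize a criteria spec into a {name: description} mapping."""
    if criteria is None:
        # No spec: fall back to the defaults.
        return dict(_DEFAULT_CRITERIA)
    if isinstance(criteria, str):
        # A bare name: look up a default description, else use the name itself.
        return {criteria: _DEFAULT_CRITERIA.get(criteria, criteria)}
    if isinstance(criteria, dict):
        # Already a mapping: copy it through.
        return dict(criteria)
    if isinstance(criteria, list):
        # A list of specs: resolve each and merge.
        resolved = {}
        for item in criteria:
            resolved.update(resolve_criteria_sketch(item))
        return resolved
    raise TypeError(f"Unsupported criteria spec: {type(criteria)!r}")
```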

    Classes

class ConstitutionalPrinciple

    Class for a constitutional principle.

class Criteria

    A Criteria to evaluate.

class LLMEvalChain

    A base class for evaluators that use an LLM.

class PairwiseStringEvaluator

    Compare the output of two models (or two outputs of the same model).

class PairwiseStringResultOutputParser

    A parser for the output of the PairwiseStringEvalChain.
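A pairwise judge prompt typically asks the LLM to end its answer with a bracketed verdict such as [[A]], [[B]], or [[C]] for a tie. The following is a minimal self-contained sketch of what parsing such output involves, assuming that verdict convention and a reasoning/value/score result shape; it is an illustration, not the library's actual parser.

```python
import re


def parse_pairwise_verdict(text: str) -> dict:
    """Extract a [[A]]/[[B]]/[[C]] verdict from judge output (sketch)."""
    match = re.search(r"\[\[(A|B|C)\]\]", text)
    if match is None:
        raise ValueError(f"no verdict found in {text!r}")
    verdict = match.group(1)
    # Score prediction A: 1 if A wins, 0 if B wins, 0.5 for a tie
    # (an assumed convention for this sketch).
    score = {"A": 1.0, "B": 0.0, "C": 0.5}[verdict]
    return {
        "reasoning": text[: match.start()].strip(),
        "value": verdict,
        "score": score,
    }
```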

class PairwiseStringEvalChain

    Pairwise String Evaluation Chain.

    A chain for comparing two outputs, such as the outputs of two models, prompts, or outputs of a single model on similar inputs.
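Each pairwise comparison yields a preference score for the first output, and across a dataset these scores aggregate naturally into a win rate. A small sketch, assuming the 1 / 0 / 0.5 convention for win / loss / tie (an assumption about the score values, not a documented contract):

```python
def win_rate(scores: list[float]) -> float:
    """Average pairwise scores (1 win, 0 loss, 0.5 tie) into a win rate."""
    if not scores:
        raise ValueError("no scores to aggregate")
    return sum(scores) / len(scores)
```

For example, two wins, one loss, and one tie over four comparisons gives a win rate of 0.625 for the first model.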

class LabeledPairwiseStringEvalChain

    Labeled Pairwise String Evaluation Chain.

    A chain for comparing two outputs, such as the outputs of two models, prompts, or outputs of a single model on similar inputs, with labeled preferences.

deprecated class LLMChain

    Chain to run queries against LLMs.

    This class is deprecated. See below for an example implementation using LangChain runnables:

    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import PromptTemplate
    from langchain_openai import OpenAI
    
    prompt_template = "Tell me a {adjective} joke"
    prompt = PromptTemplate(input_variables=["adjective"], template=prompt_template)
    model = OpenAI()
    chain = prompt | model | StrOutputParser()
    
chain.invoke({"adjective": "your adjective here"})

    Type Aliases

type alias CRITERIA_TYPE: Mapping[str, str] | Criteria | ConstitutionalPrinciple
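The simplest form of CRITERIA_TYPE is a plain mapping of criterion name to description. The names and wording below are illustrative, not criteria shipped by the library:

```python
# A custom criteria mapping satisfying the Mapping[str, str] arm of
# CRITERIA_TYPE; both entries are invented for illustration.
custom_criteria: dict[str, str] = {
    "helpfulness": "Which response better addresses the user's question?",
    "conciseness": "Which response conveys the answer with less filler?",
}
```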