LangChain Reference
langchain-classic › evaluation › comparison › prompt
Module · Since v1.0

    prompt

    Prompts for comparing the outputs of two models for a given question.

These prompts ask a language model to compare two responses and judge which one best follows the instructions and answers the question. They are based on the LLM-as-a-judge approach of Zheng et al.: https://arxiv.org/abs/2306.05685

    Used in Docs

    • Docusaurus integration

Attributes

• SYSTEM_MESSAGE: str
• CRITERIA_INSTRUCTIONS: str
• COMPARISON_TEMPLATE
• COMPARISON_TEMPLATE_WITH_REFERENCE
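To illustrate how these pieces fit together, here is a minimal sketch of a pairwise-comparison prompt in plain Python. The wording and variable names (`input`, `prediction`, `prediction_b`) are assumptions for illustration, not the library's actual prompt text; in practice you would import `COMPARISON_TEMPLATE` from this module rather than writing your own.

```python
# Hypothetical sketch of a pairwise-comparison ("LLM-as-a-judge") prompt.
# The exact wording in langchain-classic differs; this only shows the shape.
SYSTEM_MESSAGE = (
    "Please act as an impartial judge and evaluate which of the two "
    "responses below best follows the instructions and answers the question."
)

COMPARISON_TEMPLATE = (
    "{system}\n\n"
    "[User Question]\n{input}\n\n"
    "[Assistant A]\n{prediction}\n\n"
    "[Assistant B]\n{prediction_b}\n\n"
    "Which response is better? Answer [[A]], [[B]], or [[C]] for a tie."
)

# Fill the template with a question and the two candidate responses.
prompt = COMPARISON_TEMPLATE.format(
    system=SYSTEM_MESSAGE,
    input="What is the capital of France?",
    prediction="Paris.",
    prediction_b="The capital of France is Paris.",
)
print(prompt)
```

A reference-grounded variant (compare `COMPARISON_TEMPLATE_WITH_REFERENCE`) would add one more slot, e.g. a `[Reference Answer]` section, so the judge can score each response against a known-good answer.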