langchain-classic › evaluation › scoring › prompt
Module · Since v1.0

prompt

Prompts for scoring the outputs of a model for a given question.

These prompts are used to score a model's responses and evaluate how well they follow the instructions and answer the question. They are based on Zheng et al., "Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena" (https://arxiv.org/abs/2306.05685).
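
As a usage illustration (not part of this module), here is a minimal sketch of driving these prompts through the classic string-scoring evaluator. The langchain_classic.evaluation.load_evaluator entry point and the ChatOpenAI model are assumptions based on the classic langchain.evaluation API:

    # Minimal sketch: scoring a response with the "score_string" evaluator,
    # which renders this module's SCORING_TEMPLATE. Assumes langchain-classic
    # keeps the classic load_evaluator() and that an OpenAI API key is set.
    from langchain_classic.evaluation import load_evaluator
    from langchain_openai import ChatOpenAI

    evaluator = load_evaluator("score_string", llm=ChatOpenAI(model="gpt-4o-mini"))

    result = evaluator.evaluate_strings(
        prediction="Paris is the capital of France.",
        input="What is the capital of France?",
    )
    print(result["score"])      # integer rating on a 1-10 scale
    print(result["reasoning"])  # the judge model's explanation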


Attributes

• SYSTEM_MESSAGE: str
• CRITERIA_INSTRUCTIONS: str
• DEFAULT_CRITERIA: str
• SCORING_TEMPLATE
• SCORING_TEMPLATE_WITH_REFERENCE
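
The attributes can also be imported and inspected directly. A short sketch, assuming the module path shown in the breadcrumb above; input_variables is used to discover each template's expected variables rather than hard-coding them:

    # Sketch: inspecting this module's prompt attributes directly.
    from langchain_classic.evaluation.scoring.prompt import (
        DEFAULT_CRITERIA,
        SCORING_TEMPLATE,
        SCORING_TEMPLATE_WITH_REFERENCE,
    )

    print(DEFAULT_CRITERIA)                                 # default grading criteria text
    print(SCORING_TEMPLATE.input_variables)                 # variables the template expects
    print(SCORING_TEMPLATE_WITH_REFERENCE.input_variables)  # variant that also takes a reference answer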