LangChain Reference
langchain-classic › evaluation › parsing › json_schema › JsonSchemaEvaluator
Class · Since v1.0

    JsonSchemaEvaluator

    An evaluator that validates a JSON prediction against a JSON schema reference.

This evaluator checks whether a given JSON prediction conforms to the provided JSON schema. If the prediction is valid, the score is True (no errors); otherwise, the score is False (an error occurred).

    JsonSchemaEvaluator(**kwargs: Any)

    Bases

    StringEvaluator

    Constructors

    constructor
    __init__

    Attributes

    attribute
    requires_input: bool

    Returns whether the evaluator requires input.

    attribute
    requires_reference: bool

    Returns whether the evaluator requires a reference.

    attribute
    evaluation_name: str

    Returns the name of the evaluation.

    Inherited from StringEvaluator

    Methods

    method
    evaluate_strings

    Evaluate Chain or LLM output, based on optional input and label.

    async method
    aevaluate_strings

    Asynchronously evaluate Chain or LLM output, based on optional input and label.
