LangChain Reference
    langchain_classic.evaluation.criteria.eval_chain.CriteriaEvalChain (Python)
    Class · Since v1.0

    CriteriaEvalChain

    CriteriaEvalChain()

    Bases

    StringEvaluator, LLMEvalChain, LLMChain

    Attributes

    attribute output_parser: BaseOutputParser
    attribute criterion_name: str
    attribute output_key: str
    attribute model_config
    attribute requires_reference: bool
    attribute requires_input: bool
    attribute evaluation_name: str

    Methods

    method is_lc_serializable
    method resolve_criteria
    method from_llm

    Inherited from StringEvaluator

    Methods

    method evaluate_strings
    Evaluate Chain or LLM output, based on optional input and label.

    method aevaluate_strings
    Asynchronously evaluate Chain or LLM output, based on optional input and label.

    Inherited from LLMChain

    Attributes

    attribute prompt: str
    attribute llm: BaseLanguageModel | None
    attribute return_final_only: bool
    Whether to return only the final parsed result.

    attribute llm_kwargs: dict
    attribute input_keys: list[str]
    attribute output_keys: list[str]
    The keys to use for the output.

    Methods

    method generate
    Generate LLM result from inputs.

    method agenerate
    Generate LLM result from inputs.

    method prep_prompts
    Prepare prompts from inputs.

    method aprep_prompts
    Prepare prompts from inputs.

    Inherited from Chain

    Attributes

    attribute memory: BaseMemory | None
    Optional memory object.

    attribute callbacks: Callbacks
    attribute verbose: bool
    attribute tags: list[str] | None

    Inherited from RunnableSerializable (langchain_core)

    Attributes

    attribute name

    Methods

    method to_json
    method configurable_fields
    method configurable_alternatives

    Inherited from Serializable (langchain_core)

    Attributes

    attribute lc_secrets
    attribute lc_attributes

    Methods

    method get_lc_namespace
    method lc_id

    Inherited from Runnable (langchain_core)

    Attributes

    attribute name
    attribute InputType
    attribute OutputType
    attribute input_schema

    LLM Chain for evaluating runs against criteria.

    Parameters

    llm : BaseLanguageModel
        The language model to use for evaluation.
    criteria : Union[Mapping[str, str], str]
        The criteria or rubric to evaluate the runs against. It can be a mapping of criterion name to its description, or a single criterion name.
    prompt : Optional[BasePromptTemplate], default=None
        The prompt template to use for generating prompts. If not provided, a default prompt template is selected based on the value of requires_reference.
    requires_reference : bool, default=False
        Whether the evaluation requires a reference text. If True, the PROMPT_WITH_REFERENCES template is used, which includes the reference labels in the prompt. Otherwise, the reference-free PROMPT template is used.
    **kwargs : Any
        Additional keyword arguments to pass to the LLMChain constructor.

    Returns:

    CriteriaEvalChain
        An instance of the CriteriaEvalChain class.
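    The requires_reference switch described above selects between two prompt templates. A minimal sketch of that selection, assuming placeholder template strings (these are not the library's actual PROMPT / PROMPT_WITH_REFERENCES constants):

```python
# Hedged sketch: requires_reference picks a reference-free template or one
# that includes reference labels. Template strings here are placeholders.
PROMPT = "Grade the submission on the criteria: {criteria}\nSubmission: {output}"
PROMPT_WITH_REFERENCES = (
    "Grade the submission on the criteria: {criteria}\n"
    "Reference: {reference}\nSubmission: {output}"
)


def select_prompt(requires_reference: bool) -> str:
    # True -> the template with reference labels; False -> reference-free.
    return PROMPT_WITH_REFERENCES if requires_reference else PROMPT
```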

    Examples:

    >>> from langchain_anthropic import ChatAnthropic
    >>> from langchain_classic.evaluation.criteria import CriteriaEvalChain
    >>> model = ChatAnthropic(temperature=0)
    >>> criteria = {"my-custom-criterion": "Is the submission the most amazing ever?"}
    >>> evaluator = CriteriaEvalChain.from_llm(llm=model, criteria=criteria)
    >>> evaluator.evaluate_strings(
    ...     prediction="Imagine an ice cream flavor for the color aquamarine",
    ...     input="Tell me an idea",
    ... )
    {
        'reasoning': 'Here is my step-by-step reasoning for the given criteria:\n\nThe criterion is: "Is the submission the most amazing ever?" This is a subjective criterion and open to interpretation. The submission suggests an aquamarine-colored ice cream flavor which is creative but may or may not be considered the most amazing idea ever conceived. There are many possible amazing ideas and this one ice cream flavor suggestion may or may not rise to that level for every person. \n\nN',
        'value': 'N',
        'score': 0,
    }

    >>> from langchain_openai import ChatOpenAI
    >>> from langchain_classic.evaluation.criteria import LabeledCriteriaEvalChain
    >>> model = ChatOpenAI(model="gpt-4", temperature=0)
    >>> criteria = "correctness"
    >>> evaluator = LabeledCriteriaEvalChain.from_llm(
    ...     llm=model,
    ...     criteria=criteria,
    ... )
    >>> evaluator.evaluate_strings(
    ...     prediction="The answer is 4",
    ...     input="How many apples are there?",
    ...     reference="There are 3 apples",
    ... )
    {
        'score': 0,
        'reasoning': 'The criterion for this task is the correctness of the submission. The submission states that there are 4 apples, but the reference indicates that there are actually 3 apples. Therefore, the submission is not correct, accurate, or factual according to the given criterion.\n\nN',
        'value': 'N',
    }
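    The result dictionaries in the examples above share a fixed shape. A standalone sketch of how a verdict string ending in "Y" or "N" could map onto that shape (illustrative only, not the library's actual output parser):

```python
# Maps an LLM verdict string whose last line is "Y" or "N" onto the
# {'reasoning', 'value', 'score'} result shape shown above. Illustrative only.
def parse_criteria_output(text: str) -> dict:
    # Split the reasoning from the final one-line verdict.
    reasoning, _, verdict = text.strip().rpartition("\n")
    verdict = verdict.strip()
    return {
        "reasoning": reasoning.strip(),
        "value": verdict,
        "score": 1 if verdict == "Y" else 0,  # Y -> 1, anything else -> 0
    }


result = parse_criteria_output("The submission matches the reference.\nY")
# result["value"] == "Y", result["score"] == 1
```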

    method apply
    Utilize the LLM generate method for speed gains.

    method aapply
    Utilize the LLM generate method for speed gains.

    method create_outputs
    Create outputs from response.

    method predict
    Format prompt with kwargs and pass to LLM.

    method apredict
    Format prompt with kwargs and pass to LLM.

    method predict_and_parse
    Call predict and then parse the results.

    method apredict_and_parse
    Call apredict and then parse the results.

    method apply_and_parse
    Call apply and then parse the results.

    method aapply_and_parse
    Call apply and then parse the results.

    method from_string
    Create LLMChain from LLM and template.

    attribute metadata: dict[str, Any] | None
    attribute callback_manager: BaseCallbackManager | None
    [DEPRECATED] Use callbacks instead.

    attribute input_keys: list[str]
    attribute output_keys: list[str]
    The keys to use for the output.

    Methods

    method get_input_schema
    method get_output_schema
    method invoke
    method ainvoke
    method raise_callback_manager_deprecation
    Raise deprecation warning if callback_manager is used.

    method set_verbose
    Set the chain verbosity.

    method acall
    Asynchronously execute the chain.

    method prep_outputs
    Validate and prepare chain outputs, and save info about this run to memory.

    method aprep_outputs
    Validate and prepare chain outputs, and save info about this run to memory.

    method prep_inputs
    Prepare chain inputs, including adding inputs from memory.

    method aprep_inputs
    Prepare chain inputs, including adding inputs from memory.

    method run
    Convenience method for executing chain.

    method arun
    Convenience method for executing chain.

    method dict
    Return dictionary representation of agent.

    method save
    Save the agent.


    method to_json
    method to_json_not_implemented
    attribute output_schema
    attribute config_specs

    Methods

    methods: get_name, get_input_schema, get_input_jsonschema, get_output_schema, get_output_jsonschema, config_schema, get_config_jsonschema, get_graph, get_prompts, pipe, pick, assign, invoke, ainvoke, batch, batch_as_completed, abatch, abatch_as_completed, stream, astream, astream_log, astream_events, transform, atransform, bind, with_config, with_listeners, with_alisteners, with_types, with_retry, map, with_fallbacks, as_tool

    output_parser
    The parser to use to map the output to a structured result.

    criterion_name
    The name of the criterion being evaluated.

    requires_reference
    Whether the evaluation requires a reference text.

    evaluation_name
    Get the name of the evaluation.

    Returns:

    str
        The name of the evaluation.

    Resolve the criteria to evaluate.

    Parameters

    criteria : CRITERIA_TYPE
        The criteria to evaluate the runs against. It can be:
        - a mapping of a criterion name to its description
        - a single criterion name present in one of the default criteria
        - a single ConstitutionalPrinciple instance

    Returns:

    Dict[str, str]
        A dictionary mapping criterion names to descriptions.

    Examples:

    >>> criterion = "relevance"
    >>> CriteriaEvalChain.resolve_criteria(criterion)
    {'relevance': 'Is the submission referring to a real quote from the text?'}
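    The resolution behavior can be sketched in plain Python. The default-criteria table below is an illustrative subset; the library ships a longer list:

```python
from collections.abc import Mapping
from typing import Union

# Illustrative subset of the built-in criteria; not the library's full table.
_DEFAULT_CRITERIA = {
    "relevance": "Is the submission referring to a real quote from the text?",
    "correctness": "Is the submission correct, accurate, and factual?",
    "helpfulness": "Is the submission helpful, insightful, and appropriate?",
}


def resolve_criteria(criteria: Union[Mapping[str, str], str]) -> dict:
    """Return a {criterion name: description} mapping."""
    if isinstance(criteria, Mapping):
        return dict(criteria)  # a custom rubric passes through unchanged
    # A bare criterion name is looked up in the default table.
    return {criteria: _DEFAULT_CRITERIA[criteria]}


print(resolve_criteria("relevance"))
```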

    Create a CriteriaEvalChain instance from an llm and criteria.

    Parameters

    llm : BaseLanguageModel
        The language model to use for evaluation.
    criteria : CRITERIA_TYPE, default=None ("helpfulness")
        The criteria to evaluate the runs against. It can be:
        - a mapping of a criterion name to its description
        - a single criterion name present in one of the default criteria
        - a single ConstitutionalPrinciple instance
    prompt : Optional[BasePromptTemplate], default=None
        The prompt template to use for generating prompts. If not provided, a default prompt template is used.
    **kwargs : Any
        Additional keyword arguments to pass to the LLMChain constructor.

    Returns:

    CriteriaEvalChain
        An instance of the CriteriaEvalChain class.

    Examples:

    >>> from langchain_openai import OpenAI
    >>> from langchain_classic.evaluation.criteria import LabeledCriteriaEvalChain
    >>> model = OpenAI()
    >>> criteria = {
    ...     "hallucination": (
    ...         "Does this submission contain information"
    ...         " not present in the input or reference?"
    ...     ),
    ... }
    >>> chain = LabeledCriteriaEvalChain.from_llm(
    ...     llm=model,
    ...     criteria=criteria,
    ... )