LangChain Reference
langchain_classic.evaluation.criteria.eval_chain

Class LabeledCriteriaEvalChain (since v1.0)

    LabeledCriteriaEvalChain()

Bases: CriteriaEvalChain

Inherited from CriteriaEvalChain

Attributes

output_parser: AgentOutputParser — Output parser to use for agent.
criterion_name: str — The name of the criterion being evaluated.
output_key: str
model_config
requires_input: bool — Whether this evaluator requires an input string.
evaluation_name: str — The name of the evaluation.

Methods

resolve_criteria — Resolve the criteria to evaluate.
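The criteria argument can arrive as a mapping, a single default-criterion name, or nothing at all. The sketch below illustrates that resolution behavior in plain Python; the default criterion texts here are placeholders, and the real logic lives in CriteriaEvalChain.resolve_criteria.

```python
from typing import Mapping, Optional, Union

# Placeholder default criteria; the actual texts ship with langchain.
DEFAULT_CRITERIA = {
    "helpfulness": "Is the submission helpful, insightful, and appropriate?",
    "correctness": "Is the submission correct, accurate, and factual?",
}


def resolve_criteria(
    criteria: Optional[Union[str, Mapping[str, str]]],
) -> dict:
    """Illustrative sketch of criteria resolution, not the real method."""
    if criteria is None:
        # No criteria given: fall back to the "helpfulness" default.
        return {"helpfulness": DEFAULT_CRITERIA["helpfulness"]}
    if isinstance(criteria, str):
        # A single name is looked up in the default criteria set.
        return {criteria: DEFAULT_CRITERIA[criteria]}
    # A mapping of criterion name -> description is used as-is.
    return dict(criteria)
```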

Inherited from StringEvaluator

Attributes

evaluation_name: str — The name of the evaluation.

Methods

evaluate_strings — Evaluate Chain or LLM output, based on optional input and label.
aevaluate_strings — Asynchronously evaluate Chain or LLM output, based on optional input and label.
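The evaluate_strings contract — keyword arguments prediction, reference, and input, returning a dict with a score — can be illustrated with a minimal stand-in. KeywordMatchEvaluator below is a hypothetical example for demonstration only, not a langchain class.

```python
from typing import Optional


class KeywordMatchEvaluator:
    """Hypothetical evaluator showing the evaluate_strings call shape."""

    requires_input = False
    requires_reference = True

    def evaluate_strings(
        self,
        *,
        prediction: str,
        reference: Optional[str] = None,
        input: Optional[str] = None,
    ) -> dict:
        # Score 1 if every word of the reference appears in the prediction.
        words = (reference or "").lower().split()
        hit = all(w in prediction.lower() for w in words)
        return {"score": int(hit), "value": "Y" if hit else "N"}
```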

Inherited from LLMChain

Attributes

prompt: str
llm: BaseLanguageModel | None
output_key: str
output_parser: AgentOutputParser — Output parser to use for agent.
return_final_only: bool — Whether to return only the final parsed result.
llm_kwargs: dict
model_config
input_keys: list[str]
output_keys: list[str] — The keys to use for the output.

Methods

generate — Generate LLM result from inputs.
agenerate — Asynchronously generate LLM result from inputs.
prep_prompts — Prepare prompts from inputs.
aprep_prompts — Asynchronously prepare prompts from inputs.

Inherited from Chain

Attributes

memory: BaseMemory | None — Optional memory object.
callbacks: Callbacks
verbose: bool
tags: list[str] | None
metadata: dict[str, Any] | None
callback_manager: BaseCallbackManager | None — [DEPRECATED] Use callbacks instead.
model_config
input_keys: list[str]
output_keys: list[str] — The keys to use for the output.

Methods

get_input_schema
get_output_schema
invoke
ainvoke
raise_callback_manager_deprecation — Raise deprecation warning if callback_manager is used.

Inherited from RunnableSerializable (langchain_core)

Attributes

name
model_config

Methods

to_json
configurable_fields
configurable_alternatives

Inherited from Serializable (langchain_core)

Attributes

lc_secrets
lc_attributes
model_config

Methods

get_lc_namespace
lc_id
to_json
to_json_not_implemented

Inherited from Runnable (langchain_core)

Attributes

name
InputType
OutputType
input_schema
output_schema
config_specs

Methods

get_name
get_input_schema
get_input_jsonschema
get_output_schema
get_output_jsonschema
Criteria evaluation chain that requires references.

Attributes

requires_reference: bool — Whether the evaluation requires a reference text.

Methods

is_lc_serializable

from_llm — Create a LabeledCriteriaEvalChain instance from an llm and criteria.

Parameters

llm: BaseLanguageModel — The language model to use for evaluation.
criteria: CRITERIA_TYPE, default=None (uses "helpfulness") — The criteria to evaluate the runs against. It can be: a mapping of a criterion name to its description; a single criterion name present in one of the default criteria; or a single ConstitutionalPrinciple instance.
prompt: Optional[BasePromptTemplate], default=None — The prompt template to use for generating prompts. If not provided, a default prompt will be used.
**kwargs: Any — Additional keyword arguments to pass to the LLMChain constructor.

Returns

LabeledCriteriaEvalChain — An instance of the LabeledCriteriaEvalChain class.

Examples

    from langchain_openai import OpenAI
    from langchain_classic.evaluation.criteria import LabeledCriteriaEvalChain

    model = OpenAI()
    criteria = {
        "hallucination": (
            "Does this submission contain information"
            " not present in the input or reference?"
        ),
    }
    chain = LabeledCriteriaEvalChain.from_llm(
        llm=model,
        criteria=criteria,
    )
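A run of the chain yields a dict holding the model's reasoning, a Y/N value, and a binary score. The sketch below shows how such a verdict text could be reduced to that shape; parse_criteria_verdict is a hypothetical helper illustrating an assumed "reasoning, then a final Y/N line" format, not the chain's actual output parser.

```python
def parse_criteria_verdict(text: str) -> dict:
    """Hypothetical parse of a criteria verdict into the result dict shape."""
    # Assumes the LLM emits reasoning followed by a final "Y" or "N" line.
    reasoning, _, verdict = text.strip().rpartition("\n")
    verdict = verdict.strip()
    # Map the verdict letter to a binary score; unknown verdicts get None.
    score = {"Y": 1, "N": 0}.get(verdict)
    return {"reasoning": reasoning.strip(), "value": verdict, "score": score}
```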

Inherited from LLMChain (continued)

apply — Utilize the LLM generate method for speed gains.
aapply — Asynchronously utilize the LLM generate method for speed gains.
create_outputs — Create outputs from response.
predict — Format prompt with kwargs and pass to LLM.
apredict — Asynchronously format prompt with kwargs and pass to LLM.
predict_and_parse — Call predict and then parse the results.
apredict_and_parse — Call apredict and then parse the results.
apply_and_parse — Call apply and then parse the results.
aapply_and_parse — Call aapply and then parse the results.
from_string — Create LLMChain from LLM and template.

Inherited from Chain (continued)

set_verbose — Set the chain verbosity.
acall — Asynchronously execute the chain.
prep_outputs — Validate and prepare chain outputs, and save info about this run to memory.
aprep_outputs — Asynchronously validate and prepare chain outputs, and save info about this run to memory.
prep_inputs — Prepare chain inputs, including adding inputs from memory.
aprep_inputs — Asynchronously prepare chain inputs, including adding inputs from memory.
run — Convenience method for executing chain.
arun — Asynchronous convenience method for executing chain.
dict — Return dictionary representation of chain.
save — Save the chain.

Inherited from Runnable (continued)

config_schema
get_config_jsonschema
get_graph
get_prompts
pipe
pick
assign
invoke
ainvoke
batch
batch_as_completed
abatch
abatch_as_completed
stream
astream
astream_log
astream_events
transform
atransform
bind
with_config
with_listeners
with_alisteners
with_types
with_retry
map
with_fallbacks
as_tool