LangChain Reference
langchain-classic › evaluation › loading
Module · Since v1.0

    loading

    Loading datasets and evaluators.

Functions

  • load_dataset
  • load_evaluator
  • load_evaluators

Classes

  • Chain
  • TrajectoryEvalChain
  • PairwiseStringEvalChain
  • LabeledPairwiseStringEvalChain
  • CriteriaEvalChain
  • LabeledCriteriaEvalChain
  • EmbeddingDistanceEvalChain
  • PairwiseEmbeddingDistanceEvalChain
  • ExactMatchStringEvaluator
  • JsonEqualityEvaluator
  • JsonValidityEvaluator
  • JsonEditDistanceEvaluator
  • JsonSchemaEvaluator
  • ContextQAEvalChain
  • CotQAEvalChain
  • QAEvalChain
  • RegexMatchStringEvaluator
  • EvaluatorType
  • LLMEvalChain
  • StringEvaluator
  • LabeledScoreStringEvalChain
  • ScoreStringEvalChain
  • PairwiseStringDistanceEvalChain
  • StringDistanceEvalChain

load_dataset

Load a dataset from the LangChainDatasets organization on Hugging Face.

load_evaluator

Load the requested evaluation chain specified by a string.

Parameters

  • evaluator (EvaluatorType): The type of evaluator to load.
  • llm (BaseLanguageModel, optional): The language model to use for evaluation. Defaults to None.
  • **kwargs (Any): Additional keyword arguments to pass to the evaluator.

Returns

Chain: The loaded evaluation chain.

Example:

```python
from langchain_classic.evaluation import load_evaluator, EvaluatorType

evaluator = load_evaluator(EvaluatorType.QA)
```

load_evaluators

Load evaluators specified by a list of evaluator types.

Parameters

  • evaluators (Sequence[EvaluatorType]): The list of evaluator types to load.
  • llm (BaseLanguageModel, optional): The language model to use for evaluation. If none is provided, a default ChatOpenAI gpt-4 model is used.
  • config (dict, optional): A dictionary mapping evaluator types to additional keyword arguments. Defaults to None.
  • **kwargs (Any): Additional keyword arguments to pass to all evaluators.

Returns

List[Chain]: The loaded evaluators.

Example:

```python
from langchain_classic.evaluation import load_evaluators, EvaluatorType

evaluators = [EvaluatorType.QA, EvaluatorType.CRITERIA]
loaded_evaluators = load_evaluators(evaluators, criteria="helpfulness")
```

Chain

Abstract base class for creating structured sequences of calls to components.

Chains should be used to encode a sequence of calls to components such as models, document retrievers, or other chains, and to provide a simple interface to that sequence.

PairwiseStringEvalChain

A chain for comparing two outputs, such as the outputs of two models or prompts, or the outputs of a single model on similar inputs.

LabeledPairwiseStringEvalChain

A chain for comparing two outputs, such as the outputs of two models or prompts, or the outputs of a single model on similar inputs, with labeled preferences.

CriteriaEvalChain

LLM chain for evaluating runs against criteria.

Parameters

  • llm (BaseLanguageModel): The language model to use for evaluation.
  • criteria (Union[Mapping[str, str], str]): The criteria or rubric to evaluate the runs against. It can be a mapping of criterion name to its description, or a single criterion name.
  • prompt (Optional[BasePromptTemplate], default=None): The prompt template to use for generating prompts. If not provided, a default prompt template is chosen based on the value of requires_reference.
  • requires_reference (bool, default=False): Whether the evaluation requires a reference text. If True, the PROMPT_WITH_REFERENCES template is used, which includes the reference labels in the prompt. Otherwise, the reference-free PROMPT template is used.
  • **kwargs (Any): Additional keyword arguments to pass to the LLMChain constructor.

Returns

CriteriaEvalChain: An instance of the CriteriaEvalChain class.

Example:

```python
from langchain_anthropic import ChatAnthropic
from langchain_classic.evaluation.criteria import CriteriaEvalChain

model = ChatAnthropic(temperature=0)
criteria = {"my-custom-criterion": "Is the submission the most amazing ever?"}
evaluator = CriteriaEvalChain.from_llm(llm=model, criteria=criteria)
evaluator.evaluate_strings(
    prediction="Imagine an ice cream flavor for the color aquamarine",
    input="Tell me an idea",
)
# {
#     'reasoning': 'Here is my step-by-step reasoning for the given criteria:\n\n'
#                  'The criterion is: "Is the submission the most amazing ever?" '
#                  'This is a subjective criterion and open to interpretation. '
#                  'The submission suggests an aquamarine-colored ice cream flavor '
#                  'which is creative but may or may not be considered the most '
#                  'amazing idea ever conceived. There are many possible amazing '
#                  'ideas and this one ice cream flavor suggestion may or may not '
#                  'rise to that level for every person. \n\nN',
#     'value': 'N',
#     'score': 0,
# }
```

Example with a reference:

```python
from langchain_openai import ChatOpenAI
from langchain_classic.evaluation.criteria import LabeledCriteriaEvalChain

model = ChatOpenAI(model="gpt-4", temperature=0)
criteria = "correctness"
evaluator = LabeledCriteriaEvalChain.from_llm(
    llm=model,
    criteria=criteria,
)
evaluator.evaluate_strings(
    prediction="The answer is 4",
    input="How many apples are there?",
    reference="There are 3 apples",
)
# {
#     'score': 0,
#     'reasoning': 'The criterion for this task is the correctness of the '
#                  'submission. The submission states that there are 4 apples, '
#                  'but the reference indicates that there are actually 3 apples. '
#                  'Therefore, the submission is not correct, accurate, or factual '
#                  'according to the given criterion.\n\nN',
#     'value': 'N',
# }
```

LabeledCriteriaEvalChain

Criteria evaluation chain that requires references.

EmbeddingDistanceEvalChain

Embedding distance evaluation chain. Uses embedding distances to score the semantic difference between a prediction and a reference.

PairwiseEmbeddingDistanceEvalChain

Uses embedding distances to score the semantic difference between two predictions.

Example:

```python
chain = PairwiseEmbeddingDistanceEvalChain()
result = chain.evaluate_string_pairs(prediction="Hello", prediction_b="Hi")
print(result)
# {'score': 0.5}
```
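Under the hood, these chains embed each string and score the pair with a vector distance (cosine by default). As a rough, library-free sketch of that default metric (cosine_distance is an illustrative helper, not part of the library):

```python
import math


def cosine_distance(a: list[float], b: list[float]) -> float:
    """Cosine distance = 1 - cosine similarity of two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)


# Vectors pointing the same way are distance 0; orthogonal vectors are distance 1.
print(cosine_distance([1.0, 0.0], [2.0, 0.0]))  # 0.0
print(cosine_distance([1.0, 0.0], [0.0, 1.0]))  # 1.0
```

A smaller distance means the two texts are semantically closer according to the embedding model.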

ExactMatchStringEvaluator

Compute an exact match between the prediction and the reference.

Examples:

```python
evaluator = ExactMatchStringEvaluator()
evaluator.evaluate_strings(
    prediction="Mindy is the CTO",
    reference="Mindy is the CTO",
)
# {'score': 1.0}

evaluator.evaluate_strings(
    prediction="Mindy is the CTO",
    reference="Mindy is the CEO",
)
# {'score': 0.0}
```

JsonEqualityEvaluator

Evaluate whether the prediction is equal to the reference after parsing both as JSON.

This evaluator checks whether the prediction, parsed as JSON, is equal to the reference, also parsed as JSON. It does not require an input string.
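The comparison can be sketched in plain Python (json_equal is a hypothetical helper, not the library API); parsing both sides first is what makes key order and whitespace irrelevant:

```python
import json


def json_equal(prediction: str, reference: str) -> dict:
    """Structural JSON equality: parse both strings, then compare values."""
    try:
        pred = json.loads(prediction)
    except json.JSONDecodeError:
        return {"score": False}  # an unparseable prediction cannot match
    return {"score": pred == json.loads(reference)}


# Key order and whitespace do not matter once both sides are parsed.
print(json_equal('{"b": 2, "a": 1}', '{"a": 1, "b": 2}'))  # {'score': True}
```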

JsonValidityEvaluator

Evaluate whether the prediction is valid JSON. This evaluator checks if the prediction is a valid JSON string; it does not require any input or reference.
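Validity checking amounts to attempting a parse. A minimal sketch (json_valid is a hypothetical helper, not the library API):

```python
import json


def json_valid(prediction: str) -> dict:
    """Score 1 if the string parses as JSON, else 0 with the parser's error."""
    try:
        json.loads(prediction)
        return {"score": 1}
    except json.JSONDecodeError as err:
        return {"score": 0, "reasoning": str(err)}


print(json_valid('{"a": 1}'))            # {'score': 1}
print(json_valid('{"a": 1,}')["score"])  # 0 (trailing commas are not valid JSON)
```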

JsonEditDistanceEvaluator

An evaluator that calculates the edit distance between JSON strings.

This evaluator computes a normalized Damerau-Levenshtein distance between two JSON strings after parsing them and converting them to a canonical format (i.e., whitespace and key order are normalized). It can be customized with alternative distance and canonicalization functions.
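The idea can be sketched with the standard library alone: canonicalize both strings, then divide an edit distance by the longer canonical length. For brevity this sketch uses plain Levenshtein distance rather than Damerau-Levenshtein, and every helper name here is illustrative, not a library API:

```python
import json


def canonicalize(s: str) -> str:
    # Re-serialize with sorted keys and no whitespace so formatting
    # and key order do not inflate the distance.
    return json.dumps(json.loads(s), sort_keys=True, separators=(",", ":"))


def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]


def json_edit_distance(prediction: str, reference: str) -> float:
    p, r = canonicalize(prediction), canonicalize(reference)
    return levenshtein(p, r) / max(len(p), len(r))


# Same structure, different formatting and key order -> distance 0.0
print(json_edit_distance('{"a": 1, "b": 2}', '{ "b" : 2, "a" : 1 }'))  # 0.0
```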

JsonSchemaEvaluator

An evaluator that validates a JSON prediction against a JSON schema reference.

This evaluator checks if a given JSON prediction conforms to the provided JSON schema. If the prediction is valid, the score is True (no errors); otherwise, the score is False (an error occurred).
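Conceptually, the evaluator parses the prediction and asks whether it satisfies the reference schema. The real implementation relies on a full JSON Schema validator; the sketch below (conforms is a hypothetical helper) handles only a tiny subset of the spec (type, properties, required) to show the shape of the check:

```python
import json


def conforms(instance, schema: dict) -> bool:
    """Check a small subset of JSON Schema: 'type', 'properties', 'required'."""
    kind = schema.get("type")
    if kind == "object":
        if not isinstance(instance, dict):
            return False
        if any(key not in instance for key in schema.get("required", [])):
            return False
        props = schema.get("properties", {})
        return all(conforms(instance[k], s) for k, s in props.items() if k in instance)
    scalar = {"string": str, "integer": int, "number": (int, float), "boolean": bool}
    return isinstance(instance, scalar.get(kind, object))


schema = {"type": "object", "required": ["name"], "properties": {"name": {"type": "string"}}}
print(conforms(json.loads('{"name": "Mindy"}'), schema))  # True
print(conforms(json.loads('{"title": "CTO"}'), schema))   # False (required key missing)
```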

ContextQAEvalChain

LLM chain for evaluating question answering without ground truth, based on context.

CotQAEvalChain

LLM chain for evaluating question answering using chain-of-thought reasoning.

QAEvalChain

LLM chain for evaluating question answering.

RegexMatchStringEvaluator

Compute a regex match between the prediction and the reference.

Examples:

```python
import re

evaluator = RegexMatchStringEvaluator(flags=re.IGNORECASE)
evaluator.evaluate_strings(
    prediction="Mindy is the CTO",
    reference="^mindy.*cto$",
)
# {'score': 1.0} due to the IGNORECASE flag

evaluator = RegexMatchStringEvaluator()
evaluator.evaluate_strings(
    prediction="Mindy is the CTO",
    reference="^Mike.*CEO$",
)
# {'score': 0.0}

evaluator.evaluate_strings(
    prediction="Mindy is the CTO",
    reference="^Mike.*CEO$|^Mindy.*CTO$",
)
# {'score': 1.0}, as the prediction matches the second pattern in the union
```

EvaluatorType

Enumeration of the available evaluator types.

LLMEvalChain

A base class for evaluators that use an LLM.

StringEvaluator

String evaluator interface. Grade, tag, or otherwise evaluate predictions relative to their inputs and/or reference labels.

LabeledScoreStringEvalChain

A chain for scoring the output of a model on a scale of 1-10, with reference labels.

ScoreStringEvalChain

A chain for scoring the output of a model on a scale of 1-10.

PairwiseStringDistanceEvalChain

Compute string edit distances between two predictions.

StringDistanceEvalChain

Compute string distances between the prediction and the reference.

Examples:

```python
from langchain_classic.evaluation import StringDistanceEvalChain

evaluator = StringDistanceEvalChain()
evaluator.evaluate_strings(
    prediction="Mindy is the CTO",
    reference="Mindy is the CEO",
)
```

Using the load_evaluator function:

```python
from langchain_classic.evaluation import load_evaluator

evaluator = load_evaluator("string_distance")
evaluator.evaluate_strings(
    prediction="The answer is three",
    reference="three",
)
```

TrajectoryEvalChain

A chain for evaluating ReAct-style agents.

This chain evaluates ReAct-style agents by reasoning about the sequence of actions taken and their outcomes. Based on the paper "ReAct: Synergizing Reasoning and Acting in Language Models" (https://arxiv.org/abs/2210.03629).

Example:

```python
from langchain_classic.agents import AgentType, initialize_agent
from langchain_openai import ChatOpenAI
from langchain_classic.evaluation import TrajectoryEvalChain
from langchain_classic.tools import tool


@tool
def geography_answers(country: str, question: str) -> str:
    """Very helpful answers to geography questions."""
    return f"{country}? IDK - We may never know {question}."


model = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
agent = initialize_agent(
    tools=[geography_answers],
    llm=model,
    agent=AgentType.OPENAI_FUNCTIONS,
    return_intermediate_steps=True,
)

question = "How many dwell in the largest minor region in Argentina?"
response = agent(question)

eval_chain = TrajectoryEvalChain.from_llm(
    llm=model, agent_tools=[geography_answers], return_reasoning=True
)

result = eval_chain.evaluate_agent_trajectory(
    input=question,
    agent_trajectory=response["intermediate_steps"],
    prediction=response["output"],
    reference="Paris",
)
print(result["score"])  # noqa: T201
# 0
```