PairwiseStringEvalChain

Methods:
generate: Generate LLM result from inputs.
agenerate: Generate LLM result from inputs.
prep_prompts: Prepare prompts from inputs.
aprep_prompts: Prepare prompts from inputs.
apply: Utilize the LLM generate method for speed gains.
aapply: Utilize the LLM generate method for speed gains.
create_outputs: Create outputs from response.
predict: Format prompt with kwargs and pass to LLM.
apredict: Format prompt with kwargs and pass to LLM.
predict_and_parse: Call predict and then parse the results.
apredict_and_parse: Call apredict and then parse the results.
apply_and_parse: Call apply and then parse the results.
aapply_and_parse: Call apply and then parse the results.
from_string: Create LLMChain from LLM and template.
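These methods are inherited from the underlying LLMChain rather than being specific to pairwise comparison; in normal use, evaluate_string_pairs (shown in the example below) is the public entry point. The following is a minimal, illustrative sketch of the inherited predict and apply methods on a plain LLMChain; the prompt, the topics, and the langchain_classic.chains import path are assumptions for illustration, not part of this class's API.

# Illustrative sketch only: demonstrates the inherited LLMChain methods.
# Import path for the legacy LLMChain is an assumption (langchain_classic).
from langchain_classic.chains import LLMChain
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(temperature=0)
prompt = PromptTemplate.from_template("State one fact about {topic}.")
chain = LLMChain(llm=llm, prompt=prompt)

# predict: format the prompt with keyword arguments, pass it to the LLM,
# and return the completion text.
fact = chain.predict(topic="water")

# apply: run a batch of input dicts through a single LLM generate call
# for speed gains; each result is keyed by the chain's output key ("text").
facts = chain.apply([{"topic": "water"}, {"topic": "salt"}])
print(fact)
print(facts[0]["text"])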
Pairwise String Evaluation Chain.
A chain for comparing two outputs, such as the outputs of two models, the outputs produced by two prompts, or the outputs of a single model on similar inputs.
Example:
>>> from langchain_openai import ChatOpenAI
>>> from langchain_classic.evaluation.comparison import PairwiseStringEvalChain
>>> model = ChatOpenAI(temperature=0, model="gpt-4", seed=42)
>>> chain = PairwiseStringEvalChain.from_llm(llm=model)
>>> result = chain.evaluate_string_pairs(
...     input="What is the chemical formula for water?",
...     prediction="H2O",
...     prediction_b=(
...         "The chemical formula for water is H2O, which means"
...         " there are two hydrogen atoms and one oxygen atom."
...     ),
...     reference="The chemical formula for water is H2O.",
... )
>>> print(result)
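The result is a dictionary; typically it contains a "reasoning" string with the model's explanation, a "value" of "A" or "B" naming the preferred prediction (or no value for a tie), and a numeric "score" (1 when prediction is preferred, 0 when prediction_b is preferred, 0.5 for a tie).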
verbose: Set the chain verbosity.
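As a minimal sketch (assuming from_llm forwards extra keyword arguments such as verbose to the chain constructor, and reusing the model from the example above):

# Assumption: verbose is passed through from_llm to the underlying chain.
chain = PairwiseStringEvalChain.from_llm(llm=model, verbose=True)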