LLM Chains for evaluating question answering (the langchain.evaluation.qa module).
LLMEvalChain: A base class for evaluators that use an LLM.
StringEvaluator: String evaluator interface. Grade, tag, or otherwise evaluate predictions relative to their inputs and/or reference labels.
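As an illustration of the interface, here is a minimal sketch of a custom string evaluator. The ExactMatchEvaluator class and its scoring rule are hypothetical, not part of LangChain; only the StringEvaluator base class and its _evaluate_strings hook come from the library.

from typing import Any, Optional

from langchain.evaluation import StringEvaluator


# Hypothetical evaluator, shown only to illustrate the StringEvaluator interface.
class ExactMatchEvaluator(StringEvaluator):
    """Score a prediction 1.0 if it exactly matches the reference, else 0.0."""

    @property
    def requires_reference(self) -> bool:
        # This evaluator cannot grade without a reference label.
        return True

    @property
    def evaluation_name(self) -> str:
        return "exact_match"

    def _evaluate_strings(
        self,
        *,
        prediction: str,
        reference: Optional[str] = None,
        input: Optional[str] = None,
        **kwargs: Any,
    ) -> dict:
        # Compare normalized strings; return the standard {"score": ...} payload.
        match = prediction.strip() == (reference or "").strip()
        return {"score": 1.0 if match else 0.0}


evaluator = ExactMatchEvaluator()
evaluator.evaluate_strings(prediction="Paris", reference="Paris")  # {"score": 1.0}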
QAEvalChain: LLM Chain for evaluating question answering against reference answers (usage sketch below).
ContextQAEvalChain: LLM Chain for evaluating QA without ground-truth answers, grading predictions against a reference context instead.
CotQAEvalChain: LLM Chain for evaluating QA using chain-of-thought reasoning.
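A minimal usage sketch for QAEvalChain, assuming an OpenAI completion model and illustrative example data; the query/answer/result key names below are the chain's defaults.

from langchain.evaluation.qa import QAEvalChain
from langchain_openai import OpenAI

llm = OpenAI(temperature=0)
eval_chain = QAEvalChain.from_llm(llm)

# Reference examples and the model outputs to grade (illustrative data).
examples = [{"query": "What is the capital of France?", "answer": "Paris"}]
predictions = [{"result": "The capital of France is Paris."}]

graded = eval_chain.evaluate(
    examples,
    predictions,
    question_key="query",
    answer_key="answer",
    prediction_key="result",
)
print(graded)  # e.g. [{'results': 'CORRECT'}]

ContextQAEvalChain and CotQAEvalChain follow the same from_llm/evaluate pattern, except that their evaluate method takes a context_key (default "context") in place of answer_key, since they grade against a supplied context rather than a gold answer.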
LLMChain: Chain to run queries against LLMs.
LLMChain is deprecated. See below for an equivalent implementation using LangChain runnables:
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI

prompt_template = "Tell me a {adjective} joke"
prompt = PromptTemplate(input_variables=["adjective"], template=prompt_template)
model = OpenAI()

# Pipe the prompt into the model, then parse the completion to a plain string.
chain = prompt | model | StrOutputParser()

# Invoke with a dict mapping each prompt variable to its value.
chain.invoke({"adjective": "funny"})
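Because the composed chain is a standard Runnable, it also supports batching and token streaming with no extra code; for example:

# Run several inputs in one call.
chain.batch([{"adjective": "funny"}, {"adjective": "silly"}])

# Stream the completion chunk by chunk.
for chunk in chain.stream({"adjective": "funny"}):
    print(chunk, end="")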