QAGenerationChain

Raise deprecation warning if callback_manager is used.
Set the chain verbosity.
Asynchronously execute the chain.
Validate and prepare chain outputs, and save info about this run to memory.
Prepare chain inputs, including adding inputs from memory.
Convenience method for executing chain.
Return dictionary representation of the chain.
Save the chain.
Utilize the LLM generate method for speed gains.
Base class for question-answer generation chains.
This class is deprecated. See below for an alternative implementation composed from runnable primitives.
```python
from langchain_classic.chains.qa_generation.prompt import (
    CHAT_PROMPT as prompt,
)

# Note: import PROMPT if using a legacy non-chat model.
from langchain_core.output_parsers import JsonOutputParser
from langchain_core.runnables import (
    RunnableLambda,
    RunnableParallel,
    RunnablePassthrough,
)
from langchain_core.runnables.base import RunnableEach
from langchain_openai import ChatOpenAI
from langchain_text_splitters import RecursiveCharacterTextSplitter

model = ChatOpenAI()
text_splitter = RecursiveCharacterTextSplitter(chunk_overlap=500)
split_text = RunnableLambda(lambda x: text_splitter.create_documents([x]))

chain = RunnableParallel(
    text=RunnablePassthrough(),
    questions=(
        split_text | RunnableEach(bound=prompt | model | JsonOutputParser())
    ),
)
```
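To see the shape of what this composition computes without an API key, here is a plain-Python sketch of the same data flow: `RunnableParallel` fans the input out to two branches, `RunnablePassthrough` echoes the raw text, and `RunnableEach` maps the `prompt | model | JsonOutputParser()` sub-chain over every chunk the splitter produces. `split_text` and `fake_qa_model` below are hypothetical stand-ins, not LangChain APIs.

```python
def split_text(text, chunk_size=20):
    # Stand-in for RecursiveCharacterTextSplitter.create_documents:
    # slice the input into fixed-size chunks.
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]


def fake_qa_model(chunk):
    # Stand-in for prompt | model | JsonOutputParser(), which would return
    # a parsed question/answer dict for the chunk.
    return {"question": f"What does '{chunk}' describe?", "answer": chunk}


def chain(text):
    # Mirrors RunnableParallel(text=RunnablePassthrough(), questions=...):
    # one branch passes the text through, the other maps the QA sub-chain
    # over each chunk (the RunnableEach step).
    return {
        "text": text,
        "questions": [fake_qa_model(chunk) for chunk in split_text(text)],
    }
```

Invoking the real chain with `chain.invoke(some_text)` would return the same structure, with each `questions` entry produced by the model rather than a stub.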