ConversationChain

Chain to have a conversation and load context from memory.
This class is deprecated in favor of RunnableWithMessageHistory. Please refer
to this tutorial for more detail: https://python.langchain.com/docs/tutorials/chatbot/
RunnableWithMessageHistory offers several benefits, including:

- Stream, batch, and async support;
- More flexible memory handling, including the ability to manage memory outside the chain;
- Support for multiple threads.
Below is a minimal implementation, analogous to using ConversationChain with
the default ConversationBufferMemory:
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI

store = {}  # memory is maintained outside the chain

def get_session_history(session_id: str) -> InMemoryChatMessageHistory:
    if session_id not in store:
        store[session_id] = InMemoryChatMessageHistory()
    return store[session_id]

model = ChatOpenAI(model="gpt-3.5-turbo-0125")

chain = RunnableWithMessageHistory(model, get_session_history)
chain.invoke(
    "Hi I'm Bob.",
    config={"configurable": {"session_id": "1"}},
)  # session_id determines thread
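Reusing the same session_id continues the thread, while a different session_id starts an empty one. A minimal continuation of the example above (the follow-up question and the second session ID are illustrative additions):

chain.invoke(
    "What is my name?",
    config={"configurable": {"session_id": "1"}},
)  # same thread, so the model sees the earlier "Hi I'm Bob." turn

chain.invoke(
    "What is my name?",
    config={"configurable": {"session_id": "2"}},
)  # new thread, no prior context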
Memory objects can also be incorporated into the get_session_history callable:
from langchain_classic.memory import ConversationBufferWindowMemory
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI

store = {}  # memory is maintained outside the chain

def get_session_history(session_id: str) -> InMemoryChatMessageHistory:
    if session_id not in store:
        store[session_id] = InMemoryChatMessageHistory()
        return store[session_id]
    # For an existing session, replay only the last k exchanges through
    # a windowed memory before handing the history back to the chain.
    memory = ConversationBufferWindowMemory(
        chat_memory=store[session_id],
        k=3,
        return_messages=True,
    )
    assert len(memory.memory_variables) == 1
    key = memory.memory_variables[0]
    messages = memory.load_memory_variables({})[key]
    store[session_id] = InMemoryChatMessageHistory(messages=messages)
    return store[session_id]

model = ChatOpenAI(model="gpt-3.5-turbo-0125")

chain = RunnableWithMessageHistory(model, get_session_history)
chain.invoke(
    "Hi I'm Bob.",
    config={"configurable": {"session_id": "1"}},
)  # session_id determines thread
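Because get_session_history rebuilds the stored history from a windowed view, the model only ever replays the last k=3 exchanges. A rough way to observe the trimming (the loop and print below are illustrative additions, not part of the original example):

for i in range(5):
    chain.invoke(
        f"This is message {i}.",
        config={"configurable": {"session_id": "1"}},
    )

# At most the last k*2 = 6 windowed messages survive each lookup,
# plus the newest human/AI pair appended after the call.
print(len(store["1"].messages))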
Example:

from langchain_classic.chains import ConversationChain
from langchain_openai import OpenAI

conversation = ConversationChain(llm=OpenAI())
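A minimal usage sketch, assuming the conversation object above; predict is inherited from LLMChain, and the replies depend on the model:

conversation.predict(input="Hi there! I'm Bob.")
conversation.predict(input="What's my name?")  # history is loaded from memory into the prompt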
Methods and attributes:

- apply / aapply: Utilize the LLM generate method for speed gains.
- callback_manager: [DEPRECATED] Use callbacks instead.
- raise_callback_manager_deprecation: Raise deprecation warning if callback_manager is used.
- set_verbose: Set the chain verbosity.
- acall: Asynchronously execute the chain.
- prep_outputs / aprep_outputs: Validate and prepare chain outputs, and save info about this run to memory.
- prep_inputs / aprep_inputs: Prepare chain inputs, including adding inputs from memory.
- run / arun: Convenience method for executing the chain.
- dict: Return dictionary representation of the chain.
- save: Save the chain.
- memory: Default memory store.
- prompt: Default conversation prompt to use.
- input_keys: Use this, since some prompt variables come from history.
- validate_prompt_input_variables: Validate that prompt input variables are consistent.
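Concretely, the validator requires the prompt's input variables to be exactly the memory key plus the chain's input key ("history" and "input" with the defaults). A hedged sketch of a custom prompt that passes this check (the template wording is illustrative):

from langchain_classic.chains import ConversationChain
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI

template = """You are a terse assistant.

Current conversation:
{history}
Human: {input}
AI:"""

prompt = PromptTemplate(input_variables=["history", "input"], template=template)

# "history" matches the default memory_key and "input" matches the chain's
# input_key, so validate_prompt_input_variables accepts this prompt.
conversation = ConversationChain(llm=OpenAI(), prompt=prompt)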