Runnable that manages chat message history for another Runnable.
A chat message history is a sequence of messages that represent a conversation.
RunnableWithMessageHistory wraps another Runnable and manages the chat message
history for it; it is responsible for reading and updating the chat message
history.
The formats supported for the inputs and outputs of the wrapped Runnable
are described below.
RunnableWithMessageHistory must always be called with a config that contains
the appropriate parameters for the chat message history factory.
By default, the Runnable is expected to take a single configuration parameter
called session_id which is a string. This parameter is used to create a new
or look up an existing chat message history that matches the given session_id.
In this case, the invocation would look like:
with_history.invoke(..., config={"configurable": {"session_id": "<SESSION_ID>"}})
The configuration can be customized by passing in a list of
ConfigurableFieldSpec objects to the history_factory_config parameter (see
example below).
The examples below use an in-memory implementation of chat message history, which makes it easy to experiment and inspect the results. For production use cases, you will want a persistent implementation of chat message history, such as RedisChatMessageHistory.
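For intuition, a persistent history is simply one whose messages survive process restarts. A minimal file-backed sketch in plain Python (a hypothetical illustration, not a LangChain API; plain strings stand in for message objects, and production code should prefer a battle-tested backend such as Redis):

```python
import json
import os
import tempfile
from pathlib import Path


class FileBackedHistory:
    """Hypothetical sketch of a persistent chat message history.

    Messages (JSON-serializable stand-ins for real message objects)
    survive process restarts because they live on disk.
    """

    def __init__(self, path: str):
        self.path = Path(path)

    @property
    def messages(self) -> list:
        # Read the stored conversation, or start empty.
        if self.path.exists():
            return json.loads(self.path.read_text())
        return []

    def add_messages(self, messages: list) -> None:
        # Append and write the whole conversation back to disk.
        self.path.write_text(json.dumps(self.messages + list(messages)))

    def clear(self) -> None:
        self.path.unlink(missing_ok=True)


path = os.path.join(tempfile.mkdtemp(), "session-1.json")
FileBackedHistory(path).add_messages(["hello"])
# A brand-new instance still sees the stored message: that is what makes
# the history persistent, unlike the in-memory store used below.
reloaded = FileBackedHistory(path).messages
```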
Example: chat message history with an in-memory implementation, for testing.

```python
from pydantic import BaseModel, Field

from langchain_core.chat_history import BaseChatMessageHistory
from langchain_core.messages import AIMessage, BaseMessage
from langchain_core.runnables import ConfigurableFieldSpec  # used in a later example


class InMemoryHistory(BaseChatMessageHistory, BaseModel):
    """In-memory implementation of chat message history."""

    messages: list[BaseMessage] = Field(default_factory=list)

    def add_messages(self, messages: list[BaseMessage]) -> None:
        """Add a list of messages to the store."""
        self.messages.extend(messages)

    def clear(self) -> None:
        """Remove all messages from the store."""
        self.messages = []


# Here we use a global variable to store the chat message history.
# This makes it easy to inspect and see the underlying results.
store = {}


def get_by_session_id(session_id: str) -> BaseChatMessageHistory:
    if session_id not in store:
        store[session_id] = InMemoryHistory()
    return store[session_id]


history = get_by_session_id("1")
history.add_message(AIMessage(content="hello"))
print(store)  # noqa: T201
```
Example where the wrapped Runnable takes a dictionary input:

```python
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You're an assistant who's good at {ability}"),
        MessagesPlaceholder(variable_name="history"),
        ("human", "{question}"),
    ]
)

chain = prompt | ChatAnthropic(model="claude-2")

chain_with_history = RunnableWithMessageHistory(
    chain,
    # Uses the get_by_session_id function defined in the example above.
    get_by_session_id,
    input_messages_key="question",
    history_messages_key="history",
)

print(  # noqa: T201
    chain_with_history.invoke(
        {"ability": "math", "question": "What does cosine mean?"},
        config={"configurable": {"session_id": "foo"}},
    )
)

# Uses the store defined in the example above.
print(store)  # noqa: T201

print(  # noqa: T201
    chain_with_history.invoke(
        {"ability": "math", "question": "What's its inverse?"},
        config={"configurable": {"session_id": "foo"}},
    )
)

print(store)  # noqa: T201
```
Example where the session factory takes two keys (user_id and conversation_id):

```python
store = {}


def get_session_history(
    user_id: str, conversation_id: str
) -> BaseChatMessageHistory:
    if (user_id, conversation_id) not in store:
        store[(user_id, conversation_id)] = InMemoryHistory()
    return store[(user_id, conversation_id)]


prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You're an assistant who's good at {ability}"),
        MessagesPlaceholder(variable_name="history"),
        ("human", "{question}"),
    ]
)

chain = prompt | ChatAnthropic(model="claude-2")

with_message_history = RunnableWithMessageHistory(
    chain,
    get_session_history=get_session_history,
    input_messages_key="question",
    history_messages_key="history",
    history_factory_config=[
        ConfigurableFieldSpec(
            id="user_id",
            annotation=str,
            name="User ID",
            description="Unique identifier for the user.",
            default="",
            is_shared=True,
        ),
        ConfigurableFieldSpec(
            id="conversation_id",
            annotation=str,
            name="Conversation ID",
            description="Unique identifier for the conversation.",
            default="",
            is_shared=True,
        ),
    ],
)

with_message_history.invoke(
    {"ability": "math", "question": "What does cosine mean?"},
    config={"configurable": {"user_id": "123", "conversation_id": "1"}},
)
```

Constructor:

```python
RunnableWithMessageHistory(
    runnable: Runnable[list[BaseMessage], str | BaseMessage | MessagesOrDictWithMessages]
        | Runnable[dict[str, Any], str | BaseMessage | MessagesOrDictWithMessages]
        | LanguageModelLike,
    get_session_history: GetSessionHistoryCallable,
    *,
    input_messages_key: str | None = None,
    output_messages_key: str | None = None,
    history_messages_key: str | None = None,
    history_factory_config: Sequence[ConfigurableFieldSpec] | None = None,
    **kwargs: Any,
)
```

| Name | Type | Description |
|---|---|---|
| runnable* | Runnable[list[BaseMessage], str \| BaseMessage \| MessagesOrDictWithMessages] \| Runnable[dict[str, Any], str \| BaseMessage \| MessagesOrDictWithMessages] \| LanguageModelLike | The base Runnable to be wrapped. Must take as input one of: a sequence of BaseMessage objects; a dict with one key for all messages; or a dict with one key for the current input string or messages and a separate key for historical messages. Must return as output one of: a string that can be treated as an AIMessage; a BaseMessage or a sequence of BaseMessage objects; or a dict with a key containing a BaseMessage or a sequence of BaseMessage objects. |
| get_session_history* | GetSessionHistoryCallable | Function that returns a new BaseChatMessageHistory. This function should either take a single positional argument session_id of type string, or take keyword arguments that match the keys of history_factory_config, and return a corresponding chat message history instance. |
| input_messages_key | str \| None | Default: None. The key in the input dict that contains the messages. Must be specified if the base Runnable accepts a dict as input. |
| output_messages_key | str \| None | Default: None. The key in the output dict that contains the messages. Must be specified if the base Runnable returns a dict as output. |
| history_messages_key | str \| None | Default: None. Must be specified if the base Runnable accepts a dict as input and expects a separate key for historical messages. |
| history_factory_config | Sequence[ConfigurableFieldSpec] \| None | Default: None. Configure fields that should be passed to the chat history factory. See ConfigurableFieldSpec for more details. Specifying these allows you to pass multiple config keys into the get_session_history factory. |
| **kwargs | Any | Default: {}. Arbitrary additional kwargs to pass to the parent class. |
| Name | Type |
|---|---|
| runnable | Runnable[list[BaseMessage], str \| BaseMessage \| MessagesOrDictWithMessages] \| Runnable[dict[str, Any], str \| BaseMessage \| MessagesOrDictWithMessages] \| LanguageModelLike |
| get_session_history | GetSessionHistoryCallable |
| input_messages_key | str \| None |
| output_messages_key | str \| None |
| history_messages_key | str \| None |
| history_factory_config | Sequence[ConfigurableFieldSpec] \| None |
get_session_history: Function that returns a new BaseChatMessageHistory. This function should either take a single positional argument session_id of type string, or keyword arguments that match the keys of history_factory_config, and return a corresponding chat message history instance.
input_messages_key: The key in the input dict that contains the messages. Must be specified if the base Runnable accepts a dict as input.
output_messages_key: The key in the output dict that contains the messages. Must be specified if the base Runnable returns a dict as output.
history_messages_key: Must be specified if the base Runnable accepts a dict as input and expects a separate key for historical messages.
history_factory_config: Configure fields that should be passed to the chat history factory. See ConfigurableFieldSpec for more details.
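Conceptually, these three keys amount to dictionary routing around the wrapped Runnable. A plain-Python sketch of that routing, using hypothetical helper names (an illustration of the idea, not the library's actual internals):

```python
def pluck_input_messages(value, input_messages_key=None):
    # Dict input: input_messages_key names where the current messages live.
    # Otherwise the input is assumed to already be a list of messages.
    if isinstance(value, dict):
        return value[input_messages_key]
    return list(value)


def pluck_output_messages(value, output_messages_key=None):
    # Dict output: output_messages_key names where the new messages live.
    if isinstance(value, dict):
        return value[output_messages_key]
    return value


def inject_history(value, history, history_messages_key=None):
    # With a history key, past messages are passed under their own key;
    # without one, they are simply prepended to the current messages.
    if history_messages_key is not None:
        return {**value, history_messages_key: list(history)}
    return list(history) + list(value)
```

For instance, `inject_history({"question": "hi"}, past, "history")` adds the past messages under a separate "history" key, matching how the examples above pair `input_messages_key="question"` with `history_messages_key="history"`.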
Get the configuration specs for the RunnableWithMessageHistory.
Get a Pydantic model that can be used to validate output to the Runnable.
Runnable objects that leverage the configurable_fields and
configurable_alternatives methods will have a dynamic output schema that
depends on which configuration the Runnable is invoked with.
This method lets you get an output schema for a specific configuration.
Keyword arguments to pass to the underlying Runnable when running.
The configuration to use.
The config factories to bind to the underlying Runnable.
Override the input type of the underlying Runnable with a custom type.
Override the output type of the underlying Runnable with a custom type.
Return True as this class is serializable.
Get the namespace of the LangChain object.
Invoke the Runnable on a single input.
Asynchronously invoke the Runnable on a single input.
Run invoke in parallel on a list of inputs.
Run ainvoke in parallel on a list of inputs.
Generate a stream of events.
Return a unique identifier for this class for serialization purposes.
Convert the graph to a JSON-serializable format.
Serialize a "not implemented" object.
Get a JSON schema that represents the input to the Runnable.
Get a JSON schema that represents the output of the Runnable.
The type of config this Runnable accepts specified as a Pydantic model.
Get a JSON schema that represents the config of the Runnable.
Return a list of prompts used by this Runnable.
Pipe Runnable objects.
Pick keys from the output dict of this Runnable.
Merge the Dict input with the output produced by the mapping argument.
Stream all output from a Runnable, as reported to the callback system.
Bind arguments to a Runnable, returning a new Runnable.
Bind lifecycle listeners to a Runnable, returning a new Runnable.
Bind async lifecycle listeners to a Runnable.
Bind input and output types to a Runnable, returning a new Runnable.
Create a new Runnable that retries the original Runnable on exceptions.
Map a function to multiple iterables.
Add fallbacks to a Runnable, returning a new Runnable.
Create a BaseTool from a Runnable.