    langchain_core.runnables.history.RunnableWithMessageHistory
    Class · Since v0.1

    RunnableWithMessageHistory

    Runnable that manages chat message history for another Runnable.

    A chat message history is a sequence of messages that represent a conversation.

    RunnableWithMessageHistory wraps another Runnable and manages the chat message history for it; it is responsible for reading and updating the chat message history.

    The formats supported for the inputs and outputs of the wrapped Runnable are described below.

    RunnableWithMessageHistory must always be called with a config that contains the appropriate parameters for the chat message history factory.

    By default, the Runnable is expected to take a single configuration parameter called session_id which is a string. This parameter is used to create a new or look up an existing chat message history that matches the given session_id.

    In this case, the invocation would look like this:

    with_history.invoke(..., config={"configurable": {"session_id": "<SESSION_ID>"}}), e.g. with_history.invoke(..., config={"configurable": {"session_id": "bar"}}).

    The configuration can be customized by passing in a list of ConfigurableFieldSpec objects to the history_factory_config parameter (see example below).

    In the examples, we will use a chat message history with an in-memory implementation to make it easy to experiment and see the results.

    For production use cases, you will want to use a persistent implementation of chat message history, such as RedisChatMessageHistory.
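
    For instance, the session factory can return a persistent history instead of an in-memory one. A minimal sketch, assuming the langchain_community package and a local Redis server (both assumptions, not part of this reference):

    from langchain_community.chat_message_histories import RedisChatMessageHistory

    def get_redis_history(session_id: str) -> RedisChatMessageHistory:
        # Each session_id maps to its own Redis-backed message list.
        return RedisChatMessageHistory(session_id, url="redis://localhost:6379/0")

    # Pass get_redis_history to RunnableWithMessageHistory exactly like the
    # in-memory factory used in the examples below.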

    Example: Chat message history with an in-memory implementation for testing.

    from langchain_core.chat_history import BaseChatMessageHistory
    from langchain_core.messages import AIMessage, BaseMessage
    from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
    from langchain_core.runnables import ConfigurableFieldSpec
    from langchain_core.runnables.history import RunnableWithMessageHistory
    from pydantic import BaseModel, Field
    
    class InMemoryHistory(BaseChatMessageHistory, BaseModel):
        """In memory implementation of chat message history."""
    
        messages: list[BaseMessage] = Field(default_factory=list)
    
        def add_messages(self, messages: list[BaseMessage]) -> None:
            """Add a list of messages to the store"""
            self.messages.extend(messages)
    
        def clear(self) -> None:
            self.messages = []
    
    # Here we use a global variable to store the chat message history.
    # This will make it easier to inspect it to see the underlying results.
    store = {}
    
    def get_by_session_id(session_id: str) -> BaseChatMessageHistory:
        if session_id not in store:
            store[session_id] = InMemoryHistory()
        return store[session_id]
    
    history = get_by_session_id("1")
    history.add_message(AIMessage(content="hello"))
    print(store)  # noqa: T201
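
    The wrapped Runnable does not have to take a dict: if it accepts a list of messages directly, no input_messages_key is needed. A minimal sketch reusing the get_by_session_id factory above (echo_last is a hypothetical stand-in for a chat model, purely for illustration):

    from langchain_core.messages import AIMessage, BaseMessage, HumanMessage
    from langchain_core.runnables import RunnableLambda
    from langchain_core.runnables.history import RunnableWithMessageHistory

    def echo_last(messages: list[BaseMessage]) -> AIMessage:
        # Stand-in for a chat model: reply based on the latest message only.
        return AIMessage(content=f"You said: {messages[-1].content}")

    with_history = RunnableWithMessageHistory(RunnableLambda(echo_last), get_by_session_id)

    # The history for session "2" now holds the HumanMessage and the AIMessage reply.
    with_history.invoke(
        [HumanMessage(content="hi")],
        config={"configurable": {"session_id": "2"}},
    )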
    

    Example where the wrapped Runnable takes a dictionary input:

    from langchain_anthropic import ChatAnthropic
    from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
    from langchain_core.runnables.history import RunnableWithMessageHistory
    
    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", "You're an assistant who's good at {ability}"),
            MessagesPlaceholder(variable_name="history"),
            ("human", "{question}"),
        ]
    )
    
    chain = prompt | ChatAnthropic(model="claude-2")
    
    chain_with_history = RunnableWithMessageHistory(
        chain,
        # Uses the get_by_session_id function defined in the example
        # above.
        get_by_session_id,
        input_messages_key="question",
        history_messages_key="history",
    )
    
    print(
        chain_with_history.invoke(  # noqa: T201
            {"ability": "math", "question": "What does cosine mean?"},
            config={"configurable": {"session_id": "foo"}},
        )
    )
    
    # Uses the store defined in the example above.
    print(store)  # noqa: T201
    
    print(
        chain_with_history.invoke(  # noqa: T201
            {"ability": "math", "question": "What's its inverse"},
            config={"configurable": {"session_id": "foo"}},
        )
    )
    
    print(store)  # noqa: T201

    Example where the session factory takes two keys (user_id and conversation_id):

    store = {}
    
    def get_session_history(
        user_id: str, conversation_id: str
    ) -> BaseChatMessageHistory:
        if (user_id, conversation_id) not in store:
            store[(user_id, conversation_id)] = InMemoryHistory()
        return store[(user_id, conversation_id)]
    
    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", "You're an assistant who's good at {ability}"),
            MessagesPlaceholder(variable_name="history"),
            ("human", "{question}"),
        ]
    )
    
    chain = prompt | ChatAnthropic(model="claude-2")
    
    with_message_history = RunnableWithMessageHistory(
        chain,
        get_session_history=get_session_history,
        input_messages_key="question",
        history_messages_key="history",
        history_factory_config=[
            ConfigurableFieldSpec(
                id="user_id",
                annotation=str,
                name="User ID",
                description="Unique identifier for the user.",
                default="",
                is_shared=True,
            ),
            ConfigurableFieldSpec(
                id="conversation_id",
                annotation=str,
                name="Conversation ID",
                description="Unique identifier for the conversation.",
                default="",
                is_shared=True,
            ),
        ],
    )
    
    with_message_history.invoke(
        {"ability": "math", "question": "What does cosine mean?"},
        config={"configurable": {"user_id": "123", "conversation_id": "1"}},
    )
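
    The wrapper supports the other standard Runnable entry points as well. A brief sketch of streaming the same chain (the chat history is written once the stream has been fully consumed):

    for chunk in with_message_history.stream(
        {"ability": "math", "question": "What does sine mean?"},
        config={"configurable": {"user_id": "123", "conversation_id": "1"}},
    ):
        # Each chunk is a message chunk from the underlying chat model.
        print(chunk.content, end="", flush=True)  # noqa: T201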
    RunnableWithMessageHistory(
      self,
      runnable: Runnable[list[BaseMessage], str | BaseMessage | MessagesOrDictWithMessages] | Runnable[dict[str, Any], str | BaseMessage | MessagesOrDictWithMessages] | LanguageModelLike,
      get_session_history: GetSessionHistoryCallable,
      *,
      input_messages_key: str | None = None,
      output_messages_key: str | None = None,
      history_messages_key: str | None = None,
      history_factory_config: Sequence[ConfigurableFieldSpec] | None = None,
      **kwargs: Any = {}
    )

    Bases

    RunnableBindingBase

    Used in Docs

    • Amazon Neptune with Cypher integration
    • ChatNVIDIA integration

    Parameters

    runnable* : Runnable[list[BaseMessage], str | BaseMessage | MessagesOrDictWithMessages] | Runnable[dict[str, Any], str | BaseMessage | MessagesOrDictWithMessages] | LanguageModelLike

    The base Runnable to be wrapped.

    Must take as input one of:

    1. A list of BaseMessage
    2. A dict with one key for all messages
    3. A dict with one key for the current input string/message(s) and a separate key for historical messages. If the input key points to a string, it will be treated as a HumanMessage in history.

    Must return as output one of:

    1. A string which can be treated as an AIMessage
    2. A BaseMessage or sequence of BaseMessage
    3. A dict with a key for a BaseMessage or sequence of BaseMessage
    get_session_history* : GetSessionHistoryCallable

    Function that returns a new BaseChatMessageHistory.

    This function should either take a single positional argument session_id of type string and return a corresponding chat message history instance.

    def get_session_history(
        session_id: str, *, user_id: str | None = None
    ) -> BaseChatMessageHistory: ...

    Or it should take keyword arguments that match the field ids configured via history_factory_config and return a corresponding chat message history instance.

    def get_session_history(
        *,
        user_id: str,
        thread_id: str,
    ) -> BaseChatMessageHistory: ...
    input_messages_key : str | None (default: None)

    Must be specified if the base runnable accepts a dict as input.

    output_messages_key : str | None (default: None)

    Must be specified if the base runnable returns a dict as output.

    history_messages_key : str | None (default: None)

    Must be specified if the base runnable accepts a dict as input and expects a separate key for historical messages.

    history_factory_config : Sequence[ConfigurableFieldSpec] | None (default: None)

    Configure fields that should be passed to the chat history factory. See ConfigurableFieldSpec for more details.

    Specifying these allows you to pass multiple config keys into the get_session_history factory.

    **kwargs : Any (default: {})

    Arbitrary additional kwargs to pass to parent class RunnableBindingBase init.
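
    The three *_messages_key parameters work together when the wrapped Runnable consumes and produces dicts. A minimal sketch with a hypothetical respond step that returns {"answer": AIMessage(...)}, reusing the get_by_session_id factory from the first example:

    from langchain_core.messages import AIMessage, BaseMessage
    from langchain_core.runnables import RunnableLambda
    from langchain_core.runnables.history import RunnableWithMessageHistory

    def respond(inputs: dict) -> dict:
        # Hypothetical chain step: reads prior messages, returns a dict output.
        history: list[BaseMessage] = inputs["history"]
        return {"answer": AIMessage(content=f"{len(history)} earlier messages")}

    wrapped = RunnableWithMessageHistory(
        RunnableLambda(respond),
        get_by_session_id,
        input_messages_key="question",    # current user input lives here
        history_messages_key="history",   # prior messages are injected here
        output_messages_key="answer",     # the new AIMessage is read from here
    )

    wrapped.invoke(
        {"question": "hello"},
        config={"configurable": {"session_id": "3"}},
    )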

    Constructors

    constructor __init__

    runnable : Runnable[list[BaseMessage], str | BaseMessage | MessagesOrDictWithMessages] | Runnable[dict[str, Any], str | BaseMessage | MessagesOrDictWithMessages] | LanguageModelLike
    get_session_history : GetSessionHistoryCallable
    input_messages_key : str | None
    output_messages_key : str | None
    history_messages_key : str | None
    history_factory_config : Sequence[ConfigurableFieldSpec] | None

    Attributes

    attribute
    get_session_history: GetSessionHistoryCallable

    Function that returns a new BaseChatMessageHistory.

    This function should either take a single positional argument session_id of type string, or keyword arguments matching the field ids configured via history_factory_config, and return a corresponding chat message history instance.

    attribute
    input_messages_key: str | None

    Must be specified if the base Runnable accepts a dict as input. The key in the input dict that contains the messages.

    attribute
    output_messages_key: str | None

    Must be specified if the base Runnable returns a dict as output. The key in the output dict that contains the messages.

    attribute
    history_messages_key: str | None

    Must be specified if the base Runnable accepts a dict as input and expects a separate key for historical messages.

    attribute
    history_factory_config: Sequence[ConfigurableFieldSpec]

    Configure fields that should be passed to the chat history factory.

    See ConfigurableFieldSpec for more details.

    attribute
    config_specs: list[ConfigurableFieldSpec]

    Get the configuration specs for the RunnableWithMessageHistory.

    attribute
    OutputType: type[Output]

    Methods

    method
    get_input_schema
    method
    get_output_schema

    Get a Pydantic model that can be used to validate the output of the Runnable.

    Runnable objects that leverage the configurable_fields and configurable_alternatives methods will have a dynamic output schema that depends on which configuration the Runnable is invoked with.

    This method allows you to get an output schema for a specific configuration.
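
    A brief sketch of inspecting the schema, reusing chain_with_history from the dict-input example above (model_json_schema is standard Pydantic v2):

    # Inspect the (possibly configuration-dependent) output schema as a Pydantic model.
    schema = chain_with_history.get_output_schema()
    print(schema.model_json_schema())  # noqa: T201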

    Inherited from RunnableBindingBase

    Attributes

    bound: Runnable[Input, Output]

    kwargs: Mapping[str, Any]
        kwargs to pass to the underlying Runnable when running.

    config: RunnableConfig | None
        The configuration to use.

    config_factories: list[Callable[[RunnableConfig], RunnableConfig]]
        The config factories to bind to the underlying Runnable.

    custom_input_type: Any | None
        Override the input type of the underlying Runnable with a custom type.

    custom_output_type: Any | None
        Override the output type of the underlying Runnable with a custom type.

    model_config

    InputType: Any

    Methods

    get_name

    get_graph

    is_lc_serializable
        Return True as this class is serializable.

    get_lc_namespace
        Get the namespace of the LangChain object.

    invoke
        Invoke this Runnable on a single input.

    ainvoke
        Asynchronously invoke this Runnable on a single input.

    batch

    abatch

    batch_as_completed
        Run invoke in parallel on a list of inputs.

    abatch_as_completed
        Run ainvoke in parallel on a list of inputs.

    stream

    astream

    astream_events
        Generate a stream of events.

    transform

    atransform

    Inherited from RunnableSerializable

    Attributes

    name: str
        The name of the Runnable.

    model_config

    Methods

    to_json
        Serialize the Runnable to JSON.

    configurable_fields

    configurable_alternatives
        Configure alternatives for Runnable objects that can be set at runtime.

    Inherited from Serializable

    Attributes

    lc_secrets: dict[str, str]
        A map of constructor argument names to secret ids.

    lc_attributes: dict
        List of attribute names that should be included in the serialized kwargs.

    model_config

    Methods

    is_lc_serializable
        Return True as this class is serializable.

    get_lc_namespace
        Get the namespace of the LangChain object.

    lc_id
        Return a unique identifier for this class for serialization purposes.

    to_json
        Serialize the object to JSON.

    to_json_not_implemented
        Serialize a "not implemented" object.

    Inherited from Runnable

    Attributes

    name: str
        The name of the Runnable.

    InputType: Any

    input_schema: type[BaseModel]
        The type of input this Runnable accepts specified as a Pydantic model.

    output_schema: type[BaseModel]
        Output schema.

    Methods

    get_name

    get_input_jsonschema
        Get a JSON schema that represents the input to the Runnable.

    get_output_jsonschema
        Get a JSON schema that represents the output of the Runnable.

    config_schema
        The type of config this Runnable accepts specified as a Pydantic model.

    get_config_jsonschema
        Get a JSON schema that represents the config of the Runnable.

    get_graph

    get_prompts
        Return a list of prompts used by this Runnable.

    pipe
        Pipe Runnable objects.

    pick
        Pick keys from the output dict of this Runnable.

    assign
        Merge the Dict input with the output produced by the mapping argument.

    invoke
        Invoke this Runnable on a single input.

    ainvoke
        Asynchronously invoke this Runnable on a single input.

    batch

    batch_as_completed
        Run invoke in parallel on a list of inputs.

    abatch

    abatch_as_completed
        Run ainvoke in parallel on a list of inputs.

    stream

    astream

    astream_log
        Stream all output from a Runnable, as reported to the callback system.

    astream_events
        Generate a stream of events.

    transform

    atransform

    bind
        Bind arguments to a Runnable, returning a new Runnable.

    with_config

    with_listeners
        Bind lifecycle listeners to a Runnable, returning a new Runnable.

    with_alisteners
        Bind async lifecycle listeners to a Runnable.

    with_types
        Bind input and output types to a Runnable, returning a new Runnable.

    with_retry
        Create a new Runnable that retries the original Runnable on exceptions.

    map
        Return a new Runnable that maps a list of inputs to a list of outputs.

    with_fallbacks
        Add fallbacks to a Runnable, returning a new Runnable.

    as_tool
        Create a BaseTool from a Runnable.

    View source on GitHub