LangChain Reference
    Python · langchain-core · prompts · chat · MessagesPlaceholder
    Class · Since v0.1

    MessagesPlaceholder

    Prompt template that assumes the variable is already a list of messages.

    A placeholder which can be used to pass in a list of messages.

    Direct usage
    from langchain_core.prompts import MessagesPlaceholder
    
    prompt = MessagesPlaceholder("history")
    prompt.format_messages()  # raises KeyError
    
    prompt = MessagesPlaceholder("history", optional=True)
    prompt.format_messages()  # returns empty list []
    
    prompt.format_messages(
        history=[
            ("system", "You are an AI assistant."),
            ("human", "Hello!"),
        ]
    )
    # -> [
    #     SystemMessage(content="You are an AI assistant."),
    #     HumanMessage(content="Hello!"),
    # ]
    Building a prompt with chat history
    from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
    
    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", "You are a helpful assistant."),
            MessagesPlaceholder("history"),
            ("human", "{question}"),
        ]
    )
    prompt.invoke(
        {
            "history": [("human", "what's 5 + 2"), ("ai", "5 + 2 is 7")],
            "question": "now multiply that by 4",
        }
    )
    # -> ChatPromptValue(messages=[
    #     SystemMessage(content="You are a helpful assistant."),
    #     HumanMessage(content="what's 5 + 2"),
    #     AIMessage(content="5 + 2 is 7"),
    #     HumanMessage(content="now multiply that by 4"),
    # ])
    Limiting the number of messages
    from langchain_core.prompts import MessagesPlaceholder
    
    prompt = MessagesPlaceholder("history", n_messages=1)
    
    prompt.format_messages(
        history=[
            ("system", "You are an AI assistant."),
            ("human", "Hello!"),
        ]
    )
    # -> [
    #     HumanMessage(content="Hello!"),
    # ]
    MessagesPlaceholder(
      self,
      variable_name: str,
      *,
      optional: bool = False,
      **kwargs: Any
    )

    Bases

    BaseMessagePromptTemplate

    Used in Docs

    • Hyperbrowser browser agent integration
    • Llama2chat integration

    Parameters

    variable_name* (str)

    Name of variable to use as messages.

    optional (bool)
    Default: False

    Whether the variable is optional.

    If True, format_messages can be called with no arguments and will return an empty list.

    If False, a named argument with name variable_name must be passed in, even if the value is an empty list.

    Constructors

    constructor
    __init__

    Name           Type
    variable_name  str
    optional       bool

    Attributes

    attribute
    variable_name: str

    Name of variable to use as messages.

    attribute
    optional: bool

    Whether the variable is optional.

    If True format_messages can be called with no arguments and will return an empty list.

    If False then a named argument with name variable_name must be passed in, even if the value is an empty list.

    attribute
    n_messages: PositiveInt | None

    Maximum number of messages to include.

    If None, then will include all.

    attribute
    input_variables: list[str]

    Input variables for this prompt template.

    Methods

    method
    format_messages

    Format messages from kwargs.

    method
    pretty_repr

    Human-readable representation.
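    As a rough sketch of what pretty_repr produces (assuming only that the returned string embeds the variable name in braces, as the current langchain-core implementation does):

    ```python
    from langchain_core.prompts import MessagesPlaceholder

    prompt = MessagesPlaceholder("history")

    # Returns a human-readable string that includes the placeholder
    # variable, e.g. "{history}", under a short title line.
    print(prompt.pretty_repr())
    ```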

    Inherited from BaseMessagePromptTemplate

    Methods

    method
    is_lc_serializable

    Return True as this class is serializable.

    method
    get_lc_namespace

    Get the namespace of the LangChain object.

    async method
    aformat_messages

    Async format messages from kwargs.

    method
    pretty_print

    Print a pretty representation of the message.
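    The inherited aformat_messages is the async counterpart of format_messages and accepts the same keyword arguments; a minimal sketch:

    ```python
    import asyncio

    from langchain_core.prompts import MessagesPlaceholder

    prompt = MessagesPlaceholder("history")

    async def main() -> None:
        # Awaitable variant of format_messages; message tuples are
        # converted to the corresponding message objects.
        messages = await prompt.aformat_messages(
            history=[("human", "Hello!"), ("ai", "Hi there!")]
        )
        print(messages)

    asyncio.run(main())
    ```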

    Inherited from Serializable

    Attributes

    attribute
    lc_secrets: dict[str, str]

    A map of constructor argument names to secret ids.

    attribute
    lc_attributes: dict

    List of attribute names that should be included in the serialized kwargs.

    attribute
    model_config

    Methods

    method
    is_lc_serializable

    Return True as this class is serializable.

    method
    get_lc_namespace

    Get the namespace of the LangChain object.

    method
    lc_id

    Return a unique identifier for this class for serialization purposes.

    method
    to_json

    Convert the object to a JSON-serializable format.

    method
    to_json_not_implemented

    Serialize a "not implemented" object.
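    Because the class is serializable, it can be round-tripped through LangChain's JSON format; a minimal sketch using langchain_core.load.dumps and loads (assuming the default trusted namespaces cover langchain-core prompts, as they currently do):

    ```python
    from langchain_core.load import dumps, loads
    from langchain_core.prompts import MessagesPlaceholder

    prompt = MessagesPlaceholder("history", optional=True)

    # Serialize to the LangChain JSON format, then reconstruct.
    serialized = dumps(prompt)
    restored = loads(serialized)
    print(restored.variable_name)  # "history"
    ```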
