langchain_core.prompts.chat.ChatPromptTemplate
Class · Since v0.1

    ChatPromptTemplate

    Prompt template for chat models.

Use it to create flexible, templated prompts for chat models.

    Example
    from langchain_core.prompts import ChatPromptTemplate
    
    template = ChatPromptTemplate(
        [
            ("system", "You are a helpful AI bot. Your name is {name}."),
            ("human", "Hello, how are you doing?"),
            ("ai", "I'm doing well, thanks!"),
            ("human", "{user_input}"),
        ]
    )
    
    prompt_value = template.invoke(
        {
            "name": "Bob",
            "user_input": "What is your name?",
        }
    )
    # Output:
    # ChatPromptValue(
    #    messages=[
    #        SystemMessage(content='You are a helpful AI bot. Your name is Bob.'),
    #        HumanMessage(content='Hello, how are you doing?'),
    #        AIMessage(content="I'm doing well, thanks!"),
    #        HumanMessage(content='What is your name?')
    #    ]
    # )
    Messages Placeholder
    # In addition to Human/AI/Tool/Function messages,
    # you can initialize the template with a MessagesPlaceholder
    # either using the class directly or with the shorthand tuple syntax:
    
    template = ChatPromptTemplate(
        [
            ("system", "You are a helpful AI bot."),
            # Means the template will receive an optional list of messages under
            # the "conversation" key
            ("placeholder", "{conversation}"),
            # Equivalently:
            # MessagesPlaceholder(variable_name="conversation", optional=True)
        ]
    )
    
    prompt_value = template.invoke(
        {
            "conversation": [
                ("human", "Hi!"),
                ("ai", "How can I assist you today?"),
                ("human", "Can you make me an ice cream sundae?"),
                ("ai", "No."),
            ]
        }
    )
    
    # Output:
    # ChatPromptValue(
    #    messages=[
    #        SystemMessage(content='You are a helpful AI bot.'),
    #        HumanMessage(content='Hi!'),
    #        AIMessage(content='How can I assist you today?'),
    #        HumanMessage(content='Can you make me an ice cream sundae?'),
    #        AIMessage(content='No.'),
    #    ]
    # )
    Single-variable template

If your prompt has only a single input variable (i.e., one instance of '{variable_name}') and you invoke the template with a non-dict object, the prompt template injects the provided value into that variable's location.

    from langchain_core.prompts import ChatPromptTemplate
    
    template = ChatPromptTemplate(
        [
            ("system", "You are a helpful AI bot. Your name is Carl."),
            ("human", "{user_input}"),
        ]
    )
    
    prompt_value = template.invoke("Hello, there!")
    # Equivalent to
    # prompt_value = template.invoke({"user_input": "Hello, there!"})
    
    # Output:
    #  ChatPromptValue(
    #     messages=[
    #         SystemMessage(content='You are a helpful AI bot. Your name is Carl.'),
    #         HumanMessage(content='Hello, there!'),
    #     ]
    # )
    ChatPromptTemplate(
      self,
      messages: Sequence[MessageLikeRepresentation],
      *,
      template_format: PromptTemplateFormat = 'f-string',
      **kwargs: Any = {}
    )

    Bases

    BaseChatPromptTemplate

    Used in Docs

    • How to evaluate a runnable
    • Manage prompts programmatically
    • Trace LangChain applications (Python and JS/TS)
    • Trace with OpenTelemetry
    • Activeloop Deep memory integration
    (61 more not shown)

    Parameters

messages (required): Sequence[MessageLikeRepresentation]

Sequence of message representations.

A message can be represented using the following formats:

1. BaseMessagePromptTemplate
2. BaseMessage
3. 2-tuple of (message type, template); e.g., ('human', '{user_input}')
4. 2-tuple of (message class, template)
5. A string, which is shorthand for ('human', template); e.g., '{user_input}'

template_format: PromptTemplateFormat
Default: 'f-string'

Format of the template.

**kwargs: Any
Default: {}

    Additional keyword arguments passed to BasePromptTemplate, including (but not limited to):

    • input_variables: A list of the names of the variables whose values are required as inputs to the prompt.

• optional_variables: A list of the names of the variables bound to a placeholder or MessagesPlaceholder that are optional.

  These variables are inferred automatically from the prompt, so the user need not provide them.

• partial_variables: A dictionary of the partial variables the prompt template carries (see the sketch after this list).

  Partial variables pre-populate the template so that you don't need to pass them in every time you call the prompt.

    • validate_template: Whether to validate the template.

    • input_types: A dictionary of the types of the variables the prompt template expects.

      If not provided, all variables are assumed to be strings.
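
For instance, a minimal sketch of passing such keyword arguments through the constructor; the message contents are illustrative, and 'mustache' is assumed to be among the supported PromptTemplateFormat values:

from langchain_core.prompts import ChatPromptTemplate

# `partial_variables` pre-fills `name`; only `user_input` remains
# required at invocation time.
template = ChatPromptTemplate(
    [
        ("system", "You are a helpful AI bot. Your name is {name}."),
        ("human", "{user_input}"),
    ],
    partial_variables={"name": "Bob"},
)
prompt_value = template.invoke({"user_input": "What is your name?"})

# `template_format` selects the templating syntax, e.g. mustache:
mustache_template = ChatPromptTemplate(
    [("human", "Hello, {{name}}!")],
    template_format="mustache",
)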

    Constructors

    constructor
    __init__
messages: Sequence[MessageLikeRepresentation]
template_format: PromptTemplateFormat

    Attributes

    attribute
    messages: Annotated[list[MessageLike], SkipValidation()]

    List of messages consisting of either message prompt templates or messages.

    attribute
validate_template: bool

Whether to try validating the template.

    Methods

    method
    get_lc_namespace

    Get the namespace of the LangChain object.

    method
    validate_input_variables

    Validate input variables.

    If input_variables is not set, it will be set to the union of all input variables in the messages.

    method
    from_template

    Create a chat prompt template from a template string.

    Creates a chat template consisting of a single message assumed to be from the human.
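
For example, a short sketch (the {topic} variable is illustrative):

from langchain_core.prompts import ChatPromptTemplate

# The single string becomes one human message template.
template = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
prompt_value = template.invoke({"topic": "cats"})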

    method
    from_messages

    Create a chat prompt template from a variety of message formats.
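
For example (message contents are illustrative); note the plain string uses the human-message shorthand described under Parameters:

from langchain_core.prompts import ChatPromptTemplate

template = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant named {name}."),
        "{user_input}",  # shorthand for ("human", "{user_input}")
    ]
)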

    method
    format_messages

    Format the chat template into a list of finalized messages.
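
A sketch, reusing the template from the from_messages example above:

messages = template.format_messages(name="Bob", user_input="Hi!")
# -> [SystemMessage(content='You are a helpful assistant named Bob.'),
#     HumanMessage(content='Hi!')]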

    method
    aformat_messages

    Async format the chat template into a list of finalized messages.

    method
    partial

    Get a new ChatPromptTemplate with some input variables already filled in.
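
A minimal sketch (the role variable is illustrative):

from langchain_core.prompts import ChatPromptTemplate

base = ChatPromptTemplate(
    [("system", "You are {role}."), ("human", "{user_input}")]
)
# `role` is now pre-filled; only `user_input` is still required.
greeter = base.partial(role="a friendly greeter")
prompt_value = greeter.invoke({"user_input": "Hello!"})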

    method
    append

    Append a message to the end of the chat template.

    method
    extend

    Extend the chat template with a sequence of messages.
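
A short sketch covering both methods, continuing from any template above (message contents are illustrative):

template.append(("human", "{followup}"))
template.extend(
    [
        ("ai", "Understood."),
        ("human", "{final_question}"),
    ]
)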

    method
    save

    Save prompt to file.

    method
    pretty_repr

    Human-readable representation.

Inherited from BaseChatPromptTemplate

Attributes

attribute
lc_attributes: dict

List of attribute names that should be included in the serialized kwargs.

Methods

method
format

Format the prompt with the inputs.

method
aformat

Async format the prompt with the inputs.

method
format_prompt

Format prompt.

method
aformat_prompt

Async format prompt.

method
pretty_print

Print a pretty representation of the message.

Inherited from BasePromptTemplate

Attributes

attribute
input_variables: list[str]

Template input variables.

attribute
optional_variables: list[str]

A list of the names of the variables bound to a placeholder or MessagesPlaceholder that are optional.

attribute
input_types: builtins.dict[str, Any]

A dictionary of the types of the variables the prompt template expects.

attribute
output_parser: BaseOutputParser | None

How to parse the output of calling an LLM on this formatted prompt.

attribute
partial_variables: Mapping[str, Any]

A dictionary of the partial variables the prompt template carries.

attribute
metadata: dict[str, Any] | None

Optional metadata associated with the prompt template.

attribute
tags: list[str] | None

Optional list of tags associated with the prompt template.

attribute
model_config

attribute
OutputType: Any

Methods

method
validate_variable_names

Validate variable names do not include restricted names.

method
is_lc_serializable

Return True as this class is serializable.

method
get_input_schema

method
invoke

Invoke the prompt template with the input to produce a PromptValue.

method
ainvoke

Asynchronously invoke the prompt template with the input to produce a PromptValue.

method
format_prompt

Format prompt.

method
aformat_prompt

Async format prompt.

method
format

Format the prompt with the inputs.

method
aformat

Async format the prompt with the inputs.

method
dict

Return a dictionary representation of the prompt.

Inherited from RunnableSerializable

Attributes

attribute
name: str

The name of the Runnable.

attribute
model_config

Methods

method
to_json

Serialize the Runnable to JSON.

method
configurable_fields

Configure particular Runnable fields at runtime.

method
configurable_alternatives

Configure alternatives for Runnable objects that can be set at runtime.

Inherited from Serializable

Attributes

attribute
lc_secrets: dict[str, str]

A map of constructor argument names to secret ids.

attribute
lc_attributes: dict

List of attribute names that should be included in the serialized kwargs.

attribute
model_config

Methods

method
is_lc_serializable

Return True as this class is serializable.

method
lc_id

Return a unique identifier for this class for serialization purposes.

method
to_json

Serialize the object to JSON.

method
to_json_not_implemented

Serialize a "not implemented" object.

Inherited from Runnable

Attributes

attribute
name: str

The name of the Runnable.

attribute
InputType: Any

attribute
OutputType: Any

attribute
input_schema: type[BaseModel]

The type of input this Runnable accepts, specified as a Pydantic model.

attribute
output_schema: type[BaseModel]

Output schema.

attribute
config_specs: list[ConfigurableFieldSpec]

Methods

method
get_name

method
get_input_schema

method
get_input_jsonschema

Get a JSON schema that represents the input to the Runnable.

method
get_output_schema

method
get_output_jsonschema

Get a JSON schema that represents the output of the Runnable.

method
config_schema

The type of config this Runnable accepts specified as a Pydantic model.

method
get_config_jsonschema

Get a JSON schema that represents the config of the Runnable.

method
get_graph

method
get_prompts

Return a list of prompts used by this Runnable.

method
pipe

Pipe Runnable objects.
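
A minimal sketch; RunnableLambda stands in for the chat model that would normally follow the prompt:

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableLambda

template = ChatPromptTemplate([("human", "{user_input}")])
# Compose the prompt with a downstream Runnable.
chain = template.pipe(RunnableLambda(lambda pv: pv.to_string()))
result = chain.invoke({"user_input": "Hi"})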

method
pick

Pick keys from the output dict of this Runnable.

method
assign

Merge the Dict input with the output produced by the mapping argument.

method
invoke

Transform a single input into an output.

method
ainvoke

Asynchronously transform a single input into an output.

method
batch

method
batch_as_completed

Run invoke in parallel on a list of inputs.
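
For a prompt template, a short sketch (reusing the template from the pipe example above):

prompt_values = template.batch(
    [{"user_input": "Hi"}, {"user_input": "Bye"}]
)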

method
abatch

method
abatch_as_completed

Run ainvoke in parallel on a list of inputs.

method
stream

method
astream

method
astream_log

Stream all output from a Runnable, as reported to the callback system.

method
astream_events

Generate a stream of events.

method
transform

method
atransform

method
bind

Bind arguments to a Runnable, returning a new Runnable.

method
with_config

method
with_listeners

Bind lifecycle listeners to a Runnable, returning a new Runnable.

method
with_alisteners

Bind async lifecycle listeners to a Runnable.

method
with_types

Bind input and output types to a Runnable, returning a new Runnable.

method
with_retry

Create a new Runnable that retries the original Runnable on exceptions.

method
map

Return a new Runnable that maps a list of inputs to a list of outputs.

method
with_fallbacks

Add fallbacks to a Runnable, returning a new Runnable.

method
as_tool

Create a BaseTool from a Runnable.

    View source on GitHub