Prompt template for chat models.

Use it to create flexible, templated prompts for chat models.
```python
from langchain_core.prompts import ChatPromptTemplate

template = ChatPromptTemplate(
    [
        ("system", "You are a helpful AI bot. Your name is {name}."),
        ("human", "Hello, how are you doing?"),
        ("ai", "I'm doing well, thanks!"),
        ("human", "{user_input}"),
    ]
)

prompt_value = template.invoke(
    {
        "name": "Bob",
        "user_input": "What is your name?",
    }
)
# Output:
# ChatPromptValue(
#     messages=[
#         SystemMessage(content='You are a helpful AI bot. Your name is Bob.'),
#         HumanMessage(content='Hello, how are you doing?'),
#         AIMessage(content="I'm doing well, thanks!"),
#         HumanMessage(content='What is your name?')
#     ]
# )
```

In addition to human, AI, tool, and function messages, you can initialize the template with a `MessagesPlaceholder`, either by using the class directly or with the shorthand tuple syntax:
```python
template = ChatPromptTemplate(
    [
        ("system", "You are a helpful AI bot."),
        # Means the template will receive an optional list of messages under
        # the "conversation" key
        ("placeholder", "{conversation}"),
        # Equivalently:
        # MessagesPlaceholder(variable_name="conversation", optional=True)
    ]
)

prompt_value = template.invoke(
    {
        "conversation": [
            ("human", "Hi!"),
            ("ai", "How can I assist you today?"),
            ("human", "Can you make me an ice cream sundae?"),
            ("ai", "No."),
        ]
    }
)
# Output:
# ChatPromptValue(
#     messages=[
#         SystemMessage(content='You are a helpful AI bot.'),
#         HumanMessage(content='Hi!'),
#         AIMessage(content='How can I assist you today?'),
#         HumanMessage(content='Can you make me an ice cream sundae?'),
#         AIMessage(content='No.'),
#     ]
# )
```

If your prompt has only a single input variable (i.e., one instance of `{variable_name}`) and you invoke the template with a non-dict object, the prompt template will inject the provided argument into that variable location:
```python
from langchain_core.prompts import ChatPromptTemplate

template = ChatPromptTemplate(
    [
        ("system", "You are a helpful AI bot. Your name is Carl."),
        ("human", "{user_input}"),
    ]
)

prompt_value = template.invoke("Hello, there!")
# Equivalent to
# prompt_value = template.invoke({"user_input": "Hello, there!"})

# Output:
# ChatPromptValue(
#     messages=[
#         SystemMessage(content='You are a helpful AI bot. Your name is Carl.'),
#         HumanMessage(content='Hello, there!'),
#     ]
# )
```

Constructor signature:

```python
ChatPromptTemplate(
    messages: Sequence[MessageLikeRepresentation],
    *,
    template_format: PromptTemplateFormat = 'f-string',
    **kwargs: Any,
)
```

Parameters:

| Name | Type | Description |
|---|---|---|
| `messages`* | `Sequence[MessageLikeRepresentation]` | Sequence of message representations. A message can be a `BaseMessagePromptTemplate`, a `BaseMessage`, a `(message type, template)` 2-tuple such as `("human", "{user_input}")`, or a bare template string (shorthand for a human message). |
| `template_format` | `PromptTemplateFormat` | Format of the template. Default: `'f-string'`. |
| `**kwargs` | `Any` | Additional keyword arguments. Default: `{}`. |
Attributes:

| Name | Type |
|---|---|
| `messages` | `Sequence[MessageLikeRepresentation]` |
| `template_format` | `PromptTemplateFormat` |
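As a quick illustration of the `template_format` parameter, here is a minimal sketch (variable values are illustrative) that uses mustache-style templates instead of the default f-string syntax:

```python
from langchain_core.prompts import ChatPromptTemplate

# Mustache templates use {{double braces}} instead of f-string {braces}.
template = ChatPromptTemplate(
    [
        ("system", "You are a helpful AI bot. Your name is {{name}}."),
        ("human", "{{user_input}}"),
    ],
    template_format="mustache",
)
prompt_value = template.invoke({"name": "Bob", "user_input": "Hi!"})
```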
Methods defined or overridden on `ChatPromptTemplate`:

| Method | Description |
|---|---|
| `get_lc_namespace` | Get the namespace of the LangChain object. |
| `validate_input_variables` | Validate input variables. If `input_variables` is not set, it is set to the union of all input variables in the messages. |
| `from_template` | Create a chat prompt template from a template string. Creates a chat template consisting of a single message assumed to be from the human. |
| `from_messages` | Create a chat prompt template from a variety of message formats. |
| `format_messages` | Format the chat template into a list of finalized messages. |
| `aformat_messages` | Async format the chat template into a list of finalized messages. |
| `partial` | Get a new `ChatPromptTemplate` with some input variables already filled in. |
| `append` | Append a message to the end of the chat template. |
| `extend` | Extend the chat template with a sequence of messages. |
| `save` | Save the prompt to a file. |
| `pretty_repr` | Human-readable representation. |
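For instance, a minimal sketch (assuming `langchain-core` is installed; variable names are illustrative) of `from_template`, `partial`, and `format_messages`:

```python
from langchain_core.prompts import ChatPromptTemplate

# from_template builds a template with a single human message.
template = ChatPromptTemplate.from_template("Tell me a {adjective} joke about {topic}.")

# Pre-fill `adjective`; only `topic` remains a required input.
partial_template = template.partial(adjective="funny")

# format_messages returns a list of BaseMessage objects ready for a chat model.
messages = partial_template.format_messages(topic="cats")
# [HumanMessage(content='Tell me a funny joke about cats.')]
```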
Inherited attributes:

| Attribute | Description |
|---|---|
| `input_variables` | Template input variables. |
| `optional_variables` | A list of the names of the variables for placeholder or `MessagesPlaceholder` that are optional. |
| `input_types` | A dictionary of the types of the variables the prompt template expects. |
| `output_parser` | How to parse the output of calling an LLM on this formatted prompt. |
| `partial_variables` | A dictionary of the partial variables the prompt template carries. |
| `metadata` | Optional metadata associated with the prompt template. |
| `tags` | Optional list of tags associated with the prompt template. |
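A small sketch of inspecting these attributes, continuing the hypothetical `partial_template` from the example above:

```python
print(partial_template.input_variables)    # ['topic']
print(partial_template.partial_variables)  # {'adjective': 'funny'}
print(partial_template.input_types)        # {} unless explicitly provided
```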
Inherited methods for validation, formatting, and schema inspection:

| Method | Description |
|---|---|
| `validate_variable_names` | Validate that variable names do not include restricted names. |
| `is_lc_serializable` | Return `True`, as this class is serializable. |
| `invoke` | Invoke the prompt template with the inputs to produce a `PromptValue`. |
| `ainvoke` | Async invoke the prompt template with the inputs. |
| `format_prompt` | Format the prompt. |
| `aformat_prompt` | Async format the prompt. |
| `format` | Format the prompt with the inputs. |
| `aformat` | Async format the prompt with the inputs. |
| `dict` | Return a dictionary representation of the prompt. |
| `get_input_jsonschema` | Get a JSON schema that represents the input to the Runnable. |
| `get_output_jsonschema` | Get a JSON schema that represents the output of the Runnable. |
| `config_schema` | The type of config this Runnable accepts, specified as a Pydantic model. |
| `get_config_jsonschema` | Get a JSON schema that represents the config of the Runnable. |
| `get_prompts` | Return a list of prompts used by this Runnable. |
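For example, a sketch of inspecting a template's input schema, reusing the `template` defined earlier (exact schema contents may vary by `langchain-core` version):

```python
import json

schema = template.get_input_jsonschema()
print(json.dumps(schema, indent=2))
# A JSON schema listing `adjective` and `topic` as expected string inputs.
```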
Inherited methods for composition and execution:

| Method | Description |
|---|---|
| `pipe` | Pipe Runnable objects. |
| `pick` | Pick keys from the output dict of this Runnable. |
| `assign` | Merge the dict input with the output produced by the mapping argument. |
| `batch` | Run `invoke` in parallel on a list of inputs. |
| `abatch` | Run `ainvoke` in parallel on a list of inputs. |
| `astream_log` | Stream all output from a Runnable, as reported to the callback system. |
| `astream_events` | Generate a stream of events. |
| `bind` | Bind arguments to a Runnable, returning a new Runnable. |
| `with_listeners` | Bind lifecycle listeners to a Runnable, returning a new Runnable. |
| `with_alisteners` | Bind async lifecycle listeners to a Runnable. |
| `with_types` | Bind input and output types to a Runnable, returning a new Runnable. |
| `with_retry` | Create a new Runnable that retries the original Runnable on exceptions. |
| `map` | Return a new Runnable that maps a list of inputs to a list of outputs. |
| `with_fallbacks` | Add fallbacks to a Runnable, returning a new Runnable. |
| `as_tool` | Create a `BaseTool` from a Runnable. |
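To illustrate composition, a hedged sketch that pipes a template into a chat model. `FakeListChatModel` is a test model shipped with `langchain-core`, used here so the example runs without API keys:

```python
from langchain_core.language_models.fake_chat_models import FakeListChatModel
from langchain_core.prompts import ChatPromptTemplate

template = ChatPromptTemplate([("human", "{question}")])
chain = template.pipe(FakeListChatModel(responses=["42", "43"]))

# invoke runs the template, then the model.
print(chain.invoke({"question": "What is the answer?"}))
# AIMessage(content='42')

# batch runs invoke in parallel over a list of inputs.
print(chain.batch([{"question": "a"}, {"question": "b"}]))
# Two AIMessages; the fake model cycles through its responses list.
```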