Chat prompt template that supports few-shot examples.
The high-level structure produced by this prompt template is a list of messages consisting of prefix message(s), example message(s), and suffix message(s).
This structure enables creating a conversation with intermediate examples like:
    System: You are a helpful AI Assistant
    Human: What is 2+2?
    AI: 4
    Human: What is 2+3?
    AI: 5
    Human: What is 4+4?
This prompt template can be used with a fixed list of examples, or it can dynamically select examples based on the input.
Return False, as this class is not serializable.
Format kwargs into a list of messages.
Async format kwargs into a list of messages.
Format the prompt with the given inputs, returning a string.
Use this method to generate a string representation of a prompt consisting of chat messages.
Useful for feeding into a string-based completion language model or debugging.
Async format the prompt with the given inputs, returning a string.
Use this method to generate a string representation of a prompt consisting of chat messages.
Useful for feeding into a string-based completion language model or debugging.
Return a pretty representation of the prompt template.
A list of the names of the variables for placeholder or MessagePlaceholder that are optional.
A dictionary of the types of the variables the prompt template expects.
How to parse the output of calling an LLM on this formatted prompt.
A dictionary of the partial variables the prompt template carries.
Optional metadata associated with the prompt template.
Optional list of tags associated with the prompt template.
Validate variable names do not include restricted names.
Get the namespace of the LangChain object.
Invoke the prompt template with the given input, producing a formatted prompt.
Asynchronously invoke the prompt template with the given input, producing a formatted prompt.
Format prompt.
Async format prompt.
Get a new ChatPromptTemplate with some input variables already filled in.
Return a dictionary representation of the prompt.
Save prompt to file.
Get a JSON schema that represents the input to the Runnable.
Get a JSON schema that represents the output of the Runnable.
The type of config this Runnable accepts specified as a Pydantic model.
Get a JSON schema that represents the config of the Runnable.
Return a list of prompts used by this Runnable.
Pipe Runnable objects.
Pick keys from the output dict of this Runnable.
Merge the Dict input with the output produced by the mapping argument.
Invoke the prompt template with the given input, producing a formatted prompt.
Asynchronously invoke the prompt template with the given input, producing a formatted prompt.
Run invoke in parallel on a list of inputs.
Run ainvoke in parallel on a list of inputs.
Stream all output from a Runnable, as reported to the callback system.
Generate a stream of events.
Bind arguments to a Runnable, returning a new Runnable.
Bind lifecycle listeners to a Runnable, returning a new Runnable.
Bind async lifecycle listeners to a Runnable.
Bind input and output types to a Runnable, returning a new Runnable.
Create a new Runnable that retries the original Runnable on exceptions.
Return a new Runnable that maps a list of inputs to a list of outputs.
Add fallbacks to a Runnable, returning a new Runnable.
Create a BaseTool from a Runnable.