Structured prompt template for a language model.
StructuredPrompt(
    messages: Sequence[MessageLikeRepresentation],
    schema_: dict | type[BaseModel] | None = None,
    *,
    structured_output_kwargs: dict[str, Any] | None = None,
    template_format: PromptTemplateFormat = 'f-string',
    **kwargs: Any,
)

| Name | Type | Description |
|---|---|---|
| `messages`* | `Sequence[MessageLikeRepresentation]` | Sequence of messages. |
| `schema_` | `dict \| type[BaseModel] \| None` | Schema for the structured prompt. Default: `None`. |
| `structured_output_kwargs` | `dict[str, Any] \| None` | Additional kwargs for structured output. Default: `None`. |
| `template_format` | `PromptTemplateFormat` | Template format for the prompt. Default: `'f-string'`. |
| Name | Type |
|---|---|
| `messages` | `Sequence[MessageLikeRepresentation]` |
| `schema_` | `dict \| type[BaseModel] \| None` |
| `structured_output_kwargs` | `dict[str, Any] \| None` |
| `template_format` | `PromptTemplateFormat` |
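A minimal construction sketch (hedged: assumes `langchain-core` is installed; the `Answer` model and message contents are illustrative, not part of the API):

```python
from pydantic import BaseModel

from langchain_core.prompts.structured import StructuredPrompt


class Answer(BaseModel):
    """Illustrative output schema; a dict schema works as well."""

    answer: str
    confidence: float


prompt = StructuredPrompt(
    messages=[
        ("system", "You are a careful assistant."),
        ("human", "{question}"),
    ],
    schema_=Answer,
)
```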
Get the namespace of the LangChain object.
For example, if the class is langchain.llms.openai.OpenAI, then the namespace
is ["langchain", "llms", "openai"]
Create a chat prompt template from a variety of message formats.
Pipe the structured prompt to a language model (see the sketch after these method summaries).
Validate that input variables match the placeholders in a format string.
Create a class from a string template.
Format messages from kwargs.
Async format messages from kwargs.
Get a new ChatPromptTemplate with some input variables already filled in.
Append a message to the end of the chat template.
Extend the chat template with a sequence of messages.
Save prompt to file.
Return a pretty representation of the message for display.
Format the prompt with the inputs.
Async format the prompt with the inputs.
Format prompt.
Async format prompt.
Print a pretty representation of the message.
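A sketch of piping, continuing from the construction example above (hedged: `ChatOpenAI` from the separate `langchain-openai` package is an assumed stand-in; any chat model that supports structured output should behave the same):

```python
from langchain_openai import ChatOpenAI  # assumed model; swap in your own

# Piping binds the prompt's schema to the model's structured-output support,
# so the chain returns instances of Answer rather than raw messages.
chain = prompt | ChatOpenAI(model="gpt-4o-mini")
result = chain.invoke({"question": "What is 2 + 2?"})
# e.g. Answer(answer="4", confidence=0.99)
```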
Template input variables.
A list of the names of the variables for placeholder or MessagePlaceholder that are optional; these are inferred automatically from the prompt, so callers need not provide them.
A dictionary of the types of the variables the prompt template expects.
How to parse the output of calling an LLM on this formatted prompt.
A dictionary of the partial variables the prompt template carries (see the sketch after these attributes).
Optional metadata associated with the prompt template, used for tracing.
Optional list of tags associated with the prompt template, used for tracing.
Validate that variable names do not include restricted names.
Return True as this class is serializable.
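A sketch of the variable-related attributes and partialing (continuing the objects above; variable names are illustrative):

```python
localized_prompt = StructuredPrompt(
    messages=[("system", "Answer in {language}."), ("human", "{question}")],
    schema_=Answer,
)
localized_prompt.input_variables    # e.g. ['language', 'question']

# Fill one variable ahead of time; the rest stay required.
partial_prompt = localized_prompt.partial(language="French")
partial_prompt.input_variables      # ['question']
partial_prompt.partial_variables    # {'language': 'French'}
```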
Invoke the prompt template with a dict of inputs to produce a formatted prompt value (example below).
Asynchronously invoke the prompt template with a dict of inputs.
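A sketch of invoking the template directly, using `localized_prompt` from above:

```python
value = localized_prompt.invoke(
    {"language": "French", "question": "What is 2 + 2?"}
)
value.to_messages()
# [SystemMessage(content='Answer in French.'),
#  HumanMessage(content='What is 2 + 2?')]
```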
Return a dictionary representation of the prompt.
Get a JSON schema that represents the input to the Runnable.
Get a JSON schema that represents the output of the Runnable.
The type of config this Runnable accepts, specified as a Pydantic model.
Get a JSON schema that represents the config of the Runnable.
Return a list of prompts used by this Runnable.
Pick keys from the output dict of this Runnable.
Merge the Dict input with the output produced by the mapping argument.
Run invoke in parallel on a list of inputs.
Run ainvoke in parallel on a list of inputs.
Stream all output from a Runnable, as reported to the callback system.
Generate a stream of events.
Bind arguments to a Runnable, returning a new Runnable.
Bind lifecycle listeners to a Runnable, returning a new Runnable.
Bind async lifecycle listeners to a Runnable.
Bind input and output types to a Runnable, returning a new Runnable.
Create a new Runnable that retries the original Runnable on exceptions (combined with batch in the sketch after this listing).
Return a new Runnable that maps a list of inputs to a list of outputs.
Add fallbacks to a Runnable, returning a new Runnable.
Create a BaseTool from a Runnable.
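A combined sketch of a few of these helpers, reusing `chain` from the piping example above (hedged; the retry and concurrency settings shown are illustrative):

```python
# Retry transient failures up to 3 attempts, then format and run
# two inputs in parallel.
robust_chain = chain.with_retry(stop_after_attempt=3)
answers = robust_chain.batch(
    [
        {"question": "What is 2 + 2?"},
        {"question": "What is the capital of France?"},
    ],
    config={"max_concurrency": 2},
)
```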