Structured prompt template for a language model.
```python
StructuredPrompt(
    messages: Sequence[MessageLikeRepresentation],
    schema_: dict | type[BaseModel] | None = None,
    *,
    structured_output_kwargs: dict[str, Any] | None = None,
    template_format: PromptTemplateFormat = 'f-string',
    **kwargs: Any,
)
```

| Name | Type | Default | Description |
|---|---|---|---|
| `messages` (required) | `Sequence[MessageLikeRepresentation]` | — | Sequence of messages. |
| `schema_` | `dict \| type[BaseModel] \| None` | `None` | Schema for the structured prompt. |
| `structured_output_kwargs` | `dict[str, Any] \| None` | `None` | Additional kwargs for structured output. |
| `template_format` | `PromptTemplateFormat` | `'f-string'` | Template format for the prompt. |
| Name | Type |
|---|---|
| `messages` | `Sequence[MessageLikeRepresentation]` |
| `schema_` | `dict \| type[BaseModel] \| None` |
| `structured_output_kwargs` | `dict[str, Any] \| None` |
| `template_format` | `PromptTemplateFormat` |
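To make the `template_format` parameter concrete, here is a minimal stdlib-only sketch (not LangChain code — the `render` helper is hypothetical) of how the two common formats resolve variables: `'f-string'` templates use `{name}` placeholders, while mustache-style templates use `{{ name }}`.

```python
import re

def render(template: str, variables: dict, template_format: str = "f-string") -> str:
    """Render a template string, mimicking the two common template formats."""
    if template_format == "f-string":
        # {name} placeholders, resolved by str.format
        return template.format(**variables)
    if template_format == "mustache":
        # minimal mustache: replace {{ name }} with the variable's value
        return re.sub(r"\{\{\s*(\w+)\s*\}\}",
                      lambda m: str(variables[m.group(1)]), template)
    raise ValueError(f"unknown template_format: {template_format}")

print(render("Hello, {name}!", {"name": "Ada"}))                  # f-string
print(render("Hello, {{ name }}!", {"name": "Ada"}, "mustache"))  # mustache
```

Both calls print `Hello, Ada!`; only the placeholder syntax differs.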
Get the namespace of the LangChain object.
For example, if the class is `langchain.llms.openai.OpenAI`, then the namespace is `["langchain", "llms", "openai"]`.
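In other words, the namespace is the dotted module path of the class split into its components. A quick illustrative sketch (plain Python, not the actual method):

```python
# The namespace is the module path with the class name removed,
# returned as a list of path components.
qualified_name = "langchain.llms.openai.OpenAI"
*namespace, class_name = qualified_name.split(".")
print(namespace)    # ['langchain', 'llms', 'openai']
print(class_name)   # 'OpenAI'
```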
Create a chat prompt template from a variety of message formats.
Pipe the structured prompt to a language model.
Validate input variables.
Create a chat prompt template from a template string.
Create a chat prompt template from a variety of message formats.
Format the chat template into a list of finalized messages.
Async format the chat template into a list of finalized messages.
Get a new ChatPromptTemplate with some input variables already filled in.
Append a message to the end of the chat template.
Extend the chat template with a sequence of messages.
Save prompt to file.
Human-readable representation.
Format the chat template into a string.
Async format the chat template into a string.
Format prompt.
Async format prompt.
Format kwargs into a list of messages.
Async format kwargs into a list of messages.
Human-readable representation.
Print a human-readable representation.
A list of the names of the variables whose values are required as inputs to the prompt.
A list of the names of the variables for placeholder or MessagePlaceholder that are optional and need not be provided by the user.
A dictionary of the types of the variables the prompt template expects.
How to parse the output of calling an LLM on this formatted prompt.
A dictionary of the partial variables the prompt template carries.
Metadata to be used for tracing.
Tags to be used for tracing.
Return the output type of the prompt.
Validate variable names do not include restricted names.
Return True as this class is serializable.
Get the input schema for the prompt.
Invoke the prompt.
Async invoke the prompt.
Create PromptValue.
Async create PromptValue.
Return a partial of the prompt template.
Format the prompt with the inputs.
Async format the prompt with the inputs.
Return dictionary representation of prompt.
Save the prompt.
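The dict-representation/save pair above can be sketched with the stdlib: serialize a dictionary describing the prompt and write it to a JSON file. The dictionary shape here is illustrative, not the exact serialization format LangChain emits.

```python
import json
import pathlib
import tempfile

# Hypothetical dict representation of a prompt (illustrative shape only).
prompt_dict = {
    "_type": "chat",
    "input_variables": ["question"],
    "messages": [{"role": "human", "content": "{question}"}],
}

# "Save" writes the dict representation to a file; loading it back
# round-trips the same structure.
path = pathlib.Path(tempfile.mkdtemp()) / "prompt.json"
path.write_text(json.dumps(prompt_dict, indent=2))
print(json.loads(path.read_text())["input_variables"])  # ['question']
```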
The name of the Runnable. Used for debugging and tracing.
Input type.
Output type.
The type of input this Runnable accepts specified as a Pydantic model.
Output schema.
List configurable fields for this Runnable.
Get the name of the Runnable.
Get a Pydantic model that can be used to validate input to the Runnable.
Get a JSON schema that represents the input to the Runnable.
Get a Pydantic model that can be used to validate output to the Runnable.
Get a JSON schema that represents the output of the Runnable.
The type of config this Runnable accepts specified as a Pydantic model.
Get a JSON schema that represents the config of the Runnable.
Return a graph representation of this Runnable.
Return a list of prompts used by this Runnable.
Pick keys from the output dict of this Runnable.
Assign new fields to the dict output of this Runnable.
Transform a single input into an output.
Async transform a single input into an output.
Default implementation runs invoke in parallel using a thread pool executor.
Run invoke in parallel on a list of inputs.
Default implementation runs ainvoke in parallel using asyncio.gather.
Run ainvoke in parallel on a list of inputs.
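The default batch behavior described above — run `invoke` over each input in parallel on a thread pool, preserving input order — can be sketched with the stdlib (the `invoke` function here is a hypothetical stand-in for any Runnable's invoke):

```python
from concurrent.futures import ThreadPoolExecutor

def invoke(x: int) -> int:
    # Stand-in for a single Runnable invocation.
    return x * x

def batch(inputs: list[int], max_concurrency: int = 4) -> list[int]:
    # Executor.map fans invoke out across threads and yields results
    # in the same order as the inputs.
    with ThreadPoolExecutor(max_workers=max_concurrency) as pool:
        return list(pool.map(invoke, inputs))

print(batch([1, 2, 3, 4]))  # [1, 4, 9, 16]
```

The async variant follows the same shape, with `asyncio.gather` in place of the thread pool.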
Default implementation of stream, which calls invoke.
Default implementation of astream, which calls ainvoke.
Stream all output from a Runnable, as reported to the callback system.
Generate a stream of events.
Transform inputs to outputs.
Async transform inputs to outputs.
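"Default implementation of stream, which calls invoke" means the base class produces a one-chunk stream: the full result of a single `invoke` call, yielded once. A minimal sketch (again with a hypothetical stand-in `invoke`):

```python
from typing import Iterator

def invoke(x: str) -> str:
    # Stand-in for a single Runnable invocation.
    return x.upper()

def stream(x: str) -> Iterator[str]:
    # No incremental chunks by default: yield the one final value.
    yield invoke(x)

print(list(stream("hello")))  # ['HELLO']
```

Subclasses that can produce partial results override this to yield many chunks instead of one.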
Bind arguments to a Runnable, returning a new Runnable.
Bind config to a Runnable, returning a new Runnable.
Bind lifecycle listeners to a Runnable, returning a new Runnable.
Bind async lifecycle listeners to a Runnable.
Bind input and output types to a Runnable, returning a new Runnable.
Create a new Runnable that retries the original Runnable on exceptions.
Return a new Runnable that maps a list of inputs to a list of outputs.
Add fallbacks to a Runnable, returning a new Runnable.
Create a BaseTool from a Runnable.
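The retry and fallback combinators above share a shape: wrap a callable and return a new one with extra failure handling. A stdlib-only sketch of the semantics (both wrappers here are hypothetical, not LangChain's implementations, which also support backoff and exception filters):

```python
def with_retry(fn, attempts: int = 3):
    """Retry fn on any exception, re-raising the last failure."""
    def wrapped(x):
        last = None
        for _ in range(attempts):
            try:
                return fn(x)
            except Exception as exc:  # broad catch: sketch only
                last = exc
        raise last
    return wrapped

def with_fallbacks(primary, fallbacks):
    """Try primary, then each fallback in order, until one succeeds."""
    def wrapped(x):
        for fn in (primary, *fallbacks):
            try:
                return fn(x)
            except Exception:
                continue
        raise RuntimeError("all runnables failed")
    return wrapped

calls = {"n": 0}
def flaky(x):
    # Fails twice, then succeeds: a stand-in for a transient error.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return f"ok: {x}"

print(with_retry(flaky)(1))                        # succeeds on the 3rd try
print(with_fallbacks(lambda x: 1 / 0, [str])(42))  # primary fails, str succeeds
```

In both cases the wrapped object is a *new* callable; the original is unchanged, matching the "returning a new Runnable" convention used throughout this class.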