LangChain Reference
    Python · langchain-core · prompts · structured

    Class · Since v0.1

    StructuredPrompt

    Structured prompt template for a language model.

    StructuredPrompt(
      self,
      messages: Sequence[MessageLikeRepresentation],
      schema_: dict | type[BaseModel] | None = None,
      *,
      structured_output_kwargs: dict[str, Any] | None = None,
      template_format: PromptTemplateFormat = 'f-string',
      **kwargs: Any
    )

    Bases

    ChatPromptTemplate

    Used in Docs

    • Manage prompts programmatically

    Parameters

    messages (Sequence[MessageLikeRepresentation], required)
    Sequence of messages.

    schema_ (dict | type[BaseModel] | None, default: None)
    Schema for the structured prompt.

    structured_output_kwargs (dict[str, Any] | None, default: None)
    Additional kwargs for structured output.

    template_format (PromptTemplateFormat, default: 'f-string')
    Template format for the prompt.

    Constructors

    constructor
    __init__

    messages (Sequence[MessageLikeRepresentation])
    schema_ (dict | type[BaseModel] | None)
    structured_output_kwargs (dict[str, Any] | None)
    template_format (PromptTemplateFormat)

    Attributes

    attribute
    schema_: dict | type

    Schema for the structured prompt.

    attribute
    structured_output_kwargs: dict[str, Any]

    Additional kwargs for structured output.

    Methods

    method
    get_lc_namespace

    Get the namespace of the LangChain object.

    For example, if the class is langchain.llms.openai.OpenAI, then the namespace is ["langchain", "llms", "openai"].

    method
    from_messages_and_schema

    Create a chat prompt template from a variety of message formats.

    method
    pipe

    Pipe the structured prompt to a language model.

    Inherited from ChatPromptTemplate

    Attributes

    messages: Annotated[list[MessageLike], SkipValidation()]
    List of messages consisting of either message prompt templates or messages.

    validate_template: bool
    Whether or not to try validating the template.

    Methods

    validate_input_variables
    Validate input variables.

    from_template
    Create a chat prompt template from a template string.

    from_messages
    Create a chat prompt template from a variety of message formats.

    format_messages
    Format the chat template into a list of finalized messages.

    aformat_messages
    Async format the chat template into a list of finalized messages.

    partial
    Get a new ChatPromptTemplate with some input variables already filled in.

    append
    Append a message to the end of the chat template.

    extend
    Extend the chat template with a sequence of messages.

    save
    Save prompt to file.

    pretty_repr
    Human-readable representation.

    Inherited from BaseChatPromptTemplate

    Attributes

    lc_attributes: dict

    Methods

    format
    Format the chat template into a string.

    aformat
    Async format the chat template into a string.

    format_prompt
    Format prompt.

    aformat_prompt
    Async format prompt.

    format_messages
    Format kwargs into a list of messages.

    aformat_messages
    Async format kwargs into a list of messages.

    pretty_repr
    Human-readable representation.

    pretty_print
    Print a human-readable representation.

    Inherited from BasePromptTemplate

    Attributes

    input_variables: list[str]
    A list of the names of the variables whose values are required as inputs to the prompt.

    optional_variables: list[str]
    A list of the names of the variables for placeholder or MessagePlaceholder that are optional.

    input_types: builtins.dict[str, Any]
    A dictionary of the types of the variables the prompt template expects.

    output_parser: BaseOutputParser | None
    How to parse the output of calling an LLM on this formatted prompt.

    partial_variables: Mapping[str, Any]
    A dictionary of the partial variables the prompt template carries.

    metadata: builtins.dict[str, Any] | None
    Metadata to be used for tracing.

    tags: list[str] | None
    Tags to be used for tracing.

    model_config

    OutputType: Any
    Return the output type of the prompt.

    Methods

    validate_variable_names
    Validate variable names do not include restricted names.

    is_lc_serializable
    Return True as this class is serializable.

    get_input_schema
    Get the input schema for the prompt.

    invoke
    Invoke the prompt.

    ainvoke
    Async invoke the prompt.

    format_prompt
    Create PromptValue.

    aformat_prompt
    Async create PromptValue.

    partial
    Return a partial of the prompt template.

    format
    Format the prompt with the inputs.

    aformat
    Async format the prompt with the inputs.

    dict
    Return dictionary representation of prompt.

    save
    Save the prompt.

    Inherited from RunnableSerializable

    Attributes

    name: str | None
    The name of the Runnable.

    model_config

    Methods

    to_json
    Serialize the Runnable to JSON.

    configurable_fields
    Configure particular Runnable fields at runtime.

    configurable_alternatives
    Configure alternatives for Runnable objects that can be set at runtime.

    Inherited from Serializable

    Attributes

    lc_secrets: dict[str, str]
    A map of constructor argument names to secret ids.

    lc_attributes: dict
    List of attribute names that should be included in the serialized kwargs.

    model_config

    Methods

    is_lc_serializable
    Is this class serializable?

    lc_id
    Return a unique identifier for this class for serialization purposes.

    to_json
    Serialize the object to JSON.

    to_json_not_implemented
    Serialize a "not implemented" object.

    Inherited from Runnable

    Attributes

    name: str | None
    The name of the Runnable. Used for debugging and tracing.

    InputType: type[Input]
    Input type.

    OutputType: type[Output]
    Output type.

    input_schema: type[BaseModel]
    The type of input this Runnable accepts specified as a Pydantic model.

    output_schema: type[BaseModel]
    Output schema.

    config_specs: list[ConfigurableFieldSpec]
    List configurable fields for this Runnable.

    Methods

    get_name
    Get the name of the Runnable.

    get_input_schema
    Get a Pydantic model that can be used to validate input to the Runnable.

    get_input_jsonschema
    Get a JSON schema that represents the input to the Runnable.

    get_output_schema
    Get a Pydantic model that can be used to validate output to the Runnable.

    get_output_jsonschema
    Get a JSON schema that represents the output of the Runnable.

    config_schema
    The type of config this Runnable accepts specified as a Pydantic model.

    get_config_jsonschema
    Get a JSON schema that represents the config of the Runnable.

    get_graph
    Return a graph representation of this Runnable.

    get_prompts
    Return a list of prompts used by this Runnable.

    pick
    Pick keys from the output dict of this Runnable.

    assign
    Assigns new fields to the dict output of this Runnable.

    invoke
    Transform a single input into an output.

    ainvoke
    Transform a single input into an output.

    batch
    Default implementation runs invoke in parallel using a thread pool executor.

    batch_as_completed
    Run invoke in parallel on a list of inputs.

    abatch
    Default implementation runs ainvoke in parallel using asyncio.gather.

    abatch_as_completed
    Run ainvoke in parallel on a list of inputs.

    stream
    Default implementation of stream, which calls invoke.

    astream
    Default implementation of astream, which calls ainvoke.

    astream_log
    Stream all output from a Runnable, as reported to the callback system.

    astream_events
    Generate a stream of events.

    transform
    Transform inputs to outputs.

    atransform
    Transform inputs to outputs.

    bind
    Bind arguments to a Runnable, returning a new Runnable.

    with_config
    Bind config to a Runnable, returning a new Runnable.

    with_listeners
    Bind lifecycle listeners to a Runnable, returning a new Runnable.

    with_alisteners
    Bind async lifecycle listeners to a Runnable.

    with_types
    Bind input and output types to a Runnable, returning a new Runnable.

    with_retry
    Create a new Runnable that retries the original Runnable on exceptions.

    map
    Return a new Runnable that maps a list of inputs to a list of outputs.

    with_fallbacks
    Add fallbacks to a Runnable, returning a new Runnable.

    as_tool
    Create a BaseTool from a Runnable.
