LangChain Reference
langchain-core.prompts.prompt.PromptTemplate
Class · Since v0.1

    PromptTemplate

    Prompt template for a language model.

    A prompt template consists of a string template. It accepts a set of parameters from the user that can be used to generate a prompt for a language model.

    The template can be formatted using either f-strings (default), jinja2, or mustache syntax.
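For illustration, here is the same prompt written in each of the three syntaxes, with the f-string variant rendered via plain `str.format` (a stdlib-only sketch, not the library's own code):

```python
# The same prompt in each supported syntax (illustrative sketch only):
f_string_tmpl = "Tell me a joke about {topic}"      # f-string (default)
jinja2_tmpl = "Tell me a joke about {{ topic }}"    # jinja2
mustache_tmpl = "Tell me a joke about {{topic}}"    # mustache

# f-string templates render via plain str.format, which substitutes
# values but cannot execute arbitrary expressions:
rendered = f_string_tmpl.format(topic="bears")
print(rendered)
```

That inability to execute expressions is why f-strings are the recommended default.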

    Security

Prefer template_format='f-string' over template_format='jinja2', or make sure to NEVER accept jinja2 templates from untrusted sources, as they may lead to arbitrary Python code execution.

As of LangChain 0.0.329, Jinja2 templates are rendered using Jinja2's SandboxedEnvironment by default. This sandboxing should be treated as a best-effort approach rather than a guarantee of security, as it is an opt-out rather than an opt-in mechanism.

Despite the sandboxing, we recommend never using jinja2 templates from untrusted sources.

    PromptTemplate(
        self,
        *args: Any = (),
        **kwargs: Any = {},
    )

    Bases

    StringPromptTemplate

    Example:

    from langchain_core.prompts import PromptTemplate
    
    # Instantiation using from_template (recommended)
    prompt = PromptTemplate.from_template("Say {foo}")
    prompt.format(foo="bar")
    
    # Instantiation using initializer
    prompt = PromptTemplate(template="Say {foo}")

    Used in Docs

    • AI21LLM integration
    • Aim integrations
    • AirbyteLoader integration
    • Aleph Alpha integration
    • Alibaba cloud pai eas integration
    (76 more not shown)

    Attributes

    attribute
    lc_attributes: dict[str, Any]
    attribute
    template: str

    The prompt template.

    attribute
    template_format: PromptTemplateFormat

    The format of the prompt template.

    Options are: 'f-string', 'mustache', 'jinja2'.

    attribute
    validate_template: bool

Whether to attempt to validate the template.

    Methods

    method
    get_lc_namespace

    Get the namespace of the LangChain object.

    method
    pre_init_validation

    Check that template and input variables are consistent.

    method
    get_input_schema

    Get the input schema for the prompt.

    method
    format

    Format the prompt with the inputs.

    method
    from_examples

    Take examples in list format with prefix and suffix to create a prompt.

    Intended to be used as a way to dynamically create a prompt from examples.
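The assembly this describes can be sketched with the stdlib alone (a hypothetical helper; the real classmethod's signature and defaults may differ):

```python
# Hypothetical sketch of joining a prefix, examples, and a suffix into
# one template string, as the from_examples description implies.
def build_template(examples, suffix, prefix="", example_separator="\n\n"):
    return example_separator.join([prefix, *examples, suffix])

examples = ["Input: happy\nOutput: sad", "Input: tall\nOutput: short"]
template = build_template(
    examples,
    suffix="Input: {adjective}\nOutput:",
    prefix="Give the antonym of every input.",
)
print(template)
```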

    method
    from_file

    Load a prompt from a file.

    method
    from_template

    Load a prompt template from a template.
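For f-string templates, the input-variable inference this classmethod performs can be sketched with the stdlib `string.Formatter` (an illustrative sketch, not the library's implementation):

```python
# Stdlib sketch of inferring input variables from an f-string template:
# pull the field names out of the format string.
from string import Formatter

def infer_input_variables(template: str) -> list[str]:
    return [field for _, field, _, _ in Formatter().parse(template) if field]

print(infer_input_variables("Say {foo} to {bar}"))
```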

    Security

Prefer template_format='f-string' over template_format='jinja2', or make sure to NEVER accept jinja2 templates from untrusted sources, as they may lead to arbitrary Python code execution.

As of LangChain 0.0.329, Jinja2 templates are rendered using Jinja2's SandboxedEnvironment by default. This sandboxing should be treated as a best-effort approach rather than a guarantee of security, as it is an opt-out rather than an opt-in mechanism.

Despite the sandboxing, we recommend never using jinja2 templates from untrusted sources.

Inherited from StringPromptTemplate

    Methods

format_prompt — Format the prompt with the inputs.
aformat_prompt — Async format the prompt with the inputs.
pretty_repr — Get a pretty representation of the prompt.
pretty_print — Print a pretty representation of the prompt.

Inherited from BasePromptTemplate

    Attributes

input_variables: list[str] — A list of the names of the variables whose values are required as inputs to the prompt.

optional_variables: list[str] — A list of the names of the variables for placeholder or MessagesPlaceholder that are optional.

input_types: builtins.dict[str, Any] — A dictionary of the types of the variables the prompt template expects.

output_parser: BaseOutputParser | None — How to parse the output of calling an LLM on this formatted prompt.

partial_variables: Mapping[str, Any] — A dictionary of the partial variables the prompt template carries.

metadata: builtins.dict[str, Any] | None — Metadata to be used for tracing.

tags: list[str] | None — Tags to be used for tracing.

model_config

OutputType: Any — Return the output type of the prompt.

    Methods

validate_variable_names — Validate variable names do not include restricted names.
is_lc_serializable — Return True as this class is serializable.
invoke — Invoke the prompt.
ainvoke — Async invoke the prompt.
format_prompt — Create a PromptValue.
aformat_prompt — Async create a PromptValue.
partial — Return a partial of the prompt template.
aformat — Async format the prompt with the inputs.
dict — Return a dictionary representation of the prompt.
save — Save the prompt.
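The partial method listed above can be sketched in plain Python: fix some variables now and supply the rest at format time (a hypothetical helper, not the library's implementation):

```python
# Hypothetical sketch of partial application on an f-string template.
def make_partial(template: str, **fixed):
    def fmt(**kwargs):
        # Merge the pre-filled variables with those supplied at call time.
        return template.format(**fixed, **kwargs)
    return fmt

say_hi = make_partial("{greeting}, {name}!", greeting="Hello")
print(say_hi(name="Ada"))
```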

Inherited from RunnableSerializable

    Attributes

name: str | None — The name of the Runnable.

model_config

Methods

to_json — Serialize the Runnable to JSON.
configurable_fields — Configure particular Runnable fields at runtime.
configurable_alternatives — Configure alternatives for Runnable objects that can be set at runtime.

Inherited from Serializable

    Attributes

lc_secrets: dict[str, str] — A map of constructor argument names to secret ids.

model_config

Methods

is_lc_serializable — Is this class serializable?
lc_id — Return a unique identifier for this class for serialization purposes.
to_json — Serialize the object to JSON.
to_json_not_implemented — Serialize a "not implemented" object.

Inherited from Runnable

    Attributes

name: str | None — The name of the Runnable. Used for debugging and tracing.

InputType: type[Input] — Input type.

OutputType: type[Output] — Output type.

input_schema: type[BaseModel] — The type of input this Runnable accepts specified as a Pydantic model.

output_schema: type[BaseModel] — Output schema.

config_specs: list[ConfigurableFieldSpec] — List configurable fields for this Runnable.

Methods

get_name — Get the name of the Runnable.
get_input_jsonschema — Get a JSON schema that represents the input to the Runnable.
get_output_schema — Get a Pydantic model that can be used to validate output to the Runnable.
get_output_jsonschema — Get a JSON schema that represents the output of the Runnable.
config_schema — The type of config this Runnable accepts specified as a Pydantic model.
get_config_jsonschema — Get a JSON schema that represents the config of the Runnable.
get_graph — Return a graph representation of this Runnable.
get_prompts — Return a list of prompts used by this Runnable.
pipe — Pipe Runnable objects.
pick — Pick keys from the output dict of this Runnable.
assign — Assigns new fields to the dict output of this Runnable.
invoke — Transform a single input into an output.
ainvoke — Async transform a single input into an output.
batch — Default implementation runs invoke in parallel using a thread pool executor.
batch_as_completed — Run invoke in parallel on a list of inputs.
abatch — Default implementation runs ainvoke in parallel using asyncio.gather.
abatch_as_completed — Run ainvoke in parallel on a list of inputs.
stream — Default implementation of stream, which calls invoke.
astream — Default implementation of astream, which calls ainvoke.
astream_log — Stream all output from a Runnable, as reported to the callback system.
astream_events — Generate a stream of events.
transform — Transform inputs to outputs.
atransform — Async transform inputs to outputs.
bind — Bind arguments to a Runnable, returning a new Runnable.
with_config — Bind config to a Runnable, returning a new Runnable.
with_listeners — Bind lifecycle listeners to a Runnable, returning a new Runnable.
with_alisteners — Bind async lifecycle listeners to a Runnable.
with_types — Bind input and output types to a Runnable, returning a new Runnable.
with_retry — Create a new Runnable that retries the original Runnable on exceptions.
map — Return a new Runnable that maps a list of inputs to a list of outputs.
with_fallbacks — Add fallbacks to a Runnable, returning a new Runnable.
as_tool — Create a BaseTool from a Runnable.
