    langchain_core.prompts.base.BasePromptTemplate
    Class · Since v0.1

    BasePromptTemplate

    Base class for all prompt templates, returning a prompt.

    BasePromptTemplate(
        self,
        *args: Any = (),
        **kwargs: Any = {},
    )

    Bases

    RunnableSerializable[dict, PromptValue], ABC, Generic[FormatOutputType]
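
    BasePromptTemplate is abstract, so it is used through a concrete subclass. A minimal sketch, using PromptTemplate from langchain_core with an illustrative template string:

    from langchain_core.prompts import PromptTemplate

    # PromptTemplate is a concrete subclass of BasePromptTemplate.
    prompt = PromptTemplate.from_template("Tell me a {adjective} joke about {topic}.")

    # Every BasePromptTemplate is a Runnable mapping a dict of inputs to a PromptValue.
    value = prompt.invoke({"adjective": "short", "topic": "cats"})
    print(value.to_string())  # "Tell me a short joke about cats."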

    Attributes

    attribute
    input_variables: list[str]

    A list of the names of the variables whose values are required as inputs to the prompt.

    attribute
    optional_variables: list[str]

    A list of the names of the variables used by placeholder or MessagesPlaceholder entries that are optional.

    These variables are inferred automatically from the prompt, so the user need not provide them.

    attribute
    input_types: builtins.dict[str, Any]

    A dictionary of the types of the variables the prompt template expects.

    If not provided, all variables are assumed to be strings.

    attribute
    output_parser: BaseOutputParser | None

    How to parse the output of calling an LLM on this formatted prompt.

    attribute
    partial_variables: Mapping[str, Any]

    A dictionary of the partial variables the prompt template carries.

    Partial variables populate the template so that you don't need to pass them in every time you call the prompt.

    attribute
    metadata: builtins.dict[str, Any] | None

    Metadata to be used for tracing.

    attribute
    tags: list[str] | None

    Tags to be used for tracing.

    attribute
    model_config
    attribute
    OutputType: Any

    Return the output type of the prompt.
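
    A short sketch of how these attributes surface on a concrete template; the template string and the values below are illustrative:

    from langchain_core.prompts import PromptTemplate

    prompt = PromptTemplate.from_template("Summarize {text} in {language}.")

    print(prompt.input_variables)     # ['language', 'text']: required inputs
    print(prompt.optional_variables)  # []: none inferred for a plain string template
    print(prompt.input_types)         # {}: unspecified, so inputs are treated as strings
    print(prompt.partial_variables)   # {}: nothing pre-filled yet

    # Pre-filling a variable via partial() moves it out of input_variables.
    partial_prompt = prompt.partial(language="French")
    print(partial_prompt.input_variables)    # ['text']
    print(partial_prompt.partial_variables)  # {'language': 'French'}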

    Methods

    method
    validate_variable_names

    Validate variable names do not include restricted names.

    method
    get_lc_namespace

    Get the namespace of the LangChain object.

    method
    is_lc_serializable

    Return True as this class is serializable.

    method
    get_input_schema

    Get the input schema for the prompt.

    method
    invoke

    Invoke the prompt.

    method
    ainvoke

    Async invoke the prompt.

    method
    format_prompt

    Create PromptValue.

    method
    aformat_prompt

    Async create PromptValue.

    method
    partial

    Return a partial of the prompt template.

    method
    format

    Format the prompt with the inputs.

    method
    aformat

    Async format the prompt with the inputs.

    method
    dict

    Return dictionary representation of prompt.

    method
    save

    Save the prompt.
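
    A brief usage sketch of the formatting methods, again through the concrete PromptTemplate subclass; the inputs and the file path are illustrative:

    from langchain_core.prompts import PromptTemplate

    prompt = PromptTemplate.from_template("Translate '{text}' to {language}.")

    # format() returns the filled-in output (a plain string for this subclass).
    print(prompt.format(text="hello", language="German"))

    # format_prompt() and invoke() return a PromptValue, which downstream
    # components can consume as a string or as chat messages.
    value = prompt.format_prompt(text="hello", language="German")
    print(value.to_string())
    print(value.to_messages())

    # partial() pre-binds some variables and returns a new template.
    to_german = prompt.partial(language="German")
    print(to_german.format(text="good night"))

    # save() writes the template to disk; the format is inferred from the suffix.
    prompt.save("translate_prompt.json")  # illustrative path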

    Inherited from RunnableSerializable

    Attributes

    name: str
    The name of the Runnable, used for debugging and tracing.

    Methods

    to_json
    Serialize the Runnable to a JSON-serializable representation.

    configurable_fields
    Configure particular Runnable fields at runtime.

    configurable_alternatives
    Configure alternatives for Runnable objects that can be set at runtime.
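
    A sketch of configurable_alternatives applied to a prompt template, following the runtime-configuration pattern from the LangChain docs; the "prompt" field id, the alternative keys, and the templates are illustrative choices:

    from langchain_core.prompts import PromptTemplate
    from langchain_core.runnables import ConfigurableField

    prompt = PromptTemplate.from_template(
        "Tell me a joke about {topic}"
    ).configurable_alternatives(
        ConfigurableField(id="prompt"),
        default_key="joke",
        poem=PromptTemplate.from_template("Write a short poem about {topic}"),
    )

    # Default alternative.
    print(prompt.invoke({"topic": "bears"}).to_string())

    # Select an alternative at runtime through the config.
    print(
        prompt.with_config(configurable={"prompt": "poem"})
        .invoke({"topic": "bears"})
        .to_string()
    )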

    Inherited from Serializable

    Attributes

    lc_secrets: dict[str, str]
    A map of constructor argument names to secret ids.

    lc_attributes: dict
    A list of attribute names that should be included in the serialized kwargs.

    Methods

    lc_id
    Return a unique identifier for this class for serialization purposes.

    to_json
    Serialize the Runnable to a JSON-serializable representation.

    to_json_not_implemented
    Serialize a "not implemented" object.
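
    A small sketch of the serialization hooks, assuming the dumps/loads helpers from langchain_core.load; the template and the printed values are illustrative:

    from langchain_core.load import dumps, loads
    from langchain_core.prompts import PromptTemplate

    prompt = PromptTemplate.from_template("Tell me about {topic}.")

    print(PromptTemplate.is_lc_serializable())  # True
    print(PromptTemplate.lc_id())               # e.g. ['langchain', 'prompts', 'prompt', 'PromptTemplate']
    print(prompt.to_json()["type"])             # 'constructor'

    # Round-trip through the JSON string representation.
    serialized = dumps(prompt)
    restored = loads(serialized)
    print(restored.format(topic="serialization"))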

    Inherited from Runnable

    Attributes

    name: str
    The name of the Runnable, used for debugging and tracing.

    InputType: Any
    The type of input this Runnable accepts, specified as a type annotation.

    input_schema: type[BaseModel]
    The type of input this Runnable accepts, specified as a Pydantic model.

    output_schema: type[BaseModel]
    The type of output this Runnable produces, specified as a Pydantic model.

    config_specs: list[ConfigurableFieldSpec]
    The configurable field specs for this Runnable.

    Methods

    get_name
    Get the name of the Runnable.

    get_input_jsonschema
    Get a JSON schema that represents the input to the Runnable.

    get_output_schema
    Get a Pydantic model that can be used to validate the output of the Runnable.

    get_output_jsonschema
    Get a JSON schema that represents the output of the Runnable.

    config_schema
    The type of config this Runnable accepts, specified as a Pydantic model.

    get_config_jsonschema
    Get a JSON schema that represents the config of the Runnable.

    get_graph
    Return a graph representation of this Runnable.

    get_prompts
    Return a list of prompts used by this Runnable.

    pipe
    Pipe Runnable objects, composing them into a RunnableSequence.

    pick
    Pick keys from the output dict of this Runnable.

    assign
    Merge the dict input with the output produced by the mapping argument.

    batch
    Run invoke in parallel on a list of inputs.

    batch_as_completed
    Run invoke in parallel on a list of inputs, yielding results as they complete.

    abatch
    Run ainvoke in parallel on a list of inputs.

    abatch_as_completed
    Run ainvoke in parallel on a list of inputs, yielding results as they complete.

    stream
    Stream output from the Runnable; the default implementation calls invoke.

    astream
    Async stream output from the Runnable; the default implementation calls ainvoke.

    astream_log
    Stream all output from a Runnable, as reported to the callback system.

    astream_events
    Generate a stream of events.

    transform
    Transform inputs to outputs; the default implementation buffers input and calls stream.

    atransform
    Async transform inputs to outputs; the default implementation buffers input and calls astream.

    bind
    Bind arguments to a Runnable, returning a new Runnable.

    with_config
    Bind config to a Runnable, returning a new Runnable.

    with_listeners
    Bind lifecycle listeners to a Runnable, returning a new Runnable.

    with_alisteners
    Bind async lifecycle listeners to a Runnable.

    with_types
    Bind input and output types to a Runnable, returning a new Runnable.

    with_retry
    Create a new Runnable that retries the original Runnable on exceptions.

    map
    Return a new Runnable that maps a list of inputs to a list of outputs by calling invoke on each.

    with_fallbacks
    Add fallbacks to a Runnable, returning a new Runnable.

    as_tool
    Create a BaseTool from a Runnable.
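
    A brief sketch of the inherited Runnable surface on a prompt template: composition, batching, and retries. Only langchain_core primitives are used here; the lambda stands in for a model and is purely illustrative:

    from langchain_core.prompts import PromptTemplate
    from langchain_core.runnables import RunnableLambda

    prompt = PromptTemplate.from_template("Q: {question}\nA:")
    fake_model = RunnableLambda(lambda value: value.to_string().upper())

    # pipe() / the | operator compose the prompt with the next step into a RunnableSequence.
    chain = prompt | fake_model
    print(chain.invoke({"question": "What is a PromptValue?"}))

    # batch() runs invoke over a list of inputs, in parallel where possible.
    print(prompt.batch([{"question": "one"}, {"question": "two"}]))

    # with_retry() returns a new Runnable that wraps this one with retry behavior.
    robust_prompt = prompt.with_retry(stop_after_attempt=2)
    print(robust_prompt.invoke({"question": "retries"}).to_string())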
