langchain_core.prompts.loading
Module · Since v0.1

    loading

    Load prompts.

    Attributes

    attribute
    URL_BASE: str
    attribute
    logger
    attribute
    type_to_loader_dict: dict[str, Callable[[dict], BasePromptTemplate]]

    Functions

    function
    load_prompt_from_config

Load a prompt from a config dict.

    function
    load_prompt

Unified method for loading a prompt from LangChainHub or the local filesystem.
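
    A minimal sketch of both entry points, assuming a prompt saved to a local JSON file (the path and template text here are illustrative):

    from langchain_core.prompts import PromptTemplate, load_prompt
    from langchain_core.prompts.loading import load_prompt_from_config
    
    # Save a template to disk, then load it back by path.
    PromptTemplate.from_template("Tell me a {adjective} joke.").save("joke_prompt.json")
    prompt = load_prompt("joke_prompt.json")
    
    # Or build one directly from a config dict; the "_type" key selects
    # the loader from type_to_loader_dict.
    config = {
        "_type": "prompt",
        "input_variables": ["adjective"],
        "template": "Tell me a {adjective} joke.",
    }
    prompt = load_prompt_from_config(config)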

    Classes

    class
    StrOutputParser

    Extract text content from model outputs as a string.

    Converts model outputs (such as AIMessage or AIMessageChunk objects) into plain text strings. It's the simplest output parser and is useful when you need string responses for downstream processing, display, or storage.

    Supports streaming, yielding text chunks as they're generated by the model.
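
    A quick sketch (the message content is illustrative): the parser turns a message object into its text, and is typically composed after a chat model so the chain returns plain strings.

    from langchain_core.messages import AIMessage
    from langchain_core.output_parsers import StrOutputParser
    
    parser = StrOutputParser()
    parser.invoke(AIMessage(content="Hello!"))  # -> "Hello!"
    
    # Composed in a chain, e.g.: chain = model | parser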

    class
    BasePromptTemplate

    Base class for all prompt templates, returning a prompt.

    class
    ChatPromptTemplate

    Prompt template for chat models.

    Use to create flexible templated prompts for chat models.

    Example
    from langchain_core.prompts import ChatPromptTemplate
    
    template = ChatPromptTemplate(
        [
            ("system", "You are a helpful AI bot. Your name is {name}."),
            ("human", "Hello, how are you doing?"),
            ("ai", "I'm doing well, thanks!"),
            ("human", "{user_input}"),
        ]
    )
    
    prompt_value = template.invoke(
        {
            "name": "Bob",
            "user_input": "What is your name?",
        }
    )
    # Output:
    # ChatPromptValue(
    #    messages=[
    #        SystemMessage(content='You are a helpful AI bot. Your name is Bob.'),
    #        HumanMessage(content='Hello, how are you doing?'),
    #        AIMessage(content="I'm doing well, thanks!"),
    #        HumanMessage(content='What is your name?')
    #    ]
    # )
    Messages Placeholder
    # In addition to Human/AI/Tool/Function messages,
    # you can initialize the template with a MessagesPlaceholder
    # either using the class directly or with the shorthand tuple syntax:
    
    template = ChatPromptTemplate(
        [
            ("system", "You are a helpful AI bot."),
            # Means the template will receive an optional list of messages under
            # the "conversation" key
            ("placeholder", "{conversation}"),
            # Equivalently:
            # MessagesPlaceholder(variable_name="conversation", optional=True)
        ]
    )
    
    prompt_value = template.invoke(
        {
            "conversation": [
                ("human", "Hi!"),
                ("ai", "How can I assist you today?"),
                ("human", "Can you make me an ice cream sundae?"),
                ("ai", "No."),
            ]
        }
    )
    
    # Output:
    # ChatPromptValue(
    #    messages=[
    #        SystemMessage(content='You are a helpful AI bot.'),
    #        HumanMessage(content='Hi!'),
    #        AIMessage(content='How can I assist you today?'),
    #        HumanMessage(content='Can you make me an ice cream sundae?'),
    #        AIMessage(content='No.'),
    #    ]
    # )
    Single-variable template

    If your prompt has only a single input variable (i.e., one instance of '{variable_name}'), and you invoke the template with a non-dict object, the prompt template will inject the provided argument into that variable location.

    from langchain_core.prompts import ChatPromptTemplate
    
    template = ChatPromptTemplate(
        [
            ("system", "You are a helpful AI bot. Your name is Carl."),
            ("human", "{user_input}"),
        ]
    )
    
    prompt_value = template.invoke("Hello, there!")
    # Equivalent to
    # prompt_value = template.invoke({"user_input": "Hello, there!"})
    
    # Output:
    #  ChatPromptValue(
    #     messages=[
    #         SystemMessage(content='You are a helpful AI bot. Your name is Carl.'),
    #         HumanMessage(content='Hello, there!'),
    #     ]
    # )
    class
    FewShotPromptTemplate

    Prompt template that contains few-shot examples.
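
    A minimal sketch, with illustrative example data and variable names:

    from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate
    
    # Template used to render each individual example
    example_prompt = PromptTemplate.from_template("Input: {word}\nOutput: {antonym}")
    
    prompt = FewShotPromptTemplate(
        examples=[
            {"word": "happy", "antonym": "sad"},
            {"word": "tall", "antonym": "short"},
        ],
        example_prompt=example_prompt,
        prefix="Give the antonym of every input.",
        suffix="Input: {word}\nOutput:",
        input_variables=["word"],
    )
    
    print(prompt.format(word="big"))
    # Give the antonym of every input.
    #
    # Input: happy
    # Output: sad
    #
    # Input: tall
    # Output: short
    #
    # Input: big
    # Output: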

    class
    PromptTemplate

    Prompt template for a language model.

    A prompt template consists of a string template. It accepts a set of parameters from the user that can be used to generate a prompt for a language model.

    The template can be formatted using either f-strings (default), jinja2, or mustache syntax.
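
    For example (a sketch; the template strings are illustrative), the same template in the default f-string syntax and in mustache syntax:

    from langchain_core.prompts import PromptTemplate
    
    # f-string syntax (the default)
    prompt = PromptTemplate.from_template("Tell me a joke about {topic}")
    prompt.invoke({"topic": "cats"})
    
    # mustache syntax, selected via template_format
    prompt = PromptTemplate.from_template(
        "Tell me a joke about {{topic}}", template_format="mustache"
    )
    prompt.invoke({"topic": "cats"})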

    Security

    Prefer using template_format='f-string' instead of template_format='jinja2', or make sure to NEVER accept jinja2 templates from untrusted sources as they may lead to arbitrary Python code execution.

    As of LangChain 0.0.329, Jinja2 templates are rendered using Jinja2's SandboxedEnvironment by default. This sandboxing should be treated as a best-effort measure rather than a guarantee of security, as it is an opt-out rather than an opt-in approach.

    Despite the sandboxing, we recommend never using jinja2 templates from untrusted sources.
