langsmith.client.Client.pull_prompt
Method · Since v0.1

pull_prompt

Pull a prompt and return it as a LangChain PromptTemplate.

This method requires langchain-core.

pull_prompt(
  self,
  prompt_identifier: str,
  *,
  include_model: bool | None = False,
  secrets: dict[str, str] | None = None,
  secrets_from_env: bool = False,
  skip_cache: bool = False
) -> Any

Updated to accept the arguments secrets and secrets_from_env, which default to None and False, respectively.

By default, secrets needed to initialize a pulled object are no longer read from environment variables. This matters when include_model=True: for example, loading an OpenAI model requires an OPENAI_API_KEY. Previously this key was read from the environment by default; now you must either pass secrets={"OPENAI_API_KEY": "sk-..."} explicitly or set secrets_from_env=True. secrets_from_env should only be used when pulling trusted prompts.

These updates were made to remediate vulnerability GHSA-c67j-w6g6-q2cm in the langchain-core package which this method (but not the entire langsmith package) depends on.
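The resolution order described above (explicit secrets map first, environment variables only when opted in) can be illustrated with a small sketch. This is a simplified, hypothetical helper for illustration only, not the library's actual implementation:

```python
import os


def resolve_secrets(required, secrets=None, secrets_from_env=False):
    """Sketch of the secret-resolution order: an explicitly passed
    secrets map always wins; the environment is consulted only when
    secrets_from_env=True; otherwise the secret stays unresolved."""
    resolved = {}
    for key in required:
        if secrets and key in secrets:
            resolved[key] = secrets[key]          # explicit map takes priority
        elif secrets_from_env and key in os.environ:
            resolved[key] = os.environ[key]       # env fallback is opt-in only
    return resolved
```

Under this scheme, a prompt pulled with include_model=True and no secrets argument will fail to find its API key even if it is set in the environment, which is the behavior change the vulnerability fix introduced.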

Parameters

prompt_identifier (str, required)

The identifier of the prompt.

include_model (bool | None, default: False)

Whether to include the model information in the prompt data.

secrets (dict[str, str] | None, default: None)

A map of secrets to use when loading, e.g. {'OPENAI_API_KEY': 'sk-...'}. If a secret is not found in the map, it will be loaded from the environment if secrets_from_env is True. Should only be needed when include_model=True.

secrets_from_env (bool, default: False)

Whether to load secrets from the environment. SECURITY NOTE: should only be set to True when pulling trusted prompts.

skip_cache (bool, default: False)

Whether to skip the prompt cache.
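A typical call that pulls a prompt together with its model configuration, passing the required key explicitly rather than relying on the environment, might look like the following. The wrapper function, prompt identifier, and key value are all hypothetical placeholders; the keyword arguments match the signature above:

```python
from typing import Any


def pull_openai_prompt(client: Any, identifier: str) -> Any:
    """Pull a prompt including its model config, supplying the OpenAI
    key explicitly instead of reading it from environment variables.
    `client` is expected to be a langsmith.Client instance."""
    return client.pull_prompt(
        identifier,
        include_model=True,
        secrets={"OPENAI_API_KEY": "sk-..."},  # placeholder key, not a real secret
    )
```

Passing the key via secrets keeps the call safe even when the prompt being pulled is untrusted, since secrets_from_env stays at its default of False.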
