Pull a prompt and return it as a LangChain PromptTemplate.
This method requires `langchain-core`.
```python
pull_prompt(
    self,
    prompt_identifier: str,
    *,
    include_model: bool | None = False,
    secrets: dict[str, str] | None = None,
    secrets_from_env: bool = False,
    skip_cache: bool = False,
) -> Any
```

Updated to take arguments `secrets` and `secrets_from_env`, which default to
`None` and `False`, respectively.
By default, secrets needed to initialize a pulled object are no longer
read from environment variables. This is relevant when
`include_model=True`. For example, loading an OpenAI model requires an
`OPENAI_API_KEY`. Previously this was read from environment variables by
default; now you must pass `secrets={"OPENAI_API_KEY": "sk-..."}` or set
`secrets_from_env=True`. `secrets_from_env` should only be used when
pulling trusted prompts.
These updates were made to remediate vulnerability
GHSA-c67j-w6g6-q2cm in the langchain-core package, which this method
(but not the entire langsmith package) depends on.
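As a sketch of the recommended pattern, you can assemble the `secrets` mapping explicitly instead of enabling `secrets_from_env`, so only the named keys ever reach the pulled object. The `collect_secrets` helper and the `"my-prompt"` identifier below are hypothetical, not part of the langsmith API:

```python
import os


def collect_secrets(names: list[str]) -> dict[str, str]:
    """Gather only the named secrets from the environment, failing
    loudly if any are missing (hypothetical helper, not part of langsmith)."""
    missing = [name for name in names if name not in os.environ]
    if missing:
        raise KeyError(f"Missing required secrets: {missing}")
    return {name: os.environ[name] for name in names}


# Usage against the LangSmith client (requires langsmith and a valid API key):
# from langsmith import Client
# client = Client()
# prompt = client.pull_prompt(
#     "my-prompt",  # hypothetical prompt identifier
#     include_model=True,
#     secrets=collect_secrets(["OPENAI_API_KEY"]),
# )
```

Passing an explicit mapping keeps untrusted prompts from reading arbitrary environment variables, which is the exposure `secrets_from_env=False` is meant to close.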
| Name | Type | Description |
|---|---|---|
| `prompt_identifier`* | `str` | The identifier of the prompt. |
| `include_model` | `bool \| None` | Default: `False`. Whether to include the model information in the prompt data. |
| `secrets` | `dict[str, str] \| None` | Default: `None`. A map of secrets to use when loading, e.g. `{"OPENAI_API_KEY": "sk-..."}`. If a secret is not found in the map, it will be loaded from the environment if `secrets_from_env=True`. |
| `secrets_from_env` | `bool` | Default: `False`. Whether to load secrets from the environment. SECURITY NOTE: should only be set to `True` when pulling trusted prompts. |
| `skip_cache` | `bool` | Default: `False`. Whether to skip the prompt cache. |