Pull a prompt from the hub.

ownerRepoCommit: string
The name of the repo containing the prompt, as well as an optional commit hash separated by a slash.
Optional options: {
  Optional apiKey?: string
  LangSmith API key to use when pulling the prompt.
  Optional apiUrl?: string
  LangSmith API URL to use when pulling the prompt.
  Optional includeModel?: boolean
  Whether to also instantiate and attach a model instance to the prompt, if the prompt has associated model metadata. If set to true, invoking the resulting pulled prompt will also invoke the instantiated model. For non-OpenAI models, you must also set "modelClass" to the correct class of the model (see the examples below).
  Optional modelClass?: new (...args: any[]) => BaseLanguageModel
  If includeModel is true, the class of the model to instantiate. Required for non-OpenAI models. If you are running in Node or another environment that supports dynamic imports, you may instead import this function from "langchain/hub/node" and pass "includeModel: true" without specifying this parameter.
  Optional secrets?: Record<string, string>
  A map of secrets to use when loading, e.g. {'OPENAI_API_KEY': 'sk-...'}. If a secret is not found in the map, it will be loaded from the environment if secretsFromEnv is true. Should only be needed when includeModel is true.
  Optional secretsFromEnv?: boolean
  Whether to load secrets from environment variables. Use with caution and only with trusted prompts.
}
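A minimal sketch of the basic call, assuming a prompt named "myorg/my-prompt" with a single "question" input variable and a chat prompt type; the prompt name, type argument, and input variable are placeholders, not values from this page.

```typescript
import { pull } from "langchain/hub";
import type { ChatPromptTemplate } from "@langchain/core/prompts";

// Pull the latest version of the prompt. apiKey and apiUrl are optional and
// shown only for illustration; the placeholder env variable is an assumption.
const prompt = await pull<ChatPromptTemplate>("myorg/my-prompt", {
  apiKey: process.env.LANGSMITH_API_KEY,
});

// The pulled prompt behaves like a normal runnable prompt template.
const formatted = await prompt.invoke({ question: "What does this prompt expect?" });
```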
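When a prompt stores model metadata, setting includeModel to true makes the pulled object also invoke the instantiated model, and non-OpenAI models additionally need modelClass. The sketch below assumes an Anthropic-backed prompt; the prompt name, the ChatAnthropic class choice, the ANTHROPIC_API_KEY secret name, and the "question" variable are illustrative assumptions, not values from this page.

```typescript
import { pull } from "langchain/hub";
import { ChatAnthropic } from "@langchain/anthropic";

// For non-OpenAI models the model class must be passed explicitly, unless you
// import pull from "langchain/hub/node", which can resolve it via dynamic import.
const chain = await pull("myorg/my-anthropic-prompt", {
  includeModel: true,
  modelClass: ChatAnthropic,
  // Secrets referenced by the stored model config can be supplied directly...
  secrets: { ANTHROPIC_API_KEY: "..." },
  // ...or looked up from environment variables for prompts you trust.
  secretsFromEnv: false,
});

// Invoking the pulled object formats the prompt and then calls the model.
const response = await chain.invoke({ question: "What is LangSmith?" });
```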