Async Client¶
async_client
¶
The Async LangSmith Client.
AsyncClient
¶
Async Client for interacting with the LangSmith API.
| METHOD | DESCRIPTION |
|---|---|
| `__init__` | Initialize the async client. |
| `__aenter__` | Enter the async client. |
| `__aexit__` | Exit the async client. |
| `aclose` | Close the async client. |
| `create_run` | Create a run. |
| `update_run` | Update a run. |
| `read_run` | Read a run. |
| `list_runs` | List runs from the LangSmith API. |
| `share_run` | Get a share link for a run asynchronously. |
| `run_is_shared` | Get share state for a run asynchronously. |
| `read_run_shared_link` | Retrieve the shared link for a specific run asynchronously. |
| `create_project` | Create a project. |
| `read_project` | Read a project. |
| `delete_project` | Delete a project from LangSmith. |
| `create_dataset` | Create a dataset. |
| `read_dataset` | Read a dataset. |
| `delete_dataset` | Delete a dataset. |
| `list_datasets` | List datasets. |
| `create_example` | Create an example. |
| `read_example` | Read an example. |
| `list_examples` | List examples. |
| `create_feedback` | Create feedback for a run. |
| `create_feedback_from_token` | Create feedback from a presigned token or URL. |
| `create_presigned_feedback_token` | Create a pre-signed URL to send feedback data to. |
| `read_feedback` | Read feedback. |
| `list_feedback` | List feedback. |
| `delete_feedback` | Delete a feedback by ID. |
| `list_annotation_queues` | List the annotation queues on the LangSmith API. |
| `create_annotation_queue` | Create an annotation queue on the LangSmith API. |
| `read_annotation_queue` | Read an annotation queue with the specified queue_id. |
| `update_annotation_queue` | Update an annotation queue with the specified queue_id. |
| `delete_annotation_queue` | Delete an annotation queue with the specified queue_id. |
| `add_runs_to_annotation_queue` | Add runs to an annotation queue with the specified queue_id. |
| `delete_run_from_annotation_queue` | Delete a run from an annotation queue with the specified queue_id. |
| `get_run_from_annotation_queue` | Get a run from an annotation queue at the specified index. |
| `index_dataset` | Enable dataset indexing. Examples are indexed by their inputs. |
| `sync_indexed_dataset` | Sync dataset index. |
| `similar_examples` | Retrieve the dataset examples whose inputs best match the current inputs. |
| `like_prompt` | Like a prompt. |
| `unlike_prompt` | Unlike a prompt. |
| `list_prompts` | List prompts with pagination. |
| `get_prompt` | Get a specific prompt by its identifier. |
| `create_prompt` | Create a new prompt. |
| `create_commit` | Create a commit for an existing prompt. |
| `update_prompt` | Update a prompt's metadata. |
| `delete_prompt` | Delete a prompt. |
| `pull_prompt_commit` | Pull a prompt object from the LangSmith API. |
| `list_prompt_commits` | List commits for a given prompt. |
| `pull_prompt` | Pull a prompt and return it as a LangChain PromptTemplate. |
| `push_prompt` | Push a prompt to the LangSmith API. |
__init__
¶
__init__(
api_url: str | None = None,
api_key: str | None = None,
timeout_ms: int
| tuple[int | None, int | None, int | None, int | None]
| None = None,
retry_config: Mapping[str, Any] | None = None,
web_url: str | None = None,
cache: AsyncCache | bool = False,
)
Initialize the async client.
| PARAMETER | DESCRIPTION |
|---|---|
| `api_url` | URL for the LangSmith API. TYPE: `str \| None` |
| `api_key` | API key for the LangSmith API. TYPE: `str \| None` |
| `timeout_ms` | Timeout for requests in milliseconds. TYPE: `int \| tuple[int \| None, int \| None, int \| None, int \| None] \| None` |
| `retry_config` | Retry configuration. TYPE: `Mapping[str, Any] \| None` |
| `web_url` | URL for the LangSmith web app. TYPE: `str \| None` |
| `cache` | Configuration for caching. TYPE: `AsyncCache \| bool` |
create_run
async
¶
create_run(
name: str,
inputs: dict[str, Any],
run_type: str,
*,
project_name: str | None = None,
revision_id: ID_TYPE | None = None,
**kwargs: Any,
) -> None
Create a run.
list_runs
async
¶
list_runs(
*,
project_id: ID_TYPE | Sequence[ID_TYPE] | None = None,
project_name: str | Sequence[str] | None = None,
run_type: str | None = None,
trace_id: ID_TYPE | None = None,
reference_example_id: ID_TYPE | None = None,
query: str | None = None,
filter: str | None = None,
trace_filter: str | None = None,
tree_filter: str | None = None,
is_root: bool | None = None,
parent_run_id: ID_TYPE | None = None,
start_time: datetime | None = None,
error: bool | None = None,
run_ids: Sequence[ID_TYPE] | None = None,
select: Sequence[str] | None = None,
limit: int | None = None,
**kwargs: Any,
) -> AsyncIterator[Run]
List runs from the LangSmith API.
| PARAMETER | DESCRIPTION |
|---|---|
| `project_id` | The ID(s) of the project to filter by. TYPE: `ID_TYPE \| Sequence[ID_TYPE] \| None` |
| `project_name` | The name(s) of the project to filter by. TYPE: `str \| Sequence[str] \| None` |
| `run_type` | The type of the runs to filter by. TYPE: `str \| None` |
| `trace_id` | The ID of the trace to filter by. TYPE: `ID_TYPE \| None` |
| `reference_example_id` | The ID of the reference example to filter by. TYPE: `ID_TYPE \| None` |
| `query` | The query string to filter by. TYPE: `str \| None` |
| `filter` | The filter string to filter by. TYPE: `str \| None` |
| `trace_filter` | Filter to apply to the ROOT run in the trace tree. This is meant to be used in conjunction with the regular `filter` parameter. TYPE: `str \| None` |
| `tree_filter` | Filter to apply to OTHER runs in the trace tree, including sibling and child runs. This is meant to be used in conjunction with the regular `filter` parameter. TYPE: `str \| None` |
| `is_root` | Whether to filter by root runs. TYPE: `bool \| None` |
| `parent_run_id` | The ID of the parent run to filter by. TYPE: `ID_TYPE \| None` |
| `start_time` | The start time to filter by. TYPE: `datetime \| None` |
| `error` | Whether to filter by error status. TYPE: `bool \| None` |
| `run_ids` | The IDs of the runs to filter by. TYPE: `Sequence[ID_TYPE] \| None` |
| `select` | The fields to select. TYPE: `Sequence[str] \| None` |
| `limit` | The maximum number of runs to return. TYPE: `int \| None` |
| `**kwargs` | Additional keyword arguments. TYPE: `Any` |

| YIELDS | DESCRIPTION |
|---|---|
| `AsyncIterator[Run]` | The runs. |
Examples:
# List all runs in a project
project_runs = client.list_runs(project_name="<your_project>")
# List LLM and Chat runs in the last 24 hours
todays_llm_runs = client.list_runs(
project_name="<your_project>",
start_time=datetime.now() - timedelta(days=1),
run_type="llm",
)
# List root traces in a project
root_runs = client.list_runs(project_name="<your_project>", is_root=True)
# List runs without errors
correct_runs = client.list_runs(project_name="<your_project>", error=False)
# List runs and only return their inputs/outputs (to speed up the query)
input_output_runs = client.list_runs(
project_name="<your_project>", select=["inputs", "outputs"]
)
# List runs by run ID
run_ids = [
"a36092d2-4ad5-4fb4-9c0d-0dba9a2ed836",
"9398e6be-964f-4aa4-8ae9-ad78cd4b7074",
]
selected_runs = client.list_runs(run_ids=run_ids)
# List all "chain" type runs that took more than 10 seconds and had
# `total_tokens` greater than 5000
chain_runs = client.list_runs(
project_name="<your_project>",
filter='and(eq(run_type, "chain"), gt(latency, 10), gt(total_tokens, 5000))',
)
# List all runs called "extractor" whose root of the trace was assigned feedback "user_score" score of 1
good_extractor_runs = client.list_runs(
project_name="<your_project>",
filter='eq(name, "extractor")',
trace_filter='and(eq(feedback_key, "user_score"), eq(feedback_score, 1))',
)
# List all runs that started after a specific timestamp and either have "error" not equal to null or a "Correctness" feedback score equal to 0
complex_runs = client.list_runs(
project_name="<your_project>",
filter='and(gt(start_time, "2023-07-15T12:34:56Z"), or(neq(error, null), and(eq(feedback_key, "Correctness"), eq(feedback_score, 0.0))))',
)
# List all runs where `tags` include "experimental" or "beta" and `latency` is greater than 2 seconds
tagged_runs = client.list_runs(
project_name="<your_project>",
filter='and(or(has(tags, "experimental"), has(tags, "beta")), gt(latency, 2))',
)
share_run
async
¶
share_run(run_id: ID_TYPE, *, share_id: ID_TYPE | None = None) -> str
Get a share link for a run asynchronously.
| PARAMETER | DESCRIPTION |
|---|---|
| `run_id` | The ID of the run to share. TYPE: `ID_TYPE` |
| `share_id` | Custom share ID. If not provided, a random UUID will be generated. TYPE: `ID_TYPE \| None` |

| RETURNS | DESCRIPTION |
|---|---|
| `str` | The URL of the shared run. |

| RAISES | DESCRIPTION |
|---|---|
| `HTTPStatusError` | If the API request fails. |
run_is_shared
async
¶
run_is_shared(run_id: ID_TYPE) -> bool
Get share state for a run asynchronously.
read_run_shared_link
async
¶
read_run_shared_link(run_id: ID_TYPE) -> str | None
Retrieve the shared link for a specific run asynchronously.
| PARAMETER | DESCRIPTION |
|---|---|
| `run_id` | The ID of the run. TYPE: `ID_TYPE` |

| RETURNS | DESCRIPTION |
|---|---|
| `str \| None` | The shared link for the run, or None if the link is not available. |

| RAISES | DESCRIPTION |
|---|---|
| `HTTPStatusError` | If the API request fails. |
create_project
async
¶
create_project(project_name: str, **kwargs: Any) -> TracerSession
Create a project.
read_project
async
¶
read_project(
project_name: str | None = None, project_id: ID_TYPE | None = None
) -> TracerSession
Read a project.
delete_project
async
¶
Delete a project from LangSmith.
create_dataset
async
¶
Create a dataset.
read_dataset
async
¶
Read a dataset.
create_example
async
¶
create_example(
inputs: dict[str, Any],
outputs: dict[str, Any] | None = None,
dataset_id: ID_TYPE | None = None,
dataset_name: str | None = None,
**kwargs: Any,
) -> Example
Create an example.
list_examples
async
¶
list_examples(
*, dataset_id: ID_TYPE | None = None, dataset_name: str | None = None, **kwargs: Any
) -> AsyncIterator[Example]
List examples.
create_feedback
async
¶
create_feedback(
run_id: ID_TYPE | None,
key: str,
score: float | None = None,
value: float | int | bool | str | dict | None = None,
comment: str | None = None,
**kwargs: Any,
) -> Feedback
Create feedback for a run.
| PARAMETER | DESCRIPTION |
|---|---|
| `run_id` | The ID of the run to provide feedback for. Can be `None`. TYPE: `ID_TYPE \| None` |
| `key` | The name of the metric or aspect this feedback is about. TYPE: `str` |
| `score` | The score to rate this run on the metric or aspect. TYPE: `float \| None` |
| `value` | The display value or non-numeric value for this feedback. TYPE: `float \| int \| bool \| str \| dict \| None` |
| `comment` | A comment about this feedback. TYPE: `str \| None` |
| `**kwargs` | Additional keyword arguments to include in the feedback data. TYPE: `Any` |

| RETURNS | DESCRIPTION |
|---|---|
| `Feedback` | The created feedback object. |

| RAISES | DESCRIPTION |
|---|---|
| `HTTPStatusError` | If the API request fails. |
create_feedback_from_token
async
¶
create_feedback_from_token(
token_or_url: str | UUID,
score: float | int | bool | None = None,
*,
value: float | int | bool | str | dict | None = None,
correction: dict | None = None,
comment: str | None = None,
metadata: dict | None = None,
) -> None
Create feedback from a presigned token or URL.
| PARAMETER | DESCRIPTION |
|---|---|
| `token_or_url` | The token or URL from which to create feedback. TYPE: `str \| UUID` |
| `score` | The score of the feedback. TYPE: `float \| int \| bool \| None` |
| `value` | The value of the feedback. TYPE: `float \| int \| bool \| str \| dict \| None` |
| `correction` | The correction of the feedback. TYPE: `dict \| None` |
| `comment` | The comment of the feedback. TYPE: `str \| None` |
| `metadata` | Additional metadata for the feedback. TYPE: `dict \| None` |

| RAISES | DESCRIPTION |
|---|---|
| `ValueError` | If the source API URL is invalid. |

| RETURNS | DESCRIPTION |
|---|---|
| `None` | This method does not return anything. |
create_presigned_feedback_token
async
¶
create_presigned_feedback_token(
run_id: ID_TYPE,
feedback_key: str,
*,
expiration: datetime | timedelta | None = None,
feedback_config: FeedbackConfig | None = None,
feedback_id: ID_TYPE | None = None,
) -> FeedbackIngestToken
Create a pre-signed URL to send feedback data to.
This is useful for giving browser-based clients a way to upload feedback data directly to LangSmith without accessing the API key.
| PARAMETER | DESCRIPTION |
|---|---|
| `run_id` | The ID of the run to provide feedback for. TYPE: `ID_TYPE` |
| `feedback_key` | The name of the metric or aspect this feedback is about. TYPE: `str` |
| `expiration` | The expiration time of the pre-signed URL. Either a datetime or a timedelta offset from now. Defaults to 3 hours. TYPE: `datetime \| timedelta \| None` |
| `feedback_config` | If creating a feedback_key for the first time, this defines how the metric should be interpreted, such as a continuous score (w/ optional bounds), or distribution over categorical values. TYPE: `FeedbackConfig \| None` |
| `feedback_id` | The ID of the feedback to create. If not provided, a new feedback will be created. TYPE: `ID_TYPE \| None` |

| RETURNS | DESCRIPTION |
|---|---|
| `FeedbackIngestToken` | The pre-signed URL for uploading feedback data. |
list_feedback
async
¶
list_feedback(
*,
run_ids: Sequence[ID_TYPE] | None = None,
feedback_key: Sequence[str] | None = None,
feedback_source_type: Sequence[FeedbackSourceType] | None = None,
limit: int | None = None,
**kwargs: Any,
) -> AsyncIterator[Feedback]
List feedback.
delete_feedback
async
¶
Delete a feedback by ID.
| PARAMETER | DESCRIPTION |
|---|---|
| `feedback_id` | The ID of the feedback to delete. TYPE: `ID_TYPE` |
list_annotation_queues
async
¶
list_annotation_queues(
*,
queue_ids: list[ID_TYPE] | None = None,
name: str | None = None,
name_contains: str | None = None,
limit: int | None = None,
) -> AsyncIterator[AnnotationQueue]
List the annotation queues on the LangSmith API.
| PARAMETER | DESCRIPTION |
|---|---|
| `queue_ids` | The IDs of the queues to filter by. TYPE: `list[ID_TYPE] \| None` |
| `name` | The name of the queue to filter by. TYPE: `str \| None` |
| `name_contains` | The substring that the queue name should contain. TYPE: `str \| None` |
| `limit` | The maximum number of queues to return. TYPE: `int \| None` |

| YIELDS | DESCRIPTION |
|---|---|
| `AsyncIterator[AnnotationQueue]` | The annotation queues. |
create_annotation_queue
async
¶
create_annotation_queue(
*, name: str, description: str | None = None, queue_id: ID_TYPE | None = None
) -> AnnotationQueue
Create an annotation queue on the LangSmith API.
| PARAMETER | DESCRIPTION |
|---|---|
| `name` | The name of the annotation queue. TYPE: `str` |
| `description` | The description of the annotation queue. TYPE: `str \| None` |
| `queue_id` | The ID of the annotation queue. TYPE: `ID_TYPE \| None` |

| RETURNS | DESCRIPTION |
|---|---|
| `AnnotationQueue` | The created annotation queue object. |
read_annotation_queue
async
¶
read_annotation_queue(queue_id: ID_TYPE) -> AnnotationQueue
Read an annotation queue with the specified queue_id.
| PARAMETER | DESCRIPTION |
|---|---|
| `queue_id` | The ID of the annotation queue to read. TYPE: `ID_TYPE` |

| RETURNS | DESCRIPTION |
|---|---|
| `AnnotationQueue` | The annotation queue object. |
update_annotation_queue
async
¶
Update an annotation queue with the specified queue_id.
delete_annotation_queue
async
¶
Delete an annotation queue with the specified queue_id.
| PARAMETER | DESCRIPTION |
|---|---|
| `queue_id` | The ID of the annotation queue to delete. TYPE: `ID_TYPE` |
add_runs_to_annotation_queue
async
¶
add_runs_to_annotation_queue(queue_id: ID_TYPE, *, run_ids: list[ID_TYPE]) -> None
delete_run_from_annotation_queue
async
¶
Delete a run from an annotation queue with the specified queue_id.
get_run_from_annotation_queue
async
¶
get_run_from_annotation_queue(
queue_id: ID_TYPE, *, index: int
) -> RunWithAnnotationQueueInfo
Get a run from an annotation queue at the specified index.
| PARAMETER | DESCRIPTION |
|---|---|
| `queue_id` | The ID of the annotation queue. TYPE: `ID_TYPE` |
| `index` | The index of the run to retrieve. TYPE: `int` |

| RETURNS | DESCRIPTION |
|---|---|
| `RunWithAnnotationQueueInfo` | The run at the specified index. |

| RAISES | DESCRIPTION |
|---|---|
| `LangSmithNotFoundError` | If the run is not found at the given index. |
| `LangSmithError` | For other API-related errors. |
index_dataset
async
¶
Enable dataset indexing. Examples are indexed by their inputs.
This enables searching for similar examples by inputs with
client.similar_examples().
| PARAMETER | DESCRIPTION |
|---|---|
| `dataset_id` | The ID of the dataset to index. TYPE: `ID_TYPE` |
| `tag` | The version of the dataset to index. If `"latest"` (default), the latest version of the dataset is indexed. TYPE: `str` |

| RAISES | DESCRIPTION |
|---|---|
| `HTTPError` | If the request fails. |
sync_indexed_dataset
async
¶
sync_indexed_dataset(*, dataset_id: ID_TYPE, **kwargs: Any) -> None
Sync dataset index.
This already happens automatically every 5 minutes, but you can call this to force a sync.
| PARAMETER | DESCRIPTION |
|---|---|
| `dataset_id` | The ID of the dataset to sync. TYPE: `ID_TYPE` |

| RAISES | DESCRIPTION |
|---|---|
| `HTTPError` | If the request fails. |
similar_examples
async
¶
similar_examples(
inputs: dict,
/,
*,
limit: int,
dataset_id: ID_TYPE,
filter: str | None = None,
**kwargs: Any,
) -> list[ExampleSearch]
Retrieve the dataset examples whose inputs best match the current inputs.
Note
Must have few-shot indexing enabled for the dataset. See client.index_dataset().
| PARAMETER | DESCRIPTION |
|---|---|
| `inputs` | The inputs to use as a search query. Must match the dataset input schema. Must be JSON serializable. TYPE: `dict` |
| `limit` | The maximum number of examples to return. TYPE: `int` |
| `dataset_id` | The ID of the dataset to search over. TYPE: `ID_TYPE` |
| `filter` | A filter string to apply to the search results. Uses the same syntax as the `filter` parameter in `list_runs()`. Only a subset of operations are supported. TYPE: `str \| None` |
| `kwargs` | Additional keyword args to pass as part of request body. TYPE: `Any` |

| RETURNS | DESCRIPTION |
|---|---|
| `list[ExampleSearch]` | List of `ExampleSearch` objects. |
Examples:
from langsmith import AsyncClient
client = AsyncClient()
await client.similar_examples(
{"question": "When would i use the runnable generator"},
limit=3,
dataset_id="...",
)
[
ExampleSearch(
inputs={
"question": "How do I cache a Chat model? What caches can I use?"
},
outputs={
"answer": "You can use LangChain's caching layer for Chat Models. This can save you money by reducing the number of API calls you make to the LLM provider, if you're often requesting the same completion multiple times, and speed up your application.\n\n```python\n\nfrom langchain.cache import InMemoryCache\nlangchain.llm_cache = InMemoryCache()\n\n# The first time, it is not yet in cache, so it should take longer\nllm.predict('Tell me a joke')\n\n```\n\nYou can also use SQLite Cache which uses a SQLite database:\n\n```python\n rm .langchain.db\n\nfrom langchain.cache import SQLiteCache\nlangchain.llm_cache = SQLiteCache(database_path=\".langchain.db\")\n\n# The first time, it is not yet in cache, so it should take longer\nllm.predict('Tell me a joke') \n```\n"
},
metadata=None,
id=UUID("b2ddd1c4-dff6-49ae-8544-f48e39053398"),
dataset_id=UUID("01b6ce0f-bfb6-4f48-bbb8-f19272135d40"),
),
ExampleSearch(
inputs={"question": "What's a runnable lambda?"},
outputs={
"answer": "A runnable lambda is an object that implements LangChain's `Runnable` interface and runs a callable (i.e., a function). Note the function must accept a single argument."
},
metadata=None,
id=UUID("f94104a7-2434-4ba7-8293-6a283f4860b4"),
dataset_id=UUID("01b6ce0f-bfb6-4f48-bbb8-f19272135d40"),
),
ExampleSearch(
inputs={"question": "Show me how to use RecursiveURLLoader"},
outputs={
"answer": 'The RecursiveURLLoader comes from the langchain.document_loaders.recursive_url_loader module. Here\'s an example of how to use it:\n\n```python\nfrom langchain.document_loaders.recursive_url_loader import RecursiveUrlLoader\n\n# Create an instance of RecursiveUrlLoader with the URL you want to load\nloader = RecursiveUrlLoader(url="https://example.com")\n\n# Load all child links from the URL page\nchild_links = loader.load()\n\n# Print the child links\nfor link in child_links:\n print(link)\n```\n\nMake sure to replace "https://example.com" with the actual URL you want to load. The load() method returns a list of child links found on the URL page. You can iterate over this list to access each child link.'
},
metadata=None,
id=UUID("0308ea70-a803-4181-a37d-39e95f138f8c"),
dataset_id=UUID("01b6ce0f-bfb6-4f48-bbb8-f19272135d40"),
),
]
like_prompt
async
¶
Like a prompt.
unlike_prompt
async
¶
Unlike a prompt.
list_prompts
async
¶
list_prompts(
*,
limit: int = 100,
offset: int = 0,
is_public: bool | None = None,
is_archived: bool | None = False,
sort_field: PromptSortField = updated_at,
sort_direction: Literal["desc", "asc"] = "desc",
query: str | None = None,
) -> ListPromptsResponse
List prompts with pagination.
| PARAMETER | DESCRIPTION |
|---|---|
| `limit` | The maximum number of prompts to return. TYPE: `int` |
| `offset` | The number of prompts to skip. TYPE: `int` |
| `is_public` | Filter prompts by whether they are public. TYPE: `bool \| None` |
| `is_archived` | Filter prompts by whether they are archived. TYPE: `bool \| None` |
| `sort_field` | The field to sort by. Defaults to `updated_at`. TYPE: `PromptSortField` |
| `sort_direction` | The order to sort by. TYPE: `Literal["desc", "asc"]` |
| `query` | Filter prompts by a search query. TYPE: `str \| None` |

| RETURNS | DESCRIPTION |
|---|---|
| `ListPromptsResponse` | A response object containing the list of prompts. |
get_prompt
async
¶
Get a specific prompt by its identifier.
| PARAMETER | DESCRIPTION |
|---|---|
| `prompt_identifier` | The identifier of the prompt. TYPE: `str` |

| RETURNS | DESCRIPTION |
|---|---|
| `Prompt \| None` | The prompt object. |

| RAISES | DESCRIPTION |
|---|---|
| `HTTPError` | If the prompt is not found or another error occurs. |
create_prompt
async
¶
create_prompt(
prompt_identifier: str,
*,
description: str | None = None,
readme: str | None = None,
tags: Sequence[str] | None = None,
is_public: bool = False,
) -> Prompt
Create a new prompt.
Does not attach prompt object, just creates an empty prompt.
| PARAMETER | DESCRIPTION |
|---|---|
| `prompt_identifier` | The identifier of the prompt. TYPE: `str` |
| `description` | A description of the prompt. TYPE: `str \| None` |
| `readme` | A readme for the prompt. TYPE: `str \| None` |
| `tags` | A list of tags for the prompt. TYPE: `Sequence[str] \| None` |
| `is_public` | Whether the prompt should be public. TYPE: `bool` |

| RETURNS | DESCRIPTION |
|---|---|
| `Prompt` | The created `Prompt` object. |

| RAISES | DESCRIPTION |
|---|---|
| `ValueError` | If the current tenant is not the owner. |
| `HTTPError` | If the server request fails. |
create_commit
async
¶
create_commit(
prompt_identifier: str,
object: Any,
*,
parent_commit_hash: str | None = None,
tags: str | list[str] | None = None,
) -> str
Create a commit for an existing prompt.
| PARAMETER | DESCRIPTION |
|---|---|
| `prompt_identifier` | The identifier of the prompt. TYPE: `str` |
| `object` | The LangChain object to commit. TYPE: `Any` |
| `parent_commit_hash` | The hash of the parent commit. Defaults to latest commit. TYPE: `str \| None` |
| `tags` | A single tag or list of tags to apply to the commit. Defaults to `None`. TYPE: `str \| list[str] \| None` |

| RETURNS | DESCRIPTION |
|---|---|
| `str` | The URL of the prompt commit. |

| RAISES | DESCRIPTION |
|---|---|
| `HTTPError` | If the server request fails. |
| `ValueError` | If the prompt does not exist. |
update_prompt
async
¶
update_prompt(
prompt_identifier: str,
*,
description: str | None = None,
readme: str | None = None,
tags: Sequence[str] | None = None,
is_public: bool | None = None,
is_archived: bool | None = None,
) -> dict[str, Any]
Update a prompt's metadata.
To update the content of a prompt, use push_prompt or create_commit instead.
| PARAMETER | DESCRIPTION |
|---|---|
| `prompt_identifier` | The identifier of the prompt to update. TYPE: `str` |
| `description` | New description for the prompt. TYPE: `str \| None` |
| `readme` | New readme for the prompt. TYPE: `str \| None` |
| `tags` | New list of tags for the prompt. TYPE: `Sequence[str] \| None` |
| `is_public` | New public status for the prompt. TYPE: `bool \| None` |
| `is_archived` | New archived status for the prompt. TYPE: `bool \| None` |

| RETURNS | DESCRIPTION |
|---|---|
| `dict[str, Any]` | The updated prompt data as returned by the server. |

| RAISES | DESCRIPTION |
|---|---|
| `ValueError` | If the prompt identifier is invalid. |
| `HTTPError` | If the server request fails. |
delete_prompt
async
¶
delete_prompt(prompt_identifier: str) -> None
Delete a prompt.
| PARAMETER | DESCRIPTION |
|---|---|
| `prompt_identifier` | The identifier of the prompt to delete. TYPE: `str` |

| RETURNS | DESCRIPTION |
|---|---|
| `None` | This method does not return anything. |

| RAISES | DESCRIPTION |
|---|---|
| `ValueError` | If the current tenant is not the owner of the prompt. |
pull_prompt_commit
async
¶
pull_prompt_commit(
prompt_identifier: str,
*,
include_model: bool | None = False,
skip_cache: bool = False,
) -> PromptCommit
Pull a prompt object from the LangSmith API.
| PARAMETER | DESCRIPTION |
|---|---|
| `prompt_identifier` | The identifier of the prompt. TYPE: `str` |
| `include_model` | Whether to include model information. TYPE: `bool \| None` |
| `skip_cache` | Whether to skip the prompt cache. Defaults to `False`. TYPE: `bool` |

| RETURNS | DESCRIPTION |
|---|---|
| `PromptCommit` | The prompt object. |

| RAISES | DESCRIPTION |
|---|---|
| `ValueError` | If no commits are found for the prompt. |
list_prompt_commits
async
¶
list_prompt_commits(
prompt_identifier: str,
*,
limit: int | None = None,
offset: int = 0,
include_model: bool = False,
) -> AsyncGenerator[ListedPromptCommit, None]
List commits for a given prompt.
| PARAMETER | DESCRIPTION |
|---|---|
| `prompt_identifier` | The identifier of the prompt. TYPE: `str` |
| `limit` | The maximum number of commits to return. If `None`, all commits are returned. TYPE: `int \| None` |
| `offset` | The number of commits to skip before starting to return results. TYPE: `int` |
| `include_model` | Whether to include the model information in the commit data. TYPE: `bool` |

| YIELDS | DESCRIPTION |
|---|---|
| `AsyncGenerator[ListedPromptCommit, None]` | A `ListedPromptCommit` object for each commit. |
Note
This method uses pagination to retrieve commits. It will make multiple API calls if necessary to retrieve all commits or up to the specified limit.
pull_prompt
async
¶
pull_prompt(
prompt_identifier: str,
*,
include_model: bool | None = False,
secrets: dict[str, str] | None = None,
secrets_from_env: bool = False,
skip_cache: bool = False,
) -> Any
Pull a prompt and return it as a LangChain PromptTemplate.
This method requires langchain-core.
| PARAMETER | DESCRIPTION |
|---|---|
| `prompt_identifier` | The identifier of the prompt. TYPE: `str` |
| `include_model` | Whether to include the model information in the prompt data. TYPE: `bool \| None` |
| `secrets` | A map of secrets to use when loading, e.g. `{"OPENAI_API_KEY": "sk-..."}`. If a secret is not found in the map, it will be loaded from the environment if `secrets_from_env=True`. TYPE: `dict[str, str] \| None` |
| `secrets_from_env` | Whether to load secrets from the environment. SECURITY NOTE: Should only be set to `True` when pulling trusted prompts. TYPE: `bool` |
| `skip_cache` | Whether to skip the prompt cache. Defaults to `False`. TYPE: `bool` |

| RETURNS | DESCRIPTION |
|---|---|
| `Any` | The prompt object in the specified format. |
Behavior changed in langsmith 0.5.1
Updated to take arguments secrets and secrets_from_env which default
to None and False, respectively.
By default secrets needed to initialize a pulled object will no longer be
read from environment variables. This is relevant when
include_model=True. For example, to load an OpenAI model you need to
have an OPENAI_API_KEY. Previously this was read from environment
variables by default. To do so now you must specify
secrets={"OPENAI_API_KEY": "sk-..."} or secrets_from_env=True.
secrets_from_env should only be used when pulling trusted prompts.
These updates were made to remediate vulnerability
GHSA-c67j-w6g6-q2cm
in the langchain-core package which this method (but not the entire
langsmith package) depends on.
push_prompt
async
¶
push_prompt(
prompt_identifier: str,
*,
object: Any | None = None,
parent_commit_hash: str = "latest",
is_public: bool | None = None,
description: str | None = None,
readme: str | None = None,
tags: Sequence[str] | None = None,
commit_tags: str | list[str] | None = None,
) -> str
Push a prompt to the LangSmith API.
Can be used to update prompt metadata or prompt content.
If the prompt does not exist, it will be created.
If the prompt exists, it will be updated.
| PARAMETER | DESCRIPTION |
|---|---|
| `prompt_identifier` | The identifier of the prompt. TYPE: `str` |
| `object` | The LangChain object to push. TYPE: `Any \| None` |
| `parent_commit_hash` | The parent commit hash. TYPE: `str` |
| `is_public` | Whether the prompt should be public. If `None`, existing prompts keep their current visibility and new prompts default to private; set to `True` or `False` to set visibility explicitly. TYPE: `bool \| None` |
| `description` | A description of the prompt. Defaults to an empty string. TYPE: `str \| None` |
| `readme` | A readme for the prompt. Defaults to an empty string. TYPE: `str \| None` |
| `tags` | A list of tags for the prompt. Defaults to an empty list. TYPE: `Sequence[str] \| None` |
| `commit_tags` | A single tag or list of tags for the prompt commit. Defaults to an empty list. TYPE: `str \| list[str] \| None` |

| RETURNS | DESCRIPTION |
|---|---|
| `str` | The URL of the prompt. |