LangChain Reference
Python · langsmith.client · Client
Class · Since v0.0

Client

Client for interacting with the LangSmith API.

Client(
    self,
    api_url: Optional[str] = None,
    *,
    api_key: Optional[str] = None,
    retry_config: Optional[Retry] = None,
    timeout_ms: Optional[Union[int, tuple[int, int]]] = None,
    web_url: Optional[str] = None,
    session: Optional[requests.Session] = None,
    auto_batch_tracing: bool = True,
    anonymizer: Optional[Callable[[dict], dict]] = None,
    hide_inputs: Optional[Union[Callable[[dict], dict], bool]] = None,
    hide_outputs: Optional[Union[Callable[[dict], dict], bool]] = None,
    hide_metadata: Optional[Union[Callable[[dict], dict], bool]] = None,
    omit_traced_runtime_info: bool = False,
    process_buffered_run_ops: Optional[Callable[[Sequence[dict]], Sequence[dict]]] = None,
    run_ops_buffer_size: Optional[int] = None,
    run_ops_buffer_timeout_ms: Optional[float] = None,
    info: Optional[Union[dict, ls_schemas.LangSmithInfo]] = None,
    api_urls: Optional[dict[str, str]] = None,
    otel_tracer_provider: Optional[TracerProvider] = None,
    otel_enabled: Optional[bool] = None,
    tracing_sampling_rate: Optional[float] = None,
    workspace_id: Optional[str] = None,
    max_batch_size_bytes: Optional[int] = None,
    headers: Optional[dict[str, str]] = None,
    tracing_error_callback: Optional[Callable[[Exception], None]] = None,
    disable_prompt_cache: bool = False,
    cache: Optional[Union[bool, PromptCache]] = None,
)

Used in Docs

  • Configure threads
  • Configure webhook notifications for rules
  • Custom instrumentation
  • Discover errors and usage patterns with the Insights Agent
  • Dynamic few shot example selection
(28 more not shown)

Parameters

api_url : Optional[str], default None
    URL for the LangSmith API. Defaults to the LANGCHAIN_ENDPOINT environment variable, or https://api.smith.langchain.com if not set.
api_key : Optional[str], default None
    API key for the LangSmith API. Defaults to the LANGCHAIN_API_KEY environment variable.
retry_config : Optional[Retry], default None
timeout_ms : Optional[Union[int, tuple[int, int]]], default None
web_url : Optional[str], default None
session : Optional[requests.Session], default None
auto_batch_tracing : bool, default True
anonymizer : Optional[Callable[[dict], dict]], default None
hide_inputs : Optional[Union[Callable[[dict], dict], bool]], default None
hide_outputs : Optional[Union[Callable[[dict], dict], bool]], default None
hide_metadata : Optional[Union[Callable[[dict], dict], bool]], default None
omit_traced_runtime_info : bool, default False
process_buffered_run_ops : Optional[Callable[[Sequence[dict]], Sequence[dict]]], default None
run_ops_buffer_size : Optional[int], default None
run_ops_buffer_timeout_ms : Optional[float], default None
info : Optional[Union[dict, ls_schemas.LangSmithInfo]], default None
api_urls : Optional[dict[str, str]], default None
otel_tracer_provider : Optional[TracerProvider], default None
otel_enabled : Optional[bool], default None
tracing_sampling_rate : Optional[float], default None
workspace_id : Optional[str], default None
max_batch_size_bytes : Optional[int], default None
headers : Optional[dict[str, str]], default None
tracing_error_callback : Optional[Callable[[Exception], None]], default None
disable_prompt_cache : bool, default False
cache : Optional[Union[bool, PromptCache]], default None
constructor
__init__
attribute
tracing_sample_rate
attribute
api_url
attribute
api_key: Optional[str]

Return the API key used for authentication.

attribute
retry_config
attribute
timeout_ms
attribute
session: requests.Session
attribute
compressed_traces: Optional[CompressedTraces]
attribute
otel_exporter: Optional[OTELExporter]
attribute
tracing_queue: Optional[PriorityQueue]
attribute
workspace_id: Optional[str]

Return the workspace ID used for API requests.

attribute
info: ls_schemas.LangSmithInfo

Get the information about the LangSmith API.

method
request_with_retries

Send a request with retries.

method
upload_dataframe

Upload a dataframe as individual examples to the LangSmith API.

method
upload_csv

Upload a CSV file to the LangSmith API.

method
create_run

Persist a run to the LangSmith API.

method
batch_ingest_runs

Batch ingest/upsert multiple runs in the LangSmith system.

method
multipart_ingest

Batch ingest/upsert multiple runs in the LangSmith system.

method
update_run

Update a run in the LangSmith API.

method
flush_compressed_traces

Force flush the currently buffered compressed runs.

method
flush

Flush either queue or compressed buffer, depending on mode.

method
read_run

Read a run from the LangSmith API.

method
list_runs

List runs from the LangSmith API.

method
get_run_stats

Get aggregate statistics over queried runs.

Takes in similar query parameters to list_runs and returns statistics based on the runs that match the query.

method
get_run_url

Get the URL for a run.

Not recommended for use within your agent runtime. More for use interacting with runs after the fact for data analysis or ETL workloads.

method
share_run

Get a share link for a run.

method
unshare_run

Delete share link for a run.

method
read_run_shared_link

Retrieve the shared link for a specific run.

method
run_is_shared

Get share state for a run.

method
read_shared_run

Get shared runs.

method
list_shared_runs

Get shared runs.

method
read_dataset_shared_schema

Retrieve the shared schema of a dataset.

method
share_dataset

Get a share link for a dataset.

method
unshare_dataset

Delete share link for a dataset.

method
read_shared_dataset

Get shared datasets.

method
list_shared_examples

Get shared examples.

method
list_shared_projects

List shared projects.

method
create_project

Create a project on the LangSmith API.

method
update_project

Update a LangSmith project.

method
read_project

Read a project from the LangSmith API.

method
has_project

Check if a project exists.

method
get_test_results

Read the record-level information from an experiment into a Pandas DataFrame.

Note

This will fetch whatever data exists in the DB. Results are not immediately available in the DB upon evaluation run completion.

Feedback score values will be returned as an average across all runs for the experiment. Non-numeric feedback scores will be omitted.

method
list_projects

List projects from the LangSmith API.

method
delete_project

Delete a project from LangSmith.

method
create_dataset

Create a dataset in the LangSmith API.

method
has_dataset

Check whether a dataset exists in your tenant.

method
read_dataset

Read a dataset from the LangSmith API.

method
diff_dataset_versions

Get the difference between two versions of a dataset.

method
read_dataset_openai_finetuning

Download a dataset in OpenAI Jsonl format and load it as a list of dicts.

method
list_datasets

List the datasets on the LangSmith API.

method
delete_dataset

Delete a dataset from the LangSmith API.

method
update_dataset_tag

Update the tags of a dataset.

If the tag is already assigned to a different version of this dataset, the tag will be moved to the new version. The as_of parameter is used to determine which version of the dataset to apply the new tags to. It must be an exact version of the dataset to succeed. You can use the read_dataset_version method to find the exact version to apply the tags to.

method
list_dataset_versions

List dataset versions.

method
read_dataset_version

Get dataset version by as_of or exact tag.

Use this to resolve the nearest version to a given timestamp or for a given tag.

method
clone_public_dataset

Clone a public dataset to your own LangSmith tenant.

This operation is idempotent. If you already have a dataset with the given name, this function will do nothing.

method
create_llm_example

Add an example (row) to an LLM-type dataset.

method
create_chat_example

Add an example (row) to a Chat-type dataset.

method
create_example_from_run

Add an example (row) to a dataset from a run.

method
update_examples_multipart

Update examples using multipart.

Deprecated since 0.3.9: Use Client.update_examples instead. Will be removed in 0.4.0.
method
upload_examples_multipart

Upload examples using multipart.

Deprecated since 0.3.9: Use Client.create_examples instead. Will be removed in 0.4.0.
method
upsert_examples_multipart

Upsert examples.

Deprecated since 0.3.9: Use Client.create_examples and Client.update_examples instead. Will be removed in 0.4.0.
method
create_examples

Create examples in a dataset.

method
create_example

Create a dataset example in the LangSmith API.

Examples are rows in a dataset, containing the inputs and expected outputs (or other reference information) for a model or chain.

method
read_example

Read an example from the LangSmith API.

method
list_examples

Retrieve the example rows of the specified dataset.

method
index_dataset

Enable dataset indexing. Examples are indexed by their inputs.

This enables searching for similar examples by inputs with client.similar_examples().

method
sync_indexed_dataset

Sync dataset index.

This already happens automatically every 5 minutes, but you can call this to force a sync.

method
similar_examples

Retrieve the dataset examples whose inputs best match the current inputs.

Note

Must have few-shot indexing enabled for the dataset. See client.index_dataset().

method
update_example

Update a specific example.

method
update_examples

Update multiple examples.

Examples are expected to all be part of the same dataset.

method
delete_example

Delete an example by ID.

method
delete_examples

Delete multiple examples by ID.

Parameters

example_ids : Sequence[ID_TYPE]
    The IDs of the examples to delete.
hard_delete : bool, default False
    If True, permanently delete the examples. If False, soft delete them.

method
list_dataset_splits

Get the splits for a dataset.

method
update_dataset_splits

Update the splits for a dataset.

method
evaluate_run

Evaluate a run.

method
aevaluate_run

Evaluate a run asynchronously.

method
create_feedback

Create feedback for a run.

Note

To enable feedback to be batch uploaded in the background you must specify trace_id. We highly encourage this for latency-sensitive environments.
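A hedged sketch of a feedback payload built with the note above in mind. The feedback key is hypothetical, and treating the root run's own id as its trace_id is an assumption; the actual call is commented out because it requires a configured Client:

```python
import uuid

run_id = str(uuid.uuid4())  # id of the traced run being scored

feedback_kwargs = {
    "run_id": run_id,
    "key": "correctness",   # hypothetical feedback key
    "score": 1.0,
    "trace_id": run_id,     # assumption: a root run's trace id equals its own
                            # id; supplying it enables background batch upload
}
# client.create_feedback(**feedback_kwargs)  # requires a configured Client
```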

method
update_feedback

Update a feedback in the LangSmith API.

method
read_feedback

Read a feedback from the LangSmith API.

method
list_feedback

List the feedback objects on the LangSmith API.

method
delete_feedback

Delete a feedback by ID.

method
create_feedback_from_token

Create feedback from a presigned token or URL.

method
create_presigned_feedback_token

Create a pre-signed URL to send feedback data to.

This is useful for giving browser-based clients a way to upload feedback data directly to LangSmith without accessing the API key.

method
create_presigned_feedback_tokens

Create a pre-signed URL to send feedback data to.

This is useful for giving browser-based clients a way to upload feedback data directly to LangSmith without accessing the API key.

method
list_presigned_feedback_tokens

List the feedback ingest tokens for a run.

method
list_feedback_formulas

List feedback formulas.

method
get_feedback_formula_by_id

Get a feedback formula by ID.

method
create_feedback_formula

Create a feedback formula.

method
update_feedback_formula

Update a feedback formula.

method
delete_feedback_formula

Delete a feedback formula by ID.

method
create_feedback_config

Create a feedback configuration.

Defines how feedback with a given key should be interpreted. If an identical configuration already exists for the key, it is returned unchanged. If a different configuration already exists for the key, an error is raised.

method
list_feedback_configs

List feedback configurations.

method
update_feedback_config

Update a feedback configuration.

Only the provided fields will be updated; others remain unchanged.

method
delete_feedback_config

Delete a feedback configuration.

This performs a soft delete. The configuration can be recreated later with the same key.

method
list_annotation_queues

List the annotation queues on the LangSmith API.

method
create_annotation_queue

Create an annotation queue on the LangSmith API.

method
read_annotation_queue

Read an annotation queue with the specified queue_id.

method
update_annotation_queue

Update an annotation queue with the specified queue_id.

method
delete_annotation_queue

Delete an annotation queue with the specified queue_id.

method
add_runs_to_annotation_queue

Add runs to an annotation queue with the specified queue_id.

method
delete_run_from_annotation_queue

Delete a run from an annotation queue with the specified queue_id and run_id.

method
get_run_from_annotation_queue

Get a run from an annotation queue at the specified index.

method
create_comparative_experiment

Create a comparative experiment on the LangSmith API.

These experiments compare 2 or more experiment results over a shared dataset.

method
like_prompt

Like a prompt.

method
unlike_prompt

Unlike a prompt.

method
list_prompts

List prompts with pagination.

method
get_prompt

Get a specific prompt by its identifier.

method
create_prompt

Create a new prompt.

Does not attach prompt object, just creates an empty prompt.

method
create_commit

Create a commit for an existing prompt.

method
update_prompt

Update a prompt's metadata.

To update the content of a prompt, use push_prompt or create_commit instead.

method
delete_prompt

Delete a prompt.

method
pull_prompt_commit

Pull a prompt object from the LangSmith API.

method
list_prompt_commits

List commits for a given prompt.

method
pull_prompt

Pull a prompt and return it as a LangChain PromptTemplate.

This method requires langchain-core.

method
push_prompt

Push a prompt to the LangSmith API.

Can be used to update prompt metadata or prompt content.

If the prompt does not exist, it will be created. If the prompt exists, it will be updated.

method
cleanup

Manually trigger cleanup of background threads.

method
evaluate

Evaluate a target system on a given dataset.

method
aevaluate

Evaluate an async target system on a given dataset.

method
get_experiment_results

Get results for an experiment, including experiment session aggregated stats and experiment runs for each dataset example.

Experiment results may not be available immediately after the experiment is created.

method
generate_insights

Generate Insights over your agent chat histories.

Note
  • Only available to Plus and higher tier LangSmith users.
  • The Insights Agent uses your model API key. The cost of the report grows linearly with the number of chat histories you upload and the size of each history. For more details, see the Insights documentation.
  • This method will upload your chat histories as traces to LangSmith.
  • If you pass in a model API key, it will be set as a workspace secret, meaning it will be used for evaluators and the playground.
method
poll_insights

Poll the status of an Insights report.

Parameter details

retry_config
    Retry configuration for the HTTPAdapter.
timeout_ms
    Timeout for the HTTPAdapter. Can also be a 2-tuple of (connect timeout, read timeout) to set them separately.
web_url
    URL for the LangSmith web app. Default is auto-inferred from the ENDPOINT.
session
    The session to use for requests. If None, a new session will be created.
auto_batch_tracing
    Whether to automatically batch tracing.

anonymizer
    A function applied for masking serialized run inputs and outputs, before sending to the API.
hide_inputs
    Whether to hide run inputs when tracing with this client. If True, hides the entire inputs. If a function, applied to all run inputs when creating runs.
hide_outputs
    Whether to hide run outputs when tracing with this client. If True, hides the entire outputs. If a function, applied to all run outputs when creating runs.
hide_metadata
    Whether to hide run metadata when tracing with this client. If True, hides the entire metadata. If a function, applied to all run metadata when creating runs.
omit_traced_runtime_info
    Whether to omit runtime information from traced runs. If True, runtime information (SDK version, platform, Python version, etc.) will not be stored in the extra.runtime field of runs. Defaults to False.
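As a sketch of what a hide_inputs masking function might look like, assuming run inputs arrive as plain dicts per the Callable[[dict], dict] signature (the key names below are hypothetical):

```python
SENSITIVE_KEYS = {"password", "api_key", "ssn"}  # hypothetical sensitive fields

def mask_sensitive(inputs: dict) -> dict:
    """Return a redacted copy of the serialized run inputs."""
    return {
        key: "[REDACTED]" if key in SENSITIVE_KEYS else value
        for key, value in inputs.items()
    }

# client = Client(hide_inputs=mask_sensitive)  # requires langsmith installed
```

An equivalent callable can be passed as hide_outputs or hide_metadata, since all three accept the same dict-to-dict shape.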

process_buffered_run_ops
    A function applied to buffered run operations that allows for modification of the raw run dicts before they are converted to multipart and compressed. Useful specifically for high-throughput tracing where you need to apply a rate-limited API or other costly process to the runs before they are sent to the API. Note that the buffer only flushes automatically when run_ops_buffer_size is reached, or when a new run is added to the buffer after run_ops_buffer_timeout_ms has elapsed; it will not flush outside of these conditions unless you manually call client.flush(), so be sure to do so before your code exits.
run_ops_buffer_size
    Maximum number of run operations to collect in the buffer before applying process_buffered_run_ops and sending to the API. Required when process_buffered_run_ops is provided.
run_ops_buffer_timeout_ms
    Maximum time in milliseconds to wait before flushing the run ops buffer when new runs are added. Defaults to 5000. Only used when process_buffered_run_ops is provided.
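A minimal sketch of such a hook, assuming each buffered run op is a plain dict (the metadata key added here is illustrative, not a required field):

```python
from typing import Sequence

def tag_run_ops(run_ops: Sequence[dict]) -> Sequence[dict]:
    """Annotate every buffered run op before it is compressed and sent."""
    return [
        {**op, "extra": {**op.get("extra", {}), "pipeline": "etl-v1"}}
        for op in run_ops
    ]

# client = Client(
#     process_buffered_run_ops=tag_run_ops,
#     run_ops_buffer_size=100,  # required when the hook is provided
# )
```

Remember to call client.flush() before your process exits, since the buffer does not drain outside the size and timeout conditions described above.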

info
    The information about the LangSmith API. If not provided, it will be fetched from the API.
api_urls
    A dictionary of write API URLs and their corresponding API keys. Useful for multi-tenant setups. Data is only read from the first URL in the dictionary. However, ONLY runs are written (POST and PATCH) to all URLs in the dictionary; feedback, sessions, datasets, examples, annotation queues, and evaluation results are written only to the first.
otel_tracer_provider
    Optional tracer provider for OpenTelemetry integration. If not provided, a LangSmith-specific tracer provider will be used.
tracing_sampling_rate
    The sampling rate for tracing. If provided, overrides the LANGCHAIN_TRACING_SAMPLING_RATE environment variable. Should be a float between 0 and 1, where 1 means trace everything and 0 means trace nothing.
workspace_id
    The workspace ID. Required for org-scoped API keys.
max_batch_size_bytes
    The maximum size of a batch of runs in bytes. If not provided, the default is set by the server.
headers
    Additional HTTP headers to include in all requests. These headers will be merged with the default headers (User-Agent, Accept, x-api-key, etc.). Custom headers will not override the default required headers.
tracing_error_callback
    Optional callback function to handle errors. Called when exceptions occur during tracing operations.
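For example, a callback that logs tracing failures rather than dropping them silently might look like this (a sketch; the logger name is arbitrary):

```python
import logging

logger = logging.getLogger("my_app.langsmith")

def log_tracing_error(exc: Exception) -> None:
    """Record tracing failures without interrupting the traced application."""
    logger.warning("LangSmith tracing error: %s", exc)

# client = Client(tracing_error_callback=log_tracing_error)
```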

disable_prompt_cache
    Disable prompt caching for this client. By default, prompt caching is enabled globally using a singleton cache. Set this to True to disable caching for this specific client instance. To configure the global cache, use configure_global_prompt_cache().

Example
from langsmith import Client, configure_global_prompt_cache

# Use default global cache
client = Client()

# Disable caching for this client
client_no_cache = Client(disable_prompt_cache=True)

# Configure global cache settings
configure_global_prompt_cache(max_size=200, ttl_seconds=7200)

cache (deprecated)
    Control prompt caching behavior. This parameter is deprecated; use configure_global_prompt_cache() to configure caching, or disable_prompt_cache=True to disable it.
      • True: Enable caching with the global singleton (default behavior)
      • False: Disable caching (equivalent to disable_prompt_cache=True)
      • Cache(...)/PromptCache(...): Use a custom cache instance
Example
from langsmith import Client, Cache, configure_global_prompt_cache

# Old API (deprecated but still supported)
client = Client(cache=True)  # Use global cache
client = Client(cache=False)  # Disable cache

# Use custom cache instance
my_cache = Cache(max_size=100, ttl_seconds=3600)
client = Client(cache=my_cache)

# New API (recommended)
client = Client()  # Use global cache (default)

# Configure global cache for all clients
configure_global_prompt_cache(max_size=200, ttl_seconds=7200)

# Or disable for a specific client
client = Client(disable_prompt_cache=True)