Client for interacting with the LangSmith API.
| Name | Type | Default | Description |
|---|---|---|---|
| api_url | Optional[str] | None | URL for the LangSmith API. Defaults to the LANGSMITH_ENDPOINT environment variable. |
| api_key | Optional[str] | None | API key for the LangSmith API. Defaults to the LANGSMITH_API_KEY environment variable. |
| retry_config | Optional[Retry] | None | |
| timeout_ms | Optional[Union[int, tuple[int, int]]] | None | |
| web_url | Optional[str] | None | |
| session | Optional[requests.Session] | None | |
| auto_batch_tracing | bool | True | |
| anonymizer | Optional[Callable[[dict], dict]] | None | |
| hide_inputs | Optional[Union[Callable[[dict], dict], bool]] | None | |
| hide_outputs | Optional[Union[Callable[[dict], dict], bool]] | None | |
| hide_metadata | Optional[Union[Callable[[dict], dict], bool]] | None | |
| omit_traced_runtime_info | bool | False | |
| process_buffered_run_ops | Optional[Callable[[Sequence[dict]], Sequence[dict]]] | None | |
| run_ops_buffer_size | Optional[int] | None | |
| run_ops_buffer_timeout_ms | Optional[float] | None | |
| info | Optional[Union[dict, ls_schemas.LangSmithInfo]] | None | |
| api_urls | Optional[dict[str, str]] | None | |
| otel_tracer_provider | Optional[TracerProvider] | None | |
| tracing_sampling_rate | Optional[float] | None | |
| workspace_id | Optional[str] | None | |
| max_batch_size_bytes | Optional[int] | None | |
| headers | Optional[dict[str, str]] | None | |
| tracing_error_callback | Optional[Callable[[Exception], None]] | None | |
| disable_prompt_cache | bool | False | |
| cache | Optional[Union[bool, PromptCache]] | None | |
| Name | Type |
|---|---|
| api_url | Optional[str] |
| api_key | Optional[str] |
| retry_config | Optional[Retry] |
| timeout_ms | Optional[Union[int, tuple[int, int]]] |
| web_url | Optional[str] |
| session | Optional[requests.Session] |
| auto_batch_tracing | bool |
| anonymizer | Optional[Callable[[dict], dict]] |
| hide_inputs | Optional[Union[Callable[[dict], dict], bool]] |
| hide_outputs | Optional[Union[Callable[[dict], dict], bool]] |
| hide_metadata | Optional[Union[Callable[[dict], dict], bool]] |
| omit_traced_runtime_info | bool |
| process_buffered_run_ops | Optional[Callable[[Sequence[dict]], Sequence[dict]]] |
| run_ops_buffer_size | Optional[int] |
| run_ops_buffer_timeout_ms | Optional[float] |
| info | Optional[Union[dict, ls_schemas.LangSmithInfo]] |
| api_urls | Optional[dict[str, str]] |
| otel_tracer_provider | Optional[TracerProvider] |
| otel_enabled | Optional[bool] |
| tracing_sampling_rate | Optional[float] |
| workspace_id | Optional[str] |
| max_batch_size_bytes | Optional[int] |
| headers | Optional[dict[str, str]] |
| tracing_error_callback | Optional[Callable[[Exception], None]] |
| disable_prompt_cache | bool |
| cache | Optional[Union[bool, PromptCache]] |
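For orientation, a minimal construction sketch; the endpoint and key below are placeholders, and both arguments fall back to environment variables when omitted.

```python
from langsmith import Client

# Minimal sketch: both arguments are optional and fall back to the
# LANGSMITH_ENDPOINT / LANGSMITH_API_KEY environment variables when omitted.
client = Client(
    api_url="https://api.smith.langchain.com",  # placeholder endpoint
    api_key="lsv2_...",                         # placeholder key
)
```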
Create a new prompt.
Does not attach a prompt object; it just creates an empty prompt.
Pull a prompt and return it as a LangChain PromptTemplate.
This method requires langchain-core.
Generate Insights over your agent chat histories.
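A rough sketch of the prompt helpers described above; the prompt identifier is a placeholder, and pull_prompt requires langchain-core to be installed.

```python
from langsmith import Client

client = Client()

# Create an empty prompt; no prompt object is attached at this point.
client.create_prompt("my-prompt")  # "my-prompt" is a placeholder identifier

# Pull the prompt back as a LangChain prompt template (requires langchain-core).
prompt = client.pull_prompt("my-prompt")
print(prompt)
```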
Retry configuration for the HTTPAdapter.
Timeout for the HTTPAdapter.
Can also be a 2-tuple of (connect timeout, read timeout) to set them
separately.
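A sketch of both knobs together, assuming retry_config accepts urllib3's Retry (the class requests' HTTPAdapter consumes); the retry counts and timeouts are illustrative.

```python
from urllib3.util.retry import Retry
from langsmith import Client

# Assumed: Retry here is urllib3.util.retry.Retry, as used by requests' HTTPAdapter.
retry = Retry(total=3, backoff_factor=0.5, status_forcelist=(502, 503, 504))

client = Client(
    retry_config=retry,
    timeout_ms=(5_000, 30_000),  # (connect timeout, read timeout) in milliseconds
)
```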
URL for the LangSmith web app.
Default is auto-inferred from the ENDPOINT.
The session to use for requests.
If None, a new session will be created.
Whether to automatically batch tracing.
A function applied for masking serialized run inputs and outputs, before sending to the API.
Whether to hide run inputs when tracing with this client.
If True, hides the entire inputs.
If a function, applied to all run inputs when creating runs.
Whether to hide run outputs when tracing with this client.
If True, hides the entire outputs.
If a function, applied to all run outputs when creating runs.
Whether to hide run metadata when tracing with this client.
If True, hides the entire metadata.
If a function, applied to all run metadata when creating runs.
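To make the masking options concrete, a small sketch; the field name and redaction logic are illustrative only.

```python
from langsmith import Client

def mask_inputs(inputs: dict) -> dict:
    # Illustrative only: redact a hypothetical "email" field before upload.
    return {k: ("<redacted>" if k == "email" else v) for k, v in inputs.items()}

client = Client(
    hide_inputs=mask_inputs,  # callable applied to serialized run inputs
    hide_outputs=True,        # True hides outputs entirely
    hide_metadata=False,      # keep run metadata as-is
)
```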
Whether to omit runtime information from traced runs.
If True, runtime information (SDK version, platform, Python version,
etc.) will not be stored in the extra.runtime field of runs.
Defaults to False.
A function applied to buffered run operations that allows for modification of the raw run dicts before they are converted to multipart and compressed.
Useful specifically for high throughput tracing where you need to apply a rate-limited API or other costly process to the runs before they are sent to the API.
Note that the buffer will only flush automatically when
run_ops_buffer_size is reached or a new run is added to the buffer
after run_ops_buffer_timeout_ms has elapsed - it will not flush
outside of these conditions unless you manually call client.flush(),
so be sure to do this before your code exits.
Maximum number of run operations to collect in the
buffer before applying process_buffered_run_ops and sending to the
API.
Required when process_buffered_run_ops is provided.
Maximum time in milliseconds to wait before flushing the run ops buffer when new runs are added.
Defaults to 5000.
Only used when process_buffered_run_ops is provided.
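A sketch of the buffering hooks together; the scrubbing function and the field it removes are illustrative only.

```python
from collections.abc import Sequence
from langsmith import Client

def scrub_ops(run_ops: Sequence[dict]) -> Sequence[dict]:
    # Illustrative only: stand-in for a rate-limited redaction service.
    for op in run_ops:
        op.get("extra", {}).pop("secrets", None)  # hypothetical field
    return run_ops

client = Client(
    process_buffered_run_ops=scrub_ops,
    run_ops_buffer_size=100,         # required when process_buffered_run_ops is set
    run_ops_buffer_timeout_ms=5000,  # flush on next add once this much time has elapsed
)

# ... run traced workloads ...

# Flush any remaining buffered operations before the process exits.
client.flush()
```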
The information about the LangSmith API.
If not provided, it will be fetched from the API.
A dictionary of write API URLs and their corresponding API keys.
Useful for multi-tenant setups.
Data is only read from the first URL in the dictionary. Only runs are written
(POST and PATCH) to every URL in the dictionary; feedback, sessions, datasets,
examples, annotation queues, and evaluation results are written only to the first.
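A multi-destination sketch; the URLs and keys below are placeholders, not real deployments or credentials.

```python
from langsmith import Client

client = Client(
    api_urls={
        # First entry: reads and all writes go here.
        "https://api.smith.langchain.com": "lsv2_primary_key_placeholder",
        # Additional entries: only runs (POST/PATCH) are mirrored here.
        "https://self-hosted.example.com/api": "lsv2_mirror_key_placeholder",
    }
)
```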
Optional tracer provider for OpenTelemetry integration.
If not provided, a LangSmith-specific tracer provider will be used.
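A sketch assuming a standard OpenTelemetry SDK TracerProvider is accepted here; configure your own span processors and exporters on it before passing it in.

```python
from opentelemetry.sdk.trace import TracerProvider
from langsmith import Client

# Assumed: any OpenTelemetry SDK TracerProvider works here; add your own
# span processors/exporters to it before handing it to the client.
provider = TracerProvider()
client = Client(otel_tracer_provider=provider)
```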
The sampling rate for tracing.
If provided, overrides the LANGCHAIN_TRACING_SAMPLING_RATE environment
variable.
Should be a float between 0 and 1, where 1 means trace everything
and 0 means trace nothing.
The workspace ID.
Required for org-scoped API keys.
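A sketch combining the two settings; the workspace ID below is a placeholder UUID.

```python
from langsmith import Client

client = Client(
    tracing_sampling_rate=0.25,  # trace roughly a quarter of runs
    workspace_id="00000000-0000-0000-0000-000000000000",  # placeholder; needed for org-scoped keys
)
```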
The maximum size of a batch of runs in bytes.
If not provided, the default is set by the server.
Additional HTTP headers to include in all requests.
These headers will be merged with the default headers (User-Agent, Accept, x-api-key, etc.). Custom headers will not override the default required headers.
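An illustrative extra-headers sketch; the header name is hypothetical, and the required defaults (such as x-api-key) still take precedence.

```python
from langsmith import Client

# Hypothetical header, merged with (never overriding) the required defaults.
client = Client(headers={"X-Request-Source": "nightly-eval-job"})
```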
Optional callback function to handle errors.
Called when exceptions occur during tracing operations.
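A small sketch of an error callback that logs tracing failures instead of letting them surface; the handler body is illustrative.

```python
import logging
from langsmith import Client

logger = logging.getLogger(__name__)

def on_tracing_error(exc: Exception) -> None:
    # Illustrative handler: record the failure and keep the application running.
    logger.warning("LangSmith tracing error: %s", exc)

client = Client(tracing_error_callback=on_tracing_error)
```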
Disable prompt caching for this client.
By default, prompt caching is enabled globally using a singleton cache.
Set this to True to disable caching for this specific client instance.
To configure the global cache, use configure_global_prompt_cache().
```python
from langsmith import Client, configure_global_prompt_cache

# Use default global cache
client = Client()

# Disable caching for this client
client_no_cache = Client(disable_prompt_cache=True)

# Configure global cache settings
configure_global_prompt_cache(max_size=200, ttl_seconds=7200)
```

[Deprecated] Control prompt caching behavior.
This parameter is deprecated. Use configure_global_prompt_cache() to
configure caching, or disable_prompt_cache=True to disable it.
- True: Enable caching with the global singleton (default behavior)
- False: Disable caching (equivalent to disable_prompt_cache=True)
- Cache(...) / PromptCache(...): Use a custom cache instance

```python
from langsmith import Client, Cache, configure_global_prompt_cache

# Old API (deprecated but still supported)
client = Client(cache=True)   # Use global cache
client = Client(cache=False)  # Disable cache

# Use custom cache instance
my_cache = Cache(max_size=100, ttl_seconds=3600)
client = Client(cache=my_cache)

# New API (recommended)
client = Client()  # Use global cache (default)

# Configure global cache for all clients
configure_global_prompt_cache(max_size=200, ttl_seconds=7200)

# Or disable for a specific client
client = Client(disable_prompt_cache=True)
```