- Optional `anonymizer`
- Optional `api…`
- Optional `api…`
- Optional `auto…`
- Optional `batch…`
- Optional `batch…`: Maximum number of operations to batch in a single request.
- Optional `block…`
- Optional `cache`: Configuration for caching. Deprecated: use `configureGlobalPromptCache()` to configure caching, or `disablePromptCache: true` to disable it. Can be:
  - `true`: enable caching with default settings (uses the global singleton)
  - a `Cache`/`PromptCache` instance: use a custom cache configuration
  - `false`: disable caching (equivalent to `disablePromptCache: true`)

  ```ts
  import { Client, configureGlobalPromptCache } from "langsmith";

  // Enable with defaults
  const client1 = new Client({});

  // Or use custom configuration
  configureGlobalPromptCache({
    maxSize: 100,
    ttlSeconds: 3600, // 1 hour, or null for infinite TTL
  });
  const client2 = new Client({});

  // Or disable for a specific client
  const client3 = new Client({ disablePromptCache: true });
  ```
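The `maxSize` and `ttlSeconds` options above bound the cache by entry count and by age. A minimal sketch of that eviction policy, assuming LRU semantics; the class and method names here are hypothetical and this is not the library's actual `PromptCache` implementation:

```typescript
// Illustrative LRU cache with a TTL, mirroring the maxSize / ttlSeconds
// semantics described above. Hypothetical class, not the langsmith one.
class TtlLruCache<V> {
  private entries = new Map<string, { value: V; expiresAt: number | null }>();

  constructor(
    private maxSize: number,
    private ttlSeconds: number | null, // null => entries never expire
  ) {}

  get(key: string): V | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (entry.expiresAt !== null && Date.now() > entry.expiresAt) {
      this.entries.delete(key); // expired
      return undefined;
    }
    // Refresh recency: re-insert so the key moves to the end of the Map.
    this.entries.delete(key);
    this.entries.set(key, entry);
    return entry.value;
  }

  set(key: string, value: V): void {
    if (this.entries.has(key)) this.entries.delete(key);
    this.entries.set(key, {
      value,
      expiresAt:
        this.ttlSeconds === null ? null : Date.now() + this.ttlSeconds * 1000,
    });
    // Evict the least recently used entry (first key in insertion order).
    if (this.entries.size > this.maxSize) {
      const oldest = this.entries.keys().next().value as string;
      this.entries.delete(oldest);
    }
  }
}
```

With `ttlSeconds: null`, entries only ever leave the cache through LRU eviction, which matches the "null for infinite TTL" comment in the example above.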
- Optional `caller…`
- Optional `debug`: Enable debug mode for the client. If set, all sent HTTP requests will be logged.
- Optional `disablePromptCache`: Disable prompt caching for this client. By default, prompt caching is enabled globally.
- Optional `fetch…`: Custom fetch implementation (accepting an optional `init: RequestInit`). Useful for testing.
- Optional `fetch…`
- Optional `hide…`
- Optional `hide…`
- Optional `manual…`: Whether to require manual `.flush()` calls before sending traces. Useful if encountering network rate limits at high trace volumes.
- Optional `max…`: Maximum total memory (in bytes) for both the `AutoBatchQueue` and the `batchIngestCaller` queue. When exceeded, runs/batches are dropped. Defaults to 1 GB.
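The memory cap above can be pictured as a queue that tracks the byte size of queued work and drops new items once the cap would be exceeded. A simplified sketch under that assumption; the class name is hypothetical, and the SDK's real `AutoBatchQueue` is internal:

```typescript
// Simplified sketch of a byte-bounded queue: enqueue returns false and
// drops the item when accepting it would exceed maxBytes.
class BoundedByteQueue {
  private items: { payload: string; bytes: number }[] = [];
  private usedBytes = 0;

  constructor(private maxBytes: number) {}

  enqueue(payload: string): boolean {
    const bytes = new TextEncoder().encode(payload).length;
    if (this.usedBytes + bytes > this.maxBytes) {
      return false; // dropped: would exceed the memory cap
    }
    this.items.push({ payload, bytes });
    this.usedBytes += bytes;
    return true;
  }

  dequeue(): string | undefined {
    const item = this.items.shift();
    if (item) this.usedBytes -= item.bytes;
    return item?.payload;
  }
}
```

Dropping rather than blocking keeps tracing from stalling the application, at the cost of losing runs under sustained backpressure.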
- Optional `omit…`: Whether to omit runtime information from traced runs. If `true`, runtime information (SDK version, platform, etc.) and LangChain environment-variable metadata will not be stored in runs. Defaults to `false`.
- Optional `timeout_…`
- Optional `trace…`
- Optional `tracing…`
- Optional `web…`
- Optional `workspace…`: The workspace ID. Required for org-scoped API keys.
- Optional `batch…`: Maximum size of a batch of runs in bytes.
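A byte cap on batch size like the one above is typically applied by greedily packing serialized runs into batches until adding the next run would exceed the limit. A sketch of that approach; the helper function is hypothetical, not the SDK's internal batching code:

```typescript
// Greedily pack serialized runs into batches no larger than maxBatchBytes.
// A single run larger than the cap gets a batch of its own.
function batchBySize(runs: string[], maxBatchBytes: number): string[][] {
  const batches: string[][] = [];
  let current: string[] = [];
  let currentBytes = 0;
  for (const run of runs) {
    const bytes = new TextEncoder().encode(run).length;
    if (current.length > 0 && currentBytes + bytes > maxBatchBytes) {
      batches.push(current); // close the full batch
      current = [];
      currentBytes = 0;
    }
    current.push(run);
    currentBytes += bytes;
  }
  if (current.length > 0) batches.push(current);
  return batches;
}
```

Greedy packing keeps each request under the byte cap while still batching as many operations per request as the limit allows.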