LangChain Reference
Interface ● Since v0.1

ClientConfig

interface ClientConfig

Properties

property
anonymizer: (values: KVMap) => KVMap | Promise<KVMap>

property
apiKey: string

property
apiUrl: string

property
autoBatchTracing: boolean

property
batchSizeBytesLimit: number
Maximum size of a batch of runs in bytes.

property
batchSizeLimit: number
Maximum number of operations to batch in a single request.

property
blockOnRootRunFinalization: boolean

property
callerOptions: AsyncCallerParams

property
debug: boolean

property
disablePromptCache: boolean
Disable prompt caching for this client. By default, prompt caching is enabled globally.

property
fetchImplementation: (input: RequestInfo | URL, init?: RequestInit) => Promise<Response>
Custom fetch implementation. Useful for testing.

property
fetchOptions: RequestInit

property
hideInputs: boolean | ((inputs: KVMap) => KVMap | Promise<KVMap>)

property
hideOutputs: boolean | ((outputs: KVMap) => KVMap | Promise<KVMap>)

property
manualFlushMode: boolean
Whether to require manual .flush() calls before sending traces. Useful if encountering network rate limits at high trace volumes.

property
maxIngestMemoryBytes: number
Maximum total memory (in bytes) for both the AutoBatchQueue and batchIngestCaller queue. When exceeded, runs/batches are dropped. Defaults to 1 GB.

property
omitTracedRuntimeInfo: boolean
Whether to omit runtime information from traced runs. If true, runtime information (SDK version, platform, etc.) and LangChain environment variable metadata will not be stored in runs. Defaults to false.

property
timeout_ms: number

property
traceBatchConcurrency: number

property
tracingSamplingRate: number

property
webUrl: string

property
workspaceId: string
The workspace ID. Required for org-scoped API keys.

deprecated property
cache: boolean | PromptCache
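The hideInputs and hideOutputs options accept either a boolean or a masking function. A minimal sketch of such a function, assuming KVMap is langsmith's plain key-value map type (the KVMap alias is re-declared locally so the snippet stands alone, and the SENSITIVE_KEYS list is purely illustrative):

```typescript
// Local stand-in for langsmith's KVMap type.
type KVMap = { [key: string]: any };

// Illustrative list of field names to redact before inputs are stored in traces.
const SENSITIVE_KEYS = new Set(["password", "apiKey", "ssn"]);

// Matches the (inputs: KVMap) => KVMap shape accepted by hideInputs.
function redactInputs(inputs: KVMap): KVMap {
  const out: KVMap = {};
  for (const [key, value] of Object.entries(inputs)) {
    out[key] = SENSITIVE_KEYS.has(key) ? "[REDACTED]" : value;
  }
  return out;
}

// Usage (assuming langsmith is installed):
// const client = new Client({ hideInputs: redactInputs });
```

The same shape works for hideOutputs; returning a Promise<KVMap> is also accepted if redaction is asynchronous.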

import { Client, configureGlobalPromptCache } from "langsmith";

// Enable with defaults
const client1 = new Client({});

// Or use custom configuration
configureGlobalPromptCache({
  maxSize: 100,
  ttlSeconds: 3600, // 1 hour, or null for infinite TTL
});
const client2 = new Client({});

// Or disable for a specific client
const client3 = new Client({ disablePromptCache: true });
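Similarly, manualFlushMode holds batched runs in memory until flushed explicitly. A configuration sketch, assuming LANGSMITH_API_KEY is set in the environment (not run here, since it requires network access):

```typescript
import { Client } from "langsmith";

// With manualFlushMode, batched runs stay queued in memory rather than
// being sent automatically — useful when hitting network rate limits.
const client = new Client({ manualFlushMode: true });

// ... create and end traced runs here ...

// Send everything queued so far in one pass, e.g. at the end of a batch job.
await client.flush();
```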