langsmith.prompt_cache.AsyncPromptCache.configure
Method · Since v0.7

configure

Reconfigure the cache parameters.

configure(
  self,
  *,
  max_size: int = DEFAULT_PROMPT_CACHE_MAX_SIZE,
  ttl_seconds: Optional[float] = DEFAULT_PROMPT_CACHE_TTL_SECONDS,
  refresh_interval_seconds: float = DEFAULT_PROMPT_CACHE_REFRESH_INTERVAL_SECONDS
) -> None

Used in Docs

  • Trace PydanticAI applications
  • Trace Semantic Kernel applications
  • Trace with OpenTelemetry

Parameters

max_size : int
  Default: DEFAULT_PROMPT_CACHE_MAX_SIZE
  Maximum number of entries in the cache (LRU eviction when exceeded).

ttl_seconds : Optional[float]
  Default: DEFAULT_PROMPT_CACHE_TTL_SECONDS
  Time in seconds before an entry is considered stale.

refresh_interval_seconds : float
  Default: DEFAULT_PROMPT_CACHE_REFRESH_INTERVAL_SECONDS
  How often, in seconds, to check for stale entries.
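
Example

A minimal usage sketch based on the signature above. The import path follows this page's module path; constructing AsyncPromptCache with no arguments is an assumption for illustration, as in practice the cache may instead be obtained from an existing client.

from langsmith.prompt_cache import AsyncPromptCache

cache = AsyncPromptCache()  # assumed default construction (illustrative)

# All arguments are keyword-only; any omitted argument keeps its
# documented default. Here we shrink the cache and tighten staleness bounds:
cache.configure(
    max_size=500,                   # LRU-evict once more than 500 entries exist
    ttl_seconds=60.0,               # entries older than 60 seconds are stale
    refresh_interval_seconds=10.0,  # scan for stale entries every 10 seconds
)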
