| Name | Type | Description |
|---|---|---|
| `distance_threshold` | `float` | Default: `0.2`. Maximum distance for semantic matches. |
| `ttl` | `Optional[int]` | Default: `None`. Cache TTL in seconds. |
| `name` | `Optional[str]` | Default: `_DEFAULT_CACHE_NAME` (`'llmcache'`). Cache name used by LangCache. |
| `server_url` | `Optional[str]` | Default: `None`. LangCache API endpoint. If not set, a default managed endpoint is used; prefer the server URL provided for your cache. |
| `api_key` | `Optional[str]` | Default: `None`. API key for LangCache authentication. |
| `cache_id` | `Optional[str]` | Default: `None`. Required LangCache instance identifier. |
| `use_exact_search` | `bool` | Default: `True`. Enable exact match search. |
| `use_semantic_search` | `bool` | Default: `True`. Enable semantic search. |
| `distance_scale` | `Literal['normalized', 'redis']` | Default: `'normalized'`. Distance scaling mode. |
| `**kwargs` | `Any` | Default: `{}`. Additional options forwarded to the LangCache wrapper. |

Managed LangCache-backed semantic cache.

This class uses `redisvl.extensions.cache.llm.LangCacheSemanticCache` (a wrapper over the managed LangCache API). The optional dependency `langcache` must be installed at runtime when this class is used.

Install with either `pip install 'langchain-redis[langcache]'` or `pip install langcache`.

Example:

```python
from langchain_redis import LangCacheSemanticCache

cache = LangCacheSemanticCache(
    cache_id="your-cache-id",
    api_key="your-api-key",
    name="mycache",
    ttl=3600,
)
```
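To build intuition for what `distance_threshold` controls, here is a minimal, self-contained sketch of threshold-based semantic matching. It assumes cosine distance over embedding vectors and hypothetical helper names (`cosine_distance`, `is_semantic_hit`); it is an illustration of the concept, not the library's actual implementation.

```python
import math


def cosine_distance(a: list[float], b: list[float]) -> float:
    # Cosine distance = 1 - cosine similarity; 0.0 means same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)


def is_semantic_hit(query_vec: list[float],
                    cached_vec: list[float],
                    distance_threshold: float = 0.2) -> bool:
    # A cached entry counts as a semantic match when its embedding
    # lies within the configured distance of the query embedding.
    return cosine_distance(query_vec, cached_vec) <= distance_threshold


# Nearly parallel embeddings -> hit; orthogonal embeddings -> miss.
print(is_semantic_hit([1.0, 0.0], [0.98, 0.05]))  # True
print(is_semantic_hit([1.0, 0.0], [0.0, 1.0]))    # False
```

Lowering `distance_threshold` makes matching stricter (fewer, more precise cache hits); raising it makes matching looser.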