```python
RedisSemanticCache(
    embeddings: Embeddings,
    redis_url: str = 'redis://localhost:6379',
    distance_threshold: float = 0.2,
    ttl: Optional[int] = None,
    name: Optional[str] = _DEFAULT_CACHE_NAME,
    prefix: Optional[str] = _DEFAULT_CACHE_PREFIX,
    redis_client: Optional[Redis] = None,
)
```

| Name | Type | Description |
|---|---|---|
| embeddings* | Embeddings | The embedding function to use for encoding prompts. |
| redis_url | str | Default: `'redis://localhost:6379'`. The URL of the Redis instance to connect to. |
| distance_threshold | float | Default: `0.2`. The maximum vector distance for a prompt to count as a cache hit. |
| ttl | Optional[int] | Default: `None`. Time-to-live for cache entries, in seconds. |
| name | Optional[str] | Default: `_DEFAULT_CACHE_NAME`. Name for the cache index. Defaults to `'llmcache'`. |
| prefix | Optional[str] | Default: `_DEFAULT_CACHE_PREFIX`. Prefix for all keys stored in Redis. Defaults to `'llmcache'`. |
| redis_client | Optional[Redis] | Default: `None`. An existing Redis client instance to use instead of creating a new connection. |
Redis-based semantic cache implementation for LangChain.

This class provides a semantic caching mechanism using Redis and vector similarity search. It stores and retrieves language model responses based on the semantic similarity of prompts rather than exact string matching, so a reworded prompt can still reuse an earlier response.
Example:

```python
from langchain_redis import RedisSemanticCache
from langchain_openai import OpenAIEmbeddings
from langchain_core.globals import set_llm_cache

embeddings = OpenAIEmbeddings()
semantic_cache = RedisSemanticCache(
    embeddings=embeddings,
    redis_url="redis://localhost:6379",
    distance_threshold=0.1,
)
set_llm_cache(semantic_cache)

# Now, when you use an LLM, it will automatically use this semantic cache
```
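To make the semantic-matching behavior concrete, the sketch below exercises the cache directly through the standard LangChain `BaseCache` interface (`update` and `lookup`), which `RedisSemanticCache` implements. The prompt strings and the `"fake-llm-config"` value are illustrative only; in normal use LangChain supplies the serialized LLM configuration as `llm_string`.

```python
from langchain_core.outputs import Generation
from langchain_openai import OpenAIEmbeddings
from langchain_redis import RedisSemanticCache

cache = RedisSemanticCache(embeddings=OpenAIEmbeddings())

# Store a response under one phrasing of a question.
cache.update(
    prompt="What is the capital of France?",
    llm_string="fake-llm-config",
    return_val=[Generation(text="Paris")],
)

# A differently worded but semantically similar prompt can still hit,
# provided its embedding distance is within distance_threshold.
hit = cache.lookup(
    prompt="Tell me the capital city of France",
    llm_string="fake-llm-config",
)
print(hit)  # e.g. [Generation(text="Paris")] on a hit, None on a miss
```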
Note:
If `redis_client` is provided, `redis_url` is ignored.
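When an application already manages its own Redis connection, that client can be passed in directly. A minimal sketch follows, assuming a local Redis instance; the connection settings, TTL, index name, and key prefix are hypothetical values chosen for illustration, not requirements:

```python
from redis import Redis
from langchain_openai import OpenAIEmbeddings
from langchain_redis import RedisSemanticCache

# Reuse an application-managed connection; redis_url is ignored
# when redis_client is supplied.
client = Redis(host="localhost", port=6379, db=0)

semantic_cache = RedisSemanticCache(
    embeddings=OpenAIEmbeddings(),
    redis_client=client,
    ttl=3600,               # cache entries expire after one hour
    name="my_llmcache",     # hypothetical index name
    prefix="my_llmcache",   # hypothetical key prefix
)
```

Setting `ttl` bounds how stale a cached response can get; leaving it at `None` keeps entries until they are explicitly removed.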