Redis cache implementation for LangChain.
This class provides a Redis-based caching mechanism for LangChain, allowing storage and retrieval of language model responses.
Example:
from langchain_redis import RedisCache
from langchain_core.globals import set_llm_cache
# Create a Redis cache instance
redis_cache = RedisCache(redis_url="redis://localhost:6379", ttl=3600)
# Set it as the global LLM cache
set_llm_cache(redis_cache)
# Now, when you use an LLM, it will automatically use this cache
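Continuing the example above, a minimal sketch of what the cached call path might look like. The model choice (ChatOpenAI) and the prompt are illustrative assumptions; any LangChain chat model or LLM participates in the global cache the same way.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

# First call hits the model and writes the response to Redis.
first = llm.invoke("What is the capital of France?")

# An identical second call is served from the Redis cache
# (until the 3600-second TTL set above expires).
second = llm.invoke("What is the capital of France?")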
Note:
    All keys stored in Redis are namespaced with a configurable prefix.
    Instead of redis_url, an existing Redis client instance can be supplied;
    if a client is provided, redis_url is ignored.
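A sketch of reusing an existing Redis client, for example one you already manage with custom connection pooling. The keyword used to pass the client (redis_client below) is an assumption; check the RedisCache constructor signature in your installed version for the exact parameter name.
from redis import Redis
from langchain_redis import RedisCache

# Reuse a client you already manage elsewhere in the application.
client = Redis.from_url("redis://localhost:6379")

# redis_client is an assumed keyword name; when a client is passed,
# redis_url is ignored, per the note above.
redis_cache = RedisCache(redis_client=client, ttl=3600)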