# RedisSemanticCache

> **Class** in `langchain_redis`

📖 [View in docs](https://reference.langchain.com/python/langchain-redis/cache/RedisSemanticCache)

Redis-based semantic cache implementation for LangChain.

This class provides a semantic caching mechanism using Redis and vector similarity
search. It allows for storing and retrieving language model responses based on the
semantic similarity of prompts, rather than exact string matching.

## Signature

```python
RedisSemanticCache(
    self,
    embeddings: Embeddings,
    redis_url: str = 'redis://localhost:6379',
    distance_threshold: float = 0.2,
    ttl: Optional[int] = None,
    name: Optional[str] = _DEFAULT_CACHE_NAME,
    prefix: Optional[str] = _DEFAULT_CACHE_PREFIX,
    redis_client: Optional[Redis] = None,
)
```

## Description

**Example:**

```python
from langchain_redis import RedisSemanticCache
from langchain_openai import OpenAIEmbeddings
from langchain_core.globals import set_llm_cache

embeddings = OpenAIEmbeddings()
semantic_cache = RedisSemanticCache(
    embeddings=embeddings,
    redis_url="redis://localhost:6379",
    distance_threshold=0.1
)

set_llm_cache(semantic_cache)

# Now, when you use an LLM, it will automatically use this semantic cache
```

**Note:**

- This cache uses vector similarity search to find semantically similar prompts.
- The `distance_threshold` determines how similar a prompt must be to trigger
    a cache hit.
- Lowering the `distance_threshold` increases precision but may reduce cache hits.
- The cache uses the RedisVL library for efficient vector storage and retrieval.
- Semantic caching can be more flexible than exact matching, allowing cache hits
    for prompts that are semantically similar but not identical.

## Parameters

| Name | Type | Required | Description |
|------|------|----------|-------------|
| `embeddings` | `Embeddings` | Yes | The embedding function to use for encoding prompts. |
| `redis_url` | `str` | No | The URL of the Redis instance to connect to. (default: `'redis://localhost:6379'`) |
| `distance_threshold` | `float` | No | The maximum distance for considering a cache hit. (default: `0.2`) |
| `ttl` | `Optional[int]` | No | Time-to-live for cache entries in seconds. (default: `None`) |
| `name` | `Optional[str]` | No | Name for the cache index. Defaults to `'llmcache'`. (default: `_DEFAULT_CACHE_NAME`) |
| `prefix` | `Optional[str]` | No | Prefix for all keys stored in Redis. Defaults to `'llmcache'`. (default: `_DEFAULT_CACHE_PREFIX`) |
| `redis_client` | `Optional[Redis]` | No | An existing Redis client instance. If provided, `redis_url` is ignored. (default: `None`) |
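
The `ttl` parameter gives cache entries a bounded lifetime in seconds. A minimal plain-Python sketch of the semantics (Redis itself enforces this server-side via key expiry; the class names and timings here are illustrative only):

```python
import time

class TTLStore:
    # Toy illustration of TTL semantics: entries expire `ttl` seconds after insert.
    def __init__(self, ttl=None):
        self.ttl = ttl
        self._data = {}  # key -> (value, stored_at)

    def set(self, key, value):
        self._data[key] = (value, time.monotonic())

    def get(self, key):
        if key not in self._data:
            return None
        value, stored_at = self._data[key]
        if self.ttl is not None and time.monotonic() - stored_at > self.ttl:
            del self._data[key]  # expired, analogous to Redis key expiry
            return None
        return value

store = TTLStore(ttl=0.05)
store.set("prompt-hash", "cached response")
print(store.get("prompt-hash"))  # fresh entry -> "cached response"
time.sleep(0.1)
print(store.get("prompt-hash"))  # expired entry -> None
```

With `ttl=None` (the default), entries persist until evicted or explicitly cleared.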

## Extends

- `BaseCache`

## Constructors

```python
__init__(
    self,
    embeddings: Embeddings,
    redis_url: str = 'redis://localhost:6379',
    distance_threshold: float = 0.2,
    ttl: Optional[int] = None,
    name: Optional[str] = _DEFAULT_CACHE_NAME,
    prefix: Optional[str] = _DEFAULT_CACHE_PREFIX,
    redis_client: Optional[Redis] = None,
)
```

| Name | Type |
|------|------|
| `embeddings` | `Embeddings` |
| `redis_url` | `str` |
| `distance_threshold` | `float` |
| `ttl` | `Optional[int]` |
| `name` | `Optional[str]` |
| `prefix` | `Optional[str]` |
| `redis_client` | `Optional[Redis]` |


## Properties

- `redis`
- `embeddings`
- `prefix`
- `cache`

## Methods

- [`lookup()`](https://reference.langchain.com/python/langchain-redis/cache/RedisSemanticCache/lookup)
- [`update()`](https://reference.langchain.com/python/langchain-redis/cache/RedisSemanticCache/update)
- [`clear()`](https://reference.langchain.com/python/langchain-redis/cache/RedisSemanticCache/clear)
- [`name()`](https://reference.langchain.com/python/langchain-redis/cache/RedisSemanticCache/name)
- [`alookup()`](https://reference.langchain.com/python/langchain-redis/cache/RedisSemanticCache/alookup)
- [`aupdate()`](https://reference.langchain.com/python/langchain-redis/cache/RedisSemanticCache/aupdate)
- [`aclear()`](https://reference.langchain.com/python/langchain-redis/cache/RedisSemanticCache/aclear)
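
To make the `lookup()`/`update()`/`clear()` contract concrete, here is a toy in-memory stand-in that mimics the same interface shape (this is NOT the Redis implementation; the class, the bag-of-keywords "embedding", and the distance metric are all invented for illustration):

```python
import math

class ToySemanticCache:
    """In-memory stand-in mimicking a semantic cache's lookup/update/clear contract."""

    def __init__(self, embed, distance_threshold=0.2):
        self.embed = embed                # callable: str -> list[float]
        self.distance_threshold = distance_threshold
        self._entries = []                # (vector, llm_string, response)

    @staticmethod
    def _distance(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return 1.0 - dot / (na * nb)

    def update(self, prompt, llm_string, return_val):
        # Store the prompt's embedding alongside the LLM response.
        self._entries.append((self.embed(prompt), llm_string, return_val))

    def lookup(self, prompt, llm_string):
        # Return the response of the closest stored prompt within the threshold.
        vec = self.embed(prompt)
        best = None
        for stored_vec, stored_llm, response in self._entries:
            if stored_llm != llm_string:
                continue  # responses are only reused for the same model config
            d = self._distance(vec, stored_vec)
            if d <= self.distance_threshold and (best is None or d < best[0]):
                best = (d, response)
        return best[1] if best is not None else None

    def clear(self):
        self._entries.clear()

# Toy "embedding": counts of two keywords, just to keep the demo deterministic.
def toy_embed(text):
    t = text.lower()
    return [t.count("cat") + 0.1, t.count("dog") + 0.1]

cache = ToySemanticCache(toy_embed, distance_threshold=0.1)
cache.update("tell me about cats", "llm-v1", ["Cats are great."])
print(cache.lookup("tell me about a cat", "llm-v1"))  # semantically close -> hit
print(cache.lookup("tell me about dogs", "llm-v1"))   # too far -> None (miss)
```

The real class follows the same flow against Redis: `update()` writes the prompt embedding and response, `lookup()` runs a vector similarity search filtered by `distance_threshold`, and the `a*` variants are the async counterparts.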

---

[View source on GitHub](https://github.com/langchain-ai/langchain-redis/blob/17794ab183d4abde98747360f251478088836347/libs/redis/langchain_redis/cache.py#L330)