# CacheBackedEmbeddings

> **Class** in `langchain_classic`

📖 [View in docs](https://reference.langchain.com/python/langchain-classic/embeddings/cache/CacheBackedEmbeddings)

Interface for caching results from embedding models.

The interface works with any store that implements the abstract
`BaseStore` interface, accepting keys of type `str` and values of type
`list[float]`.

If needed, the interface can be extended with other implementations
of the value serializer and deserializer, as well as the key encoder.

Note that by default only document embeddings are cached. To also cache
query embeddings, pass a `query_embedding_store` to the constructor.
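The split between document and query caching can be sketched with a minimal, self-contained model of the pattern (plain dicts stand in for `BaseStore`, and `FakeEmbedder` is a hypothetical stand-in for the underlying embedding model; names and details here are illustrative, not the actual implementation):

```python
class FakeEmbedder:
    """Stand-in for the underlying embedding model; counts calls."""

    def __init__(self):
        self.calls = 0

    def embed(self, texts):
        self.calls += 1
        return [[float(len(t))] for t in texts]


class SketchCacheBackedEmbeddings:
    """Minimal sketch: caches document embeddings; queries only if a
    query store is supplied, mirroring the behavior described above."""

    def __init__(self, underlying, doc_store, query_store=None):
        self.underlying = underlying
        self.doc_store = doc_store      # always used for documents
        self.query_store = query_store  # None => queries are not cached

    def embed_documents(self, texts):
        missing = [t for t in texts if t not in self.doc_store]
        if missing:
            for t, vec in zip(missing, self.underlying.embed(missing)):
                self.doc_store[t] = vec
        return [self.doc_store[t] for t in texts]

    def embed_query(self, text):
        if self.query_store is None:
            return self.underlying.embed([text])[0]
        if text not in self.query_store:
            self.query_store[text] = self.underlying.embed([text])[0]
        return self.query_store[text]
```

With no query store, repeated `embed_query` calls hit the underlying model every time, while repeated `embed_documents` calls are served from the cache.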

## Signature

```python
CacheBackedEmbeddings(
    self,
    underlying_embeddings: Embeddings,
    document_embedding_store: BaseStore[str, list[float]],
    *,
    batch_size: int | None = None,
    query_embedding_store: BaseStore[str, list[float]] | None = None,
)
```

## Parameters

| Name | Type | Required | Description |
|------|------|----------|-------------|
| `underlying_embeddings` | `Embeddings` | Yes | The embedder to use for computing embeddings. |
| `document_embedding_store` | `BaseStore[str, list[float]]` | Yes | The store to use for caching document embeddings. |
| `batch_size` | `int \| None` | No | The number of documents to embed between store updates. (default: `None`) |
| `query_embedding_store` | `BaseStore[str, list[float]] \| None` | No | The store to use for caching query embeddings. If `None`, query embeddings are not cached. (default: `None`) |

## Extends

- `Embeddings`

## Constructors

```python
__init__(
    self,
    underlying_embeddings: Embeddings,
    document_embedding_store: BaseStore[str, list[float]],
    *,
    batch_size: int | None = None,
    query_embedding_store: BaseStore[str, list[float]] | None = None,
) -> None
```

| Name | Type |
|------|------|
| `underlying_embeddings` | `Embeddings` |
| `document_embedding_store` | `BaseStore[str, list[float]]` |
| `batch_size` | `int \| None` |
| `query_embedding_store` | `BaseStore[str, list[float]] \| None` |


## Properties

- `document_embedding_store`
- `query_embedding_store`
- `underlying_embeddings`
- `batch_size`

## Methods

- [`embed_documents()`](https://reference.langchain.com/python/langchain-classic/embeddings/cache/CacheBackedEmbeddings/embed_documents)
- [`aembed_documents()`](https://reference.langchain.com/python/langchain-classic/embeddings/cache/CacheBackedEmbeddings/aembed_documents)
- [`embed_query()`](https://reference.langchain.com/python/langchain-classic/embeddings/cache/CacheBackedEmbeddings/embed_query)
- [`aembed_query()`](https://reference.langchain.com/python/langchain-classic/embeddings/cache/CacheBackedEmbeddings/aembed_query)
- [`from_bytes_store()`](https://reference.langchain.com/python/langchain-classic/embeddings/cache/CacheBackedEmbeddings/from_bytes_store)

---

[View source on GitHub](https://github.com/langchain-ai/langchain/blob/fb6ab993a73180538f6cca876b3c85d46c08845f/libs/langchain/langchain_classic/embeddings/cache.py#L108)