langchain-classic / embeddings / cache / CacheBackedEmbeddings
Class · Since v1.0

    CacheBackedEmbeddings

CacheBackedEmbeddings(
    self,
    underlying_embeddings: Embeddings,
    document_embedding_store: BaseStore[str, list[float]],
    *,
    batch_size: int | None = None,
    query_embedding_store: BaseStore[str, list[float]] | None = None,
)

    Bases

    Embeddings

    Used in Docs

    • BigTableByteStore integration
    • Embedding model integrations

Interface for caching results from embedding models.

The interface works with any store that implements the abstract store interface, accepting keys of type str and values of type list[float].

If need be, the interface can be extended to accept other implementations of the value serializer and deserializer, as well as the key encoder.

Note that by default only document embeddings are cached. To cache query embeddings too, pass a query_embedding_store to the constructor.
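
A minimal sketch of direct construction follows. The import path is an assumption for the langchain-classic package (older releases expose the class as langchain.embeddings.CacheBackedEmbeddings), and langchain_core's InMemoryStore and FakeEmbeddings stand in for a real store and embedding model.

    from langchain_core.embeddings import FakeEmbeddings
    from langchain_core.stores import InMemoryStore

    # Import path is an assumption for the langchain-classic package; on older
    # versions use `from langchain.embeddings import CacheBackedEmbeddings`.
    from langchain_classic.embeddings import CacheBackedEmbeddings

    underlying = FakeEmbeddings(size=16)   # any Embeddings implementation
    doc_store = InMemoryStore()            # BaseStore[str, list[float]] for documents
    query_store = InMemoryStore()          # optional: enables query caching

    embedder = CacheBackedEmbeddings(
        underlying,
        doc_store,
        batch_size=32,                     # write results to the store every 32 documents
        query_embedding_store=query_store,
    )

    vectors = embedder.embed_documents(["hello", "goodbye"])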

Parameters

underlying_embeddings* (Embeddings)
    The embedder to use for computing embeddings.

document_embedding_store* (BaseStore[str, list[float]])
    The store to use for caching document embeddings.

batch_size (int | None, default: None)
    The number of documents to embed between store updates.

query_embedding_store (BaseStore[str, list[float]] | None, default: None)
    The store to use for caching query embeddings. If None, query embeddings are not cached.
constructor
__init__

Name                        Type
underlying_embeddings       Embeddings
document_embedding_store    BaseStore[str, list[float]]
batch_size                  int | None
query_embedding_store       BaseStore[str, list[float]] | None
attribute
document_embedding_store: BaseStore[str, list[float]]

attribute
query_embedding_store: BaseStore[str, list[float]] | None

attribute
underlying_embeddings: Embeddings

attribute
batch_size: int | None
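
The wrapped components remain accessible as plain attributes. Continuing the constructor sketch above (the names come from that sketch, not from this page):

    # Inspect the pieces the cache was built from.
    print(type(embedder.underlying_embeddings).__name__)   # FakeEmbeddings
    print(embedder.batch_size)                              # 32
    print(embedder.query_embedding_store is query_store)    # True
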
    method
    embed_documents

    Embed a list of texts.

The method first checks the cache for each text's embedding. Texts whose embeddings are not found are embedded with the underlying embedder, and the results are stored in the cache.
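
A short sketch of this cache-first behavior, continuing the constructor sketch above; it assumes that, when the class is constructed directly, the raw texts serve as store keys (from_bytes_store adds key encoding on top).

    texts = ["alpha", "beta"]
    embedder.embed_documents(texts)

    # The document store now holds one vector per text...
    assert all(v is not None for v in doc_store.mget(texts))

    # ...so a repeat call is served from the cache, and only genuinely new
    # texts ("gamma" here) are sent to the underlying embedder.
    embedder.embed_documents(["alpha", "gamma"])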

    method
    aembed_documents

    Embed a list of texts.

The method first checks the cache for each text's embedding. Texts whose embeddings are not found are embedded with the underlying embedder, and the results are stored in the cache.
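
The async variant behaves the same way. A minimal sketch, reusing the embedder from the earlier sketch:

    import asyncio

    async def main() -> None:
        # Same cache-first logic as embed_documents, but awaitable.
        vectors = await embedder.aembed_documents(["hello", "goodbye"])
        print(len(vectors), len(vectors[0]))

    asyncio.run(main())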

    method
    embed_query

    Embed query text.

By default, this method does not cache queries. To enable query caching, pass a query_embedding_store when initializing the embedder.
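
A sketch of query caching, again reusing the query_store passed to the constructor in the first sketch:

    q = "what is caching?"
    first = embedder.embed_query(q)
    second = embedder.embed_query(q)    # repeat call is served from query_store
    assert query_store.mget([q])[0] is not None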

    method
    aembed_query

    Embed query text.

By default, this method does not cache queries. To enable query caching, pass a query_embedding_store when initializing the embedder.

    method
    from_bytes_store

    On-ramp that adds the necessary serialization and encoding to the store.
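
Because this on-ramp wraps a plain byte store, it pairs naturally with persistent stores such as LocalFileStore. A minimal sketch follows; the LocalFileStore import path and the namespace, batch_size, and query_embedding_cache keyword arguments are assumptions drawn from common LangChain releases rather than from this page, so check your installed version.

    # Sketch of the byte-store on-ramp: the store holds raw bytes, and
    # from_bytes_store layers key encoding and vector (de)serialization on top.
    # CacheBackedEmbeddings is imported as in the first sketch.
    from langchain.storage import LocalFileStore            # assumed import path
    from langchain_core.embeddings import DeterministicFakeEmbedding

    underlying = DeterministicFakeEmbedding(size=16)
    byte_store = LocalFileStore("./embedding_cache/")

    cached = CacheBackedEmbeddings.from_bytes_store(
        underlying,
        byte_store,
        namespace="fake-16",            # keep caches for different models separate
        batch_size=100,
        query_embedding_cache=True,     # assumed keyword: also cache query embeddings
    )

    cached.embed_documents(["first document", "second document"])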
