LangChain Reference
Python › langchain-core › embeddings › fake › DeterministicFakeEmbedding
Class ● Since v0.1

    DeterministicFakeEmbedding

    DeterministicFakeEmbedding()

Bases: Embeddings, BaseModel

    Used in Docs

    • Elasticsearch integration
    • Vector store integrations

Attributes

size: int — The size of the embedding vector.

Methods

embed_documents

embed_query

Inherited from Embeddings

aembed_documents — Asynchronous Embed search docs.

aembed_query — Asynchronous Embed query text.

Deterministic fake embedding model for unit testing purposes.

This embedding model creates embeddings by sampling from a normal distribution with a seed based on the hash of the text.

Toy model: Do not use this outside of testing, as it is not a real embedding model.
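To make the seeding scheme concrete, here is a stdlib-only sketch of the idea: seed a random generator from the text itself, then draw Gaussian samples. This is an illustrative toy, not langchain-core's actual implementation (which uses its own hashing and sampling internally); the function name `fake_embed` is made up for this example.

```python
import random

def fake_embed(text: str, size: int = 100) -> list[float]:
    # Seeding Random with the string itself is deterministic across
    # processes (unlike hash(), which is salted per interpreter run),
    # so the same text always maps to the same vector.
    rng = random.Random(text)
    # Sample each coordinate from a standard normal distribution.
    return [rng.gauss(0.0, 1.0) for _ in range(size)]
```

Because the seed depends only on the text, identical inputs yield identical vectors while different inputs yield (almost surely) different ones.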

    Instantiate:

    from langchain_core.embeddings import DeterministicFakeEmbedding
    
    embed = DeterministicFakeEmbedding(size=100)

    Embed single text:

    input_text = "The meaning of life is 42"
    vector = embed.embed_query(input_text)
    print(vector[:3])
    [-0.700234640213188, -0.581266257710429, -1.1328482266445354]

    Embed multiple texts:

    input_texts = ["Document 1...", "Document 2..."]
    vectors = embed.embed_documents(input_texts)
    print(len(vectors))
    # The first 3 coordinates for the first vector
    print(vectors[0][:3])
    2
    [-0.5670477847544458, -0.31403828652395727, -0.5840547508955257]
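The point of determinism in tests is that similarity search over fake embeddings gives stable, reproducible results across runs. The following self-contained sketch (using the same hypothetical stdlib-only `fake_embed` helper as above, not the real class) shows a tiny nearest-neighbor check whose outcome never changes between test runs:

```python
import random

def fake_embed(text: str, size: int = 8) -> list[float]:
    # Deterministic: same text, same seed, same vector.
    rng = random.Random(text)
    return [rng.gauss(0.0, 1.0) for _ in range(size)]

def top_match(query: str, docs: list[str]) -> str:
    # Rank documents by dot-product similarity to the query vector.
    q = fake_embed(query)

    def dot(v: list[float], w: list[float]) -> float:
        return sum(a * b for a, b in zip(v, w))

    return max(docs, key=lambda d: dot(q, fake_embed(d)))

docs = ["Document 1...", "Document 2...", "Document 3..."]
# Which document "wins" is arbitrary (the vectors are random noise),
# but it is the *same* winner on every run, so assertions on it are stable.
assert top_match("The meaning of life is 42", docs) == \
       top_match("The meaning of life is 42", docs)
```

This stability is what makes a deterministic fake embedder useful for unit-testing vector store and retriever plumbing without calling a real model.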