@langchain/core / vectorstores / VectorStoreRetriever

Constructor · Since v1.0

constructor

Initializes a new instance of VectorStoreRetriever with the specified configuration.

This constructor configures the retriever to interact with a given VectorStore and supports different retrieval strategies, including similarity search and maximal marginal relevance (MMR) search. Various options allow customization of the number of documents retrieved per query, filtering based on conditions, and fine-tuning MMR-specific parameters.

constructor<
  V extends VectorStoreInterface = VectorStoreInterface
>(fields: VectorStoreRetrieverInput<V>): VectorStoreRetriever<V>
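As a sketch of typical construction (the concrete store and embeddings here — MemoryVectorStore and OpenAIEmbeddings — and their import paths are illustrative assumptions; any VectorStoreInterface implementation works):

```typescript
import { VectorStoreRetriever } from "@langchain/core/vectorstores";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

const vectorStore = new MemoryVectorStore(new OpenAIEmbeddings());

// Similarity search (the default): return the 4 most similar documents.
const similarityRetriever = new VectorStoreRetriever({
  vectorStore,
  k: 4,
  searchType: "similarity",
});

// MMR search: fetch 20 candidates, then select 5, balancing
// relevance against diversity (lambda closer to 1 favors relevance).
const mmrRetriever = new VectorStoreRetriever({
  vectorStore,
  k: 5,
  searchType: "mmr",
  searchKwargs: { fetchK: 20, lambda: 0.5 },
});
```

In practice, vector stores also expose an `asRetriever()` convenience method that builds this retriever from the same options, so direct construction is rarely required.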

Parameters

fields (required): VectorStoreRetrieverInput<V>

Configuration options for setting up the retriever:

  • vectorStore (required): The VectorStore instance implementing VectorStoreInterface that will be used to store and retrieve document embeddings. This is the core component of the retriever, enabling vector-based similarity and MMR searches.

  • k (optional): Specifies the number of documents to retrieve per search query. If not provided, defaults to 4. This count determines the number of most relevant documents returned for each search operation, balancing performance with comprehensiveness.

  • searchType (optional): Defines the search approach used by the retriever, allowing for flexibility between two methods:

    • "similarity" (default): A similarity-based search, retrieving documents with high vector similarity to the query. This type prioritizes relevance and is often used when diversity among results is less critical.
    • "mmr": Maximal Marginal Relevance search, which combines relevance with diversity. MMR is useful for scenarios where varied content is essential, as it selects results that both match the query and introduce content diversity.
  • filter (optional): A filter of type FilterType, defined by the vector store, that allows for refined and targeted search results. This filter applies specified conditions to limit which documents are eligible for retrieval, offering control over the scope of results.

  • searchKwargs (optional, applicable only if searchType is "mmr"): Additional settings for configuring MMR-specific behavior. These parameters allow further tuning of the MMR search process:

    • fetchK: The initial number of documents fetched from the vector store before the MMR algorithm is applied. Fetching a larger set enables the algorithm to select a more diverse subset of documents.
    • lambda: A parameter controlling the relevance-diversity balance, where 0 emphasizes diversity and 1 prioritizes relevance. Intermediate values provide a blend of the two, allowing customization based on the importance of content variety relative to query relevance.
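To make the fetchK/lambda interaction concrete, here is a self-contained sketch of the MMR selection loop over pre-computed embeddings (this is an illustration of the algorithm, not the library's internal implementation):

```typescript
// Cosine similarity between two dense vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Greedy MMR: from `candidates` (the fetchK vectors), pick k indices.
// Each step maximizes lambda * relevance - (1 - lambda) * redundancy,
// where redundancy is the max similarity to anything already selected.
function mmr(
  query: number[],
  candidates: number[][],
  k: number,
  lambda: number
): number[] {
  const selected: number[] = [];
  const remaining = candidates.map((_, i) => i);
  while (selected.length < k && remaining.length > 0) {
    let bestIdx = -1;
    let bestScore = -Infinity;
    for (const i of remaining) {
      const relevance = cosineSimilarity(query, candidates[i]);
      const redundancy = selected.length
        ? Math.max(...selected.map((j) => cosineSimilarity(candidates[i], candidates[j])))
        : 0;
      const score = lambda * relevance - (1 - lambda) * redundancy;
      if (score > bestScore) {
        bestScore = score;
        bestIdx = i;
      }
    }
    selected.push(bestIdx);
    remaining.splice(remaining.indexOf(bestIdx), 1);
  }
  return selected;
}

const query = [1, 0];
const docs = [[1, 0], [0.9, 0.1], [0, 1]];
// lambda = 1 reduces to pure similarity: picks the two near-duplicates.
const relevanceOnly = mmr(query, docs, 2, 1.0); // [0, 1]
// Lower lambda penalizes redundancy: the orthogonal doc displaces the duplicate.
const diversified = mmr(query, docs, 2, 0.3); // [0, 2]
```

With lambda = 1 the redundancy term vanishes and the two nearly identical vectors are both returned; at lambda = 0.3 the penalty for resembling an already-selected document outweighs the second duplicate's relevance, so the dissimilar document is chosen instead.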