MaxMarginalRelevanceSearchOptions

Type · Since v1.0 · @langchain/core/vectorstores
Properties

  • fetchK: number
  • filter: FilterType
  • k: number
  • lambda: number

Options for configuring a maximal marginal relevance (MMR) search.

MMR search optimizes for both similarity to the query and diversity among the results, balancing the retrieval of relevant documents with variation in the content returned.

Fields:

  • fetchK (optional): The initial number of documents to retrieve from the vector store before the MMR algorithm is applied. This larger candidate pool gives the algorithm room to select final results that balance relevance to the query with diversity among the documents.

  • filter (optional): A filter of type FilterType to refine the search results, allowing additional conditions to target specific subsets of documents.

  • k: The number of documents to return in the final results. Defaults to 4 if not specified.

  • lambda (optional): A value between 0 and 1 that determines the balance between relevance and diversity:

    • A lambda of 0 emphasizes diversity, maximizing variation in the returned content.
    • A lambda of 1 emphasizes similarity to the query, focusing on relevance.
    • Values between 0 and 1 provide a mix of relevance and diversity.

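As a minimal sketch of how these options are typically passed, the following assumes a vector store that implements maxMarginalRelevanceSearch (MemoryVectorStore from the langchain package does) and an OpenAI API key available in the environment; the corpus, metadata, and query are purely illustrative:

import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

// Illustrative corpus: two near-duplicate documents plus an unrelated one.
const store = await MemoryVectorStore.fromTexts(
  [
    "Mitochondria are the powerhouse of the cell.",
    "Mitochondria produce most of the cell's ATP.",
    "The Eiffel Tower is in Paris.",
  ],
  [{ source: "bio" }, { source: "bio" }, { source: "travel" }],
  new OpenAIEmbeddings()
);

// Fetch up to fetchK = 10 candidates by similarity, then keep the k = 2 that
// best trade off relevance against diversity (lambda = 0.5 weights them equally).
const results = await store.maxMarginalRelevanceSearch("cell energy", {
  k: 2,
  fetchK: 10,
  lambda: 0.5,
});

With this toy corpus, a lambda near 1 would likely return both mitochondria sentences, while a lower lambda pushes the second slot toward the unrelated document.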