Calculate maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to the query AND diversity among selected documents.
```python
maximal_marginal_relevance(
    query_embedding: np.ndarray,
    embedding_list: List[np.ndarray],
    lambda_mult: float = 0.5,
    k: int = 4
) -> List[int]
```

Example:
```python
from langchain_redis import RedisVectorStore
from langchain_openai import OpenAIEmbeddings
import numpy as np

embeddings = OpenAIEmbeddings()
vector_store = RedisVectorStore(
    index_name="langchain-demo",
    embedding=embeddings,
    redis_url="redis://localhost:6379",
)

query = "What is the capital of France?"
query_embedding = embeddings.embed_query(query)

# Assuming you have a list of documents to select from
documents = [
    "Paris is the capital of France.",
    "The capital of France is Paris.",
    "Berlin is the capital of Germany.",
]
doc_embeddings = embeddings.embed_documents(documents)

selected_indices = vector_store.maximal_marginal_relevance(
    query_embedding=np.array(query_embedding),
    embedding_list=[np.array(emb) for emb in doc_embeddings],
    lambda_mult=0.5,
    k=2,
)

for idx in selected_indices:
    print(f"Selected document: {documents[idx]}")
```

| Name | Type | Description |
|---|---|---|
| query_embedding* | np.ndarray | Embedding of the query text. |
| embedding_list* | List[np.ndarray] | List of embeddings to select from. |
| lambda_mult | float | Number between 0 and 1 that determines the degree of diversity among the results (0 corresponds to maximum diversity, 1 to minimum diversity). Default: 0.5 |
| k | int | Number of results to return. Default: 4 |
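
To illustrate how `lambda_mult` trades relevance against diversity, here is a minimal NumPy sketch of the MMR selection loop. This is an illustrative reimplementation under the usual MMR formulation (cosine similarity, greedy selection), not the library's actual code; the function name `mmr_select` is hypothetical.

```python
import numpy as np

def mmr_select(query_embedding, embedding_list, lambda_mult=0.5, k=4):
    """Greedily pick up to k indices, balancing query relevance and diversity."""
    embeddings = np.array(embedding_list, dtype=float)
    query = np.asarray(query_embedding, dtype=float)
    # Cosine similarity of each candidate to the query.
    sim_to_query = embeddings @ query / (
        np.linalg.norm(embeddings, axis=1) * np.linalg.norm(query)
    )
    selected = []
    candidates = list(range(len(embeddings)))
    while candidates and len(selected) < k:
        if not selected:
            # First pick: most similar to the query.
            best = candidates[int(np.argmax(sim_to_query[candidates]))]
        else:
            chosen = embeddings[selected]
            scores = []
            for i in candidates:
                # Highest cosine similarity to any already-selected document.
                sims = chosen @ embeddings[i] / (
                    np.linalg.norm(chosen, axis=1) * np.linalg.norm(embeddings[i])
                )
                # MMR score: relevance weighted against redundancy.
                scores.append(
                    lambda_mult * sim_to_query[i]
                    - (1 - lambda_mult) * sims.max()
                )
            best = candidates[int(np.argmax(scores))]
        selected.append(best)
        candidates.remove(best)
    return selected
```

With `lambda_mult=1.0` the loop reduces to plain similarity ranking; lowering it increasingly penalizes candidates close to documents already picked, which is why a near-duplicate of the first result gets skipped at `lambda_mult=0.5`.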