
langchain-mongodb

Integrate your operational database and vector search in a single, unified, fully managed platform with full vector database capabilities on MongoDB Atlas.

Store your operational data, metadata, and vector embeddings in our VectorStore, MongoDBAtlasVectorSearch. Insert it into a Chain via a Vector, FullText, or Hybrid Retriever.

Modules:

Name Description
agent_toolkit
cache

LangChain MongoDB Caches.

chat_message_histories
docstores
graphrag
index

Search Index Commands

indexes
loaders
pipelines

Aggregation pipeline components used in Atlas Full-Text, Vector, and Hybrid Search

retrievers

Search Retrievers of various types.

utils

Various Utility Functions

vectorstores

Classes:

Name Description
MongoDBAtlasSemanticCache

MongoDB Atlas Semantic cache.

MongoDBCache

MongoDB Atlas cache

MongoDBChatMessageHistory

Chat message history that stores history in MongoDB.

MongoDBAtlasVectorSearch

MongoDB Atlas vector store integration.

MongoDBAtlasSemanticCache

Bases: BaseCache, MongoDBAtlasVectorSearch

MongoDB Atlas Semantic cache.

A Cache backed by a MongoDB Atlas server with vector-store support

Methods:

Name Description
add_texts

Add texts, create embeddings, and add to the Collection and index.

delete

Delete documents from VectorStore by ids.

get_by_ids

Get documents by their IDs.

aget_by_ids

Async get documents by their IDs.

adelete

Delete by vector ID or other criteria.

aadd_texts

Async run more texts through the embeddings and add to the vectorstore.

add_documents

Add documents to the vectorstore.

aadd_documents

Async run more documents through the embeddings and add to the vectorstore.

search

Return docs most similar to query using a specified search type.

asearch

Async return docs most similar to query using a specified search type.

similarity_search

Return MongoDB documents most similar to the given query.

similarity_search_with_score

Return MongoDB documents most similar to the given query and their scores.

asimilarity_search_with_score

Async run similarity search with distance.

similarity_search_with_relevance_scores

Return docs and relevance scores in the range [0, 1].

asimilarity_search_with_relevance_scores

Async return docs and relevance scores in the range [0, 1].

asimilarity_search

Async return docs most similar to query.

similarity_search_by_vector

Return docs most similar to embedding vector.

asimilarity_search_by_vector

Async return docs most similar to embedding vector.

max_marginal_relevance_search

Return documents selected using the maximal marginal relevance.

amax_marginal_relevance_search

Async return docs selected using the maximal marginal relevance.

max_marginal_relevance_search_by_vector

Return docs selected using the maximal marginal relevance.

amax_marginal_relevance_search_by_vector

Return docs selected using the maximal marginal relevance.

from_documents

Return VectorStore initialized from documents and embeddings.

afrom_documents

Async return VectorStore initialized from documents and embeddings.

from_texts

Construct a MongoDB Atlas Vector Search vector store from raw documents.

afrom_texts

Async return VectorStore initialized from texts and embeddings.

as_retriever

Return VectorStoreRetriever initialized from this VectorStore.

from_connection_string

Construct a MongoDB Atlas Vector Search vector store from a MongoDB connection URI.

close

Close the resources used by the MongoDBAtlasVectorSearch.

bulk_embed_and_insert_texts

Bulk insert single batch of texts, embeddings, and optionally ids.

create_vector_search_index

Creates a MongoDB Atlas vectorSearch index for the VectorStore

alookup

Async look up based on prompt and llm_string.

aupdate

Async update cache based on prompt and llm_string.

aclear

Async clear cache that can take additional keyword arguments.

__init__

Initialize Atlas VectorSearch Cache.

lookup

Look up based on prompt and llm_string.

update

Update cache based on prompt and llm_string.

clear

Clear cache that can take additional keyword arguments.

add_texts

add_texts(
    texts: Iterable[str],
    metadatas: Optional[List[Dict[str, Any]]] = None,
    ids: Optional[List[str]] = None,
    batch_size: int = DEFAULT_INSERT_BATCH_SIZE,
    **kwargs: Any
) -> List[str]

Add texts, create embeddings, and add to the Collection and index.

Important notes on ids
  • If _id or id is a key in the metadatas dicts, pop it out and provide the values as a separate ids list (see the sketch below).
  • The ids must be unique.
  • If they are not provided, the VectorStore will create unique ones, stored internally as bson.ObjectIds and as strings in LangChain. These will appear in Document.metadata under the key '_id'.
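
For example, a minimal sketch (assuming an existing MongoDBAtlasVectorSearch instance named vector_store; the field names are illustrative) of moving _id out of the metadata dicts and into the ids argument:

.. code-block:: python

# Hypothetical metadata dicts that carry their own "_id" values.
metadatas = [{"_id": "doc-1", "genre": "fiction"}, {"_id": "doc-2", "genre": "poetry"}]

# Pop the ids out of the metadata and pass them separately.
ids = [m.pop("_id") for m in metadatas]

added_ids = vector_store.add_texts(
    texts=["first text", "second text"],
    metadatas=metadatas,
    ids=ids,
)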

Parameters:

Name Type Description Default
texts Iterable[str]

Iterable of strings to add to the vectorstore.

required
metadatas Optional[List[Dict[str, Any]]]

Optional list of metadatas associated with the texts.

None
ids Optional[List[str]]

Optional list of unique ids that will be used as index in VectorStore. See note on ids.

None
batch_size int

Number of documents to insert at a time. Tuning this may help with performance and sidestep MongoDB limits.

DEFAULT_INSERT_BATCH_SIZE

Returns:

Type Description
List[str]

List of ids added to the vectorstore.

delete

delete(
    ids: Optional[List[str]] = None, **kwargs: Any
) -> Optional[bool]

Delete documents from VectorStore by ids.

Parameters:

Name Type Description Default
ids Optional[List[str]]

List of ids to delete.

None
**kwargs Any

Other keyword arguments passed to Collection.delete_many()

{}

Returns:

Type Description
Optional[bool]

True if deletion is successful, False otherwise, None if not implemented.
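
A minimal sketch (assuming an existing vector_store instance and illustrative ids) of deleting documents by id; extra keyword arguments are passed through to Collection.delete_many():

.. code-block:: python

vector_store.delete(ids=["doc-1", "doc-2"])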

get_by_ids

get_by_ids(ids: Sequence[str]) -> list[Document]

Get documents by their IDs.

The returned documents are expected to have the ID field set to the ID of the document in the vector store.

Fewer documents may be returned than requested if some IDs are not found or if there are duplicated IDs.

Users should not assume that the order of the returned documents matches the order of the input IDs. Instead, users should rely on the ID field of the returned documents.

This method should NOT raise exceptions if no documents are found for some IDs.

Parameters:

Name Type Description Default
ids Sequence[str]

List of ids to retrieve.

required

Returns:

Type Description
list[Document]

List of Documents.

.. versionadded:: 0.6.0
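
A minimal sketch (assuming an existing vector_store instance and illustrative ids):

.. code-block:: python

docs = vector_store.get_by_ids(["doc-1", "doc-2"])
for doc in docs:
    print(doc.id, doc.page_content)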

aget_by_ids async

aget_by_ids(ids: Sequence[str]) -> list[Document]

Async get documents by their IDs.

The returned documents are expected to have the ID field set to the ID of the document in the vector store.

Fewer documents may be returned than requested if some IDs are not found or if there are duplicated IDs.

Users should not assume that the order of the returned documents matches the order of the input IDs. Instead, users should rely on the ID field of the returned documents.

This method should NOT raise exceptions if no documents are found for some IDs.

Parameters:

Name Type Description Default
ids Sequence[str]

List of ids to retrieve.

required

Returns:

Type Description
list[Document]

List of Documents.

Added in version 0.2.11

adelete async

adelete(
    ids: Optional[List[str]] = None, **kwargs: Any
) -> Optional[bool]

Delete by vector ID or other criteria.

Parameters:

Name Type Description Default
ids Optional[List[str]]

List of ids to delete.

None
**kwargs Any

Other keyword arguments that subclasses might use.

{}

Returns:

Type Description
Optional[bool]

True if deletion is successful, False otherwise, None if not implemented.

aadd_texts async

aadd_texts(
    texts: Iterable[str],
    metadatas: list[dict] | None = None,
    *,
    ids: list[str] | None = None,
    **kwargs: Any
) -> list[str]

Async run more texts through the embeddings and add to the vectorstore.

Parameters:

Name Type Description Default
texts Iterable[str]

Iterable of strings to add to the vectorstore.

required
metadatas list[dict] | None

Optional list of metadatas associated with the texts. Default is None.

None
ids list[str] | None

Optional list of IDs associated with the texts.

None
**kwargs Any

vectorstore specific parameters.

{}

Returns:

Type Description
list[str]

List of ids from adding the texts into the vectorstore.

Raises:

Type Description
ValueError

If the number of metadatas does not match the number of texts.

ValueError

If the number of ids does not match the number of texts.

add_documents

add_documents(
    documents: List[Document],
    ids: Optional[List[str]] = None,
    batch_size: int = DEFAULT_INSERT_BATCH_SIZE,
    **kwargs: Any
) -> List[str]

Add documents to the vectorstore.

Parameters:

Name Type Description Default
documents List[Document]

Documents to add to the vectorstore.

required
ids Optional[List[str]]

Optional list of unique ids that will be used as index in VectorStore. See note on ids in add_texts.

None
batch_size int

Number of documents to insert at a time. Tuning this may help with performance and sidestep MongoDB limits.

DEFAULT_INSERT_BATCH_SIZE

Returns:

Type Description
List[str]

List of IDs of the added texts.

aadd_documents async

aadd_documents(
    documents: list[Document], **kwargs: Any
) -> list[str]

Async run more documents through the embeddings and add to the vectorstore.

Parameters:

Name Type Description Default
documents list[Document]

Documents to add to the vectorstore.

required
kwargs Any

Additional keyword arguments.

{}

Returns:

Type Description
list[str]

List of IDs of the added texts.

search

search(
    query: str, search_type: str, **kwargs: Any
) -> list[Document]

Return docs most similar to query using a specified search type.

Parameters:

Name Type Description Default
query str

Input text

required
search_type str

Type of search to perform. Can be "similarity", "mmr", or "similarity_score_threshold".

required
**kwargs Any

Arguments to pass to the search method.

{}

Returns:

Type Description
list[Document]

List of Documents most similar to the query.

Raises:

Type Description
ValueError

If search_type is not one of "similarity", "mmr", or "similarity_score_threshold".
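
A minimal sketch (assuming an existing vector_store instance) of dispatching by search type; extra keyword arguments are forwarded to the underlying search method:

.. code-block:: python

docs = vector_store.search("vector indexes in MongoDB", search_type="mmr", k=3)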

asearch async

asearch(
    query: str, search_type: str, **kwargs: Any
) -> list[Document]

Async return docs most similar to query using a specified search type.

Parameters:

Name Type Description Default
query str

Input text.

required
search_type str

Type of search to perform. Can be "similarity", "mmr", or "similarity_score_threshold".

required
**kwargs Any

Arguments to pass to the search method.

{}

Returns:

Type Description
list[Document]

List of Documents most similar to the query.

Raises:

Type Description
ValueError

If search_type is not one of "similarity", "mmr", or "similarity_score_threshold".

similarity_search(
    query: str,
    k: int = 4,
    pre_filter: Optional[Dict[str, Any]] = None,
    post_filter_pipeline: Optional[List[Dict]] = None,
    oversampling_factor: int = 10,
    include_scores: bool = False,
    include_embeddings: bool = False,
    **kwargs: Any
) -> List[Document]

Return MongoDB documents most similar to the given query.

Atlas Vector Search eliminates the need to run a separate search system alongside your database.

Args:
    query: Input text of the semantic query.
    k: (Optional) Number of documents to return. Defaults to 4.
    pre_filter: List of MQL match expressions comparing an indexed field.
    post_filter_pipeline: (Optional) Pipeline of MongoDB aggregation stages to filter/process results after $vectorSearch.
    oversampling_factor: Multiple of k used when generating the number of candidates at each step in the HNSW Vector Search.
    include_scores: If True, the query score of each result will be included in metadata.
    include_embeddings: If True, the embedding vector of each result will be included in metadata.
    kwargs: Additional arguments are specific to the search_type.

Returns:

Type Description
List[Document]

List of documents most similar to the query.
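
A minimal sketch (assuming an existing vector_store instance; the pre_filter example assumes the index defines "genre" as a filter field, which is illustrative):

.. code-block:: python

docs = vector_store.similarity_search(
    query="renewable energy storage",
    k=2,
    pre_filter={"genre": {"$eq": "science"}},
    include_scores=True,
)
for doc in docs:
    print(doc.metadata.get("score"), doc.page_content)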

similarity_search_with_score

similarity_search_with_score(
    query: str,
    k: int = 4,
    pre_filter: Optional[Dict[str, Any]] = None,
    post_filter_pipeline: Optional[List[Dict]] = None,
    oversampling_factor: int = 10,
    include_embeddings: bool = False,
    **kwargs: Any
) -> List[Tuple[Document, float]]

Return MongoDB documents most similar to the given query and their scores.

Atlas Vector Search eliminates the need to run a separate search system alongside your database.

Args:
    query: Input text of the semantic query.
    k: Number of documents to return. Also known as top_k.
    pre_filter: List of MQL match expressions comparing an indexed field.
    post_filter_pipeline: (Optional) Arbitrary pipeline of MongoDB aggregation stages applied after the search is complete.
    oversampling_factor: This times k is the number of candidates chosen at each step in the HNSW Vector Search.
    include_embeddings: If True, the embedding vector of each result will be included in metadata.
    kwargs: Additional arguments are specific to the search_type.

Returns:

Type Description
List[Tuple[Document, float]]

List of documents most similar to the query and their scores.
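
A minimal sketch (assuming an existing vector_store instance):

.. code-block:: python

results = vector_store.similarity_search_with_score(query="renewable energy storage", k=2)
for doc, score in results:
    print(f"{score:.3f}  {doc.page_content}")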

asimilarity_search_with_score async

asimilarity_search_with_score(
    *args: Any, **kwargs: Any
) -> list[tuple[Document, float]]

Async run similarity search with distance.

Parameters:

Name Type Description Default
*args Any

Arguments to pass to the search method.

()
**kwargs Any

Arguments to pass to the search method.

{}

Returns:

Type Description
list[tuple[Document, float]]

List of Tuples of (doc, similarity_score).

similarity_search_with_relevance_scores

similarity_search_with_relevance_scores(
    query: str, k: int = 4, **kwargs: Any
) -> list[tuple[Document, float]]

Return docs and relevance scores in the range [0, 1].

0 is dissimilar, 1 is most similar.

Parameters:

Name Type Description Default
query str

Input text.

required
k int

Number of Documents to return. Defaults to 4.

4
**kwargs Any

kwargs to be passed to similarity search. Should include: score_threshold: Optional, a floating point value between 0 and 1 to filter the resulting set of retrieved docs.

{}

Returns:

Type Description
list[tuple[Document, float]]

List of Tuples of (doc, similarity_score).
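
A minimal sketch (assuming an existing vector_store instance) of filtering results by a relevance threshold:

.. code-block:: python

results = vector_store.similarity_search_with_relevance_scores(
    query="renewable energy storage", k=4, score_threshold=0.75
)
for doc, score in results:
    print(f"{score:.2f}  {doc.page_content}")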

asimilarity_search_with_relevance_scores async

asimilarity_search_with_relevance_scores(
    query: str, k: int = 4, **kwargs: Any
) -> list[tuple[Document, float]]

Async return docs and relevance scores in the range [0, 1].

0 is dissimilar, 1 is most similar.

Parameters:

Name Type Description Default
query str

Input text.

required
k int

Number of Documents to return. Defaults to 4.

4
**kwargs Any

kwargs to be passed to similarity search. Should include: score_threshold: Optional, a floating point value between 0 and 1 to filter the resulting set of retrieved docs.

{}

Returns:

Type Description
list[tuple[Document, float]]

List of Tuples of (doc, similarity_score)

asimilarity_search(
    query: str, k: int = 4, **kwargs: Any
) -> list[Document]

Async return docs most similar to query.

Parameters:

Name Type Description Default
query str

Input text.

required
k int

Number of Documents to return. Defaults to 4.

4
**kwargs Any

Arguments to pass to the search method.

{}

Returns:

Type Description
list[Document]

List of Documents most similar to the query.

similarity_search_by_vector

similarity_search_by_vector(
    embedding: list[float], k: int = 4, **kwargs: Any
) -> list[Document]

Return docs most similar to embedding vector.

Parameters:

Name Type Description Default
embedding list[float]

Embedding to look up documents similar to.

required
k int

Number of Documents to return. Defaults to 4.

4
**kwargs Any

Arguments to pass to the search method.

{}

Returns:

Type Description
list[Document]

List of Documents most similar to the query vector.

asimilarity_search_by_vector async

asimilarity_search_by_vector(
    embedding: list[float], k: int = 4, **kwargs: Any
) -> list[Document]

Async return docs most similar to embedding vector.

Parameters:

Name Type Description Default
embedding list[float]

Embedding to look up documents similar to.

required
k int

Number of Documents to return. Defaults to 4.

4
**kwargs Any

Arguments to pass to the search method.

{}

Returns:

Type Description
list[Document]

List of Documents most similar to the query vector.

max_marginal_relevance_search(
    query: str,
    k: int = 4,
    fetch_k: int = 20,
    lambda_mult: float = 0.5,
    pre_filter: Optional[Dict[str, Any]] = None,
    post_filter_pipeline: Optional[List[Dict]] = None,
    **kwargs: Any
) -> List[Document]

Return documents selected using the maximal marginal relevance.

Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.

Parameters:

Name Type Description Default
query str

Text to look up documents similar to.

required
k int

(Optional) number of documents to return. Defaults to 4.

4
fetch_k int

(Optional) number of documents to fetch before passing to MMR algorithm. Defaults to 20.

20
lambda_mult float

Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.

0.5
pre_filter Optional[Dict[str, Any]]

List of MQL match expressions comparing an indexed field

None
post_filter_pipeline Optional[List[Dict]]

(Optional) pipeline of MongoDB aggregation stages following the $vectorSearch stage.

None

Returns:

Type Description
List[Document]

List of documents selected by maximal marginal relevance.
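
A minimal sketch (assuming an existing vector_store instance), with lambda_mult lowered to favour diversity over pure similarity:

.. code-block:: python

docs = vector_store.max_marginal_relevance_search(
    query="climate policy",
    k=4,
    fetch_k=20,
    lambda_mult=0.25,
)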

amax_marginal_relevance_search(
    query: str,
    k: int = 4,
    fetch_k: int = 20,
    lambda_mult: float = 0.5,
    **kwargs: Any
) -> list[Document]

Async return docs selected using the maximal marginal relevance.

Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.

Parameters:

Name Type Description Default
query str

Text to look up documents similar to.

required
k int

Number of Documents to return. Defaults to 4.

4
fetch_k int

Number of Documents to fetch to pass to MMR algorithm. Default is 20.

20
lambda_mult float

Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.

0.5
**kwargs Any

Arguments to pass to the search method.

{}

Returns:

Type Description
list[Document]

List of Documents selected by maximal marginal relevance.

max_marginal_relevance_search_by_vector

max_marginal_relevance_search_by_vector(
    embedding: List[float],
    k: int = 4,
    fetch_k: int = 20,
    lambda_mult: float = 0.5,
    pre_filter: Optional[Dict[str, Any]] = None,
    post_filter_pipeline: Optional[List[Dict]] = None,
    oversampling_factor: int = 10,
    **kwargs: Any
) -> List[Document]

Return docs selected using the maximal marginal relevance.

Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.

Parameters:

Name Type Description Default
embedding List[float]

Embedding to look up documents similar to.

required
k int

Number of Documents to return. Defaults to 4.

4
fetch_k int

Number of Documents to fetch to pass to MMR algorithm.

20
lambda_mult float

Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.

0.5
pre_filter Optional[Dict[str, Any]]

(Optional) dictionary of arguments to filter document fields on.

None
post_filter_pipeline Optional[List[Dict]]

(Optional) pipeline of MongoDB aggregation stages following the vectorSearch stage.

None
oversampling_factor int

Multiple of k used when generating the number of candidates in the HNSW Vector Search.

10
kwargs Any

Additional arguments are specific to the search_type

{}

Returns:

Type Description
List[Document]

List of Documents selected by maximal marginal relevance.

amax_marginal_relevance_search_by_vector async

amax_marginal_relevance_search_by_vector(
    embedding: List[float],
    k: int = 4,
    fetch_k: int = 20,
    lambda_mult: float = 0.5,
    pre_filter: Optional[Dict[str, Any]] = None,
    post_filter_pipeline: Optional[List[Dict]] = None,
    oversampling_factor: int = 10,
    **kwargs: Any
) -> List[Document]

Async return docs selected using the maximal marginal relevance.

from_documents classmethod

from_documents(
    documents: list[Document],
    embedding: Embeddings,
    **kwargs: Any
) -> Self

Return VectorStore initialized from documents and embeddings.

Parameters:

Name Type Description Default
documents list[Document]

List of Documents to add to the vectorstore.

required
embedding Embeddings

Embedding function to use.

required
kwargs Any

Additional keyword arguments.

{}

Returns:

Name Type Description
VectorStore Self

VectorStore initialized from documents and embeddings.

afrom_documents async classmethod

afrom_documents(
    documents: list[Document],
    embedding: Embeddings,
    **kwargs: Any
) -> Self

Async return VectorStore initialized from documents and embeddings.

Parameters:

Name Type Description Default
documents list[Document]

List of Documents to add to the vectorstore.

required
embedding Embeddings

Embedding function to use.

required
kwargs Any

Additional keyword arguments.

{}

Returns:

Name Type Description
VectorStore Self

VectorStore initialized from documents and embeddings.

from_texts classmethod

from_texts(
    texts: List[str],
    embedding: Embeddings,
    metadatas: Optional[List[Dict]] = None,
    collection: Optional[Collection] = None,
    ids: Optional[List[str]] = None,
    **kwargs: Any
) -> MongoDBAtlasVectorSearch

Construct a MongoDB Atlas Vector Search vector store from raw documents.

This is a user-friendly interface that
  1. Embeds documents.
  2. Adds the documents to a provided MongoDB Atlas Vector Search index (Lucene)

This is intended to be a quick way to get started.

See MongoDBAtlasVectorSearch for kwargs and further description.

Example

.. code-block:: python

from pymongo import MongoClient

from langchain_mongodb import MongoDBAtlasVectorSearch
from langchain_openai import OpenAIEmbeddings

mongo_client = MongoClient("<YOUR-CONNECTION-STRING>")
collection = mongo_client["<db_name>"]["<collection_name>"]
embeddings = OpenAIEmbeddings()
vectorstore = MongoDBAtlasVectorSearch.from_texts(
    texts,
    embeddings,
    metadatas=metadatas,
    collection=collection
)

afrom_texts async classmethod

afrom_texts(
    texts: list[str],
    embedding: Embeddings,
    metadatas: list[dict] | None = None,
    *,
    ids: list[str] | None = None,
    **kwargs: Any
) -> Self

Async return VectorStore initialized from texts and embeddings.

Parameters:

Name Type Description Default
texts list[str]

Texts to add to the vectorstore.

required
embedding Embeddings

Embedding function to use.

required
metadatas list[dict] | None

Optional list of metadatas associated with the texts. Default is None.

None
ids list[str] | None

Optional list of IDs associated with the texts.

None
kwargs Any

Additional keyword arguments.

{}

Returns:

Name Type Description
VectorStore Self

VectorStore initialized from texts and embeddings.

as_retriever

as_retriever(**kwargs: Any) -> VectorStoreRetriever

Return VectorStoreRetriever initialized from this VectorStore.

Parameters:

Name Type Description Default
**kwargs Any

Keyword arguments to pass to the search function. Can include:
  • search_type (Optional[str]): Defines the type of search that the Retriever should perform. Can be "similarity" (default), "mmr", or "similarity_score_threshold".
  • search_kwargs (Optional[Dict]): Keyword arguments to pass to the search function. Can include things like:
      k: Amount of documents to return (Default: 4)
      score_threshold: Minimum relevance threshold for similarity_score_threshold
      fetch_k: Amount of documents to pass to the MMR algorithm (Default: 20)
      lambda_mult: Diversity of results returned by MMR; 1 for minimum diversity and 0 for maximum (Default: 0.5)
      filter: Filter by document metadata

{}

Returns:

Name Type Description
VectorStoreRetriever VectorStoreRetriever

Retriever class for VectorStore.

Examples:

.. code-block:: python

# Retrieve more documents with higher diversity
# Useful if your dataset has many similar documents
docsearch.as_retriever(
    search_type="mmr", search_kwargs={"k": 6, "lambda_mult": 0.25}
)

# Fetch more documents for the MMR algorithm to consider
# But only return the top 5
docsearch.as_retriever(
    search_type="mmr", search_kwargs={"k": 5, "fetch_k": 50}
)

# Only retrieve documents that have a relevance score
# Above a certain threshold
docsearch.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={"score_threshold": 0.8},
)

# Only get the single most similar document from the dataset
docsearch.as_retriever(search_kwargs={"k": 1})

# Use a filter to only retrieve documents from a specific paper
docsearch.as_retriever(
    search_kwargs={"filter": {"paper_title": "GPT-4 Technical Report"}}
)

from_connection_string classmethod

from_connection_string(
    connection_string: str,
    namespace: str,
    embedding: Embeddings,
    **kwargs: Any
) -> MongoDBAtlasVectorSearch

Construct a MongoDB Atlas Vector Search vector store from a MongoDB connection URI.

Parameters:

Name Type Description Default
connection_string str

A valid MongoDB connection URI.

required
namespace str

A valid MongoDB namespace (database and collection).

required
embedding Embeddings

The text embedding model to use for the vector store.

required

Returns:

Type Description
MongoDBAtlasVectorSearch

A new MongoDBAtlasVectorSearch instance.

close

close() -> None

Close the resources used by the MongoDBAtlasVectorSearch.

bulk_embed_and_insert_texts

bulk_embed_and_insert_texts(
    texts: Union[List[str], Iterable[str]],
    metadatas: Union[List[dict], Generator[dict, Any, Any]],
    ids: Optional[List[str]] = None,
) -> List[str]

Bulk insert single batch of texts, embeddings, and optionally ids.

See add_texts for additional details.

create_vector_search_index

create_vector_search_index(
    dimensions: int,
    filters: Optional[List[str]] = None,
    update: bool = False,
    wait_until_complete: Optional[float] = None,
    **kwargs: Any
) -> None

Creates a MongoDB Atlas vectorSearch index for the VectorStore

Note: This method may fail, as it requires a MongoDB Atlas cluster that satisfies these prerequisites: https://www.mongodb.com/docs/atlas/atlas-vector-search/create-index/#prerequisites. Currently, vector and full-text search index operations must be performed manually in the Atlas UI for shared M0 clusters.

Parameters:

Name Type Description Default
dimensions int

Number of dimensions in embedding

required
filters Optional[List[str]]

Additional fields to define as filters in the index.

None
update Optional[bool]

Updates existing vectorSearch index. Defaults to False.

False
wait_until_complete Optional[float]

If given, a TimeoutError is raised if the search index is not ready after this number of seconds. If not given (the default), the operation will not wait.

None
kwargs Any

(Optional): Keyword arguments supplying any additional options to SearchIndexModel.

{}
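
A minimal sketch (assuming an existing vector_store instance; the dimensions value depends on your embedding model and the filter field name is illustrative):

.. code-block:: python

vector_store.create_vector_search_index(
    dimensions=1536,          # must match the embedding model's output size
    filters=["genre"],        # fields to expose as filterable in the index
    wait_until_complete=60,   # raise TimeoutError if not ready after 60 seconds
)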

alookup async

alookup(
    prompt: str, llm_string: str
) -> RETURN_VAL_TYPE | None

Async look up based on prompt and llm_string.

A cache implementation is expected to generate a key from the 2-tuple of prompt and llm_string (e.g., by concatenating them with a delimiter).

Parameters:

Name Type Description Default
prompt str

a string representation of the prompt. In the case of a Chat model, the prompt is a non-trivial serialization of the prompt into the language model.

required
llm_string str

A string representation of the LLM configuration. This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.). These invocation parameters are serialized into a string representation.

required

Returns:

Type Description
RETURN_VAL_TYPE | None

On a cache miss, return None. On a cache hit, return the cached value. The cached value is a list of Generations (or subclasses).

aupdate async

aupdate(
    prompt: str,
    llm_string: str,
    return_val: RETURN_VAL_TYPE,
) -> None

Async update cache based on prompt and llm_string.

The prompt and llm_string are used to generate a key for the cache. The key should match that of the look up method.

Parameters:

Name Type Description Default
prompt str

a string representation of the prompt. In the case of a Chat model, the prompt is a non-trivial serialization of the prompt into the language model.

required
llm_string str

A string representation of the LLM configuration. This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.). These invocation parameters are serialized into a string representation.

required
return_val RETURN_VAL_TYPE

The value to be cached. The value is a list of Generations (or subclasses).

required

aclear async

aclear(**kwargs: Any) -> None

Async clear cache that can take additional keyword arguments.

__init__

__init__(
    connection_string: str,
    embedding: Embeddings,
    collection_name: str = "default",
    database_name: str = "default",
    index_name: str = "default",
    wait_until_ready: Optional[float] = None,
    score_threshold: Optional[float] = None,
    **kwargs: Dict[str, Any]
)

Initialize Atlas VectorSearch Cache. Assumes the collection exists before instantiation.

Parameters:

Name Type Description Default
connection_string str

MongoDB URI to connect to MongoDB Atlas cluster.

required
embedding Embeddings

Text embedding model to use.

required
collection_name str

MongoDB Collection to add the texts to. Defaults to "default".

'default'
database_name str

MongoDB Database where to store texts. Defaults to "default".

'default'
index_name str

Name of the Atlas Search index. Defaults to 'default'.

'default'
wait_until_ready float

Wait this time for Atlas to finish indexing the stored text. Defaults to None.

None

lookup

lookup(
    prompt: str, llm_string: str
) -> Optional[RETURN_VAL_TYPE]

Look up based on prompt and llm_string.

update

update(
    prompt: str,
    llm_string: str,
    return_val: RETURN_VAL_TYPE,
    wait_until_ready: Optional[float] = None,
) -> None

Update cache based on prompt and llm_string.

clear

clear(**kwargs: Any) -> None

Clear the cache. Any additional keyword arguments propagate as filtration criteria for what gets deleted; locally cached content is deleted regardless.

E.g., to delete only entries whose llm_string is "fake-model":

.. code-block:: python

self.clear(llm_string="fake-model")
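
A minimal usage sketch (assuming an OpenAI embedding model and illustrative database, collection, and index names) of registering the semantic cache globally with set_llm_cache:

.. code-block:: python

from langchain_core.globals import set_llm_cache
from langchain_openai import OpenAIEmbeddings

from langchain_mongodb import MongoDBAtlasSemanticCache

set_llm_cache(
    MongoDBAtlasSemanticCache(
        connection_string="<YOUR-CONNECTION-STRING>",
        embedding=OpenAIEmbeddings(),
        database_name="langchain_cache",
        collection_name="semantic_cache",
        index_name="default",
    )
)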

MongoDBCache

Bases: BaseCache

MongoDB Atlas cache

A cache that uses MongoDB Atlas as a backend

Methods:

Name Description
alookup

Async look up based on prompt and llm_string.

aupdate

Async update cache based on prompt and llm_string.

aclear

Async clear cache that can take additional keyword arguments.

__init__

Initialize Atlas Cache. Creates collection on instantiation

close

Close the MongoClient used by the MongoDBCache.

lookup

Look up based on prompt and llm_string.

update

Update cache based on prompt and llm_string.

clear

Clear cache that can take additional keyword arguments.

Attributes:

Name Type Description
database Database

Returns the database used to store cache values.

collection Collection

Returns the collection used to store cache values.

database property

database: Database

Returns the database used to store cache values.

collection property

collection: Collection

Returns the collection used to store cache values.

alookup async

alookup(
    prompt: str, llm_string: str
) -> RETURN_VAL_TYPE | None

Async look up based on prompt and llm_string.

A cache implementation is expected to generate a key from the 2-tuple of prompt and llm_string (e.g., by concatenating them with a delimiter).

Parameters:

Name Type Description Default
prompt str

a string representation of the prompt. In the case of a Chat model, the prompt is a non-trivial serialization of the prompt into the language model.

required
llm_string str

A string representation of the LLM configuration. This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.). These invocation parameters are serialized into a string representation.

required

Returns:

Type Description
RETURN_VAL_TYPE | None

On a cache miss, return None. On a cache hit, return the cached value. The cached value is a list of Generations (or subclasses).

aupdate async

aupdate(
    prompt: str,
    llm_string: str,
    return_val: RETURN_VAL_TYPE,
) -> None

Async update cache based on prompt and llm_string.

The prompt and llm_string are used to generate a key for the cache. The key should match that of the look up method.

Parameters:

Name Type Description Default
prompt str

a string representation of the prompt. In the case of a Chat model, the prompt is a non-trivial serialization of the prompt into the language model.

required
llm_string str

A string representation of the LLM configuration. This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.). These invocation parameters are serialized into a string representation.

required
return_val RETURN_VAL_TYPE

The value to be cached. The value is a list of Generations (or subclasses).

required

aclear async

aclear(**kwargs: Any) -> None

Async clear cache that can take additional keyword arguments.

__init__

__init__(
    connection_string: str,
    collection_name: str = "default",
    database_name: str = "default",
    **kwargs: Dict[str, Any]
) -> None

Initialize Atlas Cache. Creates the collection on instantiation.

Parameters:

Name Type Description Default
collection_name str

Name of collection for cache to live. Defaults to "default".

'default'
connection_string str

Connection URI to MongoDB Atlas.

required
database_name str

Name of database for cache to live. Defaults to "default".

'default'

close

close() -> None

Close the MongoClient used by the MongoDBCache.

lookup

lookup(
    prompt: str, llm_string: str
) -> Optional[RETURN_VAL_TYPE]

Look up based on prompt and llm_string.

update

update(
    prompt: str,
    llm_string: str,
    return_val: RETURN_VAL_TYPE,
) -> None

Update cache based on prompt and llm_string.

clear

clear(**kwargs: Any) -> None

Clear the cache. Any additional keyword arguments propagate as filtration criteria for what gets deleted.

E.g., to delete only entries whose llm_string is "fake-model":

.. code-block:: python

self.clear(llm_string="fake-model")
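
A minimal usage sketch (assuming illustrative database and collection names) of registering the exact-match cache globally with set_llm_cache:

.. code-block:: python

from langchain_core.globals import set_llm_cache

from langchain_mongodb import MongoDBCache

set_llm_cache(
    MongoDBCache(
        connection_string="<YOUR-CONNECTION-STRING>",
        database_name="langchain_cache",
        collection_name="exact_cache",
    )
)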

MongoDBChatMessageHistory

Bases: BaseChatMessageHistory

Chat message history that stores history in MongoDB.

Setup

Install langchain-mongodb python package.

.. code-block:: bash

pip install langchain-mongodb
Instantiate

.. code-block:: python

from langchain_mongodb import MongoDBChatMessageHistory


history = MongoDBChatMessageHistory(
    connection_string="mongodb://your-host:your-port/",  # mongodb://localhost:27017/
    session_id="your-session-id",
)
Add and retrieve messages

.. code-block:: python

# Add single message
history.add_message(message)

# Add batch messages
history.add_messages([message1, message2, message3, ...])

# Add human message
history.add_user_message(human_message)

# Add ai message
history.add_ai_message(ai_message)

# Retrieve messages
messages = history.messages

Methods:

Name Description
aget_messages

Async version of getting messages.

add_user_message

Convenience method for adding a human message string to the store.

add_ai_message

Convenience method for adding an AI message string to the store.

add_messages

Add a list of messages.

aadd_messages

Async add a list of messages.

aclear

Async remove all messages from the store.

__str__

Return a string representation of the chat history.

__init__

Initialize with a MongoDBChatMessageHistory instance.

close

Close the resources used by the MongoDBChatMessageHistory.

add_message

Append the message to the record in MongoDB

clear

Clear session memory from MongoDB

Attributes:

Name Type Description
messages List[BaseMessage]

Retrieve the messages from MongoDB

messages property

messages: List[BaseMessage]

Retrieve the messages from MongoDB

aget_messages async

aget_messages() -> list[BaseMessage]

Async version of getting messages.

Can over-ride this method to provide an efficient async implementation.

In general, fetching messages may involve IO to the underlying persistence layer.

Returns:

Type Description
list[BaseMessage]

The messages.

add_user_message

add_user_message(message: HumanMessage | str) -> None

Convenience method for adding a human message string to the store.

Note

This is a convenience method. Code should favor the bulk add_messages interface instead to save on round-trips to the persistence layer.

This method may be deprecated in a future release.

Parameters:

Name Type Description Default
message HumanMessage | str

The human message to add to the store.

required

add_ai_message

add_ai_message(message: AIMessage | str) -> None

Convenience method for adding an AI message string to the store.

Note

This is a convenience method. Code should favor the bulk add_messages interface instead to save on round-trips to the persistence layer.

This method may be deprecated in a future release.

Parameters:

Name Type Description Default
message AIMessage | str

The AI message to add.

required

add_messages

add_messages(messages: Sequence[BaseMessage]) -> None

Add a list of messages.

Implementations should over-ride this method to handle bulk addition of messages in an efficient manner to avoid unnecessary round-trips to the underlying store.

Parameters:

Name Type Description Default
messages Sequence[BaseMessage]

A sequence of BaseMessage objects to store.

required

aadd_messages async

aadd_messages(messages: Sequence[BaseMessage]) -> None

Async add a list of messages.

Parameters:

Name Type Description Default
messages Sequence[BaseMessage]

A sequence of BaseMessage objects to store.

required

aclear async

aclear() -> None

Async remove all messages from the store.

__str__

__str__() -> str

Return a string representation of the chat history.

__init__

__init__(
    connection_string: Optional[str],
    session_id: str,
    database_name: str = DEFAULT_DBNAME,
    collection_name: str = DEFAULT_COLLECTION_NAME,
    *,
    session_id_key: str = DEFAULT_SESSION_ID_KEY,
    history_key: str = DEFAULT_HISTORY_KEY,
    create_index: bool = True,
    history_size: Optional[int] = None,
    index_kwargs: Optional[Dict] = None,
    client: Optional[MongoClient] = None
)

Initialize with a MongoDBChatMessageHistory instance.

Parameters:

Name Type Description Default
connection_string Optional[str]

Optional[str] connection string to connect to MongoDB. Can be None if client is provided.

required
session_id str

str arbitrary key that is used to store the messages of a single chat session.

required
database_name str

Optional[str] name of the database to use.

DEFAULT_DBNAME
collection_name str

Optional[str] name of the collection to use.

DEFAULT_COLLECTION_NAME
session_id_key str

Optional[str] name of the field that stores the session id.

DEFAULT_SESSION_ID_KEY
history_key str

Optional[str] name of the field that stores the chat history.

DEFAULT_HISTORY_KEY
create_index bool

Optional[bool] whether to create an index on the session id field.

True
history_size Optional[int]

Optional[int] count of (most recent) messages to fetch from MongoDB.

None
index_kwargs Optional[Dict]

Optional[Dict] additional keyword arguments to pass to the index creation.

None
client Optional[MongoClient]

Optional[MongoClient] an existing MongoClient instance. If provided, connection_string is ignored.

None

close

close() -> None

Close the resources used by the MongoDBChatMessageHistory.

add_message

add_message(message: BaseMessage) -> None

Append the message to the record in MongoDB

clear

clear() -> None

Clear session memory from MongoDB
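
A minimal sketch (assuming an existing Runnable named chain that accepts "question" and "history" variables; the database and collection names are illustrative) of wiring the history into RunnableWithMessageHistory:

.. code-block:: python

from langchain_core.runnables.history import RunnableWithMessageHistory

from langchain_mongodb import MongoDBChatMessageHistory


def get_session_history(session_id: str) -> MongoDBChatMessageHistory:
    return MongoDBChatMessageHistory(
        connection_string="mongodb://localhost:27017/",
        session_id=session_id,
        database_name="chat_db",
        collection_name="chat_histories",
    )


chain_with_history = RunnableWithMessageHistory(
    chain,  # hypothetical Runnable defined elsewhere
    get_session_history,
    input_messages_key="question",
    history_messages_key="history",
)

chain_with_history.invoke(
    {"question": "What did I just ask?"},
    config={"configurable": {"session_id": "your-session-id"}},
)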

MongoDBAtlasVectorSearch

Bases: VectorStore

MongoDB Atlas vector store integration.

MongoDBAtlasVectorSearch performs data operations on text, embeddings and arbitrary data. In addition to CRUD operations, the VectorStore provides Vector Search based on similarity of embedding vectors following the Hierarchical Navigable Small Worlds (HNSW) algorithm.

This supports a number of models to ascertain scores, "similarity" (default), "MMR", and "similarity_score_threshold". These are described in the search_type argument to as_retriever, which provides the Runnable.invoke(query) API, allowing MongoDBAtlasVectorSearch to be used within a chain.

Setup
  • Set up a MongoDB Atlas cluster. The free tier M0 will allow you to start. Search Indexes are only available on Atlas, the fully managed cloud service, not the self-managed MongoDB. Follow this guide

  • Create a Collection and a Vector Search Index. The procedure is described here. You can optionally supply a dimensions argument to programmatically create a Vector Search Index.

  • Install langchain-mongodb

.. code-block:: bash

pip install -qU langchain-mongodb pymongo

.. code-block:: python

import getpass
MONGODB_ATLAS_CONNECTION_STRING = getpass.getpass("MongoDB Atlas Connection String:")

Key init args — indexing params:
    embedding: Embeddings
        Embedding function to use.

Key init args — client params:
    collection: Collection
        MongoDB collection to use.
    index_name: str
        Name of the Atlas Search index.

Instantiate

.. code-block:: python

from pymongo import MongoClient
from langchain_mongodb.vectorstores import MongoDBAtlasVectorSearch
from langchain_openai import OpenAIEmbeddings

vector_store = MongoDBAtlasVectorSearch.from_connection_string(
    connection_string=MONGODB_ATLAS_CONNECTION_STRING,
    namespace="db_name.collection_name",
    embedding=OpenAIEmbeddings(),
    index_name="vector_index",
    text_key="text_field"
)
Add Documents

.. code-block:: python

from langchain_core.documents import Document

document_1 = Document(page_content="foo", metadata={"baz": "bar"})
document_2 = Document(page_content="thud", metadata={"bar": "baz"})
document_3 = Document(page_content="i will be deleted :(")

documents = [document_1, document_2, document_3]
ids = ["1", "2", "3"]
vector_store.add_documents(documents=documents, ids=ids)
Delete Documents

.. code-block:: python

vector_store.delete(ids=["3"])
Search with filter

.. code-block:: python

results = vector_store.similarity_search(query="thud",k=1,post_filter=[{"bar": "baz"]})
for doc in results:
    print(f"* {doc.page_content} [{doc.metadata}]")

.. code-block:: python

* thud [{'_id': '2', 'bar': 'baz'}]
Search with score

.. code-block:: python

results = vector_store.similarity_search_with_score(query="qux",k=1)
for doc, score in results:
    print(f"* [SIM={score:3f}] {doc.page_content} [{doc.metadata}]")

.. code-block:: python

* [SIM=0.916096] foo [{'_id': '1', 'baz': 'bar'}]
Async

.. code-block:: python

# add documents
# await vector_store.aadd_documents(documents=documents, ids=ids)

# delete documents
# await vector_store.adelete(ids=["3"])

# search
# results = await vector_store.asimilarity_search(query="thud", k=1)

# search with score
results = await vector_store.asimilarity_search_with_score(query="qux",k=1)
for doc,score in results:
    print(f"* [SIM={score:3f}] {doc.page_content} [{doc.metadata}]")

.. code-block:: python

* [SIM=0.916096] foo [{'_id': '1', 'baz': 'bar'}]

Use as Retriever:

.. code-block:: python

    retriever = vector_store.as_retriever(
        search_type="mmr",
        search_kwargs={"k": 1, "fetch_k": 2, "lambda_mult": 0.5},
    )
    retriever.invoke("thud")

.. code-block:: python

    [Document(metadata={'_id': '2', 'embedding': [-0.01850726455450058, -0.0014740974875167012, -0.009762819856405258, ...], 'baz': 'baz'}, page_content='thud')]

Methods:

Name Description
aget_by_ids

Async get documents by their IDs.

aadd_texts

Async run more texts through the embeddings and add to the vectorstore.

aadd_documents

Async run more documents through the embeddings and add to the vectorstore.

search

Return docs most similar to query using a specified search type.

asearch

Async return docs most similar to query using a specified search type.

asimilarity_search_with_score

Async run similarity search with distance.

similarity_search_with_relevance_scores

Return docs and relevance scores in the range [0, 1].

asimilarity_search_with_relevance_scores

Async return docs and relevance scores in the range [0, 1].

asimilarity_search

Async return docs most similar to query.

similarity_search_by_vector

Return docs most similar to embedding vector.

asimilarity_search_by_vector

Async return docs most similar to embedding vector.

amax_marginal_relevance_search

Async return docs selected using the maximal marginal relevance.

from_documents

Return VectorStore initialized from documents and embeddings.

afrom_documents

Async return VectorStore initialized from documents and embeddings.

afrom_texts

Async return VectorStore initialized from texts and embeddings.

as_retriever

Return VectorStoreRetriever initialized from this VectorStore.

__init__

Initialize MongoDBAtlasVectorSearch from an existing MongoDB Collection and an embedding model.

from_connection_string

Construct a MongoDB Atlas Vector Search vector store from a MongoDB connection URI.

close

Close the resources used by the MongoDBAtlasVectorSearch.

add_texts

Add texts, create embeddings, and add to the Collection and index.

get_by_ids

Get documents by their IDs.

bulk_embed_and_insert_texts

Bulk insert single batch of texts, embeddings, and optionally ids.

add_documents

Add documents to the vectorstore.

similarity_search_with_score

Return MongoDB documents most similar to the given query and their scores.

similarity_search

Return MongoDB documents most similar to the given query.

max_marginal_relevance_search

Return documents selected using the maximal marginal relevance.

from_texts

Construct a MongoDB Atlas Vector Search vector store from raw documents.

delete

Delete documents from VectorStore by ids.

adelete

Delete by vector ID or other criteria.

max_marginal_relevance_search_by_vector

Return docs selected using the maximal marginal relevance.

amax_marginal_relevance_search_by_vector

Return docs selected using the maximal marginal relevance.

create_vector_search_index

Creates a MongoDB Atlas vectorSearch index for the VectorStore

aget_by_ids async

aget_by_ids(ids: Sequence[str]) -> list[Document]

Async get documents by their IDs.

The returned documents are expected to have the ID field set to the ID of the document in the vector store.

Fewer documents may be returned than requested if some IDs are not found or if there are duplicated IDs.

Users should not assume that the order of the returned documents matches the order of the input IDs. Instead, users should rely on the ID field of the returned documents.

This method should NOT raise exceptions if no documents are found for some IDs.

Parameters:

Name Type Description Default
ids Sequence[str]

List of ids to retrieve.

required

Returns:

Type Description
list[Document]

List of Documents.

Added in version 0.2.11

aadd_texts async

aadd_texts(
    texts: Iterable[str],
    metadatas: list[dict] | None = None,
    *,
    ids: list[str] | None = None,
    **kwargs: Any
) -> list[str]

Async run more texts through the embeddings and add to the vectorstore.

Parameters:

Name Type Description Default
texts Iterable[str]

Iterable of strings to add to the vectorstore.

required
metadatas list[dict] | None

Optional list of metadatas associated with the texts. Default is None.

None
ids list[str] | None

Optional list of IDs associated with the texts.

None
**kwargs Any

vectorstore specific parameters.

{}

Returns:

Type Description
list[str]

List of ids from adding the texts into the vectorstore.

Raises:

Type Description
ValueError

If the number of metadatas does not match the number of texts.

ValueError

If the number of ids does not match the number of texts.

aadd_documents async

aadd_documents(
    documents: list[Document], **kwargs: Any
) -> list[str]

Async run more documents through the embeddings and add to the vectorstore.

Parameters:

Name Type Description Default
documents list[Document]

Documents to add to the vectorstore.

required
kwargs Any

Additional keyword arguments.

{}

Returns:

Type Description
list[str]

List of IDs of the added texts.

search

search(
    query: str, search_type: str, **kwargs: Any
) -> list[Document]

Return docs most similar to query using a specified search type.

Parameters:

Name Type Description Default
query str

Input text

required
search_type str

Type of search to perform. Can be "similarity", "mmr", or "similarity_score_threshold".

required
**kwargs Any

Arguments to pass to the search method.

{}

Returns:

Type Description
list[Document]

List of Documents most similar to the query.

Raises:

Type Description
ValueError

If search_type is not one of "similarity", "mmr", or "similarity_score_threshold".

asearch async

asearch(
    query: str, search_type: str, **kwargs: Any
) -> list[Document]

Async return docs most similar to query using a specified search type.

Parameters:

Name Type Description Default
query str

Input text.

required
search_type str

Type of search to perform. Can be "similarity", "mmr", or "similarity_score_threshold".

required
**kwargs Any

Arguments to pass to the search method.

{}

Returns:

Type Description
list[Document]

List of Documents most similar to the query.

Raises:

Type Description
ValueError

If search_type is not one of "similarity", "mmr", or "similarity_score_threshold".

asimilarity_search_with_score async

asimilarity_search_with_score(
    *args: Any, **kwargs: Any
) -> list[tuple[Document, float]]

Async run similarity search with distance.

Parameters:

Name Type Description Default
*args Any

Arguments to pass to the search method.

()
**kwargs Any

Arguments to pass to the search method.

{}

Returns:

Type Description
list[tuple[Document, float]]

List of Tuples of (doc, similarity_score).

similarity_search_with_relevance_scores

similarity_search_with_relevance_scores(
    query: str, k: int = 4, **kwargs: Any
) -> list[tuple[Document, float]]

Return docs and relevance scores in the range [0, 1].

0 is dissimilar, 1 is most similar.

Parameters:

Name Type Description Default
query str

Input text.

required
k int

Number of Documents to return. Defaults to 4.

4
**kwargs Any

kwargs to be passed to similarity search. Should include: score_threshold: Optional, a floating point value between 0 and 1 to filter the resulting set of retrieved docs.

{}

Returns:

Type Description
list[tuple[Document, float]]

List of Tuples of (doc, similarity_score).

asimilarity_search_with_relevance_scores async

asimilarity_search_with_relevance_scores(
    query: str, k: int = 4, **kwargs: Any
) -> list[tuple[Document, float]]

Async return docs and relevance scores in the range [0, 1].

0 is dissimilar, 1 is most similar.

Parameters:

Name Type Description Default
query str

Input text.

required
k int

Number of Documents to return. Defaults to 4.

4
**kwargs Any

kwargs to be passed to similarity search. Should include: score_threshold: Optional, a floating point value between 0 and 1 to filter the resulting set of retrieved docs.

{}

Returns:

Type Description
list[tuple[Document, float]]

List of Tuples of (doc, similarity_score)

asimilarity_search(
    query: str, k: int = 4, **kwargs: Any
) -> list[Document]

Async return docs most similar to query.

Parameters:

Name Type Description Default
query str

Input text.

required
k int

Number of Documents to return. Defaults to 4.

4
**kwargs Any

Arguments to pass to the search method.

{}

Returns:

Type Description
list[Document]

List of Documents most similar to the query.

similarity_search_by_vector

similarity_search_by_vector(
    embedding: list[float], k: int = 4, **kwargs: Any
) -> list[Document]

Return docs most similar to embedding vector.

Parameters:

Name Type Description Default
embedding list[float]

Embedding to look up documents similar to.

required
k int

Number of Documents to return. Defaults to 4.

4
**kwargs Any

Arguments to pass to the search method.

{}

Returns:

Type Description
list[Document]

List of Documents most similar to the query vector.

asimilarity_search_by_vector async

asimilarity_search_by_vector(
    embedding: list[float], k: int = 4, **kwargs: Any
) -> list[Document]

Async return docs most similar to embedding vector.

Parameters:

Name Type Description Default
embedding list[float]

Embedding to look up documents similar to.

required
k int

Number of Documents to return. Defaults to 4.

4
**kwargs Any

Arguments to pass to the search method.

{}

Returns:

Type Description
list[Document]

List of Documents most similar to the query vector.

amax_marginal_relevance_search(
    query: str,
    k: int = 4,
    fetch_k: int = 20,
    lambda_mult: float = 0.5,
    **kwargs: Any
) -> list[Document]

Async return docs selected using the maximal marginal relevance.

Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.

Parameters:

Name Type Description Default
query str

Text to look up documents similar to.

required
k int

Number of Documents to return. Defaults to 4.

4
fetch_k int

Number of Documents to fetch to pass to MMR algorithm. Default is 20.

20
lambda_mult float

Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.

0.5
**kwargs Any

Arguments to pass to the search method.

{}

Returns:

Type Description
list[Document]

List of Documents selected by maximal marginal relevance.

from_documents classmethod

from_documents(
    documents: list[Document],
    embedding: Embeddings,
    **kwargs: Any
) -> Self

Return VectorStore initialized from documents and embeddings.

Parameters:

Name Type Description Default
documents list[Document]

List of Documents to add to the vectorstore.

required
embedding Embeddings

Embedding function to use.

required
kwargs Any

Additional keyword arguments.

{}

Returns:

Name Type Description
VectorStore Self

VectorStore initialized from documents and embeddings.

afrom_documents async classmethod

afrom_documents(
    documents: list[Document],
    embedding: Embeddings,
    **kwargs: Any
) -> Self

Async return VectorStore initialized from documents and embeddings.

Parameters:

Name Type Description Default
documents list[Document]

List of Documents to add to the vectorstore.

required
embedding Embeddings

Embedding function to use.

required
kwargs Any

Additional keyword arguments.

{}

Returns:

Name Type Description
VectorStore Self

VectorStore initialized from documents and embeddings.

afrom_texts async classmethod

afrom_texts(
    texts: list[str],
    embedding: Embeddings,
    metadatas: list[dict] | None = None,
    *,
    ids: list[str] | None = None,
    **kwargs: Any
) -> Self

Async return VectorStore initialized from texts and embeddings.

Parameters:

Name Type Description Default
texts list[str]

Texts to add to the vectorstore.

required
embedding Embeddings

Embedding function to use.

required
metadatas list[dict] | None

Optional list of metadatas associated with the texts. Default is None.

None
ids list[str] | None

Optional list of IDs associated with the texts.

None
kwargs Any

Additional keyword arguments.

{}

Returns:

Name Type Description
VectorStore Self

VectorStore initialized from texts and embeddings.

as_retriever

as_retriever(**kwargs: Any) -> VectorStoreRetriever

Return VectorStoreRetriever initialized from this VectorStore.

Parameters:

Name Type Description Default
**kwargs Any

Keyword arguments to pass to the search function. Can include:
  • search_type (Optional[str]): Defines the type of search that the Retriever should perform. Can be "similarity" (default), "mmr", or "similarity_score_threshold".
  • search_kwargs (Optional[Dict]): Keyword arguments to pass to the search function. Can include things like:
      k: Amount of documents to return (Default: 4)
      score_threshold: Minimum relevance threshold for similarity_score_threshold
      fetch_k: Amount of documents to pass to the MMR algorithm (Default: 20)
      lambda_mult: Diversity of results returned by MMR; 1 for minimum diversity and 0 for maximum (Default: 0.5)
      filter: Filter by document metadata

{}

Returns:

Name Type Description
VectorStoreRetriever VectorStoreRetriever

Retriever class for VectorStore.

Examples:

.. code-block:: python

# Retrieve more documents with higher diversity
# Useful if your dataset has many similar documents
docsearch.as_retriever(
    search_type="mmr", search_kwargs={"k": 6, "lambda_mult": 0.25}
)

# Fetch more documents for the MMR algorithm to consider
# But only return the top 5
docsearch.as_retriever(
    search_type="mmr", search_kwargs={"k": 5, "fetch_k": 50}
)

# Only retrieve documents that have a relevance score
# Above a certain threshold
docsearch.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={"score_threshold": 0.8},
)

# Only get the single most similar document from the dataset
docsearch.as_retriever(search_kwargs={"k": 1})

# Use a filter to only retrieve documents from a specific paper
docsearch.as_retriever(
    search_kwargs={"filter": {"paper_title": "GPT-4 Technical Report"}}
)

__init__

__init__(
    collection: Collection[Dict[str, Any]],
    embedding: Embeddings,
    index_name: str = "vector_index",
    text_key: Union[str, List[str]] = "text",
    embedding_key: str = "embedding",
    relevance_score_fn: str = "cosine",
    dimensions: int = -1,
    auto_create_index: bool | None = None,
    auto_index_timeout: int = 15,
    **kwargs: Any
)

Parameters:

Name Type Description Default
collection Collection[Dict[str, Any]]

MongoDB collection to add the texts to

required
embedding Embeddings

Text embedding model to use

required
text_key Union[str, List[str]]

MongoDB field that will contain the text for each document. A list of fields may also be passed; the first one will be used as the text key. Default: 'text'

'text'
index_name str

Existing Atlas Vector Search Index

'vector_index'
embedding_key str

Field that will contain the embedding for each document

'embedding'
relevance_score_fn str

The similarity score function used for the index. Currently supported: 'euclidean', 'cosine', and 'dotProduct'

'cosine'
auto_create_index bool | None

Whether to automatically create an index if it does not exist.

None
dimensions int

Number of dimensions in the embedding. If not provided and auto_create_index is True, the value will be inferred.

-1
auto_index_timeout int

Timeout in seconds to wait for an auto-created index to be ready.

15
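
Example (a minimal construction sketch; the connection string and names are placeholders, and OpenAIEmbeddings is only an example embedding model):

.. code-block:: python

from pymongo import MongoClient
from langchain_mongodb import MongoDBAtlasVectorSearch
from langchain_openai import OpenAIEmbeddings

client = MongoClient("<YOUR-CONNECTION-STRING>")
collection = client["<db_name>"]["<collection_name>"]

vectorstore = MongoDBAtlasVectorSearch(
    collection=collection,
    embedding=OpenAIEmbeddings(),
    index_name="vector_index",    # existing Atlas Vector Search index
    text_key="text",              # field holding the raw text
    embedding_key="embedding",    # field holding the vector
    relevance_score_fn="cosine",  # 'euclidean', 'cosine', or 'dotProduct'
)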

from_connection_string classmethod

from_connection_string(
    connection_string: str,
    namespace: str,
    embedding: Embeddings,
    **kwargs: Any
) -> MongoDBAtlasVectorSearch

Construct a MongoDB Atlas Vector Search vector store from a MongoDB connection URI.

Parameters:

Name Type Description Default
connection_string str

A valid MongoDB connection URI.

required
namespace str

A valid MongoDB namespace (database and collection).

required
embedding Embeddings

The text embedding model to use for the vector store.

required

Returns:

Type Description
MongoDBAtlasVectorSearch

A new MongoDBAtlasVectorSearch instance.
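
Example (a hedged sketch; note that the namespace combines database and collection as "<db_name>.<collection_name>"):

.. code-block:: python

from langchain_mongodb import MongoDBAtlasVectorSearch
from langchain_openai import OpenAIEmbeddings  # example embedding model

vectorstore = MongoDBAtlasVectorSearch.from_connection_string(
    "<YOUR-CONNECTION-STRING>",
    "<db_name>.<collection_name>",  # namespace: database.collection
    OpenAIEmbeddings(),
    index_name="vector_index",      # optional kwargs are forwarded to the constructor
)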

close

close() -> None

Close the resources used by the MongoDBAtlasVectorSearch.

add_texts

add_texts(
    texts: Iterable[str],
    metadatas: Optional[List[Dict[str, Any]]] = None,
    ids: Optional[List[str]] = None,
    batch_size: int = DEFAULT_INSERT_BATCH_SIZE,
    **kwargs: Any
) -> List[str]

Add texts, create embeddings, and add to the Collection and index.

Important notes on ids
  • If _id or id is a key in the metadatas dicts, you must pop it and provide it in the separate ids list.
  • They must be unique.
  • If they are not provided, the VectorStore will create unique ones, stored as bson.ObjectIds internally and as strings in LangChain. These will appear in Document.metadata under the key '_id'.

Parameters:

Name Type Description Default
texts Iterable[str]

Iterable of strings to add to the vectorstore.

required
metadatas Optional[List[Dict[str, Any]]]

Optional list of metadatas associated with the texts.

None
ids Optional[List[str]]

Optional list of unique ids that will be used as index in VectorStore. See note on ids.

None
batch_size int

Number of documents to insert at a time. Tuning this may help with performance and sidestep MongoDB limits.

DEFAULT_INSERT_BATCH_SIZE

Returns:

Type Description
List[str]

List of ids added to the vectorstore.
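
Example (a sketch illustrating the notes on ids above; `vectorstore` is assumed to be an existing MongoDBAtlasVectorSearch instance):

.. code-block:: python

# `vectorstore` is assumed to be an existing MongoDBAtlasVectorSearch instance.
texts = ["MongoDB Atlas is a managed database.", "Vector search returns similar text."]
metadatas = [{"source": "intro"}, {"source": "search"}]

# ids are optional; if omitted, unique ObjectIds are generated for you.
ids = vectorstore.add_texts(
    texts,
    metadatas=metadatas,
    ids=["doc-1", "doc-2"],  # must be unique; do not also place _id/id in metadatas
)
print(ids)  # ids of the inserted documents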

get_by_ids

get_by_ids(ids: Sequence[str]) -> list[Document]

Get documents by their IDs.

The returned documents are expected to have the ID field set to the ID of the document in the vector store.

Fewer documents may be returned than requested if some IDs are not found or if there are duplicated IDs.

Users should not assume that the order of the returned documents matches the order of the input IDs. Instead, users should rely on the ID field of the returned documents.

This method should NOT raise exceptions if no documents are found for some IDs.

Parameters:

Name Type Description Default
ids Sequence[str]

List of ids to retrieve.

required

Returns:

Type Description
list[Document]

List of Documents.

.. versionadded:: 0.6.0
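
Example (a minimal sketch, assuming the ids were assigned at insert time, e.g. via add_texts):

.. code-block:: python

# `vectorstore` is assumed to be an existing MongoDBAtlasVectorSearch instance.
docs = vectorstore.get_by_ids(["doc-1", "doc-2"])
for doc in docs:
    # Order is not guaranteed to match the input; rely on the document's ID field.
    print(doc.id, doc.page_content)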

bulk_embed_and_insert_texts

bulk_embed_and_insert_texts(
    texts: Union[List[str], Iterable[str]],
    metadatas: Union[List[dict], Generator[dict, Any, Any]],
    ids: Optional[List[str]] = None,
) -> List[str]

Bulk insert single batch of texts, embeddings, and optionally ids.

See add_texts for additional details.

add_documents

add_documents(
    documents: List[Document],
    ids: Optional[List[str]] = None,
    batch_size: int = DEFAULT_INSERT_BATCH_SIZE,
    **kwargs: Any
) -> List[str]

Add documents to the vectorstore.

Parameters:

Name Type Description Default
documents List[Document]

Documents to add to the vectorstore.

required
ids Optional[List[str]]

Optional list of unique ids that will be used as index in VectorStore. See note on ids in add_texts.

None
batch_size int

Number of documents to insert at a time. Tuning this may help with performance and sidestep MongoDB limits.

DEFAULT_INSERT_BATCH_SIZE

Returns:

Type Description
List[str]

List of IDs of the added texts.
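
Example (a sketch; `vectorstore` is assumed to be an existing MongoDBAtlasVectorSearch instance):

.. code-block:: python

from langchain_core.documents import Document

# `vectorstore` is assumed to be an existing MongoDBAtlasVectorSearch instance.
new_ids = vectorstore.add_documents(
    [
        Document(page_content="Hybrid search combines text and vector scores.",
                 metadata={"topic": "search"}),
    ],
    batch_size=100,  # tune to stay within MongoDB payload limits
)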

similarity_search_with_score

similarity_search_with_score(
    query: str,
    k: int = 4,
    pre_filter: Optional[Dict[str, Any]] = None,
    post_filter_pipeline: Optional[List[Dict]] = None,
    oversampling_factor: int = 10,
    include_embeddings: bool = False,
    **kwargs: Any
) -> List[Tuple[Document, float]]

Return MongoDB documents most similar to the given query and their scores.

Atlas Vector Search eliminates the need to run a separate search system alongside your database.

Parameters:
  • query: Input text of the semantic query.
  • k: Number of documents to return. Also known as top_k.
  • pre_filter: List of MQL match expressions comparing an indexed field.
  • post_filter_pipeline: (Optional) Arbitrary pipeline of MongoDB aggregation stages applied after the search is complete.
  • oversampling_factor: This value multiplied by k is the number of candidates chosen at each step in the HNSW Vector Search.
  • include_embeddings: If True, the embedding vector of each result will be included in metadata.
  • kwargs: Additional arguments specific to the search_type.

Returns:

Type Description
List[Tuple[Document, float]]

List of documents most similar to the query and their scores.
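
Example (a sketch of a scored query; `vectorstore` is assumed to be an existing MongoDBAtlasVectorSearch instance):

.. code-block:: python

# `vectorstore` is assumed to be an existing MongoDBAtlasVectorSearch instance.
results = vectorstore.similarity_search_with_score("How do I create an index?", k=3)
for doc, score in results:
    print(f"{score:.3f}  {doc.page_content[:60]}")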

similarity_search

similarity_search(
    query: str,
    k: int = 4,
    pre_filter: Optional[Dict[str, Any]] = None,
    post_filter_pipeline: Optional[List[Dict]] = None,
    oversampling_factor: int = 10,
    include_scores: bool = False,
    include_embeddings: bool = False,
    **kwargs: Any
) -> List[Document]

Return MongoDB documents most similar to the given query.

Atlas Vector Search eliminates the need to run a separate search system alongside your database.

Parameters:
  • query: Input text of the semantic query.
  • k: (Optional) Number of documents to return. Defaults to 4.
  • pre_filter: List of MQL match expressions comparing an indexed field.
  • post_filter_pipeline: (Optional) Pipeline of MongoDB aggregation stages to filter/process results after $vectorSearch.
  • oversampling_factor: Multiple of k used when generating the number of candidates at each step in the HNSW Vector Search.
  • include_scores: If True, the query score of each result will be included in metadata.
  • include_embeddings: If True, the embedding vector of each result will be included in metadata.
  • kwargs: Additional arguments specific to the search_type.

Returns:

Type Description
List[Document]

List of documents most similar to the query.
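
Example (a sketch using pre_filter; the filter field name "source" is a placeholder and must be defined as a filter field in the Atlas Vector Search index for this to work):

.. code-block:: python

# `vectorstore` is assumed to be an existing MongoDBAtlasVectorSearch instance.
docs = vectorstore.similarity_search(
    "managed vector database",
    k=4,
    pre_filter={"source": {"$eq": "intro"}},  # hypothetical indexed filter field
    include_scores=True,  # the score is then included in each document's metadata
)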

max_marginal_relevance_search

max_marginal_relevance_search(
    query: str,
    k: int = 4,
    fetch_k: int = 20,
    lambda_mult: float = 0.5,
    pre_filter: Optional[Dict[str, Any]] = None,
    post_filter_pipeline: Optional[List[Dict]] = None,
    **kwargs: Any
) -> List[Document]

Return documents selected using the maximal marginal relevance.

Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.

Parameters:

Name Type Description Default
query str

Text to look up documents similar to.

required
k int

(Optional) number of documents to return. Defaults to 4.

4
fetch_k int

(Optional) number of documents to fetch before passing to MMR algorithm. Defaults to 20.

20
lambda_mult float

Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.

0.5
pre_filter Optional[Dict[str, Any]]

List of MQL match expressions comparing an indexed field

None
post_filter_pipeline Optional[List[Dict]]

(Optional) pipeline of MongoDB aggregation stages following the $vectorSearch stage.

None

Returns:

Type Description
List[Document]

List of documents selected by maximal marginal relevance.

from_texts classmethod

from_texts(
    texts: List[str],
    embedding: Embeddings,
    metadatas: Optional[List[Dict]] = None,
    collection: Optional[Collection] = None,
    ids: Optional[List[str]] = None,
    **kwargs: Any
) -> MongoDBAtlasVectorSearch

Construct a MongoDB Atlas Vector Search vector store from raw documents.

This is a user-friendly interface that
  1. Embeds documents.
  2. Adds the documents to a provided MongoDB Atlas Vector Search index (Lucene)

This is intended to be a quick way to get started.

See MongoDBAtlasVectorSearch for kwargs and further description.

Example

.. code-block:: python

from pymongo import MongoClient

from langchain_mongodb import MongoDBAtlasVectorSearch
from langchain_openai import OpenAIEmbeddings

mongo_client = MongoClient("<YOUR-CONNECTION-STRING>")
collection = mongo_client["<db_name>"]["<collection_name>"]
embeddings = OpenAIEmbeddings()
vectorstore = MongoDBAtlasVectorSearch.from_texts(
    texts,
    embeddings,
    metadatas=metadatas,
    collection=collection
)

delete

delete(
    ids: Optional[List[str]] = None, **kwargs: Any
) -> Optional[bool]

Delete documents from VectorStore by ids.

Parameters:

Name Type Description Default
ids Optional[List[str]]

List of ids to delete.

None
**kwargs Any

Other keyword arguments passed to Collection.delete_many()

{}

Returns:

Type Description
Optional[bool]

True if deletion is successful, False otherwise, None if not implemented.
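
Example (a sketch of deleting documents by id; `vectorstore` is assumed to be an existing MongoDBAtlasVectorSearch instance):

.. code-block:: python

# `vectorstore` is assumed to be an existing MongoDBAtlasVectorSearch instance.
ok = vectorstore.delete(ids=["doc-1", "doc-2"])
# Additional kwargs are forwarded to Collection.delete_many().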

adelete async

adelete(
    ids: Optional[List[str]] = None, **kwargs: Any
) -> Optional[bool]

Delete by vector ID or other criteria.

Parameters:

Name Type Description Default
ids Optional[List[str]]

List of ids to delete.

None
**kwargs Any

Other keyword arguments that subclasses might use.

{}

Returns:

Type Description
Optional[bool]

True if deletion is successful, False otherwise, None if not implemented.

max_marginal_relevance_search_by_vector

max_marginal_relevance_search_by_vector(
    embedding: List[float],
    k: int = 4,
    fetch_k: int = 20,
    lambda_mult: float = 0.5,
    pre_filter: Optional[Dict[str, Any]] = None,
    post_filter_pipeline: Optional[List[Dict]] = None,
    oversampling_factor: int = 10,
    **kwargs: Any
) -> List[Document]

Return docs selected using the maximal marginal relevance.

Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.

Parameters:

Name Type Description Default
embedding List[float]

Embedding to look up documents similar to.

required
k int

Number of Documents to return. Defaults to 4.

4
fetch_k int

Number of Documents to fetch to pass to MMR algorithm.

20
lambda_mult float

Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.

0.5
pre_filter Optional[Dict[str, Any]]

(Optional) dictionary of arguments to filter document fields on.

None
post_filter_pipeline Optional[List[Dict]]

(Optional) pipeline of MongoDB aggregation stages following the $vectorSearch stage.

None
oversampling_factor int

Multiple of k used when generating the number of candidates in the HNSW Vector Search.

10
kwargs Any

Additional arguments are specific to the search_type

{}

Returns:

Type Description
List[Document]

List of Documents selected by maximal marginal relevance.

amax_marginal_relevance_search_by_vector async

amax_marginal_relevance_search_by_vector(
    embedding: List[float],
    k: int = 4,
    fetch_k: int = 20,
    lambda_mult: float = 0.5,
    pre_filter: Optional[Dict[str, Any]] = None,
    post_filter_pipeline: Optional[List[Dict]] = None,
    oversampling_factor: int = 10,
    **kwargs: Any
) -> List[Document]

Return docs selected using the maximal marginal relevance.

create_vector_search_index

create_vector_search_index(
    dimensions: int,
    filters: Optional[List[str]] = None,
    update: bool = False,
    wait_until_complete: Optional[float] = None,
    **kwargs: Any
) -> None

Creates a MongoDB Atlas vectorSearch index for the VectorStore

Note: This method may fail, as it requires a MongoDB Atlas cluster that meets these prerequisites: <https://www.mongodb.com/docs/atlas/atlas-vector-search/create-index/#prerequisites>. Currently, vector and full-text search index operations must be performed manually in the Atlas UI for shared M0 clusters.

Parameters:

Name Type Description Default
dimensions int

Number of dimensions in embedding

required
filters Optional[List[str]]

Optional list of field names to index for filtering.

None
update Optional[bool]

Updates existing vectorSearch index. Defaults to False.

False
wait_until_complete Optional[float]

If given, a TimeoutError is raised if the search index is not ready after this number of seconds. If not given (the default), the operation will not wait.

None
kwargs Any

(Optional): Keyword arguments supplying any additional options to SearchIndexModel.

{}
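
Example (a sketch of creating the index programmatically, which is not available on shared M0 clusters per the note above; the embedding dimensionality and filter field name are assumptions):

.. code-block:: python

# `vectorstore` is assumed to be an existing MongoDBAtlasVectorSearch instance.
vectorstore.create_vector_search_index(
    dimensions=1536,          # example dimensionality; match your embedding model
    filters=["source"],       # hypothetical field name to index for pre-filtering
    wait_until_complete=60,   # raise TimeoutError if the index is not ready in 60 seconds
)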