langchain-astradb

Astra DB integration for LangChain.

Modules:

Name Description
cache

Astra DB-based caches.

chat_message_histories

Astra DB-based chat message history, built on astrapy.

document_loaders

Loader for loading documents from DataStax Astra DB.

storage

Astra DB-based stores.

utils

Utilities for the langchain_astradb package.

vectorstores

Astra DB vector store integration.

Classes:

Name Description
AstraDBCache
AstraDBSemanticCache
AstraDBChatMessageHistory
AstraDBLoader
AstraDBByteStore
AstraDBStore
AstraDBVectorStore

A vector store which uses DataStax Astra DB as the backend.

AstraDBVectorStoreError

An exception during vector-store activities.

AstraDBCache

Bases: BaseCache

Methods:

Name Description
__init__

Cache that uses Astra DB as a backend.

delete_through_llm

A wrapper around delete with the LLM being passed.

adelete_through_llm

A wrapper around adelete with the LLM being passed.

delete

Evict from cache if there's an entry.

adelete

Evict from cache if there's an entry.

__init__

__init__(
    *,
    collection_name: str = ASTRA_DB_CACHE_DEFAULT_COLLECTION_NAME,
    token: str | TokenProvider | None = None,
    api_endpoint: str | None = None,
    namespace: str | None = None,
    environment: str | None = None,
    pre_delete_collection: bool = False,
    setup_mode: SetupMode = SYNC,
    ext_callers: (
        list[tuple[str | None, str | None] | str | None]
        | None
    ) = None,
    api_options: APIOptions | None = None
)

Cache that uses Astra DB as a backend.

It uses a single collection as a key-value store. The lookup keys, combined into the _id of the documents, are:

- prompt, a string
- llm_string, a deterministic string representation of the model parameters (needed to prevent same-prompt-different-model collisions).

Parameters:

Name Type Description Default
collection_name str

name of the Astra DB collection to create/use.

ASTRA_DB_CACHE_DEFAULT_COLLECTION_NAME
token str | TokenProvider | None

API token for Astra DB usage, either in the form of a string or a subclass of astrapy.authentication.TokenProvider. If not provided, the environment variable ASTRA_DB_APPLICATION_TOKEN is inspected.

None
api_endpoint str | None

full URL to the API endpoint, such as https://<DB-ID>-us-east1.apps.astra.datastax.com. If not provided, the environment variable ASTRA_DB_API_ENDPOINT is inspected.

None
namespace str | None

namespace (aka keyspace) where the collection is created. If not provided, the environment variable ASTRA_DB_KEYSPACE is inspected. Defaults to the database's "default namespace".

None
environment str | None

a string specifying the environment of the target Data API. If omitted, defaults to "prod" (Astra DB production). Other values are in astrapy.constants.Environment enum class.

None
setup_mode SetupMode

mode used to create the Astra DB collection (SYNC, ASYNC or OFF).

SYNC
pre_delete_collection bool

whether to delete the collection before creating it. If False and the collection already exists, the collection will be used as is.

False
ext_callers list[tuple[str | None, str | None] | str | None] | None

one or more caller identities to identify Data API calls in the User-Agent header. This is a list of (name, version) pairs, or just strings if no version info is provided, which, if supplied, becomes the leading part of the User-Agent string in all API requests related to this component.

None
api_options APIOptions | None

an instance of astrapy.utils.api_options.APIOptions that can be supplied to customize the interaction with the Data API regarding serialization/deserialization, timeouts, custom headers and so on. The provided options are applied on top of settings already tailored to this library, and if specified will take precedence. Passing None (default) means no customization is requested. Refer to the astrapy documentation for details.

None
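A minimal usage sketch follows (assuming the cache is installed as LangChain's global LLM cache via langchain_core.globals.set_llm_cache; all other constructor defaults are kept):

.. code-block:: python

import getpass

from langchain_core.globals import set_llm_cache

from langchain_astradb import AstraDBCache

ASTRA_DB_API_ENDPOINT = getpass.getpass("ASTRA_DB_API_ENDPOINT = ")
ASTRA_DB_APPLICATION_TOKEN = getpass.getpass(
    "ASTRA_DB_APPLICATION_TOKEN = "
)

# subsequent LLM calls will read/write exact-match cache entries on Astra DB
set_llm_cache(
    AstraDBCache(
        api_endpoint=ASTRA_DB_API_ENDPOINT,
        token=ASTRA_DB_APPLICATION_TOKEN,
    )
)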

delete_through_llm

delete_through_llm(
    prompt: str, llm: LLM, stop: list[str] | None = None
) -> None

A wrapper around delete with the LLM being passed.

If the llm(prompt) calls use a stop parameter, pass it here as well.

adelete_through_llm async

adelete_through_llm(
    prompt: str, llm: LLM, stop: list[str] | None = None
) -> None

A wrapper around adelete with the LLM being passed.

If the llm(prompt) calls use a stop parameter, pass it here as well.

delete

delete(prompt: str, llm_string: str) -> None

Evict from cache if there's an entry.

adelete async

adelete(prompt: str, llm_string: str) -> None

Evict from cache if there's an entry.

AstraDBSemanticCache

Bases: BaseCache

Methods:

Name Description
__init__

Astra DB semantic cache.

lookup_with_id

Look up based on prompt and llm_string.

alookup_with_id

Look up based on prompt and llm_string.

lookup_with_id_through_llm

Look up based on prompt and LLM.

alookup_with_id_through_llm

Look up based on prompt and LLM.

delete_by_document_id

Delete by document ID.

adelete_by_document_id

Delete by document ID.

__init__

__init__(
    *,
    collection_name: str = ASTRA_DB_SEMANTIC_CACHE_DEFAULT_COLLECTION_NAME,
    token: str | TokenProvider | None = None,
    api_endpoint: str | None = None,
    namespace: str | None = None,
    environment: str | None = None,
    setup_mode: SetupMode = SYNC,
    pre_delete_collection: bool = False,
    embedding: Embeddings,
    metric: str | None = None,
    similarity_threshold: float = ASTRA_DB_SEMANTIC_CACHE_DEFAULT_THRESHOLD,
    ext_callers: (
        list[tuple[str | None, str | None] | str | None]
        | None
    ) = None,
    api_options: APIOptions | None = None
)

Astra DB semantic cache.

Cache that uses Astra DB as a vector-store backend for semantic (i.e. similarity-based) lookup.

It uses a single (vector) collection and can store cached values from several LLMs, so the LLM's 'llm_string' is stored in the document metadata.

You can choose the preferred similarity metric (or use the API default). The default score threshold is tuned to the default metric; tune it carefully yourself if you switch to another distance metric.

Parameters:

Name Type Description Default
collection_name str

name of the Astra DB collection to create/use.

ASTRA_DB_SEMANTIC_CACHE_DEFAULT_COLLECTION_NAME
token str | TokenProvider | None

API token for Astra DB usage, either in the form of a string or a subclass of astrapy.authentication.TokenProvider. If not provided, the environment variable ASTRA_DB_APPLICATION_TOKEN is inspected.

None
api_endpoint str | None

full URL to the API endpoint, such as https://<DB-ID>-us-east1.apps.astra.datastax.com. If not provided, the environment variable ASTRA_DB_API_ENDPOINT is inspected.

None
namespace str | None

namespace (aka keyspace) where the collection is created. If not provided, the environment variable ASTRA_DB_KEYSPACE is inspected. Defaults to the database's "default namespace".

None
environment str | None

a string specifying the environment of the target Data API. If omitted, defaults to "prod" (Astra DB production). Other values are in astrapy.constants.Environment enum class.

None
setup_mode SetupMode

mode used to create the Astra DB collection (SYNC, ASYNC or OFF).

SYNC
pre_delete_collection bool

whether to delete the collection before creating it. If False and the collection already exists, the collection will be used as is.

False
embedding Embeddings

Embedding provider for semantic encoding and search.

required
metric str | None

the function to use for evaluating similarity of text embeddings. Defaults to 'cosine' (alternatives: 'euclidean', 'dot_product').

None
similarity_threshold float

the minimum similarity for accepting a (semantic-search) match.

ASTRA_DB_SEMANTIC_CACHE_DEFAULT_THRESHOLD
ext_callers list[tuple[str | None, str | None] | str | None] | None

one or more caller identities to identify Data API calls in the User-Agent header. This is a list of (name, version) pairs, or just strings if no version info is provided, which, if supplied, becomes the leading part of the User-Agent string in all API requests related to this component.

None
api_options APIOptions | None

an instance of astrapy.utils.api_options.APIOptions that can be supplied to customize the interaction with the Data API regarding serialization/deserialization, timeouts, custom headers and so on. The provided options are applied on top of settings already tailored to this library, and if specified will take precedence. Passing None (default) means no customization is requested. Refer to the astrapy documentation for details.

None
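A minimal instantiation sketch (assuming OpenAIEmbeddings as the embedding provider, as in the vector store examples further down; any LangChain Embeddings object works):

.. code-block:: python

import getpass

from langchain_openai import OpenAIEmbeddings

from langchain_astradb import AstraDBSemanticCache

ASTRA_DB_API_ENDPOINT = getpass.getpass("ASTRA_DB_API_ENDPOINT = ")
ASTRA_DB_APPLICATION_TOKEN = getpass.getpass(
    "ASTRA_DB_APPLICATION_TOKEN = "
)

semantic_cache = AstraDBSemanticCache(
    embedding=OpenAIEmbeddings(),
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
)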

lookup_with_id

lookup_with_id(
    prompt: str, llm_string: str
) -> tuple[str, RETURN_VAL_TYPE] | None

Look up based on prompt and llm_string.

Parameters:

Name Type Description Default
prompt str

the prompt string to look up

required
llm_string str

the str representation of the model parameters

required

Returns:

Type Description
tuple[str, RETURN_VAL_TYPE] | None

If there are hits, (document_id, cached_entry) for the top hit

alookup_with_id async

alookup_with_id(
    prompt: str, llm_string: str
) -> tuple[str, RETURN_VAL_TYPE] | None

Look up based on prompt and llm_string.

Parameters:

Name Type Description Default
prompt str

the prompt string to look up

required
llm_string str

the str representation of the model parameters

required

Returns:

Type Description
tuple[str, RETURN_VAL_TYPE] | None

If there are hits, (document_id, cached_entry) for the top hit

lookup_with_id_through_llm

lookup_with_id_through_llm(
    prompt: str, llm: LLM, stop: list[str] | None = None
) -> tuple[str, RETURN_VAL_TYPE] | None

Look up based on prompt and LLM.

Parameters:

Name Type Description Default
prompt str

the prompt string to look up

required
llm LLM

the LLM instance whose parameters are used in the lookup

required
stop list[str] | None

optional list of stop words passed to the LLM calls

None

Returns:

Type Description
tuple[str, RETURN_VAL_TYPE] | None

If there are hits, (document_id, cached_entry) for the top hit.

alookup_with_id_through_llm async

alookup_with_id_through_llm(
    prompt: str, llm: LLM, stop: list[str] | None = None
) -> tuple[str, RETURN_VAL_TYPE] | None

Look up based on prompt and LLM.

Parameters:

Name Type Description Default
prompt str

the prompt string to look up

required
llm LLM

the LLM instance whose parameters are used in the lookup

required
stop list[str] | None

optional list of stop words passed to the LLM calls

None

Returns:

Type Description
tuple[str, RETURN_VAL_TYPE] | None

If there are hits, (document_id, cached_entry) for the top hit.

delete_by_document_id

delete_by_document_id(document_id: str) -> None

Delete by document ID.

Because this is a "similarity search" cache, a sensible invalidation pattern is first to look up an entry (obtaining its document ID) and then delete it by that ID. This method performs the second step.
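A sketch of this two-step pattern (reusing the semantic_cache instance from the instantiation sketch above; prompt and llm_string are assumed to be in scope):

.. code-block:: python

# step 1: look up, obtaining the entry's document ID (if any)
lookup_result = semantic_cache.lookup_with_id(prompt, llm_string)
if lookup_result is not None:
    document_id, _cached_entry = lookup_result
    # step 2: invalidate the entry by its ID
    semantic_cache.delete_by_document_id(document_id)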

adelete_by_document_id async

adelete_by_document_id(document_id: str) -> None

Delete by document ID.

Because this is a "similarity search" cache, a sensible invalidation pattern is first to look up an entry (obtaining its document ID) and then delete it by that ID. This method performs the second step.

AstraDBChatMessageHistory

Bases: BaseChatMessageHistory

Methods:

Name Description
add_user_message

Convenience method for adding a human message string to the store.

add_ai_message

Convenience method for adding an AI message string to the store.

add_message

Add a Message object to the store.

__str__

Return a string representation of the chat history.

__init__

Chat message history that stores history in Astra DB.

Attributes:

Name Type Description
messages list[BaseMessage]

Retrieve all session messages from DB.

messages property writable

messages: list[BaseMessage]

Retrieve all session messages from DB.

add_user_message

add_user_message(message: HumanMessage | str) -> None

Convenience method for adding a human message string to the store.

Note

This is a convenience method. Code should favor the bulk add_messages interface instead to save on round-trips to the persistence layer.

This method may be deprecated in a future release.

Parameters:

Name Type Description Default
message HumanMessage | str

The human message to add to the store.

required

add_ai_message

add_ai_message(message: AIMessage | str) -> None

Convenience method for adding an AI message string to the store.

Note

This is a convenience method. Code should favor the bulk add_messages interface instead to save on round-trips to the persistence layer.

This method may be deprecated in a future release.

Parameters:

Name Type Description Default
message AIMessage | str

The AI message to add.

required

add_message

add_message(message: BaseMessage) -> None

Add a Message object to the store.

Parameters:

Name Type Description Default
message BaseMessage

A BaseMessage object to store.

required

Raises:

Type Description
NotImplementedError

If the sub-class has not implemented an efficient add_messages method.

__str__

__str__() -> str

Return a string representation of the chat history.

__init__

__init__(
    *,
    session_id: str,
    collection_name: str = DEFAULT_COLLECTION_NAME,
    token: str | TokenProvider | None = None,
    api_endpoint: str | None = None,
    namespace: str | None = None,
    environment: str | None = None,
    setup_mode: SetupMode = SYNC,
    pre_delete_collection: bool = False,
    ext_callers: (
        list[tuple[str | None, str | None] | str | None]
        | None
    ) = None,
    api_options: APIOptions | None = None
) -> None

Chat message history that stores history in Astra DB.

Parameters:

Name Type Description Default
session_id str

arbitrary key that is used to store the messages of a single chat session.

required
collection_name str

name of the Astra DB collection to create/use.

DEFAULT_COLLECTION_NAME
token str | TokenProvider | None

API token for Astra DB usage, either in the form of a string or a subclass of astrapy.authentication.TokenProvider. If not provided, the environment variable ASTRA_DB_APPLICATION_TOKEN is inspected.

None
api_endpoint str | None

full URL to the API endpoint, such as https://<DB-ID>-us-east1.apps.astra.datastax.com. If not provided, the environment variable ASTRA_DB_API_ENDPOINT is inspected.

None
namespace str | None

namespace (aka keyspace) where the collection is created. If not provided, the environment variable ASTRA_DB_KEYSPACE is inspected. Defaults to the database's "default namespace".

None
environment str | None

a string specifying the environment of the target Data API. If omitted, defaults to "prod" (Astra DB production). Other values are in astrapy.constants.Environment enum class.

None
setup_mode SetupMode

mode used to create the Astra DB collection (SYNC, ASYNC or OFF).

SYNC
pre_delete_collection bool

whether to delete the collection before creating it. If False and the collection already exists, the collection will be used as is.

False
ext_callers list[tuple[str | None, str | None] | str | None] | None

one or more caller identities to identify Data API calls in the User-Agent header. This is a list of (name, version) pairs, or just strings if no version info is provided, which, if supplied, becomes the leading part of the User-Agent string in all API requests related to this component.

None
api_options APIOptions | None

an instance of astrapy.utils.api_options.APIOptions that can be supplied to customize the interaction with the Data API regarding serialization/deserialization, timeouts, custom headers and so on. The provided options are applied on top of settings already tailored to this library, and if specified will take precedence. Passing None (default) means no customization is requested. Refer to the astrapy documentation for details.

None
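A short usage sketch (the session ID is an arbitrary example value):

.. code-block:: python

import getpass

from langchain_astradb import AstraDBChatMessageHistory

ASTRA_DB_API_ENDPOINT = getpass.getpass("ASTRA_DB_API_ENDPOINT = ")
ASTRA_DB_APPLICATION_TOKEN = getpass.getpass(
    "ASTRA_DB_APPLICATION_TOKEN = "
)

history = AstraDBChatMessageHistory(
    session_id="session-42",  # arbitrary example key
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
)
history.add_user_message("Hello!")
history.add_ai_message("Hi! How can I help you today?")
print(history.messages)  # all messages stored for this session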

AstraDBLoader

Bases: BaseLoader

Methods:

Name Description
load

Load data into Document objects.

load_and_split

Load Documents and split into chunks. Chunks are returned as Documents.

__init__

Load DataStax Astra DB documents.

load

load() -> list[Document]

Load data into Document objects.

Returns:

Type Description
list[Document]

the documents.

load_and_split

load_and_split(
    text_splitter: TextSplitter | None = None,
) -> list[Document]

Load Documents and split into chunks. Chunks are returned as Documents.

Do not override this method; it should be considered deprecated.

Parameters:

Name Type Description Default
text_splitter TextSplitter | None

TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.

None

Raises:

Type Description
ImportError

If langchain-text-splitters is not installed and no text_splitter is provided.

Returns:

Type Description
list[Document]

List of Documents.

__init__

__init__(
    collection_name: str,
    *,
    token: str | TokenProvider | None = None,
    api_endpoint: str | None = None,
    environment: str | None = None,
    namespace: str | None = None,
    filter_criteria: dict[str, Any] | None = None,
    projection: dict[str, Any] | None = _NOT_SET,
    limit: int | None = None,
    nb_prefetched: int = _NOT_SET,
    page_content_mapper: Callable[[dict], str] = dumps,
    metadata_mapper: (
        Callable[[dict], dict[str, Any]] | None
    ) = None,
    ext_callers: (
        list[tuple[str | None, str | None] | str | None]
        | None
    ) = None,
    api_options: APIOptions | None = None
) -> None

Load DataStax Astra DB documents.

Parameters:

Name Type Description Default
collection_name str

name of the Astra DB collection to use.

required
token str | TokenProvider | None

API token for Astra DB usage, either in the form of a string or a subclass of astrapy.authentication.TokenProvider. If not provided, the environment variable ASTRA_DB_APPLICATION_TOKEN is inspected.

None
api_endpoint str | None

full URL to the API endpoint, such as https://<DB-ID>-us-east1.apps.astra.datastax.com. If not provided, the environment variable ASTRA_DB_API_ENDPOINT is inspected.

None
environment str | None

a string specifying the environment of the target Data API. If omitted, defaults to "prod" (Astra DB production). Other values are in astrapy.constants.Environment enum class.

None
namespace str | None

namespace (aka keyspace) where the collection resides. If not provided, the environment variable ASTRA_DB_KEYSPACE is inspected. Defaults to the database's "default namespace".

None
filter_criteria dict[str, Any] | None

Criteria to filter documents.

None
projection dict[str, Any] | None

Specifies the fields to return. If not provided, reads fall back to the Data API default projection.

_NOT_SET
limit int | None

a maximum number of documents to return in the read query.

None
nb_prefetched int

Max number of documents to pre-fetch. IGNORED starting from v. 0.3.5: astrapy v1.0+ does not support it.

_NOT_SET
page_content_mapper Callable[[dict], str]

Function applied to collection documents to create the page_content of the LangChain Document. Defaults to json.dumps.

dumps
metadata_mapper Callable[[dict], dict[str, Any]] | None

Function applied to collection documents to create the metadata of the LangChain Document. Defaults to returning the namespace, API endpoint and collection name.

None
ext_callers list[tuple[str | None, str | None] | str | None] | None

one or more caller identities to identify Data API calls in the User-Agent header. This is a list of (name, version) pairs, or just strings if no version info is provided, which, if supplied, becomes the leading part of the User-Agent string in all API requests related to this component.

None
api_options APIOptions | None

an instance of astrapy.utils.api_options.APIOptions that can be supplied to customize the interaction with the Data API regarding serialization/deserialization, timeouts, custom headers and so on. The provided options are applied on top of settings already tailored to this library, and if specified will take precedence. Passing None (default) means no customization is requested. Refer to the astrapy documentation for details.

None
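A usage sketch (the collection name, filter and content field below are hypothetical examples):

.. code-block:: python

import getpass

from langchain_astradb import AstraDBLoader

ASTRA_DB_API_ENDPOINT = getpass.getpass("ASTRA_DB_API_ENDPOINT = ")
ASTRA_DB_APPLICATION_TOKEN = getpass.getpass(
    "ASTRA_DB_APPLICATION_TOKEN = "
)

loader = AstraDBLoader(
    collection_name="my_collection",  # hypothetical collection
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
    filter_criteria={"journal": "nature"},  # hypothetical filter
    limit=100,
    # hypothetical 'title' field used as the Document page_content
    page_content_mapper=lambda raw_doc: raw_doc["title"],
)
documents = loader.load()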

AstraDBByteStore

Bases: AstraDBBaseStore[bytes], ByteStore

Methods:

Name Description
__init__

ByteStore implementation using DataStax AstraDB as the underlying store.

__init__

__init__(
    *,
    collection_name: str,
    token: str | TokenProvider | None = None,
    api_endpoint: str | None = None,
    namespace: str | None = None,
    environment: str | None = None,
    pre_delete_collection: bool = False,
    setup_mode: SetupMode = SYNC,
    ext_callers: (
        list[tuple[str | None, str | None] | str | None]
        | None
    ) = None,
    api_options: APIOptions | None = None
) -> None

ByteStore implementation using DataStax AstraDB as the underlying store.

The bytes values are converted to base64-encoded strings. Documents in the Astra DB collection will have the format

.. code-block:: json

{
  "_id": "<key>",
  "value": "<base64 string>"
}

Parameters:

Name Type Description Default
collection_name str

name of the Astra DB collection to create/use.

required
token str | TokenProvider | None

API token for Astra DB usage, either in the form of a string or a subclass of astrapy.authentication.TokenProvider. If not provided, the environment variable ASTRA_DB_APPLICATION_TOKEN is inspected.

None
api_endpoint str | None

full URL to the API endpoint, such as https://<DB-ID>-us-east1.apps.astra.datastax.com. If not provided, the environment variable ASTRA_DB_API_ENDPOINT is inspected.

None
namespace str | None

namespace (aka keyspace) where the collection is created. If not provided, the environment variable ASTRA_DB_KEYSPACE is inspected. Defaults to the database's "default namespace".

None
environment str | None

a string specifying the environment of the target Data API. If omitted, defaults to "prod" (Astra DB production). Other values are in astrapy.constants.Environment enum class.

None
setup_mode SetupMode

mode used to create the Astra DB collection (SYNC, ASYNC or OFF).

SYNC
pre_delete_collection bool

whether to delete the collection before creating it. If False and the collection already exists, the collection will be used as is.

False
ext_callers list[tuple[str | None, str | None] | str | None] | None

one or more caller identities to identify Data API calls in the User-Agent header. This is a list of (name, version) pairs, or just strings if no version info is provided, which, if supplied, becomes the leading part of the User-Agent string in all API requests related to this component.

None
api_options APIOptions | None

an instance of astrapy.utils.api_options.APIOptions that can be supplied to customize the interaction with the Data API regarding serialization/deserialization, timeouts, custom headers and so on. The provided options are applied on top of settings already tailored to this library, and if specified will take precedence. Passing None (default) means no customization is requested. Refer to the astrapy documentation for details.

None
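A usage sketch, relying on the standard LangChain ByteStore interface (mset/mget) that this class implements:

.. code-block:: python

import getpass

from langchain_astradb import AstraDBByteStore

ASTRA_DB_API_ENDPOINT = getpass.getpass("ASTRA_DB_API_ENDPOINT = ")
ASTRA_DB_APPLICATION_TOKEN = getpass.getpass(
    "ASTRA_DB_APPLICATION_TOKEN = "
)

byte_store = AstraDBByteStore(
    collection_name="my_byte_store",  # hypothetical collection
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
)
byte_store.mset([("k1", b"\x00\x01"), ("k2", b"payload")])
print(byte_store.mget(["k1", "k2"]))  # [b'\x00\x01', b'payload']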

AstraDBStore

Bases: AstraDBBaseStore[Any]

Methods:

Name Description
__init__

BaseStore implementation using DataStax AstraDB as the underlying store.

__init__

__init__(
    collection_name: str,
    *,
    token: str | TokenProvider | None = None,
    api_endpoint: str | None = None,
    namespace: str | None = None,
    environment: str | None = None,
    pre_delete_collection: bool = False,
    setup_mode: SetupMode = SYNC,
    ext_callers: (
        list[tuple[str | None, str | None] | str | None]
        | None
    ) = None,
    api_options: APIOptions | None = None
) -> None

BaseStore implementation using DataStax AstraDB as the underlying store.

The value type can be any type serializable by json.dumps. It can be used, for instance, to store embeddings with CacheBackedEmbeddings.

Documents in the AstraDB collection will have the format

.. code-block:: json

{
  "_id": "<key>",
  "value": <value>
}

Parameters:

Name Type Description Default
collection_name str

name of the Astra DB collection to create/use.

required
token str | TokenProvider | None

API token for Astra DB usage, either in the form of a string or a subclass of astrapy.authentication.TokenProvider. If not provided, the environment variable ASTRA_DB_APPLICATION_TOKEN is inspected.

None
api_endpoint str | None

full URL to the API endpoint, such as https://<DB-ID>-us-east1.apps.astra.datastax.com. If not provided, the environment variable ASTRA_DB_API_ENDPOINT is inspected.

None
namespace str | None

namespace (aka keyspace) where the collection is created. If not provided, the environment variable ASTRA_DB_KEYSPACE is inspected. Defaults to the database's "default namespace".

None
environment str | None

a string specifying the environment of the target Data API. If omitted, defaults to "prod" (Astra DB production). Other values are in astrapy.constants.Environment enum class.

None
setup_mode SetupMode

mode used to create the Astra DB collection (SYNC, ASYNC or OFF).

SYNC
pre_delete_collection bool

whether to delete the collection before creating it. If False and the collection already exists, the collection will be used as is.

False
ext_callers list[tuple[str | None, str | None] | str | None] | None

one or more caller identities to identify Data API calls in the User-Agent header. This is a list of (name, version) pairs, or just strings if no version info is provided, which, if supplied, becomes the leading part of the User-Agent string in all API requests related to this component.

None
api_options APIOptions | None

an instance of astrapy.utils.api_options.APIOptions that can be supplied to customize the interaction with the Data API regarding serialization/deserialization, timeouts, custom headers and so on. The provided options are applied on top of settings already tailored to this library, and if specified will take precedence. Passing None (default) means no customization is requested. Refer to the astrapy documentation for details.

None
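A usage sketch, again through the standard BaseStore mset/mget interface (values just need to be json.dumps-serializable):

.. code-block:: python

import getpass

from langchain_astradb import AstraDBStore

ASTRA_DB_API_ENDPOINT = getpass.getpass("ASTRA_DB_API_ENDPOINT = ")
ASTRA_DB_APPLICATION_TOKEN = getpass.getpass(
    "ASTRA_DB_APPLICATION_TOKEN = "
)

store = AstraDBStore(
    collection_name="my_kv_store",  # hypothetical collection
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
)
store.mset([("settings", {"temperature": 0.0}), ("note", "any JSON value")])
print(store.mget(["settings", "note"]))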

AstraDBVectorStore

Bases: VectorStore

A vector store which uses DataStax Astra DB as the backend.

Setup

Install the langchain-astradb package, then head to the Astra DB website <https://astra.datastax.com>, create an account, create a new database, and create an application token <https://docs.datastax.com/en/astra-db-serverless/administration/manage-application-tokens.html>.

.. code-block:: bash

pip install -qU langchain-astradb

Key init args — indexing params:

- collection_name (str): Name of the collection.
- embedding (Embeddings): Embedding function to use.

Key init args — client params:

- api_endpoint (str): Astra DB API endpoint.
- token (str): API token for Astra DB usage.
- namespace (Optional[str]): Namespace (aka keyspace) where the collection is created.

Instantiate

Get your API endpoint and application token from the dashboard of your database.

Create a vector store and provide a LangChain embedding object for working with it:

.. code-block:: python

import getpass

from langchain_astradb import AstraDBVectorStore
from langchain_openai import OpenAIEmbeddings

ASTRA_DB_API_ENDPOINT = getpass.getpass("ASTRA_DB_API_ENDPOINT = ")
ASTRA_DB_APPLICATION_TOKEN = getpass.getpass(
    "ASTRA_DB_APPLICATION_TOKEN = "
)

vector_store = AstraDBVectorStore(
    collection_name="astra_vector_langchain",
    embedding=OpenAIEmbeddings(),
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
)

(Vectorize) Create a vector store where the embedding vector computation happens entirely on the server-side, using the vectorize <https://docs.datastax.com/en/astra-db-serverless/databases/embedding-generation.html>_ feature:

.. code-block:: python

import getpass
from astrapy.info import VectorServiceOptions

from langchain_astradb import AstraDBVectorStore

ASTRA_DB_API_ENDPOINT = getpass.getpass("ASTRA_DB_API_ENDPOINT = ")
ASTRA_DB_APPLICATION_TOKEN = getpass.getpass(
    "ASTRA_DB_APPLICATION_TOKEN = "
)

vector_store = AstraDBVectorStore(
    collection_name="astra_vectorize_langchain",
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
    collection_vector_service_options=VectorServiceOptions(
        provider="nvidia",
        model_name="NV-Embed-QA",
        # authentication=...,  # needed by some providers/models
    ),
)

(Hybrid) The underlying Astra DB typically supports hybrid search (i.e. lexical + vector ANN) to boost the results' accuracy. This is provisioned and used automatically when available. For manual control, use the collection_rerank and collection_lexical constructor parameters:

.. code-block:: python

import getpass
from astrapy.info import (
    CollectionLexicalOptions,
    CollectionRerankOptions,
    RerankServiceOptions,
    VectorServiceOptions,
)

from langchain_astradb import AstraDBVectorStore

ASTRA_DB_API_ENDPOINT = getpass.getpass("ASTRA_DB_API_ENDPOINT = ")
ASTRA_DB_APPLICATION_TOKEN = getpass.getpass(
    "ASTRA_DB_APPLICATION_TOKEN = "
)

vector_store = AstraDBVectorStore(
    collection_name="astra_vectorize_langchain",
    # embedding=...,  # needed unless using 'vectorize'
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
    collection_vector_service_options=VectorServiceOptions(
        ...
    ),  # see above
    collection_lexical=CollectionLexicalOptions(analyzer="standard"),
    collection_rerank=CollectionRerankOptions(
        service=RerankServiceOptions(
            provider="nvidia",
            model_name="nvidia/llama-3.2-nv-rerankqa-1b-v2",
        ),
    ),
    collection_reranking_api_key=...,  # if needed by the model/setup
)

Hybrid-related server upgrades may introduce a mismatch between the store defaults and a pre-existing collection: in case one such mismatch is reported (as a Data API "EXISTING_COLLECTION_DIFFERENT_SETTINGS" error), the options to resolve are: (1) use autodetect mode, (2) switch to setup_mode "OFF", or (3) explicitly specify lexical and/or rerank settings in the vector store constructor, to match the existing collection configuration. See here <https://github.com/langchain-ai/langchain-datastax/blob/main/libs/astradb/README.md#collection-defaults-mismatch>_ for more details.

(Autodetect) Let the vector store figure out the configuration (including vectorize and document encoding scheme on DB), by inspection of an existing collection:

.. code-block:: python

import getpass

from langchain_astradb import AstraDBVectorStore

ASTRA_DB_API_ENDPOINT = getpass.getpass("ASTRA_DB_API_ENDPOINT = ")
ASTRA_DB_APPLICATION_TOKEN = getpass.getpass(
    "ASTRA_DB_APPLICATION_TOKEN = "
)

vector_store = AstraDBVectorStore(
    collection_name="astra_existing_collection",
    # embedding=...,  # needed unless using 'vectorize'
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
    autodetect_collection=True,
)

(Non-Astra DB) This class can also target a non-Astra DB database, such as a self-deployed HCD, through the Data API:

.. code-block:: python

import getpass

from astrapy.authentication import UsernamePasswordTokenProvider

from langchain_astradb import AstraDBVectorStore

vector_store = AstraDBVectorStore(
    collection_name="astra_existing_collection",
    # embedding=...,  # needed unless using 'vectorize'
    api_endpoint="http://localhost:8181",
    token=UsernamePasswordTokenProvider(
        username="user",
        password="pwd",
    ),
    collection_vector_service_options=...,  # if 'vectorize'
)

Add Documents

.. code-block:: python

from langchain_core.documents import Document

document_1 = Document(page_content="foo", metadata={"baz": "bar"})
document_2 = Document(page_content="thud", metadata={"bar": "baz"})
document_3 = Document(page_content="i will be deleted :(")

documents = [document_1, document_2, document_3]
ids = ["1", "2", "3"]
vector_store.add_documents(documents=documents, ids=ids)

Delete Documents

.. code-block:: python

vector_store.delete(ids=["3"])

Search with filter

.. code-block:: python

results = vector_store.similarity_search(
    query="thud", k=1, filter={"bar": "baz"}
)
for doc in results:
    print(f"* {doc.page_content} [{doc.metadata}]")

.. code-block:: none

* thud [{'bar': 'baz'}]

Search with score

.. code-block:: python

results = vector_store.similarity_search_with_score(query="qux", k=1)
for doc, score in results:
    print(f"* [SIM={score:3f}] {doc.page_content} [{doc.metadata}]")

.. code-block:: none

* [SIM=0.916135] foo [{'baz': 'bar'}]

Async

.. code-block:: python

# add documents
await vector_store.aadd_documents(documents=documents, ids=ids)

# delete documents
await vector_store.adelete(ids=["3"])

# search
results = await vector_store.asimilarity_search(query="thud", k=1)

# search with score
results = await vector_store.asimilarity_search_with_score(query="qux", k=1)
for doc, score in results:
    print(f"* [SIM={score:3f}] {doc.page_content} [{doc.metadata}]")

.. code-block:: none

* [SIM=0.916135] foo [{'baz': 'bar'}]

Use as Retriever

.. code-block:: python

retriever = vector_store.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={"k": 1, "score_threshold": 0.5},
)
retriever.invoke("thud")

.. code-block:: none

[Document(metadata={'bar': 'baz'}, page_content='thud')]

Methods:

Name Description
add_documents

Add or update documents in the vectorstore.

aadd_documents

Async run more documents through the embeddings and add to the vectorstore.

search

Return docs most similar to query using a specified search type.

asearch

Async return docs most similar to query using a specified search type.

similarity_search_with_relevance_scores

Return docs and relevance scores in the range [0, 1].

asimilarity_search_with_relevance_scores

Async return docs and relevance scores in the range [0, 1].

as_retriever

Return VectorStoreRetriever initialized from this VectorStore.

filter_to_query

Prepare a query for use on DB based on metadata filter.

__init__

A vector store which uses DataStax Astra DB as the backend.

copy

Create a copy, possibly with changed attributes.

clear

Empty the collection of all its stored entries.

aclear

Empty the collection of all its stored entries.

delete_by_document_id

Remove a single document from the store, given its document ID.

adelete_by_document_id

Remove a single document from the store, given its document ID.

delete

Delete by vector ids.

adelete

Delete by vector ids.

delete_by_metadata_filter

Delete all documents matching a certain metadata filtering condition.

adelete_by_metadata_filter

Delete all documents matching a certain metadata filtering condition.

delete_collection

Completely delete the collection from the database.

adelete_collection

Completely delete the collection from the database.

add_texts

Run texts through the embeddings and add them to the vectorstore.

aadd_texts

Run texts through the embeddings and add them to the vectorstore.

update_metadata

Add/overwrite the metadata of existing documents.

aupdate_metadata

Add/overwrite the metadata of existing documents.

full_decode_astra_db_found_document

Decode an Astra DB document in full, i.e. into Document+embedding/similarity.

full_decode_astra_db_reranked_result

Full-decode an Astra DB find-and-rerank hit (Document+embedding/similarity).

run_query_raw

Execute a generic query on stored documents, returning Astra DB documents.

run_query

Execute a generic query on stored documents, returning Documents+other info.

arun_query_raw

Execute a generic query on stored documents, returning Astra DB documents.

arun_query

Execute a generic query on stored documents, returning Documents+other info.

metadata_search

Get documents via a metadata search.

ametadata_search

Get documents via a metadata search.

get_by_document_id

Retrieve a single document from the store, given its document ID.

aget_by_document_id

Retrieve a single document from the store, given its document ID.

get_by_ids

Get documents by their IDs.

get_by_document_ids

Get documents by their IDs.

aget_by_ids

Get documents by their IDs.

aget_by_document_ids

Get documents by their IDs.

similarity_search

Return docs most similar to query.

similarity_search_with_score

Return docs most similar to query with score.

similarity_search_with_score_id

Return docs most similar to the query with score and id.

similarity_search_by_vector

Return docs most similar to embedding vector.

similarity_search_with_score_by_vector

Return docs most similar to embedding vector with score.

similarity_search_with_score_id_by_vector

Return docs most similar to embedding vector with score and id.

asimilarity_search

Return docs most similar to query.

asimilarity_search_with_score

Return docs most similar to query with score.

asimilarity_search_with_score_id

Return docs most similar to the query with score and id.

asimilarity_search_by_vector

Return docs most similar to embedding vector.

asimilarity_search_with_score_by_vector

Return docs most similar to embedding vector with score.

asimilarity_search_with_score_id_by_vector

Return docs most similar to embedding vector with score and id.

similarity_search_with_embedding_by_vector

Return docs most similar to embedding vector with embedding.

asimilarity_search_with_embedding_by_vector

Return docs most similar to embedding vector with embedding.

similarity_search_with_embedding

Return docs most similar to the query with embedding.

asimilarity_search_with_embedding

Return docs most similar to the query with embedding.

max_marginal_relevance_search_by_vector

Return docs selected using the maximal marginal relevance.

amax_marginal_relevance_search_by_vector

Return docs selected using the maximal marginal relevance.

max_marginal_relevance_search

Return docs selected using the maximal marginal relevance.

amax_marginal_relevance_search

Return docs selected using the maximal marginal relevance.

from_texts

Create an Astra DB vectorstore from raw texts.

afrom_texts

Create an Astra DB vectorstore from raw texts.

from_documents

Create an Astra DB vectorstore from a document list.

afrom_documents

Create an Astra DB vectorstore from a document list.

Attributes:

Name Type Description
embeddings Embeddings | None

Accesses the supplied embeddings object.

embeddings property

embeddings: Embeddings | None

Accesses the supplied embeddings object.

If using server-side embeddings, this will return None.

add_documents

add_documents(
    documents: list[Document], **kwargs: Any
) -> list[str]

Add or update documents in the vectorstore.

Parameters:

Name Type Description Default
documents list[Document]

Documents to add to the vectorstore.

required
kwargs Any

Additional keyword arguments. If kwargs contains ids and the documents also contain ids, the ids in kwargs take precedence.

{}

Returns:

Type Description
list[str]

List of IDs of the added texts.
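A sketch illustrating the precedence rule above (reusing the vector_store from the instantiation examples; "doc-A" is an arbitrary ID):

.. code-block:: python

from langchain_core.documents import Document

docs = [Document(id="some-id", page_content="foo")]
inserted_ids = vector_store.add_documents(docs, ids=["doc-A"])
# the explicit ids kwarg wins over Document.id:
# inserted_ids == ["doc-A"]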

aadd_documents async

aadd_documents(
    documents: list[Document], **kwargs: Any
) -> list[str]

Async run more documents through the embeddings and add to the vectorstore.

Parameters:

Name Type Description Default
documents list[Document]

Documents to add to the vectorstore.

required
kwargs Any

Additional keyword arguments.

{}

Returns:

Type Description
list[str]

List of IDs of the added texts.

search

search(
    query: str, search_type: str, **kwargs: Any
) -> list[Document]

Return docs most similar to query using a specified search type.

Parameters:

Name Type Description Default
query str

Input text

required
search_type str

Type of search to perform. Can be "similarity", "mmr", or "similarity_score_threshold".

required
**kwargs Any

Arguments to pass to the search method.

{}

Returns:

Type Description
list[Document]

List of Documents most similar to the query.

Raises:

Type Description
ValueError

If search_type is not one of "similarity", "mmr", or "similarity_score_threshold".

asearch async

asearch(
    query: str, search_type: str, **kwargs: Any
) -> list[Document]

Async return docs most similar to query using a specified search type.

Parameters:

Name Type Description Default
query str

Input text.

required
search_type str

Type of search to perform. Can be "similarity", "mmr", or "similarity_score_threshold".

required
**kwargs Any

Arguments to pass to the search method.

{}

Returns:

Type Description
list[Document]

List of Documents most similar to the query.

Raises:

Type Description
ValueError

If search_type is not one of "similarity", "mmr", or "similarity_score_threshold".

similarity_search_with_relevance_scores

similarity_search_with_relevance_scores(
    query: str, k: int = 4, **kwargs: Any
) -> list[tuple[Document, float]]

Return docs and relevance scores in the range [0, 1].

0 is dissimilar, 1 is most similar.

Parameters:

Name Type Description Default
query str

Input text.

required
k int

Number of Documents to return. Defaults to 4.

4
**kwargs Any

kwargs to be passed to similarity search. Should include: score_threshold: Optional, a floating point value between 0 and 1 to filter the resulting set of retrieved docs.

{}

Returns:

Type Description
list[tuple[Document, float]]

List of Tuples of (doc, similarity_score).

asimilarity_search_with_relevance_scores async

asimilarity_search_with_relevance_scores(
    query: str, k: int = 4, **kwargs: Any
) -> list[tuple[Document, float]]

Async return docs and relevance scores in the range [0, 1].

0 is dissimilar, 1 is most similar.

Parameters:

Name Type Description Default
query str

Input text.

required
k int

Number of Documents to return. Defaults to 4.

4
**kwargs Any

kwargs to be passed to similarity search. Should include: score_threshold: Optional, a floating point value between 0 and 1 to filter the resulting set of retrieved docs.

{}

Returns:

Type Description
list[tuple[Document, float]]

List of Tuples of (doc, similarity_score)

as_retriever

as_retriever(**kwargs: Any) -> VectorStoreRetriever

Return VectorStoreRetriever initialized from this VectorStore.

Parameters:

Name Type Description Default
**kwargs Any

Keyword arguments to pass to the search function. Can include:

- search_type (Optional[str]): Defines the type of search that the Retriever should perform. Can be "similarity" (default), "mmr", or "similarity_score_threshold".
- search_kwargs (Optional[Dict]): Keyword arguments to pass to the search function. Can include things like:
  - k: Amount of documents to return (default: 4)
  - score_threshold: Minimum relevance threshold for similarity_score_threshold
  - fetch_k: Amount of documents to pass to the MMR algorithm (default: 20)
  - lambda_mult: Diversity of results returned by MMR; 1 for minimum diversity, 0 for maximum (default: 0.5)
  - filter: Filter by document metadata

{}

Returns:

Name Type Description
VectorStoreRetriever VectorStoreRetriever

Retriever class for VectorStore.

Examples:

.. code-block:: python

# Retrieve more documents with higher diversity
# Useful if your dataset has many similar documents
docsearch.as_retriever(
    search_type="mmr", search_kwargs={"k": 6, "lambda_mult": 0.25}
)

# Fetch more documents for the MMR algorithm to consider
# But only return the top 5
docsearch.as_retriever(
    search_type="mmr", search_kwargs={"k": 5, "fetch_k": 50}
)

# Only retrieve documents that have a relevance score
# Above a certain threshold
docsearch.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={"score_threshold": 0.8},
)

# Only get the single most similar document from the dataset
docsearch.as_retriever(search_kwargs={"k": 1})

# Use a filter to only retrieve documents from a specific paper
docsearch.as_retriever(
    search_kwargs={"filter": {"paper_title": "GPT-4 Technical Report"}}
)

filter_to_query

filter_to_query(
    filter_dict: dict[str, Any] | None,
) -> dict[str, Any]

Prepare a query for use on DB based on metadata filter.

Encode an "abstract" filter clause on metadata into a query filter condition aware of the collection schema choice.

Parameters:

Name Type Description Default
filter_dict dict[str, Any] | None

a metadata condition in the form {"field": "value"} or related.

required

Returns:

Type Description
dict[str, Any]

the corresponding mapping, ready for use in queries and aware of the details of the schema used to encode the document on the DB.
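For example (a sketch; the exact output depends on the document encoding scheme detected or configured for the collection):

.. code-block:: python

db_filter = vector_store.filter_to_query({"genre": "fiction"})
# with the native "nested" scheme, this is expected to be a condition
# on the metadata field, e.g. {"metadata.genre": "fiction"}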

__init__

__init__(
    *,
    collection_name: str,
    embedding: Embeddings | None = None,
    token: str | TokenProvider | None = None,
    api_endpoint: str | None = None,
    environment: str | None = None,
    namespace: str | None = None,
    metric: str | None = None,
    batch_size: int | None = None,
    bulk_insert_batch_concurrency: int | None = None,
    bulk_insert_overwrite_concurrency: int | None = None,
    bulk_delete_concurrency: int | None = None,
    setup_mode: SetupMode | None = None,
    pre_delete_collection: bool = False,
    metadata_indexing_include: Iterable[str] | None = None,
    metadata_indexing_exclude: Iterable[str] | None = None,
    collection_indexing_policy: (
        dict[str, Any] | None
    ) = None,
    collection_vector_service_options: (
        VectorServiceOptions | None
    ) = None,
    collection_embedding_api_key: (
        str | EmbeddingHeadersProvider | None
    ) = None,
    content_field: str | None = None,
    ignore_invalid_documents: bool = False,
    autodetect_collection: bool = False,
    ext_callers: (
        list[tuple[str | None, str | None] | str | None]
        | None
    ) = None,
    component_name: str = COMPONENT_NAME_VECTORSTORE,
    api_options: APIOptions | None = None,
    collection_rerank: (
        CollectionRerankOptions
        | RerankServiceOptions
        | None
    ) = None,
    collection_reranking_api_key: (
        str | RerankingHeadersProvider | None
    ) = None,
    collection_lexical: (
        str
        | dict[str, Any]
        | CollectionLexicalOptions
        | None
    ) = None,
    hybrid_search: HybridSearchMode | None = None,
    hybrid_limit_factor: (
        float
        | dict[str, float]
        | HybridLimitFactorPrescription
        | None
    ) = None
) -> None

A vector store which uses DataStax Astra DB as the backend.

For more on Astra DB, visit https://docs.datastax.com/en/astra-db-serverless/index.html

Parameters:

Name Type Description Default
embedding Embeddings | None

the embeddings function or service to use. This enables client-side embedding functions or calls to external embedding providers. If embedding is passed, then collection_vector_service_options can not be provided.

None
collection_name str

name of the Astra DB collection to create/use.

required
token str | TokenProvider | None

API token for Astra DB usage, either in the form of a string or a subclass of astrapy.authentication.TokenProvider. If not provided, the environment variable ASTRA_DB_APPLICATION_TOKEN is inspected.

None
api_endpoint str | None

full URL to the API endpoint, such as https://<DB-ID>-us-east1.apps.astra.datastax.com. If not provided, the environment variable ASTRA_DB_API_ENDPOINT is inspected.

None
environment str | None

a string specifying the environment of the target Data API. If omitted, defaults to "prod" (Astra DB production). Other values are in astrapy.constants.Environment enum class.

None
namespace str | None

namespace (aka keyspace) where the collection is created. If not provided, the environment variable ASTRA_DB_KEYSPACE is inspected. Defaults to the database's "default namespace".

None
metric str | None

similarity function to use out of those available in Astra DB. If left out, it will use Astra DB API's defaults (i.e. "cosine" - but, for performance reasons, "dot_product" is suggested if embeddings are normalized to one).

None
batch_size int | None

Size of document chunks for each individual insertion API request. If not provided, astrapy defaults are applied.

None
bulk_insert_batch_concurrency int | None

Number of threads or coroutines to insert batches concurrently.

None
bulk_insert_overwrite_concurrency int | None

Number of threads or coroutines in a batch to insert pre-existing entries.

None
bulk_delete_concurrency int | None

Number of threads or coroutines for multiple-entry deletes.

None
setup_mode SetupMode | None

mode used to create the collection (SYNC, ASYNC or OFF).

None
pre_delete_collection bool

whether to delete the collection before creating it. If False and the collection already exists, the collection will be used as is.

False
metadata_indexing_include Iterable[str] | None

an allowlist of the specific metadata subfields that should be indexed for later filtering in searches.

None
metadata_indexing_exclude Iterable[str] | None

a denylist of the specific metadata subfields that should not be indexed for later filtering in searches.

None
collection_indexing_policy dict[str, Any] | None

a full "indexing" specification for what fields should be indexed for later filtering in searches. This dict must conform to to the API specifications (see https://docs.datastax.com/en/astra-db-serverless/api-reference/collections.html#the-indexing-option)

None
collection_vector_service_options VectorServiceOptions | None

specifies the use of server-side embeddings within Astra DB. If passing this parameter, embedding cannot be provided.

None
collection_embedding_api_key str | EmbeddingHeadersProvider | None

for usage of server-side embeddings within Astra DB. With this parameter one can supply an API Key that will be passed to Astra DB with each data request. This parameter can be either a string or a subclass of astrapy.authentication.EmbeddingHeadersProvider. This is useful when the service is configured for the collection, but no corresponding secret is stored within Astra's key management system.

None
content_field str | None

name of the field containing the textual content in the documents when saved on Astra DB. For vectorize collections, this cannot be specified; for non-vectorize collections, it defaults to "content". The special value "*" can be passed only if autodetect_collection=True. In this case, the actual name of the key for the textual content is guessed by inspecting a few documents from the collection, under the assumption that longer strings are the most likely candidates. Please understand the limitations of this method and get some understanding of your data before passing "*" for this parameter.

None
ignore_invalid_documents bool

if False (default), exceptions are raised when a document is found on the Astra DB collection that does not have the expected shape. If set to True, such results from the database are ignored and a warning is issued. Note that in this case a similarity search may end up returning fewer results than the required k.

False
autodetect_collection bool

if True, turns on autodetect behavior. The store will look for an existing collection of the provided name and infer the store settings from it. Default is False. In autodetect mode, content_field can be given as "*", meaning that an attempt will be made to determine it by inspection (unless vectorize is enabled, in which case content_field is ignored). In autodetect mode, the store not only determines whether embeddings are client- or server-side, but - most importantly - switches automatically between "nested" and "flat" representations of documents on DB (i.e. having the metadata key-value pairs grouped in a metadata field or spread at the documents' top-level). The former scheme is the native mode of the AstraDBVectorStore; the store resorts to the latter in case of vector collections populated with external means (such as a third-party data import tool) before applying an AstraDBVectorStore to them. Note that the following parameters cannot be used if this is True: metric, setup_mode, metadata_indexing_include, metadata_indexing_exclude, collection_indexing_policy, collection_vector_service_options.

False
ext_callers list[tuple[str | None, str | None] | str | None] | None

one or more caller identities to identify Data API calls in the User-Agent header. This is a list of (name, version) pairs, or just strings if no version info is provided, which, if supplied, becomes the leading part of the User-Agent string in all API requests related to this component.

None
component_name str

the string identifying this specific component in the stack of usage info passed as the User-Agent string to the Data API. Defaults to "langchain_vectorstore", but can be overridden if this component actually serves as the building block for another component (such as when the vector store is used within a GraphRetriever).

COMPONENT_NAME_VECTORSTORE
api_options APIOptions | None

an instance of astrapy.utils.api_options.APIOptions that can be supplied to customize the interaction with the Data API regarding serialization/deserialization, timeouts, custom headers and so on. The provided options are applied on top of settings already tailored to this library, and if specified will take precedence. Passing None (default) means no customization is requested. Refer to the astrapy documentation for details.

None
collection_rerank CollectionRerankOptions | RerankServiceOptions | None

providing reranking settings is necessary to run hybrid searches for similarity. This parameter can be an instance of the astrapy classes CollectionRerankOptions or RerankServiceOptions.

None
collection_reranking_api_key str | RerankingHeadersProvider | None

for usage of server-side reranking services within Astra DB. With this parameter one can supply an API Key that will be passed to Astra DB with each data request. This parameter can be either a string or a subclass of astrapy.authentication.RerankingHeadersProvider. This is useful when the service is configured for the collection, but no corresponding secret is stored within Astra's key management system.

None
collection_lexical str | dict[str, Any] | CollectionLexicalOptions | None

configuring a lexical analyzer is necessary to run lexical and hybrid searches. This parameter can be a string or dict, which is then passed as-is for the "analyzer" field of a createCollection's "$lexical.analyzer" value, or a ready-made astrapy CollectionLexicalOptions object.

None
hybrid_search HybridSearchMode | None

whether similarity searches should be run as Hybrid searches or not. Values are DEFAULT, ON or OFF. In case of DEFAULT, searches are performed as permitted by the collection configuration, with a preference for hybrid search. Forcing this setting to ON for a non-hybrid-enabled collection would result in a server error when running searches.

None
hybrid_limit_factor float | dict[str, float] | HybridLimitFactorPrescription | None

subsearch "limit" specification for hybrid searches. If omitted, hybrid searches do not specify it and leave the Data API to use its defaults. If a floating-point positive number is provided: each subsearch participating in the hybrid search (i.e. both the vector-based ANN and the lexical-based) will be requested to fecth up to int(k*hybrid_limit_factor) items, where k is the desired result count from the whole search. If a HybridLimitFactorPrescription is provided (see the class docstring for details), separate factors are applied to the vector and the lexical subsearches. Alternatively, a simple dictionary with keys "\(lexical" and "\)vector" achieves the same effect.

None

Raises:

Type Description
ValueError

if the parameters are inconsistent or invalid.

Note

For concurrency in synchronous add_texts, as a rule of thumb, on a typical client machine it is suggested to keep the quantity bulk_insert_batch_concurrency * bulk_insert_overwrite_concurrency much below 1000 to avoid exhausting the client multithreading/networking resources. The hardcoded defaults are somewhat conservative to meet most machines' specs, but a sensible choice to test may be:

  • bulk_insert_batch_concurrency = 80
  • bulk_insert_overwrite_concurrency = 10

A bit of experimentation is required to nail the best results here, depending on both the machine/network specs and the expected workload (specifically, how often a write is an update of an existing id). Remember you can pass concurrency settings to individual calls to add_texts and add_documents as well.
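As a concrete starting point for such tuning, a constructor-level sketch using the values suggested above (other arguments as in the instantiation examples earlier):

.. code-block:: python

tuned_store = AstraDBVectorStore(
    collection_name="astra_vector_langchain",
    embedding=OpenAIEmbeddings(),
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
    bulk_insert_batch_concurrency=80,
    bulk_insert_overwrite_concurrency=10,
)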

copy

copy(
    *,
    token: str | TokenProvider | None = None,
    ext_callers: (
        list[tuple[str | None, str | None] | str | None]
        | None
    ) = None,
    component_name: str | None = None,
    collection_embedding_api_key: (
        str | EmbeddingHeadersProvider | None
    ) = None,
    collection_reranking_api_key: (
        str | RerankingHeadersProvider | None
    ) = None
) -> AstraDBVectorStore

Create a copy, possibly with changed attributes.

This method creates a shallow copy of this environment. If a parameter is passed and differs from None, it will replace the corresponding value in the copy.

The method allows changing only the parameters that ensure the copy is functional and does not trigger side-effects: for example, one cannot create a copy acting on a new collection. In those cases, one should create a new instance of AstraDBVectorStore from scratch.

Parameters:

Name Type Description Default
token str | TokenProvider | None

API token for Astra DB usage, either in the form of a string or a subclass of astrapy.authentication.TokenProvider. In order to suppress token usage in the copy, explicitly pass astrapy.authentication.StaticTokenProvider(None).

None
ext_callers list[tuple[str | None, str | None] | str | None] | None

additional custom (caller_name, caller_version) pairs to attach to the User-Agent header when issuing Data API requests.

None
component_name str | None

a value for the LangChain component name to use when identifying the originator of the Data API requests.

None
collection_embedding_api_key str | EmbeddingHeadersProvider | None

the API Key to supply in each Data API request if necessary. This is necessary if using the Vectorize feature and no secret is stored with the database. In order to suppress the API Key in the copy, explicitly pass astrapy.authentication.EmbeddingAPIKeyHeaderProvider(None).

None
collection_reranking_api_key str | RerankingHeadersProvider | None

for usage of server-side reranking services within Astra DB. With this parameter one can supply an API Key that will be passed to Astra DB with each data request. This parameter can be either a string or a subclass of astrapy.authentication.RerankingHeadersProvider. This is useful when the service is configured for the collection, but no corresponding secret is stored within Astra's key management system.

None

Returns:

Type Description
AstraDBVectorStore

a shallow copy of this vector store, possibly with some changed attributes.
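
For illustration, a minimal sketch assuming an existing store instance named store; the token value and component name are placeholders:

from astrapy.authentication import StaticTokenProvider

limited_store = store.copy(
    token=StaticTokenProvider("AstraCS:..."),  # placeholder token
    component_name="my_component",  # placeholder identifier
)
# The original store is left untouched; the copy targets the same collection.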

clear

clear() -> None

Empty the collection of all its stored entries.

aclear async

aclear() -> None

Empty the collection of all its stored entries.

delete_by_document_id

delete_by_document_id(document_id: str) -> bool

Remove a single document from the store, given its document ID.

Parameters:

Name Type Description Default
document_id str

The document ID

required

Returns:

Type Description
bool

True if a document has indeed been deleted, False if ID not found.

adelete_by_document_id async

adelete_by_document_id(document_id: str) -> bool

Remove a single document from the store, given its document ID.

Parameters:

Name Type Description Default
document_id str

The document ID

required

Returns:

Type Description
bool

True if a document has indeed been deleted, False if ID not found.

delete

delete(
    ids: Iterable[str] | None = None,
    concurrency: int | None = None,
    **kwargs: Any
) -> bool | None

Delete by vector ids.

Parameters:

Name Type Description Default
ids Iterable[str] | None

List of ids to delete.

None
concurrency int | None

max number of threads issuing single-doc delete requests. Defaults to vector-store overall setting.

None
**kwargs Any

Additional arguments are ignored.

{}

Returns:

Type Description
bool | None

True if deletion is (entirely) successful, False otherwise.

Raises:

Type Description
ValueError

if no ids are provided.
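
For illustration, a minimal sketch assuming an existing store instance named store and placeholder IDs:

success = store.delete(ids=["id-1", "id-2", "id-3"], concurrency=5)
# success is True only if every single-document deletion went through.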

adelete async

adelete(
    ids: Iterable[str] | None = None,
    concurrency: int | None = None,
    **kwargs: Any
) -> bool | None

Delete by vector ids.

Parameters:

Name Type Description Default
ids Iterable[str] | None

List of ids to delete.

None
concurrency int | None

max number of simultaneous coroutines for single-doc delete requests. Defaults to vector-store overall setting.

None
**kwargs Any

Additional arguments are ignored.

{}

Returns:

Type Description
bool | None

True if deletion is (entirely) successful, False otherwise.

Raises:

Type Description
ValueError

if no ids are provided.

delete_by_metadata_filter

delete_by_metadata_filter(filter: dict[str, Any]) -> int

Delete all documents matching a certain metadata filtering condition.

This operation does not use the vector embeddings in any way; it simply removes all documents whose metadata match the provided condition.

Parameters:

Name Type Description Default
filter dict[str, Any]

Filter on the metadata to apply. The filter cannot be empty.

required

Returns:

Type Description
int

A number expressing the amount of deleted documents.

Raises:

Type Description
ValueError

if the provided filter is empty.
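
For illustration, a minimal sketch assuming an existing store instance named store and a placeholder metadata key:

deleted_count = store.delete_by_metadata_filter({"status": "draft"})
# deleted_count is the number of documents removed; an empty filter raises ValueError.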

adelete_by_metadata_filter async

adelete_by_metadata_filter(filter: dict[str, Any]) -> int

Delete all documents matching a certain metadata filtering condition.

This operation does not use the vector embeddings in any way; it simply removes all documents whose metadata match the provided condition.

Parameters:

Name Type Description Default
filter dict[str, Any]

Filter on the metadata to apply. The filter cannot be empty.

required

Returns:

Type Description
int

A number expressing the amount of deleted documents.

Raises:

Type Description
ValueError

if the provided filter is empty.

delete_collection

delete_collection() -> None

Completely delete the collection from the database.

Completely delete the collection from the database (as opposed to :meth:~clear, which empties it only). Stored data is lost and unrecoverable, resources are freed. Use with caution.

adelete_collection async

adelete_collection() -> None

Completely delete the collection from the database.

Completely delete the collection from the database (as opposed to :meth:~aclear, which empties it only). Stored data is lost and unrecoverable, resources are freed. Use with caution.

add_texts

add_texts(
    texts: Iterable[str],
    metadatas: Iterable[dict] | None = None,
    ids: Iterable[str | None] | None = None,
    *,
    batch_size: int | None = None,
    batch_concurrency: int | None = None,
    overwrite_concurrency: int | None = None,
    **kwargs: Any
) -> list[str]

Run texts through the embeddings and add them to the vectorstore.

If passing explicit ids, those entries whose id is in the store already will be replaced.

Parameters:

Name Type Description Default
texts Iterable[str]

Texts to add to the vectorstore.

required
metadatas Iterable[dict] | None

Optional list of metadatas.

None
ids Iterable[str | None] | None

Optional list of ids.

None
batch_size int | None

Size of document chunks for each individual insertion API request. If not provided, defaults to the vector-store overall defaults (which in turn falls to astrapy defaults).

None
batch_concurrency int | None

number of threads to process insertion batches concurrently. Defaults to the vector-store overall setting if not provided.

None
overwrite_concurrency int | None

number of threads to process pre-existing documents in each batch. Defaults to the vector-store overall setting if not provided.

None
**kwargs Any

Additional arguments are ignored.

{}
Note

The allowed field names for the metadata document attributes must obey certain rules (such as: keys cannot start with a dollar sign and cannot be empty). See Naming Conventions <https://docs.datastax.com/en/astra-db-serverless/api-reference/dataapiclient.html#naming-conventions>_ for details.

Returns:

Type Description
list[str]

The list of ids of the added texts.

Raises:

Type Description
AstraDBVectorStoreError

if not all documents could be inserted.
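
For illustration, a minimal sketch assuming an existing store instance named store; texts, metadata and IDs are placeholders:

inserted_ids = store.add_texts(
    texts=["the first text", "the second text"],
    metadatas=[{"topic": "a"}, {"topic": "b"}],
    ids=["id-1", "id-2"],  # re-using an existing ID replaces that entry
    batch_size=50,
)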

aadd_texts async

aadd_texts(
    texts: Iterable[str],
    metadatas: Iterable[dict] | None = None,
    ids: Iterable[str | None] | None = None,
    *,
    batch_size: int | None = None,
    batch_concurrency: int | None = None,
    overwrite_concurrency: int | None = None,
    **kwargs: Any
) -> list[str]

Run texts through the embeddings and add them to the vectorstore.

If passing explicit ids, those entries whose id is in the store already will be replaced.

Parameters:

Name Type Description Default
texts Iterable[str]

Texts to add to the vectorstore.

required
metadatas Iterable[dict] | None

Optional list of metadatas.

None
ids Iterable[str | None] | None

Optional list of ids.

None
batch_size int | None

Size of document chunks for each individual insertion API request. If not provided, defaults to the vector-store overall defaults (which in turn falls to astrapy defaults).

None
batch_concurrency int | None

number of simultaneous coroutines to process insertion batches concurrently. Defaults to the vector-store overall setting if not provided.

None
overwrite_concurrency int | None

number of simultaneous coroutines to process pre-existing documents in each batch. Defaults to the vector-store overall setting if not provided.

None
**kwargs Any

Additional arguments are ignored.

{}
Note

The allowed field names for the metadata document attributes must obey certain rules (such as: keys cannot start with a dollar sign and cannot be empty). See Naming Conventions <https://docs.datastax.com/en/astra-db-serverless/api-reference/dataapiclient.html#naming-conventions>_ for details.

Returns:

Type Description
list[str]

The list of ids of the added texts.

Raises:

Type Description
AstraDBVectorStoreError

if not all documents could be inserted.

update_metadata

update_metadata(
    id_to_metadata: dict[str, dict],
    *,
    overwrite_concurrency: int | None = None
) -> int

Add/overwrite the metadata of existing documents.

For each document to update, the new metadata dictionary is merged into the existing metadata, overwriting individual keys that existed already.

Parameters:

Name Type Description Default
id_to_metadata dict[str, dict]

map from the Document IDs to modify to the new metadata for updating. Keys in this dictionary that do not correspond to an existing document will be silently ignored. The values of this map are metadata dictionaries for updating the documents. Any pre-existing metadata will be merged with these entries, which take precedence on a key-by-key basis.

required
overwrite_concurrency int | None

number of threads to process the updates. Defaults to the vector-store overall setting if not provided.

None

Returns:

Type Description
int

the number of documents successfully updated (i.e. found to exist, since even an update with {} as the new metadata counts as successful).
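
For illustration, a minimal sketch assuming an existing store instance named store and two placeholder document IDs:

updated_count = store.update_metadata(
    {
        "id-1": {"reviewed": True},
        "id-2": {"reviewed": True, "topic": "b-revised"},
    }
)
# updated_count counts the IDs actually found on the collection (at most 2 here).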

aupdate_metadata async

aupdate_metadata(
    id_to_metadata: dict[str, dict],
    *,
    overwrite_concurrency: int | None = None
) -> int

Add/overwrite the metadata of existing documents.

For each document to update, the new metadata dictionary is merged into the existing metadata, overwriting individual keys that existed already.

Parameters:

Name Type Description Default
id_to_metadata dict[str, dict]

map from the Document IDs to modify to the new metadata for updating. Keys in this dictionary that do not correspond to an existing document will be silently ignored. The values of this map are metadata dictionaries for updating the documents. Any pre-existing metadata will be merged with these entries, which take precedence on a key-by-key basis.

required
overwrite_concurrency int | None

number of asynchronous tasks to process the updates. Defaults to the vector-store overall setting if not provided.

None

Returns:

Type Description
int

the number of documents successfully updated (i.e. found to exist, since even an update with {} as the new metadata counts as successful).

full_decode_astra_db_found_document

full_decode_astra_db_found_document(
    astra_db_document: DocDict,
) -> AstraDBQueryResult | None

Decode an Astra DB document in full, i.e. into Document+embedding/similarity.

This operation returns a representation that is independent of the codec being used in the collection (whereas the input, a 'raw' Astra DB document, is codec-dependent).

The input raw document can carry information on embedding and similarity, depending on details of the query used to retrieve it. These are set to None in the result if not found.

The whole method can return None, signaling that the codec has refused the conversion (e.g. because the input document is deemed faulty).

Parameters:

Name Type Description Default
astra_db_document DocDict

a dictionary obtained through run_query_raw from the collection.

required

Returns:

Type Description
AstraDBQueryResult | None

an AstraDBQueryResult named tuple with Document, id, embedding (where applicable) and similarity (where applicable), or an overall None if the decoding is refused by the codec.

full_decode_astra_db_reranked_result

full_decode_astra_db_reranked_result(
    astra_db_reranked_result: RerankedResult[DocDict],
) -> AstraDBQueryResult | None

Full-decode an Astra DB find-and-rerank hit (Document+embedding/similarity).

This operation returns a representation that is independent of the codec being used in the collection (whereas the 'document' part of the input, a 'raw' Astra DB response from a find-and-rerank hybrid search, is codec-dependent).

The input raw document is what the find_and_rerank astrapy method returns, i.e. an iterable over RerankedResult objects. Missing entries (such as the embedding) are set to None in the result if not found.

The whole method can return None, signaling that the codec has refused the conversion (e.g. because the input document is deemed faulty).

Parameters:

Name Type Description Default
astra_db_reranked_result RerankedResult[DocDict]

a RerankedResult obtained by a find_and_rerank method call on the collection.

required

Returns:

Type Description
AstraDBQueryResult | None

an AstraDBQueryResult named tuple with Document, id, embedding (where applicable) and similarity (where applicable), or an overall None if the decoding is refused by the codec.

run_query_raw

run_query_raw(
    *,
    n: int,
    ids: list[str] | None = None,
    filter: dict[str, Any] | None = None,
    sort: dict[str, Any] | None = None,
    include_similarity: bool | None = None,
    include_sort_vector: bool = False,
    include_embeddings: bool = False
) -> (
    tuple[list[float] | None, Iterable[DocDict]]
    | Iterable[DocDict]
)

Execute a generic query on stored documents, returning Astra DB documents.

The return value has a variable format, depending on whether the 'sort vector' is requested back from the server.

Only the n parameter is required. Omitting all other parameters results in a query that matches each and every document found on the collection.

The method does not expose a projection directly, which is instead automatically determined based on the invocation options.

The returned documents are exactly as they come back from Astra DB (taking into account the projection as well). A further step, namely subsequent invocation of the convert_astra_db_document method, is required to reconstruct codec-independent Document objects. The reason for keeping the retrieval and the decoding steps separate is that a caller may want to first deduplicate/discard items, in order to convert only the items actually needed.

Parameters:

Name Type Description Default
n int

number of items to return. Fewer items than n may be returned if the collection does not have enough matches.

required
ids list[str] | None

a list of document IDs to restrict the query to. If this is supplied, only documents with an ID among the provided ones will match. If further query filters are provided (i.e. metadata), matches must satisfy both requirements.

None
filter dict[str, Any] | None

a metadata filtering part. If provided, it must refer to metadata keys by their bare name (such as {"key": 123}). This filter can combine nested conditions with "$or"/"$and" connectors, for example: {"tag": "a"}; {"$or": [{"tag": "a"}, {"label": "b"}]}; {"$and": [{"tag": {"$in": ["a", "z"]}}, {"label": "b"}]}.

None
sort dict[str, Any] | None

a 'sort' clause for the query, such as {"$vector": [...]}, {"$vectorize": "..."} or {"mdkey": 1}. Metadata sort conditions must be expressed by their 'bare' name.

None
include_similarity bool | None

whether to return similarity scores with each match. Requires vector sort.

None
include_sort_vector bool

whether to also return the query vector used for the ANN search, alongside the iterable of results. Requires vector sort. Note that the shape of the return value depends on this parameter.

False
include_embeddings bool

whether to retrieve the matches' own embedding vectors.

False

Returns:

Type Description
tuple[list[float] | None, Iterable[DocDict]] | Iterable[DocDict]

The shape of the return value depends on the value of include_sort_vector:

  • if include_sort_vector = False, the return value is an iterable over Astra DB documents (dictionaries);
  • if include_sort_vector = True, the return value is a 2-item tuple (sort_v, astra_db_ite), where:
  • sort_v is the sort vector, if requested, or None if not available.
  • astra_db_ite is an iterable over Astra DB documents (dictionaries).
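
For illustration, a minimal sketch combining run_query_raw with the decoding step described above. Here store is an existing instance and query_vector is a placeholder list of floats matching the collection's embedding dimension; attribute names on the decoded result follow the AstraDBQueryResult description above:

sort_v, raw_docs = store.run_query_raw(
    n=10,
    sort={"$vector": query_vector},
    filter={"tag": "a"},  # placeholder metadata condition
    include_sort_vector=True,
    include_embeddings=True,
)
for raw_doc in raw_docs:
    decoded = store.full_decode_astra_db_found_document(raw_doc)
    if decoded is not None:  # the codec may refuse a faulty document
        print(decoded.id, decoded.document, decoded.embedding)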

run_query

run_query(
    *,
    n: int,
    ids: list[str] | None = None,
    filter: dict[str, Any] | None = None,
    sort: dict[str, Any] | None = None,
    include_similarity: bool | None = None,
    include_sort_vector: bool = False,
    include_embeddings: bool = False
) -> (
    tuple[list[float] | None, Iterable[AstraDBQueryResult]]
    | Iterable[AstraDBQueryResult]
)

Execute a generic query on stored documents, returning Documents+other info.

The return value has a variable format, depending on whether the 'sort vector' is requested back from the server.

Only the n parameter is required. Omitting all other parameters results in a query that matches each and every document found on the collection.

The method does not expose a projection directly, which is instead automatically determined based on the invocation options.

The returned Document objects are codec-independent.

Parameters:

Name Type Description Default
n int

number of items to return. Fewer items than n may be returned in the following cases: (a) the decoding skips some raw entries from the server; (b) the collection does not have enough matches.

required
ids list[str] | None

a list of document IDs to restrict the query to. If this is supplied, only documents with an ID among the provided ones will match. If further query filters are provided (i.e. metadata), matches must satisfy both requirements.

None
filter dict[str, Any] | None

a metadata filtering part. If provided, it must refer to metadata keys by their bare name (such as {"key": 123}). This filter can combine nested conditions with "$or"/"$and" connectors, for example: {"tag": "a"}; {"$or": [{"tag": "a"}, {"label": "b"}]}; {"$and": [{"tag": {"$in": ["a", "z"]}}, {"label": "b"}]}.

None
sort dict[str, Any] | None

a 'sort' clause for the query, such as {"$vector": [...]}, {"$vectorize": "..."} or {"mdkey": 1}. Metadata sort conditions must be expressed by their 'bare' name.

None
include_similarity bool | None

whether to return similarity scores with each match. Requires vector sort.

None
include_sort_vector bool

whether to also return the query vector used for the ANN search, alongside the iterable of results. Requires vector sort. Note that the shape of the return value depends on this parameter.

False
include_embeddings bool

whether to retrieve the matches' own embedding vectors.

False

Returns:

Type Description
tuple[list[float] | None, Iterable[AstraDBQueryResult]] | Iterable[AstraDBQueryResult]

The shape of the return value depends on the value of include_sort_vector:

  • if include_sort_vector = False, the return value is an iterable over the AstraDBQueryResult items returned by the query. Entries that fail the decoding step, if any, are discarded after the query, which may lead to fewer items being returned than the required n.
  • if include_sort_vector = True, the return value is a 2-item tuple (sort_v, results_ite), where:
  • sort_v is the sort vector, if requested, or None if not available.
  • results_ite is an iterable over AstraDBQueryResult items as above.

arun_query_raw async

arun_query_raw(
    *,
    n: int,
    ids: list[str] | None = None,
    filter: dict[str, Any] | None = None,
    sort: dict[str, Any] | None = None,
    include_similarity: bool | None = None,
    include_sort_vector: bool = False,
    include_embeddings: bool = False
) -> (
    tuple[list[float] | None, AsyncIterable[DocDict]]
    | AsyncIterable[DocDict]
)

Execute a generic query on stored documents, returning Astra DB documents.

The return value has a variable format, depending on whether the 'sort vector' is requested back from the server.

Only the n parameter is required. Omitting all other parameters results in a query that matches each and every document found on the collection.

The method does not expose a projection directly, which is instead automatically determined based on the invocation options.

The returned documents are exactly as they come back from Astra DB (taking into account the projection as well). A further step, namely subsequent invocation of the convert_astra_db_document method, is required to reconstruct codec-independent Document objects. The reason for keeping the retrieval and the decoding steps separate is that a caller may want to first deduplicate/discard items, in order to convert only the items actually needed.

Parameters:

Name Type Description Default
n int

number of items to return. Fewer items than n may be returned if the collection does not have enough matches.

required
ids list[str] | None

a list of document IDs to restrict the query to. If this is supplied, only documents with an ID among the provided ones will match. If further query filters are provided (i.e. metadata), matches must satisfy both requirements.

None
filter dict[str, Any] | None

a metadata filtering part. If provided, it must refer to metadata keys by their bare name (such as {"key": 123}). This filter can combine nested conditions with "$or"/"$and" connectors, for example: {"tag": "a"}; {"$or": [{"tag": "a"}, {"label": "b"}]}; {"$and": [{"tag": {"$in": ["a", "z"]}}, {"label": "b"}]}.

None
sort dict[str, Any] | None

a 'sort' clause for the query, such as {"$vector": [...]}, {"$vectorize": "..."} or {"mdkey": 1}. Metadata sort conditions must be expressed by their 'bare' name.

None
include_similarity bool | None

whether to return similarity scores with each match. Requires vector sort.

None
include_sort_vector bool

whether to also return the query vector used for the ANN search, alongside the iterable of results. Requires vector sort. Note that the shape of the return value depends on this parameter.

False
include_embeddings bool

whether to retrieve the matches' own embedding vectors.

False

Returns:

Type Description
tuple[list[float] | None, AsyncIterable[DocDict]] | AsyncIterable[DocDict]

The shape of the return value depends on the value of include_sort_vector:

  • if include_sort_vector = False, the return value is an async iterable over Astra DB documents (dictionaries);
  • if include_sort_vector = True, the return value is a 2-item tuple (sort_v, astra_db_ite), where:
  • sort_v is the sort vector, if requested, or None if not available.
  • astra_db_ite is an async iterable over Astra DB documents (dictionaries).

arun_query async

arun_query(
    *,
    n: int,
    ids: list[str] | None = None,
    filter: dict[str, Any] | None = None,
    sort: dict[str, Any] | None = None,
    include_similarity: bool | None = None,
    include_sort_vector: bool = False,
    include_embeddings: bool = False
) -> (
    tuple[
        list[float] | None,
        AsyncIterable[AstraDBQueryResult],
    ]
    | AsyncIterable[AstraDBQueryResult]
)

Execute a generic query on stored documents, returning Documents+other info.

The return value has a variable format, depending on whether the 'sort vector' is requested back from the server.

Only the n parameter is required. Omitting all other parameters results in a query that matches each and every document found on the collection.

The method does not expose a projection directly, which is instead automatically determined based on the invocation options.

The returned Document objects are codec-independent.

Parameters:

Name Type Description Default
n int

number of items to return. Fewer items than n may be returned in the following cases: (a) the decoding skips some raw entries from the server; (b) the collection does not have enough matches.

required
ids list[str] | None

a list of document IDs to restrict the query to. If this is supplied, only documents with an ID among the provided ones will match. If further query filters are provided (i.e. metadata), matches must satisfy both requirements.

None
filter dict[str, Any] | None

a metadata filtering part. If provided, it must refer to metadata keys by their bare name (such as {"key": 123}). This filter can combine nested conditions with "$or"/"$and" connectors, for example: {"tag": "a"}; {"$or": [{"tag": "a"}, {"label": "b"}]}; {"$and": [{"tag": {"$in": ["a", "z"]}}, {"label": "b"}]}.

None
sort dict[str, Any] | None

a 'sort' clause for the query, such as {"$vector": [...]}, {"$vectorize": "..."} or {"mdkey": 1}. Metadata sort conditions must be expressed by their 'bare' name.

None
include_similarity bool | None

whether to return similarity scores with each match. Requires vector sort.

None
include_sort_vector bool

whether to also return the query vector used for the ANN search, alongside the iterable of results. Requires vector sort. Note that the shape of the return value depends on this parameter.

False
include_embeddings bool

whether to retrieve the matches' own embedding vectors.

False

Returns:

Type Description
tuple[list[float] | None, AsyncIterable[AstraDBQueryResult]] | AsyncIterable[AstraDBQueryResult]

The shape of the return value depends on the value of include_sort_vector:

  • if include_sort_vector = False, the return value is an async iterable over the AstraDBQueryResult items returned by the query. Entries that fail the decoding step, if any, are discarded after the query, which may lead to fewer items being returned than the required n.
  • if include_sort_vector = True, the return value is a 2-item tuple (sort_v, results_ite), where:
  • sort_v is the sort vector, if requested, or None if not available.
  • results_ite is an async iterable over AstraDBQueryResult items as above.
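
For illustration, a minimal async sketch assuming an existing store instance named store; the query vector and its dimension are placeholders:

import asyncio

async def top_matches(query_vector: list[float]) -> None:
    results = await store.arun_query(
        n=5,
        sort={"$vector": query_vector},
        include_similarity=True,
    )
    # With include_sort_vector left False, the result is a plain async iterable:
    async for res in results:
        print(res.id, res.similarity)

asyncio.run(top_matches([0.0] * 1536))  # dimension shown is illustrative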

metadata_search

metadata_search(
    filter: dict[str, Any] | None = None, n: int = 5
) -> list[Document]

Get documents via a metadata search.

Parameters:

Name Type Description Default
filter dict[str, Any] | None

the metadata to query for.

None
n int

the maximum number of documents to return.

5

Returns:

Type Description
list[Document]

The documents found.
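
For illustration, a minimal sketch assuming an existing store instance named store and a placeholder metadata key:

docs = store.metadata_search(filter={"topic": "a"}, n=10)
for doc in docs:
    print(doc.id, doc.metadata)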

ametadata_search async

ametadata_search(
    filter: dict[str, Any] | None = None, n: int = 5
) -> Iterable[Document]

Get documents via a metadata search.

Parameters:

Name Type Description Default
filter dict[str, Any] | None

the metadata to query for.

None
n int

the maximum number of documents to return.

5

Returns:

Type Description
Iterable[Document]

The documents found.

get_by_document_id

get_by_document_id(document_id: str) -> Document | None

Retrieve a single document from the store, given its document ID.

Parameters:

Name Type Description Default
document_id str

The document ID

required

Returns:

Type Description
Document | None

The document if it exists, otherwise None.

aget_by_document_id async

aget_by_document_id(document_id: str) -> Document | None

Retrieve a single document from the store, given its document ID.

Parameters:

Name Type Description Default
document_id str

The document ID

required

Returns:

Type Description
Document | None

The document if it exists, otherwise None.

get_by_ids

get_by_ids(
    ids: Sequence[str],
    /,
    batch_size: int | None = None,
    batch_concurrency: int | None = None,
) -> list[Document]

Get documents by their IDs.

The returned documents have the ID field set to the ID of the document in the vector store.

Fewer documents may be returned than requested if some IDs are not found or if there are duplicated IDs.

Users should not assume that the order of the returned documents matches the order of the input IDs. Instead, users should rely on the ID field of the returned documents.

Parameters:

Name Type Description Default
ids Sequence[str]

List of ids to retrieve.

required
batch_size int | None

If many IDs are requested, these are split in chunks and multiple requests are run and collated. This sets the size of each such chunk of IDs. Default is 80. The database sets a hard limit of 100.

None
batch_concurrency int | None

Number of threads for executing multiple requests if needed. Default is 20.

None

Returns:

Type Description
list[Document]

List of Documents.
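
For illustration, a minimal sketch assuming an existing store instance named store and placeholder IDs:

docs = store.get_by_ids(["id-1", "id-2", "id-3"], batch_size=80)
by_id = {doc.id: doc for doc in docs}  # order is not guaranteed; key on doc.id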

get_by_document_ids

get_by_document_ids(
    ids: Sequence[str],
    /,
    batch_size: int | None = None,
    batch_concurrency: int | None = None,
) -> list[Document]

Get documents by their IDs.

The returned documents have the ID field set to the ID of the document in the vector store.

Fewer documents may be returned than requested if some IDs are not found or if there are duplicated IDs.

Users should not assume that the order of the returned documents matches the order of the input IDs. Instead, users should rely on the ID field of the returned documents.

Parameters:

Name Type Description Default
ids Sequence[str]

List of ids to retrieve.

required
batch_size int | None

If many IDs are requested, these are split in chunks and multiple requests are run and collated. This sets the size of each such chunk of IDs. Default is 80. The database sets a hard limit of 100.

None
batch_concurrency int | None

Number of threads for executing multiple requests if needed. Default is 20.

None

Returns:

Type Description
list[Document]

List of Documents.

aget_by_ids async

aget_by_ids(
    ids: Sequence[str],
    /,
    batch_size: int | None = None,
    batch_concurrency: int | None = None,
) -> list[Document]

Get documents by their IDs.

The returned documents have the ID field set to the ID of the document in the vector store.

Fewer documents may be returned than requested if some IDs are not found or if there are duplicated IDs.

Users should not assume that the order of the returned documents matches the order of the input IDs. Instead, users should rely on the ID field of the returned documents.

Parameters:

Name Type Description Default
ids Sequence[str]

List of ids to retrieve.

required
batch_size int | None

If many IDs are requested, these are split in chunks and multiple requests are run and collated. This sets the size of each such chunk of IDs. Default is 80. The database sets a hard limit of 100.

None
batch_concurrency int | None

Number of threads for executing multiple requests if needed. Default is 20.

None

Returns:

Type Description
list[Document]

List of Documents.

aget_by_document_ids async

aget_by_document_ids(
    ids: Sequence[str],
    /,
    batch_size: int | None = None,
    batch_concurrency: int | None = None,
) -> list[Document]

Get documents by their IDs.

The returned documents have the ID field set to the ID of the document in the vector store.

Fewer documents may be returned than requested if some IDs are not found or if there are duplicated IDs.

Users should not assume that the order of the returned documents matches the order of the input IDs. Instead, users should rely on the ID field of the returned documents.

Parameters:

Name Type Description Default
ids Sequence[str]

List of ids to retrieve.

required
batch_size int | None

If many IDs are requested, these are split in chunks and multiple requests are run and collated. This sets the size of each such chunk of IDs. Default is 80. The database sets a hard limit of 100.

None
batch_concurrency int | None

Number of threads for executing multiple requests if needed. Default is 20.

None

Returns:

Type Description
list[Document]

List of Documents.

similarity_search

similarity_search(
    query: str,
    k: int = 4,
    filter: dict[str, Any] | None = None,
    lexical_query: str | None = None,
    **kwargs: Any
) -> list[Document]

Return docs most similar to query.

Parameters:

Name Type Description Default
query str

Query to look up documents similar to.

required
k int

Number of Documents to return. Defaults to 4.

4
filter dict[str, Any] | None

Filter on the metadata to apply.

None
lexical_query str | None

for hybrid search, a specific query for the lexical portion of the retrieval. If omitted or empty, defaults to the same as 'query'. If passed on a non-hybrid search, an error is raised.

None
**kwargs Any

Additional arguments are ignored.

{}

Returns:

Type Description
list[Document]

The list of Documents most similar to the query.
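
For illustration, a minimal sketch assuming an existing store instance named store; queries and metadata are placeholders. The second call is only valid on a hybrid-enabled collection:

docs = store.similarity_search(
    "what does the contract say about renewals?",
    k=3,
    filter={"topic": "legal"},
)

# On a hybrid-enabled collection, the lexical subsearch can use its own query:
hybrid_docs = store.similarity_search(
    "what does the contract say about renewals?",
    k=3,
    lexical_query="contract renewal",
)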

similarity_search_with_score

similarity_search_with_score(
    query: str,
    k: int = 4,
    filter: dict[str, Any] | None = None,
    lexical_query: str | None = None,
) -> list[tuple[Document, float]]

Return docs most similar to query with score.

Parameters:

Name Type Description Default
query str

Query to look up documents similar to.

required
k int

Number of Documents to return. Defaults to 4.

4
filter dict[str, Any] | None

Filter on the metadata to apply.

None
lexical_query str | None

for hybrid search, a specific query for the lexical portion of the retrieval. If omitted or empty, defaults to the same as 'query'. If passed on a non-hybrid search, an error is raised.

None

Returns:

Type Description
list[tuple[Document, float]]

The list of (Document, score), the most similar to the query vector.

similarity_search_with_score_id

similarity_search_with_score_id(
    query: str,
    k: int = 4,
    filter: dict[str, Any] | None = None,
    lexical_query: str | None = None,
) -> list[tuple[Document, float, str]]

Return docs most similar to the query with score and id.

Parameters:

Name Type Description Default
query str

Query to look up documents similar to.

required
k int

Number of Documents to return. Defaults to 4.

4
filter dict[str, Any] | None

Filter on the metadata to apply.

None
lexical_query str | None

for hybrid search, a specific query for the lexical portion of the retrieval. If omitted or empty, defaults to the same as 'query'. If passed on a non-hybrid search, an error is raised.

None

Returns:

Type Description
list[tuple[Document, float, str]]

The list of (Document, score, id), the most similar to the query.

similarity_search_by_vector

similarity_search_by_vector(
    embedding: list[float],
    k: int = 4,
    filter: dict[str, Any] | None = None,
    **kwargs: Any
) -> list[Document]

Return docs most similar to embedding vector.

Parameters:

Name Type Description Default
embedding list[float]

Embedding to look up documents similar to.

required
k int

Number of Documents to return. Defaults to 4.

4
filter dict[str, Any] | None

Filter on the metadata to apply.

None
**kwargs Any

Additional arguments are ignored.

{}

Returns:

Type Description
list[Document]

The list of Documents most similar to the query vector.

similarity_search_with_score_by_vector

similarity_search_with_score_by_vector(
    embedding: list[float],
    k: int = 4,
    filter: dict[str, Any] | None = None,
) -> list[tuple[Document, float]]

Return docs most similar to embedding vector with score.

Parameters:

Name Type Description Default
embedding list[float]

Embedding to look up documents similar to.

required
k int

Number of Documents to return. Defaults to 4.

4
filter dict[str, Any] | None

Filter on the metadata to apply.

None

Returns:

Type Description
list[tuple[Document, float]]

The list of (Document, score), the most similar to the query vector.

similarity_search_with_score_id_by_vector

similarity_search_with_score_id_by_vector(
    embedding: list[float],
    k: int = 4,
    filter: dict[str, Any] | None = None,
) -> list[tuple[Document, float, str]]

Return docs most similar to embedding vector with score and id.

Parameters:

Name Type Description Default
embedding list[float]

Embedding to look up documents similar to.

required
k int

Number of Documents to return. Defaults to 4.

4
filter dict[str, Any] | None

Filter on the metadata to apply.

None

Returns:

Type Description
list[tuple[Document, float, str]]

The list of (Document, score, id), the most similar to the query vector.

Raises:

Type Description
ValueError

if the vector store uses server-side embeddings.

asimilarity_search async

asimilarity_search(
    query: str,
    k: int = 4,
    filter: dict[str, Any] | None = None,
    lexical_query: str | None = None,
    **kwargs: Any
) -> list[Document]

Return docs most similar to query.

Parameters:

Name Type Description Default
query str

Query to look up documents similar to.

required
k int

Number of Documents to return. Defaults to 4.

4
filter dict[str, Any] | None

Filter on the metadata to apply.

None
lexical_query str | None

for hybrid search, a specific query for the lexical portion of the retrieval. If omitted or empty, defaults to the same as 'query'. If passed on a non-hybrid search, an error is raised.

None
**kwargs Any

Additional arguments are ignored.

{}

Returns:

Type Description
list[Document]

The list of Documents most similar to the query.

asimilarity_search_with_score async

asimilarity_search_with_score(
    query: str,
    k: int = 4,
    filter: dict[str, Any] | None = None,
    lexical_query: str | None = None,
) -> list[tuple[Document, float]]

Return docs most similar to query with score.

Parameters:

Name Type Description Default
query str

Query to look up documents similar to.

required
k int

Number of Documents to return. Defaults to 4.

4
filter dict[str, Any] | None

Filter on the metadata to apply.

None
lexical_query str | None

for hybrid search, a specific query for the lexical portion of the retrieval. If omitted or empty, defaults to the same as 'query'. If passed on a non-hybrid search, an error is raised.

None

Returns:

Type Description
list[tuple[Document, float]]

The list of (Document, score), the most similar to the query vector.

asimilarity_search_with_score_id async

asimilarity_search_with_score_id(
    query: str,
    k: int = 4,
    filter: dict[str, Any] | None = None,
    lexical_query: str | None = None,
) -> list[tuple[Document, float, str]]

Return docs most similar to the query with score and id.

Parameters:

Name Type Description Default
query str

Query to look up documents similar to.

required
k int

Number of Documents to return. Defaults to 4.

4
filter dict[str, Any] | None

Filter on the metadata to apply.

None
lexical_query str | None

for hybrid search, a specific query for the lexical portion of the retrieval. If omitted or empty, defaults to the same as 'query'. If passed on a non-hybrid search, an error is raised.

None

Returns:

Type Description
list[tuple[Document, float, str]]

The list of (Document, score, id), the most similar to the query.

asimilarity_search_by_vector async

asimilarity_search_by_vector(
    embedding: list[float],
    k: int = 4,
    filter: dict[str, Any] | None = None,
    **kwargs: Any
) -> list[Document]

Return docs most similar to embedding vector.

Parameters:

Name Type Description Default
embedding list[float]

Embedding to look up documents similar to.

required
k int

Number of Documents to return. Defaults to 4.

4
filter dict[str, Any] | None

Filter on the metadata to apply.

None
**kwargs Any

Additional arguments are ignored.

{}

Returns:

Type Description
list[Document]

The list of Documents most similar to the query vector.

asimilarity_search_with_score_by_vector async

asimilarity_search_with_score_by_vector(
    embedding: list[float],
    k: int = 4,
    filter: dict[str, Any] | None = None,
) -> list[tuple[Document, float]]

Return docs most similar to embedding vector with score.

Parameters:

Name Type Description Default
embedding list[float]

Embedding to look up documents similar to.

required
k int

Number of Documents to return. Defaults to 4.

4
filter dict[str, Any] | None

Filter on the metadata to apply.

None

Returns:

Type Description
list[tuple[Document, float]]

The list of (Document, score), the most similar to the query vector.

asimilarity_search_with_score_id_by_vector async

asimilarity_search_with_score_id_by_vector(
    embedding: list[float],
    k: int = 4,
    filter: dict[str, Any] | None = None,
) -> list[tuple[Document, float, str]]

Return docs most similar to embedding vector with score and id.

Parameters:

Name Type Description Default
embedding list[float]

Embedding to look up documents similar to.

required
k int

Number of Documents to return. Defaults to 4.

4
filter dict[str, Any] | None

Filter on the metadata to apply.

None

Returns:

Type Description
list[tuple[Document, float, str]]

The list of (Document, score, id), the most similar to the query vector.

Raises:

Type Description
ValueError

If the vector store uses server-side embeddings.

similarity_search_with_embedding_by_vector

similarity_search_with_embedding_by_vector(
    embedding: list[float],
    k: int = 4,
    filter: dict[str, Any] | None = None,
) -> list[tuple[Document, list[float]]]

Return docs most similar to embedding vector with embedding.

Parameters:

Name Type Description Default
embedding list[float]

Embedding to look up documents similar to.

required
k int

Number of Documents to return. Defaults to 4.

4
filter dict[str, Any] | None

Filter on the metadata to apply.

None

Returns:

Type Description
list[tuple[Document, list[float]]]

The list of (Document, embedding) pairs most similar to the query vector.

asimilarity_search_with_embedding_by_vector async

asimilarity_search_with_embedding_by_vector(
    embedding: list[float],
    k: int = 4,
    filter: dict[str, Any] | None = None,
) -> list[tuple[Document, list[float]]]

Return docs most similar to embedding vector with embedding.

Parameters:

Name Type Description Default
embedding list[float]

Embedding to look up documents similar to.

required
k int

Number of Documents to return. Defaults to 4.

4
filter dict[str, Any] | None

Filter on the metadata to apply.

None

Returns:

Type Description
list[tuple[Document, list[float]]]

The list of (Document, embedding) pairs most similar to the query vector.

similarity_search_with_embedding

similarity_search_with_embedding(
    query: str,
    k: int = 4,
    filter: dict[str, Any] | None = None,
) -> tuple[list[float], list[tuple[Document, list[float]]]]

Return docs most similar to the query with embedding.

Also includes the query embedding vector.

Parameters:

Name Type Description Default
query str

Query to look up documents similar to.

required
k int

Number of Documents to return. Defaults to 4.

4
filter dict[str, Any] | None

Filter on the metadata to apply.

None

Returns:

Type Description
tuple[list[float], list[tuple[Document, list[float]]]]

A 2-tuple: the query embedding vector, and the list of (Document, embedding) pairs most similar to the query vector.

asimilarity_search_with_embedding async

asimilarity_search_with_embedding(
    query: str,
    k: int = 4,
    filter: dict[str, Any] | None = None,
) -> tuple[list[float], list[tuple[Document, list[float]]]]

Return docs most similar to the query with embedding.

Also includes the query embedding vector.

Parameters:

Name Type Description Default
query str

Query to look up documents similar to.

required
k int

Number of Documents to return. Defaults to 4.

4
filter dict[str, Any] | None

Filter on the metadata to apply.

None

Returns:

Type Description
tuple[list[float], list[tuple[Document, list[float]]]]

A 2-tuple: the query embedding vector, and the list of (Document, embedding) pairs most similar to the query vector.

max_marginal_relevance_search_by_vector

max_marginal_relevance_search_by_vector(
    embedding: list[float],
    k: int = 4,
    fetch_k: int = 20,
    lambda_mult: float = 0.5,
    filter: dict[str, Any] | None = None,
    **kwargs: Any
) -> list[Document]

Return docs selected using the maximal marginal relevance.

Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.

Parameters:

Name Type Description Default
embedding list[float]

Embedding to look up documents similar to.

required
k int

Number of Documents to return.

4
fetch_k int

Number of Documents to fetch to pass to MMR algorithm.

20
lambda_mult float

Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity.

0.5
filter dict[str, Any] | None

Filter on the metadata to apply.

None
**kwargs Any

Additional arguments are ignored.

{}

Returns:

Type Description
list[Document]

The list of Documents selected by maximal marginal relevance.

amax_marginal_relevance_search_by_vector async

amax_marginal_relevance_search_by_vector(
    embedding: list[float],
    k: int = 4,
    fetch_k: int = 20,
    lambda_mult: float = 0.5,
    filter: dict[str, Any] | None = None,
    **kwargs: Any
) -> list[Document]

Return docs selected using the maximal marginal relevance.

Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.

Parameters:

Name Type Description Default
embedding list[float]

Embedding to look up documents similar to.

required
k int

Number of Documents to return.

4
fetch_k int

Number of Documents to fetch to pass to MMR algorithm.

20
lambda_mult float

Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity.

0.5
filter dict[str, Any] | None

Filter on the metadata to apply.

None
**kwargs Any

Additional arguments are ignored.

{}

Returns:

Type Description
list[Document]

The list of Documents selected by maximal marginal relevance.

max_marginal_relevance_search

max_marginal_relevance_search(
    query: str,
    k: int = 4,
    fetch_k: int = 20,
    lambda_mult: float = 0.5,
    filter: dict[str, Any] | None = None,
    **kwargs: Any
) -> list[Document]

Return docs selected using the maximal marginal relevance.

Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.

Parameters:

Name Type Description Default
query str

Query to look up documents similar to.

required
k int

Number of Documents to return.

4
fetch_k int

Number of Documents to fetch to pass to MMR algorithm.

20
lambda_mult float

Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity.

0.5
filter dict[str, Any] | None

Filter on the metadata to apply.

None
**kwargs Any

Additional arguments are ignored.

{}

Returns:

Type Description
list[Document]

The list of Documents selected by maximal marginal relevance.
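
For illustration, a minimal sketch assuming an existing store instance named store and a placeholder query:

docs = store.max_marginal_relevance_search(
    "vector databases",
    k=4,              # documents returned
    fetch_k=20,       # candidates handed to the MMR algorithm
    lambda_mult=0.5,  # 0 = maximum diversity, 1 = minimum diversity
)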

amax_marginal_relevance_search async

amax_marginal_relevance_search(
    query: str,
    k: int = 4,
    fetch_k: int = 20,
    lambda_mult: float = 0.5,
    filter: dict[str, Any] | None = None,
    **kwargs: Any
) -> list[Document]

Return docs selected using the maximal marginal relevance.

Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.

Parameters:

Name Type Description Default
query str

Query to look up documents similar to.

required
k int

Number of Documents to return.

4
fetch_k int

Number of Documents to fetch to pass to MMR algorithm.

20
lambda_mult float

Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity.

0.5
filter dict[str, Any] | None

Filter on the metadata to apply.

None
**kwargs Any

Additional arguments are ignored.

{}

Returns:

Type Description
list[Document]

The list of Documents selected by maximal marginal relevance.

from_texts classmethod

from_texts(
    texts: Iterable[str],
    embedding: Embeddings | None = None,
    metadatas: Iterable[dict] | None = None,
    ids: Iterable[str | None] | None = None,
    **kwargs: Any
) -> AstraDBVectorStore

Create an Astra DB vectorstore from raw texts.

Parameters:

Name Type Description Default
texts Iterable[str]

the texts to insert.

required
embedding Embeddings | None

the embedding function to use in the store.

None
metadatas Iterable[dict] | None

metadata dicts for the texts.

None
ids Iterable[str | None] | None

ids to associate to the texts.

None
**kwargs Any

you can pass any argument that you would to :meth:~add_texts and/or to the AstraDBVectorStore constructor (see these methods for details). These arguments will be routed to the respective methods as they are.

{}

Returns:

Type Description
AstraDBVectorStore

an AstraDBVectorStore vectorstore.
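
For illustration, a minimal sketch where my_embeddings is a placeholder for your Embeddings instance; extra keyword arguments are routed as described above:

from langchain_astradb import AstraDBVectorStore

store = AstraDBVectorStore.from_texts(
    texts=["text one", "text two"],
    embedding=my_embeddings,
    metadatas=[{"tag": "a"}, {"tag": "b"}],
    collection_name="my_collection",  # routed to the constructor
    batch_size=50,                    # routed to add_texts
)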

afrom_texts async classmethod

afrom_texts(
    texts: Iterable[str],
    embedding: Embeddings | None = None,
    metadatas: Iterable[dict] | None = None,
    ids: Iterable[str | None] | None = None,
    **kwargs: Any
) -> AstraDBVectorStore

Create an Astra DB vectorstore from raw texts.

Parameters:

Name Type Description Default
texts Iterable[str]

the texts to insert.

required
embedding Embeddings | None

embedding function to use.

None
metadatas Iterable[dict] | None

metadata dicts for the texts.

None
ids Iterable[str | None] | None

ids to associate to the texts.

None
**kwargs Any

you can pass any argument that you would to :meth:~aadd_texts and/or to the AstraDBVectorStore constructor (see these methods for details). These arguments will be routed to the respective methods as they are.

{}

Returns:

Type Description
AstraDBVectorStore

an AstraDBVectorStore vectorstore.

from_documents classmethod

from_documents(
    documents: Iterable[Document],
    embedding: Embeddings | None = None,
    **kwargs: Any
) -> AstraDBVectorStore

Create an Astra DB vectorstore from a document list.

Utility method that defers to :meth:from_texts (see that one).

Parameters:

Name Type Description Default
documents Iterable[Document]

a list of Document objects for insertion in the store.

required
embedding Embeddings | None

the embedding function to use in the store.

None
**kwargs Any

you can pass any argument that you would to :meth:~add_texts and/or to the AstraDBVectorStore constructor (see these methods for details). These arguments will be routed to the respective methods as they are.

{}

Returns:

Type Description
AstraDBVectorStore

an AstraDBVectorStore vectorstore.

afrom_documents async classmethod

afrom_documents(
    documents: Iterable[Document],
    embedding: Embeddings | None = None,
    **kwargs: Any
) -> AstraDBVectorStore

Create an Astra DB vectorstore from a document list.

Utility method that defers to :meth:afrom_texts (see that one).

Returns:

Type Description
AstraDBVectorStore

an AstraDBVectorStore vectorstore.

AstraDBVectorStoreError

Bases: Exception

An exception during vector-store activities.

This exception represents any operational exception occurring while performing an action within an AstraDBVectorStore.