
langchain-astradb


Reference docs

This page contains reference documentation for AstraDB. See the docs for conceptual guides, tutorials, and examples on using AstraDB.

langchain_astradb

Astra DB integration for LangChain.

This module provides several LangChain components using Astra DB as the backend.

For an overview, consult the integration docs page.

Provided components:

  • AstraDBVectorStore, a vector store backed by Astra DB, with Vectorize support, hybrid search and more.
  • AstraDBStore and AstraDBByteStore, key-value storage components for generic values and binary blobs, respectively.
  • AstraDBCache, AstraDBSemanticCache, LLM response caches.
  • AstraDBChatMessageHistory, memory for use in chat interfaces.
  • AstraDBLoader, a loader for data stored in Astra DB collections.

AstraDBCache

Bases: BaseCache

METHOD DESCRIPTION
__init__

Cache using Astra DB as a backend, using a collection as a key-value store.

lookup

Look up based on prompt and llm_string.

alookup

Async look up based on prompt and llm_string.

update

Update cache based on prompt and llm_string.

aupdate

Async update cache based on prompt and llm_string.

delete_through_llm

A wrapper around delete that accepts the LLM instance instead of llm_string.

adelete_through_llm

A wrapper around adelete that accepts the LLM instance instead of llm_string.

delete

Evict from cache if there's an entry.

adelete

Evict from cache if there's an entry.

clear

Clear the cache; can take additional keyword arguments.

aclear

Async clear the cache; can take additional keyword arguments.

__init__

__init__(
    *,
    collection_name: str = ASTRA_DB_CACHE_DEFAULT_COLLECTION_NAME,
    token: str | TokenProvider | None = None,
    api_endpoint: str | None = None,
    namespace: str | None = None,
    environment: str | None = None,
    pre_delete_collection: bool = False,
    setup_mode: SetupMode = SYNC,
    ext_callers: list[tuple[str | None, str | None] | str | None] | None = None,
    api_options: APIOptions | None = None,
)

Cache using Astra DB as a backend, using a collection as a key-value store.

The lookup keys, combined into the _id of the documents, are:

  • prompt, a string
  • llm_string, a deterministic str representation of the model parameters (needed to prevent same-prompt-different-model collisions).
PARAMETER DESCRIPTION
collection_name

name of the Astra DB collection to create/use.

TYPE: str DEFAULT: ASTRA_DB_CACHE_DEFAULT_COLLECTION_NAME

token

API token for Astra DB usage, either in the form of a string or a subclass of astrapy.authentication.TokenProvider. If not provided, the environment variable ASTRA_DB_APPLICATION_TOKEN is inspected.

TYPE: str | TokenProvider | None DEFAULT: None

api_endpoint

full URL to the API endpoint, such as https://<DB-ID>-us-east1.apps.astra.datastax.com. If not provided, the environment variable ASTRA_DB_API_ENDPOINT is inspected.

TYPE: str | None DEFAULT: None

namespace

namespace (aka keyspace) where the collection is created. If not provided, the environment variable ASTRA_DB_KEYSPACE is inspected. Defaults to the database's "default namespace".

TYPE: str | None DEFAULT: None

environment

a string specifying the environment of the target Data API. If omitted, defaults to "prod" (Astra DB production). Other values are in astrapy.constants.Environment enum class.

TYPE: str | None DEFAULT: None

setup_mode

mode used to create the Astra DB collection (SYNC, ASYNC or OFF).

TYPE: SetupMode DEFAULT: SYNC

pre_delete_collection

whether to delete the collection before creating it. If False and the collection already exists, the collection will be used as is.

TYPE: bool DEFAULT: False

ext_callers

one or more caller identities to identify Data API calls in the User-Agent header. This is a list of (name, version) pairs, or just strings if no version info is provided, which, if supplied, becomes the leading part of the User-Agent string in all API requests related to this component.

TYPE: list[tuple[str | None, str | None] | str | None] | None DEFAULT: None

api_options

an instance of astrapy.utils.api_options.APIOptions that can be supplied to customize the interaction with the Data API regarding serialization/deserialization, timeouts, custom headers and so on. The provided options are applied on top of settings already tailored to this library, and if specified will take precedence. Passing None (default) means no customization is requested. Refer to the astrapy documentation for details.

TYPE: APIOptions | None DEFAULT: None
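
For illustration, here is a minimal usage sketch (not part of the upstream reference; the endpoint, token and collection name below are placeholders) that plugs this cache into LangChain's global LLM cache:

from langchain_astradb import AstraDBCache
from langchain_core.globals import set_llm_cache

# Placeholders: substitute your own database endpoint and application token.
cache = AstraDBCache(
    collection_name="langchain_llm_cache",  # hypothetical collection name
    api_endpoint="https://<DB-ID>-us-east1.apps.astra.datastax.com",
    token="AstraCS:...",
)
set_llm_cache(cache)
# From here on, identical (prompt, llm_string) invocations are served from Astra DB.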

lookup

lookup(prompt: str, llm_string: str) -> RETURN_VAL_TYPE | None

Look up based on prompt and llm_string.

A cache implementation is expected to generate a key from the 2-tuple of prompt and llm_string (e.g., by concatenating them with a delimiter).

PARAMETER DESCRIPTION
prompt

A string representation of the prompt. In the case of a chat model, the prompt is a non-trivial serialization of the prompt sent to the language model.

TYPE: str

llm_string

A string representation of the LLM configuration.

This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.).

These invocation parameters are serialized into a string representation.

TYPE: str

RETURNS DESCRIPTION
RETURN_VAL_TYPE | None

On a cache miss, return None. On a cache hit, return the cached value.

RETURN_VAL_TYPE | None

The cached value is a list of Generation (or subclasses).

alookup async

alookup(prompt: str, llm_string: str) -> RETURN_VAL_TYPE | None

Async look up based on prompt and llm_string.

A cache implementation is expected to generate a key from the 2-tuple of prompt and llm_string (e.g., by concatenating them with a delimiter).

PARAMETER DESCRIPTION
prompt

A string representation of the prompt. In the case of a chat model, the prompt is a non-trivial serialization of the prompt sent to the language model.

TYPE: str

llm_string

A string representation of the LLM configuration.

This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.).

These invocation parameters are serialized into a string representation.

TYPE: str

RETURNS DESCRIPTION
RETURN_VAL_TYPE | None

On a cache miss, return None. On a cache hit, return the cached value.

RETURN_VAL_TYPE | None

The cached value is a list of Generation (or subclasses).

update

update(prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None

Update cache based on prompt and llm_string.

The prompt and llm_string are used to generate a key for the cache. The key should match that of the lookup method.

PARAMETER DESCRIPTION
prompt

A string representation of the prompt. In the case of a chat model, the prompt is a non-trivial serialization of the prompt sent to the language model.

TYPE: str

llm_string

A string representation of the LLM configuration.

This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.).

These invocation parameters are serialized into a string representation.

TYPE: str

return_val

The value to be cached. The value is a list of Generation (or subclasses).

TYPE: RETURN_VAL_TYPE

aupdate async

aupdate(prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None

Async update cache based on prompt and llm_string.

The prompt and llm_string are used to generate a key for the cache. The key should match that of the look up method.

PARAMETER DESCRIPTION
prompt

A string representation of the prompt. In the case of a chat model, the prompt is a non-trivial serialization of the prompt sent to the language model.

TYPE: str

llm_string

A string representation of the LLM configuration.

This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.).

These invocation parameters are serialized into a string representation.

TYPE: str

return_val

The value to be cached. The value is a list of Generation (or subclasses).

TYPE: RETURN_VAL_TYPE

delete_through_llm

delete_through_llm(prompt: str, llm: LLM, stop: list[str] | None = None) -> None

A wrapper around delete that accepts the LLM instance instead of llm_string.

If the llm(prompt) calls use a stop parameter, pass it here as well.

adelete_through_llm async

adelete_through_llm(prompt: str, llm: LLM, stop: list[str] | None = None) -> None

A wrapper around adelete that accepts the LLM instance instead of llm_string.

If the llm(prompt) calls use a stop parameter, pass it here as well.

delete

delete(prompt: str, llm_string: str) -> None

Evict from cache if there's an entry.

adelete async

adelete(prompt: str, llm_string: str) -> None

Evict from cache if there's an entry.

clear

clear(**kwargs: Any) -> None

Clear the cache; can take additional keyword arguments.

aclear async

aclear(**kwargs: Any) -> None

Async clear the cache; can take additional keyword arguments.

AstraDBSemanticCache

Bases: BaseCache

METHOD DESCRIPTION
__init__

Astra DB semantic cache.

update

Update cache based on prompt and llm_string.

aupdate

Async update cache based on prompt and llm_string.

lookup

Look up based on prompt and llm_string.

alookup

Async look up based on prompt and llm_string.

lookup_with_id

Look up based on prompt and llm_string.

alookup_with_id

Look up based on prompt and llm_string.

lookup_with_id_through_llm

Look up based on prompt and LLM.

alookup_with_id_through_llm

Look up based on prompt and LLM.

delete_by_document_id

Delete by document ID.

adelete_by_document_id

Delete by document ID.

clear

Clear the cache; can take additional keyword arguments.

aclear

Async clear the cache; can take additional keyword arguments.

__init__

__init__(
    *,
    collection_name: str = ASTRA_DB_SEMANTIC_CACHE_DEFAULT_COLLECTION_NAME,
    token: str | TokenProvider | None = None,
    api_endpoint: str | None = None,
    namespace: str | None = None,
    environment: str | None = None,
    setup_mode: SetupMode = SYNC,
    pre_delete_collection: bool = False,
    embedding: Embeddings,
    metric: str | None = None,
    similarity_threshold: float = ASTRA_DB_SEMANTIC_CACHE_DEFAULT_THRESHOLD,
    ext_callers: list[tuple[str | None, str | None] | str | None] | None = None,
    api_options: APIOptions | None = None,
)

Astra DB semantic cache.

Cache that uses Astra DB as a vector-store backend for semantic (i.e. similarity-based) lookup.

It uses a single (vector) collection and can store cached values from several LLMs, so the LLM's 'llm_string' is stored in the document metadata.

You can choose the preferred similarity (or use the API default). The default score threshold is tuned to the default metric. Tune it carefully yourself if switching to another distance metric.

PARAMETER DESCRIPTION
collection_name

name of the Astra DB collection to create/use.

TYPE: str DEFAULT: ASTRA_DB_SEMANTIC_CACHE_DEFAULT_COLLECTION_NAME

token

API token for Astra DB usage, either in the form of a string or a subclass of astrapy.authentication.TokenProvider. If not provided, the environment variable ASTRA_DB_APPLICATION_TOKEN is inspected.

TYPE: str | TokenProvider | None DEFAULT: None

api_endpoint

full URL to the API endpoint, such as https://<DB-ID>-us-east1.apps.astra.datastax.com. If not provided, the environment variable ASTRA_DB_API_ENDPOINT is inspected.

TYPE: str | None DEFAULT: None

namespace

namespace (aka keyspace) where the collection is created. If not provided, the environment variable ASTRA_DB_KEYSPACE is inspected. Defaults to the database's "default namespace".

TYPE: str | None DEFAULT: None

environment

a string specifying the environment of the target Data API. If omitted, defaults to "prod" (Astra DB production). Other values are in astrapy.constants.Environment enum class.

TYPE: str | None DEFAULT: None

setup_mode

mode used to create the Astra DB collection (SYNC, ASYNC or OFF).

TYPE: SetupMode DEFAULT: SYNC

pre_delete_collection

whether to delete the collection before creating it. If False and the collection already exists, the collection will be used as is.

TYPE: bool DEFAULT: False

embedding

Embedding provider for semantic encoding and search.

TYPE: Embeddings

metric

the function to use for evaluating similarity of text embeddings. Defaults to 'cosine' (alternatives: 'euclidean', 'dot_product')

TYPE: str | None DEFAULT: None

similarity_threshold

the minimum similarity for accepting a (semantic-search) match.

TYPE: float DEFAULT: ASTRA_DB_SEMANTIC_CACHE_DEFAULT_THRESHOLD

ext_callers

one or more caller identities to identify Data API calls in the User-Agent header. This is a list of (name, version) pairs, or just strings if no version info is provided, which, if supplied, becomes the leading part of the User-Agent string in all API requests related to this component.

TYPE: list[tuple[str | None, str | None] | str | None] | None DEFAULT: None

api_options

an instance of astrapy.utils.api_options.APIOptions that can be supplied to customize the interaction with the Data API regarding serialization/deserialization, timeouts, custom headers and so on. The provided options are applied on top of settings already tailored to this library, and if specified will take precedence. Passing None (default) means no customization is requested. Refer to the astrapy documentation for details.

TYPE: APIOptions | None DEFAULT: None
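
For illustration, a minimal sketch (placeholder credentials; any Embeddings implementation works in place of OpenAIEmbeddings) that installs this semantic cache as the global LLM cache:

from langchain_astradb import AstraDBSemanticCache
from langchain_core.globals import set_llm_cache
from langchain_openai import OpenAIEmbeddings

# Placeholders: substitute your own database endpoint and application token.
semantic_cache = AstraDBSemanticCache(
    embedding=OpenAIEmbeddings(),
    api_endpoint="https://<DB-ID>-us-east1.apps.astra.datastax.com",
    token="AstraCS:...",
    # similarity_threshold=0.85,  # optional: tune this if changing the metric
)
set_llm_cache(semantic_cache)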

update

update(prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None

Update cache based on prompt and llm_string.

The prompt and llm_string are used to generate a key for the cache. The key should match that of the lookup method.

PARAMETER DESCRIPTION
prompt

A string representation of the prompt. In the case of a chat model, the prompt is a non-trivial serialization of the prompt sent to the language model.

TYPE: str

llm_string

A string representation of the LLM configuration.

This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.).

These invocation parameters are serialized into a string representation.

TYPE: str

return_val

The value to be cached. The value is a list of Generation (or subclasses).

TYPE: RETURN_VAL_TYPE

aupdate async

aupdate(prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None

Async update cache based on prompt and llm_string.

The prompt and llm_string are used to generate a key for the cache. The key should match that of the look up method.

PARAMETER DESCRIPTION
prompt

A string representation of the prompt. In the case of a chat model, the prompt is a non-trivial serialization of the prompt sent to the language model.

TYPE: str

llm_string

A string representation of the LLM configuration.

This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.).

These invocation parameters are serialized into a string representation.

TYPE: str

return_val

The value to be cached. The value is a list of Generation (or subclasses).

TYPE: RETURN_VAL_TYPE

lookup

lookup(prompt: str, llm_string: str) -> RETURN_VAL_TYPE | None

Look up based on prompt and llm_string.

A cache implementation is expected to generate a key from the 2-tuple of prompt and llm_string (e.g., by concatenating them with a delimiter).

PARAMETER DESCRIPTION
prompt

A string representation of the prompt. In the case of a chat model, the prompt is a non-trivial serialization of the prompt sent to the language model.

TYPE: str

llm_string

A string representation of the LLM configuration.

This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.).

These invocation parameters are serialized into a string representation.

TYPE: str

RETURNS DESCRIPTION
RETURN_VAL_TYPE | None

On a cache miss, return None. On a cache hit, return the cached value.

RETURN_VAL_TYPE | None

The cached value is a list of Generation (or subclasses).

alookup async

alookup(prompt: str, llm_string: str) -> RETURN_VAL_TYPE | None

Async look up based on prompt and llm_string.

A cache implementation is expected to generate a key from the 2-tuple of prompt and llm_string (e.g., by concatenating them with a delimiter).

PARAMETER DESCRIPTION
prompt

A string representation of the prompt. In the case of a chat model, the prompt is a non-trivial serialization of the prompt sent to the language model.

TYPE: str

llm_string

A string representation of the LLM configuration.

This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.).

These invocation parameters are serialized into a string representation.

TYPE: str

RETURNS DESCRIPTION
RETURN_VAL_TYPE | None

On a cache miss, return None. On a cache hit, return the cached value.

RETURN_VAL_TYPE | None

The cached value is a list of Generation (or subclasses).

lookup_with_id

lookup_with_id(prompt: str, llm_string: str) -> tuple[str, RETURN_VAL_TYPE] | None

Look up based on prompt and llm_string.

PARAMETER DESCRIPTION
prompt

the prompt string to look up

TYPE: str

llm_string

the str representation of the model parameters

TYPE: str

RETURNS DESCRIPTION
tuple[str, RETURN_VAL_TYPE] | None

If there are hits, the (document_id, cached_entry) pair for the top hit; otherwise None.

alookup_with_id async

alookup_with_id(prompt: str, llm_string: str) -> tuple[str, RETURN_VAL_TYPE] | None

Look up based on prompt and llm_string.

PARAMETER DESCRIPTION
prompt

the prompt string to look up

TYPE: str

llm_string

the str representation of the model parameters

TYPE: str

RETURNS DESCRIPTION
tuple[str, RETURN_VAL_TYPE] | None

If there are hits, the (document_id, cached_entry) pair for the top hit; otherwise None.

lookup_with_id_through_llm

lookup_with_id_through_llm(
    prompt: str, llm: LLM, stop: list[str] | None = None
) -> tuple[str, RETURN_VAL_TYPE] | None

Look up based on prompt and LLM.

PARAMETER DESCRIPTION
prompt

the prompt string to look up

TYPE: str

llm

the LLM instance whose parameters are used in the lookup

TYPE: LLM

stop

optional list of stop words passed to the LLM calls

TYPE: list[str] | None DEFAULT: None

RETURNS DESCRIPTION
tuple[str, RETURN_VAL_TYPE] | None

If there are hits, the (document_id, cached_entry) pair for the top hit; otherwise None.

alookup_with_id_through_llm async

alookup_with_id_through_llm(
    prompt: str, llm: LLM, stop: list[str] | None = None
) -> tuple[str, RETURN_VAL_TYPE] | None

Look up based on prompt and LLM.

PARAMETER DESCRIPTION
prompt

the prompt string to look up

TYPE: str

llm

the LLM instance whose parameters are used in the lookup

TYPE: LLM

stop

optional list of stop words passed to the LLM calls

TYPE: list[str] | None DEFAULT: None

RETURNS DESCRIPTION
tuple[str, RETURN_VAL_TYPE] | None

If there are hits, the (document_id, cached_entry) pair for the top hit; otherwise None.

delete_by_document_id

delete_by_document_id(document_id: str) -> None

Delete by document ID.

Since this is a "similarity search" cache, a sensible invalidation pattern is to first perform a lookup to obtain a document ID, and then delete the entry by that ID. This method performs the second step.
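
A minimal sketch of that two-step pattern, assuming the semantic_cache instance from the constructor example above and a placeholder llm_string:

# Hypothetical invalidation sketch: look up the top semantic hit, then evict it by ID.
llm_string = "<serialized model parameters>"  # placeholder; normally derived from the LLM
hit = semantic_cache.lookup_with_id("What is the capital of France?", llm_string)
if hit is not None:
    document_id, _cached_generations = hit
    semantic_cache.delete_by_document_id(document_id)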

adelete_by_document_id async

adelete_by_document_id(document_id: str) -> None

Delete by document ID.

Since this is a "similarity search" cache, a sensible invalidation pattern is to first perform a lookup to obtain a document ID, and then delete the entry by that ID. This method performs the second step.

clear

clear(**kwargs: Any) -> None

Clear the cache; can take additional keyword arguments.

aclear async

aclear(**kwargs: Any) -> None

Async clear the cache; can take additional keyword arguments.

AstraDBChatMessageHistory

Bases: BaseChatMessageHistory

METHOD DESCRIPTION
add_user_message

Convenience method for adding a human message string to the store.

add_ai_message

Convenience method for adding an AIMessage string to the store.

add_message

Add a Message object to the store.

__str__

Return a string representation of the chat history.

__init__

Chat message history that stores history in Astra DB.

aget_messages

Async version of getting messages.

add_messages

Add a list of messages.

aadd_messages

Async add a list of messages.

clear

Remove all messages from the store.

aclear

Async remove all messages from the store.

messages property writable

messages: list[BaseMessage]

Retrieve all session messages from DB.

add_user_message

add_user_message(message: HumanMessage | str) -> None

Convenience method for adding a human message string to the store.

Note

This is a convenience method. Code should favor the bulk add_messages interface instead to save on round-trips to the persistence layer.

This method may be deprecated in a future release.

PARAMETER DESCRIPTION
message

The HumanMessage to add to the store.

TYPE: HumanMessage | str

add_ai_message

add_ai_message(message: AIMessage | str) -> None

Convenience method for adding an AIMessage string to the store.

Note

This is a convenience method. Code should favor the bulk add_messages interface instead to save on round-trips to the persistence layer.

This method may be deprecated in a future release.

PARAMETER DESCRIPTION
message

The AIMessage to add.

TYPE: AIMessage | str

add_message

add_message(message: BaseMessage) -> None

Add a Message object to the store.

PARAMETER DESCRIPTION
message

A BaseMessage object to store.

TYPE: BaseMessage

RAISES DESCRIPTION
NotImplementedError

If the sub-class has not implemented an efficient add_messages method.

__str__

__str__() -> str

Return a string representation of the chat history.

__init__

__init__(
    *,
    session_id: str,
    collection_name: str = DEFAULT_COLLECTION_NAME,
    token: str | TokenProvider | None = None,
    api_endpoint: str | None = None,
    namespace: str | None = None,
    environment: str | None = None,
    setup_mode: SetupMode = SYNC,
    pre_delete_collection: bool = False,
    ext_callers: list[tuple[str | None, str | None] | str | None] | None = None,
    api_options: APIOptions | None = None,
) -> None

Chat message history that stores history in Astra DB.

PARAMETER DESCRIPTION
session_id

arbitrary key that is used to store the messages of a single chat session.

TYPE: str

collection_name

name of the Astra DB collection to create/use.

TYPE: str DEFAULT: DEFAULT_COLLECTION_NAME

token

API token for Astra DB usage, either in the form of a string or a subclass of astrapy.authentication.TokenProvider. If not provided, the environment variable ASTRA_DB_APPLICATION_TOKEN is inspected.

TYPE: str | TokenProvider | None DEFAULT: None

api_endpoint

full URL to the API endpoint, such as https://<DB-ID>-us-east1.apps.astra.datastax.com. If not provided, the environment variable ASTRA_DB_API_ENDPOINT is inspected.

TYPE: str | None DEFAULT: None

namespace

namespace (aka keyspace) where the collection is created. If not provided, the environment variable ASTRA_DB_KEYSPACE is inspected. Defaults to the database's "default namespace".

TYPE: str | None DEFAULT: None

environment

a string specifying the environment of the target Data API. If omitted, defaults to "prod" (Astra DB production). Other values are in astrapy.constants.Environment enum class.

TYPE: str | None DEFAULT: None

setup_mode

mode used to create the Astra DB collection (SYNC, ASYNC or OFF).

TYPE: SetupMode DEFAULT: SYNC

pre_delete_collection

whether to delete the collection before creating it. If False and the collection already exists, the collection will be used as is.

TYPE: bool DEFAULT: False

ext_callers

one or more caller identities to identify Data API calls in the User-Agent header. This is a list of (name, version) pairs, or just strings if no version info is provided, which, if supplied, becomes the leading part of the User-Agent string in all API requests related to this component.

TYPE: list[tuple[str | None, str | None] | str | None] | None DEFAULT: None

api_options

an instance of astrapy.utils.api_options.APIOptions that can be supplied to customize the interaction with the Data API regarding serialization/deserialization, timeouts, custom headers and so on. The provided options are applied on top of settings already tailored to this library, and if specified will take precedence. Passing None (default) means no customization is requested. Refer to the astrapy documentation for details.

TYPE: APIOptions | None DEFAULT: None
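
For illustration, a minimal usage sketch (placeholder credentials and session key):

from langchain_astradb import AstraDBChatMessageHistory
from langchain_core.messages import AIMessage, HumanMessage

# Placeholders: substitute your own database endpoint and application token.
history = AstraDBChatMessageHistory(
    session_id="user-42-session-1",  # hypothetical session key
    api_endpoint="https://<DB-ID>-us-east1.apps.astra.datastax.com",
    token="AstraCS:...",
)
history.add_messages(
    [
        HumanMessage(content="Hello!"),
        AIMessage(content="Hi! How can I help you today?"),
    ]
)
print(history.messages)  # every message stored under this session_id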

aget_messages async

aget_messages() -> list[BaseMessage]

Async version of getting messages.

Subclasses can override this method to provide an efficient async implementation.

In general, fetching messages may involve IO to the underlying persistence layer.

RETURNS DESCRIPTION
list[BaseMessage]

The messages.

add_messages

add_messages(messages: Sequence[BaseMessage]) -> None

Add a list of messages.

Implementations should override this method to handle bulk addition of messages in an efficient manner, avoiding unnecessary round-trips to the underlying store.

PARAMETER DESCRIPTION
messages

A sequence of BaseMessage objects to store.

TYPE: Sequence[BaseMessage]

aadd_messages async

aadd_messages(messages: Sequence[BaseMessage]) -> None

Async add a list of messages.

PARAMETER DESCRIPTION
messages

A sequence of BaseMessage objects to store.

TYPE: Sequence[BaseMessage]

clear

clear() -> None

Remove all messages from the store.

aclear async

aclear() -> None

Async remove all messages from the store.

AstraDBLoader

Bases: BaseLoader

METHOD DESCRIPTION
load

Load data into Document objects.

load_and_split

Load Documents and split into chunks. Chunks are returned as Documents.

__init__

Load DataStax Astra DB documents.

lazy_load

A lazy loader for Document.

aload

Load data into Document objects.

alazy_load

A lazy loader for Document.

load

load() -> list[Document]

Load data into Document objects.

RETURNS DESCRIPTION
list[Document]

The documents.

load_and_split

load_and_split(text_splitter: TextSplitter | None = None) -> list[Document]

Load Documents and split into chunks. Chunks are returned as Documents.

Danger

Do not override this method. It should be considered deprecated.

PARAMETER DESCRIPTION
text_splitter

TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.

TYPE: TextSplitter | None DEFAULT: None

RAISES DESCRIPTION
ImportError

If langchain-text-splitters is not installed and no text_splitter is provided.

RETURNS DESCRIPTION
list[Document]

List of Document.

__init__

__init__(
    collection_name: str,
    *,
    token: str | TokenProvider | None = None,
    api_endpoint: str | None = None,
    environment: str | None = None,
    namespace: str | None = None,
    filter_criteria: dict[str, Any] | None = None,
    projection: dict[str, Any] | None = _NOT_SET,
    limit: int | None = None,
    nb_prefetched: int = _NOT_SET,
    page_content_mapper: Callable[[dict], str] = dumps,
    metadata_mapper: Callable[[dict], dict[str, Any]] | None = None,
    ext_callers: list[tuple[str | None, str | None] | str | None] | None = None,
    api_options: APIOptions | None = None,
) -> None

Load DataStax Astra DB documents.

PARAMETER DESCRIPTION
collection_name

name of the Astra DB collection to use.

TYPE: str

token

API token for Astra DB usage, either in the form of a string or a subclass of astrapy.authentication.TokenProvider. If not provided, the environment variable ASTRA_DB_APPLICATION_TOKEN is inspected.

TYPE: str | TokenProvider | None DEFAULT: None

api_endpoint

full URL to the API endpoint, such as https://<DB-ID>-us-east1.apps.astra.datastax.com. If not provided, the environment variable ASTRA_DB_API_ENDPOINT is inspected.

TYPE: str | None DEFAULT: None

environment

a string specifying the environment of the target Data API. If omitted, defaults to "prod" (Astra DB production). Other values are in astrapy.constants.Environment enum class.

TYPE: str | None DEFAULT: None

namespace

namespace (aka keyspace) where the collection resides. If not provided, the environment variable ASTRA_DB_KEYSPACE is inspected. Defaults to the database's "default namespace".

TYPE: str | None DEFAULT: None

filter_criteria

Criteria to filter documents.

TYPE: dict[str, Any] | None DEFAULT: None

projection

Specifies the fields to return. If not provided, reads fall back to the Data API default projection.

TYPE: dict[str, Any] | None DEFAULT: _NOT_SET

limit

a maximum number of documents to return in the read query.

TYPE: int | None DEFAULT: None

nb_prefetched

Max number of documents to pre-fetch. IGNORED starting from v. 0.3.5: astrapy v1.0+ does not support it.

TYPE: int DEFAULT: _NOT_SET

page_content_mapper

Function applied to collection documents to create the page_content of the LangChain Document. Defaults to json.dumps.

TYPE: Callable[[dict], str] DEFAULT: dumps

metadata_mapper

Function applied to collection documents to create the metadata of the LangChain Document. Defaults to returning the namespace, API endpoint and collection name.

TYPE: Callable[[dict], dict[str, Any]] | None DEFAULT: None

ext_callers

one or more caller identities to identify Data API calls in the User-Agent header. This is a list of (name, version) pairs, or just strings if no version info is provided, which, if supplied, becomes the leading part of the User-Agent string in all API requests related to this component.

TYPE: list[tuple[str | None, str | None] | str | None] | None DEFAULT: None

api_options

an instance of astrapy.utils.api_options.APIOptions that can be supplied to customize the interaction with the Data API regarding serialization/deserialization, timeouts, custom headers and so on. The provided options are applied on top of settings already tailored to this library, and if specified will take precedence. Passing None (default) means no customization is requested. Refer to the astrapy documentation for details.

TYPE: APIOptions | None DEFAULT: None
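
For illustration, a minimal usage sketch (placeholder credentials; the collection name, filter and projection fields are hypothetical):

from langchain_astradb import AstraDBLoader

loader = AstraDBLoader(
    "my_collection",
    api_endpoint="https://<DB-ID>-us-east1.apps.astra.datastax.com",
    token="AstraCS:...",
    filter_criteria={"rating": {"$gte": 4}},      # Data API filter syntax
    projection={"title": True, "review": True},   # restrict the returned fields
    limit=100,
)
docs = loader.load()  # materialize everything at once
# ...or stream lazily to keep memory usage low:
for doc in loader.lazy_load():
    print(doc.page_content[:80])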

lazy_load

lazy_load() -> Iterator[Document]

A lazy loader for Document.

YIELDS DESCRIPTION
Document

The Document objects.

aload async

aload() -> list[Document]

Load data into Document objects.

RETURNS DESCRIPTION
list[Document]

The documents.

alazy_load async

alazy_load() -> AsyncIterator[Document]

A lazy loader for Document.

YIELDS DESCRIPTION
AsyncIterator[Document]

The Document objects.

AstraDBByteStore

Bases: AstraDBBaseStore[bytes], ByteStore

METHOD DESCRIPTION
mget

Get the values associated with the given keys.

amget

Async get the values associated with the given keys.

mset

Set the values for the given keys.

amset

Async set the values for the given keys.

mdelete

Delete the given keys and their associated values.

amdelete

Async delete the given keys and their associated values.

yield_keys

Get an iterator over keys that match the given prefix.

ayield_keys

Async get an iterator over keys that match the given prefix.

__init__

ByteStore implementation using DataStax AstraDB as the underlying store.

decode_value

Decodes value from Astra DB.

encode_value

Encodes value for Astra DB.

mget

mget(keys: Sequence[str]) -> list[V | None]

Get the values associated with the given keys.

PARAMETER DESCRIPTION
keys

A sequence of keys.

TYPE: Sequence[K]

RETURNS DESCRIPTION
list[V | None]

A sequence of optional values associated with the keys.

list[V | None]

If a key is not found, the corresponding value will be None.

amget async

amget(keys: Sequence[str]) -> list[V | None]

Async get the values associated with the given keys.

PARAMETER DESCRIPTION
keys

A sequence of keys.

TYPE: Sequence[K]

RETURNS DESCRIPTION
list[V | None]

A sequence of optional values associated with the keys.

list[V | None]

If a key is not found, the corresponding value will be None.

mset

mset(key_value_pairs: Sequence[tuple[str, V]]) -> None

Set the values for the given keys.

PARAMETER DESCRIPTION
key_value_pairs

A sequence of key-value pairs.

TYPE: Sequence[tuple[K, V]]

amset async

amset(key_value_pairs: Sequence[tuple[str, V]]) -> None

Async set the values for the given keys.

PARAMETER DESCRIPTION
key_value_pairs

A sequence of key-value pairs.

TYPE: Sequence[tuple[K, V]]

mdelete

mdelete(keys: Sequence[str]) -> None

Delete the given keys and their associated values.

PARAMETER DESCRIPTION
keys

A sequence of keys to delete.

TYPE: Sequence[K]

amdelete async

amdelete(keys: Sequence[str]) -> None

Async delete the given keys and their associated values.

PARAMETER DESCRIPTION
keys

A sequence of keys to delete.

TYPE: Sequence[K]

yield_keys

yield_keys(*, prefix: str | None = None) -> Iterator[str]

Get an iterator over keys that match the given prefix.

PARAMETER DESCRIPTION
prefix

The prefix to match.

TYPE: str | None DEFAULT: None

YIELDS DESCRIPTION
Iterator[K] | Iterator[str]

An iterator over keys that match the given prefix.

Iterator[K] | Iterator[str]

This method is allowed to return an iterator over either K or str

Iterator[K] | Iterator[str]

depending on what makes more sense for the given store.

ayield_keys async

ayield_keys(*, prefix: str | None = None) -> AsyncIterator[str]

Async get an iterator over keys that match the given prefix.

PARAMETER DESCRIPTION
prefix

The prefix to match.

TYPE: str | None DEFAULT: None

YIELDS DESCRIPTION
AsyncIterator[K] | AsyncIterator[str]

The keys that match the given prefix.

AsyncIterator[K] | AsyncIterator[str]

This method is allowed to return an iterator over either K or str

AsyncIterator[K] | AsyncIterator[str]

depending on what makes more sense for the given store.

__init__

__init__(
    *,
    collection_name: str,
    token: str | TokenProvider | None = None,
    api_endpoint: str | None = None,
    namespace: str | None = None,
    environment: str | None = None,
    pre_delete_collection: bool = False,
    setup_mode: SetupMode = SYNC,
    ext_callers: list[tuple[str | None, str | None] | str | None] | None = None,
    api_options: APIOptions | None = None,
) -> None

ByteStore implementation using DataStax AstraDB as the underlying store.

The bytes values are converted to base64 encoded strings.

Documents in the AstraDB collection will have the format

{
    "_id": "<key>",
    "value": "<byte64 string value>"
}
PARAMETER DESCRIPTION
collection_name

name of the Astra DB collection to create/use.

TYPE: str

token

API token for Astra DB usage, either in the form of a string or a subclass of astrapy.authentication.TokenProvider. If not provided, the environment variable ASTRA_DB_APPLICATION_TOKEN is inspected.

TYPE: str | TokenProvider | None DEFAULT: None

api_endpoint

full URL to the API endpoint, such as https://<DB-ID>-us-east1.apps.astra.datastax.com. If not provided, the environment variable ASTRA_DB_API_ENDPOINT is inspected.

TYPE: str | None DEFAULT: None

namespace

namespace (aka keyspace) where the collection is created. If not provided, the environment variable ASTRA_DB_KEYSPACE is inspected. Defaults to the database's "default namespace".

TYPE: str | None DEFAULT: None

environment

a string specifying the environment of the target Data API. If omitted, defaults to "prod" (Astra DB production). Other values are in astrapy.constants.Environment enum class.

TYPE: str | None DEFAULT: None

setup_mode

mode used to create the Astra DB collection (SYNC, ASYNC or OFF).

TYPE: SetupMode DEFAULT: SYNC

pre_delete_collection

whether to delete the collection before creating it. If False and the collection already exists, the collection will be used as is.

TYPE: bool DEFAULT: False

ext_callers

one or more caller identities to identify Data API calls in the User-Agent header. This is a list of (name, version) pairs, or just strings if no version info is provided, which, if supplied, becomes the leading part of the User-Agent string in all API requests related to this component.

TYPE: list[tuple[str | None, str | None] | str | None] | None DEFAULT: None

api_options

an instance of astrapy.utils.api_options.APIOptions that can be supplied to customize the interaction with the Data API regarding serialization/deserialization, timeouts, custom headers and so on. The provided options are applied on top of settings already tailored to this library, and if specified will take precedence. Passing None (default) means no customization is requested. Refer to the astrapy documentation for details.

TYPE: APIOptions | None DEFAULT: None
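
For illustration, a minimal usage sketch (placeholder credentials and collection name):

from langchain_astradb import AstraDBByteStore

byte_store = AstraDBByteStore(
    collection_name="langchain_byte_store",
    api_endpoint="https://<DB-ID>-us-east1.apps.astra.datastax.com",
    token="AstraCS:...",
)
byte_store.mset([("doc-1", b"\x00\x01binary payload"), ("doc-2", b"more bytes")])
print(byte_store.mget(["doc-1", "missing-key"]))  # [b'\x00\x01binary payload', None]
byte_store.mdelete(["doc-2"])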

decode_value

decode_value(value: Any) -> bytes | None

Decodes value from Astra DB.

encode_value

encode_value(value: bytes | None) -> Any

Encodes value for Astra DB.

AstraDBStore

Bases: AstraDBBaseStore[Any]

METHOD DESCRIPTION
mget

Get the values associated with the given keys.

amget

Async get the values associated with the given keys.

mset

Set the values for the given keys.

amset

Async set the values for the given keys.

mdelete

Delete the given keys and their associated values.

amdelete

Async delete the given keys and their associated values.

yield_keys

Get an iterator over keys that match the given prefix.

ayield_keys

Async get an iterator over keys that match the given prefix.

__init__

BaseStore implementation using DataStax AstraDB as the underlying store.

decode_value

Decodes value from Astra DB.

encode_value

Encodes value for Astra DB.

mget

mget(keys: Sequence[str]) -> list[V | None]

Get the values associated with the given keys.

PARAMETER DESCRIPTION
keys

A sequence of keys.

TYPE: Sequence[K]

RETURNS DESCRIPTION
list[V | None]

A sequence of optional values associated with the keys.

list[V | None]

If a key is not found, the corresponding value will be None.

amget async

amget(keys: Sequence[str]) -> list[V | None]

Async get the values associated with the given keys.

PARAMETER DESCRIPTION
keys

A sequence of keys.

TYPE: Sequence[K]

RETURNS DESCRIPTION
list[V | None]

A sequence of optional values associated with the keys.

list[V | None]

If a key is not found, the corresponding value will be None.

mset

mset(key_value_pairs: Sequence[tuple[str, V]]) -> None

Set the values for the given keys.

PARAMETER DESCRIPTION
key_value_pairs

A sequence of key-value pairs.

TYPE: Sequence[tuple[K, V]]

amset async

amset(key_value_pairs: Sequence[tuple[str, V]]) -> None

Async set the values for the given keys.

PARAMETER DESCRIPTION
key_value_pairs

A sequence of key-value pairs.

TYPE: Sequence[tuple[K, V]]

mdelete

mdelete(keys: Sequence[str]) -> None

Delete the given keys and their associated values.

PARAMETER DESCRIPTION
keys

A sequence of keys to delete.

TYPE: Sequence[K]

amdelete async

amdelete(keys: Sequence[str]) -> None

Async delete the given keys and their associated values.

PARAMETER DESCRIPTION
keys

A sequence of keys to delete.

TYPE: Sequence[K]

yield_keys

yield_keys(*, prefix: str | None = None) -> Iterator[str]

Get an iterator over keys that match the given prefix.

PARAMETER DESCRIPTION
prefix

The prefix to match.

TYPE: str | None DEFAULT: None

YIELDS DESCRIPTION
Iterator[K] | Iterator[str]

An iterator over keys that match the given prefix.

Iterator[K] | Iterator[str]

This method is allowed to return an iterator over either K or str

Iterator[K] | Iterator[str]

depending on what makes more sense for the given store.

ayield_keys async

ayield_keys(*, prefix: str | None = None) -> AsyncIterator[str]

Async get an iterator over keys that match the given prefix.

PARAMETER DESCRIPTION
prefix

The prefix to match.

TYPE: str | None DEFAULT: None

YIELDS DESCRIPTION
AsyncIterator[K] | AsyncIterator[str]

The keys that match the given prefix.

AsyncIterator[K] | AsyncIterator[str]

This method is allowed to return an iterator over either K or str

AsyncIterator[K] | AsyncIterator[str]

depending on what makes more sense for the given store.

__init__

__init__(
    collection_name: str,
    *,
    token: str | TokenProvider | None = None,
    api_endpoint: str | None = None,
    namespace: str | None = None,
    environment: str | None = None,
    pre_delete_collection: bool = False,
    setup_mode: SetupMode = SYNC,
    ext_callers: list[tuple[str | None, str | None] | str | None] | None = None,
    api_options: APIOptions | None = None,
) -> None

BaseStore implementation using DataStax AstraDB as the underlying store.

The value type can be any type serializable by json.dumps. It can be used, for instance, to store embeddings with CacheBackedEmbeddings.

Documents in the AstraDB collection will have the format

{
    "_id": "<key>",
    "value": <value>
}
PARAMETER DESCRIPTION
collection_name

name of the Astra DB collection to create/use.

TYPE: str

token

API token for Astra DB usage, either in the form of a string or a subclass of astrapy.authentication.TokenProvider. If not provided, the environment variable ASTRA_DB_APPLICATION_TOKEN is inspected.

TYPE: str | TokenProvider | None DEFAULT: None

api_endpoint

full URL to the API endpoint, such as https://<DB-ID>-us-east1.apps.astra.datastax.com. If not provided, the environment variable ASTRA_DB_API_ENDPOINT is inspected.

TYPE: str | None DEFAULT: None

namespace

namespace (aka keyspace) where the collection is created. If not provided, the environment variable ASTRA_DB_KEYSPACE is inspected. Defaults to the database's "default namespace".

TYPE: str | None DEFAULT: None

environment

a string specifying the environment of the target Data API. If omitted, defaults to "prod" (Astra DB production). Other values are in astrapy.constants.Environment enum class.

TYPE: str | None DEFAULT: None

setup_mode

mode used to create the Astra DB collection (SYNC, ASYNC or OFF).

TYPE: SetupMode DEFAULT: SYNC

pre_delete_collection

whether to delete the collection before creating it. If False and the collection already exists, the collection will be used as is.

TYPE: bool DEFAULT: False

ext_callers

one or more caller identities to identify Data API calls in the User-Agent header. This is a list of (name, version) pairs, or just strings if no version info is provided, which, if supplied, becomes the leading part of the User-Agent string in all API requests related to this component.

TYPE: list[tuple[str | None, str | None] | str | None] | None DEFAULT: None

api_options

an instance of astrapy.utils.api_options.APIOptions that can be supplied to customize the interaction with the Data API regarding serialization/deserialization, timeouts, custom headers and so on. The provided options are applied on top of settings already tailored to this library, and if specified will take precedence. Passing None (default) means no customization is requested. Refer to the astrapy documentation for details.

TYPE: APIOptions | None DEFAULT: None
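
For illustration, a minimal usage sketch (placeholder credentials and collection name); values only need to be serializable by json.dumps:

from langchain_astradb import AstraDBStore

store = AstraDBStore(
    "langchain_kv_store",
    api_endpoint="https://<DB-ID>-us-east1.apps.astra.datastax.com",
    token="AstraCS:...",
)
store.mset([("config:alpha", {"k": 4, "filter": None}), ("counter", 42)])
print(store.mget(["config:alpha", "missing-key"]))  # [{'k': 4, 'filter': None}, None]
for key in store.yield_keys(prefix="config:"):
    print(key)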

decode_value

decode_value(value: Any) -> Any

Decodes value from Astra DB.

encode_value

encode_value(value: Any) -> Any

Encodes value for Astra DB.

AstraDBVectorStore

Bases: VectorStore

A vector store which uses DataStax Astra DB as backend.

Setup

Install the langchain-astradb package and head to the AstraDB website, create an account, create a new database and create an application token.

pip install -qU langchain-astradb
Instantiate

Get your API endpoint and application token from the dashboard of your database.

Create a vector store and provide a LangChain embedding object for working with it:

import getpass

from langchain_astradb import AstraDBVectorStore
from langchain_openai import OpenAIEmbeddings

ASTRA_DB_API_ENDPOINT = getpass.getpass("ASTRA_DB_API_ENDPOINT = ")
ASTRA_DB_APPLICATION_TOKEN = getpass.getpass("ASTRA_DB_APPLICATION_TOKEN = ")

vector_store = AstraDBVectorStore(
    collection_name="astra_vector_langchain",
    embedding=OpenAIEmbeddings(),
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
)

(Vectorize) Create a vector store where the embedding vector computation happens entirely on the server-side, using the vectorize feature:

import getpass
from astrapy.info import VectorServiceOptions

from langchain_astradb import AstraDBVectorStore

ASTRA_DB_API_ENDPOINT = getpass.getpass("ASTRA_DB_API_ENDPOINT = ")
ASTRA_DB_APPLICATION_TOKEN = getpass.getpass("ASTRA_DB_APPLICATION_TOKEN = ")

vector_store = AstraDBVectorStore(
    collection_name="astra_vectorize_langchain",
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
    collection_vector_service_options=VectorServiceOptions(
        provider="nvidia",
        model_name="NV-Embed-QA",
        # authentication=...,  # needed by some providers/models
    ),
)

(Hybrid) The underlying Astra DB typically supports hybrid search (i.e. lexical + vector ANN) to boost the results' accuracy. This is provisioned and used automatically when available. For manual control, use the collection_rerank and collection_lexical constructor parameters:

import getpass
from astrapy.info import (
    CollectionLexicalOptions,
    CollectionRerankOptions,
    RerankServiceOptions,
    VectorServiceOptions,
)

from langchain_astradb import AstraDBVectorStore

ASTRA_DB_API_ENDPOINT = getpass.getpass("ASTRA_DB_API_ENDPOINT = ")
ASTRA_DB_APPLICATION_TOKEN = getpass.getpass("ASTRA_DB_APPLICATION_TOKEN = ")

vector_store = AstraDBVectorStore(
    collection_name="astra_vectorize_langchain",
    # embedding=...,  # needed unless using 'vectorize'
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
    collection_vector_service_options=VectorServiceOptions(...),  # see above
    collection_lexical=CollectionLexicalOptions(analyzer="standard"),
    collection_rerank=CollectionRerankOptions(
        service=RerankServiceOptions(
            provider="nvidia",
            model_name="nvidia/llama-3.2-nv-rerankqa-1b-v2",
        ),
    ),
    collection_reranking_api_key=...,  # if needed by the model/setup
)

Hybrid-related server upgrades may introduce a mismatch between the store defaults and a pre-existing collection. If such a mismatch is reported (as a Data API "EXISTING_COLLECTION_DIFFERENT_SETTINGS" error), it can be resolved by: (1) using autodetect mode, (2) switching setup_mode to "OFF", or (3) explicitly specifying lexical and/or rerank settings in the vector store constructor to match the existing collection configuration. See the integration docs for more details.

(Autodetect) Let the vector store figure out the configuration (including vectorize and document encoding scheme on DB), by inspection of an existing collection:

import getpass

from langchain_astradb import AstraDBVectorStore

ASTRA_DB_API_ENDPOINT = getpass.getpass("ASTRA_DB_API_ENDPOINT = ")
ASTRA_DB_APPLICATION_TOKEN = getpass.getpass("ASTRA_DB_APPLICATION_TOKEN = ")

vector_store = AstraDBVectorStore(
    collection_name="astra_existing_collection",
    # embedding=...,  # needed unless using 'vectorize'
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
    autodetect_collection=True,
)

(Non-Astra DB) This class can also target a non-Astra DB database, such as a self-deployed HCD, through the Data API:

import getpass

from astrapy.authentication import UsernamePasswordTokenProvider

from langchain_astradb import AstraDBVectorStore

vector_store = AstraDBVectorStore(
    collection_name="astra_existing_collection",
    # embedding=...,  # needed unless using 'vectorize'
    api_endpoint="http://localhost:8181",
    token=UsernamePasswordTokenProvider(
        username="user",
        password="pwd",
    ),
    collection_vector_service_options=...,  # if 'vectorize'
)
Add Documents

Add one or more documents to the vector store. IDs are optional: if provided and they match existing documents, those documents are overwritten.

from langchain_core.documents import Document

document_1 = Document(page_content="foo", metadata={"baz": "bar"})
document_2 = Document(page_content="thud", metadata={"bar": "baz"})
document_3 = Document(page_content="i will be deleted :(")

documents = [document_1, document_2, document_3]
ids = ["1", "2", "3"]
vector_store.add_documents(documents=documents, ids=ids)
Delete Documents

Delete one or more documents from the vector store by their IDs.

vector_store.delete(ids=["3"])
Search with filter

Specify metadata filters for a search. A simple key: value filter means equality (multiple conditions are implicitly combined with 'and'). More complex syntax is available, following the Data API specifications (see https://docs.datastax.com/en/astra-db-serverless/api-reference/filter-operator-collections.html).

results = vector_store.similarity_search(
    query="thud", k=1, filter={"bar": "baz"}
)
for doc in results:
    print(f"{doc.page_content}[{doc.metadata}]")
thud[{"bar": "baz"}]
Search with score

Search results are returned with their similarity score.

results = vector_store.similarity_search_with_score(query="qux", k=1)
for doc, score in results:
    print(f"* [SIM={score:3f}] {doc.page_content} [{doc.metadata}]")
[SIM=0.916135] foo [{'baz': 'bar'}]
Async

All methods come with their async counterpart (method name prefixed with a).

# add documents
await vector_store.aadd_documents(documents=documents, ids=ids)

# delete documents
await vector_store.adelete(ids=["3"])

# search
results = await vector_store.asimilarity_search(query="thud", k=1)

# search with score
results = await vector_store.asimilarity_search_with_score(query="qux", k=1)
for doc, score in results:
    print(f"* [SIM={score:3f}] {doc.page_content} [{doc.metadata}]")
[SIM=0.916135] foo [{'baz': 'bar'}]
Use as Retriever

A Retriever can be spawned from the vector store for further usage.

retriever = vector_store.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={"k": 1, "score_threshold": 0.5},
)
retriever.invoke("thud")
[Document(metadata={"bar": "baz"}, page_content="thud")]
METHOD DESCRIPTION
add_documents

Add or update documents in the VectorStore.

aadd_documents

Async run more documents through the embeddings and add to the VectorStore.

search

Return docs most similar to query using a specified search type.

asearch

Async return docs most similar to query using a specified search type.

similarity_search_with_relevance_scores

Return docs and relevance scores in the range [0, 1].

asimilarity_search_with_relevance_scores

Async return docs and relevance scores in the range [0, 1].

as_retriever

Return VectorStoreRetriever initialized from this VectorStore.

filter_to_query

Prepare a query for use on DB based on metadata filter.

__init__

A vector store which uses DataStax Astra DB as backend.

copy

Create a copy, possibly with changed attributes.

clear

Empty the collection of all its stored entries.

aclear

Empty the collection of all its stored entries.

delete_by_document_id

Remove a single document from the store, given its document ID.

adelete_by_document_id

Remove a single document from the store, given its document ID.

delete

Delete by vector ids.

adelete

Delete by vector ids.

delete_by_metadata_filter

Delete all documents matching a certain metadata filtering condition.

adelete_by_metadata_filter

Delete all documents matching a certain metadata filtering condition.

delete_collection

Completely delete the collection from the database.

adelete_collection

Completely delete the collection from the database.

add_texts

Run texts through the embeddings and add them to the vectorstore.

aadd_texts

Run texts through the embeddings and add them to the vectorstore.

update_metadata

Add/overwrite the metadata of existing documents.

aupdate_metadata

Add/overwrite the metadata of existing documents.

full_decode_astra_db_found_document

Decode an Astra DB document in full, i.e. into Document+embedding/similarity.

full_decode_astra_db_reranked_result

Full-decode an Astra DB find-and-rerank hit (Document+embedding/similarity).

run_query_raw

Execute a generic query on stored documents, returning Astra DB documents.

run_query

Execute a generic query on stored documents, returning Documents+other info.

arun_query_raw

Execute a generic query on stored documents, returning Astra DB documents.

arun_query

Execute a generic query on stored documents, returning Documents+other info.

metadata_search

Get documents via a metadata search.

ametadata_search

Get documents via a metadata search.

get_by_document_id

Retrieve a single document from the store, given its document ID.

aget_by_document_id

Retrieve a single document from the store, given its document ID.

get_by_ids

Get documents by their IDs.

get_by_document_ids

Get documents by their IDs.

aget_by_ids

Get documents by their IDs.

aget_by_document_ids

Get documents by their IDs.

similarity_search

Return docs most similar to query.

similarity_search_with_score

Return docs most similar to query with score.

similarity_search_with_score_id

Return docs most similar to the query with score and id.

similarity_search_by_vector

Return docs most similar to embedding vector.

similarity_search_with_score_by_vector

Return docs most similar to embedding vector with score.

similarity_search_with_score_id_by_vector

Return docs most similar to embedding vector with score and id.

asimilarity_search

Return docs most similar to query.

asimilarity_search_with_score

Return docs most similar to query with score.

asimilarity_search_with_score_id

Return docs most similar to the query with score and id.

asimilarity_search_by_vector

Return docs most similar to embedding vector.

asimilarity_search_with_score_by_vector

Return docs most similar to embedding vector with score.

asimilarity_search_with_score_id_by_vector

Return docs most similar to embedding vector with score and id.

similarity_search_with_embedding_by_vector

Return docs most similar to embedding vector with embedding.

asimilarity_search_with_embedding_by_vector

Return docs most similar to embedding vector with embedding.

similarity_search_with_embedding

Return docs most similar to the query with embedding.

asimilarity_search_with_embedding

Return docs most similar to the query with embedding.

max_marginal_relevance_search_by_vector

Return docs selected using the maximal marginal relevance.

amax_marginal_relevance_search_by_vector

Return docs selected using the maximal marginal relevance.

max_marginal_relevance_search

Return docs selected using the maximal marginal relevance.

amax_marginal_relevance_search

Return docs selected using the maximal marginal relevance.

from_texts

Create an Astra DB vectorstore from raw texts.

afrom_texts

Create an Astra DB vectorstore from raw texts.

from_documents

Create an Astra DB vectorstore from a document list.

afrom_documents

Create an Astra DB vectorstore from a document list.

embeddings property

embeddings: Embeddings | None

Accesses the supplied embeddings object.

If using server-side embeddings, this will return None.

add_documents

add_documents(documents: list[Document], **kwargs: Any) -> list[str]

Add or update documents in the VectorStore.

PARAMETER DESCRIPTION
documents

Documents to add to the VectorStore.

TYPE: list[Document]

**kwargs

Additional keyword arguments.

If kwargs contains IDs and the documents also carry ids, the IDs in kwargs take precedence.

TYPE: Any DEFAULT: {}

RETURNS DESCRIPTION
list[str]

List of IDs of the added texts.

aadd_documents async

aadd_documents(documents: list[Document], **kwargs: Any) -> list[str]

Async run more documents through the embeddings and add to the VectorStore.

PARAMETER DESCRIPTION
documents

Documents to add to the VectorStore.

TYPE: list[Document]

**kwargs

Additional keyword arguments.

TYPE: Any DEFAULT: {}

RETURNS DESCRIPTION
list[str]

List of IDs of the added texts.

search

search(query: str, search_type: str, **kwargs: Any) -> list[Document]

Return docs most similar to query using a specified search type.

PARAMETER DESCRIPTION
query

Input text.

TYPE: str

search_type

Type of search to perform. Can be 'similarity', 'mmr', or 'similarity_score_threshold'.

TYPE: str

**kwargs

Arguments to pass to the search method.

TYPE: Any DEFAULT: {}

RETURNS DESCRIPTION
list[Document]

List of Document objects most similar to the query.

RAISES DESCRIPTION
ValueError

If search_type is not one of 'similarity', 'mmr', or 'similarity_score_threshold'.

asearch async

asearch(query: str, search_type: str, **kwargs: Any) -> list[Document]

Async return docs most similar to query using a specified search type.

PARAMETER DESCRIPTION
query

Input text.

TYPE: str

search_type

Type of search to perform. Can be 'similarity', 'mmr', or 'similarity_score_threshold'.

TYPE: str

**kwargs

Arguments to pass to the search method.

TYPE: Any DEFAULT: {}

RETURNS DESCRIPTION
list[Document]

List of Document objects most similar to the query.

RAISES DESCRIPTION
ValueError

If search_type is not one of 'similarity', 'mmr', or 'similarity_score_threshold'.

similarity_search_with_relevance_scores

similarity_search_with_relevance_scores(
    query: str, k: int = 4, **kwargs: Any
) -> list[tuple[Document, float]]

Return docs and relevance scores in the range [0, 1].

0 is dissimilar, 1 is most similar.

PARAMETER DESCRIPTION
query

Input text.

TYPE: str

k

Number of Document objects to return.

TYPE: int DEFAULT: 4

**kwargs

kwargs to be passed to the similarity search. Should include score_threshold, an optional floating-point value between 0 and 1 used to filter the resulting set of retrieved docs.

TYPE: Any DEFAULT: {}

RETURNS DESCRIPTION
list[tuple[Document, float]]

List of tuples of (doc, similarity_score).
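
Example (illustrative sketch, assuming an already-initialized store): retrieving documents together with normalized scores and discarding weak matches via score_threshold.

# Scores are normalized to [0, 1]; matches below the threshold are dropped.
scored = store.similarity_search_with_relevance_scores(
    "serverless databases", k=4, score_threshold=0.8
)
for doc, score in scored:
    print(f"{score:.2f}  {doc.page_content}")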

asimilarity_search_with_relevance_scores async

asimilarity_search_with_relevance_scores(
    query: str, k: int = 4, **kwargs: Any
) -> list[tuple[Document, float]]

Async return docs and relevance scores in the range [0, 1].

0 is dissimilar, 1 is most similar.

PARAMETER DESCRIPTION
query

Input text.

TYPE: str

k

Number of Document objects to return.

TYPE: int DEFAULT: 4

**kwargs

kwargs to be passed to the similarity search. Should include score_threshold, an optional floating-point value between 0 and 1 used to filter the resulting set of retrieved docs.

TYPE: Any DEFAULT: {}

RETURNS DESCRIPTION
list[tuple[Document, float]]

List of tuples of (doc, similarity_score)

as_retriever

as_retriever(**kwargs: Any) -> VectorStoreRetriever

Return VectorStoreRetriever initialized from this VectorStore.

PARAMETER DESCRIPTION
**kwargs

Keyword arguments to pass to the search function. Can include:

  • search_type: Defines the type of search that the Retriever should perform. Can be 'similarity' (default), 'mmr', or 'similarity_score_threshold'.
  • search_kwargs: Keyword arguments to pass to the search function. Can include things like:

    • k: Amount of documents to return (Default: 4)
    • score_threshold: Minimum relevance threshold for similarity_score_threshold
    • fetch_k: Amount of documents to pass to MMR algorithm (Default: 20)
    • lambda_mult: Diversity of results returned by MMR; 1 for minimum diversity and 0 for maximum. (Default: 0.5)
    • filter: Filter by document metadata

TYPE: Any DEFAULT: {}

RETURNS DESCRIPTION
VectorStoreRetriever

Retriever class for VectorStore.

Examples:

# Retrieve more documents with higher diversity
# Useful if your dataset has many similar documents
docsearch.as_retriever(
    search_type="mmr", search_kwargs={"k": 6, "lambda_mult": 0.25}
)

# Fetch more documents for the MMR algorithm to consider
# But only return the top 5
docsearch.as_retriever(search_type="mmr", search_kwargs={"k": 5, "fetch_k": 50})

# Only retrieve documents that have a relevance score
# Above a certain threshold
docsearch.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={"score_threshold": 0.8},
)

# Only get the single most similar document from the dataset
docsearch.as_retriever(search_kwargs={"k": 1})

# Use a filter to only retrieve documents from a specific paper
docsearch.as_retriever(
    search_kwargs={"filter": {"paper_title": "GPT-4 Technical Report"}}
)

filter_to_query

filter_to_query(filter_dict: dict[str, Any] | None) -> dict[str, Any]

Prepare a query for use on DB based on metadata filter.

Encode an "abstract" filter clause on metadata into a query filter condition aware of the collection schema choice.

PARAMETER DESCRIPTION
filter_dict

a metadata condition in the form {"field": "value"} or related.

TYPE: dict[str, Any] | None

RETURNS DESCRIPTION
dict[str, Any]

the corresponding mapping ready for use in queries, aware of the details of the schema used to encode the document on DB.
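
Example (illustrative sketch): translating an abstract metadata filter into the schema-aware clause actually used in queries. The exact output depends on the document encoding scheme of the collection, so it is not reproduced here.

# For a store using the native "nested metadata" scheme, this would typically
# target a subfield of the metadata attribute of the stored documents.
query_filter = store.filter_to_query({"topic": "db"})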

__init__

__init__(
    *,
    collection_name: str,
    embedding: Embeddings | None = None,
    token: str | TokenProvider | None = None,
    api_endpoint: str | None = None,
    environment: str | None = None,
    namespace: str | None = None,
    metric: str | None = None,
    batch_size: int | None = None,
    bulk_insert_batch_concurrency: int | None = None,
    bulk_insert_overwrite_concurrency: int | None = None,
    bulk_delete_concurrency: int | None = None,
    setup_mode: SetupMode | None = None,
    pre_delete_collection: bool = False,
    metadata_indexing_include: Iterable[str] | None = None,
    metadata_indexing_exclude: Iterable[str] | None = None,
    collection_indexing_policy: dict[str, Any] | None = None,
    collection_vector_service_options: VectorServiceOptions | None = None,
    collection_embedding_api_key: str | EmbeddingHeadersProvider | None = None,
    content_field: str | None = None,
    ignore_invalid_documents: bool = False,
    autodetect_collection: bool = False,
    ext_callers: list[tuple[str | None, str | None] | str | None] | None = None,
    component_name: str = COMPONENT_NAME_VECTORSTORE,
    api_options: APIOptions | None = None,
    collection_rerank: CollectionRerankOptions | RerankServiceOptions | None = None,
    collection_reranking_api_key: str | RerankingHeadersProvider | None = None,
    collection_lexical: str | dict[str, Any] | CollectionLexicalOptions | None = None,
    hybrid_search: HybridSearchMode | None = None,
    hybrid_limit_factor: float
    | dict[str, float]
    | HybridLimitFactorPrescription
    | None = None,
) -> None

A vector store which uses DataStax Astra DB as backend.

For more on Astra DB, visit https://docs.datastax.com/en/astra-db-serverless/index.html

PARAMETER DESCRIPTION
embedding

the embeddings function or service to use. This enables client-side embedding functions or calls to external embedding providers. If embedding is passed, then collection_vector_service_options can not be provided.

TYPE: Embeddings | None DEFAULT: None

collection_name

name of the Astra DB collection to create/use.

TYPE: str

token

API token for Astra DB usage, either in the form of a string or a subclass of astrapy.authentication.TokenProvider. If not provided, the environment variable ASTRA_DB_APPLICATION_TOKEN is inspected.

TYPE: str | TokenProvider | None DEFAULT: None

api_endpoint

full URL to the API endpoint, such as https://<DB-ID>-us-east1.apps.astra.datastax.com. If not provided, the environment variable ASTRA_DB_API_ENDPOINT is inspected.

TYPE: str | None DEFAULT: None

environment

a string specifying the environment of the target Data API. If omitted, defaults to "prod" (Astra DB production). Other values are in astrapy.constants.Environment enum class.

TYPE: str | None DEFAULT: None

namespace

namespace (aka keyspace) where the collection is created. If not provided, the environment variable ASTRA_DB_KEYSPACE is inspected. Defaults to the database's "default namespace".

TYPE: str | None DEFAULT: None

metric

similarity function to use out of those available in Astra DB. If left out, it will use Astra DB API's defaults (i.e. "cosine" - but, for performance reasons, "dot_product" is suggested if embeddings are normalized to one).

TYPE: str | None DEFAULT: None

batch_size

Size of document chunks for each individual insertion API request. If not provided, astrapy defaults are applied.

TYPE: int | None DEFAULT: None

bulk_insert_batch_concurrency

Number of threads or coroutines to insert batches concurrently.

TYPE: int | None DEFAULT: None

bulk_insert_overwrite_concurrency

Number of threads or coroutines in a batch to insert pre-existing entries.

TYPE: int | None DEFAULT: None

bulk_delete_concurrency

Number of threads or coroutines for multiple-entry deletes.

TYPE: int | None DEFAULT: None

setup_mode

mode used to create the collection (SYNC, ASYNC or OFF).

TYPE: SetupMode | None DEFAULT: None

pre_delete_collection

whether to delete the collection before creating it. If False and the collection already exists, the collection will be used as is.

TYPE: bool DEFAULT: False

metadata_indexing_include

an allowlist of the specific metadata subfields that should be indexed for later filtering in searches.

TYPE: Iterable[str] | None DEFAULT: None

metadata_indexing_exclude

a denylist of the specific metadata subfields that should not be indexed for later filtering in searches.

TYPE: Iterable[str] | None DEFAULT: None

collection_indexing_policy

a full "indexing" specification for what fields should be indexed for later filtering in searches. This dict must conform to to the API specifications (see https://docs.datastax.com/en/astra-db-serverless/api-reference/collection-indexes.html)

TYPE: dict[str, Any] | None DEFAULT: None

collection_vector_service_options

specifies the use of server-side embeddings within Astra DB. If passing this parameter, embedding cannot be provided.

TYPE: VectorServiceOptions | None DEFAULT: None

collection_embedding_api_key

for usage of server-side embeddings within Astra DB. With this parameter one can supply an API Key that will be passed to Astra DB with each data request. This parameter can be either a string or a subclass of astrapy.authentication.EmbeddingHeadersProvider. This is useful when the service is configured for the collection, but no corresponding secret is stored within Astra's key management system.

TYPE: str | EmbeddingHeadersProvider | None DEFAULT: None

content_field

name of the field containing the textual content in the documents when saved on Astra DB. For vectorize collections, this cannot be specified; for non-vectorize collections, it defaults to "content". The special value "*" can be passed only if autodetect_collection=True. In this case, the actual name of the key for the textual content is guessed by inspection of a few documents from the collection, under the assumption that the longer strings are the most likely candidates. Please understand the limitations of this method and get some understanding of your data before passing "*" for this parameter.

TYPE: str | None DEFAULT: None

ignore_invalid_documents

if False (default), exceptions are raised when a document is found on the Astra DB collection that does not have the expected shape. If set to True, such results from the database are ignored and a warning is issued. Note that in this case a similarity search may end up returning fewer results than the required k.

TYPE: bool DEFAULT: False

autodetect_collection

if True, turns on autodetect behavior. The store will look for an existing collection of the provided name and infer the store settings from it. Default is False. In autodetect mode, content_field can be given as "*", meaning that an attempt will be made to determine it by inspection (unless vectorize is enabled, in which case content_field is ignored). In autodetect mode, the store not only determines whether embeddings are client- or server-side, but - most importantly - switches automatically between "nested" and "flat" representations of documents on DB (i.e. having the metadata key-value pairs grouped in a metadata field or spread at the documents' top-level). The former scheme is the native mode of the AstraDBVectorStore; the store resorts to the latter in case of vector collections populated with external means (such as a third-party data import tool) before applying an AstraDBVectorStore to them. Note that the following parameters cannot be used if this is True: metric, setup_mode, metadata_indexing_include, metadata_indexing_exclude, collection_indexing_policy, collection_vector_service_options.

TYPE: bool DEFAULT: False

ext_callers

one or more caller identities to identify Data API calls in the User-Agent header. This is a list of (name, version) pairs, or just strings if no version info is provided, which, if supplied, becomes the leading part of the User-Agent string in all API requests related to this component.

TYPE: list[tuple[str | None, str | None] | str | None] | None DEFAULT: None

component_name

the string identifying this specific component in the stack of usage info passed as the User-Agent string to the Data API. Defaults to "langchain_vectorstore", but can be overridden if this component actually serves as the building block for another component (such as when the vector store is used within a GraphRetriever).

TYPE: str DEFAULT: COMPONENT_NAME_VECTORSTORE

api_options

an instance of astrapy.utils.api_options.APIOptions that can be supplied to customize the interaction with the Data API regarding serialization/deserialization, timeouts, custom headers and so on. The provided options are applied on top of settings already tailored to this library, and if specified will take precedence. Passing None (default) means no customization is requested. Refer to the astrapy documentation for details.

TYPE: APIOptions | None DEFAULT: None

collection_rerank

providing reranking settings is necessary to run hybrid searches for similarity. This parameter can be an instance of the astrapy classes CollectionRerankOptions or RerankServiceOptions.

TYPE: CollectionRerankOptions | RerankServiceOptions | None DEFAULT: None

collection_reranking_api_key

for usage of server-side reranking services within Astra DB. With this parameter one can supply an API Key that will be passed to Astra DB with each data request. This parameter can be either a string or a subclass of astrapy.authentication.RerankingHeadersProvider. This is useful when the service is configured for the collection, but no corresponding secret is stored within Astra's key management system.

TYPE: str | RerankingHeadersProvider | None DEFAULT: None

collection_lexical

configuring a lexical analyzer is necessary to run lexical and hybrid searches. This parameter can be a string or dict, which is then passed as-is for the "analyzer" field of a createCollection's "$lexical.analyzer" value, or a ready-made astrapy CollectionLexicalOptions object.

TYPE: str | dict[str, Any] | CollectionLexicalOptions | None DEFAULT: None

hybrid_search

whether similarity searches should be run as Hybrid searches or not. Values are DEFAULT, ON or OFF. In case of DEFAULT, searches are performed as permitted by the collection configuration, with a preference for hybrid search. Forcing this setting to ON for a non-hybrid-enabled collection would result in a server error when running searches.

TYPE: HybridSearchMode | None DEFAULT: None

hybrid_limit_factor

subsearch "limit" specification for hybrid searches. If omitted, hybrid searches do not specify it and leave the Data API to use its defaults. If a floating-point positive number is provided: each subsearch participating in the hybrid search (i.e. both the vector-based ANN and the lexical-based) will be requested to fecth up to int(k*hybrid_limit_factor) items, where k is the desired result count from the whole search. If a HybridLimitFactorPrescription is provided (see the class docstring for details), separate factors are applied to the vector and the lexical subsearches. Alternatively, a simple dictionary with keys "$lexical" and "$vector" achieves the same effect.

TYPE: float | dict[str, float] | HybridLimitFactorPrescription | None DEFAULT: None

RAISES DESCRIPTION
ValueError

if the parameters are inconsistent or invalid.

Note

For concurrency in synchronous add_texts, as a rule of thumb, on a typical client machine it is suggested to keep the quantity bulk_insert_batch_concurrency * bulk_insert_overwrite_concurrency much below 1000 to avoid exhausting the client multithreading/networking resources. The hardcoded defaults are somewhat conservative to meet most machines' specs, but a sensible choice to test may be:

  • bulk_insert_batch_concurrency = 80
  • bulk_insert_overwrite_concurrency = 10

A bit of experimentation is required to nail the best results here, depending on both the machine/network specs and the expected workload (specifically, how often a write is an update of an existing id). Remember you can pass concurrency settings to individual calls to add_texts and add_documents as well.
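
Example (illustrative sketch, assuming langchain-openai is installed and the usual credentials are available): creating a store with client-side embeddings. All names shown are placeholders.

import os

from langchain_astradb import AstraDBVectorStore
from langchain_openai import OpenAIEmbeddings

store = AstraDBVectorStore(
    collection_name="my_collection",     # created if absent, per the setup_mode in use
    embedding=OpenAIEmbeddings(),        # client-side embeddings (no Vectorize)
    token=os.environ["ASTRA_DB_APPLICATION_TOKEN"],
    api_endpoint=os.environ["ASTRA_DB_API_ENDPOINT"],
    # namespace="my_keyspace",           # optional; defaults to the DB's default keyspace
)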

copy

copy(
    *,
    token: str | TokenProvider | None = None,
    ext_callers: list[tuple[str | None, str | None] | str | None] | None = None,
    component_name: str | None = None,
    collection_embedding_api_key: str | EmbeddingHeadersProvider | None = None,
    collection_reranking_api_key: str | RerankingHeadersProvider | None = None,
) -> AstraDBVectorStore

Create a copy, possibly with changed attributes.

This method creates a shallow copy of this environment. If a parameter is passed and differs from None, it will replace the corresponding value in the copy.

The method allows changing only the parameters that ensure the copy is functional and does not trigger side-effects: for example, one cannot create a copy acting on a new collection. In those cases, one should create a new instance of AstraDBVectorStore from scratch.

PARAMETER DESCRIPTION
token

API token for Astra DB usage, either in the form of a string or a subclass of astrapy.authentication.TokenProvider. In order to suppress token usage in the copy, explicitly pass astrapy.authentication.StaticTokenProvider(None).

TYPE: str | TokenProvider | None DEFAULT: None

ext_callers

additional custom (caller_name, caller_version) pairs to attach to the User-Agent header when issuing Data API requests.

TYPE: list[tuple[str | None, str | None] | str | None] | None DEFAULT: None

component_name

a value for the LangChain component name to use when identifying the originator of the Data API requests.

TYPE: str | None DEFAULT: None

collection_embedding_api_key

the API Key to supply in each Data API request if necessary. This is necessary if using the Vectorize feature and no secret is stored with the database. In order to suppress the API Key in the copy, explicitly pass astrapy.authentication.EmbeddingAPIKeyHeaderProvider(None).

TYPE: str | EmbeddingHeadersProvider | None DEFAULT: None

collection_reranking_api_key

for usage of server-side reranking services within Astra DB. With this parameter one can supply an API Key that will be passed to Astra DB with each data request. This parameter can be either a string or a subclass of astrapy.authentication.RerankingHeadersProvider. This is useful when the service is configured for the collection, but no corresponding secret is stored within Astra's key management system.

TYPE: str | RerankingHeadersProvider | None DEFAULT: None

RETURNS DESCRIPTION
AstraDBVectorStore

a shallow copy of this vector store, possibly with some changed attributes.
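
Example (illustrative sketch): producing a copy of an existing store that authenticates with a different token; ASTRA_DB_READONLY_TOKEN is a hypothetical environment variable.

import os

# Same collection and settings, different credentials for the copy.
readonly_store = store.copy(token=os.environ["ASTRA_DB_READONLY_TOKEN"])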

clear

clear() -> None

Empty the collection of all its stored entries.

aclear async

aclear() -> None

Empty the collection of all its stored entries.

delete_by_document_id

delete_by_document_id(document_id: str) -> bool

Remove a single document from the store, given its document ID.

PARAMETER DESCRIPTION
document_id

The document ID

TYPE: str

RETURNS DESCRIPTION
bool

True if a document has indeed been deleted, False if ID not found.

adelete_by_document_id async

adelete_by_document_id(document_id: str) -> bool

Remove a single document from the store, given its document ID.

PARAMETER DESCRIPTION
document_id

The document ID

TYPE: str

RETURNS DESCRIPTION
bool

True if a document has indeed been deleted, False if ID not found.

delete

delete(
    ids: Iterable[str] | None = None, concurrency: int | None = None, **kwargs: Any
) -> bool | None

Delete by vector ids.

PARAMETER DESCRIPTION
ids

List of ids to delete.

TYPE: Iterable[str] | None DEFAULT: None

concurrency

max number of threads issuing single-doc delete requests. Defaults to vector-store overall setting.

TYPE: int | None DEFAULT: None

**kwargs

Additional arguments are ignored.

TYPE: Any DEFAULT: {}

RETURNS DESCRIPTION
bool | None

True if deletion is (entirely) successful, False otherwise.

RAISES DESCRIPTION
ValueError

if no ids are provided.
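
Example (illustrative sketch): deleting two entries by id; the ids are placeholders.

# Returns True only if every requested deletion succeeds.
all_deleted = store.delete(ids=["doc-1", "doc-2"], concurrency=5)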

adelete async

adelete(
    ids: Iterable[str] | None = None, concurrency: int | None = None, **kwargs: Any
) -> bool | None

Delete by vector ids.

PARAMETER DESCRIPTION
ids

List of ids to delete.

TYPE: Iterable[str] | None DEFAULT: None

concurrency

max number of simultaneous coroutines for single-doc delete requests. Defaults to vector-store overall setting.

TYPE: int | None DEFAULT: None

**kwargs

Additional arguments are ignored.

TYPE: Any DEFAULT: {}

RETURNS DESCRIPTION
bool | None

True if deletion is (entirely) successful, False otherwise.

RAISES DESCRIPTION
ValueError

if no ids are provided.

delete_by_metadata_filter

delete_by_metadata_filter(filter: dict[str, Any]) -> int

Delete all documents matching a certain metadata filtering condition.

This operation does not use the vector embeddings in any way; it simply removes all documents whose metadata match the provided condition.

PARAMETER DESCRIPTION
filter

Filter on the metadata to apply. The filter cannot be empty.

TYPE: dict[str, Any]

RETURNS DESCRIPTION
int

The number of deleted documents.

RAISES DESCRIPTION
ValueError

if the provided filter is empty.
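
Example (illustrative sketch): removing every document whose metadata matches a (non-empty) condition.

# The return value counts the deleted documents.
n_deleted = store.delete_by_metadata_filter({"topic": "db"})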

adelete_by_metadata_filter async

adelete_by_metadata_filter(filter: dict[str, Any]) -> int

Delete all documents matching a certain metadata filtering condition.

This operation does not use the vector embeddings in any way; it simply removes all documents whose metadata match the provided condition.

PARAMETER DESCRIPTION
filter

Filter on the metadata to apply. The filter cannot be empty.

TYPE: dict[str, Any]

RETURNS DESCRIPTION
int

The number of deleted documents.

RAISES DESCRIPTION
ValueError

if the provided filter is empty.

delete_collection

delete_collection() -> None

Completely delete the collection from the database.

Completely delete the collection from the database (as opposed to clear, which empties it only). Stored data is lost and unrecoverable, resources are freed. Use with caution.

adelete_collection async

adelete_collection() -> None

Completely delete the collection from the database.

Completely delete the collection from the database (as opposed to aclear, which empties it only). Stored data is lost and unrecoverable, resources are freed. Use with caution.

add_texts

add_texts(
    texts: Iterable[str],
    metadatas: Iterable[dict] | None = None,
    ids: Iterable[str | None] | None = None,
    *,
    batch_size: int | None = None,
    batch_concurrency: int | None = None,
    overwrite_concurrency: int | None = None,
    **kwargs: Any,
) -> list[str]

Run texts through the embeddings and add them to the vectorstore.

If passing explicit ids, those entries whose id is in the store already will be replaced.

PARAMETER DESCRIPTION
texts

Texts to add to the vectorstore.

TYPE: Iterable[str]

metadatas

Optional list of metadatas.

TYPE: Iterable[dict] | None DEFAULT: None

ids

Optional list of ids.

TYPE: Iterable[str | None] | None DEFAULT: None

batch_size

Size of document chunks for each individual insertion API request. If not provided, falls back to the vector-store overall default (which in turn falls back to astrapy defaults).

TYPE: int | None DEFAULT: None

batch_concurrency

number of threads to process insertion batches concurrently. Defaults to the vector-store overall setting if not provided.

TYPE: int | None DEFAULT: None

overwrite_concurrency

number of threads to process pre-existing documents in each batch. Defaults to the vector-store overall setting if not provided.

TYPE: int | None DEFAULT: None

**kwargs

Additional arguments are ignored.

TYPE: Any DEFAULT: {}

Note

The allowed field names for the metadata document attributes must obey certain rules (such as: keys cannot start with a dollar sign and cannot be empty). See the document field naming rules.

RETURNS DESCRIPTION
list[str]

The list of ids of the added texts.

RAISES DESCRIPTION
AstraDBVectorStoreError

if not all documents could be inserted.
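
Example (illustrative sketch): inserting raw texts with metadata and explicit ids; re-using an existing id replaces that entry, and the concurrency override is optional.

inserted_ids = store.add_texts(
    texts=["Astra DB is a serverless database.", "LangChain orchestrates LLM apps."],
    metadatas=[{"topic": "db"}, {"topic": "framework"}],
    ids=["doc-1", "doc-2"],        # entries with existing ids are replaced
    batch_concurrency=10,          # per-call override of the store-level setting
)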

aadd_texts async

aadd_texts(
    texts: Iterable[str],
    metadatas: Iterable[dict] | None = None,
    ids: Iterable[str | None] | None = None,
    *,
    batch_size: int | None = None,
    batch_concurrency: int | None = None,
    overwrite_concurrency: int | None = None,
    **kwargs: Any,
) -> list[str]

Run texts through the embeddings and add them to the vectorstore.

If passing explicit ids, those entries whose id is in the store already will be replaced.

PARAMETER DESCRIPTION
texts

Texts to add to the vectorstore.

TYPE: Iterable[str]

metadatas

Optional list of metadatas.

TYPE: Iterable[dict] | None DEFAULT: None

ids

Optional list of ids.

TYPE: Iterable[str | None] | None DEFAULT: None

batch_size

Size of document chunks for each individual insertion API request. If not provided, falls back to the vector-store overall default (which in turn falls back to astrapy defaults).

TYPE: int | None DEFAULT: None

batch_concurrency

number of simultaneous coroutines to process insertion batches concurrently. Defaults to the vector-store overall setting if not provided.

TYPE: int | None DEFAULT: None

overwrite_concurrency

number of simultaneous coroutines to process pre-existing documents in each batch. Defaults to the vector-store overall setting if not provided.

TYPE: int | None DEFAULT: None

**kwargs

Additional arguments are ignored.

TYPE: Any DEFAULT: {}

Note

The allowed field names for the metadata document attributes must obey certain rules (such as: keys cannot start with a dollar sign and cannot be empty). See the document field naming rules.

RETURNS DESCRIPTION
list[str]

The list of ids of the added texts.

RAISES DESCRIPTION
AstraDBVectorStoreError

if not all documents could be inserted.

update_metadata

update_metadata(
    id_to_metadata: dict[str, dict], *, overwrite_concurrency: int | None = None
) -> int

Add/overwrite the metadata of existing documents.

For each document to update, the new metadata dictionary is appended to the existing metadata, overwriting individual keys that existed already.

PARAMETER DESCRIPTION
id_to_metadata

map from the Document IDs to modify to the new metadata for updating. Keys in this dictionary that do not correspond to an existing document will be silently ignored. The values of this map are metadata dictionaries for updating the documents. Any pre-existing metadata will be merged with these entries, which take precedence on a key-by-key basis.

TYPE: dict[str, dict]

overwrite_concurrency

number of threads to process the updates. Defaults to the vector-store overall setting if not provided.

TYPE: int | None DEFAULT: None

RETURNS DESCRIPTION
int

the number of documents successfully updated (i.e. found to exist, since even an update with {} as the new metadata counts as successful).
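
Example (illustrative sketch): merging new metadata into two existing documents, key by key.

n_updated = store.update_metadata(
    {
        "doc-1": {"reviewed": True},        # new key, merged into existing metadata
        "doc-2": {"topic": "frameworks"},   # overwrites the pre-existing 'topic' key
    }
)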

aupdate_metadata async

aupdate_metadata(
    id_to_metadata: dict[str, dict], *, overwrite_concurrency: int | None = None
) -> int

Add/overwrite the metadata of existing documents.

For each document to update, the new metadata dictionary is appended to the existing metadata, overwriting individual keys that existed already.

PARAMETER DESCRIPTION
id_to_metadata

map from the Document IDs to modify to the new metadata for updating. Keys in this dictionary that do not correspond to an existing document will be silently ignored. The values of this map are metadata dictionaries for updating the documents. Any pre-existing metadata will be merged with these entries, which take precedence on a key-by-key basis.

TYPE: dict[str, dict]

overwrite_concurrency

number of asynchronous tasks to process the updates. Defaults to the vector-store overall setting if not provided.

TYPE: int | None DEFAULT: None

RETURNS DESCRIPTION
int

the number of documents successfully updated (i.e. found to exist, since even an update with {} as the new metadata counts as successful).

full_decode_astra_db_found_document

full_decode_astra_db_found_document(
    astra_db_document: DocDict,
) -> AstraDBQueryResult | None

Decode an Astra DB document in full, i.e. into Document+embedding/similarity.

This operation returns a representation that is independent of the codec being used in the collection (whereas the input, a 'raw' Astra DB document, is codec-dependent).

The input raw document can carry information on embedding and similarity, depending on details of the query used to retrieve it. These are set to None in the result if not found.

The whole method can return a None, to signal that the codec has refused the conversion (e.g. because the input document is deemed faulty).

PARAMETER DESCRIPTION
astra_db_document

a dictionary obtained through run_query_raw from the collection.

TYPE: DocDict

RETURNS DESCRIPTION
AstraDBQueryResult | None

an AstraDBQueryResult named tuple with Document, id, embedding (where applicable) and similarity (where applicable), or an overall None if the decoding is refused by the codec.

full_decode_astra_db_reranked_result

full_decode_astra_db_reranked_result(
    astra_db_reranked_result: RerankedResult[DocDict],
) -> AstraDBQueryResult | None

Full-decode an Astra DB find-and-rerank hit (Document+embedding/similarity).

This operation returns a representation that is independent of the codec being used in the collection (whereas the 'document' part of the input, a 'raw' Astra DB response from a find-and-rerank hybrid search, is codec-dependent).

The input raw document is what the find_and_rerank Astrapy method returns, i.e. an iterable over RerankedResult objects. Missing entries (such as the embedding) are set to None in the result if not found.

The whole method can return a None, to signal that the codec has refused the conversion (e.g. because the input document is deemed faulty).

PARAMETER DESCRIPTION
astra_db_reranked_result

a RerankedResult obtained by a find_and_rerank method call on the collection.

TYPE: RerankedResult[DocDict]

RETURNS DESCRIPTION
AstraDBQueryResult | None

an AstraDBQueryResult named tuple with Document, id, embedding (where applicable) and similarity (where applicable), or an overall None if the decoding is refused by the codec.

run_query_raw

run_query_raw(
    *,
    n: int,
    ids: list[str] | None = None,
    filter: dict[str, Any] | None = None,
    sort: dict[str, Any] | None = None,
    include_similarity: bool | None = None,
    include_sort_vector: bool = False,
    include_embeddings: bool = False,
) -> tuple[list[float] | None, Iterable[DocDict]] | Iterable[DocDict]

Execute a generic query on stored documents, returning Astra DB documents.

The return value has a variable format, depending on whether the 'sort vector' is requested back from the server.

Only the n parameter is required. Omitting all other parameters results in a query that matches each and every document found on the collection.

The method does not expose a projection directly, which is instead automatically determined based on the invocation options.

The returned documents are exactly as they come back from Astra DB (taking into account the projection as well). A further step, namely subsequent invocation of the convert_astra_db_document method, is required to reconstruct codec-independent Document objects. The reason for keeping the retrieval and the decoding steps separate is that a caller may want to first deduplicate/discard items, in order to convert only the items actually needed.

PARAMETER DESCRIPTION
n

number of items to return. Fewer than n items may be returned if the collection does not have enough matches.

TYPE: int

ids

a list of document IDs to restrict the query to. If this is supplied, only documents with an ID among the provided ones will match. If further query filters are provided (i.e. metadata), matches must satisfy both requirements.

TYPE: list[str] | None DEFAULT: None

filter

a metadata filtering part. If provided, it must refer to metadata keys by their bare name (such as {"key": 123}). This filter can combine nested conditions with "$or"/"$and" connectors, for example:

  • {"tag": "a"}
  • {"$or": [{"tag": "a"}, {"label": "b"}]}
  • {"$and": [{"tag": {"$in": ["a", "z"]}}, {"label": "b"}]}

TYPE: dict[str, Any] | None DEFAULT: None

sort

a 'sort' clause for the query, such as {"$vector": [...]}, {"$vectorize": "..."} or {"mdkey": 1}. Metadata sort conditions must be expressed by their 'bare' name.

TYPE: dict[str, Any] | None DEFAULT: None

include_similarity

whether to return similarity scores with each match. Requires vector sort.

TYPE: bool | None DEFAULT: None

include_sort_vector

whether to return the very query vector used for the ANN search alongside the iterable of results. Requires vector sort. Note that the shape of the return value depends on this parameter.

TYPE: bool DEFAULT: False

include_embeddings

whether to retrieve the matches' own embedding vectors.

TYPE: bool DEFAULT: False

RETURNS DESCRIPTION
tuple[list[float] | None, Iterable[DocDict]] | Iterable[DocDict]

The shape of the return value depends on the value of include_sort_vector:

  • if include_sort_vector = False, the return value is an iterable over Astra DB documents (dictionaries);
  • if include_sort_vector = True, the return value is a 2-item tuple (sort_v, astra_db_ite), where:
    • sort_v is the sort vector, if requested, or None if not available.
    • astra_db_ite is an iterable over Astra DB documents (dictionaries).

run_query

run_query(
    *,
    n: int,
    ids: list[str] | None = None,
    filter: dict[str, Any] | None = None,
    sort: dict[str, Any] | None = None,
    include_similarity: bool | None = None,
    include_sort_vector: bool = False,
    include_embeddings: bool = False,
) -> (
    tuple[list[float] | None, Iterable[AstraDBQueryResult]]
    | Iterable[AstraDBQueryResult]
)

Execute a generic query on stored documents, returning Documents+other info.

The return value has a variable format, depending on whether the 'sort vector' is requested back from the server.

Only the n parameter is required. Omitting all other parameters results in a query that matches each and every document found on the collection.

The method does not expose a projection directly, which is instead automatically determined based on the invocation options.

The returned Document objects are codec-independent.

PARAMETER DESCRIPTION
n

number of items to return. Fewer than n items may be returned in the following cases: (a) the decoding skips some raw entries from the server; (b) the collection does not have enough matches.

TYPE: int

ids

a list of document IDs to restrict the query to. If this is supplied, only documents with an ID among the provided ones will match. If further query filters are provided (i.e. metadata), matches must satisfy both requirements.

TYPE: list[str] | None DEFAULT: None

filter

a metadata filtering part. If provided, it must refer to metadata keys by their bare name (such as {"key": 123}). This filter can combine nested conditions with "$or"/"$and" connectors, for example:

  • {"tag": "a"}
  • {"$or": [{"tag": "a"}, {"label": "b"}]}
  • {"$and": [{"tag": {"$in": ["a", "z"]}}, {"label": "b"}]}

TYPE: dict[str, Any] | None DEFAULT: None

sort

a 'sort' clause for the query, such as {"$vector": [...]}, {"$vectorize": "..."} or {"mdkey": 1}. Metadata sort conditions must be expressed by their 'bare' name.

TYPE: dict[str, Any] | None DEFAULT: None

include_similarity

whether to return similarity scores with each match. Requires vector sort.

TYPE: bool | None DEFAULT: None

include_sort_vector

whether to return the very query vector used for the ANN search alongside the iterable of results. Requires vector sort. Note that the shape of the return value depends on this parameter.

TYPE: bool DEFAULT: False

include_embeddings

whether to retrieve the matches' own embedding vectors.

TYPE: bool DEFAULT: False

RETURNS DESCRIPTION
tuple[list[float] | None, Iterable[AstraDBQueryResult]] | Iterable[AstraDBQueryResult]

The shape of the return value depends on the value of include_sort_vector:

  • if include_sort_vector = False, the return value is an iterable over the AstraDBQueryResult items returned by the query. Entries that fail the decoding step, if any, are discarded after the query, which may lead to fewer items being returned than the required n.
  • if include_sort_vector = True, the return value is a 2-item tuple (sort_v, results_ite), where:
    • sort_v is the sort vector, if requested, or None if not available.
    • results_ite is an iterable over AstraDBQueryResult items as above.
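
Example (illustrative sketch, assuming a vectorize-enabled collection so that a {"$vectorize": ...} sort is accepted): a metadata-filtered query that also asks for similarities and the sort vector; the AstraDBQueryResult fields referenced below (document, id, similarity) follow the description above.

sort_vector, results = store.run_query(
    n=5,
    filter={"topic": "db"},
    sort={"$vectorize": "serverless databases"},
    include_similarity=True,
    include_sort_vector=True,      # the return value becomes a 2-item tuple
)
for res in results:
    print(res.id, res.similarity, res.document.page_content)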

arun_query_raw async

arun_query_raw(
    *,
    n: int,
    ids: list[str] | None = None,
    filter: dict[str, Any] | None = None,
    sort: dict[str, Any] | None = None,
    include_similarity: bool | None = None,
    include_sort_vector: bool = False,
    include_embeddings: bool = False,
) -> tuple[list[float] | None, AsyncIterable[DocDict]] | AsyncIterable[DocDict]

Execute a generic query on stored documents, returning Astra DB documents.

The return value has a variable format, depending on whether the 'sort vector' is requested back from the server.

Only the n parameter is required. Omitting all other parameters results in a query that matches each and every document found on the collection.

The method does not expose a projection directly, which is instead automatically determined based on the invocation options.

The returned documents are exactly as they come back from Astra DB (taking into account the projection as well). A further step, namely subsequent invocation of the convert_astra_db_document method, is required to reconstruct codec-independent Document objects. The reason for keeping the retrieval and the decoding steps separate is that a caller may want to first deduplicate/discard items, in order to convert only the items actually needed.

PARAMETER DESCRIPTION
n

number of items to return. Fewer than n items may be returned in the following cases: (a) the decoding skips some raw entries from the server; (b) the collection does not have enough matches.

TYPE: int

ids

a list of document IDs to restrict the query to. If this is supplied, only documents with an ID among the provided ones will match. If further query filters are provided (i.e. metadata), matches must satisfy both requirements.

TYPE: list[str] | None DEFAULT: None

filter

a metadata filtering part. If provided, it must refer to metadata keys by their bare name (such as {"key": 123}). This filter can combine nested conditions with "$or"/"$and" connectors, for example:

  • {"tag": "a"}
  • {"$or": [{"tag": "a"}, {"label": "b"}]}
  • {"$and": [{"tag": {"$in": ["a", "z"]}}, {"label": "b"}]}

TYPE: dict[str, Any] | None DEFAULT: None

sort

a 'sort' clause for the query, such as {"$vector": [...]}, {"$vectorize": "..."} or {"mdkey": 1}. Metadata sort conditions must be expressed by their 'bare' name.

TYPE: dict[str, Any] | None DEFAULT: None

include_similarity

whether to return similarity scores with each match. Requires vector sort.

TYPE: bool | None DEFAULT: None

include_sort_vector

whether to return the very query vector used for the ANN search alongside the iterable of results. Requires vector sort. Note that the shape of the return value depends on this parameter.

TYPE: bool DEFAULT: False

include_embeddings

whether to retrieve the matches' own embedding vectors.

TYPE: bool DEFAULT: False

RETURNS DESCRIPTION
tuple[list[float] | None, AsyncIterable[DocDict]] | AsyncIterable[DocDict]

The shape of the return value depends on the value of include_sort_vector:

  • if include_sort_vector = False, the return value is an iterable over Astra DB documents (dictionaries);
  • if include_sort_vector = True, the return value is a 2-item tuple (sort_v, astra_db_ite), where:
    • sort_v is the sort vector, if requested, or None if not available.
    • astra_db_ite is an iterable over Astra DB documents (dictionaries).

arun_query async

arun_query(
    *,
    n: int,
    ids: list[str] | None = None,
    filter: dict[str, Any] | None = None,
    sort: dict[str, Any] | None = None,
    include_similarity: bool | None = None,
    include_sort_vector: bool = False,
    include_embeddings: bool = False,
) -> (
    tuple[list[float] | None, AsyncIterable[AstraDBQueryResult]]
    | AsyncIterable[AstraDBQueryResult]
)

Execute a generic query on stored documents, returning Documents+other info.

The return value has a variable format, depending on whether the 'sort vector' is requested back from the server.

Only the n parameter is required. Omitting all other parameters results in a query that matches each and every document found on the collection.

The method does not expose a projection directly, which is instead automatically determined based on the invocation options.

The returned Document objects are codec-independent.

PARAMETER DESCRIPTION
n

number of items to return. Fewer than n items may be returned if the collection does not have enough matches.

TYPE: int

ids

a list of document IDs to restrict the query to. If this is supplied, only documents with an ID among the provided ones will match. If further query filters are provided (i.e. metadata), matches must satisfy both requirements.

TYPE: list[str] | None DEFAULT: None

filter

a metadata filtering part. If provided, it must refer to metadata keys by their bare name (such as {"key": 123}). This filter can combine nested conditions with "$or"/"$and" connectors, for example:

  • {"tag": "a"}
  • {"$or": [{"tag": "a"}, {"label": "b"}]}
  • {"$and": [{"tag": {"$in": ["a", "z"]}}, {"label": "b"}]}

TYPE: dict[str, Any] | None DEFAULT: None

sort

a 'sort' clause for the query, such as {"$vector": [...]}, {"$vectorize": "..."} or {"mdkey": 1}. Metadata sort conditions must be expressed by their 'bare' name.

TYPE: dict[str, Any] | None DEFAULT: None

include_similarity

whether to return similarity scores with each match. Requires vector sort.

TYPE: bool | None DEFAULT: None

include_sort_vector

whether to return the very query vector used for the ANN search alongside the iterable of results. Requires vector sort. Note that the shape of the return value depends on this parameter.

TYPE: bool DEFAULT: False

include_embeddings

whether to retrieve the matches' own embedding vectors.

TYPE: bool DEFAULT: False

RETURNS DESCRIPTION
tuple[list[float] | None, AsyncIterable[AstraDBQueryResult]] | AsyncIterable[AstraDBQueryResult]

The shape of the return value depends on the value of include_sort_vector:

  • if include_sort_vector = False, the return value is an iterable over the AstraDBQueryResult items returned by the query. Entries that fail the decoding step, if any, are discarded after the query, which may lead to fewer items being returned than the required n.
  • if include_sort_vector = True, the return value is a 2-item tuple (sort_v, results_ite), where:
    • sort_v is the sort vector, if requested, or None if not available.
    • results_ite is an iterable over AstraDBQueryResult items as above.

metadata_search

metadata_search(filter: dict[str, Any] | None = None, n: int = 5) -> list[Document]

Get documents via a metadata search.

PARAMETER DESCRIPTION
filter

the metadata to query for.

TYPE: dict[str, Any] | None DEFAULT: None

n

the maximum number of documents to return.

TYPE: int DEFAULT: 5

RETURNS DESCRIPTION
list[Document]

The documents found.
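
Example (illustrative sketch): a pure metadata lookup, with no vector search involved.

docs = store.metadata_search(filter={"topic": "db"}, n=10)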

ametadata_search async

ametadata_search(
    filter: dict[str, Any] | None = None, n: int = 5
) -> Iterable[Document]

Get documents via a metadata search.

PARAMETER DESCRIPTION
filter

the metadata to query for.

TYPE: dict[str, Any] | None DEFAULT: None

n

the maximum number of documents to return.

TYPE: int DEFAULT: 5

RETURNS DESCRIPTION
Iterable[Document]

The documents found.

get_by_document_id

get_by_document_id(document_id: str) -> Document | None

Retrieve a single document from the store, given its document ID.

PARAMETER DESCRIPTION
document_id

The document ID

TYPE: str

RETURNS DESCRIPTION
Document | None

The document if it exists. Otherwise None.

aget_by_document_id async

aget_by_document_id(document_id: str) -> Document | None

Retrieve a single document from the store, given its document ID.

PARAMETER DESCRIPTION
document_id

The document ID

TYPE: str

RETURNS DESCRIPTION
Document | None

The document if it exists. Otherwise None.

get_by_ids

get_by_ids(
    ids: Sequence[str],
    /,
    batch_size: int | None = None,
    batch_concurrency: int | None = None,
) -> list[Document]

Get documents by their IDs.

The returned documents have the ID field set to the ID of the document in the vector store.

Fewer documents may be returned than requested if some IDs are not found or if there are duplicated IDs.

Users should not assume that the order of the returned documents matches the order of the input IDs. Instead, users should rely on the ID field of the returned documents.

PARAMETER DESCRIPTION
ids

List of ids to retrieve.

TYPE: Sequence[str]

batch_size

If many IDs are requested, these are split in chunks and multiple requests are run and collated. This sets the size of each such chunk of IDs. Default is 80. The database sets a hard limit of 100.

TYPE: int | None DEFAULT: None

batch_concurrency

Number of threads for executing multiple requests if needed. Default is 20.

TYPE: int | None DEFAULT: None

RETURNS DESCRIPTION
list[Document]

List of Documents.
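
Example (illustrative sketch): bulk retrieval by id; missing ids are silently skipped and the result order is not guaranteed.

docs = store.get_by_ids(["doc-1", "doc-2", "missing-id"], batch_size=80)
found_ids = {doc.id for doc in docs}    # "missing-id" simply won't appear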

get_by_document_ids

get_by_document_ids(
    ids: Sequence[str],
    /,
    batch_size: int | None = None,
    batch_concurrency: int | None = None,
) -> list[Document]

Get documents by their IDs.

The returned documents have the ID field set to the ID of the document in the vector store.

Fewer documents may be returned than requested if some IDs are not found or if there are duplicated IDs.

Users should not assume that the order of the returned documents matches the order of the input IDs. Instead, users should rely on the ID field of the returned documents.

PARAMETER DESCRIPTION
ids

List of ids to retrieve.

TYPE: Sequence[str]

batch_size

If many IDs are requested, these are split in chunks and multiple requests are run and collated. This sets the size of each such chunk of IDs. Default is 80. The database sets a hard limit of 100.

TYPE: int | None DEFAULT: None

batch_concurrency

Number of threads for executing multiple requests if needed. Default is 20.

TYPE: int | None DEFAULT: None

RETURNS DESCRIPTION
list[Document]

List of Documents.

aget_by_ids async

aget_by_ids(
    ids: Sequence[str],
    /,
    batch_size: int | None = None,
    batch_concurrency: int | None = None,
) -> list[Document]

Get documents by their IDs.

The returned documents have the ID field set to the ID of the document in the vector store.

Fewer documents may be returned than requested if some IDs are not found or if there are duplicated IDs.

Users should not assume that the order of the returned documents matches the order of the input IDs. Instead, users should rely on the ID field of the returned documents.

PARAMETER DESCRIPTION
ids

List of ids to retrieve.

TYPE: Sequence[str]

batch_size

If many IDs are requested, these are split in chunks and multiple requests are run and collated. This sets the size of each such chunk of IDs. Default is 80. The database sets a hard limit of 100.

TYPE: int | None DEFAULT: None

batch_concurrency

Number of threads for executing multiple requests if needed. Default is 20.

TYPE: int | None DEFAULT: None

RETURNS DESCRIPTION
list[Document]

List of Documents.

aget_by_document_ids async

aget_by_document_ids(
    ids: Sequence[str],
    /,
    batch_size: int | None = None,
    batch_concurrency: int | None = None,
) -> list[Document]

Get documents by their IDs.

The returned documents have the ID field set to the ID of the document in the vector store.

Fewer documents may be returned than requested if some IDs are not found or if there are duplicated IDs.

Users should not assume that the order of the returned documents matches the order of the input IDs. Instead, users should rely on the ID field of the returned documents.

PARAMETER DESCRIPTION
ids

List of ids to retrieve.

TYPE: Sequence[str]

batch_size

If many IDs are requested, these are split in chunks and multiple requests are run and collated. This sets the size of each such chunk of IDs. Default is 80. The database sets a hard limit of 100.

TYPE: int | None DEFAULT: None

batch_concurrency

Number of threads for executing multiple requests if needed. Default is 20.

TYPE: int | None DEFAULT: None

RETURNS DESCRIPTION
list[Document]

List of Documents.

similarity_search

similarity_search(
    query: str,
    k: int = 4,
    filter: dict[str, Any] | None = None,
    lexical_query: str | None = None,
    **kwargs: Any,
) -> list[Document]

Return docs most similar to query.

PARAMETER DESCRIPTION
query

Query to look up documents similar to.

TYPE: str

k

Number of Documents to return. Defaults to 4.

TYPE: int DEFAULT: 4

filter

Filter on the metadata to apply.

TYPE: dict[str, Any] | None DEFAULT: None

lexical_query

for hybrid search, a specific query for the lexical portion of the retrieval. If omitted or empty, defaults to the same as 'query'. If passed on a non-hybrid search, an error is raised.

TYPE: str | None DEFAULT: None

**kwargs

Additional arguments are ignored.

TYPE: Any DEFAULT: {}

RETURNS DESCRIPTION
list[Document]

The list of Documents most similar to the query.
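
Example (illustrative sketch): a metadata-filtered similarity search; the commented variant shows a distinct lexical query, which is only valid when the search runs in hybrid mode.

docs = store.similarity_search("serverless databases", k=3, filter={"topic": "db"})

# On a hybrid-enabled collection, the lexical portion can use its own query:
# docs = store.similarity_search("serverless databases", k=3, lexical_query="Astra")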

similarity_search_with_score

similarity_search_with_score(
    query: str,
    k: int = 4,
    filter: dict[str, Any] | None = None,
    lexical_query: str | None = None,
) -> list[tuple[Document, float]]

Return docs most similar to query with score.

PARAMETER DESCRIPTION
query

Query to look up documents similar to.

TYPE: str

k

Number of Documents to return. Defaults to 4.

TYPE: int DEFAULT: 4

filter

Filter on the metadata to apply.

TYPE: dict[str, Any] | None DEFAULT: None

lexical_query

for hybrid search, a specific query for the lexical portion of the retrieval. If omitted or empty, defaults to the same as 'query'. If passed on a non-hybrid search, an error is raised.

TYPE: str | None DEFAULT: None

RETURNS DESCRIPTION
list[tuple[Document, float]]

The list of (Document, score), the most similar to the query vector.

similarity_search_with_score_id

similarity_search_with_score_id(
    query: str,
    k: int = 4,
    filter: dict[str, Any] | None = None,
    lexical_query: str | None = None,
) -> list[tuple[Document, float, str]]

Return docs most similar to the query with score and id.

PARAMETER DESCRIPTION
query

Query to look up documents similar to.

TYPE: str

k

Number of Documents to return. Defaults to 4.

TYPE: int DEFAULT: 4

filter

Filter on the metadata to apply.

TYPE: dict[str, Any] | None DEFAULT: None

lexical_query

for hybrid search, a specific query for the lexical portion of the retrieval. If omitted or empty, defaults to the same as 'query'. If passed on a non-hybrid search, an error is raised.

TYPE: str | None DEFAULT: None

RETURNS DESCRIPTION
list[tuple[Document, float, str]]

The list of (Document, score, id), the most similar to the query.

similarity_search_by_vector

similarity_search_by_vector(
    embedding: list[float],
    k: int = 4,
    filter: dict[str, Any] | None = None,
    **kwargs: Any,
) -> list[Document]

Return docs most similar to embedding vector.

PARAMETER DESCRIPTION
embedding

Embedding to look up documents similar to.

TYPE: list[float]

k

Number of Documents to return. Defaults to 4.

TYPE: int DEFAULT: 4

filter

Filter on the metadata to apply.

TYPE: dict[str, Any] | None DEFAULT: None

**kwargs

Additional arguments are ignored.

TYPE: Any DEFAULT: {}

RETURNS DESCRIPTION
list[Document]

The list of Documents most similar to the query vector.

similarity_search_with_score_by_vector

similarity_search_with_score_by_vector(
    embedding: list[float], k: int = 4, filter: dict[str, Any] | None = None
) -> list[tuple[Document, float]]

Return docs most similar to embedding vector with score.

PARAMETER DESCRIPTION
embedding

Embedding to look up documents similar to.

TYPE: list[float]

k

Number of Documents to return. Defaults to 4.

TYPE: int DEFAULT: 4

filter

Filter on the metadata to apply.

TYPE: dict[str, Any] | None DEFAULT: None

RETURNS DESCRIPTION
list[tuple[Document, float]]

The list of (Document, score), the most similar to the query vector.

similarity_search_with_score_id_by_vector

similarity_search_with_score_id_by_vector(
    embedding: list[float], k: int = 4, filter: dict[str, Any] | None = None
) -> list[tuple[Document, float, str]]

Return docs most similar to embedding vector with score and id.

PARAMETER DESCRIPTION
embedding

Embedding to look up documents similar to.

TYPE: list[float]

k

Number of Documents to return. Defaults to 4.

TYPE: int DEFAULT: 4

filter

Filter on the metadata to apply.

TYPE: dict[str, Any] | None DEFAULT: None

RETURNS DESCRIPTION
list[tuple[Document, float, str]]

The list of (Document, score, id), the most similar to the query vector.

RAISES DESCRIPTION
ValueError

if the vector store uses server-side embeddings.

asimilarity_search async

asimilarity_search(
    query: str,
    k: int = 4,
    filter: dict[str, Any] | None = None,
    lexical_query: str | None = None,
    **kwargs: Any,
) -> list[Document]

Return docs most similar to query.

PARAMETER DESCRIPTION
query

Query to look up documents similar to.

TYPE: str

k

Number of Documents to return. Defaults to 4.

TYPE: int DEFAULT: 4

filter

Filter on the metadata to apply.

TYPE: dict[str, Any] | None DEFAULT: None

lexical_query

for hybrid search, a specific query for the lexical portion of the retrieval. If omitted or empty, defaults to the same as 'query'. If passed on a non-hybrid search, an error is raised.

TYPE: str | None DEFAULT: None

**kwargs

Additional arguments are ignored.

TYPE: Any DEFAULT: {}

RETURNS DESCRIPTION
list[Document]

The list of Documents most similar to the query.

asimilarity_search_with_score async

asimilarity_search_with_score(
    query: str,
    k: int = 4,
    filter: dict[str, Any] | None = None,
    lexical_query: str | None = None,
) -> list[tuple[Document, float]]

Return docs most similar to query with score.

PARAMETER DESCRIPTION
query

Query to look up documents similar to.

TYPE: str

k

Number of Documents to return. Defaults to 4.

TYPE: int DEFAULT: 4

filter

Filter on the metadata to apply.

TYPE: dict[str, Any] | None DEFAULT: None

lexical_query

for hybrid search, a specific query for the lexical portion of the retrieval. If omitted or empty, defaults to the same as 'query'. If passed on a non-hybrid search, an error is raised.

TYPE: str | None DEFAULT: None

RETURNS DESCRIPTION
list[tuple[Document, float]]

The list of (Document, score), the most similar to the query vector.

asimilarity_search_with_score_id async

asimilarity_search_with_score_id(
    query: str,
    k: int = 4,
    filter: dict[str, Any] | None = None,
    lexical_query: str | None = None,
) -> list[tuple[Document, float, str]]

Return docs most similar to the query with score and id.

PARAMETER DESCRIPTION
query

Query to look up documents similar to.

TYPE: str

k

Number of Documents to return. Defaults to 4.

TYPE: int DEFAULT: 4

filter

Filter on the metadata to apply.

TYPE: dict[str, Any] | None DEFAULT: None

lexical_query

for hybrid search, a specific query for the lexical portion of the retrieval. If omitted or empty, defaults to the same as 'query'. If passed on a non-hybrid search, an error is raised.

TYPE: str | None DEFAULT: None

RETURNS DESCRIPTION
list[tuple[Document, float, str]]

The list of (Document, score, id), the most similar to the query.

asimilarity_search_by_vector async

asimilarity_search_by_vector(
    embedding: list[float],
    k: int = 4,
    filter: dict[str, Any] | None = None,
    **kwargs: Any,
) -> list[Document]

Return docs most similar to embedding vector.

PARAMETER DESCRIPTION
embedding

Embedding to look up documents similar to.

TYPE: list[float]

k

Number of Documents to return. Defaults to 4.

TYPE: int DEFAULT: 4

filter

Filter on the metadata to apply.

TYPE: dict[str, Any] | None DEFAULT: None

**kwargs

Additional arguments are ignored.

TYPE: Any DEFAULT: {}

RETURNS DESCRIPTION
list[Document]

The list of Documents most similar to the query vector.

asimilarity_search_with_score_by_vector async

asimilarity_search_with_score_by_vector(
    embedding: list[float], k: int = 4, filter: dict[str, Any] | None = None
) -> list[tuple[Document, float]]

Return docs most similar to embedding vector with score.

PARAMETER DESCRIPTION
embedding

Embedding to look up documents similar to.

TYPE: list[float]

k

Number of Documents to return. Defaults to 4.

TYPE: int DEFAULT: 4

filter

Filter on the metadata to apply.

TYPE: dict[str, Any] | None DEFAULT: None

RETURNS DESCRIPTION
list[tuple[Document, float]]

The list of (Document, score), the most similar to the query vector.

asimilarity_search_with_score_id_by_vector async

asimilarity_search_with_score_id_by_vector(
    embedding: list[float], k: int = 4, filter: dict[str, Any] | None = None
) -> list[tuple[Document, float, str]]

Return docs most similar to embedding vector with score and id.

PARAMETER DESCRIPTION
embedding

Embedding to look up documents similar to.

TYPE: list[float]

k

Number of Documents to return. Defaults to 4.

TYPE: int DEFAULT: 4

filter

Filter on the metadata to apply.

TYPE: dict[str, Any] | None DEFAULT: None

RETURNS DESCRIPTION
list[tuple[Document, float, str]]

The list of (Document, score, id), the most similar to the query vector.

RAISES DESCRIPTION
ValueError

If the vector store uses server-side embeddings.

similarity_search_with_embedding_by_vector

similarity_search_with_embedding_by_vector(
    embedding: list[float], k: int = 4, filter: dict[str, Any] | None = None
) -> list[tuple[Document, list[float]]]

Return docs most similar to embedding vector with embedding.

PARAMETER DESCRIPTION
embedding

Embedding to look up documents similar to.

TYPE: list[float]

k

Number of Documents to return. Defaults to 4.

TYPE: int DEFAULT: 4

filter

Filter on the metadata to apply.

TYPE: dict[str, Any] | None DEFAULT: None

RETURNS DESCRIPTION
list[tuple[Document, list[float]]]

The list of (Document, embedding) pairs most similar to the query vector.

asimilarity_search_with_embedding_by_vector async

asimilarity_search_with_embedding_by_vector(
    embedding: list[float], k: int = 4, filter: dict[str, Any] | None = None
) -> list[tuple[Document, list[float]]]

Return docs most similar to embedding vector with embedding.

PARAMETER DESCRIPTION
embedding

Embedding to look up documents similar to.

TYPE: list[float]

k

Number of Documents to return. Defaults to 4.

TYPE: int DEFAULT: 4

filter

Filter on the metadata to apply.

TYPE: dict[str, Any] | None DEFAULT: None

RETURNS DESCRIPTION
list[tuple[Document, list[float]]]

The list of (Document, embedding) pairs most similar to the query vector.

similarity_search_with_embedding

similarity_search_with_embedding(
    query: str, k: int = 4, filter: dict[str, Any] | None = None
) -> tuple[list[float], list[tuple[Document, list[float]]]]

Return docs most similar to the query with embedding.

Also includes the query embedding vector.

PARAMETER DESCRIPTION
query

Query to look up documents similar to.

TYPE: str

k

Number of Documents to return. Defaults to 4.

TYPE: int DEFAULT: 4

filter

Filter on the metadata to apply.

TYPE: dict[str, Any] | None DEFAULT: None

RETURNS DESCRIPTION
tuple[list[float], list[tuple[Document, list[float]]]]

A 2-tuple: the query embedding vector, followed by the list of (Document, embedding) pairs most similar to it.

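The 2-tuple return value can be unpacked as in this sketch (same hypothetical vector_store as above):

    query_embedding, hits = vector_store.similarity_search_with_embedding(
        "What is the capital of France?",
        k=4,
    )
    print(len(query_embedding))  # dimensionality of the query vector
    for doc, doc_embedding in hits:
        print(doc.page_content[:60], len(doc_embedding))
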
asimilarity_search_with_embedding async

asimilarity_search_with_embedding(
    query: str, k: int = 4, filter: dict[str, Any] | None = None
) -> tuple[list[float], list[tuple[Document, list[float]]]]

Return docs most similar to the query with embedding.

Also includes the query embedding vector.

PARAMETER DESCRIPTION
query

Query to look up documents similar to.

TYPE: str

k

Number of Documents to return. Defaults to 4.

TYPE: int DEFAULT: 4

filter

Filter on the metadata to apply.

TYPE: dict[str, Any] | None DEFAULT: None

RETURNS DESCRIPTION
tuple[list[float], list[tuple[Document, list[float]]]]

A 2-tuple: the query embedding vector, followed by the list of (Document, embedding) pairs most similar to it.

max_marginal_relevance_search_by_vector

max_marginal_relevance_search_by_vector(
    embedding: list[float],
    k: int = 4,
    fetch_k: int = 20,
    lambda_mult: float = 0.5,
    filter: dict[str, Any] | None = None,
    **kwargs: Any,
) -> list[Document]

Return docs selected using the maximal marginal relevance.

Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.

PARAMETER DESCRIPTION
embedding

Embedding to look up documents similar to.

TYPE: list[float]

k

Number of Documents to return.

TYPE: int DEFAULT: 4

fetch_k

Number of Documents to fetch to pass to MMR algorithm.

TYPE: int DEFAULT: 20

lambda_mult

Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity.

TYPE: float DEFAULT: 0.5

filter

Filter on the metadata to apply.

TYPE: dict[str, Any] | None DEFAULT: None

**kwargs

Additional arguments are ignored.

TYPE: Any DEFAULT: {}

RETURNS DESCRIPTION
list[Document]

The list of Documents selected by maximal marginal relevance.
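
A sketch of MMR retrieval by vector, again assuming a hypothetical embeddings object and a configured vector_store:

    query_vector = embeddings.embed_query("renewable energy policies")
    diverse_docs = vector_store.max_marginal_relevance_search_by_vector(
        query_vector,
        k=4,
        fetch_k=20,       # candidates handed to the MMR re-ranking step
        lambda_mult=0.5,  # 0 = maximum diversity, 1 = minimum diversity
    )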

amax_marginal_relevance_search_by_vector async

amax_marginal_relevance_search_by_vector(
    embedding: list[float],
    k: int = 4,
    fetch_k: int = 20,
    lambda_mult: float = 0.5,
    filter: dict[str, Any] | None = None,
    **kwargs: Any,
) -> list[Document]

Return docs selected using the maximal marginal relevance.

Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.

PARAMETER DESCRIPTION
embedding

Embedding to look up documents similar to.

TYPE: list[float]

k

Number of Documents to return.

TYPE: int DEFAULT: 4

fetch_k

Number of Documents to fetch to pass to MMR algorithm.

TYPE: int DEFAULT: 20

lambda_mult

Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity.

TYPE: float DEFAULT: 0.5

filter

Filter on the metadata to apply.

TYPE: dict[str, Any] | None DEFAULT: None

**kwargs

Additional arguments are ignored.

TYPE: Any DEFAULT: {}

RETURNS DESCRIPTION
list[Document]

The list of Documents selected by maximal marginal relevance.

max_marginal_relevance_search

max_marginal_relevance_search(
    query: str,
    k: int = 4,
    fetch_k: int = 20,
    lambda_mult: float = 0.5,
    filter: dict[str, Any] | None = None,
    **kwargs: Any,
) -> list[Document]

Return docs selected using the maximal marginal relevance.

Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.

PARAMETER DESCRIPTION
query

Query to look up documents similar to.

TYPE: str

k

Number of Documents to return.

TYPE: int DEFAULT: 4

fetch_k

Number of Documents to fetch to pass to MMR algorithm.

TYPE: int DEFAULT: 20

lambda_mult

Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity.

TYPE: float DEFAULT: 0.5

filter

Filter on the metadata to apply.

TYPE: dict[str, Any] | None DEFAULT: None

**kwargs

Additional arguments are ignored.

TYPE: Any DEFAULT: {}

RETURNS DESCRIPTION
list[Document]

The list of Documents selected by maximal marginal relevance.
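
The query-based variant embeds the query for you; a sketch with the same hypothetical vector_store:

    diverse_docs = vector_store.max_marginal_relevance_search(
        "renewable energy policies",
        k=4,
        fetch_k=20,
        lambda_mult=0.25,  # lean toward diversity
        filter={"year": 2023},
    )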

amax_marginal_relevance_search async

amax_marginal_relevance_search(
    query: str,
    k: int = 4,
    fetch_k: int = 20,
    lambda_mult: float = 0.5,
    filter: dict[str, Any] | None = None,
    **kwargs: Any,
) -> list[Document]

Return docs selected using the maximal marginal relevance.

Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.

PARAMETER DESCRIPTION
query

Query to look up documents similar to.

TYPE: str

k

Number of Documents to return.

TYPE: int DEFAULT: 4

fetch_k

Number of Documents to fetch to pass to MMR algorithm.

TYPE: int DEFAULT: 20

lambda_mult

Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity.

TYPE: float DEFAULT: 0.5

filter

Filter on the metadata to apply.

TYPE: dict[str, Any] | None DEFAULT: None

**kwargs

Additional arguments are ignored.

TYPE: Any DEFAULT: {}

RETURNS DESCRIPTION
list[Document]

The list of Documents selected by maximal marginal relevance.

from_texts classmethod

from_texts(
    texts: Iterable[str],
    embedding: Embeddings | None = None,
    metadatas: Iterable[dict] | None = None,
    ids: Iterable[str | None] | None = None,
    **kwargs: Any,
) -> AstraDBVectorStore

Create an Astra DB vectorstore from raw texts.

PARAMETER DESCRIPTION
texts

the texts to insert.

TYPE: Iterable[str]

embedding

the embedding function to use in the store.

TYPE: Embeddings | None DEFAULT: None

metadatas

metadata dicts for the texts.

TYPE: Iterable[dict] | None DEFAULT: None

ids

ids to associate to the texts.

TYPE: Iterable[str | None] | None DEFAULT: None

**kwargs

you can pass any argument that you would pass to add_texts and/or to the AstraDBVectorStore constructor (see these methods for details). These arguments will be routed to the respective methods as they are.

TYPE: Any DEFAULT: {}

RETURNS DESCRIPTION
AstraDBVectorStore

an AstraDBVectorStore vectorstore.
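
A sketch of building a store from raw texts. The my_embeddings object and the collection name are placeholders; collection_name is one example of a keyword argument routed to the AstraDBVectorStore constructor, with credentials assumed to be supplied the same way or via the environment:

    from langchain_astradb import AstraDBVectorStore

    vector_store = AstraDBVectorStore.from_texts(
        texts=["Paris is the capital of France.", "Rome is the capital of Italy."],
        embedding=my_embeddings,       # your Embeddings instance
        metadatas=[{"lang": "en"}, {"lang": "en"}],
        ids=["doc-paris", "doc-rome"],
        collection_name="qa_demo",     # routed to the constructor
    )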

afrom_texts async classmethod

afrom_texts(
    texts: Iterable[str],
    embedding: Embeddings | None = None,
    metadatas: Iterable[dict] | None = None,
    ids: Iterable[str | None] | None = None,
    **kwargs: Any,
) -> AstraDBVectorStore

Create an Astra DB vectorstore from raw texts.

PARAMETER DESCRIPTION
texts

the texts to insert.

TYPE: Iterable[str]

embedding

embedding function to use.

TYPE: Embeddings | None DEFAULT: None

metadatas

metadata dicts for the texts.

TYPE: Iterable[dict] | None DEFAULT: None

ids

ids to associate to the texts.

TYPE: Iterable[str | None] | None DEFAULT: None

**kwargs

you can pass any argument that you would pass to aadd_texts and/or to the AstraDBVectorStore constructor (see these methods for details). These arguments will be routed to the respective methods as they are.

TYPE: Any DEFAULT: {}

RETURNS DESCRIPTION
AstraDBVectorStore

an AstraDBVectorStore vectorstore.

from_documents classmethod

from_documents(
    documents: Iterable[Document], embedding: Embeddings | None = None, **kwargs: Any
) -> AstraDBVectorStore

Create an Astra DB vectorstore from a document list.

Utility method that defers to from_texts.

PARAMETER DESCRIPTION
documents

a list of Document objects for insertion in the store.

TYPE: Iterable[Document]

embedding

the embedding function to use in the store.

TYPE: Embeddings | None DEFAULT: None

**kwargs

you can pass any argument that you would pass to add_texts and/or to the AstraDBVectorStore constructor (see these methods for details). These arguments will be routed to the respective methods as they are.

TYPE: Any DEFAULT: {}

RETURNS DESCRIPTION
AstraDBVectorStore

an AstraDBVectorStore vectorstore.
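
A sketch with Document objects, deferring to from_texts under the hood as noted above (my_embeddings and the collection name are again placeholders):

    from langchain_core.documents import Document
    from langchain_astradb import AstraDBVectorStore

    docs = [
        Document(page_content="Paris is the capital of France.", metadata={"lang": "en"}),
        Document(page_content="Rome is the capital of Italy.", metadata={"lang": "en"}),
    ]
    vector_store = AstraDBVectorStore.from_documents(
        docs,
        embedding=my_embeddings,
        collection_name="qa_demo",
    )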

afrom_documents async classmethod

afrom_documents(
    documents: Iterable[Document], embedding: Embeddings | None = None, **kwargs: Any
) -> AstraDBVectorStore

Create an Astra DB vectorstore from a document list.

Utility method that defers to afrom_texts.

PARAMETER DESCRIPTION
documents

a list of Document objects for insertion in the store.

TYPE: Iterable[Document]

embedding

the embedding function to use in the store.

TYPE: Embeddings | None DEFAULT: None

**kwargs

you can pass any argument that you would pass to aadd_texts and/or to the AstraDBVectorStore constructor (see these methods for details). These arguments will be routed to the respective methods as they are.

TYPE: Any DEFAULT: {}

RETURNS DESCRIPTION
AstraDBVectorStore

an AstraDBVectorStore vectorstore.

AstraDBVectorStoreError

Bases: Exception

An exception during vector-store activities.

This exception represents any operational exception occurring while performing an action within an AstraDBVectorStore.
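
A sketch of defensive handling around a store operation; the import path and the vector_store variable are assumptions to adjust for your setup:

    # import path is an assumption; adjust to where the class is exposed in your version
    from langchain_astradb.vectorstores import AstraDBVectorStoreError

    try:
        docs = vector_store.similarity_search("capital of France", k=3)
    except AstraDBVectorStoreError as exc:
        # operational failure inside the vector store (e.g. server-side or collection issues)
        print(f"Vector store operation failed: {exc}")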