# AstraDBVectorStore

> **Class** in `langchain_astradb`

📖 [View in docs](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore)

A vector store that uses DataStax Astra DB as its backend.

## Signature

```python
AstraDBVectorStore(
    self,
    *,
    collection_name: str,
    embedding: Embeddings | None = None,
    token: str | TokenProvider | None = None,
    api_endpoint: str | None = None,
    environment: str | None = None,
    namespace: str | None = None,
    metric: str | None = None,
    batch_size: int | None = None,
    bulk_insert_batch_concurrency: int | None = None,
    bulk_insert_overwrite_concurrency: int | None = None,
    bulk_delete_concurrency: int | None = None,
    setup_mode: SetupMode | None = None,
    pre_delete_collection: bool = False,
    metadata_indexing_include: Iterable[str] | None = None,
    metadata_indexing_exclude: Iterable[str] | None = None,
    collection_indexing_policy: dict[str, Any] | None = None,
    collection_vector_service_options: VectorServiceOptions | None = None,
    collection_embedding_api_key: str | EmbeddingHeadersProvider | None = None,
    content_field: str | None = None,
    ignore_invalid_documents: bool = False,
    autodetect_collection: bool = False,
    ext_callers: list[tuple[str | None, str | None] | str | None] | None = None,
    component_name: str = COMPONENT_NAME_VECTORSTORE,
    api_options: APIOptions | None = None,
    collection_rerank: CollectionRerankOptions | RerankServiceOptions | None = None,
    collection_reranking_api_key: str | RerankingHeadersProvider | None = None,
    collection_lexical: str | dict[str, Any] | CollectionLexicalOptions | None = None,
    hybrid_search: HybridSearchMode | None = None,
    hybrid_limit_factor: float | dict[str, float] | HybridLimitFactorPrescription | None = None,
)
```

## Description

**Setup:**

Install the `langchain-astradb` package, then head to the
[Astra DB website](https://astra.datastax.com), create an account, create a
new database, and
[create an application token](https://docs.datastax.com/en/astra-db-serverless/administration/manage-application-tokens.html).

```bash
pip install -qU langchain-astradb
```

**Instantiate:**

Get your API endpoint and application token from the dashboard of your database.

Create a vector store and provide a LangChain embedding object for working with
it:

```python
import getpass

from langchain_astradb import AstraDBVectorStore
from langchain_openai import OpenAIEmbeddings

ASTRA_DB_API_ENDPOINT = getpass.getpass("ASTRA_DB_API_ENDPOINT = ")
ASTRA_DB_APPLICATION_TOKEN = getpass.getpass("ASTRA_DB_APPLICATION_TOKEN = ")

vector_store = AstraDBVectorStore(
    collection_name="astra_vector_langchain",
    embedding=OpenAIEmbeddings(),
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
)
```

(Vectorize) Create a vector store where the embedding vector computation
happens entirely on the server-side, using the
[vectorize](https://docs.datastax.com/en/astra-db-serverless/databases/embedding-generation.html)
feature:

```python
import getpass
from astrapy.info import VectorServiceOptions

from langchain_astradb import AstraDBVectorStore

ASTRA_DB_API_ENDPOINT = getpass.getpass("ASTRA_DB_API_ENDPOINT = ")
ASTRA_DB_APPLICATION_TOKEN = getpass.getpass("ASTRA_DB_APPLICATION_TOKEN = ")

vector_store = AstraDBVectorStore(
    collection_name="astra_vectorize_langchain",
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
    collection_vector_service_options=VectorServiceOptions(
        provider="nvidia",
        model_name="NV-Embed-QA",
        # authentication=...,  # needed by some providers/models
    ),
)
```

(Hybrid) The underlying Astra DB typically supports hybrid search
(i.e. lexical + vector ANN combined) to improve the accuracy of results.
Hybrid search is provisioned and used automatically when available. For manual
control, use the `collection_rerank` and `collection_lexical` constructor
parameters:

```python
import getpass
from astrapy.info import (
    CollectionLexicalOptions,
    CollectionRerankOptions,
    RerankServiceOptions,
    VectorServiceOptions,
)

from langchain_astradb import AstraDBVectorStore

ASTRA_DB_API_ENDPOINT = getpass.getpass("ASTRA_DB_API_ENDPOINT = ")
ASTRA_DB_APPLICATION_TOKEN = getpass.getpass("ASTRA_DB_APPLICATION_TOKEN = ")

vector_store = AstraDBVectorStore(
    collection_name="astra_vectorize_langchain",
    # embedding=...,  # needed unless using 'vectorize'
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
    collection_vector_service_options=VectorServiceOptions(...),  # see above
    collection_lexical=CollectionLexicalOptions(analyzer="standard"),
    collection_rerank=CollectionRerankOptions(
        service=RerankServiceOptions(
            provider="nvidia",
            model_name="nvidia/llama-3.2-nv-rerankqa-1b-v2",
        ),
    ),
    collection_reranking_api_key=...,  # if needed by the model/setup
)
```

Hybrid-related server upgrades may introduce a mismatch between the store
defaults and a pre-existing collection. If such a mismatch is reported
(as a Data API "EXISTING_COLLECTION_DIFFERENT_SETTINGS" error), there are
three ways to resolve it: (1) use autodetect mode; (2) switch `setup_mode`
to "OFF"; or (3) explicitly specify lexical and/or rerank settings in the
vector store constructor, matching the existing collection configuration. See
[here](https://github.com/langchain-ai/langchain-datastax/blob/main/libs/astradb/README.md#collection-defaults-mismatch)
for more details.

(Autodetect) Let the vector store infer its configuration (including
vectorize and the document encoding scheme on DB) by inspecting an existing
collection:

```python
import getpass

from langchain_astradb import AstraDBVectorStore

ASTRA_DB_API_ENDPOINT = getpass.getpass("ASTRA_DB_API_ENDPOINT = ")
ASTRA_DB_APPLICATION_TOKEN = getpass.getpass("ASTRA_DB_APPLICATION_TOKEN = ")

vector_store = AstraDBVectorStore(
    collection_name="astra_existing_collection",
    # embedding=...,  # needed unless using 'vectorize'
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
    autodetect_collection=True,
)
```

(Non-Astra DB) This class can also target a non-Astra DB database, such as a
self-deployed HCD, through the Data API:

```python
import getpass

from astrapy.authentication import UsernamePasswordTokenProvider

from langchain_astradb import AstraDBVectorStore

vector_store = AstraDBVectorStore(
    collection_name="astra_existing_collection",
    # embedding=...,  # needed unless using 'vectorize'
    api_endpoint="http://localhost:8181",
    token=UsernamePasswordTokenProvider(
        username="user",
        password="pwd",
    ),
    collection_vector_service_options=...,  # if 'vectorize'
)
```

**Add Documents:**

Add one or more documents to the vector store. IDs are optional: if an ID is
provided and matches an existing document, that document is overwritten.

```python
from langchain_core.documents import Document

document_1 = Document(page_content="foo", metadata={"baz": "bar"})
document_2 = Document(page_content="thud", metadata={"bar": "baz"})
document_3 = Document(page_content="i will be deleted :(")

documents = [document_1, document_2, document_3]
ids = ["1", "2", "3"]
vector_store.add_documents(documents=documents, ids=ids)
```

**Delete Documents:**

Delete one or more documents from the vector store by their IDs.

```python
vector_store.delete(ids=["3"])
```

**Search:**

Run a similarity search with a provided query string.

```python
results = vector_store.similarity_search(query="thud", k=1)
for doc in results:
    print(f"{doc.page_content}[{doc.metadata}]")
```

```
thud[{'bar': 'baz'}]
```

**Search with filter:**

Specify metadata filters for a search. The simple `key: value` filter syntax
means equality (with an implied 'and' between multiple keys).
More complex syntax is available, following the Data API specifications; see
the [docs](https://docs.datastax.com/en/astra-db-serverless/api-reference/filter-operator-collections.html).

```python
results = vector_store.similarity_search(
    query="thud", k=1, filter={"bar": "baz"}
)
for doc in results:
    print(f"{doc.page_content}[{doc.metadata}]")
```

```
thud[{'bar': 'baz'}]
```
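
As a sketch of the richer filter syntax, Data API operators such as `$and`, `$in` and `$exists` can be combined (the metadata keys here mirror the toy documents above):

```python
# A composite metadata filter using Data API operators ($and, $in, $exists).
# The keys "bar" and "baz" are the example metadata fields used above.
advanced_filter = {
    "$and": [
        {"bar": {"$in": ["baz", "qux"]}},  # 'bar' equals one of these values
        {"baz": {"$exists": True}},        # a 'baz' metadata field is present
    ]
}
```

Such a dict can be passed as the `filter` argument of `similarity_search` in place of the plain equality form.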

**Search with score:**

Search results are returned with their similarity score.

```python
results = vector_store.similarity_search_with_score(query="qux", k=1)
for doc, score in results:
    print(f"* [SIM={score:3f}] {doc.page_content} [{doc.metadata}]")
```

```
* [SIM=0.916135] foo [{'baz': 'bar'}]
```

**Async:**

All methods have an async counterpart, with the same name prefixed with `a`.

```python
# add documents
await vector_store.aadd_documents(documents=documents, ids=ids)

# delete documents
await vector_store.adelete(ids=["3"])

# search
results = await vector_store.asimilarity_search(query="thud", k=1)

# search with score
results = await vector_store.asimilarity_search_with_score(query="qux", k=1)
for doc, score in results:
    print(f"* [SIM={score:3f}] {doc.page_content} [{doc.metadata}]")
```

```
* [SIM=0.916135] foo [{'baz': 'bar'}]
```

**Use as Retriever:**

A retriever can be created from the vector store for further use.

```python
retriever = vector_store.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={"k": 1, "score_threshold": 0.5},
)
retriever.invoke("thud")
```

```
[Document(metadata={'bar': 'baz'}, page_content='thud')]
```

## Parameters

| Name | Type | Required | Description |
|------|------|----------|-------------|
| `embedding` | `Embeddings \| None` | No | the embeddings function or service to use. This enables client-side embedding functions or calls to external embedding providers. If `embedding` is passed, then `collection_vector_service_options` cannot be provided. (default: `None`) |
| `collection_name` | `str` | Yes | name of the Astra DB collection to create/use. |
| `token` | `str \| TokenProvider \| None` | No | API token for Astra DB usage, either in the form of a string or a subclass of `astrapy.authentication.TokenProvider`. If not provided, the environment variable ASTRA_DB_APPLICATION_TOKEN is inspected. (default: `None`) |
| `api_endpoint` | `str \| None` | No | full URL to the API endpoint, such as `https://<DB-ID>-us-east1.apps.astra.datastax.com`. If not provided, the environment variable ASTRA_DB_API_ENDPOINT is inspected. (default: `None`) |
| `environment` | `str \| None` | No | a string specifying the environment of the target Data API. If omitted, defaults to "prod" (Astra DB production). Other values are in `astrapy.constants.Environment` enum class. (default: `None`) |
| `namespace` | `str \| None` | No | namespace (aka keyspace) where the collection is created. If not provided, the environment variable ASTRA_DB_KEYSPACE is inspected. Defaults to the database's "default namespace". (default: `None`) |
| `metric` | `str \| None` | No | similarity function to use out of those available in Astra DB. If left out, it will use Astra DB API's defaults (i.e. "cosine" - but, for performance reasons, "dot_product" is suggested if embeddings are normalized to one). (default: `None`) |
| `batch_size` | `int \| None` | No | Size of document chunks for each individual insertion API request. If not provided, astrapy defaults are applied. (default: `None`) |
| `bulk_insert_batch_concurrency` | `int \| None` | No | Number of threads or coroutines to insert batches concurrently. (default: `None`) |
| `bulk_insert_overwrite_concurrency` | `int \| None` | No | Number of threads or coroutines in a batch to insert pre-existing entries. (default: `None`) |
| `bulk_delete_concurrency` | `int \| None` | No | Number of threads or coroutines for multiple-entry deletes. (default: `None`) |
| `setup_mode` | `SetupMode \| None` | No | mode used to create the collection (SYNC, ASYNC or OFF). (default: `None`) |
| `pre_delete_collection` | `bool` | No | whether to delete the collection before creating it. If False and the collection already exists, the collection will be used as is. (default: `False`) |
| `metadata_indexing_include` | `Iterable[str] \| None` | No | an allowlist of the specific metadata subfields that should be indexed for later filtering in searches. (default: `None`) |
| `metadata_indexing_exclude` | `Iterable[str] \| None` | No | a denylist of the specific metadata subfields that should not be indexed for later filtering in searches. (default: `None`) |
| `collection_indexing_policy` | `dict[str, Any] \| None` | No | a full "indexing" specification for what fields should be indexed for later filtering in searches. This dict must conform to the API specifications (see https://docs.datastax.com/en/astra-db-serverless/api-reference/collection-indexes.html) (default: `None`) |
| `collection_vector_service_options` | `VectorServiceOptions \| None` | No | specifies the use of server-side embeddings within Astra DB. If passing this parameter, `embedding` cannot be provided. (default: `None`) |
| `collection_embedding_api_key` | `str \| EmbeddingHeadersProvider \| None` | No | for usage of server-side embeddings within Astra DB. With this parameter one can supply an API Key that will be passed to Astra DB with each data request. This parameter can be either a string or a subclass of `astrapy.authentication.EmbeddingHeadersProvider`. This is useful when the service is configured for the collection, but no corresponding secret is stored within Astra's key management system. (default: `None`) |
| `content_field` | `str \| None` | No | name of the field containing the textual content in the documents when saved on Astra DB. For vectorize collections, this cannot be specified; for non-vectorize collections, defaults to "content". The special value "*" can be passed only if `autodetect_collection=True`. In this case, the actual name of the key for the textual content is guessed by inspection of a few documents from the collection, under the assumption that the longer strings are the most likely candidates. Please understand the limitations of this method and get some understanding of your data before passing `"*"` for this parameter. (default: `None`) |
| `ignore_invalid_documents` | `bool` | No | if False (default), exceptions are raised when a document is found on the Astra DB collection that does not have the expected shape. If set to True, such results from the database are ignored and a warning is issued. Note that in this case a similarity search may end up returning fewer results than the required `k`. (default: `False`) |
| `autodetect_collection` | `bool` | No | if True, turns on autodetect behavior. The store will look for an existing collection of the provided name and infer the store settings from it. Default is False. In autodetect mode, `content_field` can be given as `"*"`, meaning that an attempt will be made to determine it by inspection (unless vectorize is enabled, in which case `content_field` is ignored). In autodetect mode, the store not only determines whether embeddings are client- or server-side, but - most importantly - switches automatically between "nested" and "flat" representations of documents on DB (i.e. having the metadata key-value pairs grouped in a `metadata` field or spread at the documents' top-level). The former scheme is the native mode of the AstraDBVectorStore; the store resorts to the latter in case of vector collections populated with external means (such as a third-party data import tool) before applying an AstraDBVectorStore to them. Note that the following parameters cannot be used if this is True: `metric`, `setup_mode`, `metadata_indexing_include`, `metadata_indexing_exclude`, `collection_indexing_policy`, `collection_vector_service_options`. (default: `False`) |
| `ext_callers` | `list[tuple[str \| None, str \| None] \| str \| None] \| None` | No | one or more caller identities to identify Data API calls in the User-Agent header. This is a list of (name, version) pairs, or just strings if no version info is provided, which, if supplied, becomes the leading part of the User-Agent string in all API requests related to this component. (default: `None`) |
| `component_name` | `str` | No | the string identifying this specific component in the stack of usage info passed as the User-Agent string to the Data API. Defaults to "langchain_vectorstore", but can be overridden if this component actually serves as the building block for another component (such as when the vector store is used within a `GraphRetriever`). (default: `COMPONENT_NAME_VECTORSTORE`) |
| `api_options` | `APIOptions \| None` | No | an instance of `astrapy.utils.api_options.APIOptions` that can be supplied to customize the interaction with the Data API regarding serialization/deserialization, timeouts, custom headers and so on. The provided options are applied on top of settings already tailored to this library, and if specified will take precedence. Passing None (default) means no customization is requested. Refer to the astrapy documentation for details. (default: `None`) |
| `collection_rerank` | `CollectionRerankOptions \| RerankServiceOptions \| None` | No | providing reranking settings is necessary to run hybrid searches for similarity. This parameter can be an instance of the astrapy classes `CollectionRerankOptions` or `RerankServiceOptions`. (default: `None`) |
| `collection_reranking_api_key` | `str \| RerankingHeadersProvider \| None` | No | for usage of server-side reranking services within Astra DB. With this parameter one can supply an API Key that will be passed to Astra DB with each data request. This parameter can be either a string or a subclass of `astrapy.authentication.RerankingHeadersProvider`. This is useful when the service is configured for the collection, but no corresponding secret is stored within Astra's key management system. (default: `None`) |
| `collection_lexical` | `str \| dict[str, Any] \| CollectionLexicalOptions \| None` | No | configuring a lexical analyzer is necessary to run lexical and hybrid searches. This parameter can be a string or dict, which is then passed as-is for the "analyzer" field of a createCollection's `"$lexical.analyzer"` value, or a ready-made astrapy `CollectionLexicalOptions` object. (default: `None`) |
| `hybrid_search` | `HybridSearchMode \| None` | No | whether similarity searches should be run as Hybrid searches or not. Values are DEFAULT, ON or OFF. In case of DEFAULT, searches are performed as permitted by the collection configuration, with a preference for hybrid search. Forcing this setting to ON for a non-hybrid-enabled collection would result in a server error when running searches. (default: `None`) |
| `hybrid_limit_factor` | `float \| dict[str, float] \| HybridLimitFactorPrescription \| None` | No | subsearch "limit" specification for hybrid searches. If omitted, hybrid searches do not specify it and leave the Data API to use its defaults. If a floating-point positive number is provided: each subsearch participating in the hybrid search (i.e. both the vector-based ANN and the lexical-based) will be requested to fetch up to `int(k*hybrid_limit_factor)` items, where `k` is the desired result count from the whole search. If a `HybridLimitFactorPrescription` is provided (see the class docstring for details), separate factors are applied to the vector and the lexical subsearches. Alternatively, a simple dictionary with keys `"$lexical"` and `"$vector"` achieves the same effect. (default: `None`) |
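
To make the `hybrid_limit_factor` arithmetic concrete, the per-subsearch limits described in the table can be sketched as follows (a plain restatement of the rule, not library code):

```python
# Each hybrid subsearch fetches up to int(k * factor) items, where k is the
# overall desired result count. In the dict form, each factor applies to its
# own subsearch.
k = 4
hybrid_limit_factor = {"$lexical": 2.0, "$vector": 1.5}

subsearch_limits = {
    subsearch: int(k * factor)
    for subsearch, factor in hybrid_limit_factor.items()
}
# subsearch_limits == {"$lexical": 8, "$vector": 6}
```

A single float applies the same factor to both subsearches.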

## Extends

- `VectorStore`

## Constructors

```python
__init__(
    self,
    *,
    collection_name: str,
    embedding: Embeddings | None = None,
    token: str | TokenProvider | None = None,
    api_endpoint: str | None = None,
    environment: str | None = None,
    namespace: str | None = None,
    metric: str | None = None,
    batch_size: int | None = None,
    bulk_insert_batch_concurrency: int | None = None,
    bulk_insert_overwrite_concurrency: int | None = None,
    bulk_delete_concurrency: int | None = None,
    setup_mode: SetupMode | None = None,
    pre_delete_collection: bool = False,
    metadata_indexing_include: Iterable[str] | None = None,
    metadata_indexing_exclude: Iterable[str] | None = None,
    collection_indexing_policy: dict[str, Any] | None = None,
    collection_vector_service_options: VectorServiceOptions | None = None,
    collection_embedding_api_key: str | EmbeddingHeadersProvider | None = None,
    content_field: str | None = None,
    ignore_invalid_documents: bool = False,
    autodetect_collection: bool = False,
    ext_callers: list[tuple[str | None, str | None] | str | None] | None = None,
    component_name: str = COMPONENT_NAME_VECTORSTORE,
    api_options: APIOptions | None = None,
    collection_rerank: CollectionRerankOptions | RerankServiceOptions | None = None,
    collection_reranking_api_key: str | RerankingHeadersProvider | None = None,
    collection_lexical: str | dict[str, Any] | CollectionLexicalOptions | None = None,
    hybrid_search: HybridSearchMode | None = None,
    hybrid_limit_factor: float | dict[str, float] | HybridLimitFactorPrescription | None = None,
) -> None
```

| Name | Type |
|------|------|
| `collection_name` | `str` |
| `embedding` | `Embeddings \| None` |
| `token` | `str \| TokenProvider \| None` |
| `api_endpoint` | `str \| None` |
| `environment` | `str \| None` |
| `namespace` | `str \| None` |
| `metric` | `str \| None` |
| `batch_size` | `int \| None` |
| `bulk_insert_batch_concurrency` | `int \| None` |
| `bulk_insert_overwrite_concurrency` | `int \| None` |
| `bulk_delete_concurrency` | `int \| None` |
| `setup_mode` | `SetupMode \| None` |
| `pre_delete_collection` | `bool` |
| `metadata_indexing_include` | `Iterable[str] \| None` |
| `metadata_indexing_exclude` | `Iterable[str] \| None` |
| `collection_indexing_policy` | `dict[str, Any] \| None` |
| `collection_vector_service_options` | `VectorServiceOptions \| None` |
| `collection_embedding_api_key` | `str \| EmbeddingHeadersProvider \| None` |
| `content_field` | `str \| None` |
| `ignore_invalid_documents` | `bool` |
| `autodetect_collection` | `bool` |
| `ext_callers` | `list[tuple[str \| None, str \| None] \| str \| None] \| None` |
| `component_name` | `str` |
| `api_options` | `APIOptions \| None` |
| `collection_rerank` | `CollectionRerankOptions \| RerankServiceOptions \| None` |
| `collection_reranking_api_key` | `str \| RerankingHeadersProvider \| None` |
| `collection_lexical` | `str \| dict[str, Any] \| CollectionLexicalOptions \| None` |
| `hybrid_search` | `HybridSearchMode \| None` |
| `hybrid_limit_factor` | `float \| dict[str, float] \| HybridLimitFactorPrescription \| None` |


## Properties

- `collection_name`
- `token`
- `api_endpoint`
- `environment`
- `namespace`
- `indexing_policy`
- `autodetect_collection`
- `embedding_dimension`
- `embedding`
- `metric`
- `collection_embedding_api_key`
- `collection_vector_service_options`
- `document_codec`
- `batch_size`
- `bulk_insert_batch_concurrency`
- `bulk_insert_overwrite_concurrency`
- `bulk_delete_concurrency`
- `has_lexical`
- `has_hybrid`
- `hybrid_search`
- `hybrid_limit_factor`
- `collection_reranking_api_key`
- `astra_env`
- `embeddings`

## Methods

- [`filter_to_query()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/filter_to_query)
- [`copy()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/copy)
- [`clear()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/clear)
- [`aclear()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/aclear)
- [`delete_by_document_id()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/delete_by_document_id)
- [`adelete_by_document_id()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/adelete_by_document_id)
- [`delete()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/delete)
- [`adelete()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/adelete)
- [`delete_by_metadata_filter()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/delete_by_metadata_filter)
- [`adelete_by_metadata_filter()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/adelete_by_metadata_filter)
- [`delete_collection()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/delete_collection)
- [`adelete_collection()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/adelete_collection)
- [`add_texts()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/add_texts)
- [`aadd_texts()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/aadd_texts)
- [`update_metadata()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/update_metadata)
- [`aupdate_metadata()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/aupdate_metadata)
- [`full_decode_astra_db_found_document()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/full_decode_astra_db_found_document)
- [`full_decode_astra_db_reranked_result()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/full_decode_astra_db_reranked_result)
- [`run_query_raw()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/run_query_raw)
- [`run_query()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/run_query)
- [`arun_query_raw()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/arun_query_raw)
- [`arun_query()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/arun_query)
- [`metadata_search()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/metadata_search)
- [`ametadata_search()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/ametadata_search)
- [`get_by_document_id()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/get_by_document_id)
- [`aget_by_document_id()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/aget_by_document_id)
- [`get_by_ids()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/get_by_ids)
- [`get_by_document_ids()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/get_by_document_ids)
- [`aget_by_ids()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/aget_by_ids)
- [`aget_by_document_ids()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/aget_by_document_ids)
- [`similarity_search()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/similarity_search)
- [`similarity_search_with_score()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/similarity_search_with_score)
- [`similarity_search_with_score_id()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/similarity_search_with_score_id)
- [`similarity_search_by_vector()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/similarity_search_by_vector)
- [`similarity_search_with_score_by_vector()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/similarity_search_with_score_by_vector)
- [`similarity_search_with_score_id_by_vector()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/similarity_search_with_score_id_by_vector)
- [`asimilarity_search()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/asimilarity_search)
- [`asimilarity_search_with_score()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/asimilarity_search_with_score)
- [`asimilarity_search_with_score_id()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/asimilarity_search_with_score_id)
- [`asimilarity_search_by_vector()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/asimilarity_search_by_vector)
- [`asimilarity_search_with_score_by_vector()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/asimilarity_search_with_score_by_vector)
- [`asimilarity_search_with_score_id_by_vector()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/asimilarity_search_with_score_id_by_vector)
- [`similarity_search_with_embedding_by_vector()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/similarity_search_with_embedding_by_vector)
- [`asimilarity_search_with_embedding_by_vector()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/asimilarity_search_with_embedding_by_vector)
- [`similarity_search_with_embedding()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/similarity_search_with_embedding)
- [`asimilarity_search_with_embedding()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/asimilarity_search_with_embedding)
- [`max_marginal_relevance_search_by_vector()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/max_marginal_relevance_search_by_vector)
- [`amax_marginal_relevance_search_by_vector()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/amax_marginal_relevance_search_by_vector)
- [`max_marginal_relevance_search()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/max_marginal_relevance_search)
- [`amax_marginal_relevance_search()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/amax_marginal_relevance_search)
- [`from_texts()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/from_texts)
- [`afrom_texts()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/afrom_texts)
- [`from_documents()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/from_documents)
- [`afrom_documents()`](https://reference.langchain.com/python/langchain-astradb/vectorstores/AstraDBVectorStore/afrom_documents)

---

[View source on GitHub](https://github.com/langchain-ai/langchain-datastax/blob/f4a6aef74d38ee804b0d407e19359c6e45989068/libs/astradb/langchain_astradb/vectorstores.py#L398)