# InMemoryVectorStore

> **Class** in `langchain_aws`

📖 [View in docs](https://reference.langchain.com/python/langchain-aws/vectorstores/inmemorydb/base/InMemoryVectorStore)

InMemoryVectorStore vector database, backed by AWS MemoryDB.

To use, you should have the `redis` Python package installed
for AWS MemoryDB:

    ```bash
    pip install redis
    ```

Once running, you can connect to the MemoryDB server with the following URL schemas:

- `redis://<host>:<port>` (simple connection)
- `redis://<username>:<password>@<host>:<port>` (connection with authentication)
- `rediss://<host>:<port>` (connection with SSL)
- `rediss://<username>:<password>@<host>:<port>` (connection with SSL and authentication)
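
As a quick illustration, the four schemas can be assembled with a small helper. Note that `memorydb_url` below is a hypothetical convenience function for this sketch, not part of `langchain_aws`:

```python
from urllib.parse import quote, urlparse

def memorydb_url(host, port=6379, username=None, password=None, ssl=False):
    """Assemble a connection URL in one of the four schemas listed above.

    Illustrative helper only; not part of langchain_aws.
    """
    scheme = "rediss" if ssl else "redis"  # rediss:// enables SSL
    auth = ""
    if username is not None and password is not None:
        # Percent-encode credentials so special characters survive URL parsing
        auth = f"{quote(username, safe='')}:{quote(password, safe='')}@"
    return f"{scheme}://{auth}{host}:{port}"

url = memorydb_url("cluster_endpoint", username="admin", password="p@ss", ssl=True)
parsed = urlparse(url)  # standard URL parsing applies to all four schemas
```

Percent-encoding matters when passwords contain characters such as `@` or `:`, which would otherwise be misread as URL delimiters.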

Examples:

The following examples show various ways to use the InMemoryVectorStore with
LangChain.

## Signature

```python
InMemoryVectorStore(
    self,
    redis_url: str,
    index_name: str,
    embedding: Embeddings,
    index_schema: Optional[Union[Dict[str, ListOfDict], str, os.PathLike]] = None,
    vector_schema: Optional[Dict[str, Union[str, int]]] = None,
    relevance_score_fn: Optional[Callable[[float], float]] = None,
    key_prefix: Optional[str] = None,
    **kwargs: Any,
)
```

## Description

**For all of the following examples, assume we have the following import:**

```python
from langchain_aws.vectorstores import InMemoryVectorStore
```

Initialize, create index, and load Documents:
    ```python
    rds = InMemoryVectorStore.from_documents(
        documents, # a list of Document objects (from loaders or created manually)
        embeddings, # an Embeddings object
        redis_url="redis://cluster_endpoint:6379",
    )
    ```

Initialize, create index, and load texts with metadata:
    ```python
    rds = InMemoryVectorStore.from_texts(
        texts, # a list of strings
        embeddings, # an Embeddings object
        metadatas=metadata, # a list of metadata dicts
        redis_url="redis://cluster_endpoint:6379",
    )
    ```

Initialize, create index, and load texts with metadata, returning the keys:

    ```python
    rds, keys = InMemoryVectorStore.from_texts_return_keys(
        texts, # a list of strings
        embeddings, # an Embeddings object
        metadatas=metadata, # a list of metadata dicts
        redis_url="redis://cluster_endpoint:6379",
    )
    ```

For use cases where the index needs to stay alive, you can initialize
with an index name so that it's easier to reference later:

    ```python
    rds = InMemoryVectorStore.from_texts(
        texts, # a list of strings
        embeddings, # an Embeddings object
        metadatas=metadata, # a list of metadata dicts
        index_name="my-index",
        redis_url="redis://cluster_endpoint:6379",
    )
    ```

Initialize and connect to an existing index (from above):

    ```python
    # must pass in schema and key_prefix from another index
    existing_rds = InMemoryVectorStore.from_existing_index(
        embeddings, # an Embeddings object
        index_name="my-index",
        schema=rds.schema, # schema dumped from another index
        key_prefix=rds.key_prefix, # key prefix from another index
        redis_url="redis://username:password@cluster_endpoint:6379",
    )
    ```

Advanced examples:

A custom vector schema can be supplied to change the way that
MemoryDB creates the underlying vector index. This is useful
for production use cases where you want to optimize the
vector schema for your workload, e.g. using HNSW instead of
FLAT (brute-force KNN), which is the default:

    ```python
    vector_schema = {
        "algorithm": "HNSW"
    }

    rds = InMemoryVectorStore.from_texts(
        texts, # a list of strings
        embeddings, # an Embeddings object
        metadatas=metadata, # a list of metadata dicts
        vector_schema=vector_schema,
        redis_url="redis://cluster_endpoint:6379",
    )
    ```

A custom index schema can be supplied to change the way that the
metadata is indexed. This is useful if you would like to use the
hybrid querying (filtering) capability of MemoryDB.

By default, this implementation will automatically generate the index
schema according to the following rules:

- All strings are indexed as text fields
- All numbers are indexed as numeric fields
- All lists of strings are indexed as tag fields (joined by
  `langchain_aws.vectorstores.inmemorydb.constants.INMEMORYDB_TAG_SEPARATOR`)
- All `None` values are not indexed but are still stored in MemoryDB; they are
  not retrievable through this interface, though the raw MemoryDB client can
  be used to retrieve them
- All other types are not indexed
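
The rules above can be sketched in plain Python. The `infer_field_type` function below is an illustration of the documented behavior, not the library's actual implementation (in particular, how booleans are treated is an assumption here):

```python
def infer_field_type(value):
    """Map a metadata value to its default index field type (sketch only)."""
    if isinstance(value, str):
        return "text"
    if isinstance(value, bool):
        return None  # assumption: bools fall under "other types"
    if isinstance(value, (int, float)):
        return "numeric"
    if isinstance(value, list) and all(isinstance(v, str) for v in value):
        return "tag"  # stored joined by the tag separator
    return None  # None values and other types: stored but not indexed

metadata = {"user": "john", "age": 42, "skills": ["python", "sql"], "note": None}
schema = {}
for name, value in metadata.items():
    field_type = infer_field_type(value)
    if field_type is not None:
        schema.setdefault(field_type, []).append({"name": name})
```

The resulting `schema` dict mirrors the `Dict[str, ListOfDict]` shape accepted by the `index_schema` parameter.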

To override these rules, you can pass in a custom index schema like the following:

    ```yaml
    tag:
        - name: credit_score
    text:
        - name: user
        - name: job
    ```

Typically, the `credit_score` field would be indexed as a text field since it's a
string. However, we can override this behavior by specifying the field type, as
shown in the YAML config above (which can also be supplied as a dictionary) and
the code below.

    ```python
    rds = InMemoryVectorStore.from_texts(
        texts, # a list of strings
        embeddings, # an Embeddings object
        metadatas=metadata, # a list of metadata dicts
        index_schema="path/to/index_schema.yaml", # can also be a dictionary
        redis_url="redis://cluster_endpoint:6379",
    )
    ```

When connecting to an existing index where a custom schema has been applied, it's
important to pass in the same schema to the `from_existing_index` method.
Otherwise, the schema for newly added samples will be incorrect and metadata
will not be returned.
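
To see why a mismatched schema loses metadata, here is a toy illustration (plain Python, not library code): fields absent from the declared schema are simply never mapped back onto returned documents.

```python
# Simplified model of the failure mode described above: a hash stored in
# MemoryDB, and the fields declared by the schema passed to
# from_existing_index. Fields missing from the schema are not returned.
stored_hash = {"user": "alice", "job": "engineer", "credit_score": "high"}

correct_fields = {"user", "job", "credit_score"}
wrong_fields = {"user", "job"}  # stale schema from before credit_score was added

def returned_metadata(hash_fields, declared_fields):
    # Only fields the index schema knows about make it into Document.metadata
    return {k: v for k, v in hash_fields.items() if k in declared_fields}

ok = returned_metadata(stored_hash, correct_fields)     # all metadata returned
missing = returned_metadata(stored_hash, wrong_fields)  # credit_score dropped
```

Writing the schema out with `write_schema()` and passing the same file back when reconnecting avoids this mismatch.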

## Extends

- `VectorStore`

## Constructors

```python
__init__(
    self,
    redis_url: str,
    index_name: str,
    embedding: Embeddings,
    index_schema: Optional[Union[Dict[str, ListOfDict], str, os.PathLike]] = None,
    vector_schema: Optional[Dict[str, Union[str, int]]] = None,
    relevance_score_fn: Optional[Callable[[float], float]] = None,
    key_prefix: Optional[str] = None,
    **kwargs: Any,
)
```

| Name | Type |
|------|------|
| `redis_url` | `str` |
| `index_name` | `str` |
| `embedding` | `Embeddings` |
| `index_schema` | `Optional[Union[Dict[str, ListOfDict], str, os.PathLike]]` |
| `vector_schema` | `Optional[Dict[str, Union[str, int]]]` |
| `relevance_score_fn` | `Optional[Callable[[float], float]]` |
| `key_prefix` | `Optional[str]` |


## Properties

- `DEFAULT_VECTOR_SCHEMA`
- `index_name`
- `client`
- `relevance_score_fn`
- `key_prefix`
- `embeddings`
- `schema`

## Methods

- [`from_texts_return_keys()`](https://reference.langchain.com/python/langchain-aws/vectorstores/inmemorydb/base/InMemoryVectorStore/from_texts_return_keys)
- [`from_texts()`](https://reference.langchain.com/python/langchain-aws/vectorstores/inmemorydb/base/InMemoryVectorStore/from_texts)
- [`from_existing_index()`](https://reference.langchain.com/python/langchain-aws/vectorstores/inmemorydb/base/InMemoryVectorStore/from_existing_index)
- [`write_schema()`](https://reference.langchain.com/python/langchain-aws/vectorstores/inmemorydb/base/InMemoryVectorStore/write_schema)
- [`delete()`](https://reference.langchain.com/python/langchain-aws/vectorstores/inmemorydb/base/InMemoryVectorStore/delete)
- [`drop_index()`](https://reference.langchain.com/python/langchain-aws/vectorstores/inmemorydb/base/InMemoryVectorStore/drop_index)
- [`add_texts()`](https://reference.langchain.com/python/langchain-aws/vectorstores/inmemorydb/base/InMemoryVectorStore/add_texts)
- [`as_retriever()`](https://reference.langchain.com/python/langchain-aws/vectorstores/inmemorydb/base/InMemoryVectorStore/as_retriever)
- [`similarity_search_with_score()`](https://reference.langchain.com/python/langchain-aws/vectorstores/inmemorydb/base/InMemoryVectorStore/similarity_search_with_score)
- [`similarity_search()`](https://reference.langchain.com/python/langchain-aws/vectorstores/inmemorydb/base/InMemoryVectorStore/similarity_search)
- [`similarity_search_by_vector()`](https://reference.langchain.com/python/langchain-aws/vectorstores/inmemorydb/base/InMemoryVectorStore/similarity_search_by_vector)
- [`max_marginal_relevance_search()`](https://reference.langchain.com/python/langchain-aws/vectorstores/inmemorydb/base/InMemoryVectorStore/max_marginal_relevance_search)

---

[View source on GitHub](https://github.com/langchain-ai/langchain-aws/blob/2f5e41cef9442ec840c0d8401e34dea74b061ba0/libs/aws/langchain_aws/vectorstores/inmemorydb/base.py#L70)