Typesense(
    self,
    typesense_client: Client,
    embedding: Embeddings,
    *,
    typesense_collection_name: Optional[str] = None,
    text_key: str = "text",
)

| Name | Type |
|---|---|
| typesense_client | Client |
| embedding | Embeddings |
| typesense_collection_name | Optional[str] |
| text_key | str |
Typesense vector store.
To use, you should have the typesense python package installed.
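The client library can be installed from PyPI (assuming pip is available):

```shell
pip install typesense
```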
Example:
.. code-block:: python
    from langchain_community.embeddings import OpenAIEmbeddings
    from langchain_community.vectorstores import Typesense
    import typesense

    node = {
        "host": "localhost",  # For Typesense Cloud use xxx.a1.typesense.net
        "port": "8108",       # For Typesense Cloud use 443
        "protocol": "http",   # For Typesense Cloud use https
    }
    typesense_client = typesense.Client(
        {
            "nodes": [node],
            "api_key": "<API_KEY>",
            "connection_timeout_seconds": 2,
        }
    )
    typesense_collection_name = "langchain-memory"

    embedding = OpenAIEmbeddings()
    vectorstore = Typesense(
        typesense_client=typesense_client,
        embedding=embedding,
        typesense_collection_name=typesense_collection_name,
        text_key="text",
    )
add_texts: Run more texts through the embedding and add to the vectorstore.
similarity_search_with_score: Return typesense documents most similar to query, along with scores.
similarity_search: Return typesense documents most similar to query.
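A short usage sketch of these methods, assuming a `vectorstore` constructed as in the example above and a reachable Typesense server; the texts and query strings are placeholders:

```python
# Assumes `vectorstore` is a Typesense instance backed by a running server.
vectorstore.add_texts(["Typesense is a typo-tolerant search engine."])

# Each result is a (Document, score) pair.
docs_and_scores = vectorstore.similarity_search_with_score("search engine", k=1)

# Documents only.
docs = vectorstore.similarity_search("search engine", k=1)
```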
from_client_params: Initialize Typesense directly from client parameters.
from_texts: Construct Typesense wrapper from raw texts.
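A sketch of the two constructors, with placeholder connection details: `from_client_params` builds the `typesense.Client` for you from host/port/key arguments, while `from_texts` also embeds and indexes the given texts in one call.

```python
from langchain_community.embeddings import OpenAIEmbeddings
from langchain_community.vectorstores import Typesense

embedding = OpenAIEmbeddings()

# Build the typesense.Client internally from connection parameters.
vectorstore = Typesense.from_client_params(
    embedding,
    host="localhost",  # For Typesense Cloud use xxx.a1.typesense.net
    port="8108",
    protocol="http",
    typesense_api_key="<API_KEY>",
)

# Embed and index raw texts in one call; the client is built from
# the same configuration dict that typesense.Client accepts.
vectorstore = Typesense.from_texts(
    ["hello world"],
    embedding,
    typesense_client_params={
        "nodes": [{"host": "localhost", "port": "8108", "protocol": "http"}],
        "api_key": "<API_KEY>",
        "connection_timeout_seconds": 2,
    },
    typesense_collection_name="langchain-memory",
)
```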