Construct ElasticsearchStore wrapper from documents.
from_documents(
    cls,
    documents: List[Document],
    embedding: Optional[Embeddings] = None,
    bulk_kwargs: Optional[Dict] = None,
    **kwargs: Any,
) -> ElasticsearchStore
.. code-block:: python

    from langchain_community.vectorstores import ElasticsearchStore
    from langchain_community.embeddings.openai import OpenAIEmbeddings

    db = ElasticsearchStore.from_documents(
        texts,
        embeddings,
        index_name="langchain-demo",
        es_url="http://localhost:9200",
    )
| Name | Type | Description |
|---|---|---|
| texts* | unknown | List of texts to add to the Elasticsearch index. |
| embedding | Optional[Embeddings] | Default: None. Embedding function to use to embed the texts. Do not provide if using a strategy that doesn't require inference. |
| metadatas* | unknown | Optional list of metadatas associated with the texts. |
| index_name* | unknown | Name of the Elasticsearch index to create. |
| es_url* | unknown | URL of the Elasticsearch instance to connect to. |
| cloud_id* | unknown | Cloud ID of the Elasticsearch instance to connect to. |
| es_user* | unknown | Username to use when connecting to Elasticsearch. |
| es_password* | unknown | Password to use when connecting to Elasticsearch. |
| es_api_key* | unknown | API key to use when connecting to Elasticsearch. |
| es_connection* | unknown | Optional pre-existing Elasticsearch connection. |
| vector_query_field* | unknown | Optional. Name of the field in which to store the embedding vectors. |
| query_field* | unknown | Optional. Name of the field in which to store the texts. |
| bulk_kwargs | Optional[Dict] | Default: None. Optional additional arguments to pass to the Elasticsearch bulk helper. |
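The ``bulk_kwargs`` dictionary is forwarded to the Elasticsearch bulk helper when documents are indexed. A minimal sketch of tuning indexing behaviour this way, assuming the standard ``elasticsearch.helpers.bulk`` options (``chunk_size``, ``max_chunk_bytes``, ``raise_on_error``); the commented-out call mirrors the example above and requires a running Elasticsearch instance:

```python
# Options forwarded to elasticsearch.helpers.bulk during indexing.
bulk_kwargs = {
    "chunk_size": 200,                    # documents sent per bulk request
    "max_chunk_bytes": 10 * 1024 * 1024,  # cap each request at ~10 MB
    "raise_on_error": False,              # report per-document errors instead of raising
}

# db = ElasticsearchStore.from_documents(
#     documents,
#     embeddings,
#     index_name="langchain-demo",
#     es_url="http://localhost:9200",
#     bulk_kwargs=bulk_kwargs,
# )
```

Smaller chunk sizes trade indexing throughput for lower per-request memory use on both client and server.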