Construct Pinecone wrapper from raw documents.
from_texts(
    cls,
    texts: List[str],
    embedding: Embeddings,
    metadatas: Optional[List[dict]] = None,
    ids: Optional[List[str]] = None,
    batch_size: int = 32,
    text_key: str = 'text',
    namespace: Optional[str] = None,
    index_name: Optional[str] = None,
    upsert_kwargs: Optional[dict] = None,
    pool_threads: int = 4,
    embeddings_chunk_size: int = 1000,
    async_req: bool = True,
    *,
    id_prefix: Optional[str] = None,
    **kwargs: Any,
) -> PineconeVectorStore

This is a user-friendly interface that:
1. Embeds the texts.
2. Adds the resulting vectors to the provided Pinecone index.
This is intended to be a quick way to get started.
pool_threads controls how many threads are used for the upsert requests, so raising it can speed up large upserts.
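How batch_size and embeddings_chunk_size interact can be sketched in plain Python. The helper below is illustrative only, not library code: texts are embedded in chunks of embeddings_chunk_size, and each chunk's vectors are then upserted in groups of batch_size (the upsert requests are what pool_threads parallelizes).

```python
# Illustrative sketch (not part of langchain_pinecone): shows how many
# upsert batches each embedding chunk produces for a given text count.
from typing import List


def plan_batches(
    n_texts: int, embeddings_chunk_size: int = 1000, batch_size: int = 32
) -> List[List[int]]:
    """Return, per embedding chunk, the sizes of its upsert batches."""
    plan = []
    for chunk_start in range(0, n_texts, embeddings_chunk_size):
        chunk_len = min(embeddings_chunk_size, n_texts - chunk_start)
        batches = [
            min(batch_size, chunk_len - b) for b in range(0, chunk_len, batch_size)
        ]
        plan.append(batches)
    return plan


# 2500 texts -> 3 embedding chunks (1000, 1000, 500); the last chunk
# is upserted as 15 batches of 32 plus one final batch of 20.
print(plan_batches(2500))
```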
Setup: set the PINECONE_API_KEY environment variable to your Pinecone API key.
Example:
    from langchain_pinecone import PineconeVectorStore, PineconeEmbeddings

    embeddings = PineconeEmbeddings(model="multilingual-e5-large")
    index_name = "my-index"
    namespace = "my-namespace"
    texts = ["hello", "world"]

    vectorstore = PineconeVectorStore.from_texts(
        texts,
        index_name=index_name,
        embedding=embeddings,
        namespace=namespace,
    )
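Per-text metadata and explicit IDs can be supplied alongside the texts: metadatas is one dict per text and ids is one string per text, aligned by position. The snippet below builds such lists in plain Python (the field names and id scheme are only examples); the commented call shows how they would be passed to from_texts, which requires a configured index and the PINECONE_API_KEY environment variable to actually run.

```python
# Build aligned metadata and ID lists for from_texts. The "source" and
# "line" fields and the "doc-" id scheme are illustrative examples.
texts = ["LangChain is a framework.", "Pinecone is a vector database."]
metadatas = [{"source": "notes", "line": i} for i, _ in enumerate(texts)]
ids = [f"doc-{i}" for i in range(len(texts))]

# These lists would be passed through like so (needs a live index and
# PINECONE_API_KEY, so it is shown here only as a sketch):
# vectorstore = PineconeVectorStore.from_texts(
#     texts,
#     embedding=embeddings,
#     index_name="my-index",
#     metadatas=metadatas,
#     ids=ids,
# )
print(ids)
```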