MistralAIEmbeddings
MistralAI embedding model integration.
Setup:
Install langchain_mistralai and set the environment variable
MISTRAL_API_KEY.
pip install -U langchain_mistralai
export MISTRAL_API_KEY="your-api-key"
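Optionally, verify that the key is visible to your Python process before constructing the client. This is a plain sanity check, not part of the MistralAIEmbeddings API, and it assumes the key was exported as shown above:
import os

if not os.environ.get("MISTRAL_API_KEY"):
    raise RuntimeError("MISTRAL_API_KEY is not set; export it as shown above.")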
Key init args — embedding params:
model:
Name of MistralAI model to use.
Key init args — client params:
api_key:
The API key for the MistralAI API. If not provided, it will be read from the
environment variable MISTRAL_API_KEY.
max_retries:
The number of times to retry a request if it fails.
timeout:
The number of seconds to wait for a response before timing out.
wait_time:
The number of seconds to wait before retrying a request after a 429 (rate limit) error.
max_concurrent_requests:
The maximum number of concurrent requests to make to the Mistral API.
See the full list of supported init args and their descriptions in the params section; a sketch that tunes the client params above follows the Instantiate example below.
Instantiate:
from langchain_mistralai import MistralAIEmbeddings
embed = MistralAIEmbeddings(
model="mistral-embed",
# api_key="...",
# other params...
)
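The client params listed above can also be set at construction time. A minimal sketch with illustrative values (not recommended defaults):
embed = MistralAIEmbeddings(
    model="mistral-embed",
    # api_key="...",              # read from MISTRAL_API_KEY if omitted
    max_retries=3,                # retry a failed request up to 3 times
    timeout=60,                   # give up on a response after 60 seconds
    wait_time=30,                 # wait 30 seconds before retrying on a 429
    max_concurrent_requests=8,    # cap concurrent calls to the Mistral API
)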
Embed single text:
input_text = "The meaning of life is 42"
vector = embed.embed_query(input_text)
print(vector[:3])
[-0.024603435769677162, -0.007543657906353474, 0.0039630369283258915]
Embed multiple texts:
input_texts = ["Document 1...", "Document 2..."]
vectors = embed.embed_documents(input_texts)
print(len(vectors))
# The first 3 coordinates for the first vector
print(vectors[0][:3])
2
[-0.024603435769677162, -0.007543657906353474, 0.0039630369283258915]
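The returned vectors can be compared directly, for example to rank documents against a query. A minimal sketch using a hand-rolled cosine_similarity helper (hypothetical, not part of this package):
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

query_vector = embed.embed_query("What is the meaning of life?")
doc_vectors = embed.embed_documents(["Document 1...", "Document 2..."])
scores = [cosine_similarity(query_vector, v) for v in doc_vectors]
print(max(range(len(scores)), key=scores.__getitem__))  # index of the closest document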
Async:
vector = await embed.aembed_query(input_text)
print(vector[:3])
# multiple:
# await embed.aembed_documents(input_texts)
[-0.009100092574954033, 0.005071679595857859, -0.0029193938244134188]
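The async methods can also be awaited concurrently, for example to embed a query and a batch of documents in the same event loop. A minimal sketch using asyncio.gather and a hypothetical embed_all helper:
import asyncio

async def embed_all(texts: list[str]):
    # Await the query and document embeddings concurrently.
    return await asyncio.gather(
        embed.aembed_query("The meaning of life is 42"),
        embed.aembed_documents(texts),
    )

query_vector, doc_vectors = asyncio.run(embed_all(["Document 1...", "Document 2..."]))
print(len(doc_vectors))  # 2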