Quantized bi-encoder embedding models.
Please ensure that you have installed optimum-intel and ipex.
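Both dependencies are typically installed from PyPI (package names assumed to be optimum-intel and intel-extension-for-pytorch):

    pip install optimum-intel intel-extension-for-pytorch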
Input:
    model_name: str = Model name.
    max_seq_len: int = The maximum sequence length for tokenization. (default 512)
    pooling_strategy: str = "mean" or "cls", pooling strategy for the final layer. (default "mean")
    query_instruction: Optional[str] = An instruction to add to the query before embedding. (default None)
    document_instruction: Optional[str] = An instruction to add to each document before embedding. (default None)
    padding: Optional[bool] = Whether to pad inputs during tokenization. (default True)
    model_kwargs: Optional[Dict] = Parameters to pass to the model during initialization. (default {})
    encode_kwargs: Optional[Dict] = Parameters to pass during the embedding forward pass. (default {})
Example:
from langchain_community.embeddings import QuantizedBiEncoderEmbeddings
model_name = "Intel/bge-small-en-v1.5-rag-int8-static" encode_kwargs = {'normalize_embeddings': True} hf = QuantizedBiEncoderEmbeddings( model_name, encode_kwargs=encode_kwargs, query_instruction="Represent this sentence for searching relevant passages: " )