Leverage the Itrex runtime to unlock the performance of compressed NLP models.
Please ensure that you have installed ``intel-extension-for-transformers``.
Input:
    model_name: str = Model name.
    max_seq_len: int = The maximum sequence length for tokenization. (default 512)
    pooling_strategy: str = "mean" or "cls", pooling strategy for the final layer. (default "mean")
    query_instruction: Optional[str] = An instruction to add to the query before embedding. (default None)
    document_instruction: Optional[str] = An instruction to add to each document before embedding. (default None)
    padding: Optional[bool] = Whether to add padding during tokenization or not. (default True)
    model_kwargs: Optional[Dict] = Parameters to add to the model during initialization. (default {})
    encode_kwargs: Optional[Dict] = Parameters to add during the embedding forward pass. (default {})
    onnx_file_name: Optional[str] = File name of the ONNX-optimized model exported by Itrex. (default "int8-model.onnx")
Example:
    .. code-block:: python

        from langchain_community.embeddings import QuantizedBgeEmbeddings

        model_name = "Intel/bge-small-en-v1.5-sts-int8-static-inc"
        encode_kwargs = {'normalize_embeddings': True}
        hf = QuantizedBgeEmbeddings(
            model_name,
            encode_kwargs=encode_kwargs,
            query_instruction="Represent this sentence for searching relevant passages: ",
        )
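The embedder is then used through the standard LangChain ``Embeddings``
interface; a minimal sketch (the sample texts below are illustrative):

.. code-block:: python

    # Embed a single query string; query_instruction is prepended
    # to the text automatically before encoding.
    query_vector = hf.embed_query("How does int8 quantization speed up inference?")

    # Embed a batch of documents; document_instruction (if set) is
    # prepended to each one. Returns one vector per document.
    doc_vectors = hf.embed_documents(
        [
            "Quantization stores model weights in lower precision.",
            "BGE models produce dense embeddings for retrieval.",
        ]
    )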