# IpexLLMBgeEmbeddings

> **Class** in `langchain_community`

📖 [View in docs](https://reference.langchain.com/python/langchain-community/embeddings/ipex_llm/IpexLLMBgeEmbeddings)

Wrapper around the BGE embedding model
with IPEX-LLM optimizations on Intel CPUs and GPUs.

To use, you should have the `ipex-llm`
and `sentence_transformers` packages installed. Refer to
[the IPEX-LLM CPU integration guide](https://python.langchain.com/v0.1/docs/integrations/text_embedding/ipex_llm/)
for installation on Intel CPU.

## Signature

```python
IpexLLMBgeEmbeddings(
    self,
    **kwargs: Any = {},
)
```

## Description

**Example on Intel CPU:**

```python
from langchain_community.embeddings import IpexLLMBgeEmbeddings

embedding_model = IpexLLMBgeEmbeddings(
    model_name="BAAI/bge-large-en-v1.5",
    model_kwargs={},
    encode_kwargs={"normalize_embeddings": True},
)
```

Refer to
[the IPEX-LLM GPU integration guide](https://python.langchain.com/v0.1/docs/integrations/text_embedding/ipex_llm_gpu/)
for installation on Intel GPU.

**Example on Intel GPU:**

```python
from langchain_community.embeddings import IpexLLMBgeEmbeddings

embedding_model = IpexLLMBgeEmbeddings(
    model_name="BAAI/bge-large-en-v1.5",
    model_kwargs={"device": "xpu"},
    encode_kwargs={"normalize_embeddings": True},
)
```

## Extends

- `BaseModel`
- `Embeddings`

## Constructors

```python
__init__(
    self,
    **kwargs: Any = {},
)
```


## Properties

- `client`
- `model_name`
- `cache_folder`
- `model_kwargs`
- `encode_kwargs`
- `query_instruction`
- `embed_instruction`
- `model_config`

## Methods

- [`embed_documents()`](https://reference.langchain.com/python/langchain-community/embeddings/ipex_llm/IpexLLMBgeEmbeddings/embed_documents)
- [`embed_query()`](https://reference.langchain.com/python/langchain-community/embeddings/ipex_llm/IpexLLMBgeEmbeddings/embed_query)

---

[View source on GitHub](https://github.com/langchain-ai/langchain-community/blob/4b280287bd55b99b44db2dd849f02d66c89534d5/libs/community/langchain_community/embeddings/ipex_llm.py#L16)