# OllamaEmbeddings

> **Class** in `langchain_ollama`

📖 [View in docs](https://reference.langchain.com/python/langchain-ollama/embeddings/OllamaEmbeddings)

Ollama embedding model integration.

## Signature

```python
OllamaEmbeddings()
```

## Description

**Set up a local Ollama instance:**

[Install Ollama](https://github.com/ollama/ollama) on the machine that will serve the models.

You will need to choose a model to serve.

You can view a list of available models via [the model library](https://ollama.com/library).

To fetch a model from the Ollama model library, use `ollama pull <name-of-model>`.

For example, to pull the llama3 model:

```bash
ollama pull llama3
```

This downloads the default tagged version of the model.
Typically, the default tag points to the latest, smallest parameter-size variant of the model.

* On Mac, the models will be downloaded to `~/.ollama/models`
* On Linux (or WSL), the models will be stored at `/usr/share/ollama/.ollama/models`

You can pull a specific version of a model by including its tag,
for example `ollama pull vicuna:13b-v1.5-16k-q4_0`.

To view pulled models:

```bash
ollama list
```

To start serving:

```bash
ollama serve
```

To see more commands, view the Ollama documentation or run:

```bash
ollama help
```

Install the `langchain-ollama` integration package:

```bash
pip install -U langchain-ollama
```

**Key init args:**

- `model` (`str`): Name of the Ollama model to use.
- `base_url` (`str | None`): Base URL the model is hosted under.

See the full list of supported init args and their descriptions in the Properties section below.

**Instantiate:**

```python
from langchain_ollama import OllamaEmbeddings

embed = OllamaEmbeddings(model="llama3")
```
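
If Ollama is served somewhere other than the local default, you can point the client at it via `base_url`. A minimal sketch; the URL shown is Ollama's default host and port, so adjust it for your deployment:

```python
from langchain_ollama import OllamaEmbeddings

# base_url is only needed when Ollama is not reachable at its default address.
embed = OllamaEmbeddings(
    model="llama3",
    base_url="http://localhost:11434",  # assumption: Ollama's default port
)
```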

**Embed single text:**

```python
input_text = "The meaning of life is 42"
vector = embed.embed_query(input_text)
print(vector[:3])
```

```python
[-0.024603435769677162, -0.007543657906353474, 0.0039630369283258915]
```

**Embed multiple texts:**

```python
input_texts = ["Document 1...", "Document 2..."]
vectors = embed.embed_documents(input_texts)
print(len(vectors))
# The first 3 coordinates for the first vector
print(vectors[0][:3])
```

```python
2
[-0.024603435769677162, -0.007543657906353474, 0.0039630369283258915]
```

**Async:**

```python
vector = await embed.aembed_query(input_text)
print(vector[:3])

# multiple:
# await embed.aembed_documents(input_texts)
```

```python
[-0.009100092574954033, 0.005071679595857859, -0.0029193938244134188]
```
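
The `await` calls above assume an already-running event loop (for example, a notebook). In a plain script you can drive the async methods yourself; a minimal sketch using `asyncio`:

```python
import asyncio

from langchain_ollama import OllamaEmbeddings


async def main() -> None:
    embed = OllamaEmbeddings(model="llama3")
    vectors = await embed.aembed_documents(["Document 1...", "Document 2..."])
    print(len(vectors), len(vectors[0]))  # number of vectors, embedding dimension


asyncio.run(main())
```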

## Extends

- `BaseModel`
- `Embeddings`
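
Because `OllamaEmbeddings` implements the `Embeddings` interface, it can be passed anywhere LangChain expects an embedding model. A hedged sketch using the in-memory vector store from `langchain-core` (assuming that package is installed alongside `langchain-ollama`):

```python
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_ollama import OllamaEmbeddings

embed = OllamaEmbeddings(model="llama3")

# Any component typed against Embeddings accepts OllamaEmbeddings.
store = InMemoryVectorStore(embedding=embed)
store.add_texts(["Document 1...", "Document 2..."])
print(store.similarity_search("Document 1", k=1))
```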

## Properties

- `model`
- `dimensions`
- `validate_model_on_init`
- `base_url`
- `client_kwargs`
- `async_client_kwargs`
- `sync_client_kwargs`
- `mirostat`
- `mirostat_eta`
- `mirostat_tau`
- `num_ctx`
- `num_gpu`
- `keep_alive`
- `num_thread`
- `repeat_last_n`
- `repeat_penalty`
- `temperature`
- `stop`
- `tfs_z`
- `top_k`
- `top_p`
- `model_config`
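
Most of these fields map onto Ollama runtime options and can be set at init time. A minimal sketch with illustrative values (the specific numbers are assumptions, not recommendations):

```python
from langchain_ollama import OllamaEmbeddings

embed = OllamaEmbeddings(
    model="llama3",
    num_ctx=2048,     # context window size passed through to Ollama
    num_thread=4,     # CPU threads Ollama may use
    keep_alive="5m",  # how long the model stays loaded after a request
)
```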

## Methods

- [`embed_documents()`](https://reference.langchain.com/python/langchain-ollama/embeddings/OllamaEmbeddings/embed_documents)
- [`embed_query()`](https://reference.langchain.com/python/langchain-ollama/embeddings/OllamaEmbeddings/embed_query)
- [`aembed_documents()`](https://reference.langchain.com/python/langchain-ollama/embeddings/OllamaEmbeddings/aembed_documents)
- [`aembed_query()`](https://reference.langchain.com/python/langchain-ollama/embeddings/OllamaEmbeddings/aembed_query)

---

[View source on GitHub](https://github.com/langchain-ai/langchain/blob/b302691ff9ad841804e93e5addbdc53b6974473b/libs/partners/ollama/langchain_ollama/embeddings.py#L25)