# LlamafileEmbeddings

> **Class** in `langchain_community`

📖 [View in docs](https://reference.langchain.com/python/langchain-community/embeddings/llamafile/LlamafileEmbeddings)

Llamafile lets you distribute and run large language models with a
single file.

To get started, see: https://github.com/Mozilla-Ocho/llamafile

To use this class, you will first need to:

1. Download a llamafile.
2. Make the downloaded file executable: `chmod +x path/to/model.llamafile`
3. Start the llamafile in server mode with embeddings enabled:

    `./path/to/model.llamafile --server --nobrowser --embedding`
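
Before constructing the embedder, it can help to confirm the server started in step 3 is actually reachable. A minimal, standard-library-only sketch of such a check (the default port `8080` is an assumption about a typical llamafile server setup, not stated on this page):

```python
import urllib.request


def server_is_up(base_url: str = "http://localhost:8080") -> bool:
    """Return True if an HTTP server answers at base_url within 5 seconds."""
    # Port 8080 above is an assumed default; adjust to your --port setting.
    try:
        with urllib.request.urlopen(base_url, timeout=5):
            return True
    except OSError:  # connection refused, timeout, DNS failure, ...
        return False
```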

## Signature

```python
LlamafileEmbeddings()
```

## Description

**Example:**

```python
from langchain_community.embeddings import LlamafileEmbeddings

embedder = LlamafileEmbeddings()
doc_embeddings = embedder.embed_documents(
    [
        "Alpha is the first letter of the Greek alphabet",
        "Beta is the second letter of the Greek alphabet",
    ]
)
query_embedding = embedder.embed_query(
    "What is the second letter of the Greek alphabet"
)
```

## Extends

- `BaseModel`
- `Embeddings`

## Properties

- `base_url`
- `request_timeout`
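
`base_url` points at the running llamafile server and `request_timeout` bounds each HTTP call. As a rough, standard-library-only sketch of the request an embedding call presumably sends — the default URL, the `/embedding` endpoint path, and the `content` payload key are assumptions about the llamafile server API, not taken from this page:

```python
import json
import urllib.request


def build_embedding_request(
    text: str,
    base_url: str = "http://localhost:8080",  # assumed default base_url
) -> urllib.request.Request:
    """Sketch of the POST an embed_query-style call might send."""
    return urllib.request.Request(
        url=f"{base_url}/embedding",  # endpoint path is an assumption
        data=json.dumps({"content": text}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_embedding_request("Beta is the second letter of the Greek alphabet")
```

In this sketch, `request_timeout` would be passed through to the actual network call, e.g. `urllib.request.urlopen(req, timeout=request_timeout)`.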

## Methods

- [`embed_documents()`](https://reference.langchain.com/python/langchain-community/embeddings/llamafile/LlamafileEmbeddings/embed_documents)
- [`embed_query()`](https://reference.langchain.com/python/langchain-community/embeddings/llamafile/LlamafileEmbeddings/embed_query)

---

[View source on GitHub](https://github.com/langchain-ai/langchain-community/blob/4b280287bd55b99b44db2dd849f02d66c89534d5/libs/community/langchain_community/embeddings/llamafile.py#L11)