# Llamafile

> **Class** in `langchain_community`

📖 [View in docs](https://reference.langchain.com/python/langchain-community/llms/llamafile/Llamafile)

Llamafile lets you distribute and run large language models with a
single file.

To get started, see: https://github.com/Mozilla-Ocho/llamafile

To use this class, you will first need to:

1. Download a llamafile.
2. Make the downloaded file executable: `chmod +x path/to/model.llamafile`
3. Start the llamafile in server mode:

    `./path/to/model.llamafile --server --nobrowser`

## Signature

```python
Llamafile()
```

## Description

**Example:**

```python
from langchain_community.llms import Llamafile

llm = Llamafile()
llm.invoke("Tell me a joke.")
```

## Extends

- `LLM`

## Properties

- `base_url`
- `request_timeout`
- `streaming`
- `seed`
- `temperature`
- `top_k`
- `top_p`
- `min_p`
- `n_predict`
- `n_keep`
- `tfs_z`
- `typical_p`
- `repeat_penalty`
- `repeat_last_n`
- `penalize_nl`
- `presence_penalty`
- `frequency_penalty`
- `mirostat`
- `mirostat_tau`
- `mirostat_eta`
- `model_config`
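
Most of the properties above are sampling parameters forwarded to the llamafile server, whose HTTP API follows the llama.cpp server conventions. As a rough sketch of that mapping, the helper below builds a JSON payload of the kind a client might POST to the server's completion endpoint. The function name, the defaults, and the exact field set are illustrative assumptions here, not the class's actual internals:

```python
import json


def build_completion_payload(prompt: str, **params) -> str:
    """Hypothetical helper: serialize a prompt plus sampling parameters
    into a llama.cpp-style completion request body. Defaults are
    illustrative assumptions, not Llamafile's real defaults."""
    payload = {
        "prompt": prompt,
        "temperature": params.get("temperature", 0.8),
        "top_k": params.get("top_k", 40),
        "top_p": params.get("top_p", 0.95),
        "n_predict": params.get("n_predict", 128),   # max tokens to generate
        "repeat_penalty": params.get("repeat_penalty", 1.1),
        "stream": params.get("streaming", False),
    }
    return json.dumps(payload)


# Override one parameter; the rest fall back to the sketch's defaults.
body = json.loads(build_completion_payload("Tell me a joke.", temperature=0.1))
```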

---

[View source on GitHub](https://github.com/langchain-ai/langchain-community/blob/4b280287bd55b99b44db2dd849f02d66c89534d5/libs/community/langchain_community/llms/llamafile.py#L15)