# MLXPipeline

> **Class** in `langchain_community`

📖 [View in docs](https://reference.langchain.com/python/langchain-community/llms/mlx_pipeline/MLXPipeline)

MLX Pipeline API.

To use, you should have the `mlx-lm` Python package installed.

## Signature

```python
MLXPipeline()
```

## Description

**Example using `from_model_id`:**

```python
from langchain_community.llms import MLXPipeline

pipe = MLXPipeline.from_model_id(
    model_id="mlx-community/quantized-gemma-2b",
    pipeline_kwargs={"max_tokens": 10, "temp": 0.7},
)
```

**Example passing a model and tokenizer in directly:**

```python
from langchain_community.llms import MLXPipeline
from mlx_lm import load

model_id = "mlx-community/quantized-gemma-2b"
model, tokenizer = load(model_id)
pipe = MLXPipeline(model=model, tokenizer=tokenizer)
```

## Extends

- `LLM`

## Properties

- `model_id`
- `model`
- `tokenizer`
- `tokenizer_config`
- `adapter_file`
- `lazy`
- `pipeline_kwargs`
- `model_config`

## Methods

- [`from_model_id()`](https://reference.langchain.com/python/langchain-community/llms/mlx_pipeline/MLXPipeline/from_model_id)

---

[View source on GitHub](https://github.com/langchain-ai/langchain-community/blob/a6a6079511ac8a5c1293337f88096b8641562e77/libs/community/langchain_community/llms/mlx_pipeline.py#L16)