ChatMLX(
    self,
    **kwargs: Any,
)

MLX chat models.
Works with the MLXPipeline LLM.
To use, you should have the ``mlx-lm`` Python package installed.
Example:
.. code-block:: python
    from langchain_community.chat_models import ChatMLX
    from langchain_community.llms import MLXPipeline

    llm = MLXPipeline.from_model_id(
        model_id="mlx-community/quantized-gemma-2b-it",
    )
    chat = ChatMLX(llm=llm)