# ChatHuggingFace

> **Class** in `langchain_huggingface`

📖 [View in docs](https://reference.langchain.com/python/langchain-huggingface/chat_models/huggingface/ChatHuggingFace)

Hugging Face LLMs as chat models.

Works with `HuggingFaceTextGenInference`, `HuggingFaceEndpoint`,
`HuggingFaceHub`, and `HuggingFacePipeline` LLMs.

Upon instantiating this class, the `model_id` is resolved from the URL
provided to the LLM, and the appropriate tokenizer is loaded from
the Hugging Face Hub.

## Signature

```python
ChatHuggingFace(
    self,
    **kwargs: Any,
)
```

## Description

**Setup:**

Install `langchain-huggingface` and ensure your Hugging Face token
is saved.

```bash
pip install langchain-huggingface
```

```python
from huggingface_hub import login

login()  # You will be prompted for your Hugging Face token, which is then saved locally
```
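In non-interactive environments (CI jobs, containers), the token can instead be supplied via an environment variable; `HUGGINGFACEHUB_API_TOKEN` is one of the variables `HuggingFaceEndpoint` checks. A minimal sketch, with a placeholder token value:

```python
import os

# Non-interactive alternative to login(): export the token before the
# process starts, or set it here. "hf_xxx" is a placeholder, not a real token.
os.environ.setdefault("HUGGINGFACEHUB_API_TOKEN", "hf_xxx")
```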

Key init args — completion params:

- `llm`: LLM to be used.

Key init args — client params:

- `custom_get_token_ids`: Optional encoder to use for counting tokens.
- `metadata`: Metadata to add to the run trace.
- `tags`: Tags to add to the run trace.
- `verbose`: Whether to print out response text.

See full list of supported init args and their descriptions in the params
section.

**Instantiate:**

```python
from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint

model = HuggingFaceEndpoint(
    repo_id="microsoft/Phi-3-mini-4k-instruct",
    task="text-generation",
    max_new_tokens=512,
    do_sample=False,
    repetition_penalty=1.03,
)

chat = ChatHuggingFace(llm=model, verbose=True)
```

**Invoke:**

```python
messages = [
    ("system", "You are a helpful translator. Translate the user
    sentence to French."),
    ("human", "I love programming."),
]

chat.invoke(messages)
```

```python
AIMessage(content='Je ai une passion pour le programme.\n\nIn
French, we use "ai" for masculine subjects and "a" for feminine
subjects. Since "programming" is gender-neutral in English, we
will go with the masculine "programme".\n\nConfirmation: "J\'aime
le programme." is more commonly used. The sentence above is
technically accurate, but less commonly used in spoken French as
"ai" is used less frequently in everyday speech.',
response_metadata={'token_usage': ChatCompletionOutputUsage
(completion_tokens=100, prompt_tokens=55, total_tokens=155),
'model': '', 'finish_reason': 'length'},
id='run-874c24b7-0272-4c99-b259-5d6d7facbc56-0')
```

**Stream:**

```python
for chunk in chat.stream(messages):
    print(chunk)
```

```python
content='Je ai une passion pour le programme.\n\nIn French, we use
"ai" for masculine subjects and "a" for feminine subjects.
Since "programming" is gender-neutral in English,
we will go with the masculine "programme".\n\nConfirmation:
"J\'aime le programme." is more commonly used. The sentence
above is technically accurate, but less commonly used in spoken
French as "ai" is used less frequently in everyday speech.'
response_metadata={'token_usage': ChatCompletionOutputUsage
(completion_tokens=100, prompt_tokens=55, total_tokens=155),
'model': '', 'finish_reason': 'length'}
id='run-7d7b1967-9612-4f9a-911a-b2b5ca85046a-0'
```
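When streaming, it is common to accumulate the chunks' text into one string. A minimal sketch of that pattern, using stand-in chunk objects so it runs without a model (with a live model you would iterate `chat.stream(messages)` instead):

```python
from dataclasses import dataclass


@dataclass
class Chunk:
    """Stand-in for a streamed message chunk with a .content string."""

    content: str


def collect_text(stream) -> str:
    # Concatenate the text of each streamed chunk in arrival order.
    return "".join(chunk.content for chunk in stream)


demo = [Chunk("Je "), Chunk("t'aime, "), Chunk("programmation.")]
print(collect_text(demo))  # → Je t'aime, programmation.
```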

**Async:**

```python
await chat.ainvoke(messages)
```

```python
AIMessage(content='Je déaime le programming.\n\nLittérale : Je
(j\'aime) déaime (le) programming.\n\nNote: "Programming" in
French is "programmation". But here, I used "programming" instead
of "programmation" because the user said "I love programming"
instead of "I love programming (in French)", which would be
"J\'aime la programmation". By translating the sentence
literally, I preserved the original meaning of the user\'s
sentence.', id='run-fd850318-e299-4735-b4c6-3496dc930b1d-0')
```

**Tool calling:**

```python
from pydantic import BaseModel, Field

class GetWeather(BaseModel):
    '''Get the current weather in a given location'''

    location: str = Field(
        ..., description="The city and state, e.g. San Francisco, CA"
    )

class GetPopulation(BaseModel):
    '''Get the current population in a given location'''

    location: str = Field(
        ..., description="The city and state, e.g. San Francisco, CA"
    )

chat_with_tools = chat.bind_tools([GetWeather, GetPopulation])
ai_msg = chat_with_tools.invoke(
    "Which city is hotter today and which is bigger: LA or NY?"
)
ai_msg.tool_calls
```

```python
[
    {
        "name": "GetPopulation",
        "args": {"location": "Los Angeles, CA"},
        "id": "0",
    }
]
```

**Response metadata:**

```python
ai_msg = chat.invoke(messages)
ai_msg.response_metadata
```

```python
{
    "token_usage": ChatCompletionOutputUsage(
        completion_tokens=100, prompt_tokens=8, total_tokens=108
    ),
    "model": "",
    "finish_reason": "length",
}
```
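The usage object exposes `completion_tokens`, `prompt_tokens`, and `total_tokens` as attributes, so token accounting is a matter of attribute access. A minimal sketch, with a simple namespace standing in for the usage object so it runs without a model:

```python
from types import SimpleNamespace

# Stand-in for response_metadata as shown above; with a live model this
# would be ai_msg.response_metadata.
metadata = {
    "token_usage": SimpleNamespace(
        completion_tokens=100, prompt_tokens=8, total_tokens=108
    ),
    "model": "",
    "finish_reason": "length",
}

usage = metadata["token_usage"]
# Attribute access, not dict lookup, on the usage object.
print(usage.prompt_tokens, usage.completion_tokens, usage.total_tokens)
```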

## Extends

- `BaseChatModel`

## Constructors

```python
__init__(
    self,
    **kwargs: Any,
)
```


## Properties

- `llm`
- `tokenizer`
- `model_id`
- `temperature`
- `stop`
- `presence_penalty`
- `frequency_penalty`
- `seed`
- `logprobs`
- `top_logprobs`
- `logit_bias`
- `streaming`
- `stream_usage`
- `n`
- `top_p`
- `max_tokens`
- `model_kwargs`

## Methods

- [`validate_llm()`](https://reference.langchain.com/python/langchain-huggingface/chat_models/huggingface/ChatHuggingFace/validate_llm)
- [`from_model_id()`](https://reference.langchain.com/python/langchain-huggingface/chat_models/huggingface/ChatHuggingFace/from_model_id)
- [`bind_tools()`](https://reference.langchain.com/python/langchain-huggingface/chat_models/huggingface/ChatHuggingFace/bind_tools)
- [`with_structured_output()`](https://reference.langchain.com/python/langchain-huggingface/chat_models/huggingface/ChatHuggingFace/with_structured_output)

---

[View source on GitHub](https://github.com/langchain-ai/langchain/blob/9f232caa7a8fe1ca042a401942d5d90d54ceb1a6/libs/partners/huggingface/langchain_huggingface/chat_models/huggingface.py#L324)