# ChatNVIDIA

> **Class** in `langchain_nvidia_ai_endpoints`

📖 [View in docs](https://reference.langchain.com/python/langchain-nvidia-ai-endpoints/chat_models/ChatNVIDIA)

NVIDIA chat model. Connects to an NVIDIA-hosted NIM endpoint or a self-hosted NIM (see `base_url`).

## Signature

```python
ChatNVIDIA(
    self,
    *,
    model: Optional[str] = None,
    nvidia_api_key: Optional[str] = None,
    api_key: Optional[str] = None,
    base_url: Optional[str] = None,
    temperature: Optional[float] = None,
    max_completion_tokens: Optional[int] = None,
    top_p: Optional[float] = None,
    seed: Optional[int] = None,
    stop: Optional[Union[str, List[str]]] = None,
    default_headers: Optional[Dict[str, str]] = None,
    **kwargs: Any,
)
```

## Description

**Example:**

```python
from langchain_nvidia_ai_endpoints import ChatNVIDIA

model = ChatNVIDIA(model="meta/llama2-70b")
response = model.invoke("Hello")
```

## Parameters

| Name | Type | Required | Description |
|------|------|----------|-------------|
| `model` | `Optional[str]` | No | The model to use for chat. (default: `None`) |
| `nvidia_api_key` | `Optional[str]` | No | The API key to use for connecting to the hosted NIM. (default: `None`) |
| `api_key` | `Optional[str]` | No | Alternative to `nvidia_api_key`. (default: `None`) |
| `base_url` | `Optional[str]` | No | The base URL of the NIM to connect to, in the form `http://host:port`. (default: `None`) |
| `temperature` | `Optional[float]` | No | Sampling temperature in `[0, 2]`. (default: `None`) |
| `max_completion_tokens` | `Optional[int]` | No | Maximum number of tokens to generate. (default: `None`) |
| `top_p` | `Optional[float]` | No | Top-p for distribution sampling in `[0, 1]`. (default: `None`) |
| `seed` | `Optional[int]` | No | A seed for deterministic results. (default: `None`) |
| `stop` | `Optional[Union[str, List[str]]]` | No | A string or list of strings specifying stop sequences. (default: `None`) |
| `default_headers` | `Optional[Dict[str, str]]` | No | Default headers merged into all requests. (default: `None`) |
| `**kwargs` | `Any` | No | Additional keyword arguments passed through to the underlying client. |

## Extends

- `BaseChatModel`

## Constructors

```python
__init__(
    self,
    *,
    model: Optional[str] = None,
    nvidia_api_key: Optional[str] = None,
    api_key: Optional[str] = None,
    base_url: Optional[str] = None,
    temperature: Optional[float] = None,
    max_completion_tokens: Optional[int] = None,
    top_p: Optional[float] = None,
    seed: Optional[int] = None,
    stop: Optional[Union[str, List[str]]] = None,
    default_headers: Optional[Dict[str, str]] = None,
    **kwargs: Any,
)
```

| Name | Type |
|------|------|
| `model` | `Optional[str]` |
| `nvidia_api_key` | `Optional[str]` |
| `api_key` | `Optional[str]` |
| `base_url` | `Optional[str]` |
| `temperature` | `Optional[float]` |
| `max_completion_tokens` | `Optional[int]` |
| `top_p` | `Optional[float]` |
| `seed` | `Optional[int]` |
| `stop` | `Optional[Union[str, List[str]]]` |
| `default_headers` | `Optional[Dict[str, str]]` |
| `**kwargs` | `Any` |


## Properties

- `model_config`
- `base_url`
- `model`
- `temperature`
- `max_tokens`
- `top_p`
- `seed`
- `stop`
- `stream_options`
- `default_headers`
- `model_kwargs`
- `profile`
- `available_models`

## Methods

- [`build_extra()`](https://reference.langchain.com/python/langchain-nvidia-ai-endpoints/chat_models/ChatNVIDIA/build_extra)
- [`get_available_models()`](https://reference.langchain.com/python/langchain-nvidia-ai-endpoints/chat_models/ChatNVIDIA/get_available_models)
- [`bind_tools()`](https://reference.langchain.com/python/langchain-nvidia-ai-endpoints/chat_models/ChatNVIDIA/bind_tools)
- [`bind_functions()`](https://reference.langchain.com/python/langchain-nvidia-ai-endpoints/chat_models/ChatNVIDIA/bind_functions)
- [`with_structured_output()`](https://reference.langchain.com/python/langchain-nvidia-ai-endpoints/chat_models/ChatNVIDIA/with_structured_output)
- [`with_thinking_mode()`](https://reference.langchain.com/python/langchain-nvidia-ai-endpoints/chat_models/ChatNVIDIA/with_thinking_mode)

---

[View source on GitHub](https://github.com/langchain-ai/langchain-nvidia/blob/0f17fc4ec134f9d86ba79cbc2a2d95760953ea41/libs/ai-endpoints/langchain_nvidia_ai_endpoints/chat_models.py#L381)