# register_model

> **Function** in `langchain_nvidia_ai_endpoints`

📖 [View in docs](https://reference.langchain.com/python/langchain-nvidia-ai-endpoints/_statics/register_model)

Register a model as a known model.

Call this at the beginning of a program, or at least before the registered model is used or available models are listed.

For instance:

```python
from langchain_nvidia_ai_endpoints import ChatNVIDIA, Model, register_model

register_model(
    Model(
        id="my-custom-model-name",
        model_type="chat",
        client="ChatNVIDIA",
        endpoint="http://host:port/path-to-my-model"
    )
)
llm = ChatNVIDIA(model="my-custom-model-name")
```

Be sure that the `id` matches the model parameter the endpoint expects.

The only supported model type is `"chat"`; the endpoint must accept and produce chat
completion payloads.

The only supported client is `ChatNVIDIA`, for chat models.

The `endpoint` field is required.

Use this instead of passing `base_url` to a client constructor when the model's
endpoint supports inference but not `/v1/models` listing.

Use `base_url` when the model's endpoint supports `/v1/models` listing and inference
on a known path, e.g. `/v1/chat/completions`.
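To sketch the contrast (the hostname, port, and model name below are placeholders, not real endpoints): when the server exposes `/v1/models` alongside a standard inference path, passing `base_url` to the client is sufficient and `register_model` is unnecessary.

```python
from langchain_nvidia_ai_endpoints import ChatNVIDIA

# The server exposes both /v1/models (listing) and
# /v1/chat/completions (inference), so base_url is enough;
# the client appends the known inference path itself.
llm = ChatNVIDIA(
    base_url="http://host:port/v1",  # placeholder address
    model="my-model-name",           # placeholder model id
)
```

If the server only answers on a custom inference path and cannot list models, fall back to `register_model` with the full `endpoint` as shown above.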

## Signature

```python
register_model(
    model: Model,
) -> None
```

---

[View source on GitHub](https://github.com/langchain-ai/langchain-nvidia/blob/5bfb68d5b10aa0330a6b79a36375b9bc0c6acef7/libs/ai-endpoints/langchain_nvidia_ai_endpoints/_statics.py#L1041)