# get_inference_priority

> **Function** in `langchain_nvidia_ai_endpoints`

📖 [View in docs](https://reference.langchain.com/python/langchain-nvidia-ai-endpoints/decorators/get_inference_priority)

Return the active inference priority, or `None` if unset.

## Signature

```python
get_inference_priority() -> Optional[int]
```
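A minimal sketch of how a caller might handle the `Optional[int]` return. The stand-in definition below only mirrors the documented contract (return the active priority, or `None` if unset); it is not the library's implementation, and the module-level `_priority` variable is a hypothetical placeholder for illustration.

```python
from typing import Optional

# Hypothetical stand-in for the real function in
# langchain_nvidia_ai_endpoints.decorators, shown here only to
# illustrate the Optional[int] contract described above.
_priority: Optional[int] = None


def get_inference_priority() -> Optional[int]:
    """Return the active inference priority, or None if unset."""
    return _priority


# Callers should treat None as "no priority configured".
priority = get_inference_priority()
if priority is None:
    label = "default priority"
else:
    label = f"priority {priority}"
```

Because the return type is `Optional[int]`, comparing against `None` with `is None` (rather than truthiness) avoids misreading a legitimate priority of `0` as unset.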

---

[View source on GitHub](https://github.com/langchain-ai/langchain-nvidia/blob/5bfb68d5b10aa0330a6b79a36375b9bc0c6acef7/libs/ai-endpoints/langchain_nvidia_ai_endpoints/decorators.py#L44)