# ChatPerplexity

> **Class** in `langchain_perplexity`

📖 [View in docs](https://reference.langchain.com/python/langchain-perplexity/chat_models/ChatPerplexity)

Chat model integration for the `Perplexity AI` chat completions API.

## Signature

```python
ChatPerplexity()
```

## Description

**Setup:**

To use this class, set the environment variable `PPLX_API_KEY` to your API key.
Any parameter that is valid for the Perplexity chat completions call can be
passed in, even if it is not explicitly declared on this class.

```bash
export PPLX_API_KEY=your_api_key
```

Key init args - completion params:

- `model`: Name of the model to use, e.g. `"sonar"`.
- `temperature`: Sampling temperature to use.
- `max_tokens`: Maximum number of tokens to generate.
- `streaming`: Whether to stream the results or not.

Key init args - client params:

- `pplx_api_key`: API key for the Perplexity chat API.
- `request_timeout`: Timeout for requests to the Perplexity chat completion API.
- `max_retries`: Maximum number of retries to make when generating.

See the full list of supported init args and their descriptions in the params section.

Instantiate:

```python
from langchain_perplexity import ChatPerplexity

model = ChatPerplexity(model="sonar", temperature=0.7)
```

Invoke:

```python
messages = [("system", "You are a chatbot."), ("user", "Hello!")]
model.invoke(messages)
```

Invoke with structured output:

```python
from pydantic import BaseModel

class StructuredOutput(BaseModel):
    role: str
    content: str

# with_structured_output returns a new runnable; invoke that, not the
# original model, to get parsed StructuredOutput instances back.
structured_model = model.with_structured_output(StructuredOutput)
structured_model.invoke(messages)
```

Stream:
```python
for chunk in model.stream(messages):
    print(chunk.content)
```

Token usage:
```python
response = model.invoke(messages)
response.usage_metadata
```

Response metadata:
```python
response = model.invoke(messages)
response.response_metadata
```

## Extends

- `BaseChatModel`

## Properties

- `client`
- `async_client`
- `model`
- `temperature`
- `model_kwargs`
- `pplx_api_key`
- `request_timeout`
- `max_retries`
- `streaming`
- `max_tokens`
- `search_mode`
- `reasoning_effort`
- `language_preference`
- `search_domain_filter`
- `return_images`
- `return_related_questions`
- `search_recency_filter`
- `search_after_date_filter`
- `search_before_date_filter`
- `last_updated_after_filter`
- `last_updated_before_filter`
- `disable_search`
- `enable_search_classifier`
- `web_search_options`
- `media_response`
- `model_config`
- `lc_secrets`

## Methods

- [`build_extra()`](https://reference.langchain.com/python/langchain-perplexity/chat_models/ChatPerplexity/build_extra)
- [`validate_environment()`](https://reference.langchain.com/python/langchain-perplexity/chat_models/ChatPerplexity/validate_environment)
- [`with_structured_output()`](https://reference.langchain.com/python/langchain-perplexity/chat_models/ChatPerplexity/with_structured_output)

---

[View source on GitHub](https://github.com/langchain-ai/langchain/blob/f0c5a28fa05adcda89aebcb449d897245ab21fa4/libs/partners/perplexity/langchain_perplexity/chat_models.py#L105)