# ChatParallelWeb

> **Class** in `langchain_parallel`

📖 [View in docs](https://reference.langchain.com/python/langchain-parallel/chat_models/ChatParallelWeb)

Parallel Web chat model integration.

This integration connects to Parallel's Chat API, which provides
real-time web research capabilities through an OpenAI-compatible interface.

## Signature

```python
ChatParallelWeb()
```

## Description

**Setup:**

Install `langchain-parallel` and set the `PARALLEL_API_KEY` environment variable.

```bash
pip install -U langchain-parallel
export PARALLEL_API_KEY="your-api-key"
```
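
If you prefer to set the key from inside Python rather than the shell, one common (optional) pattern uses the standard-library `getpass` module:

```python
import getpass
import os

# Prompt for the key only if it is not already set in the environment.
if not os.environ.get("PARALLEL_API_KEY"):
    os.environ["PARALLEL_API_KEY"] = getpass.getpass("Parallel API key: ")
```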

**Key init args — completion params:**

- `model` (`str`): Name of the Parallel Web model to use. Defaults to `"speed"`.
- `temperature` (`Optional[float]`): Sampling temperature (ignored by Parallel).
- `max_tokens` (`Optional[int]`): Maximum number of tokens to generate (ignored by Parallel).

**Key init args — client params:**

- `timeout` (`Optional[float]`): Timeout for requests.
- `max_retries` (`int`): Maximum number of retries.
- `api_key` (`Optional[str]`): Parallel API key. If not passed in, it is read from the `PARALLEL_API_KEY` environment variable.
- `base_url` (`str`): Base URL for the Parallel API. Defaults to `"https://api.parallel.ai"`.

**Instantiate:**

```python
from langchain_parallel import ChatParallelWeb

llm = ChatParallelWeb(
    model="speed",
    temperature=0.7,  # accepted for interface compatibility; ignored by Parallel
    max_tokens=None,  # accepted for interface compatibility; ignored by Parallel
    timeout=None,
    max_retries=2,
    # api_key="...",  # defaults to the PARALLEL_API_KEY env var
    # other params...
)
```

**Invoke:**

```python
messages = [
    (
        "system",
        "You are a helpful assistant with access to real-time web "
        "information."
    ),
    ("human", "What are the latest developments in AI?"),
]
llm.invoke(messages)
```

**Stream:**

```python
for chunk in llm.stream(messages):
    print(chunk.content, end="")
```
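
Each chunk is an `AIMessageChunk`, so a full message can be rebuilt with chunk addition (a generic LangChain pattern, not specific to this integration):

```python
full = None
for chunk in llm.stream(messages):
    # `+` on message chunks concatenates content and merges metadata.
    full = chunk if full is None else full + chunk

full.content
```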

**Async:**

```python
await llm.ainvoke(messages)

# stream:
async for chunk in llm.astream(messages):
    print(chunk.content, end="")

# batch:
await llm.abatch([messages])
```

**Token usage:**

```python
ai_msg = llm.invoke(messages)
ai_msg.usage_metadata
```
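
`usage_metadata` follows LangChain's standard `UsageMetadata` shape; assuming the Parallel API reports token counts, the individual fields can be read like this:

```python
usage = ai_msg.usage_metadata
if usage is not None:
    print(usage["input_tokens"], usage["output_tokens"], usage["total_tokens"])
```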

**Response metadata:**

```python
ai_msg = llm.invoke(messages)
ai_msg.response_metadata
```
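
`response_metadata` is a plain dict whose keys are provider-specific; one way to see what Parallel returns for a given response is to inspect it directly:

```python
for key, value in ai_msg.response_metadata.items():
    print(f"{key}: {value}")
```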

## Extends

- `BaseChatModel`

## Properties

- `model`
- `api_key`
- `base_url`
- `temperature`
- `max_tokens`
- `timeout`
- `max_retries`
- `response_format`
- `tools`
- `tool_choice`
- `stream_options`
- `top_p`
- `frequency_penalty`
- `presence_penalty`
- `logit_bias`
- `seed`
- `user`
- `client`
- `async_client`
- `lc_secrets`
- `lc_attributes`

## Methods

- [`validate_environment()`](https://reference.langchain.com/python/langchain-parallel/chat_models/ChatParallelWeb/validate_environment)
- [`get_lc_namespace()`](https://reference.langchain.com/python/langchain-parallel/chat_models/ChatParallelWeb/get_lc_namespace)
- [`is_lc_serializable()`](https://reference.langchain.com/python/langchain-parallel/chat_models/ChatParallelWeb/is_lc_serializable)

---

[View source on GitHub](https://github.com/parallel-web/langchain-parallel/blob/7946e2f5339c689b452621744a27f1a019215639/langchain_parallel/chat_models.py#L128)