ChatOpenRouter()

app_url
Application URL for OpenRouter attribution. Maps to the `HTTP-Referer` header.
See https://openrouter.ai/docs/app-attribution for details.

app_title
Application title for OpenRouter attribution. Maps to the `X-Title` header.
See https://openrouter.ai/docs/app-attribution for details.
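For illustration, a minimal sketch of setting both attribution fields at construction time; the URL and title values below are hypothetical placeholders:

```python
from langchain_openrouter import ChatOpenRouter

# Attribution metadata is forwarded to OpenRouter as HTTP headers:
# app_url -> HTTP-Referer, app_title -> X-Title.
model = ChatOpenRouter(
    model="openai/gpt-4o-mini",
    app_url="https://example.com/my-app",  # hypothetical URL
    app_title="My App",                    # hypothetical title
)
```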
Reasoning settings to pass to OpenRouter.
Controls how many tokens the model allocates for internal chain-of-thought reasoning. Accepts an `openrouter.components.OpenResponsesReasoningConfig` or an equivalent dict.

Supported keys:

- `effort`: Controls the reasoning token budget. Values: `'xhigh'`, `'high'`, `'medium'`, `'low'`, `'minimal'`, `'none'`.
- `summary`: Controls the verbosity of the reasoning summary returned in the response. Values: `'auto'`, `'concise'`, `'detailed'`.

Example: `{"effort": "high", "summary": "auto"}`

See https://openrouter.ai/docs/guides/best-practices/reasoning-tokens for details.
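A minimal sketch of passing the example config above at construction time. The keyword name `reasoning` is an assumption (the text above does not name the parameter); the dict values come from the supported keys listed above:

```python
from langchain_openrouter import ChatOpenRouter

# Assumption: the constructor accepts a `reasoning` keyword that maps to
# OpenRouter's reasoning config ({"effort": ..., "summary": ...}).
model = ChatOpenRouter(
    model="anthropic/claude-sonnet-4-5",
    reasoning={"effort": "high", "summary": "auto"},
)
```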
OpenRouter chat model integration.
OpenRouter is a unified API that provides access to hundreds of models from multiple providers (OpenAI, Anthropic, Google, Meta, etc.).
Install `langchain-openrouter` and set the `OPENROUTER_API_KEY` environment variable:

```bash
pip install -U langchain-openrouter
export OPENROUTER_API_KEY="your-api-key"
```

Key init args (completion params):

| Param | Type | Description |
|---|---|---|
| model | `str` | Model name, e.g. `'openai/gpt-4o-mini'`. |
| temperature | `float \| None` | Sampling temperature. |
| max_tokens | `int \| None` | Max tokens to generate. |
Key init args (client params):

| Param | Type | Description |
|---|---|---|
| api_key | `str \| None` | OpenRouter API key. If not set, read from the `OPENROUTER_API_KEY` environment variable. |
| base_url | `str \| None` | Base URL for API requests. |
| timeout | `int \| None` | Request timeout. |
| app_url | `str \| None` | Application URL for attribution (sent as the `HTTP-Referer` header). |
| app_title | `str \| None` | Application title for attribution (sent as the `X-Title` header). |
| max_retries | `int` | Max retries (default 2). Set to 0 to disable. |
Instantiate:

```python
from langchain_openrouter import ChatOpenRouter

model = ChatOpenRouter(
    model="anthropic/claude-sonnet-4-5",
    temperature=0,
    # api_key="...",
    # openrouter_provider={"order": ["Anthropic"]},
)
```

See https://openrouter.ai/docs for platform documentation.
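Once instantiated, the model can be called through the standard LangChain chat model interface. A short usage sketch, assuming ChatOpenRouter follows the usual `BaseChatModel` contract (the prompt here is illustrative):

```python
from langchain_openrouter import ChatOpenRouter

model = ChatOpenRouter(model="openai/gpt-4o-mini")

# Standard LangChain chat model call: returns an AIMessage whose
# `content` holds the model's reply.
response = model.invoke("What is OpenRouter?")
print(response.content)
```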