Chat models
Reference docs
This page contains reference documentation for chat models. See the docs for conceptual guides, tutorials, and examples on using chat models.
langchain.chat_models
Entrypoint to using chat models in LangChain.
init_chat_model
init_chat_model(
model: str | None = None,
*,
model_provider: str | None = None,
configurable_fields: Literal["any"] | list[str] | tuple[str, ...] | None = None,
config_prefix: str | None = None,
**kwargs: Any,
) -> BaseChatModel | _ConfigurableModel
Initialize a chat model from any supported provider using a unified interface.
Two main use cases:
- Fixed model – specify the model upfront and get a ready-to-use chat model.
- Configurable model – specify parameters (including the model name) at runtime via config, making it easy to switch between models/providers without changing your code.
Note
Requires the integration package for the chosen model provider to be installed. See the model_provider parameter below for specific package names (e.g., pip install langchain-openai). Refer to the provider integration's API reference for supported model parameters to use as **kwargs.
| PARAMETER | DESCRIPTION |
|---|---|
| model | The name or ID of the model, e.g. "o3-mini" or "claude-sonnet-4-5-20250929". You can also specify model and model provider in a single argument using the "{model_provider}:{model}" format, e.g. "openai:o3-mini". If no provider is given, one will be inferred from well-known model-name prefixes, e.g. gpt-.../o1.../o3... → openai, claude... → anthropic, gemini... → google_vertexai. TYPE: str \| None DEFAULT: None |
| model_provider | The model provider, if not specified as part of the model arg (see above). Each supported value corresponds to an integration package, e.g. openai → langchain-openai, anthropic → langchain-anthropic, google_vertexai → langchain-google-vertexai. TYPE: str \| None DEFAULT: None |
| configurable_fields | Which model parameters are configurable at runtime: None means no fields are configurable; "any" means all fields are configurable; a list or tuple of field names makes only those fields configurable. Fields are assumed to have the config_prefix stripped if a config_prefix is set. If model is specified, defaults to None; if model is not specified, defaults to ("model", "model_provider"). TYPE: Literal["any"] \| list[str] \| tuple[str, ...] \| None DEFAULT: None |
| config_prefix | Optional prefix for configuration keys. Useful when you have multiple configurable models in the same application. If config_prefix is a non-empty string, the model is configurable at runtime via config["configurable"]["{config_prefix}_{param}"] keys; if it is an empty string, via config["configurable"]["{param}"]. TYPE: str \| None DEFAULT: None |
| **kwargs | Additional model-specific keyword args to pass to the underlying chat model's constructor, e.g. temperature or max_tokens. Refer to the specific model provider's integration reference for all available parameters. TYPE: Any |
Security note
Setting configurable_fields="any" means all model fields, including sensitive ones such as api_key and base_url, can be altered at runtime, potentially redirecting model requests to a different service or user. If you accept untrusted configurations, enumerate the configurable fields explicitly.
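A minimal sketch of the safer pattern (assumes langchain-openai is installed; the enumerated field names are illustrative):
from langchain.chat_models import init_chat_model
# Enumerate the fields untrusted callers may override, instead of
# configurable_fields="any", so sensitive fields such as api_key or
# base_url cannot be changed at runtime.
safe_model = init_chat_model(
    "openai:gpt-4o",
    configurable_fields=("model", "model_provider", "temperature"),
)
safe_model.invoke(
    "what's your name",
    # Only the enumerated fields are applied from the config.
    config={"configurable": {"temperature": 0.2}},
)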
| RETURNS | DESCRIPTION |
|---|---|
| BaseChatModel \| _ConfigurableModel | A BaseChatModel if a fixed model is specified, otherwise a _ConfigurableModel that initializes the underlying chat model at runtime once a config is passed in. |
| RAISES | DESCRIPTION |
|---|---|
| ValueError | If model_provider cannot be inferred from model or is not supported. |
| ImportError | If the model provider integration package is not installed. |
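Because the provider can be given explicitly or inferred from the model name, the following initializations are equivalent (a minimal sketch; assumes langchain-openai is installed):
from langchain.chat_models import init_chat_model
m1 = init_chat_model("openai:gpt-4o")                    # provider in the model string
m2 = init_chat_model("gpt-4o", model_provider="openai")  # explicit provider
m3 = init_chat_model("gpt-4o")                           # provider inferred from the "gpt-" prefix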
Initialize a non-configurable model
# pip install langchain langchain-openai langchain-anthropic langchain-google-vertexai
from langchain.chat_models import init_chat_model
o3_mini = init_chat_model("openai:o3-mini", temperature=0)
claude_sonnet = init_chat_model("anthropic:claude-sonnet-4-5-20250929", temperature=0)
gemini_2_5_flash = init_chat_model("google_vertexai:gemini-2.5-flash", temperature=0)
o3_mini.invoke("what's your name")
claude_sonnet.invoke("what's your name")
gemini_2_5_flash.invoke("what's your name")
Partially configurable model with no default
# pip install langchain langchain-openai langchain-anthropic
from langchain.chat_models import init_chat_model
# (No model is specified, so the model defaults to being configurable at runtime:
# "model" and "model_provider" are configurable by default.)
configurable_model = init_chat_model(temperature=0)
configurable_model.invoke("what's your name", config={"configurable": {"model": "gpt-4o"}})
# Use GPT-4o to generate the response
configurable_model.invoke(
"what's your name",
config={"configurable": {"model": "claude-sonnet-4-5-20250929"}},
)
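# Use Sonnet 4.5 to generate the response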
Fully configurable model with a default
# pip install langchain langchain-openai langchain-anthropic
from langchain.chat_models import init_chat_model
configurable_model_with_default = init_chat_model(
"openai:gpt-4o",
configurable_fields="any", # This allows us to configure other params like temperature, max_tokens, etc at runtime.
config_prefix="foo",
temperature=0,
)
configurable_model_with_default.invoke("what's your name")
# GPT-4o response with temperature 0 (as set in default)
configurable_model_with_default.invoke(
"what's your name",
config={
"configurable": {
"foo_model": "anthropic:claude-sonnet-4-5-20250929",
"foo_temperature": 0.6,
}
},
)
# Override default to use Sonnet 4.5 with temperature 0.6 to generate response
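You can also pin a runtime configuration as new defaults using the standard Runnable with_config method (a minimal sketch):
sonnet_pinned = configurable_model_with_default.with_config(
    configurable={
        "foo_model": "anthropic:claude-sonnet-4-5-20250929",
        "foo_temperature": 0.6,
    }
)
sonnet_pinned.invoke("what's your name")
# Same behavior as the call above, without passing config on every invocation.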
Bind tools to a configurable model
You can call declarative chat model methods (e.g. bind_tools) on a configurable model in the same way you would on a normal model:
# pip install langchain langchain-openai langchain-anthropic
from langchain.chat_models import init_chat_model
from pydantic import BaseModel, Field
class GetWeather(BaseModel):
'''Get the current weather in a given location'''
location: str = Field(..., description="The city and state, e.g. San Francisco, CA")
class GetPopulation(BaseModel):
'''Get the current population in a given location'''
location: str = Field(..., description="The city and state, e.g. San Francisco, CA")
configurable_model = init_chat_model(
"gpt-4o", configurable_fields=("model", "model_provider"), temperature=0
)
configurable_model_with_tools = configurable_model.bind_tools(
[
GetWeather,
GetPopulation,
]
)
configurable_model_with_tools.invoke(
"Which city is hotter today and which is bigger: LA or NY?"
)
# Use GPT-4o
configurable_model_with_tools.invoke(
"Which city is hotter today and which is bigger: LA or NY?",
config={"configurable": {"model": "claude-sonnet-4-5-20250929"}},
)
# Use Sonnet 4.5
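Other declarative methods work the same way, e.g. with_structured_output (a minimal sketch reusing the GetWeather schema from above):
structured_model = configurable_model.with_structured_output(GetWeather)
structured_model.invoke("What's the weather in San Francisco, CA?")
# Returns a GetWeather instance whose fields are populated by the selected model.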