Chat models
These docs cover the langchain-classic package, which will be maintained with security fixes until December 2026. Users are encouraged to migrate to the langchain package for the latest features and improvements; see the langchain documentation.
langchain_classic.chat_models
Chat Models are a variation on language models.
While Chat Models use language models under the hood, the interface they expose is a bit different. Rather than expose a "text in, text out" API, they expose an interface where "chat messages" are the inputs and outputs.
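For example, invoking a chat model with a list of messages returns a message object rather than a raw string. A minimal sketch, assuming langchain-openai is installed and OPENAI_API_KEY is set (the model name is illustrative):

```python
# Minimal sketch of the messages-in, message-out interface.
# Assumes: pip install langchain-openai, and OPENAI_API_KEY in the environment.
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o")
response = model.invoke(
    [
        SystemMessage(content="You are a terse assistant."),
        HumanMessage(content="What is LangChain?"),
    ]
)
print(response.content)  # the reply is an AIMessage, not a plain string
```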
| FUNCTION | DESCRIPTION |
|---|---|
| init_chat_model | Initialize a chat model in a single line using the model's name and provider. |
init_chat_model
```python
init_chat_model(
    model: str | None = None,
    *,
    model_provider: str | None = None,
    configurable_fields: Literal["any"] | list[str] | tuple[str, ...] | None = None,
    config_prefix: str | None = None,
    **kwargs: Any,
) -> BaseChatModel | _ConfigurableModel
```
Initialize a chat model in a single line using the model's name and provider.
Note
Requires the integration package for your model provider to be installed.
See the model_provider parameter below for specific package names
(e.g., pip install langchain-openai).
Refer to the provider integration's API reference for supported model parameters.
| PARAMETER | DESCRIPTION |
|---|---|
| model | The name of the model, e.g. "o3-mini" or "claude-sonnet-4-5". You can also specify model and model provider in a single argument using the "{model_provider}:{model}" format, e.g. "openai:o3-mini". TYPE: str \| None |
| model_provider | The model provider if not specified as part of the model arg (see above). Supported values include "openai", "anthropic", "google_vertexai", "bedrock_converse", "ollama", "deepseek", "ibm", "nvidia", and "xai", each backed by its integration package (e.g. langchain-openai). Will attempt to infer the provider from the model name if not specified, e.g. names beginning with "gpt-" or "o3" map to "openai" and names beginning with "claude" map to "anthropic". TYPE: str \| None |
| configurable_fields | Which model parameters are configurable: None (no configurable fields), "any" (all fields are configurable; see the security note below), or a list/tuple of field names (only those fields are configurable). Fields are assumed to have the config_prefix stripped if a config_prefix is set. Defaults to None if model is specified, and to ("model", "model_provider") otherwise. Security note: setting configurable_fields="any" means fields like api_key and base_url can be altered at runtime, potentially redirecting requests to a different service or user; if you accept untrusted configurations, enumerate the configurable fields explicitly. TYPE: Literal["any"] \| list[str] \| tuple[str, ...] \| None |
| config_prefix | If config_prefix is a non-empty string, the model is configurable at runtime via config["configurable"]["{config_prefix}_{param}"] keys; if it is an empty string, via config["configurable"]["{param}"]. TYPE: str \| None |
| temperature | Model temperature. |
| max_tokens | Max output tokens. |
| timeout | The maximum time (in seconds) to wait for a response from the model before canceling the request. |
| max_retries | The maximum number of attempts the system will make to resend a request if it fails due to issues like network timeouts or rate limits. |
| base_url | The URL of the API endpoint where requests are sent. |
| rate_limiter | A BaseRateLimiter used to space out requests and avoid exceeding rate limits (see the sketch after the tables below). |
| kwargs | Additional model-specific keyword args passed through to the underlying chat model's constructor. TYPE: Any |
| RETURNS | DESCRIPTION |
|---|---|
| BaseChatModel \| _ConfigurableModel | A BaseChatModel corresponding to the specified model and provider if no configurability is needed; otherwise a configurable model wrapper that initializes the underlying model at runtime once a config is passed in. |
| RAISES | DESCRIPTION |
|---|---|
| ValueError | If model_provider cannot be inferred or isn't supported. |
| ImportError | If the model provider integration package is not installed. |
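The rate_limiter parameter accepts a rate limiter from langchain_core. A minimal sketch of spacing out requests, assuming langchain-openai is installed (the requests_per_second value is an arbitrary illustration):

```python
# Sketch: pass a client-side rate limiter to init_chat_model.
from langchain_core.rate_limiters import InMemoryRateLimiter

from langchain_classic.chat_models import init_chat_model

# Allow roughly one request every two seconds (illustrative value).
rate_limiter = InMemoryRateLimiter(requests_per_second=0.5)

model = init_chat_model("openai:gpt-4o", rate_limiter=rate_limiter)
model.invoke("what's your name")  # blocks until the limiter grants a slot
```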
Initialize a non-configurable model

```python
# pip install langchain langchain-openai langchain-anthropic langchain-google-vertexai
from langchain_classic.chat_models import init_chat_model

o3_mini = init_chat_model("openai:o3-mini", temperature=0)
claude_sonnet = init_chat_model("anthropic:claude-sonnet-4-5", temperature=0)
gemini_2_flash = init_chat_model("google_vertexai:gemini-2.5-flash", temperature=0)

o3_mini.invoke("what's your name")
claude_sonnet.invoke("what's your name")
gemini_2_flash.invoke("what's your name")
```
Partially configurable model with no default

```python
# pip install langchain langchain-openai langchain-anthropic
from langchain_classic.chat_models import init_chat_model

# We don't need to set configurable_fields if a model isn't specified:
# it defaults to ("model", "model_provider").
configurable_model = init_chat_model(temperature=0)

configurable_model.invoke(
    "what's your name", config={"configurable": {"model": "gpt-4o"}}
)
# GPT-4o response

configurable_model.invoke(
    "what's your name",
    config={"configurable": {"model": "claude-sonnet-4-5"}},
)
# Claude response
```
Fully configurable model with a default

```python
# pip install langchain langchain-openai langchain-anthropic
from langchain_classic.chat_models import init_chat_model

configurable_model_with_default = init_chat_model(
    "openai:gpt-4o",
    # "any" lets other params (temperature, max_tokens, etc.) be configured at runtime.
    configurable_fields="any",
    config_prefix="foo",
    temperature=0,
)

configurable_model_with_default.invoke("what's your name")
# GPT-4o response with temperature 0

configurable_model_with_default.invoke(
    "what's your name",
    config={
        "configurable": {
            "foo_model": "anthropic:claude-sonnet-4-5",
            "foo_temperature": 0.6,
        }
    },
)
# Claude response with temperature 0.6
```
Bind tools to a configurable model

You can call any of a chat model's declarative methods on a configurable model in the same way that you would with a regular model:
```python
# pip install langchain langchain-openai langchain-anthropic
from pydantic import BaseModel, Field

from langchain_classic.chat_models import init_chat_model

class GetWeather(BaseModel):
    '''Get the current weather in a given location'''

    location: str = Field(
        ..., description="The city and state, e.g. San Francisco, CA"
    )

class GetPopulation(BaseModel):
    '''Get the current population in a given location'''

    location: str = Field(
        ..., description="The city and state, e.g. San Francisco, CA"
    )

configurable_model = init_chat_model(
    "gpt-4o", configurable_fields=("model", "model_provider"), temperature=0
)

configurable_model_with_tools = configurable_model.bind_tools(
    [
        GetWeather,
        GetPopulation,
    ]
)

configurable_model_with_tools.invoke(
    "Which city is hotter today and which is bigger: LA or NY?"
)
configurable_model_with_tools.invoke(
    "Which city is hotter today and which is bigger: LA or NY?",
    config={"configurable": {"model": "claude-sonnet-4-5"}},
)
```
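with_structured_output is another declarative method that works the same way on a configurable model. A brief sketch reusing GetWeather from the example above (the prompts are illustrative):

```python
# Sketch: structured output on a configurable model, reusing GetWeather.
structured_model = configurable_model.with_structured_output(GetWeather)

structured_model.invoke("What's the weather in San Francisco, CA?")
structured_model.invoke(
    "What's the weather in San Francisco, CA?",
    config={"configurable": {"model": "claude-sonnet-4-5"}},
)
```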
Behavior changed in 0.2.8
Support for configurable_fields and config_prefix added.
Behavior changed in 0.2.12
Support for Ollama via the langchain-ollama package added (langchain_ollama.ChatOllama). Previously, the now-deprecated langchain-community version of Ollama was imported (langchain_community.chat_models.ChatOllama). Support for AWS Bedrock models via the Converse API added (model_provider="bedrock_converse").
Behavior changed in 0.3.5
Out of beta.
Behavior changed in 0.3.19
Support for DeepSeek, IBM, NVIDIA, and xAI models added.