
Chat models

langchain-classic documentation

These docs cover the langchain-classic package, which will be maintained for security vulnerabilities until December 2026. Users are encouraged to migrate to the langchain package for the latest features and improvements; see the docs for langchain.

langchain_classic.chat_models

Chat Models are a variation on language models.

While Chat Models use language models under the hood, the interface they expose is a bit different: rather than exposing a "text in, text out" API, they expose an interface where chat messages are both the inputs and the outputs.
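A minimal sketch of the difference, assuming langchain-openai is installed and OPENAI_API_KEY is set:

from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

chat = ChatOpenAI(model="gpt-4o")
# Input is a list of chat messages rather than a raw string...
response = chat.invoke([HumanMessage(content="Translate 'hello' to French.")])
# ...and the output is an AIMessage, not plain text.
print(response.content)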

FUNCTION DESCRIPTION
init_chat_model

Initialize a chat model in a single line using the model's name and provider.

init_chat_model

init_chat_model(
    model: str | None = None,
    *,
    model_provider: str | None = None,
    configurable_fields: Literal["any"] | list[str] | tuple[str, ...] | None = None,
    config_prefix: str | None = None,
    **kwargs: Any,
) -> BaseChatModel | _ConfigurableModel

Initialize a chat model in a single line using the model's name and provider.

Note

Requires the integration package for your model provider to be installed.

See the model_provider parameter below for specific package names (e.g., pip install langchain-openai).

Refer to the provider integration's API reference for supported model parameters.

PARAMETER DESCRIPTION
model

The name of the model, e.g. 'o3-mini', 'claude-sonnet-4-5'.

You can also specify the model and model provider in a single argument using the '{model_provider}:{model}' format, e.g. 'openai:o1'.

TYPE: str | None DEFAULT: None
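For example, the following two calls are equivalent (a minimal sketch; assumes langchain-openai is installed):

from langchain_classic.chat_models import init_chat_model

# Provider and model in a single string...
model_a = init_chat_model("openai:o3-mini")
# ...or passed as separate arguments.
model_b = init_chat_model("o3-mini", model_provider="openai")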

model_provider

The model provider, if not specified as part of the model argument (see above). Each supported model_provider value has a corresponding integration package (e.g., 'openai' requires langchain-openai).

If model_provider is not specified, it will be inferred from model based on these model prefixes (see the sketch after this list):

  • gpt-... | o1... | o3... -> openai
  • claude... -> anthropic
  • amazon... -> bedrock
  • gemini... -> google_vertexai
  • command... -> cohere
  • accounts/fireworks... -> fireworks
  • mistral... -> mistralai
  • deepseek... -> deepseek
  • grok... -> xai
  • sonar... -> perplexity

TYPE: str | None DEFAULT: None
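For example, inference lets the provider be omitted entirely (a sketch; assumes langchain-anthropic and langchain-openai are installed):

from langchain_classic.chat_models import init_chat_model

# "claude..." is inferred as anthropic, "gpt-..." as openai.
claude = init_chat_model("claude-sonnet-4-5")
gpt = init_chat_model("gpt-4o")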

configurable_fields

Which model parameters are configurable:

  • None: No configurable fields.
  • 'any': All fields are configurable. See security note below.
  • list[str] | tuple[str, ...]: Specified fields are configurable.

Fields are assumed to have config_prefix stripped if there is a config_prefix. If model is specified, then defaults to None. If model is not specified, then defaults to ("model", "model_provider").

Security note

Setting configurable_fields="any" means fields like api_key, base_url, etc. can be altered at runtime, potentially redirecting model requests to a different service/user. If you're accepting untrusted configurations, make sure to enumerate the configurable fields explicitly with configurable_fields=(...).

TYPE: Literal['any'] | list[str] | tuple[str, ...] | None DEFAULT: None
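For example, a model that accepts untrusted configurations might enumerate its configurable fields explicitly (a minimal sketch; assumes langchain-openai is installed):

from langchain_classic.chat_models import init_chat_model

# Only model and temperature can be overridden at runtime;
# sensitive fields like api_key and base_url stay fixed.
safe_model = init_chat_model(
    "openai:gpt-4o",
    configurable_fields=("model", "temperature"),
)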

config_prefix

If config_prefix is a non-empty string, then the model will be configurable at runtime via the config["configurable"]["{config_prefix}_{param}"] keys. If config_prefix is an empty string, then the model will be configurable via config["configurable"]["{param}"].

TYPE: str | None DEFAULT: None

temperature

Model temperature.

max_tokens

Max output tokens.

timeout

The maximum time (in seconds) to wait for a response from the model before canceling the request.

max_retries

The maximum number of attempts the system will make to resend a request if it fails due to issues like network timeouts or rate limits.

base_url

The URL of the API endpoint where requests are sent.

rate_limiter

A BaseRateLimiter to space out requests to avoid exceeding rate limits.

kwargs

Additional model-specific keyword args to pass to <<selected chat model>>.__init__(model=model_name, **kwargs).

TYPE: Any DEFAULT: {}
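These keyword arguments can be combined in a single call. A sketch, assuming langchain-openai is installed; InMemoryRateLimiter is one BaseRateLimiter implementation from langchain_core:

from langchain_classic.chat_models import init_chat_model
from langchain_core.rate_limiters import InMemoryRateLimiter

model = init_chat_model(
    "openai:gpt-4o",
    temperature=0.2,
    max_tokens=512,
    timeout=30,  # seconds to wait before canceling a request
    max_retries=2,  # retries on transient failures like rate limits
    rate_limiter=InMemoryRateLimiter(requests_per_second=1),
)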

RETURNS DESCRIPTION
BaseChatModel | _ConfigurableModel

A BaseChatModel corresponding to the specified model and model_provider if configurability is inferred to be False. If configurable, a chat model emulator that initializes the underlying model at runtime once a config is passed in.

RAISES DESCRIPTION
ValueError

If model_provider cannot be inferred or isn't supported.

ImportError

If the model provider integration package is not installed.
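A sketch of handling both failure modes (the model name here is hypothetical):

from langchain_classic.chat_models import init_chat_model

try:
    model = init_chat_model("my-custom-model")  # hypothetical name; no inferable prefix
except ValueError:
    # Provider could not be inferred; pass it explicitly instead.
    model = init_chat_model("my-custom-model", model_provider="openai")
except ImportError:
    # The provider's integration package is missing,
    # e.g. `pip install langchain-openai`.
    raise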

Initialize a non-configurable model
# pip install langchain langchain-openai langchain-anthropic langchain-google-vertexai
from langchain_classic.chat_models import init_chat_model

o3_mini = init_chat_model("openai:o3-mini", temperature=0)
claude_sonnet = init_chat_model("anthropic:claude-sonnet-4-5", temperature=0)
gemini_2_flash = init_chat_model(
    "google_vertexai:gemini-2.5-flash", temperature=0
)

o3_mini.invoke("what's your name")
claude_sonnet.invoke("what's your name")
gemini_2_flash.invoke("what's your name")
Partially configurable model with no default
# pip install langchain langchain-openai langchain-anthropic
from langchain_classic.chat_models import init_chat_model

# configurable_fields defaults to ("model", "model_provider") when no model is specified.
configurable_model = init_chat_model(temperature=0)

configurable_model.invoke(
    "what's your name", config={"configurable": {"model": "gpt-4o"}}
)
# GPT-4o response

configurable_model.invoke(
    "what's your name",
    config={"configurable": {"model": "claude-sonnet-4-5"}},
)
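# claude-sonnet-4-5 response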
Fully configurable model with a default
# pip install langchain langchain-openai langchain-anthropic
from langchain_classic.chat_models import init_chat_model

configurable_model_with_default = init_chat_model(
    "openai:gpt-4o",
    # "any" allows other params like temperature, max_tokens, etc.
    # to be configured at runtime.
    configurable_fields="any",
    config_prefix="foo",
    temperature=0,
)

configurable_model_with_default.invoke("what's your name")
# GPT-4o response with temperature 0

configurable_model_with_default.invoke(
    "what's your name",
    config={
        "configurable": {
            "foo_model": "anthropic:claude-sonnet-4-5",
            "foo_temperature": 0.6,
        }
    },
)
Bind tools to a configurable model

You can call any of a chat model's declarative methods on a configurable model in the same way that you would with a normal model:

# pip install langchain langchain-openai langchain-anthropic
from langchain_classic.chat_models import init_chat_model
from pydantic import BaseModel, Field


class GetWeather(BaseModel):
    '''Get the current weather in a given location'''

    location: str = Field(
        ..., description="The city and state, e.g. San Francisco, CA"
    )


class GetPopulation(BaseModel):
    '''Get the current population in a given location'''

    location: str = Field(
        ..., description="The city and state, e.g. San Francisco, CA"
    )


configurable_model = init_chat_model(
    "gpt-4o", configurable_fields=("model", "model_provider"), temperature=0
)

configurable_model_with_tools = configurable_model.bind_tools(
    [
        GetWeather,
        GetPopulation,
    ]
)
configurable_model_with_tools.invoke(
    "Which city is hotter today and which is bigger: LA or NY?"
)

configurable_model_with_tools.invoke(
    "Which city is hotter today and which is bigger: LA or NY?",
    config={"configurable": {"model": "claude-sonnet-4-5"}},
)

Behavior changed in 0.2.8

Support for configurable_fields and config_prefix added.

Behavior changed in 0.2.12

Support for Ollama via langchain-ollama package added (langchain_ollama.ChatOllama). Previously, the now-deprecated langchain-community version of Ollama was imported (langchain_community.chat_models.ChatOllama).

Support for AWS Bedrock models via the Converse API added (model_provider="bedrock_converse").

Behavior changed in 0.3.5

Out of beta.

Behavior changed in 0.3.19

Support for DeepSeek, IBM, Nvidia, and xAI models added.