    init_chat_model

    Function · Since v1.0

    Initialize a chat model from any supported provider using a unified interface.

    Two main use cases:

    1. Fixed model – specify the model upfront and get back a ready-to-use chat model.
    2. Configurable model – specify parameters (including the model name) at runtime via config, making it easy to switch between models and providers without changing your code.
    Note

    Requires the integration package for the chosen model provider to be installed.

    See the model_provider parameter below for specific package names (e.g., pip install langchain-openai).

    Refer to the provider integration's API reference for supported model parameters to use as **kwargs.

    init_chat_model(
      model: str | None = None,
      *,
      model_provider: str | None = None,
      configurable_fields: Literal['any'] | list[str] | tuple[str, ...] | None = None,
      config_prefix: str | None = None,
      **kwargs: Any,
    ) -> BaseChatModel | _ConfigurableModel
    Initialize a non-configurable model
    # pip install langchain langchain-openai langchain-anthropic langchain-google-vertexai
    
    from langchain_classic.chat_models import init_chat_model
    
    o3_mini = init_chat_model("openai:o3-mini", temperature=0)
    claude_sonnet = init_chat_model("anthropic:claude-sonnet-4-5-20250929", temperature=0)
    gemini_2_5_flash = init_chat_model(
        "google_vertexai:gemini-2.5-flash", temperature=0
    )
    
    o3_mini.invoke("what's your name")
    claude_sonnet.invoke("what's your name")
    gemini_2_5_flash.invoke("what's your name")
    Partially configurable model with no default
    # pip install langchain langchain-openai langchain-anthropic
    
    from langchain_classic.chat_models import init_chat_model
    
    # (We don't need to set configurable_fields when no model is specified; it defaults to ("model", "model_provider").)
    configurable_model = init_chat_model(temperature=0)
    
    configurable_model.invoke(
        "what's your name", config={"configurable": {"model": "gpt-4o"}}
    )
    # Use GPT-4o to generate the response
    
    configurable_model.invoke(
        "what's your name",
        config={"configurable": {"model": "claude-sonnet-4-5-20250929"}},
    )
    Fully configurable model with a default
    # pip install langchain langchain-openai langchain-anthropic
    
    from langchain_classic.chat_models import init_chat_model
    
    configurable_model_with_default = init_chat_model(
        "openai:gpt-4o",
        configurable_fields="any",  # This allows us to configure other params like temperature, max_tokens, etc at runtime.
        config_prefix="foo",
        temperature=0,
    )
    
    configurable_model_with_default.invoke("what's your name")
    # GPT-4o response with temperature 0 (as set in default)
    
    configurable_model_with_default.invoke(
        "what's your name",
        config={
            "configurable": {
                "foo_model": "anthropic:claude-sonnet-4-5-20250929",
                "foo_temperature": 0.6,
            }
        },
    )
    # Override default to use Sonnet 4.5 with temperature 0.6 to generate response
    Bind tools to a configurable model

    You can call any of a chat model's declarative methods (such as bind_tools) on a configurable model in the same way you would on a regular model:

    # pip install langchain langchain-openai langchain-anthropic
    
    from langchain_classic.chat_models import init_chat_model
    from pydantic import BaseModel, Field
    
    class GetWeather(BaseModel):
        '''Get the current weather in a given location'''
    
        location: str = Field(
            ..., description="The city and state, e.g. San Francisco, CA"
        )
    
    class GetPopulation(BaseModel):
        '''Get the current population in a given location'''
    
        location: str = Field(
            ..., description="The city and state, e.g. San Francisco, CA"
        )
    
    configurable_model = init_chat_model(
        "gpt-4o", configurable_fields=("model", "model_provider"), temperature=0
    )
    
    configurable_model_with_tools = configurable_model.bind_tools(
        [
            GetWeather,
            GetPopulation,
        ]
    )
    configurable_model_with_tools.invoke(
        "Which city is hotter today and which is bigger: LA or NY?"
    )
    # Use GPT-4o
    
    configurable_model_with_tools.invoke(
        "Which city is hotter today and which is bigger: LA or NY?",
        config={"configurable": {"model": "claude-sonnet-4-5-20250929"}},
    )
    # Use Sonnet 4.5
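    Use other declarative methods on a configurable model

    Other declarative methods work the same way. A minimal sketch using with_structured_output, reusing the GetWeather schema and configurable model defined above (the prompt is illustrative):

    structured_model = configurable_model.with_structured_output(GetWeather)

    structured_model.invoke(
        "What's the weather like in San Francisco, CA?",
        config={"configurable": {"model": "claude-sonnet-4-5-20250929"}},
    )
    # Returns a GetWeather instance populated by the selected model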
    Behavior changed in langchain 0.2.8

    Support for configurable_fields and config_prefix added.

    Behavior changed in langchain 0.2.12

    Support for Ollama via langchain-ollama package added (langchain_ollama.ChatOllama). Previously, the now-deprecated langchain-community version of Ollama was imported (langchain_community.chat_models.ChatOllama).

    Support for AWS Bedrock models via the Converse API added (model_provider="bedrock_converse").

    Behavior changed in langchain 0.3.5

    Out of beta.

    Behavior changed in langchain 0.3.19

    Support for DeepSeek, IBM, NVIDIA, and xAI models added.

    Used in Docs

    • Build a multi-source knowledge base with routing
    • Build a personal assistant with subagents
    • Build a SQL agent
    • Build customer support with handoffs
    • Custom middleware
    (28 more not shown)

    Parameters

    model: str | None
    Default: None

    The name or ID of the model, e.g. 'o3-mini', 'claude-sonnet-4-5-20250929'.

    You can also specify model and model provider in a single argument using '{model_provider}:{model}' format, e.g. 'openai:o1'.

    Will attempt to infer model_provider from model if not specified.

    The following providers will be inferred based on these model prefixes:

    • gpt-... | o1... | o3... -> openai
    • claude... -> anthropic
    • amazon... -> bedrock
    • gemini... -> google_vertexai
    • command... -> cohere
    • accounts/fireworks... -> fireworks
    • mistral... -> mistralai
    • deepseek... -> deepseek
    • grok... -> xai
    • sonar... -> perplexity
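
    For example, the prefixes above let you omit the provider entirely (a minimal sketch; the model names are illustrative):

    gpt = init_chat_model("gpt-4o")                          # inferred provider: openai
    claude = init_chat_model("claude-sonnet-4-5-20250929")   # inferred provider: anthropic
    gemini = init_chat_model("gemini-2.5-flash")             # inferred provider: google_vertexai
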
    model_provider: str | None
    Default: None

    The model provider if not specified as part of the model arg (see above).

    Supported model_provider values and their corresponding integration packages:

    • openai -> langchain-openai
    • anthropic -> langchain-anthropic
    • azure_openai -> langchain-openai
    • azure_ai -> langchain-azure-ai
    • google_vertexai -> langchain-google-vertexai
    • google_genai -> langchain-google-genai
    • bedrock -> langchain-aws
    • bedrock_converse -> langchain-aws
    • cohere -> langchain-cohere
    • fireworks -> langchain-fireworks
    • together -> langchain-together
    • mistralai -> langchain-mistralai
    • huggingface -> langchain-huggingface
    • groq -> langchain-groq
    • ollama -> langchain-ollama
    • google_anthropic_vertex -> langchain-google-vertexai
    • deepseek -> langchain-deepseek
    • ibm -> langchain-ibm
    • nvidia -> langchain-nvidia-ai-endpoints
    • xai -> langchain-xai
    • perplexity -> langchain-perplexity
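
    When the provider can't be inferred from the model name, pass it explicitly (a minimal sketch; the Ollama model name is illustrative and requires langchain-ollama to be installed):

    local_llama = init_chat_model("llama3.1", model_provider="ollama")
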
    configurable_fields: Literal['any'] | list[str] | tuple[str, ...] | None
    Default: None

    Which model parameters are configurable at runtime:

    • None: No configurable fields (i.e., a fixed model).
    • 'any': All fields are configurable. See security note below.
    • list[str] | tuple[str, ...]: Specified fields are configurable.

    If a config_prefix is specified, field names are assumed to have the prefix already stripped.

    If model is specified, then defaults to None.

    If model is not specified, then defaults to ("model", "model_provider").

    Security note

    Setting configurable_fields="any" means fields like api_key, base_url, etc., can be altered at runtime, potentially redirecting model requests to a different service/user.

    If you're accepting untrusted configurations, make sure to enumerate configurable_fields=(...) explicitly.
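
    For example, a minimal sketch that accepts only a safe subset of runtime overrides (model names as in the examples above):

    # Only model and temperature may be overridden at runtime;
    # fields such as api_key and base_url stay fixed.
    restricted_model = init_chat_model(
        "openai:gpt-4o",
        configurable_fields=("model", "temperature"),
    )

    restricted_model.invoke(
        "what's your name",
        config={
            "configurable": {
                "model": "anthropic:claude-sonnet-4-5-20250929",
                "temperature": 0.2,
            }
        },
    )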

    config_prefix: str | None
    Default: None

    Optional prefix for configuration keys.

    Useful when you have multiple configurable models in the same application.

    If config_prefix is a non-empty string, then the model will be configurable at runtime via the config["configurable"]["{config_prefix}_{param}"] keys. See the examples above and below.

    If config_prefix is an empty string, then the model will be configurable via config["configurable"]["{param}"].
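
    For instance, a minimal sketch of two independently configurable models in one application, kept separate by prefix (the prefixes and prompts are illustrative):

    writer = init_chat_model(
        "openai:gpt-4o", configurable_fields="any", config_prefix="writer"
    )
    critic = init_chat_model(
        "openai:gpt-4o", configurable_fields="any", config_prefix="critic"
    )

    config = {
        "configurable": {
            "writer_model": "anthropic:claude-sonnet-4-5-20250929",
            "critic_temperature": 0.2,
        }
    }
    writer.invoke("Draft a haiku about the sea.", config=config)  # Uses Sonnet 4.5
    critic.invoke("Critique the haiku above.", config=config)  # Uses gpt-4o at temperature 0.2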

    **kwargs: Any
    Default: {}

    Additional model-specific keyword args to pass to the underlying chat model's __init__ method. Common parameters include:

    • temperature: Model temperature for controlling randomness.
    • max_tokens: Maximum number of output tokens.
    • timeout: Maximum time (in seconds) to wait for a response.
    • max_retries: Maximum number of retry attempts for failed requests.
    • base_url: Custom API endpoint URL.
    • rate_limiter: A BaseRateLimiter instance to control request rate.

    Refer to the specific model provider's integration reference for all available parameters.
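
    As a minimal sketch of forwarding common keyword args to the underlying model (exact parameter support varies by provider):

    model = init_chat_model(
        "openai:gpt-4o",
        temperature=0,
        max_tokens=512,
        timeout=30,
        max_retries=2,
    )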
