Chat Models

Chat models.

Modules:

Name Description
base

Factory functions for chat models.

Classes:

Name Description
BaseChatModel

Base class for chat models.

Functions:

Name Description
init_chat_model

Initialize a ChatModel from the model name and provider.

BaseChatModel

Bases: BaseLanguageModel[AIMessage], ABC

Base class for chat models.

Key imperative methods

Methods that actually call the underlying model.

| Method | Input | Output | Description |
|--------|-------|--------|-------------|
| invoke | str \| list[dict \| tuple \| BaseMessage] \| PromptValue | BaseMessage | A single chat model call. |
| ainvoke | Same as invoke | BaseMessage | Defaults to running invoke in an async executor. |
| stream | Same as invoke | Iterator[BaseMessageChunk] | Defaults to yielding output of invoke. |
| astream | Same as invoke | AsyncIterator[BaseMessageChunk] | Defaults to yielding output of ainvoke. |
| astream_events | Same as invoke | AsyncIterator[StreamEvent] | Event types: 'on_chat_model_start', 'on_chat_model_stream', 'on_chat_model_end'. |
| batch | list of invoke inputs | list[BaseMessage] | Defaults to running invoke in concurrent threads. |
| abatch | list of invoke inputs | list[BaseMessage] | Defaults to running ainvoke concurrently. |
| batch_as_completed | list of invoke inputs | Iterator[tuple[int, Union[BaseMessage, Exception]]] | Defaults to running invoke in concurrent threads. |
| abatch_as_completed | list of invoke inputs | AsyncIterator[tuple[int, Union[BaseMessage, Exception]]] | Defaults to running ainvoke concurrently. |

This table provides a brief overview of the main imperative methods. Please see the base Runnable reference for full documentation.
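
As a quick illustration, here is a minimal sketch of these methods in use, assuming the langchain-openai partner package is installed and an API key is configured (any provider's chat model behaves the same way; the model choice is illustrative):

.. code-block:: python

    from langchain_openai import ChatOpenAI  # assumes langchain-openai is installed

    model = ChatOpenAI(model="gpt-4o-mini")  # model choice is illustrative

    # Single call: accepts a string, a list of messages, or a PromptValue.
    response = model.invoke("Translate 'hello' to French.")
    print(response.content)

    # Streaming: yields BaseMessageChunk objects as they are produced.
    for chunk in model.stream("Tell me a short joke."):
        print(chunk.content, end="", flush=True)

    # Batch: runs invoke over several inputs in concurrent threads.
    results = model.batch(["What is 2 + 2?", "Name a primary color."])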

Key declarative methods

Methods for creating another Runnable using the ChatModel.

| Method | Description |
|--------|-------------|
| bind_tools | Create ChatModel that can call tools. |
| with_structured_output | Create wrapper that structures model output using schema. |
| with_retry | Create wrapper that retries model calls on failure. |
| with_fallbacks | Create wrapper that falls back to other models on failure. |
| configurable_fields | Specify init args of the model that can be configured at runtime via the RunnableConfig. |
| configurable_alternatives | Specify alternative models which can be swapped in at runtime via the RunnableConfig. |

This table provides a brief overview of the main declarative methods. Please see the reference for each method for full documentation.
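
As a sketch of how these compose (the model and schema names are illustrative; each method returns a new Runnable and leaves the original model unchanged):

.. code-block:: python

    from pydantic import BaseModel

    from langchain_openai import ChatOpenAI  # assumes langchain-openai is installed


    class Joke(BaseModel):
        setup: str
        punchline: str


    model = ChatOpenAI(model="gpt-4o-mini")

    # Declarative methods chain: retry up to twice, then fall back to
    # a second model on failure.
    resilient = model.with_retry(stop_after_attempt=2).with_fallbacks(
        [ChatOpenAI(model="gpt-4o")]
    )

    # Structured output: invoke() now returns a Joke instance.
    structured = model.with_structured_output(Joke)
    joke = structured.invoke("Tell me a joke about cats.")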

Creating a custom chat model

Custom chat model implementations should inherit from this class. Please reference the table below for information about which methods and properties are required or optional for implementations.

| Method/Property | Description | Required/Optional |
|-----------------|-------------|-------------------|
| _generate | Use to generate a chat result from a prompt. | Required |
| _llm_type (property) | Used to uniquely identify the type of the model. Used for logging. | Required |
| _identifying_params (property) | Represent model parameterization for tracing purposes. | Optional |
| _stream | Use to implement streaming. | Optional |
| _agenerate | Use to implement a native async method. | Optional |
| _astream | Use to implement the async version of _stream. | Optional |

Follow the how-to guide for more information on implementing a custom chat model.
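
A minimal sketch of such a subclass, implementing only the required _generate method and _llm_type property (the EchoChatModel name and echoing behavior are purely illustrative):

.. code-block:: python

    from typing import Any, Optional

    from langchain_core.callbacks import CallbackManagerForLLMRun
    from langchain_core.language_models import BaseChatModel
    from langchain_core.messages import AIMessage, BaseMessage
    from langchain_core.outputs import ChatGeneration, ChatResult


    class EchoChatModel(BaseChatModel):
        """Toy chat model that echoes the last message back."""

        def _generate(
            self,
            messages: list[BaseMessage],
            stop: Optional[list[str]] = None,
            run_manager: Optional[CallbackManagerForLLMRun] = None,
            **kwargs: Any,
        ) -> ChatResult:
            # Required: produce a ChatResult from the prompt messages.
            text = messages[-1].content if messages else ""
            generation = ChatGeneration(message=AIMessage(content=text))
            return ChatResult(generations=[generation])

        @property
        def _llm_type(self) -> str:
            # Required: unique identifier for the model type, used for logging.
            return "echo-chat-model"


    print(EchoChatModel().invoke("hello").content)  # -> hello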

Methods:

Name Description
get_name

Get the name of the Runnable.

get_input_schema

Get a pydantic model that can be used to validate input to the Runnable.

get_input_jsonschema

Get a JSON schema that represents the input to the Runnable.

get_output_schema

Get a pydantic model that can be used to validate output to the Runnable.

get_output_jsonschema

Get a JSON schema that represents the output of the Runnable.

config_schema

The type of config this Runnable accepts specified as a pydantic model.

get_config_jsonschema

Get a JSON schema that represents the config of the Runnable.

get_graph

Return a graph representation of this Runnable.

get_prompts

Return a list of prompts used by this Runnable.

__or__

Runnable "or" operator.

__ror__

Runnable "reverse-or" operator.

pipe

Pipe runnables.

pick

Pick keys from the output dict of this Runnable.

assign

Assigns new fields to the dict output of this Runnable.

batch

Default implementation runs invoke in parallel using a thread pool executor.

batch_as_completed

Run invoke in parallel on a list of inputs.

abatch

Default implementation runs ainvoke in parallel using asyncio.gather.

abatch_as_completed

Run ainvoke in parallel on a list of inputs.

astream_log

Stream all output from a Runnable, as reported to the callback system.

astream_events

Generate a stream of events.

transform

Transform inputs to outputs.

atransform

Transform inputs to outputs.

bind

Bind arguments to a Runnable, returning a new Runnable.

with_config

Bind config to a Runnable, returning a new Runnable.

with_listeners

Bind lifecycle listeners to a Runnable, returning a new Runnable.

with_alisteners

Bind async lifecycle listeners to a Runnable.

with_types

Bind input and output types to a Runnable, returning a new Runnable.

with_retry

Create a new Runnable that retries the original Runnable on exceptions.

map

Return a new Runnable that maps a list of inputs to a list of outputs.

with_fallbacks

Add fallbacks to a Runnable, returning a new Runnable.

as_tool

Create a BaseTool from a Runnable.

__init__
is_lc_serializable

Is this class serializable?

get_lc_namespace

Get the namespace of the langchain object.

lc_id

Return a unique identifier for this class for serialization purposes.

to_json

Serialize the Runnable to JSON.

to_json_not_implemented

Serialize a "not implemented" object.

configurable_fields

Configure particular Runnable fields at runtime.

configurable_alternatives

Configure alternatives for Runnables that can be set at runtime.

set_verbose

If verbose is None, set it from the global verbosity setting.

get_token_ids

Return the ordered ids of the tokens in a text.

get_num_tokens

Get the number of tokens present in the text.

get_num_tokens_from_messages

Get the number of tokens in the messages.

raise_deprecation

Emit deprecation warning if callback_manager is used.

generate

Pass a sequence of prompts to the model and return model generations.

agenerate

Asynchronously pass a sequence of prompts to a model and return generations.

__call__

Call the model.

call_as_llm

Call the model.

predict

Predict the next message.

dict

Return a dictionary of the LLM.

bind_tools

Bind tools to the model.

with_structured_output

Model wrapper that returns outputs formatted to match the given schema.

Attributes:

Name Type Description
InputType TypeAlias

Get the input type for this runnable.

input_schema type[BaseModel]

The type of input this Runnable accepts specified as a pydantic model.

output_schema type[BaseModel]

Output schema.

config_specs list[ConfigurableFieldSpec]

List configurable fields for this Runnable.

lc_secrets dict[str, str]

A map of constructor argument names to secret ids.

lc_attributes dict

List of attribute names that should be included in the serialized kwargs.

cache Union[BaseCache, bool, None]

Whether to cache the response.

verbose bool

Whether to print out response text.

callbacks Callbacks

Callbacks to add to the run trace.

tags Optional[list[str]]

Tags to add to the run trace.

metadata Optional[dict[str, Any]]

Metadata to add to the run trace.

custom_get_token_ids Optional[Callable[[str], list[int]]]

Optional encoder to use for counting tokens.

rate_limiter Optional[BaseRateLimiter]

An optional rate limiter to use for limiting the number of requests.

disable_streaming Union[bool, Literal['tool_calling']]

Whether to disable streaming for this model.

output_version Optional[str]

Version of AIMessage output format to store in message content.

OutputType Any

Get the output type for this runnable.

InputType property

InputType: TypeAlias

Get the input type for this runnable.

input_schema property

input_schema: type[BaseModel]

The type of input this Runnable accepts specified as a pydantic model.

output_schema property

output_schema: type[BaseModel]

Output schema.

The type of output this Runnable produces specified as a pydantic model.

config_specs property

config_specs: list[ConfigurableFieldSpec]

List configurable fields for this Runnable.

lc_secrets property

lc_secrets: dict[str, str]

A map of constructor argument names to secret ids.

For example, {"openai_api_key": "OPENAI_API_KEY"}

lc_attributes property

lc_attributes: dict

List of attribute names that should be included in the serialized kwargs.

These attributes must be accepted by the constructor. Default is an empty dictionary.

cache class-attribute instance-attribute

cache: Union[BaseCache, bool, None] = Field(
    default=None, exclude=True
)

Whether to cache the response.

  • If True, will use the global cache.
  • If False, will not use a cache.
  • If None, will use the global cache if it's set, otherwise no cache.
  • If an instance of BaseCache, will use the provided cache.

Caching is not currently supported for streaming methods of models.
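
For example, a per-model in-memory cache can be attached at construction time (a sketch; the provider model is illustrative, and InMemoryCache comes from langchain_core.caches):

.. code-block:: python

    from langchain_core.caches import InMemoryCache
    from langchain_openai import ChatOpenAI  # assumes langchain-openai is installed

    model = ChatOpenAI(model="gpt-4o-mini", cache=InMemoryCache())

    model.invoke("What is the capital of France?")  # calls the API
    model.invoke("What is the capital of France?")  # served from the cache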

verbose class-attribute instance-attribute

verbose: bool = Field(
    default_factory=_get_verbosity, exclude=True, repr=False
)

Whether to print out response text.

callbacks class-attribute instance-attribute

callbacks: Callbacks = Field(default=None, exclude=True)

Callbacks to add to the run trace.

tags class-attribute instance-attribute

tags: Optional[list[str]] = Field(
    default=None, exclude=True
)

Tags to add to the run trace.

metadata class-attribute instance-attribute

metadata: Optional[dict[str, Any]] = Field(
    default=None, exclude=True
)

Metadata to add to the run trace.

custom_get_token_ids class-attribute instance-attribute

custom_get_token_ids: Optional[
    Callable[[str], list[int]]
] = Field(default=None, exclude=True)

Optional encoder to use for counting tokens.

rate_limiter class-attribute instance-attribute

rate_limiter: Optional[BaseRateLimiter] = Field(
    default=None, exclude=True
)

An optional rate limiter to use for limiting the number of requests.

disable_streaming class-attribute instance-attribute

disable_streaming: Union[bool, Literal["tool_calling"]] = (
    False
)

Whether to disable streaming for this model.

If streaming is bypassed, then stream()/astream()/astream_events() will defer to invoke()/ainvoke().

  • If True, will always bypass streaming case.
  • If 'tool_calling', will bypass streaming case only when the model is called with a tools keyword argument. In other words, LangChain will automatically switch to non-streaming behavior (invoke()) only when the tools argument is provided. This offers the best of both worlds.
  • If False (default), will always use streaming case if available.

The main reason for this flag is that code might be written using stream() and a user may want to swap out a given model for another model whose implementation does not properly support streaming.
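
For example (a sketch; the provider model is illustrative):

.. code-block:: python

    from langchain_openai import ChatOpenAI  # assumes langchain-openai is installed

    # stream()/astream() defer to invoke()/ainvoke() only when tools are bound.
    model = ChatOpenAI(model="gpt-4o-mini", disable_streaming="tool_calling")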

output_version class-attribute instance-attribute

output_version: Optional[str] = Field(
    default_factory=from_env(
        "LC_OUTPUT_VERSION", default=None
    )
)

Version of AIMessage output format to store in message content.

AIMessage.content_blocks will lazily parse the contents of content into a standard format. This flag can be used to additionally store the standard format in message content, e.g., for serialization purposes.

Supported values:

  • "v0": provider-specific format in content (can lazily-parse with .content_blocks)
  • "v1": standardized format in content (consistent with .content_blocks)

Partner packages (e.g., langchain-openai) can also use this field to roll out new content formats in a backward-compatible way.

.. versionadded:: 1.0

OutputType property

OutputType: Any

Get the output type for this runnable.

get_name

get_name(
    suffix: Optional[str] = None,
    *,
    name: Optional[str] = None,
) -> str

Get the name of the Runnable.

Parameters:

Name Type Description Default
suffix Optional[str]

An optional suffix to append to the name.

None
name Optional[str]

An optional name to use instead of the Runnable's name.

None

Returns:

Type Description
str

The name of the Runnable.

get_input_schema

get_input_schema(
    config: Optional[RunnableConfig] = None,
) -> type[BaseModel]

Get a pydantic model that can be used to validate input to the Runnable.

Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic input schema that depends on which configuration the Runnable is invoked with.

This method allows you to get an input schema for a specific configuration.

Parameters:

Name Type Description Default
config Optional[RunnableConfig]

A config to use when generating the schema.

None

Returns:

Type Description
type[BaseModel]

A pydantic model that can be used to validate input.

get_input_jsonschema

get_input_jsonschema(
    config: Optional[RunnableConfig] = None,
) -> dict[str, Any]

Get a JSON schema that represents the input to the Runnable.

Parameters:

Name Type Description Default
config Optional[RunnableConfig]

A config to use when generating the schema.

None

Returns:

Type Description
dict[str, Any]

A JSON schema that represents the input to the Runnable.

Example:

.. code-block:: python

    from langchain_core.runnables import RunnableLambda


    def add_one(x: int) -> int:
        return x + 1


    runnable = RunnableLambda(add_one)

    print(runnable.get_input_jsonschema())

.. versionadded:: 0.3.0

get_output_schema

get_output_schema(
    config: Optional[RunnableConfig] = None,
) -> type[BaseModel]

Get a pydantic model that can be used to validate output to the Runnable.

Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic output schema that depends on which configuration the Runnable is invoked with.

This method allows you to get an output schema for a specific configuration.

Parameters:

Name Type Description Default
config Optional[RunnableConfig]

A config to use when generating the schema.

None

Returns:

Type Description
type[BaseModel]

A pydantic model that can be used to validate output.

get_output_jsonschema

get_output_jsonschema(
    config: Optional[RunnableConfig] = None,
) -> dict[str, Any]

Get a JSON schema that represents the output of the Runnable.

Parameters:

Name Type Description Default
config Optional[RunnableConfig]

A config to use when generating the schema.

None

Returns:

Type Description
dict[str, Any]

A JSON schema that represents the output of the Runnable.

Example:

.. code-block:: python

    from langchain_core.runnables import RunnableLambda


    def add_one(x: int) -> int:
        return x + 1


    runnable = RunnableLambda(add_one)

    print(runnable.get_output_jsonschema())

.. versionadded:: 0.3.0

config_schema

config_schema(
    *, include: Optional[Sequence[str]] = None
) -> type[BaseModel]

The type of config this Runnable accepts specified as a pydantic model.

To mark a field as configurable, see the configurable_fields and configurable_alternatives methods.

Parameters:

Name Type Description Default
include Optional[Sequence[str]]

A list of fields to include in the config schema.

None

Returns:

Type Description
type[BaseModel]

A pydantic model that can be used to validate config.

get_config_jsonschema

get_config_jsonschema(
    *, include: Optional[Sequence[str]] = None
) -> dict[str, Any]

Get a JSON schema that represents the config of the Runnable.

Parameters:

Name Type Description Default
include Optional[Sequence[str]]

A list of fields to include in the config schema.

None

Returns:

Type Description
dict[str, Any]

A JSON schema that represents the config of the Runnable.

.. versionadded:: 0.3.0

get_graph

get_graph(config: Optional[RunnableConfig] = None) -> Graph

Return a graph representation of this Runnable.

get_prompts

get_prompts(
    config: Optional[RunnableConfig] = None,
) -> list[BasePromptTemplate]

Return a list of prompts used by this Runnable.

__or__

__or__(
    other: Union[
        Runnable[Any, Other],
        Callable[[Iterator[Any]], Iterator[Other]],
        Callable[
            [AsyncIterator[Any]], AsyncIterator[Other]
        ],
        Callable[[Any], Other],
        Mapping[
            str,
            Union[
                Runnable[Any, Other],
                Callable[[Any], Other],
                Any,
            ],
        ],
    ],
) -> RunnableSerializable[Input, Other]

Runnable "or" operator.

Compose this Runnable with another object to create a RunnableSequence.

Parameters:

Name Type Description Default
other Union[Runnable[Any, Other], Callable[[Iterator[Any]], Iterator[Other]], Callable[[AsyncIterator[Any]], AsyncIterator[Other]], Callable[[Any], Other], Mapping[str, Union[Runnable[Any, Other], Callable[[Any], Other], Any]]]

Another Runnable or a Runnable-like object.

required

Returns:

Type Description
RunnableSerializable[Input, Other]

A new Runnable.

__ror__

__ror__(
    other: Union[
        Runnable[Other, Any],
        Callable[[Iterator[Other]], Iterator[Any]],
        Callable[
            [AsyncIterator[Other]], AsyncIterator[Any]
        ],
        Callable[[Other], Any],
        Mapping[
            str,
            Union[
                Runnable[Other, Any],
                Callable[[Other], Any],
                Any,
            ],
        ],
    ],
) -> RunnableSerializable[Other, Output]

Runnable "reverse-or" operator.

Compose this Runnable with another object to create a RunnableSequence.

Parameters:

Name Type Description Default
other Union[Runnable[Other, Any], Callable[[Iterator[Other]], Iterator[Any]], Callable[[AsyncIterator[Other]], AsyncIterator[Any]], Callable[[Other], Any], Mapping[str, Union[Runnable[Other, Any], Callable[[Other], Any], Any]]]

Another Runnable or a Runnable-like object.

required

Returns:

Type Description
RunnableSerializable[Other, Output]

A new Runnable.

pipe

pipe(
    *others: Union[
        Runnable[Any, Other], Callable[[Any], Other]
    ],
    name: Optional[str] = None,
) -> RunnableSerializable[Input, Other]

Pipe runnables.

Compose this Runnable with Runnable-like objects to make a RunnableSequence.

Equivalent to RunnableSequence(self, *others) or self | others[0] | ...

Example:

.. code-block:: python

    from langchain_core.runnables import RunnableLambda


    def add_one(x: int) -> int:
        return x + 1


    def mul_two(x: int) -> int:
        return x * 2


    runnable_1 = RunnableLambda(add_one)
    runnable_2 = RunnableLambda(mul_two)
    sequence = runnable_1.pipe(runnable_2)
    # Or equivalently:
    # sequence = runnable_1 | runnable_2
    # sequence = RunnableSequence(first=runnable_1, last=runnable_2)
    sequence.invoke(1)
    await sequence.ainvoke(1)
    # -> 4

    sequence.batch([1, 2, 3])
    await sequence.abatch([1, 2, 3])
    # -> [4, 6, 8]

Parameters:

Name Type Description Default
*others Union[Runnable[Any, Other], Callable[[Any], Other]]

Other Runnable or Runnable-like objects to compose.

()
name Optional[str]

An optional name for the resulting RunnableSequence.

None

Returns:

Type Description
RunnableSerializable[Input, Other]

A new Runnable.

pick

pick(
    keys: Union[str, list[str]],
) -> RunnableSerializable[Any, Any]

Pick keys from the output dict of this Runnable.

Pick single key:

.. code-block:: python

    import json

    from langchain_core.runnables import RunnableLambda, RunnableMap

    as_str = RunnableLambda(str)
    as_json = RunnableLambda(json.loads)
    chain = RunnableMap(str=as_str, json=as_json)

    chain.invoke("[1, 2, 3]")
    # -> {"str": "[1, 2, 3]", "json": [1, 2, 3]}

    json_only_chain = chain.pick("json")
    json_only_chain.invoke("[1, 2, 3]")
    # -> [1, 2, 3]

Pick list of keys:

.. code-block:: python

    from typing import Any

    import json

    from langchain_core.runnables import RunnableLambda, RunnableMap

    as_str = RunnableLambda(str)
    as_json = RunnableLambda(json.loads)


    def as_bytes(x: Any) -> bytes:
        return bytes(x, "utf-8")


    chain = RunnableMap(
        str=as_str, json=as_json, bytes=RunnableLambda(as_bytes)
    )

    chain.invoke("[1, 2, 3]")
    # -> {"str": "[1, 2, 3]", "json": [1, 2, 3], "bytes": b"[1, 2, 3]"}

    json_and_bytes_chain = chain.pick(["json", "bytes"])
    json_and_bytes_chain.invoke("[1, 2, 3]")
    # -> {"json": [1, 2, 3], "bytes": b"[1, 2, 3]"}

Parameters:

Name Type Description Default
keys Union[str, list[str]]

A key or list of keys to pick from the output dict.

required

Returns:

Type Description
RunnableSerializable[Any, Any]

A new Runnable.

assign

assign(
    **kwargs: Union[
        Runnable[dict[str, Any], Any],
        Callable[[dict[str, Any]], Any],
        Mapping[
            str,
            Union[
                Runnable[dict[str, Any], Any],
                Callable[[dict[str, Any]], Any],
            ],
        ],
    ],
) -> RunnableSerializable[Any, Any]

Assigns new fields to the dict output of this Runnable.

.. code-block:: python

from langchain_community.llms.fake import FakeStreamingListLLM
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import SystemMessagePromptTemplate
from langchain_core.runnables import Runnable
from operator import itemgetter

prompt = (
    SystemMessagePromptTemplate.from_template("You are a nice assistant.")
    + "{question}"
)
llm = FakeStreamingListLLM(responses=["foo-lish"])

chain: Runnable = prompt | llm | {"str": StrOutputParser()}

chain_with_assign = chain.assign(hello=itemgetter("str") | llm)

print(chain_with_assign.input_schema.model_json_schema())
# {'title': 'PromptInput', 'type': 'object', 'properties':
#  {'question': {'title': 'Question', 'type': 'string'}}}
print(chain_with_assign.output_schema.model_json_schema())
# {'title': 'RunnableSequenceOutput', 'type': 'object', 'properties':
#  {'str': {'title': 'Str', 'type': 'string'},
#   'hello': {'title': 'Hello', 'type': 'string'}}}

Parameters:

Name Type Description Default
**kwargs Union[Runnable[dict[str, Any], Any], Callable[[dict[str, Any]], Any], Mapping[str, Union[Runnable[dict[str, Any], Any], Callable[[dict[str, Any]], Any]]]]

A mapping of keys to Runnable or Runnable-like objects that will be invoked with the entire output dict of this Runnable.

{}

Returns:

Type Description
RunnableSerializable[Any, Any]

A new Runnable.

batch

batch(
    inputs: list[Input],
    config: Optional[
        Union[RunnableConfig, list[RunnableConfig]]
    ] = None,
    *,
    return_exceptions: bool = False,
    **kwargs: Optional[Any],
) -> list[Output]

Default implementation runs invoke in parallel using a thread pool executor.

The default implementation of batch works well for IO bound runnables.

Subclasses should override this method if they can batch more efficiently; e.g., if the underlying Runnable uses an API which supports a batch mode.

Parameters:

Name Type Description Default
inputs list[Input]

A list of inputs to the Runnable.

required
config Optional[Union[RunnableConfig, list[RunnableConfig]]]

A config to use when invoking the Runnable. The config supports standard keys like 'tags', 'metadata' for tracing purposes, 'max_concurrency' for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Defaults to None.

None
return_exceptions bool

Whether to return exceptions instead of raising them. Defaults to False.

False
**kwargs Optional[Any]

Additional keyword arguments to pass to the Runnable.

{}

Returns:

Type Description
list[Output]

A list of outputs from the Runnable.
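
A small self-contained sketch using RunnableLambda; max_concurrency is a standard RunnableConfig key:

.. code-block:: python

    from langchain_core.runnables import RunnableLambda

    runnable = RunnableLambda(lambda x: x + 1)

    # At most two inputs are processed at any one time.
    print(runnable.batch([1, 2, 3], config={"max_concurrency": 2}))  # [2, 3, 4]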

batch_as_completed

batch_as_completed(
    inputs: Sequence[Input],
    config: Optional[
        Union[RunnableConfig, Sequence[RunnableConfig]]
    ] = None,
    *,
    return_exceptions: bool = False,
    **kwargs: Optional[Any],
) -> Iterator[tuple[int, Union[Output, Exception]]]

Run invoke in parallel on a list of inputs.

Yields results as they complete.

Parameters:

Name Type Description Default
inputs Sequence[Input]

A list of inputs to the Runnable.

required
config Optional[Union[RunnableConfig, Sequence[RunnableConfig]]]

A config to use when invoking the Runnable. The config supports standard keys like 'tags', 'metadata' for tracing purposes, 'max_concurrency' for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Defaults to None.

None
return_exceptions bool

Whether to return exceptions instead of raising them. Defaults to False.

False
**kwargs Optional[Any]

Additional keyword arguments to pass to the Runnable.

{}

Yields:

Type Description
tuple[int, Union[Output, Exception]]

Tuples of the index of the input and the output from the Runnable.
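
A small sketch; because results are yielded in completion order rather than input order, each output is paired with the index of its input:

.. code-block:: python

    from langchain_core.runnables import RunnableLambda

    runnable = RunnableLambda(lambda x: x * 2)

    # Tuples of (input index, output), yielded as each call finishes.
    for idx, output in runnable.batch_as_completed([1, 2, 3]):
        print(idx, output)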

abatch async

abatch(
    inputs: list[Input],
    config: Optional[
        Union[RunnableConfig, list[RunnableConfig]]
    ] = None,
    *,
    return_exceptions: bool = False,
    **kwargs: Optional[Any],
) -> list[Output]

Default implementation runs ainvoke in parallel using asyncio.gather.

The default implementation of batch works well for IO bound runnables.

Subclasses should override this method if they can batch more efficiently; e.g., if the underlying Runnable uses an API which supports a batch mode.

Parameters:

Name Type Description Default
inputs list[Input]

A list of inputs to the Runnable.

required
config Optional[Union[RunnableConfig, list[RunnableConfig]]]

A config to use when invoking the Runnable. The config supports standard keys like 'tags', 'metadata' for tracing purposes, 'max_concurrency' for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Defaults to None.

None
return_exceptions bool

Whether to return exceptions instead of raising them. Defaults to False.

False
**kwargs Optional[Any]

Additional keyword arguments to pass to the Runnable.

{}

Returns:

Type Description
list[Output]

A list of outputs from the Runnable.

abatch_as_completed async

abatch_as_completed(
    inputs: Sequence[Input],
    config: Optional[
        Union[RunnableConfig, Sequence[RunnableConfig]]
    ] = None,
    *,
    return_exceptions: bool = False,
    **kwargs: Optional[Any],
) -> AsyncIterator[tuple[int, Union[Output, Exception]]]

Run ainvoke in parallel on a list of inputs.

Yields results as they complete.

Parameters:

Name Type Description Default
inputs Sequence[Input]

A list of inputs to the Runnable.

required
config Optional[Union[RunnableConfig, Sequence[RunnableConfig]]]

A config to use when invoking the Runnable. The config supports standard keys like 'tags', 'metadata' for tracing purposes, 'max_concurrency' for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Defaults to None.

None
return_exceptions bool

Whether to return exceptions instead of raising them. Defaults to False.

False
kwargs Optional[Any]

Additional keyword arguments to pass to the Runnable.

{}

Yields:

Type Description
AsyncIterator[tuple[int, Union[Output, Exception]]]

A tuple of the index of the input and the output from the Runnable.

astream_log async

astream_log(
    input: Any,
    config: Optional[RunnableConfig] = None,
    *,
    diff: bool = True,
    with_streamed_output_list: bool = True,
    include_names: Optional[Sequence[str]] = None,
    include_types: Optional[Sequence[str]] = None,
    include_tags: Optional[Sequence[str]] = None,
    exclude_names: Optional[Sequence[str]] = None,
    exclude_types: Optional[Sequence[str]] = None,
    exclude_tags: Optional[Sequence[str]] = None,
    **kwargs: Any,
) -> Union[
    AsyncIterator[RunLogPatch], AsyncIterator[RunLog]
]

Stream all output from a Runnable, as reported to the callback system.

This includes all inner runs of LLMs, Retrievers, Tools, etc.

Output is streamed as Log objects, which include a list of Jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run.

The Jsonpatch ops can be applied in order to construct state.
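
A minimal sketch of that accumulation pattern; RunLogPatch objects support + for applying their ops onto the running state:

.. code-block:: python

    import asyncio

    from langchain_core.runnables import RunnableLambda


    async def main() -> None:
        runnable = RunnableLambda(lambda x: x + 1)
        state = None
        # With diff=True (the default), each item is a RunLogPatch; adding it
        # applies its Jsonpatch ops to rebuild the full RunLog state.
        async for patch in runnable.astream_log(1):
            state = patch if state is None else state + patch
        print(state.state["final_output"])  # -> 2


    asyncio.run(main())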

Parameters:

Name Type Description Default
input Any

The input to the Runnable.

required
config Optional[RunnableConfig]

The config to use for the Runnable.

None
diff bool

Whether to yield diffs between each step or the current state.

True
with_streamed_output_list bool

Whether to yield the streamed_output list.

True
include_names Optional[Sequence[str]]

Only include logs with these names.

None
include_types Optional[Sequence[str]]

Only include logs with these types.

None
include_tags Optional[Sequence[str]]

Only include logs with these tags.

None
exclude_names Optional[Sequence[str]]

Exclude logs with these names.

None
exclude_types Optional[Sequence[str]]

Exclude logs with these types.

None
exclude_tags Optional[Sequence[str]]

Exclude logs with these tags.

None
kwargs Any

Additional keyword arguments to pass to the Runnable.

{}

Yields:

Type Description
Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]

A RunLogPatch or RunLog object.

astream_events async

astream_events(
    input: Any,
    config: Optional[RunnableConfig] = None,
    *,
    version: Literal["v1", "v2"] = "v2",
    include_names: Optional[Sequence[str]] = None,
    include_types: Optional[Sequence[str]] = None,
    include_tags: Optional[Sequence[str]] = None,
    exclude_names: Optional[Sequence[str]] = None,
    exclude_types: Optional[Sequence[str]] = None,
    exclude_tags: Optional[Sequence[str]] = None,
    **kwargs: Any,
) -> AsyncIterator[StreamEvent]

Generate a stream of events.

Use to create an iterator over StreamEvents that provide real-time information about the progress of the Runnable, including StreamEvents from intermediate results.

A StreamEvent is a dictionary with the following schema:

  • event: str - Event names are of the format: on_[runnable_type]_(start|stream|end).
  • name: str - The name of the Runnable that generated the event.
  • run_id: str - randomly generated ID associated with the given execution of the Runnable that emitted the event. A child Runnable that gets invoked as part of the execution of a parent Runnable is assigned its own unique ID.
  • parent_ids: list[str] - The IDs of the parent runnables that generated the event. The root Runnable will have an empty list. The order of the parent IDs is from the root to the immediate parent. Only available for the v2 version of the API. The v1 version of the API will return an empty list.
  • tags: Optional[list[str]] - The tags of the Runnable that generated the event.
  • metadata: Optional[dict[str, Any]] - The metadata of the Runnable that generated the event.
  • data: dict[str, Any]

Below is a table that illustrates some events that might be emitted by various chains. Metadata fields have been omitted from the table for brevity. Chain definitions have been included after the table.

.. note:: This reference table is for the v2 version of the schema.

| event | name | chunk | input | output |
|-------|------|-------|-------|--------|
| on_chat_model_start | [model name] | | {"messages": [[SystemMessage, HumanMessage]]} | |
| on_chat_model_stream | [model name] | AIMessageChunk(content="hello") | | |
| on_chat_model_end | [model name] | | {"messages": [[SystemMessage, HumanMessage]]} | AIMessageChunk(content="hello world") |
| on_llm_start | [model name] | | {'input': 'hello'} | |
| on_llm_stream | [model name] | 'Hello' | | |
| on_llm_end | [model name] | | 'Hello human!' | |
| on_chain_start | format_docs | | | |
| on_chain_stream | format_docs | 'hello world!, goodbye world!' | | |
| on_chain_end | format_docs | | [Document(...)] | 'hello world!, goodbye world!' |
| on_tool_start | some_tool | | {"x": 1, "y": "2"} | |
| on_tool_end | some_tool | | | {"x": 1, "y": "2"} |
| on_retriever_start | [retriever name] | | {"query": "hello"} | |
| on_retriever_end | [retriever name] | | {"query": "hello"} | [Document(...), ..] |
| on_prompt_start | [template_name] | | {"question": "hello"} | |
| on_prompt_end | [template_name] | | {"question": "hello"} | ChatPromptValue(messages: [SystemMessage, ...]) |

In addition to the standard events, users can also dispatch custom events (see example below).

Custom events will only be surfaced in the v2 version of the API!

A custom event has the following format:

| Attribute | Type | Description |
|-----------|------|-------------|
| name | str | A user defined name for the event. |
| data | Any | The data associated with the event. This can be anything, though we suggest making it JSON serializable. |

Here are declarations associated with the standard events shown above:

format_docs:

.. code-block:: python

def format_docs(docs: list[Document]) -> str:
    '''Format the docs.'''
    return ", ".join([doc.page_content for doc in docs])


format_docs = RunnableLambda(format_docs)

some_tool:

.. code-block:: python

@tool
def some_tool(x: int, y: str) -> dict:
    '''Some_tool.'''
    return {"x": x, "y": y}

prompt:

.. code-block:: python

template = ChatPromptTemplate.from_messages(
    [
        ("system", "You are Cat Agent 007"),
        ("human", "{question}"),
    ]
).with_config({"run_name": "my_template", "tags": ["my_template"]})

Example:

.. code-block:: python

from langchain_core.runnables import RunnableLambda


async def reverse(s: str) -> str:
    return s[::-1]


chain = RunnableLambda(func=reverse)

events = [
    event async for event in chain.astream_events("hello", version="v2")
]

# will produce the following events (run_id and parent_ids
# have been omitted for brevity):
[
    {
        "data": {"input": "hello"},
        "event": "on_chain_start",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"chunk": "olleh"},
        "event": "on_chain_stream",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"output": "olleh"},
        "event": "on_chain_end",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
]

Example: Dispatch Custom Event

.. code-block:: python

from langchain_core.callbacks.manager import (
    adispatch_custom_event,
)
from langchain_core.runnables import RunnableLambda, RunnableConfig
import asyncio


async def slow_thing(some_input: str, config: RunnableConfig) -> str:
    """Do something that takes a long time."""
    await asyncio.sleep(1) # Placeholder for some slow operation
    await adispatch_custom_event(
        "progress_event",
        {"message": "Finished step 1 of 3"},
        config=config # Must be included for python < 3.10
    )
    await asyncio.sleep(1) # Placeholder for some slow operation
    await adispatch_custom_event(
        "progress_event",
        {"message": "Finished step 2 of 3"},
        config=config # Must be included for python < 3.10
    )
    await asyncio.sleep(1) # Placeholder for some slow operation
    return "Done"

slow_thing = RunnableLambda(slow_thing)

async for event in slow_thing.astream_events("some_input", version="v2"):
    print(event)

Parameters:

Name Type Description Default
input Any

The input to the Runnable.

required
config Optional[RunnableConfig]

The config to use for the Runnable.

None
version Literal['v1', 'v2']

The version of the schema to use, either 'v2' or 'v1'. Users should use 'v2'. 'v1' is for backwards compatibility and will be deprecated in 0.4.0. No default will be assigned until the API is stabilized. Custom events will only be surfaced in 'v2'.

'v2'
include_names Optional[Sequence[str]]

Only include events from Runnables with matching names.

None
include_types Optional[Sequence[str]]

Only include events from Runnables with matching types.

None
include_tags Optional[Sequence[str]]

Only include events from Runnables with matching tags.

None
exclude_names Optional[Sequence[str]]

Exclude events from Runnables with matching names.

None
exclude_types Optional[Sequence[str]]

Exclude events from Runnables with matching types.

None
exclude_tags Optional[Sequence[str]]

Exclude events from Runnables with matching tags.

None
kwargs Any

Additional keyword arguments to pass to the Runnable. These will be passed to astream_log as this implementation of astream_events is built on top of astream_log.

{}

Yields:

Type Description
AsyncIterator[StreamEvent]

An async stream of StreamEvents.

Raises:

Type Description
NotImplementedError

If the version is not 'v1' or 'v2'.

transform

transform(
    input: Iterator[Input],
    config: Optional[RunnableConfig] = None,
    **kwargs: Optional[Any],
) -> Iterator[Output]

Transform inputs to outputs.

Default implementation of transform, which buffers input and calls stream.

Subclasses should override this method if they can start producing output while input is still being generated.

Parameters:

Name Type Description Default
input Iterator[Input]

An iterator of inputs to the Runnable.

required
config Optional[RunnableConfig]

The config to use for the Runnable. Defaults to None.

None
kwargs Optional[Any]

Additional keyword arguments to pass to the Runnable.

{}

Yields:

Type Description
Output

The output of the Runnable.
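
A small sketch using RunnableGenerator, which natively supports streaming transforms (the function name is illustrative):

.. code-block:: python

    from typing import Iterator

    from langchain_core.runnables import RunnableGenerator


    def double(chunks: Iterator[str]) -> Iterator[str]:
        # Emits each output chunk as soon as its input chunk arrives.
        for chunk in chunks:
            yield chunk * 2


    runnable = RunnableGenerator(double)
    print(list(runnable.transform(iter(["a", "b"]))))  # ['aa', 'bb']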

atransform async

atransform(
    input: AsyncIterator[Input],
    config: Optional[RunnableConfig] = None,
    **kwargs: Optional[Any],
) -> AsyncIterator[Output]

Transform inputs to outputs.

Default implementation of atransform, which buffers input and calls astream.

Subclasses should override this method if they can start producing output while input is still being generated.

Parameters:

Name Type Description Default
input AsyncIterator[Input]

An async iterator of inputs to the Runnable.

required
config Optional[RunnableConfig]

The config to use for the Runnable. Defaults to None.

None
kwargs Optional[Any]

Additional keyword arguments to pass to the Runnable.

{}

Yields:

Type Description
AsyncIterator[Output]

The output of the Runnable.

bind

bind(**kwargs: Any) -> Runnable[Input, Output]

Bind arguments to a Runnable, returning a new Runnable.

Useful when a Runnable in a chain requires an argument that is not in the output of the previous Runnable or included in the user input.

Parameters:

Name Type Description Default
kwargs Any

The arguments to bind to the Runnable.

{}

Returns:

Type Description
Runnable[Input, Output]

A new Runnable with the arguments bound.

Example:

.. code-block:: python

from langchain_ollama import ChatOllama
from langchain_core.output_parsers import StrOutputParser

llm = ChatOllama(model="llama2")

# Without bind.
chain = llm | StrOutputParser()

chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two three four five.'

# With bind.
chain = llm.bind(stop=["three"]) | StrOutputParser()

chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two'

with_config

with_config(
    config: Optional[RunnableConfig] = None, **kwargs: Any
) -> Runnable[Input, Output]

Bind config to a Runnable, returning a new Runnable.

Parameters:

Name Type Description Default
config Optional[RunnableConfig]

The config to bind to the Runnable.

None
kwargs Any

Additional keyword arguments to pass to the Runnable.

{}

Returns:

Type Description
Runnable[Input, Output]

A new Runnable with the config bound.
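
For example (a sketch; run_name and tags are standard RunnableConfig keys picked up by tracing):

.. code-block:: python

    from langchain_core.runnables import RunnableLambda

    runnable = RunnableLambda(lambda x: x + 1).with_config(
        {"run_name": "increment", "tags": ["math"]}
    )

    # The bound config is merged into the config of every invocation.
    runnable.invoke(1)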

with_listeners

with_listeners(
    *,
    on_start: Optional[
        Union[
            Callable[[Run], None],
            Callable[[Run, RunnableConfig], None],
        ]
    ] = None,
    on_end: Optional[
        Union[
            Callable[[Run], None],
            Callable[[Run, RunnableConfig], None],
        ]
    ] = None,
    on_error: Optional[
        Union[
            Callable[[Run], None],
            Callable[[Run, RunnableConfig], None],
        ]
    ] = None,
) -> Runnable[Input, Output]

Bind lifecycle listeners to a Runnable, returning a new Runnable.

The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run.

Parameters:

Name Type Description Default
on_start Optional[Union[Callable[[Run], None], Callable[[Run, RunnableConfig], None]]]

Called before the Runnable starts running, with the Run object. Defaults to None.

None
on_end Optional[Union[Callable[[Run], None], Callable[[Run, RunnableConfig], None]]]

Called after the Runnable finishes running, with the Run object. Defaults to None.

None
on_error Optional[Union[Callable[[Run], None], Callable[[Run, RunnableConfig], None]]]

Called if the Runnable throws an error, with the Run object. Defaults to None.

None

Returns:

Type Description
Runnable[Input, Output]

A new Runnable with the listeners bound.

Example:

.. code-block:: python

from langchain_core.runnables import RunnableLambda
from langchain_core.tracers.schemas import Run

import time


def test_runnable(time_to_sleep: int):
    time.sleep(time_to_sleep)


def fn_start(run_obj: Run):
    print("start_time:", run_obj.start_time)


def fn_end(run_obj: Run):
    print("end_time:", run_obj.end_time)


chain = RunnableLambda(test_runnable).with_listeners(
    on_start=fn_start, on_end=fn_end
)
chain.invoke(2)

with_alisteners

with_alisteners(
    *,
    on_start: Optional[AsyncListener] = None,
    on_end: Optional[AsyncListener] = None,
    on_error: Optional[AsyncListener] = None,
) -> Runnable[Input, Output]

Bind async lifecycle listeners to a Runnable.

Returns a new Runnable.

The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run.

Parameters:

Name Type Description Default
on_start Optional[AsyncListener]

Called asynchronously before the Runnable starts running, with the Run object. Defaults to None.

None
on_end Optional[AsyncListener]

Called asynchronously after the Runnable finishes running, with the Run object. Defaults to None.

None
on_error Optional[AsyncListener]

Called asynchronously if the Runnable throws an error, with the Run object. Defaults to None.

None

Returns:

Type Description
Runnable[Input, Output]

A new Runnable with the listeners bound.

Example:

.. code-block:: python

from langchain_core.runnables import RunnableLambda
from langchain_core.tracers.schemas import Run
from datetime import datetime, timezone
import time
import asyncio

def format_t(timestamp: float) -> str:
    return datetime.fromtimestamp(timestamp, tz=timezone.utc).isoformat()

async def test_runnable(time_to_sleep: int):
    print(f"Runnable[{time_to_sleep}s]: starts at {format_t(time.time())}")
    await asyncio.sleep(time_to_sleep)
    print(f"Runnable[{time_to_sleep}s]: ends at {format_t(time.time())}")

async def fn_start(run_obj: Run):
    print(f"on start callback starts at {format_t(time.time())}")
    await asyncio.sleep(3)
    print(f"on start callback ends at {format_t(time.time())}")

async def fn_end(run_obj: Run):
    print(f"on end callback starts at {format_t(time.time())}")
    await asyncio.sleep(2)
    print(f"on end callback ends at {format_t(time.time())}")

runnable = RunnableLambda(test_runnable).with_alisteners(
    on_start=fn_start,
    on_end=fn_end
)
async def concurrent_runs():
    await asyncio.gather(runnable.ainvoke(2), runnable.ainvoke(3))

asyncio.run(concurrent_runs())

Result:

on start callback starts at 2025-03-01T07:05:22.875378+00:00
on start callback starts at 2025-03-01T07:05:22.875495+00:00
on start callback ends at 2025-03-01T07:05:25.878862+00:00
on start callback ends at 2025-03-01T07:05:25.878947+00:00
Runnable[2s]: starts at 2025-03-01T07:05:25.879392+00:00
Runnable[3s]: starts at 2025-03-01T07:05:25.879804+00:00
Runnable[2s]: ends at 2025-03-01T07:05:27.881998+00:00
on end callback starts at 2025-03-01T07:05:27.882360+00:00
Runnable[3s]: ends at 2025-03-01T07:05:28.881737+00:00
on end callback starts at 2025-03-01T07:05:28.882428+00:00
on end callback ends at 2025-03-01T07:05:29.883893+00:00
on end callback ends at 2025-03-01T07:05:30.884831+00:00

with_types

with_types(
    *,
    input_type: Optional[type[Input]] = None,
    output_type: Optional[type[Output]] = None,
) -> Runnable[Input, Output]

Bind input and output types to a Runnable, returning a new Runnable.

Parameters:

Name Type Description Default
input_type Optional[type[Input]]

The input type to bind to the Runnable. Defaults to None.

None
output_type Optional[type[Output]]

The output type to bind to the Runnable. Defaults to None.

None

Returns:

Type Description
Runnable[Input, Output]

A new Runnable with the types bound.
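
For example, a sketch that attaches explicit types to an otherwise untyped lambda (this affects only the reported schemas, not runtime behavior):

.. code-block:: python

    from langchain_core.runnables import RunnableLambda

    runnable = RunnableLambda(lambda x: x + 1).with_types(
        input_type=int, output_type=int
    )

    print(runnable.get_input_jsonschema())  # now reports an integer schema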

with_retry

with_retry(
    *,
    retry_if_exception_type: tuple[
        type[BaseException], ...
    ] = (Exception,),
    wait_exponential_jitter: bool = True,
    exponential_jitter_params: Optional[
        ExponentialJitterParams
    ] = None,
    stop_after_attempt: int = 3,
) -> Runnable[Input, Output]

Create a new Runnable that retries the original Runnable on exceptions.

Parameters:

Name Type Description Default
retry_if_exception_type tuple[type[BaseException], ...]

A tuple of exception types to retry on. Defaults to (Exception,).

(Exception,)
wait_exponential_jitter bool

Whether to add jitter to the wait time between retries. Defaults to True.

True
stop_after_attempt int

The maximum number of attempts to make before giving up. Defaults to 3.

3
exponential_jitter_params Optional[ExponentialJitterParams]

Parameters for tenacity.wait_exponential_jitter. Namely: initial, max, exp_base, and jitter (all float values).

None

Returns:

Type Description
Runnable[Input, Output]

A new Runnable that retries the original Runnable on exceptions.

Example:

.. code-block:: python

from langchain_core.runnables import RunnableLambda

count = 0


def _lambda(x: int) -> None:
    global count
    count = count + 1
    if x == 1:
        raise ValueError("x is 1")
    else:
        pass


runnable = RunnableLambda(_lambda)
try:
    runnable.with_retry(
        stop_after_attempt=2,
        retry_if_exception_type=(ValueError,),
    ).invoke(1)
except ValueError:
    pass

assert count == 2

map

map() -> Runnable[list[Input], list[Output]]

Return a new Runnable that maps a list of inputs to a list of outputs.

Calls invoke with each input.

Returns:

Type Description
Runnable[list[Input], list[Output]]

A new Runnable that maps a list of inputs to a list of outputs.

Example:

.. code-block:: python

    from langchain_core.runnables import RunnableLambda


    def _lambda(x: int) -> int:
        return x + 1


    runnable = RunnableLambda(_lambda)
    print(runnable.map().invoke([1, 2, 3]))  # [2, 3, 4]

with_fallbacks

with_fallbacks(
    fallbacks: Sequence[Runnable[Input, Output]],
    *,
    exceptions_to_handle: tuple[
        type[BaseException], ...
    ] = (Exception,),
    exception_key: Optional[str] = None,
) -> RunnableWithFallbacks[Input, Output]

Add fallbacks to a Runnable, returning a new Runnable.

The new Runnable will try the original Runnable, and then each fallback in order, upon failures.

Parameters:

Name Type Description Default
fallbacks Sequence[Runnable[Input, Output]]

A sequence of runnables to try if the original Runnable fails.

required
exceptions_to_handle tuple[type[BaseException], ...]

A tuple of exception types to handle. Defaults to (Exception,).

(Exception,)
exception_key Optional[str]

If string is specified then handled exceptions will be passed to fallbacks as part of the input under the specified key. If None, exceptions will not be passed to fallbacks. If used, the base Runnable and its fallbacks must accept a dictionary as input. Defaults to None.

None

Returns:

Type Description
RunnableWithFallbacks[Input, Output]

A new Runnable that will try the original Runnable, and then each fallback in order, upon failures.

Example:

.. code-block:: python

    from typing import Iterator

    from langchain_core.runnables import RunnableGenerator


    def _generate_immediate_error(input: Iterator) -> Iterator[str]:
        raise ValueError()
        yield ""


    def _generate(input: Iterator) -> Iterator[str]:
        yield from "foo bar"


    runnable = RunnableGenerator(_generate_immediate_error).with_fallbacks(
        [RunnableGenerator(_generate)]
    )
    print("".join(runnable.stream({})))  # foo bar

as_tool

as_tool(
    args_schema: Optional[type[BaseModel]] = None,
    *,
    name: Optional[str] = None,
    description: Optional[str] = None,
    arg_types: Optional[dict[str, type]] = None,
) -> BaseTool

Create a BaseTool from a Runnable.

as_tool will instantiate a BaseTool with a name, description, and args_schema from a Runnable. Where possible, schemas are inferred from runnable.get_input_schema. Alternatively (e.g., if the Runnable takes a dict as input and the specific dict keys are not typed), the schema can be specified directly with args_schema. You can also pass arg_types to just specify the required arguments and their types.

Parameters:

Name Type Description Default
args_schema Optional[type[BaseModel]]

The schema for the tool. Defaults to None.

None
name Optional[str]

The name of the tool. Defaults to None.

None
description Optional[str]

The description of the tool. Defaults to None.

None
arg_types Optional[dict[str, type]]

A dictionary of argument names to types. Defaults to None.

None

Returns:

Type Description
BaseTool

A BaseTool instance.

Typed dict input:

.. code-block:: python

from typing_extensions import TypedDict
from langchain_core.runnables import RunnableLambda


class Args(TypedDict):
    a: int
    b: list[int]


def f(x: Args) -> str:
    return str(x["a"] * max(x["b"]))


runnable = RunnableLambda(f)
as_tool = runnable.as_tool()
as_tool.invoke({"a": 3, "b": [1, 2]})

dict input, specifying schema via args_schema:

.. code-block:: python

from typing import Any
from pydantic import BaseModel, Field
from langchain_core.runnables import RunnableLambda

def f(x: dict[str, Any]) -> str:
    return str(x["a"] * max(x["b"]))

class FSchema(BaseModel):
    """Apply a function to an integer and list of integers."""

    a: int = Field(..., description="Integer")
    b: list[int] = Field(..., description="List of ints")

runnable = RunnableLambda(f)
as_tool = runnable.as_tool(FSchema)
as_tool.invoke({"a": 3, "b": [1, 2]})

dict input, specifying schema via arg_types:

.. code-block:: python

from typing import Any
from langchain_core.runnables import RunnableLambda


def f(x: dict[str, Any]) -> str:
    return str(x["a"] * max(x["b"]))


runnable = RunnableLambda(f)
as_tool = runnable.as_tool(arg_types={"a": int, "b": list[int]})
as_tool.invoke({"a": 3, "b": [1, 2]})

String input:

.. code-block:: python

from langchain_core.runnables import RunnableLambda


def f(x: str) -> str:
    return x + "a"


def g(x: str) -> str:
    return x + "z"


runnable = RunnableLambda(f) | g
as_tool = runnable.as_tool()
as_tool.invoke("b")

.. versionadded:: 0.2.14

__init__

__init__(*args: Any, **kwargs: Any) -> None

is_lc_serializable classmethod

is_lc_serializable() -> bool

Is this class serializable?

By design, even if a class inherits from Serializable, it is not serializable by default. This is to prevent accidental serialization of objects that should not be serialized.

Returns:

Type Description
bool

Whether the class is serializable. Default is False.

get_lc_namespace classmethod

get_lc_namespace() -> list[str]

Get the namespace of the langchain object.

For example, if the class is langchain.llms.openai.OpenAI, then the namespace is ["langchain", "llms", "openai"]

Returns:

Type Description
list[str]

The namespace as a list of strings.

lc_id classmethod

lc_id() -> list[str]

Return a unique identifier for this class for serialization purposes.

The unique identifier is a list of strings that describes the path to the object. For example, for the class langchain.llms.openai.OpenAI, the id is ["langchain", "llms", "openai", "OpenAI"].
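
A minimal sketch tying the three serialization classmethods together; MySerializable and its namespace are hypothetical:

.. code-block:: python

from langchain_core.load.serializable import Serializable


class MySerializable(Serializable):
    """A hypothetical serializable object."""

    value: int = 0

    @classmethod
    def is_lc_serializable(cls) -> bool:
        # Opt in to serialization; the inherited default is False.
        return True

    @classmethod
    def get_lc_namespace(cls) -> list[str]:
        # The path to the object, without the class name.
        return ["my_pkg", "objects"]


# By default, lc_id appends the class name to the namespace.
print(MySerializable.lc_id())  # ['my_pkg', 'objects', 'MySerializable']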

to_json

to_json() -> Union[
    SerializedConstructor, SerializedNotImplemented
]

Serialize the Runnable to JSON.

Returns:

Type Description
Union[SerializedConstructor, SerializedNotImplemented]

A JSON-serializable representation of the Runnable.
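
As a hedged illustration (payload shapes may vary by version), a serializable object such as a prompt template yields a constructor-style payload:

.. code-block:: python

from langchain_core.prompts import ChatPromptTemplate

# Prompt templates opt in to serialization, so to_json returns a
# SerializedConstructor-style dict that langchain_core.load can revive.
prompt = ChatPromptTemplate.from_messages([("user", "{question}")])
serialized = prompt.to_json()
print(serialized["type"])  # 'constructor'
print(serialized["id"])  # path to the class, ending in 'ChatPromptTemplate'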

to_json_not_implemented

to_json_not_implemented() -> SerializedNotImplemented

Serialize a "not implemented" object.

Returns:

Type Description
SerializedNotImplemented

SerializedNotImplemented.

configurable_fields

configurable_fields(
    **kwargs: AnyConfigurableField,
) -> RunnableSerializable[Input, Output]

Configure particular Runnable fields at runtime.

Parameters:

Name Type Description Default
**kwargs AnyConfigurableField

A dictionary of ConfigurableField instances to configure.

{}

Raises:

Type Description
ValueError

If a configuration key is not found in the Runnable.

Returns:

Type Description
RunnableSerializable[Input, Output]

A new Runnable with the fields configured.

.. code-block:: python

from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatOpenAI(max_tokens=20).configurable_fields(
    max_tokens=ConfigurableField(
        id="output_token_number",
        name="Max tokens in the output",
        description="The maximum number of tokens in the output",
    )
)

# max_tokens = 20
print(
    "max_tokens_20: ", model.invoke("tell me something about chess").content
)

# max_tokens = 200
print(
    "max_tokens_200: ",
    model.with_config(configurable={"output_token_number": 200})
    .invoke("tell me something about chess")
    .content,
)

configurable_alternatives

configurable_alternatives(
    which: ConfigurableField,
    *,
    default_key: str = "default",
    prefix_keys: bool = False,
    **kwargs: Union[
        Runnable[Input, Output],
        Callable[[], Runnable[Input, Output]],
    ],
) -> RunnableSerializable[Input, Output]

Configure alternatives for Runnables that can be set at runtime.

Parameters:

Name Type Description Default
which ConfigurableField

The ConfigurableField instance that will be used to select the alternative.

required
default_key str

The default key to use if no alternative is selected. Defaults to 'default'.

'default'
prefix_keys bool

Whether to prefix the keys with the ConfigurableField id. Defaults to False.

False
**kwargs Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]

A dictionary of keys to Runnable instances or callables that return Runnable instances.

{}

Returns:

Type Description
RunnableSerializable[Input, Output]

A new Runnable with the alternatives configured.

.. code-block:: python

from langchain_anthropic import ChatAnthropic
from langchain_core.runnables.utils import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatAnthropic(
    model_name="claude-3-7-sonnet-20250219"
).configurable_alternatives(
    ConfigurableField(id="llm"),
    default_key="anthropic",
    openai=ChatOpenAI(),
)

# uses the default model ChatAnthropic
print(model.invoke("which organization created you?").content)

# uses ChatOpenAI
print(
    model.with_config(configurable={"llm": "openai"})
    .invoke("which organization created you?")
    .content
)

set_verbose

set_verbose(verbose: Optional[bool]) -> bool

If verbose is None, return the global verbosity setting; otherwise return the given value.

This allows users to pass in None as verbose to access the global setting.

Parameters:

Name Type Description Default
verbose Optional[bool]

The verbosity setting to use.

required

Returns:

Type Description
bool

The verbosity setting to use.

get_token_ids

get_token_ids(text: str) -> list[int]

Return the ordered ids of the tokens in a text.

Parameters:

Name Type Description Default
text str

The string input to tokenize.

required

Returns:

Type Description
list[int]

A list of ids corresponding to the tokens in the text, in the order in which they occur in the text.

get_num_tokens

get_num_tokens(text: str) -> int

Get the number of tokens present in the text.

Useful for checking if an input fits in a model's context window.

Parameters:

Name Type Description Default
text str

The string input to tokenize.

required

Returns:

Type Description
int

The integer number of tokens in the text.
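
A minimal sketch, assuming langchain-openai is installed and OPENAI_API_KEY is set (any BaseChatModel exposes the same interface):

.. code-block:: python

from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini")

text = "How many tokens is this?"
ids = model.get_token_ids(text)  # ordered token ids for the text
# The base implementation of get_num_tokens is len(get_token_ids(text)).
assert model.get_num_tokens(text) == len(ids)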

get_num_tokens_from_messages

get_num_tokens_from_messages(
    messages: list[BaseMessage],
    tools: Optional[Sequence] = None,
) -> int

Get the number of tokens in the messages.

Useful for checking if an input fits in a model's context window.

.. note:: The base implementation of get_num_tokens_from_messages ignores tool schemas.

Parameters:

Name Type Description Default
messages list[BaseMessage]

The message inputs to tokenize.

required
tools Optional[Sequence]

If provided, sequence of dict, BaseModel, function, or BaseTools to be converted to tool schemas.

None

Returns:

Type Description
int

The sum of the number of tokens across the messages.
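
For example, a sketch reusing the initialized chat model `model` from above:

.. code-block:: python

from langchain_core.messages import HumanMessage, SystemMessage

messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="What is the capital of France?"),
]
# Counts tokens across all messages; per the note above, the base
# implementation ignores any tool schemas passed via `tools`.
print(model.get_num_tokens_from_messages(messages))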

raise_deprecation classmethod

raise_deprecation(values: dict) -> Any

Emit a deprecation warning if callback_manager is used.

Parameters:

Name Type Description Default
values Dict

Values to validate.

required

Returns:

Name Type Description
Dict Any

Validated values.

generate

generate(
    messages: list[list[BaseMessage]],
    stop: Optional[list[str]] = None,
    callbacks: Callbacks = None,
    *,
    tags: Optional[list[str]] = None,
    metadata: Optional[dict[str, Any]] = None,
    run_name: Optional[str] = None,
    run_id: Optional[UUID] = None,
    **kwargs: Any,
) -> LLMResult

Pass a sequence of prompts to the model and return model generations.

This method should make use of batched calls for models that expose a batched API.

Use this method when you:

  1. Want to take advantage of batched calls,
  2. Need more output from the model than just the top generated value,
  3. Are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).

Parameters:

Name Type Description Default
messages list[list[BaseMessage]]

List of list of messages.

required
stop Optional[list[str]]

Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

None
callbacks Callbacks

Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.

None
tags Optional[list[str]]

The tags to apply.

None
metadata Optional[dict[str, Any]]

The metadata to apply.

None
run_name Optional[str]

The name of the run.

None
run_id Optional[UUID]

The ID of the run.

None
**kwargs Any

Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

{}

Returns:

Type Description
LLMResult

An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
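
A minimal sketch, assuming an initialized chat model `model`; each inner list is one prompt, and the result carries one list of candidate Generations per prompt:

.. code-block:: python

from langchain_core.messages import HumanMessage

result = model.generate(
    [
        [HumanMessage(content="Tell me a joke.")],
        [HumanMessage(content="Tell me a fact.")],
    ]
)
for generations in result.generations:  # one entry per input prompt
    print(generations[0].message.content)
print(result.llm_output)  # provider-specific output, e.g. token usage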

agenerate async

agenerate(
    messages: list[list[BaseMessage]],
    stop: Optional[list[str]] = None,
    callbacks: Callbacks = None,
    *,
    tags: Optional[list[str]] = None,
    metadata: Optional[dict[str, Any]] = None,
    run_name: Optional[str] = None,
    run_id: Optional[UUID] = None,
    **kwargs: Any,
) -> LLMResult

Asynchronously pass a sequence of prompts to a model and return generations.

This method should make use of batched calls for models that expose a batched API.

Use this method when you:

  1. Want to take advantage of batched calls,
  2. Need more output from the model than just the top generated value,
  3. Are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).

Parameters:

Name Type Description Default
messages list[list[BaseMessage]]

List of list of messages.

required
stop Optional[list[str]]

Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

None
callbacks Callbacks

Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.

None
tags Optional[list[str]]

The tags to apply.

None
metadata Optional[dict[str, Any]]

The metadata to apply.

None
run_name Optional[str]

The name of the run.

None
run_id Optional[UUID]

The ID of the run.

None
**kwargs Any

Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

{}

Returns:

Type Description
LLMResult

An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
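
The async counterpart, as a sketch under the same assumptions as the generate example above:

.. code-block:: python

import asyncio

from langchain_core.messages import HumanMessage


async def main() -> None:
    result = await model.agenerate([[HumanMessage(content="Tell me a joke.")]])
    print(result.generations[0][0].message.content)


asyncio.run(main())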

__call__

__call__(
    messages: list[BaseMessage],
    stop: Optional[list[str]] = None,
    callbacks: Callbacks = None,
    **kwargs: Any,
) -> BaseMessage

Call the model.

Parameters:

Name Type Description Default
messages list[BaseMessage]

List of messages.

required
stop Optional[list[str]]

Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

None
callbacks Callbacks

Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.

None
**kwargs Any

Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

{}

Raises:

Type Description
ValueError

If the generation is not a chat generation.

Returns:

Type Description
BaseMessage

The model output message.

call_as_llm

call_as_llm(
    message: str,
    stop: Optional[list[str]] = None,
    **kwargs: Any,
) -> str

Call the model.

Parameters:

Name Type Description Default
message str

The input message.

required
stop Optional[list[str]]

Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

None
**kwargs Any

Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

{}

Returns:

Type Description
str

The model output string.

predict

predict(
    text: str,
    *,
    stop: Optional[Sequence[str]] = None,
    **kwargs: Any,
) -> str

Predict the next message.

Parameters:

Name Type Description Default
text str

The input message.

required
stop Optional[Sequence[str]]

Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

None
**kwargs Any

Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

{}

Raises:

Type Description
ValueError

If the output is not a string.

Returns:

Type Description
str

The predicted output string.
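
For reference, a sketch of these legacy convenience wrappers around a single model call, assuming an initialized chat model `model`; prefer invoke in new code:

.. code-block:: python

from langchain_core.messages import HumanMessage

message = model([HumanMessage(content="Hi")])  # __call__ -> BaseMessage
text = model.call_as_llm("Hi")  # str in, str out
prediction = model.predict("Hi", stop=["\n"])  # str in, str out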

dict

dict(**kwargs: Any) -> dict

Return a dictionary representation of the LLM.

bind_tools

bind_tools(
    tools: Sequence[
        Union[Dict[str, Any], type, Callable, BaseTool]
    ],
    *,
    tool_choice: Optional[str] = None,
    **kwargs: Any,
) -> Runnable[LanguageModelInput, AIMessage]

Bind tools to the model.

Parameters:

Name Type Description Default
tools Sequence[Union[Dict[str, Any], type, Callable, BaseTool]]

Sequence of tools to bind to the model.

required
tool_choice Optional[str]

The tool to use. If "any" then any tool can be used.

None

Returns:

Type Description
Runnable[LanguageModelInput, AIMessage]

A Runnable that returns a message.
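
A minimal sketch, assuming `model` is a chat model whose provider integration supports tool calling; GetWeather is a hypothetical tool schema:

.. code-block:: python

from pydantic import BaseModel, Field


class GetWeather(BaseModel):
    """Get the current weather in a given location."""

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")


model_with_tools = model.bind_tools([GetWeather])
msg = model_with_tools.invoke("What's the weather in San Francisco?")
print(msg.tool_calls)  # parsed tool calls, if the model chose to call the tool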

with_structured_output

with_structured_output(
    schema: Union[Dict, type],
    *,
    include_raw: bool = False,
    **kwargs: Any,
) -> Runnable[LanguageModelInput, Union[Dict, BaseModel]]

Model wrapper that returns outputs formatted to match the given schema.

Parameters:

Name Type Description Default
schema Union[Dict, type]

The output schema. Can be passed in as:

  • an OpenAI function/tool schema,
  • a JSON Schema,
  • a TypedDict class,
  • or a Pydantic class.

If schema is a Pydantic class then the model output will be a Pydantic instance of that class, and the model-generated fields will be validated by the Pydantic class. Otherwise the model output will be a dict and will not be validated. See :meth:langchain_core.utils.function_calling.convert_to_openai_tool for more on how to properly specify types and descriptions of schema fields when specifying a Pydantic or TypedDict class.

required
include_raw bool

If False then only the parsed structured output is returned. If an error occurs during model output parsing it will be raised. If True then both the raw model response (a BaseMessage) and the parsed model response will be returned. If an error occurs during output parsing it will be caught and returned as well. The final output is always a dict with keys 'raw', 'parsed', and 'parsing_error'.

False

Raises:

Type Description
ValueError

If there are any unsupported kwargs.

NotImplementedError

If the model does not implement with_structured_output().

Returns:

Type Description
Runnable[LanguageModelInput, Union[Dict, BaseModel]]

A Runnable that takes the same inputs as a :class:langchain_core.language_models.chat_models.BaseChatModel. If include_raw is False and schema is a Pydantic class, the Runnable outputs an instance of schema (i.e., a Pydantic object). Otherwise, if include_raw is False, the Runnable outputs a dict. If include_raw is True, the Runnable outputs a dict with keys:

  • 'raw': BaseMessage
  • 'parsed': None if there was a parsing error, otherwise the type depends on the schema as described above.
  • 'parsing_error': Optional[BaseException]

Pydantic schema (include_raw=False):

.. code-block:: python

from pydantic import BaseModel


class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''

    answer: str
    justification: str


llm = ChatModel(model="model-name", temperature=0)
structured_llm = llm.with_structured_output(AnswerWithJustification)

structured_llm.invoke(
    "What weighs more a pound of bricks or a pound of feathers"
)

# -> AnswerWithJustification(
#     answer='They weigh the same',
#     justification='Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ.'
# )
Pydantic schema (include_raw=True):

.. code-block:: python

from pydantic import BaseModel


class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''

    answer: str
    justification: str


llm = ChatModel(model="model-name", temperature=0)
structured_llm = llm.with_structured_output(
    AnswerWithJustification, include_raw=True
)

structured_llm.invoke(
    "What weighs more a pound of bricks or a pound of feathers"
)
# -> {
#     'raw': AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_Ao02pnFYXD6GN1yzc0uXPsvF', 'function': {'arguments': '{"answer":"They weigh the same.","justification":"Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ."}', 'name': 'AnswerWithJustification'}, 'type': 'function'}]}),
#     'parsed': AnswerWithJustification(answer='They weigh the same.', justification='Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ.'),
#     'parsing_error': None
# }
Dict schema (include_raw=False):

.. code-block:: python

from pydantic import BaseModel
from langchain_core.utils.function_calling import convert_to_openai_tool


class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''

    answer: str
    justification: str


dict_schema = convert_to_openai_tool(AnswerWithJustification)
llm = ChatModel(model="model-name", temperature=0)
structured_llm = llm.with_structured_output(dict_schema)

structured_llm.invoke(
    "What weighs more a pound of bricks or a pound of feathers"
)
# -> {
#     'answer': 'They weigh the same',
#     'justification': 'Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume and density of the two substances differ.'
# }

.. versionchanged:: 0.2.26

    Added support for TypedDict class.

init_chat_model

init_chat_model(
    model: str | None = None,
    *,
    model_provider: str | None = None,
    configurable_fields: Literal["any"]
    | list[str]
    | tuple[str, ...]
    | None = None,
    config_prefix: str | None = None,
    **kwargs: Any,
) -> BaseChatModel | _ConfigurableModel

Initialize a ChatModel from the model name and provider.

Note: Must have the integration package corresponding to the model provider installed.

Parameters:

Name Type Description Default
model str | None

The name of the model, e.g. "o3-mini", "claude-3-5-sonnet-latest". You can also specify model and model provider in a single argument using '{model_provider}:{model}' format, e.g. "openai:o1".

None
model_provider str | None

The model provider if not specified as part of model arg (see above). Supported model_provider values and the corresponding integration package are:

  • 'openai' -> langchain-openai
  • 'anthropic' -> langchain-anthropic
  • 'azure_openai' -> langchain-openai
  • 'azure_ai' -> langchain-azure-ai
  • 'google_vertexai' -> langchain-google-vertexai
  • 'google_genai' -> langchain-google-genai
  • 'bedrock' -> langchain-aws
  • 'bedrock_converse' -> langchain-aws
  • 'cohere' -> langchain-cohere
  • 'fireworks' -> langchain-fireworks
  • 'together' -> langchain-together
  • 'mistralai' -> langchain-mistralai
  • 'huggingface' -> langchain-huggingface
  • 'groq' -> langchain-groq
  • 'ollama' -> langchain-ollama
  • 'google_anthropic_vertex' -> langchain-google-vertexai
  • 'deepseek' -> langchain-deepseek
  • 'ibm' -> langchain-ibm
  • 'nvidia' -> langchain-nvidia-ai-endpoints
  • 'xai' -> langchain-xai
  • 'perplexity' -> langchain-perplexity

Will attempt to infer model_provider from model if not specified. The following providers will be inferred based on these model prefixes:

  • 'gpt-...' | 'o1...' | 'o3...' -> 'openai'
  • 'claude...' -> 'anthropic'
  • 'amazon....' -> 'bedrock'
  • 'gemini...' -> 'google_vertexai'
  • 'command...' -> 'cohere'
  • 'accounts/fireworks...' -> 'fireworks'
  • 'mistral...' -> 'mistralai'
  • 'deepseek...' -> 'deepseek'
  • 'grok...' -> 'xai'
  • 'sonar...' -> 'perplexity'
None
configurable_fields Literal['any'] | list[str] | tuple[str, ...] | None

Which model parameters are configurable:

  • None: No configurable fields.
  • "any": All fields are configurable. See Security Note below.
  • Union[List[str], Tuple[str, ...]]: Specified fields are configurable.

Fields are assumed to have config_prefix stripped if there is a config_prefix. If model is specified, then defaults to None. If model is not specified, then defaults to ("model", "model_provider").

Security Note: Setting configurable_fields="any" means fields like api_key, base_url, etc. can be altered at runtime, potentially redirecting model requests to a different service/user. Make sure that if you're accepting untrusted configurations that you enumerate the configurable_fields=(...) explicitly.

None
config_prefix str | None

If config_prefix is a non-empty string then model will be configurable at runtime via the config["configurable"]["{config_prefix}_{param}"] keys. If config_prefix is an empty string then model will be configurable via config["configurable"]["{param}"].

None
temperature

Model temperature.

optional
max_tokens

Max output tokens.

optional
timeout

The maximum time (in seconds) to wait for a response from the model before canceling the request.

optional
max_retries

The maximum number of attempts the system will make to resend a request if it fails due to issues like network timeouts or rate limits.

optional
base_url

The URL of the API endpoint where requests are sent.

optional
rate_limiter

A BaseRateLimiter to space out requests to avoid exceeding rate limits.

optional
kwargs Any

Additional model-specific keyword args to pass to <<selected ChatModel>>.__init__(model=model_name, **kwargs).

{}

Returns:

Type Description
BaseChatModel | _ConfigurableModel

A BaseChatModel corresponding to the model_name and model_provider specified if configurability is inferred to be False. If configurable, a chat model emulator that initializes the underlying model at runtime once a config is passed in.

Raises:

Type Description
ValueError

If model_provider cannot be inferred or isn't supported.

ImportError

If the model provider integration package is not installed.

.. dropdown:: Init non-configurable model :open:

.. code-block:: python

    # pip install langchain langchain-openai langchain-anthropic langchain-google-vertexai
    from langchain.chat_models import init_chat_model

    o3_mini = init_chat_model("openai:o3-mini", temperature=0)
    claude_sonnet = init_chat_model("anthropic:claude-3-5-sonnet-latest", temperature=0)
    gemini_2_flash = init_chat_model("google_vertexai:gemini-2.5-flash", temperature=0)

    o3_mini.invoke("what's your name")
    claude_sonnet.invoke("what's your name")
    gemini_2_flash.invoke("what's your name")

.. dropdown:: Partially configurable model with no default

.. code-block:: python

    # pip install langchain langchain-openai langchain-anthropic
    from langchain.chat_models import init_chat_model

    # We don't need to specify configurable=True if a model isn't specified.
    configurable_model = init_chat_model(temperature=0)

    configurable_model.invoke(
        "what's your name", config={"configurable": {"model": "gpt-4o"}}
    )
    # GPT-4o response

    configurable_model.invoke(
        "what's your name", config={"configurable": {"model": "claude-3-5-sonnet-latest"}}
    )
    # claude-3.5 sonnet response

.. dropdown:: Fully configurable model with a default

.. code-block:: python

    # pip install langchain langchain-openai langchain-anthropic
    from langchain.chat_models import init_chat_model

    configurable_model_with_default = init_chat_model(
        "openai:gpt-4o",
        configurable_fields="any",  # this allows us to configure other params like temperature, max_tokens, etc at runtime.
        config_prefix="foo",
        temperature=0,
    )

    configurable_model_with_default.invoke("what's your name")
    # GPT-4o response with temperature 0

    configurable_model_with_default.invoke(
        "what's your name",
        config={
            "configurable": {
                "foo_model": "anthropic:claude-3-5-sonnet-latest",
                "foo_temperature": 0.6,
            }
        },
    )
    # Claude-3.5 sonnet response with temperature 0.6

.. dropdown:: Bind tools to a configurable model

You can call any ChatModel declarative methods on a configurable model in the
same way that you would with a normal model.

.. code-block:: python

    # pip install langchain langchain-openai langchain-anthropic
    from langchain.chat_models import init_chat_model
    from pydantic import BaseModel, Field


    class GetWeather(BaseModel):
        '''Get the current weather in a given location'''

        location: str = Field(..., description="The city and state, e.g. San Francisco, CA")


    class GetPopulation(BaseModel):
        '''Get the current population in a given location'''

        location: str = Field(..., description="The city and state, e.g. San Francisco, CA")


    configurable_model = init_chat_model(
        "gpt-4o", configurable_fields=("model", "model_provider"), temperature=0
    )

    configurable_model_with_tools = configurable_model.bind_tools(
        [GetWeather, GetPopulation]
    )
    configurable_model_with_tools.invoke(
        "Which city is hotter today and which is bigger: LA or NY?"
    )
    # GPT-4o response with tool calls

    configurable_model_with_tools.invoke(
        "Which city is hotter today and which is bigger: LA or NY?",
        config={"configurable": {"model": "claude-3-5-sonnet-latest"}},
    )
    # Claude-3.5 sonnet response with tools

.. versionadded:: 0.2.7

.. versionchanged:: 0.2.8

Support for ``configurable_fields`` and ``config_prefix`` added.

.. versionchanged:: 0.2.12

Support for Ollama via langchain-ollama package added
(langchain_ollama.ChatOllama). Previously,
the now-deprecated langchain-community version of Ollama was imported
(langchain_community.chat_models.ChatOllama).

Support for AWS Bedrock models via the Converse API added
(model_provider="bedrock_converse").

.. versionchanged:: 0.3.5

Out of beta.

.. versionchanged:: 0.3.19

Support for Deepseek, IBM, Nvidia, and xAI models added.