Tags to add to the run trace.
The name of the Runnable. Used for debugging and tracing.
Input type.
The type of input this Runnable accepts, specified as a Pydantic model.
Output schema.
List configurable fields for this Runnable.
Get the name of the Runnable.
Get a Pydantic model that can be used to validate input to the Runnable.
Get a JSON schema that represents the input to the Runnable.
Get a Pydantic model that can be used to validate output of the Runnable.
Version of AIMessage output format to store in message content.
AIMessage.content_blocks will lazily parse the contents of content into a
standard format. This flag can be used to additionally store the standard format
in message content, e.g., for serialization purposes.
Supported values:
- 'v0': provider-specific format in content (can lazily parse with content_blocks)
- 'v1': standardized format in content (consistent with content_blocks)

Partner packages (e.g., langchain-openai) can also use this field to roll out new content formats in a backward-compatible way.
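A minimal sketch of the idea, using a hypothetical `Message` dataclass rather than LangChain's actual `AIMessage`: `content` holds the provider-specific ('v0') payload, and a `content_blocks` property lazily converts it to a standard block format; with `output_version="v1"` the standard form is stored in `content` directly.

```python
from dataclasses import dataclass

@dataclass
class Message:
    """Hypothetical stand-in for AIMessage, illustrating the flag above.

    With output_version='v0', content is provider-specific and
    content_blocks lazily parses it; with 'v1', content already holds
    the standardized block format.
    """
    content: object
    output_version: str = "v0"

    @property
    def content_blocks(self) -> list[dict]:
        if self.output_version == "v1":
            # Content is already stored in the standardized format.
            return self.content
        # Lazily parse the provider-specific payload into standard blocks.
        return [{"type": "text", "text": self.content}]

v0 = Message("hello")
print(v0.content_blocks)  # [{'type': 'text', 'text': 'hello'}]

v1 = Message([{"type": "text", "text": "hello"}], output_version="v1")
print(v1.content == v1.content_blocks)  # True
```

The point of storing 'v1' in `content` is that serializing the message then round-trips the standardized format without re-parsing.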
Base class for chat models.
Key imperative methods:
Methods that actually call the underlying model.
This table provides a brief overview of the main imperative methods. Please see the base Runnable reference for full documentation.
| Method | Input | Output | Description |
|---|---|---|---|
| invoke | str \| list[dict \| tuple \| BaseMessage] \| PromptValue | BaseMessage | A single chat model call. |
| ainvoke | ''' | BaseMessage | Defaults to running invoke in an async executor. |
| stream | ''' | Iterator[BaseMessageChunk] | Defaults to yielding output of invoke. |
| astream | ''' | AsyncIterator[BaseMessageChunk] | Defaults to yielding output of ainvoke. |
| astream_events | ''' | AsyncIterator[StreamEvent] | Event types: on_chat_model_start, on_chat_model_stream, on_chat_model_end. |
| batch | list['''] | list[BaseMessage] | Defaults to running invoke in concurrent threads. |
| abatch | list['''] | list[BaseMessage] | Defaults to running ainvoke concurrently. |
| batch_as_completed | list['''] | Iterator[tuple[int, Union[BaseMessage, Exception]]] | Defaults to running invoke in concurrent threads, yielding results as they complete. |
| abatch_as_completed | list['''] | AsyncIterator[tuple[int, Union[BaseMessage, Exception]]] | Defaults to running ainvoke concurrently, yielding results as they complete. |
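The default-implementation pattern in the table can be sketched with a hypothetical `MiniModel` class (not LangChain's actual base class): only the synchronous call is implemented directly, and the async and batch variants wrap it exactly as the table describes.

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

class MiniModel:
    """Hypothetical stand-in showing how the async/batch methods
    above can default to wrappers around invoke."""

    def invoke(self, prompt: str) -> str:
        # A single "model call" (here it just echoes, for illustration).
        return f"echo: {prompt}"

    async def ainvoke(self, prompt: str) -> str:
        # Defaults to running invoke in an async executor.
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(None, self.invoke, prompt)

    def batch(self, prompts: list[str]) -> list[str]:
        # Defaults to running invoke in concurrent threads.
        with ThreadPoolExecutor() as pool:
            return list(pool.map(self.invoke, prompts))

model = MiniModel()
print(model.batch(["a", "b"]))           # ['echo: a', 'echo: b']
print(asyncio.run(model.ainvoke("hi")))  # echo: hi
```

This is why a provider integration that only implements the sync call still exposes a working async and batch surface, albeit without native concurrency on the provider side.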
Key declarative methods:
Methods for creating another Runnable using the chat model.
This table provides a brief overview of the main declarative methods. Please see the reference for each method for full documentation.
| Method | Description |
|---|---|
| bind_tools | Create chat model that can call tools. |
| with_structured_output | Create wrapper that structures model output using schema. |
| with_retry | Create wrapper that retries model calls on failure. |
| with_fallbacks | Create wrapper that falls back to other models on failure. |
| configurable_fields | Specify init args of the model that can be configured at runtime via the RunnableConfig. |
| configurable_alternatives | Specify alternative models which can be swapped in at runtime via the RunnableConfig. |
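The declarative pattern behind with_retry and with_fallbacks can be illustrated with plain functions over a callable "model": each returns a new callable rather than invoking anything. This is a stdlib sketch of the composition idea, not LangChain's implementation, and the `flaky` model below is hypothetical.

```python
from typing import Callable

def with_retry(call: Callable[[str], str], attempts: int = 3) -> Callable[[str], str]:
    # Sketch: wrap a model call so transient failures are retried.
    def wrapped(prompt: str) -> str:
        for i in range(attempts):
            try:
                return call(prompt)
            except Exception:
                if i == attempts - 1:
                    raise
    return wrapped

def with_fallbacks(primary: Callable[[str], str],
                   fallbacks: list[Callable[[str], str]]) -> Callable[[str], str]:
    # Sketch: try each model in order until one succeeds.
    def wrapped(prompt: str) -> str:
        last_exc = None
        for call in (primary, *fallbacks):
            try:
                return call(prompt)
            except Exception as exc:
                last_exc = exc
        raise last_exc
    return wrapped

# Usage: a flaky "model" that fails twice, then answers.
calls = {"n": 0}
def flaky(prompt: str) -> str:
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok: " + prompt

robust = with_retry(flaky)
print(robust("hello"))  # ok: hello
```

The design point is that the wrappers are themselves model-shaped, so they compose: a retrying model can be given fallbacks, and the result still exposes the same call surface.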
Creating a custom chat model:
Custom chat model implementations should inherit from this class. Please reference the table below for information about which methods and properties are required or optional for implementations.
| Method/Property | Description | Required |
|---|---|---|
| _generate | Use to generate a chat result from a prompt. | Required |
| _llm_type (property) | Used to uniquely identify the type of the model. Used for logging. | Required |
| _identifying_params (property) | Represent model parameterization for tracing purposes. | Optional |
| _stream | Use to implement streaming. | Optional |
| _agenerate | Use to implement a native async method. | Optional |
| _astream | Use to implement async version of _stream. | Optional |
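The required/optional split in the table can be sketched as an abstract base class. This is a stdlib stand-in for illustration only; a real implementation inherits from langchain_core's BaseChatModel, whose method signatures differ from this simplified sketch.

```python
from abc import ABC, abstractmethod
from typing import Iterator

class ChatModelSketch(ABC):
    """Hypothetical sketch of the contract in the table above."""

    @abstractmethod
    def _generate(self, prompt: str) -> str:
        """Required: produce a chat result from a prompt."""

    @property
    @abstractmethod
    def _llm_type(self) -> str:
        """Required: uniquely identifies the model type, e.g. for logging."""

    @property
    def _identifying_params(self) -> dict:
        """Optional: parameters recorded for tracing purposes."""
        return {}

    def _stream(self, prompt: str) -> Iterator[str]:
        """Optional: default falls back to one full-result chunk."""
        yield self._generate(prompt)

class EchoModel(ChatModelSketch):
    # Implementing only the two required members yields a working model.
    def _generate(self, prompt: str) -> str:
        return "echo: " + prompt

    @property
    def _llm_type(self) -> str:
        return "echo"

m = EchoModel()
print(m._generate("hi"))      # echo: hi
print(list(m._stream("hi")))  # ['echo: hi']
```

Because _stream, _agenerate, and _astream have workable defaults, overriding them is purely an optimization: native streaming and native async avoid the fallback paths shown in the imperative-methods table.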
Get a JSON schema that represents the output of the Runnable.