langchain-xai¶
LangChain integration with xAI.
Modules:
| Name | Description |
|---|---|
| chat_models | Wrapper around xAI's Chat Completions API. |
Classes:
| Name | Description |
|---|---|
| ChatXAI | ChatXAI chat model. |
ChatXAI
¶
Bases: BaseChatOpenAI
ChatXAI chat model.
Refer to `xAI's documentation <https://docs.x.ai/docs/api-reference#chat-completions>`__
for more nuanced details on the API's behavior and supported parameters.
Setup
Install langchain-xai and set environment variable XAI_API_KEY.
.. code-block:: bash
pip install -U langchain-xai
export XAI_API_KEY="your-api-key"
Key init args - completion params:
model: str
Name of model to use.
temperature: float
Sampling temperature between 0 and 2. Higher values mean more random completions,
while lower values (like 0.2) mean more focused and deterministic completions.
(Default: 1.)
max_tokens: Optional[int]
Max number of tokens to generate. Refer to `your model's documentation <https://docs.x.ai/docs/models#model-pricing>`__
for the maximum number of tokens it can generate.
logprobs: Optional[bool]
Whether to return logprobs.
Key init args - client params:
timeout: Union[float, Tuple[float, float], Any, None]
Timeout for requests.
max_retries: int
Max number of retries.
api_key: Optional[str]
xAI API key. If not passed in will be read from env var XAI_API_KEY.
Instantiate
.. code-block:: python
from langchain_xai import ChatXAI
llm = ChatXAI(
model="grok-4",
temperature=0,
max_tokens=None,
timeout=None,
max_retries=2,
# api_key="...",
# other params...
)
Invoke
.. code-block:: python
messages = [
(
"system",
"You are a helpful translator. Translate the user sentence to French.",
),
("human", "I love programming."),
]
llm.invoke(messages)
.. code-block:: python
AIMessage(
content="J'adore la programmation.",
response_metadata={
"token_usage": {
"completion_tokens": 9,
"prompt_tokens": 32,
"total_tokens": 41,
},
"model_name": "grok-4",
"system_fingerprint": None,
"finish_reason": "stop",
"logprobs": None,
},
id="run-168dceca-3b8b-4283-94e3-4c739dbc1525-0",
usage_metadata={
"input_tokens": 32,
"output_tokens": 9,
"total_tokens": 41,
},
)
Stream
.. code-block:: python
for chunk in llm.stream(messages):
print(chunk.text, end="")
.. code-block:: python
content='J' id='run-1bc996b5-293f-4114-96a1-e0f755c05eb9'
content="'" id='run-1bc996b5-293f-4114-96a1-e0f755c05eb9'
content='ad' id='run-1bc996b5-293f-4114-96a1-e0f755c05eb9'
content='ore' id='run-1bc996b5-293f-4114-96a1-e0f755c05eb9'
content=' la' id='run-1bc996b5-293f-4114-96a1-e0f755c05eb9'
content=' programm' id='run-1bc996b5-293f-4114-96a1-e0f755c05eb9'
content='ation' id='run-1bc996b5-293f-4114-96a1-e0f755c05eb9'
content='.' id='run-1bc996b5-293f-4114-96a1-e0f755c05eb9'
content='' response_metadata={'finish_reason': 'stop', 'model_name': 'grok-4'} id='run-1bc996b5-293f-4114-96a1-e0f755c05eb9'
Async
.. code-block:: python
await llm.ainvoke(messages)
# stream:
# async for chunk in llm.astream(messages)
# batch:
# await llm.abatch([messages])
.. code-block:: python
AIMessage(
content="J'adore la programmation.",
response_metadata={
"token_usage": {
"completion_tokens": 9,
"prompt_tokens": 32,
"total_tokens": 41,
},
"model_name": "grok-4",
"system_fingerprint": None,
"finish_reason": "stop",
"logprobs": None,
},
id="run-09371a11-7f72-4c53-8e7c-9de5c238b34c-0",
usage_metadata={
"input_tokens": 32,
"output_tokens": 9,
"total_tokens": 41,
},
)
Reasoning
Certain `xAI models <https://docs.x.ai/docs/models#model-pricing>`__ support reasoning,
which allows the model to provide reasoning content along with the response.
If provided, reasoning content is returned under the additional_kwargs field of the
AIMessage or AIMessageChunk.
If supported, reasoning effort can be specified in the model constructor's extra_body
argument, which will control the amount of reasoning the model does. The value can be one of
'low' or 'high'.
.. code-block:: python
model = ChatXAI(
model="grok-3-mini",
extra_body={"reasoning_effort": "high"},
)
Note
As of 2025-07-10, reasoning_content is only returned in Grok 3 models, such as
`Grok 3 Mini <https://docs.x.ai/docs/models/grok-3-mini>`__.
Note
In `Grok 4 <https://docs.x.ai/docs/models/grok-4-0709>`__, as of 2025-07-10,
reasoning is not exposed in reasoning_content (other than initial 'Thinking...' text),
reasoning cannot be disabled, and reasoning_effort cannot be specified.
Tool calling / function calling
.. code-block:: python
from pydantic import BaseModel, Field
llm = ChatXAI(model="grok-4")
class GetWeather(BaseModel):
'''Get the current weather in a given location'''
location: str = Field(..., description="The city and state, e.g. San Francisco, CA")
class GetPopulation(BaseModel):
'''Get the current population in a given location'''
location: str = Field(..., description="The city and state, e.g. San Francisco, CA")
llm_with_tools = llm.bind_tools([GetWeather, GetPopulation])
ai_msg = llm_with_tools.invoke("Which city is bigger: LA or NY?")
ai_msg.tool_calls
.. code-block:: python
[
{
"name": "GetPopulation",
"args": {"location": "NY"},
"id": "call_m5tstyn2004pre9bfuxvom8x",
"type": "tool_call",
},
{
"name": "GetPopulation",
"args": {"location": "LA"},
"id": "call_0vjgq455gq1av5sp9eb1pw6a",
"type": "tool_call",
},
]
Note
With a streamed response, the tool / function call will be returned in whole in a
single chunk, instead of being streamed across chunks.
Tool choice can be controlled by setting the ``tool_choice`` parameter in the model
constructor's ``extra_body`` argument. For example, to disable tool / function calling:
.. code-block:: python
llm = ChatXAI(model="grok-4", extra_body={"tool_choice": "none"})
To require that the model always calls a tool / function, set ``tool_choice`` to ``'required'``:
.. code-block:: python
llm = ChatXAI(model="grok-4", extra_body={"tool_choice": "required"})
To specify a tool / function to call, set ``tool_choice`` to the name of the tool / function:
.. code-block:: python
from pydantic import BaseModel, Field
llm = ChatXAI(
model="grok-4",
extra_body={
"tool_choice": {"type": "function", "function": {"name": "GetWeather"}}
},
)
class GetWeather(BaseModel):
\"\"\"Get the current weather in a given location\"\"\"
location: str = Field(..., description='The city and state, e.g. San Francisco, CA')
class GetPopulation(BaseModel):
\"\"\"Get the current population in a given location\"\"\"
location: str = Field(..., description='The city and state, e.g. San Francisco, CA')
llm_with_tools = llm.bind_tools([GetWeather, GetPopulation])
ai_msg = llm_with_tools.invoke(
"Which city is bigger: LA or NY?",
)
ai_msg.tool_calls
The resulting tool call would be:
.. code-block:: python
[
{
"name": "GetWeather",
"args": {"location": "Los Angeles, CA"},
"id": "call_81668711",
"type": "tool_call",
}
]
Parallel tool calling / parallel function calling
By default, parallel tool / function calling is enabled, so you can process multiple function calls in one request/response cycle. When two or more tool calls are required, all of the tool call requests will be included in the response body, as in the sketch below.
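For example, a single question that requires both tools can yield multiple tool calls in one response. A minimal sketch, reusing the GetWeather and GetPopulation schemas defined above:
.. code-block:: python
llm_with_tools = llm.bind_tools([GetWeather, GetPopulation])
ai_msg = llm_with_tools.invoke(
    "What is the weather in LA, and how many people live in NY?"
)
# Each required call appears as its own entry in `tool_calls`.
for tool_call in ai_msg.tool_calls:
    print(tool_call["name"], tool_call["args"])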
Structured output
.. code-block:: python
from typing import Optional
from pydantic import BaseModel, Field
class Joke(BaseModel):
'''Joke to tell user.'''
setup: str = Field(description="The setup of the joke")
punchline: str = Field(description="The punchline to the joke")
rating: Optional[int] = Field(description="How funny the joke is, from 1 to 10")
structured_llm = llm.with_structured_output(Joke)
structured_llm.invoke("Tell me a joke about cats")
.. code-block:: python
Joke(
setup="Why was the cat sitting on the computer?",
punchline="To keep an eye on the mouse!",
rating=7,
)
Live Search
xAI supports a `Live Search <https://docs.x.ai/docs/guides/live-search>`__
feature that enables Grok to ground its answers using results from web searches.
.. code-block:: python
from langchain_xai import ChatXAI
llm = ChatXAI(
model="grok-4",
search_parameters={
"mode": "auto",
# Example optional parameters below:
"max_search_results": 3,
"from_date": "2025-05-26",
"to_date": "2025-05-27",
},
)
llm.invoke("Provide me a digest of world news in the last 24 hours.")
Note
`Citations <https://docs.x.ai/docs/guides/live-search#returning-citations>`__
are only available in `Grok 3 <https://docs.x.ai/docs/models/grok-3>`__.
Token usage
.. code-block:: python
ai_msg = llm.invoke(messages)
ai_msg.usage_metadata
.. code-block:: python
{"input_tokens": 37, "output_tokens": 6, "total_tokens": 43}
Logprobs
.. code-block:: python
logprobs_llm = llm.bind(logprobs=True)
messages = [("human", "Say Hello World! Do not return anything else.")]
ai_msg = logprobs_llm.invoke(messages)
ai_msg.response_metadata["logprobs"]
.. code-block:: python
{
"content": None,
"token_ids": [22557, 3304, 28808, 2],
"tokens": [" Hello", " World", "!", "</s>"],
"token_logprobs": [-4.7683716e-06, -5.9604645e-07, 0, -0.057373047],
}
Response metadata
.. code-block:: python
ai_msg = llm.invoke(messages)
ai_msg.response_metadata
.. code-block:: python
{
"token_usage": {
"completion_tokens": 4,
"prompt_tokens": 19,
"total_tokens": 23,
},
"model_name": "grok-4",
"system_fingerprint": None,
"finish_reason": "stop",
"logprobs": None,
}
Methods:
| Name | Description |
|---|---|
| get_name | Get the name of the Runnable. |
| get_input_schema | Get a pydantic model that can be used to validate input to the Runnable. |
| get_input_jsonschema | Get a JSON schema that represents the input to the Runnable. |
| get_output_schema | Get a pydantic model that can be used to validate output to the Runnable. |
| get_output_jsonschema | Get a JSON schema that represents the output of the Runnable. |
| config_schema | The type of config this Runnable accepts specified as a pydantic model. |
| get_config_jsonschema | Get a JSON schema that represents the config of the Runnable. |
| get_graph | Return a graph representation of this Runnable. |
| get_prompts | Return a list of prompts used by this Runnable. |
| __or__ | Runnable "or" operator. |
| __ror__ | Runnable "reverse-or" operator. |
| pipe | Pipe runnables. |
| pick | Pick keys from the output dict of this Runnable. |
| assign | Assigns new fields to the dict output of this Runnable. |
| batch | Default implementation runs invoke in parallel using a thread pool executor. |
| batch_as_completed | Run invoke in parallel on a list of inputs. |
| abatch | Default implementation runs ainvoke in parallel using asyncio.gather. |
| abatch_as_completed | Run ainvoke in parallel on a list of inputs. |
| astream_log | Stream all output from a Runnable, as reported to the callback system. |
| astream_events | Generate a stream of events. |
| transform | Transform inputs to outputs. |
| atransform | Transform inputs to outputs. |
| bind | Bind arguments to a Runnable, returning a new Runnable. |
| with_config | Bind config to a Runnable, returning a new Runnable. |
| with_listeners | Bind lifecycle listeners to a Runnable, returning a new Runnable. |
| with_alisteners | Bind async lifecycle listeners to a Runnable. |
| with_types | Bind input and output types to a Runnable, returning a new Runnable. |
| with_retry | Create a new Runnable that retries the original Runnable on exceptions. |
| map | Return a new Runnable that maps a list of inputs to a list of outputs. |
| with_fallbacks | Add fallbacks to a Runnable, returning a new Runnable. |
| as_tool | Create a BaseTool from a Runnable. |
| __init__ | |
| lc_id | Return a unique identifier for this class for serialization purposes. |
| to_json | Serialize the Runnable to JSON. |
| to_json_not_implemented | Serialize a "not implemented" object. |
| configurable_fields | Configure particular Runnable fields at runtime. |
| configurable_alternatives | Configure alternatives for Runnables that can be set at runtime. |
| set_verbose | If verbose is None, set it. |
| get_token_ids | Get the tokens present in the text with tiktoken package. |
| get_num_tokens | Get the number of tokens present in the text. |
| get_num_tokens_from_messages | Calculate the number of tokens in a list of messages. |
| generate | Pass a sequence of prompts to the model and return model generations. |
| agenerate | Asynchronously pass a sequence of prompts to a model and return generations. |
| dict | Return a dictionary of the LLM. |
| bind_tools | Bind tool-like objects to this chat model. |
| build_extra | Build extra kwargs from additional params that were passed in. |
| validate_temperature | Validate temperature parameter for different models. |
| get_lc_namespace | Get the namespace of the langchain object. |
| is_lc_serializable | Return whether this model can be serialized by LangChain. |
| validate_environment | Validate that api key and python package exists in environment. |
| with_structured_output | Model wrapper that returns outputs formatted to match the given schema. |
Attributes:
| Name | Type | Description |
|---|---|---|
| InputType | TypeAlias | Get the input type for this runnable. |
| OutputType | Any | Get the output type for this runnable. |
| input_schema | type[BaseModel] | The type of input this Runnable accepts specified as a pydantic model. |
| output_schema | type[BaseModel] | Output schema. |
| config_specs | list[ConfigurableFieldSpec] | List configurable fields for this Runnable. |
| cache | BaseCache \| bool \| None | Whether to cache the response. |
| verbose | bool | Whether to print out response text. |
| callbacks | Callbacks | Callbacks to add to the run trace. |
| tags | list[str] \| None | Tags to add to the run trace. |
| metadata | dict[str, Any] \| None | Metadata to add to the run trace. |
| custom_get_token_ids | Callable[[str], list[int]] \| None | Optional encoder to use for counting tokens. |
| rate_limiter | BaseRateLimiter \| None | An optional rate limiter to use for limiting the number of requests. |
| disable_streaming | bool \| Literal['tool_calling'] | Whether to disable streaming for this model. |
| output_version | Optional[str] | Version of AIMessage output format to use. |
| temperature | Optional[float] | What sampling temperature to use. |
| model_kwargs | dict[str, Any] | Holds any model parameters valid for create call not explicitly specified. |
| openai_organization | Optional[str] | Automatically inferred from env var OPENAI_ORG_ID if not provided. |
| request_timeout | Union[float, tuple[float, float], Any, None] | Timeout for requests to OpenAI completion API. Can be float, httpx.Timeout or None. |
| stream_usage | Optional[bool] | Whether to include usage metadata in streaming output. |
| max_retries | Optional[int] | Maximum number of retries to make when generating. |
| presence_penalty | Optional[float] | Penalizes repeated tokens. |
| frequency_penalty | Optional[float] | Penalizes repeated tokens according to frequency. |
| seed | Optional[int] | Seed for generation. |
| logprobs | Optional[bool] | Whether to return logprobs. |
| top_logprobs | Optional[int] | Number of most likely tokens to return at each token position, each with an associated log probability. |
| logit_bias | Optional[dict[int, int]] | Modify the likelihood of specified tokens appearing in the completion. |
| streaming | bool | Whether to stream the results or not. |
| n | Optional[int] | Number of chat completions to generate for each prompt. |
| top_p | Optional[float] | Total probability mass of tokens to consider at each step. |
| max_tokens | Optional[int] | Maximum number of tokens to generate. |
| reasoning_effort | Optional[str] | Constrains effort on reasoning for reasoning models. For use with the Chat Completions API. |
| reasoning | Optional[dict[str, Any]] | Reasoning parameters for reasoning models, i.e., OpenAI o-series models. For use with the Responses API. |
| verbosity | Optional[str] | Controls the verbosity level of responses for reasoning models. For use with the Responses API. |
| tiktoken_model_name | Optional[str] | The model name to pass to tiktoken when using this class. |
| http_client | Union[Any, None] | Optional httpx.Client. Only used for sync invocations. |
| http_async_client | Union[Any, None] | Optional httpx.AsyncClient. Only used for async invocations. |
| stop | Optional[Union[list[str], str]] | Default stop sequences. |
| extra_body | Optional[Mapping[str, Any]] | Optional additional JSON properties to include in the request parameters when making requests to OpenAI compatible APIs. |
| include_response_headers | bool | Whether to include response headers in the output message response_metadata. |
| disabled_params | Optional[dict[str, Any]] | Parameters of the OpenAI client or chat.completions endpoint that should be disabled for the given model. |
| include | Optional[list[str]] | Additional fields to include in generations from Responses API. |
| service_tier | Optional[str] | Latency tier for request. Options are 'auto', 'default', or 'flex'. |
| store | Optional[bool] | If True, OpenAI may store response data for future use. |
| truncation | Optional[str] | Truncation strategy (Responses API). Can be 'auto' or 'disabled'. |
| use_previous_response_id | bool | If True, always pass previous_response_id using the ID of the most recent response. |
| use_responses_api | Optional[bool] | Whether to use the Responses API instead of the Chat API. |
| model_name | str | Model name to use. |
| xai_api_key | Optional[SecretStr] | xAI API key. |
| xai_api_base | str | Base URL path for API requests. |
| search_parameters | Optional[dict[str, Any]] | Parameters for search requests. Example: {"mode": "auto"}. |
| lc_secrets | dict[str, str] | A map of constructor argument names to secret ids. |
| lc_attributes | dict[str, Any] | List of attribute names that should be included in the serialized kwargs. |
input_schema
property
¶
input_schema: type[BaseModel]
The type of input this Runnable accepts specified as a pydantic model.
output_schema
property
¶
output_schema: type[BaseModel]
Output schema.
The type of output this Runnable produces specified as a pydantic model.
config_specs
property
¶
config_specs: list[ConfigurableFieldSpec]
List configurable fields for this Runnable.
cache
class-attribute
instance-attribute
¶
cache: BaseCache | bool | None = Field(
default=None, exclude=True
)
Whether to cache the response.
- If true, will use the global cache.
- If false, will not use a cache
- If None, will use the global cache if it's set, otherwise no cache.
- If instance of
BaseCache, will use the provided cache.
Caching is not currently supported for streaming methods of models.
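For per-instance caching, a minimal sketch assuming InMemoryCache from langchain_core.caches:
.. code-block:: python
from langchain_core.caches import InMemoryCache
from langchain_xai import ChatXAI

llm = ChatXAI(model="grok-4", cache=InMemoryCache())
llm.invoke("Hello")  # First call hits the API.
llm.invoke("Hello")  # An identical repeat call is served from the cache.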
verbose
class-attribute
instance-attribute
¶
verbose: bool = Field(
default_factory=_get_verbosity, exclude=True, repr=False
)
Whether to print out response text.
callbacks
class-attribute
instance-attribute
¶
Callbacks to add to the run trace.
tags
class-attribute
instance-attribute
¶
Tags to add to the run trace.
metadata
class-attribute
instance-attribute
¶
Metadata to add to the run trace.
custom_get_token_ids
class-attribute
instance-attribute
¶
Optional encoder to use for counting tokens.
rate_limiter
class-attribute
instance-attribute
¶
rate_limiter: BaseRateLimiter | None = Field(
default=None, exclude=True
)
An optional rate limiter to use for limiting the number of requests.
disable_streaming
class-attribute
instance-attribute
¶
Whether to disable streaming for this model.
If streaming is bypassed, then stream()/astream()/astream_events() will
defer to invoke()/ainvoke().
- If True, will always bypass streaming case.
- If 'tool_calling', will bypass streaming case only when the model is called with a tools keyword argument. In other words, LangChain will automatically switch to non-streaming behavior (invoke()) only when the tools argument is provided. This offers the best of both worlds.
- If False (default), will always use streaming case if available.
The main reason for this flag is that code might be written using stream() and
a user may want to swap out a given model for another model whose implementation
does not properly support streaming.
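A minimal sketch of the two opt-out modes described above:
.. code-block:: python
from langchain_xai import ChatXAI

# Always fall back to invoke(), even when stream() is called:
llm_no_stream = ChatXAI(model="grok-4", disable_streaming=True)

# Fall back to invoke() only when a `tools` argument is provided:
llm_tools_no_stream = ChatXAI(model="grok-4", disable_streaming="tool_calling")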
output_version
class-attribute
instance-attribute
¶
output_version: Optional[str] = Field(
default_factory=from_env(
"LC_OUTPUT_VERSION", default=None
)
)
Version of AIMessage output format to use.
This field is used to roll-out new output formats for chat model AIMessages in a backwards-compatible way.
Supported values:
- 'v0': AIMessage format as of langchain-openai 0.3.x.
- 'responses/v1': Formats Responses API output items into AIMessage content blocks (Responses API only).
- 'v1': v1 of LangChain cross-provider standard.
Behavior changed in 1.0.0
Default updated to "responses/v1".
temperature
class-attribute
instance-attribute
¶
What sampling temperature to use.
model_kwargs
class-attribute
instance-attribute
¶
Holds any model parameters valid for create call not explicitly specified.
openai_organization
class-attribute
instance-attribute
¶
Automatically inferred from env var OPENAI_ORG_ID if not provided.
request_timeout
class-attribute
instance-attribute
¶
request_timeout: Union[
float, tuple[float, float], Any, None
] = Field(default=None, alias="timeout")
Timeout for requests to OpenAI completion API. Can be float, httpx.Timeout or
None.
stream_usage
class-attribute
instance-attribute
¶
Whether to include usage metadata in streaming output. If enabled, an additional message chunk will be generated during the stream including usage metadata.
This parameter is enabled unless openai_api_base is set or the model is
initialized with a custom client, as many chat completions APIs do not support
streaming token usage.
Added in version 0.3.9
Behavior changed in 0.3.35
Enabled for default base URL and client.
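A minimal sketch of collecting usage metadata from a stream; when this flag is enabled, the additional final chunk carries usage_metadata:
.. code-block:: python
llm = ChatXAI(model="grok-4", stream_usage=True)

usage = None
for chunk in llm.stream("Hello"):
    # Only the additional final chunk populates `usage_metadata`.
    if chunk.usage_metadata:
        usage = chunk.usage_metadata
print(usage)  # {'input_tokens': ..., 'output_tokens': ..., 'total_tokens': ...}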
max_retries
class-attribute
instance-attribute
¶
Maximum number of retries to make when generating.
presence_penalty
class-attribute
instance-attribute
¶
Penalizes repeated tokens.
frequency_penalty
class-attribute
instance-attribute
¶
Penalizes repeated tokens according to frequency.
logprobs
class-attribute
instance-attribute
¶
Whether to return logprobs.
top_logprobs
class-attribute
instance-attribute
¶
Number of most likely tokens to return at each token position, each with
an associated log probability. logprobs must be set to true
if this parameter is used.
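A minimal sketch combining the two parameters, mirroring the Logprobs example below:
.. code-block:: python
llm_with_logprobs = llm.bind(logprobs=True, top_logprobs=3)
ai_msg = llm_with_logprobs.invoke("Say Hello World! Do not return anything else.")
# Each returned token entry also lists its 3 most likely alternatives.
ai_msg.response_metadata["logprobs"]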
logit_bias
class-attribute
instance-attribute
¶
Modify the likelihood of specified tokens appearing in the completion.
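An illustrative sketch only; the token IDs below are hypothetical (real IDs depend on the model's tokenizer), and support for logit_bias depends on the endpoint:
.. code-block:: python
# Values range from -100 to 100; large negative values effectively ban a token.
llm_biased = llm.bind(logit_bias={2435: -100, 640: -100})
llm_biased.invoke("Say Hello World!")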
streaming
class-attribute
instance-attribute
¶
streaming: bool = False
Whether to stream the results or not.
n
class-attribute
instance-attribute
¶
Number of chat completions to generate for each prompt.
top_p
class-attribute
instance-attribute
¶
Total probability mass of tokens to consider at each step.
max_tokens
class-attribute
instance-attribute
¶
Maximum number of tokens to generate.
reasoning_effort
class-attribute
instance-attribute
¶
Constrains effort on reasoning for reasoning models. For use with the Chat Completions API.
Reasoning models only, like OpenAI o1, o3, and o4-mini.
Currently supported values are 'minimal', 'low', 'medium', and
'high'. Reducing reasoning effort can result in faster responses and fewer
tokens used on reasoning in a response.
Added in version 0.2.14
reasoning
class-attribute
instance-attribute
¶
Reasoning parameters for reasoning models, i.e., OpenAI o-series models (o1, o3, o4-mini, etc.). For use with the Responses API.
Example:
.. code-block:: python
reasoning={
"effort": "medium", # can be "low", "medium", or "high"
"summary": "auto", # can be "auto", "concise", or "detailed"
}
Added in version 0.3.24
verbosity
class-attribute
instance-attribute
¶
Controls the verbosity level of responses for reasoning models. For use with the Responses API.
Currently supported values are 'low', 'medium', and 'high'.
Controls how detailed the model's responses are.
Added in version 0.3.28
tiktoken_model_name
class-attribute
instance-attribute
¶
The model name to pass to tiktoken when using this class. Tiktoken is used to count the number of tokens in documents to constrain them to be under a certain limit. By default, when set to None, this will be the same as the model name. However, there are some cases where you may want to use this class with a model name not supported by tiktoken. This can include when using Azure embeddings or when using one of the many model providers that expose an OpenAI-like API but with different models. In those cases, in order to avoid erroring when tiktoken is called, you can specify a model name to use here.
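A minimal sketch; 'gpt-4o' is used here only as a stand-in for a model name tiktoken recognizes:
.. code-block:: python
llm = ChatXAI(model="grok-4", tiktoken_model_name="gpt-4o")
llm.get_num_tokens("How many tokens is this sentence?")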
http_client
class-attribute
instance-attribute
¶
Optional httpx.Client. Only used for sync invocations. Must specify
http_async_client as well if you'd like a custom client for async
invocations.
http_async_client
class-attribute
instance-attribute
¶
Optional httpx.AsyncClient. Only used for async invocations. Must specify
http_client as well if you'd like a custom client for sync invocations.
stop
class-attribute
instance-attribute
¶
Default stop sequences.
extra_body
class-attribute
instance-attribute
¶
Optional additional JSON properties to include in the request parameters when making requests to OpenAI compatible APIs, such as vLLM, LM Studio, or other providers.
This is the recommended way to pass custom parameters that are specific to your OpenAI-compatible API provider but not part of the standard OpenAI API.
Examples:
- LM Studio TTL parameter: extra_body={"ttl": 300}
- vLLM custom parameters: extra_body={"use_beam_search": True}
- Any other provider-specific parameters
Note
Do NOT use model_kwargs for custom parameters that are not part of the
standard OpenAI API, as this will cause errors when making API calls. Use
extra_body instead.
include_response_headers
class-attribute
instance-attribute
¶
include_response_headers: bool = False
Whether to include response headers in the output message response_metadata.
disabled_params
class-attribute
instance-attribute
¶
Parameters of the OpenAI client or chat.completions endpoint that should be disabled for the given model.
Should be specified as {"param": None | ['val1', 'val2']} where the key is the
parameter and the value is either None, meaning that parameter should never be
used, or it's a list of disabled values for the parameter.
For example, older models may not support the 'parallel_tool_calls' parameter at
all, in which case disabled_params={"parallel_tool_calls": None} can be passed
in.
If a parameter is disabled then it will not be used by default in any methods, e.g.
in langchain_openai.chat_models.base.ChatOpenAI.with_structured_output.
However, this does not prevent a user from directly passing in the parameter during
invocation.
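A minimal sketch of the example described above:
.. code-block:: python
from langchain_xai import ChatXAI

# Never send `parallel_tool_calls`, e.g. for an endpoint that rejects it.
llm = ChatXAI(
    model="grok-4",
    disabled_params={"parallel_tool_calls": None},
)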
include
class-attribute
instance-attribute
¶
Additional fields to include in generations from Responses API.
Supported values:
- 'file_search_call.results'
- 'message.input_image.image_url'
- 'computer_call_output.output.image_url'
- 'reasoning.encrypted_content'
- 'code_interpreter_call.outputs'
Added in version 0.3.24
service_tier
class-attribute
instance-attribute
¶
Latency tier for request. Options are 'auto', 'default', or 'flex'.
Relevant for users of OpenAI's scale tier service.
store
class-attribute
instance-attribute
¶
If True, OpenAI may store response data for future use. Defaults to True for the Responses API and False for the Chat Completions API.
Added in version 0.3.24
truncation
class-attribute
instance-attribute
¶
Truncation strategy (Responses API). Can be 'auto' or 'disabled'
(default). If 'auto', model may drop input items from the middle of the
message sequence to fit the context window.
Added in version 0.3.24
use_previous_response_id
class-attribute
instance-attribute
¶
use_previous_response_id: bool = False
If True, always pass previous_response_id using the ID of the most recent
response. Responses API only.
Input messages up to the most recent response will be dropped from request payloads.
For example, the following two are equivalent:
.. code-block:: python
llm = ChatOpenAI(
model="o4-mini",
use_previous_response_id=True,
)
llm.invoke(
[
HumanMessage("Hello"),
AIMessage("Hi there!", response_metadata={"id": "resp_123"}),
HumanMessage("How are you?"),
]
)
.. code-block:: python
llm = ChatOpenAI(model="o4-mini", use_responses_api=True)
llm.invoke([HumanMessage("How are you?")], previous_response_id="resp_123")
Added in version 0.3.26
use_responses_api
class-attribute
instance-attribute
¶
Whether to use the Responses API instead of the Chat API.
If not specified then will be inferred based on invocation params.
Added in version 0.3.9
model_name
class-attribute
instance-attribute
¶
model_name: str = Field(default='grok-4', alias='model')
Model name to use.
xai_api_key
class-attribute
instance-attribute
¶
xai_api_key: Optional[SecretStr] = Field(
alias="api_key",
default_factory=secret_from_env(
"XAI_API_KEY", default=None
),
)
xAI API key.
Automatically read from env variable XAI_API_KEY if not provided.
xai_api_base
class-attribute
instance-attribute
¶
xai_api_base: str = Field(default='https://api.x.ai/v1/')
Base URL path for API requests.
search_parameters
class-attribute
instance-attribute
¶
Parameters for search requests. Example: {"mode": "auto"}.
lc_secrets
property
¶
A map of constructor argument names to secret ids.
For example, {"xai_api_key": "XAI_API_KEY"}
lc_attributes
property
¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
get_name
¶
get_input_schema
¶
get_input_schema(
config: RunnableConfig | None = None,
) -> type[BaseModel]
Get a pydantic model that can be used to validate input to the Runnable.
Runnables that leverage the configurable_fields and
configurable_alternatives methods will have a dynamic input schema that
depends on which configuration the Runnable is invoked with.
This method allows you to get an input schema for a specific configuration.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| config | RunnableConfig \| None | A config to use when generating the schema. | None |
Returns:
| Type | Description |
|---|---|
| type[BaseModel] | A pydantic model that can be used to validate input. |
get_input_jsonschema
¶
Get a JSON schema that represents the input to the Runnable.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| config | RunnableConfig \| None | A config to use when generating the schema. | None |
Returns:
| Type | Description |
|---|---|
| dict[str, Any] | A JSON schema that represents the input to the Runnable. |
Example
Added in version 0.3.0
get_output_schema
¶
get_output_schema(
config: RunnableConfig | None = None,
) -> type[BaseModel]
Get a pydantic model that can be used to validate output to the Runnable.
Runnables that leverage the configurable_fields and
configurable_alternatives methods will have a dynamic output schema that
depends on which configuration the Runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| config | RunnableConfig \| None | A config to use when generating the schema. | None |
Returns:
| Type | Description |
|---|---|
| type[BaseModel] | A pydantic model that can be used to validate output. |
get_output_jsonschema
¶
Get a JSON schema that represents the output of the Runnable.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| config | RunnableConfig \| None | A config to use when generating the schema. | None |
Returns:
| Type | Description |
|---|---|
| dict[str, Any] | A JSON schema that represents the output of the Runnable. |
Example
Added in version 0.3.0
config_schema
¶
The type of config this Runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| include | Sequence[str] \| None | A list of fields to include in the config schema. | None |
Returns:
| Type | Description |
|---|---|
| type[BaseModel] | A pydantic model that can be used to validate config. |
get_config_jsonschema
¶
Get a JSON schema that represents the config of the Runnable.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| include | Sequence[str] \| None | A list of fields to include in the config schema. | None |
Returns:
| Type | Description |
|---|---|
| dict[str, Any] | A JSON schema that represents the config of the Runnable. |
Added in version 0.3.0
get_graph
¶
Return a graph representation of this Runnable.
get_prompts
¶
get_prompts(
config: RunnableConfig | None = None,
) -> list[BasePromptTemplate]
Return a list of prompts used by this Runnable.
__or__
¶
__or__(
other: (
Runnable[Any, Other]
| Callable[[Iterator[Any]], Iterator[Other]]
| Callable[
[AsyncIterator[Any]], AsyncIterator[Other]
]
| Callable[[Any], Other]
| Mapping[
str,
Runnable[Any, Other]
| Callable[[Any], Other]
| Any,
]
),
) -> RunnableSerializable[Input, Other]
Runnable "or" operator.
Compose this Runnable with another object to create a
RunnableSequence.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| other | Runnable[Any, Other] \| Callable[[Iterator[Any]], Iterator[Other]] \| Callable[[AsyncIterator[Any]], AsyncIterator[Other]] \| Callable[[Any], Other] \| Mapping[str, Runnable[Any, Other] \| Callable[[Any], Other] \| Any] | Another Runnable or Runnable-like object. | required |
Returns:
| Type | Description |
|---|---|
| RunnableSerializable[Input, Other] | A new Runnable. |
__ror__
¶
__ror__(
other: (
Runnable[Other, Any]
| Callable[[Iterator[Other]], Iterator[Any]]
| Callable[
[AsyncIterator[Other]], AsyncIterator[Any]
]
| Callable[[Other], Any]
| Mapping[
str,
Runnable[Other, Any]
| Callable[[Other], Any]
| Any,
]
),
) -> RunnableSerializable[Other, Output]
Runnable "reverse-or" operator.
Compose this Runnable with another object to create a
RunnableSequence.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| other | Runnable[Other, Any] \| Callable[[Iterator[Other]], Iterator[Any]] \| Callable[[AsyncIterator[Other]], AsyncIterator[Any]] \| Callable[[Other], Any] \| Mapping[str, Runnable[Other, Any] \| Callable[[Other], Any] \| Any] | Another Runnable or Runnable-like object. | required |
Returns:
| Type | Description |
|---|---|
| RunnableSerializable[Other, Output] | A new Runnable. |
pipe
¶
pipe(
*others: Runnable[Any, Other] | Callable[[Any], Other],
name: str | None = None
) -> RunnableSerializable[Input, Other]
Pipe runnables.
Compose this Runnable with Runnable-like objects to make a
RunnableSequence.
Equivalent to RunnableSequence(self, *others) or self | others[0] | ...
Example
from langchain_core.runnables import RunnableLambda
def add_one(x: int) -> int:
return x + 1
def mul_two(x: int) -> int:
return x * 2
runnable_1 = RunnableLambda(add_one)
runnable_2 = RunnableLambda(mul_two)
sequence = runnable_1.pipe(runnable_2)
# Or equivalently:
# sequence = runnable_1 | runnable_2
# sequence = RunnableSequence(first=runnable_1, last=runnable_2)
sequence.invoke(1)
await sequence.ainvoke(1)
# -> 4
sequence.batch([1, 2, 3])
await sequence.abatch([1, 2, 3])
# -> [4, 6, 8]
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| *others | Runnable[Any, Other] \| Callable[[Any], Other] | Other Runnable or Runnable-like objects to compose with this one. | () |
| name | str \| None | An optional name for the resulting RunnableSequence. | None |
Returns:
| Type | Description |
|---|---|
| RunnableSerializable[Input, Other] | A new Runnable. |
pick
¶
Pick keys from the output dict of this Runnable.
Pick single key:
```python
import json
from langchain_core.runnables import RunnableLambda, RunnableMap
as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)
chain = RunnableMap(str=as_str, json=as_json)
chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3]}
json_only_chain = chain.pick("json")
json_only_chain.invoke("[1, 2, 3]")
# -> [1, 2, 3]
```
Pick list of keys:
```python
from typing import Any
import json
from langchain_core.runnables import RunnableLambda, RunnableMap
as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)
def as_bytes(x: Any) -> bytes:
return bytes(x, "utf-8")
chain = RunnableMap(
str=as_str, json=as_json, bytes=RunnableLambda(as_bytes)
)
chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3], "bytes": b"[1, 2, 3]"}
json_and_bytes_chain = chain.pick(["json", "bytes"])
json_and_bytes_chain.invoke("[1, 2, 3]")
# -> {"json": [1, 2, 3], "bytes": b"[1, 2, 3]"}
```
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| keys | str \| list[str] | A key or list of keys to pick from the output dict. | required |
Returns:
| Type | Description |
|---|---|
| RunnableSerializable[Any, Any] | A new Runnable. |
assign
¶
assign(
**kwargs: (
Runnable[dict[str, Any], Any]
| Callable[[dict[str, Any]], Any]
| Mapping[
str,
Runnable[dict[str, Any], Any]
| Callable[[dict[str, Any]], Any],
]
),
) -> RunnableSerializable[Any, Any]
Assigns new fields to the dict output of this Runnable.
from langchain_community.llms.fake import FakeStreamingListLLM
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import SystemMessagePromptTemplate
from langchain_core.runnables import Runnable
from operator import itemgetter
prompt = (
SystemMessagePromptTemplate.from_template("You are a nice assistant.")
+ "{question}"
)
llm = FakeStreamingListLLM(responses=["foo-lish"])
chain: Runnable = prompt | llm | {"str": StrOutputParser()}
chain_with_assign = chain.assign(hello=itemgetter("str") | llm)
print(chain_with_assign.input_schema.model_json_schema())
# {'title': 'PromptInput', 'type': 'object', 'properties':
#  {'question': {'title': 'Question', 'type': 'string'}}}
print(chain_with_assign.output_schema.model_json_schema())
# {'title': 'RunnableSequenceOutput', 'type': 'object', 'properties':
#  {'str': {'title': 'Str', 'type': 'string'},
#   'hello': {'title': 'Hello', 'type': 'string'}}}
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| **kwargs | Runnable[dict[str, Any], Any] \| Callable[[dict[str, Any]], Any] \| Mapping[str, Runnable[dict[str, Any], Any] \| Callable[[dict[str, Any]], Any]] | A mapping of keys to Runnable or Runnable-like objects that will be invoked with the output dict of this Runnable. | {} |
Returns:
| Type | Description |
|---|---|
| RunnableSerializable[Any, Any] | A new Runnable. |
batch
¶
batch(
inputs: list[Input],
config: (
RunnableConfig | list[RunnableConfig] | None
) = None,
*,
return_exceptions: bool = False,
**kwargs: Any | None
) -> list[Output]
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying Runnable uses an API which supports a batch mode.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| inputs | list[Input] | A list of inputs to the Runnable. | required |
| config | RunnableConfig \| list[RunnableConfig] \| None | A config to use when invoking the Runnable. | None |
| return_exceptions | bool | Whether to return exceptions instead of raising them. Defaults to False. | False |
| **kwargs | Any \| None | Additional keyword arguments to pass to the Runnable. | {} |
Returns:
| Type | Description |
|---|---|
| list[Output] | A list of outputs from the Runnable. |
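A minimal sketch; inputs run in parallel threads and outputs preserve input order:
.. code-block:: python
inputs = [
    [("human", "Translate to French: I love programming.")],
    [("human", "Translate to French: I love music.")],
]
# `max_concurrency` caps how many invocations run at once.
results = llm.batch(inputs, config={"max_concurrency": 2})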
batch_as_completed
¶
batch_as_completed(
inputs: Sequence[Input],
config: (
RunnableConfig | Sequence[RunnableConfig] | None
) = None,
*,
return_exceptions: bool = False,
**kwargs: Any | None
) -> Iterator[tuple[int, Output | Exception]]
Run invoke in parallel on a list of inputs.
Yields results as they complete.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| inputs | Sequence[Input] | A list of inputs to the Runnable. | required |
| config | RunnableConfig \| Sequence[RunnableConfig] \| None | A config to use when invoking the Runnable. | None |
| return_exceptions | bool | Whether to return exceptions instead of raising them. Defaults to False. | False |
| **kwargs | Any \| None | Additional keyword arguments to pass to the Runnable. | {} |
Yields:
| Type | Description |
|---|---|
| tuple[int, Output \| Exception] | Tuples of the index of the input and the output from the Runnable. |
abatch
async
¶
abatch(
inputs: list[Input],
config: (
RunnableConfig | list[RunnableConfig] | None
) = None,
*,
return_exceptions: bool = False,
**kwargs: Any | None
) -> list[Output]
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying Runnable uses an API which supports a batch mode.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| inputs | list[Input] | A list of inputs to the Runnable. | required |
| config | RunnableConfig \| list[RunnableConfig] \| None | A config to use when invoking the Runnable. | None |
| return_exceptions | bool | Whether to return exceptions instead of raising them. Defaults to False. | False |
| **kwargs | Any \| None | Additional keyword arguments to pass to the Runnable. | {} |
Returns:
| Type | Description |
|---|---|
| list[Output] | A list of outputs from the Runnable. |
abatch_as_completed
async
¶
abatch_as_completed(
inputs: Sequence[Input],
config: (
RunnableConfig | Sequence[RunnableConfig] | None
) = None,
*,
return_exceptions: bool = False,
**kwargs: Any | None
) -> AsyncIterator[tuple[int, Output | Exception]]
Run ainvoke in parallel on a list of inputs.
Yields results as they complete.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| inputs | Sequence[Input] | A list of inputs to the Runnable. | required |
| config | RunnableConfig \| Sequence[RunnableConfig] \| None | A config to use when invoking the Runnable. | None |
| return_exceptions | bool | Whether to return exceptions instead of raising them. Defaults to False. | False |
| kwargs | Any \| None | Additional keyword arguments to pass to the Runnable. | {} |
Yields:
| Type | Description |
|---|---|
| AsyncIterator[tuple[int, Output \| Exception]] | A tuple of the index of the input and the output from the Runnable. |
astream_log
async
¶
astream_log(
input: Any,
config: RunnableConfig | None = None,
*,
diff: bool = True,
with_streamed_output_list: bool = True,
include_names: Sequence[str] | None = None,
include_types: Sequence[str] | None = None,
include_tags: Sequence[str] | None = None,
exclude_names: Sequence[str] | None = None,
exclude_types: Sequence[str] | None = None,
exclude_tags: Sequence[str] | None = None,
**kwargs: Any
) -> AsyncIterator[RunLogPatch] | AsyncIterator[RunLog]
Stream all output from a Runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of Jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run.
The Jsonpatch ops can be applied in order to construct state.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| input | Any | The input to the Runnable. | required |
| config | RunnableConfig \| None | The config to use for the Runnable. | None |
| diff | bool | Whether to yield diffs between each step or the current state. | True |
| with_streamed_output_list | bool | Whether to yield the streamed_output list. | True |
| include_names | Sequence[str] \| None | Only include logs with these names. | None |
| include_types | Sequence[str] \| None | Only include logs with these types. | None |
| include_tags | Sequence[str] \| None | Only include logs with these tags. | None |
| exclude_names | Sequence[str] \| None | Exclude logs with these names. | None |
| exclude_types | Sequence[str] \| None | Exclude logs with these types. | None |
| exclude_tags | Sequence[str] \| None | Exclude logs with these tags. | None |
| kwargs | Any | Additional keyword arguments to pass to the Runnable. | {} |
Yields:
| Type | Description |
|---|---|
| AsyncIterator[RunLogPatch] \| AsyncIterator[RunLog] | A RunLogPatch or RunLog object. |
astream_events
async
¶
astream_events(
input: Any,
config: RunnableConfig | None = None,
*,
version: Literal["v1", "v2"] = "v2",
include_names: Sequence[str] | None = None,
include_types: Sequence[str] | None = None,
include_tags: Sequence[str] | None = None,
exclude_names: Sequence[str] | None = None,
exclude_types: Sequence[str] | None = None,
exclude_tags: Sequence[str] | None = None,
**kwargs: Any
) -> AsyncIterator[StreamEvent]
Generate a stream of events.
Use to create an iterator over StreamEvents that provide real-time information
about the progress of the Runnable, including StreamEvents from intermediate
results.
A StreamEvent is a dictionary with the following schema:
- event: str - Event names are of the format: on_[runnable_type]_(start|stream|end).
- name: str - The name of the Runnable that generated the event.
- run_id: str - Randomly generated ID associated with the given execution of the Runnable that emitted the event. A child Runnable that gets invoked as part of the execution of a parent Runnable is assigned its own unique ID.
- parent_ids: list[str] - The IDs of the parent runnables that generated the event. The root Runnable will have an empty list. The order of the parent IDs is from the root to the immediate parent. Only available for v2 version of the API. The v1 version of the API will return an empty list.
- tags: Optional[list[str]] - The tags of the Runnable that generated the event.
- metadata: Optional[dict[str, Any]] - The metadata of the Runnable that generated the event.
- data: dict[str, Any]
Below is a table that illustrates some events that might be emitted by various chains. Metadata fields have been omitted from the table for brevity. Chain definitions have been included after the table.
Note
This reference table is for the v2 version of the schema.
| event | name | chunk | input | output |
|---|---|---|---|---|
| on_chat_model_start | [model name] | | {"messages": [[SystemMessage, HumanMessage]]} | |
| on_chat_model_stream | [model name] | AIMessageChunk(content="hello") | | |
| on_chat_model_end | [model name] | | {"messages": [[SystemMessage, HumanMessage]]} | AIMessageChunk(content="hello world") |
| on_llm_start | [model name] | | {'input': 'hello'} | |
| on_llm_stream | [model name] | 'Hello' | | |
| on_llm_end | [model name] | | 'Hello human!' | |
| on_chain_start | format_docs | | | |
| on_chain_stream | format_docs | 'hello world!, goodbye world!' | | |
| on_chain_end | format_docs | | [Document(...)] | 'hello world!, goodbye world!' |
| on_tool_start | some_tool | | {"x": 1, "y": "2"} | |
| on_tool_end | some_tool | | | {"x": 1, "y": "2"} |
| on_retriever_start | [retriever name] | | {"query": "hello"} | |
| on_retriever_end | [retriever name] | | {"query": "hello"} | [Document(...), ..] |
| on_prompt_start | [template_name] | | {"question": "hello"} | |
| on_prompt_end | [template_name] | | {"question": "hello"} | ChatPromptValue(messages: [SystemMessage, ...]) |
In addition to the standard events, users can also dispatch custom events (see example below).
Custom events will only be surfaced in the v2 version of the API!
A custom event has the following format:
| Attribute | Type | Description |
|---|---|---|
| name | str | A user defined name for the event. |
| data | Any | The data associated with the event. This can be anything, though we suggest making it JSON serializable. |
Here are declarations associated with the standard events shown above:
format_docs:
def format_docs(docs: list[Document]) -> str:
'''Format the docs.'''
return ", ".join([doc.page_content for doc in docs])
format_docs = RunnableLambda(format_docs)
some_tool:
prompt:
template = ChatPromptTemplate.from_messages(
[
("system", "You are Cat Agent 007"),
("human", "{question}"),
]
).with_config({"run_name": "my_template", "tags": ["my_template"]})
from langchain_core.runnables import RunnableLambda
async def reverse(s: str) -> str:
return s[::-1]
chain = RunnableLambda(func=reverse)
events = [event async for event in chain.astream_events("hello", version="v2")]
# will produce the following events (run_id and parent_ids
# have been omitted for brevity):
[
{
"data": {"input": "hello"},
"event": "on_chain_start",
"metadata": {},
"name": "reverse",
"tags": [],
},
{
"data": {"chunk": "olleh"},
"event": "on_chain_stream",
"metadata": {},
"name": "reverse",
"tags": [],
},
{
"data": {"output": "olleh"},
"event": "on_chain_end",
"metadata": {},
"name": "reverse",
"tags": [],
},
]
Example: Dispatch Custom Event
from langchain_core.callbacks.manager import (
adispatch_custom_event,
)
from langchain_core.runnables import RunnableLambda, RunnableConfig
import asyncio
async def slow_thing(some_input: str, config: RunnableConfig) -> str:
"""Do something that takes a long time."""
await asyncio.sleep(1) # Placeholder for some slow operation
await adispatch_custom_event(
"progress_event",
{"message": "Finished step 1 of 3"},
config=config # Must be included for python < 3.10
)
await asyncio.sleep(1) # Placeholder for some slow operation
await adispatch_custom_event(
"progress_event",
{"message": "Finished step 2 of 3"},
config=config # Must be included for python < 3.10
)
await asyncio.sleep(1) # Placeholder for some slow operation
return "Done"
slow_thing = RunnableLambda(slow_thing)
async for event in slow_thing.astream_events("some_input", version="v2"):
print(event)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| input | Any | The input to the Runnable. | required |
| config | RunnableConfig \| None | The config to use for the Runnable. | None |
| version | Literal['v1', 'v2'] | The version of the schema to use, either 'v2' or 'v1'. | 'v2' |
| include_names | Sequence[str] \| None | Only include events from Runnables with matching names. | None |
| include_types | Sequence[str] \| None | Only include events from Runnables with matching types. | None |
| include_tags | Sequence[str] \| None | Only include events from Runnables with matching tags. | None |
| exclude_names | Sequence[str] \| None | Exclude events from Runnables with matching names. | None |
| exclude_types | Sequence[str] \| None | Exclude events from Runnables with matching types. | None |
| exclude_tags | Sequence[str] \| None | Exclude events from Runnables with matching tags. | None |
| kwargs | Any | Additional keyword arguments to pass to the Runnable. | {} |
Yields:
| Type | Description |
|---|---|
| AsyncIterator[StreamEvent] | An async stream of StreamEvent. |
Raises:
| Type | Description |
|---|---|
| NotImplementedError | If the version is not 'v1' or 'v2'. |
transform
¶
transform(
input: Iterator[Input],
config: RunnableConfig | None = None,
**kwargs: Any | None
) -> Iterator[Output]
Transform inputs to outputs.
Default implementation of transform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while input is still being generated.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| input | Iterator[Input] | An iterator of inputs to the Runnable. | required |
| config | RunnableConfig \| None | The config to use for the Runnable. | None |
| kwargs | Any \| None | Additional keyword arguments to pass to the Runnable. | {} |
Yields:
| Type | Description |
|---|---|
| Output | The output of the Runnable. |
atransform
async
¶
atransform(
input: AsyncIterator[Input],
config: RunnableConfig | None = None,
**kwargs: Any | None
) -> AsyncIterator[Output]
Transform inputs to outputs.
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while input is still being generated.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| input | AsyncIterator[Input] | An async iterator of inputs to the Runnable. | required |
| config | RunnableConfig \| None | The config to use for the Runnable. | None |
| kwargs | Any \| None | Additional keyword arguments to pass to the Runnable. | {} |
Yields:
| Type | Description |
|---|---|
| AsyncIterator[Output] | The output of the Runnable. |
bind
¶
bind(**kwargs: Any) -> Runnable[Input, Output]
Bind arguments to a Runnable, returning a new Runnable.
Useful when a Runnable in a chain requires an argument that is not
in the output of the previous Runnable or included in the user input.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| kwargs | Any | The arguments to bind to the Runnable. | {} |
Returns:
| Type | Description |
|---|---|
| Runnable[Input, Output] | A new Runnable with the arguments bound. |
Example
from langchain_ollama import ChatOllama
from langchain_core.output_parsers import StrOutputParser
llm = ChatOllama(model="llama3.1")
# Without bind.
chain = llm | StrOutputParser()
chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two three four five.'
# With bind.
chain = llm.bind(stop=["three"]) | StrOutputParser()
chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two'
with_config
¶
with_config(
config: RunnableConfig | None = None, **kwargs: Any
) -> Runnable[Input, Output]
Bind config to a Runnable, returning a new Runnable.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| config | RunnableConfig \| None | The config to bind to the Runnable. | None |
| kwargs | Any | Additional keyword arguments to pass to the Runnable. | {} |
Returns:
| Type | Description |
|---|---|
| Runnable[Input, Output] | A new Runnable with the config bound. |
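A minimal sketch attaching tags and a run name that will show up in tracing and callbacks:
.. code-block:: python
configured_llm = llm.with_config(
    {"tags": ["translation"], "run_name": "french-translator"}
)
configured_llm.invoke(messages)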
with_listeners
¶
with_listeners(
*,
on_start: (
Callable[[Run], None]
| Callable[[Run, RunnableConfig], None]
| None
) = None,
on_end: (
Callable[[Run], None]
| Callable[[Run, RunnableConfig], None]
| None
) = None,
on_error: (
Callable[[Run], None]
| Callable[[Run, RunnableConfig], None]
| None
) = None
) -> Runnable[Input, Output]
Bind lifecycle listeners to a Runnable, returning a new Runnable.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and
any tags or metadata added to the run.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| on_start | Callable[[Run], None] \| Callable[[Run, RunnableConfig], None] \| None | Called before the Runnable starts running, with the Run object. | None |
| on_end | Callable[[Run], None] \| Callable[[Run, RunnableConfig], None] \| None | Called after the Runnable finishes running, with the Run object. | None |
| on_error | Callable[[Run], None] \| Callable[[Run, RunnableConfig], None] \| None | Called if the Runnable throws an error, with the Run object. | None |
Returns:
| Type | Description |
|---|---|
| Runnable[Input, Output] | A new Runnable with the listeners bound. |
Example
from langchain_core.runnables import RunnableLambda
from langchain_core.tracers.schemas import Run
import time
def test_runnable(time_to_sleep: int):
time.sleep(time_to_sleep)
def fn_start(run_obj: Run):
print("start_time:", run_obj.start_time)
def fn_end(run_obj: Run):
print("end_time:", run_obj.end_time)
chain = RunnableLambda(test_runnable).with_listeners(
on_start=fn_start, on_end=fn_end
)
chain.invoke(2)
with_alisteners
¶
with_alisteners(
*,
on_start: AsyncListener | None = None,
on_end: AsyncListener | None = None,
on_error: AsyncListener | None = None
) -> Runnable[Input, Output]
Bind async lifecycle listeners to a Runnable.
Returns a new Runnable.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and
any tags or metadata added to the run.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| on_start | AsyncListener \| None | Called asynchronously before the Runnable starts running, with the Run object. | None |
| on_end | AsyncListener \| None | Called asynchronously after the Runnable finishes running, with the Run object. | None |
| on_error | AsyncListener \| None | Called asynchronously if the Runnable throws an error, with the Run object. | None |
Returns:
| Type | Description |
|---|---|
| Runnable[Input, Output] | A new Runnable with the listeners bound. |
Example
```python
import asyncio
import time
from datetime import datetime, timezone

from langchain_core.runnables import RunnableLambda
from langchain_core.tracers.schemas import Run


def format_t(timestamp: float) -> str:
    return datetime.fromtimestamp(timestamp, tz=timezone.utc).isoformat()


async def test_runnable(time_to_sleep: int):
    print(f"Runnable[{time_to_sleep}s]: starts at {format_t(time.time())}")
    await asyncio.sleep(time_to_sleep)
    print(f"Runnable[{time_to_sleep}s]: ends at {format_t(time.time())}")


# Listeners receive the Run object, not the Runnable itself.
async def fn_start(run_obj: Run):
    print(f"on start callback starts at {format_t(time.time())}")
    await asyncio.sleep(3)
    print(f"on start callback ends at {format_t(time.time())}")


async def fn_end(run_obj: Run):
    print(f"on end callback starts at {format_t(time.time())}")
    await asyncio.sleep(2)
    print(f"on end callback ends at {format_t(time.time())}")


runnable = RunnableLambda(test_runnable).with_alisteners(
    on_start=fn_start,
    on_end=fn_end,
)


async def concurrent_runs():
    await asyncio.gather(runnable.ainvoke(2), runnable.ainvoke(3))


asyncio.run(concurrent_runs())
```
Result:
```
on start callback starts at 2025-03-01T07:05:22.875378+00:00
on start callback starts at 2025-03-01T07:05:22.875495+00:00
on start callback ends at 2025-03-01T07:05:25.878862+00:00
on start callback ends at 2025-03-01T07:05:25.878947+00:00
Runnable[2s]: starts at 2025-03-01T07:05:25.879392+00:00
Runnable[3s]: starts at 2025-03-01T07:05:25.879804+00:00
Runnable[2s]: ends at 2025-03-01T07:05:27.881998+00:00
on end callback starts at 2025-03-01T07:05:27.882360+00:00
Runnable[3s]: ends at 2025-03-01T07:05:28.881737+00:00
on end callback starts at 2025-03-01T07:05:28.882428+00:00
on end callback ends at 2025-03-01T07:05:29.883893+00:00
on end callback ends at 2025-03-01T07:05:30.884831+00:00
```
with_types
¶
with_types(
*,
input_type: type[Input] | None = None,
output_type: type[Output] | None = None
) -> Runnable[Input, Output]
Bind input and output types to a Runnable, returning a new Runnable.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| input_type | type[Input] \| None | The input type to bind to the Runnable. | None |
| output_type | type[Output] \| None | The output type to bind to the Runnable. | None |
Returns:
| Type | Description |
|---|---|
| Runnable[Input, Output] | A new Runnable with the types bound. |
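Example
A minimal sketch; the lambda and the int/str types are illustrative.
```python
from langchain_core.runnables import RunnableLambda

# The lambda accepts anything; binding types pins the declared
# input/output schemas without changing runtime behavior.
runnable = RunnableLambda(lambda x: str(x)).with_types(
    input_type=int, output_type=str
)
runnable.invoke(7)  # -> '7'
```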
with_retry
¶
with_retry(
*,
retry_if_exception_type: tuple[
type[BaseException], ...
] = (Exception,),
wait_exponential_jitter: bool = True,
exponential_jitter_params: (
ExponentialJitterParams | None
) = None,
stop_after_attempt: int = 3
) -> Runnable[Input, Output]
Create a new Runnable that retries the original Runnable on exceptions.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| retry_if_exception_type | tuple[type[BaseException], ...] | A tuple of exception types to retry on. | (Exception,) |
| wait_exponential_jitter | bool | Whether to add jitter to the wait time between retries. | True |
| stop_after_attempt | int | The maximum number of attempts to make before giving up. | 3 |
| exponential_jitter_params | ExponentialJitterParams \| None | Parameters for tenacity.wait_exponential_jitter, namely initial, max, exp_base, and jitter (all float values). | None |
Returns:
| Type | Description |
|---|---|
| Runnable[Input, Output] | A new Runnable that retries the original Runnable on exceptions. |
Example
```python
from langchain_core.runnables import RunnableLambda

count = 0


def _lambda(x: int) -> None:
    global count
    count = count + 1
    if x == 1:
        raise ValueError("x is 1")
    else:
        pass


runnable = RunnableLambda(_lambda)
try:
    runnable.with_retry(
        stop_after_attempt=2,
        retry_if_exception_type=(ValueError,),
    ).invoke(1)
except ValueError:
    pass

assert count == 2
```
map
¶
map() -> Runnable[list[Input], list[Output]]
Return a new Runnable that maps a list of inputs to a list of outputs by calling invoke with each input.
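Example
A minimal sketch of mapping over a list of inputs.
```python
from langchain_core.runnables import RunnableLambda

runnable = RunnableLambda(lambda x: x + 1)
# map() lifts the Runnable to operate element-wise on lists.
runnable.map().invoke([1, 2, 3])  # -> [2, 3, 4]
```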
with_fallbacks
¶
with_fallbacks(
fallbacks: Sequence[Runnable[Input, Output]],
*,
exceptions_to_handle: tuple[
type[BaseException], ...
] = (Exception,),
exception_key: str | None = None
) -> RunnableWithFallbacks[Input, Output]
Add fallbacks to a Runnable, returning a new Runnable.
The new Runnable will try the original Runnable, and then each fallback
in order, upon failures.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| fallbacks | Sequence[Runnable[Input, Output]] | A sequence of runnables to try if the original Runnable fails. | required |
| exceptions_to_handle | tuple[type[BaseException], ...] | A tuple of exception types to handle. | (Exception,) |
| exception_key | str \| None | If a string is specified, handled exceptions will be passed to fallbacks as part of the input under the specified key. If None, exceptions will not be passed to fallbacks. If used, the base Runnable and its fallbacks must accept a dictionary as input. | None |
Returns:
| Type | Description |
|---|---|
| RunnableWithFallbacks[Input, Output] | A new Runnable that will try the original Runnable, and then each fallback in order, upon failures. |
Example
```python
from typing import Iterator

from langchain_core.runnables import RunnableGenerator


def _generate_immediate_error(input: Iterator) -> Iterator[str]:
    raise ValueError()
    yield ""


def _generate(input: Iterator) -> Iterator[str]:
    yield from "foo bar"


runnable = RunnableGenerator(_generate_immediate_error).with_fallbacks(
    [RunnableGenerator(_generate)]
)
print("".join(runnable.stream({})))  # foo bar
```
as_tool
¶
as_tool(
args_schema: type[BaseModel] | None = None,
*,
name: str | None = None,
description: str | None = None,
arg_types: dict[str, type] | None = None
) -> BaseTool
Create a BaseTool from a Runnable.
as_tool will instantiate a BaseTool with a name, description, and
args_schema from a Runnable. Where possible, schemas are inferred
from runnable.get_input_schema. Alternatively (e.g., if the
Runnable takes a dict as input and the specific dict keys are not typed),
the schema can be specified directly with args_schema. You can also
pass arg_types to just specify the required arguments and their types.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| args_schema | type[BaseModel] \| None | The schema for the tool. Defaults to None. | None |
| name | str \| None | The name of the tool. Defaults to None. | None |
| description | str \| None | The description of the tool. Defaults to None. | None |
| arg_types | dict[str, type] \| None | A dictionary of argument names to types. Defaults to None. | None |
Returns:
| Type | Description |
|---|---|
| BaseTool | A BaseTool instance. |
Typed dict input:
```python
from typing_extensions import TypedDict

from langchain_core.runnables import RunnableLambda


class Args(TypedDict):
    a: int
    b: list[int]


def f(x: Args) -> str:
    return str(x["a"] * max(x["b"]))


runnable = RunnableLambda(f)
as_tool = runnable.as_tool()
as_tool.invoke({"a": 3, "b": [1, 2]})
```
dict input, specifying schema via args_schema:
```python
from typing import Any

from pydantic import BaseModel, Field

from langchain_core.runnables import RunnableLambda


def f(x: dict[str, Any]) -> str:
    return str(x["a"] * max(x["b"]))


class FSchema(BaseModel):
    """Apply a function to an integer and list of integers."""

    a: int = Field(..., description="Integer")
    b: list[int] = Field(..., description="List of ints")


runnable = RunnableLambda(f)
as_tool = runnable.as_tool(FSchema)
as_tool.invoke({"a": 3, "b": [1, 2]})
```
dict input, specifying schema via arg_types:
```python
from typing import Any

from langchain_core.runnables import RunnableLambda


def f(x: dict[str, Any]) -> str:
    return str(x["a"] * max(x["b"]))


runnable = RunnableLambda(f)
as_tool = runnable.as_tool(arg_types={"a": int, "b": list[int]})
as_tool.invoke({"a": 3, "b": [1, 2]})
```
String input:
```python
from langchain_core.runnables import RunnableLambda


def f(x: str) -> str:
    return x + "a"


def g(x: str) -> str:
    return x + "z"


runnable = RunnableLambda(f) | g
as_tool = runnable.as_tool()
as_tool.invoke("b")
```
Added in version 0.2.14
lc_id
classmethod
¶
Return a unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
For example, for the class langchain.llms.openai.OpenAI, the id is
["langchain", "llms", "openai", "OpenAI"].
to_json
¶
Serialize the Runnable to JSON.
Returns:
| Type | Description |
|---|---|
| SerializedConstructor \| SerializedNotImplemented | A JSON-serializable representation of the Runnable. |
to_json_not_implemented
¶
Serialize a "not implemented" object.
Returns:
| Type | Description |
|---|---|
| SerializedNotImplemented | SerializedNotImplemented. |
configurable_fields
¶
Configure particular Runnable fields at runtime.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| **kwargs | AnyConfigurableField | A dictionary of ConfigurableField instances to configure. | {} |
Raises:
| Type | Description |
|---|---|
| ValueError | If a configuration key is not found in the Runnable. |
Returns:
| Type | Description |
|---|---|
| RunnableSerializable[Input, Output] | A new Runnable with the fields configured. |
```python
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatOpenAI(max_tokens=20).configurable_fields(
    max_tokens=ConfigurableField(
        id="output_token_number",
        name="Max tokens in the output",
        description="The maximum number of tokens in the output",
    )
)

# max_tokens = 20
print("max_tokens_20: ", model.invoke("tell me something about chess").content)

# max_tokens = 200
print(
    "max_tokens_200: ",
    model.with_config(configurable={"output_token_number": 200})
    .invoke("tell me something about chess")
    .content,
)
```
configurable_alternatives
¶
configurable_alternatives(
which: ConfigurableField,
*,
default_key: str = "default",
prefix_keys: bool = False,
**kwargs: (
Runnable[Input, Output]
| Callable[[], Runnable[Input, Output]]
)
) -> RunnableSerializable[Input, Output]
Configure alternatives for Runnables that can be set at runtime.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| which | ConfigurableField | The ConfigurableField instance that will be used to select the alternative. | required |
| default_key | str | The default key to use if no alternative is selected. Defaults to 'default'. | 'default' |
| prefix_keys | bool | Whether to prefix the keys with the ConfigurableField id. | False |
| **kwargs | Runnable[Input, Output] \| Callable[[], Runnable[Input, Output]] | A dictionary of keys to Runnable instances or callables that return Runnable instances. | {} |
Returns:
| Type | Description |
|---|---|
| RunnableSerializable[Input, Output] | A new Runnable with the alternatives configured. |
```python
from langchain_anthropic import ChatAnthropic
from langchain_core.runnables.utils import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatAnthropic(
    model_name="claude-3-7-sonnet-20250219"
).configurable_alternatives(
    ConfigurableField(id="llm"),
    default_key="anthropic",
    openai=ChatOpenAI(),
)

# uses the default model ChatAnthropic
print(model.invoke("which organization created you?").content)

# uses ChatOpenAI
print(
    model.with_config(configurable={"llm": "openai"})
    .invoke("which organization created you?")
    .content
)
```
set_verbose
¶
If verbose is None, set it. This allows users to pass in None as verbose to access the global setting.
get_token_ids
¶
Get the token IDs present in the text, using the tiktoken package.
get_num_tokens
¶
Get the number of tokens present in the text. Useful for checking if an input fits in a model's context window.
get_num_tokens_from_messages
¶
get_num_tokens_from_messages(
messages: Sequence[BaseMessage],
tools: Optional[
Sequence[
Union[dict[str, Any], type, Callable, BaseTool]
]
] = None,
) -> int
Calculate the number of tokens for gpt-3.5-turbo and gpt-4 with the tiktoken package.
Requirements: counting image tokens requires the pillow package if the image is
specified as a base64 string, and both pillow and httpx if the image is specified
as a URL. If these aren't installed, image inputs will be ignored in token counting.
OpenAI reference <https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb>__
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| messages | Sequence[BaseMessage] | The message inputs to tokenize. | required |
| tools | Optional[Sequence[Union[dict[str, Any], type, Callable, BaseTool]]] | If provided, sequence of dict, BaseModel, function, or BaseTools to be converted to tool schemas. | None |
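Example
A minimal sketch; counts are computed locally with tiktoken, so treat them as estimates rather than billed usage.
```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_xai import ChatXAI

llm = ChatXAI(model="grok-4")
messages = [
    SystemMessage("You are a helpful assistant."),
    HumanMessage("How many tokens is this?"),
]
# Counted locally with tiktoken; an approximation of provider-side usage.
print(llm.get_num_tokens_from_messages(messages))
```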
generate
¶
generate(
messages: list[list[BaseMessage]],
stop: list[str] | None = None,
callbacks: Callbacks = None,
*,
tags: list[str] | None = None,
metadata: dict[str, Any] | None = None,
run_name: str | None = None,
run_id: UUID | None = None,
**kwargs: Any
) -> LLMResult
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you:
- want to take advantage of batched calls,
- need more output from the model than just the top generated value,
- are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| messages | list[list[BaseMessage]] | List of lists of messages. | required |
| stop | list[str] \| None | Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. | None |
| callbacks | Callbacks | Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. | None |
| tags | list[str] \| None | The tags to apply. | None |
| metadata | dict[str, Any] \| None | The metadata to apply. | None |
| run_name | str \| None | The name of the run. | None |
| run_id | UUID \| None | The ID of the run. | None |
| **kwargs | Any | Arbitrary additional keyword arguments. These are usually passed to the model provider API call. | {} |
Returns:
| Type | Description |
|---|---|
| LLMResult | An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. |
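Example
A minimal sketch; the prompts are illustrative.
```python
from langchain_core.messages import HumanMessage
from langchain_xai import ChatXAI

llm = ChatXAI(model="grok-4")
# One list of messages per prompt; generations come back per prompt.
result = llm.generate(
    [[HumanMessage("Say hello.")], [HumanMessage("Say goodbye.")]]
)
for generations in result.generations:
    print(generations[0].text)
```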
agenerate
async
¶
agenerate(
messages: list[list[BaseMessage]],
stop: list[str] | None = None,
callbacks: Callbacks = None,
*,
tags: list[str] | None = None,
metadata: dict[str, Any] | None = None,
run_name: str | None = None,
run_id: UUID | None = None,
**kwargs: Any
) -> LLMResult
Asynchronously pass a sequence of prompts to a model and return generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you:
- want to take advantage of batched calls,
- need more output from the model than just the top generated value,
- are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| messages | list[list[BaseMessage]] | List of lists of messages. | required |
| stop | list[str] \| None | Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. | None |
| callbacks | Callbacks | Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. | None |
| tags | list[str] \| None | The tags to apply. | None |
| metadata | dict[str, Any] \| None | The metadata to apply. | None |
| run_name | str \| None | The name of the run. | None |
| run_id | UUID \| None | The ID of the run. | None |
| **kwargs | Any | Arbitrary additional keyword arguments. These are usually passed to the model provider API call. | {} |
Returns:
| Type | Description |
|---|---|
| LLMResult | An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. |
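Example
A minimal async sketch mirroring generate; the prompts are illustrative.
```python
import asyncio

from langchain_core.messages import HumanMessage
from langchain_xai import ChatXAI


async def main() -> None:
    llm = ChatXAI(model="grok-4")
    result = await llm.agenerate(
        [[HumanMessage("Say hello.")], [HumanMessage("Say goodbye.")]]
    )
    for generations in result.generations:
        print(generations[0].text)


asyncio.run(main())
```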
bind_tools
¶
bind_tools(
tools: Sequence[
Union[dict[str, Any], type, Callable, BaseTool]
],
*,
tool_choice: Optional[
Union[
dict,
str,
Literal["auto", "none", "required", "any"],
bool,
]
] = None,
strict: Optional[bool] = None,
parallel_tool_calls: Optional[bool] = None,
**kwargs: Any
) -> Runnable[LanguageModelInput, AIMessage]
Bind tool-like objects to this chat model.
Assumes model is compatible with OpenAI tool-calling API.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| tools | Sequence[Union[dict[str, Any], type, Callable, BaseTool]] | A list of tool definitions to bind to this chat model. Supports any tool definition handled by langchain_core.utils.function_calling.convert_to_openai_tool. | required |
| tool_choice | Optional[Union[dict, str, Literal['auto', 'none', 'required', 'any'], bool]] | Which tool to require the model to call. Options are: the name of a tool (str), 'auto' to automatically select a tool (including no tool), 'none' to call no tool, 'any' / 'required' / True to force at least one tool call, a dict of the form {"type": "function", "function": {"name": <tool_name>}}, or False / None for the default behavior. | None |
| strict | Optional[bool] | If True, model output is guaranteed to exactly match the JSON Schema provided in the tool definition, and the input schema will also be validated against the provider's supported schemas. | None |
| parallel_tool_calls | Optional[bool] | Set to False to disable parallel tool use. Defaults to None (no specification, which allows parallel tool use). | None |
| kwargs | Any | Any additional parameters are passed directly to the underlying bind() call. | {} |
Behavior changed in 0.1.21
Support for strict argument added.
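Example
A minimal sketch; the GetWeather tool is an illustrative stand-in for a real tool definition.
```python
from pydantic import BaseModel, Field

from langchain_xai import ChatXAI


class GetWeather(BaseModel):
    """Get the current weather in a given location."""

    location: str = Field(..., description="City and state, e.g. San Francisco, CA")


llm = ChatXAI(model="grok-4")
llm_with_tools = llm.bind_tools([GetWeather])
msg = llm_with_tools.invoke("What's the weather in Paris?")
print(msg.tool_calls)  # parsed tool calls, if the model chose to call one
```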
build_extra
classmethod
¶
Build extra kwargs from additional params that were passed in.
validate_temperature
classmethod
¶
Validate the temperature parameter for different models:
- o1 models only allow temperature=1
- gpt-5 models (excluding gpt-5-chat) only allow temperature=1 or unset (defaults to 1)
get_lc_namespace
classmethod
¶
Get the namespace of the langchain object.
is_lc_serializable
classmethod
¶
is_lc_serializable() -> bool
Return whether this model can be serialized by LangChain.
validate_environment
¶
Validate that the API key and Python package exist in the environment.
with_structured_output
¶
with_structured_output(
schema: Optional[_DictOrPydanticClass] = None,
*,
method: Literal[
"function_calling", "json_mode", "json_schema"
] = "function_calling",
include_raw: bool = False,
strict: Optional[bool] = None,
**kwargs: Any
) -> Runnable[LanguageModelInput, _DictOrPydantic]
Model wrapper that returns outputs formatted to match the given schema.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| schema | Optional[_DictOrPydanticClass] | The output schema. Can be passed in as an OpenAI function/tool schema, a JSON Schema, a TypedDict class, or a Pydantic class. If schema is a Pydantic class, the model output will be a validated instance of that class; otherwise the output will be a dict. | None |
| method | Literal['function_calling', 'json_mode', 'json_schema'] | The method for steering model generation: 'function_calling' uses tool calling, 'json_mode' uses the provider's JSON mode, and 'json_schema' uses structured output against the given schema. | 'function_calling' |
| include_raw | bool | If False, only the parsed structured output is returned and parsing errors are raised. If True, both the raw model response and the parsed output are returned, and parsing errors are caught and returned under a 'parsing_error' key. | False |
| strict | Optional[bool] | Whether to enforce strict adherence to the schema. If True, model output is guaranteed to exactly match the schema. | None |
| kwargs | Any | Additional keyword args aren't supported. | {} |
Returns:
| Type | Description |
|---|---|
| Runnable[LanguageModelInput, _DictOrPydantic] | A Runnable that takes the same inputs as the chat model. If include_raw is False and schema is a Pydantic class, the Runnable outputs an instance of schema; otherwise, if include_raw is False, it outputs a dict. If include_raw is True, it outputs a dict with keys 'raw' (the BaseMessage), 'parsed' (None if there was a parsing error, otherwise as described above), and 'parsing_error' (an Optional[BaseException]). |
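Example
A minimal sketch; the Joke schema is illustrative.
```python
from pydantic import BaseModel, Field

from langchain_xai import ChatXAI


class Joke(BaseModel):
    """A joke with its setup and punchline."""

    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline of the joke")


llm = ChatXAI(model="grok-4")
structured_llm = llm.with_structured_output(Joke)
structured_llm.invoke("Tell me a joke about cats")
# -> Joke(setup='...', punchline='...')
```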