ChatAnthropic¶
Reference docs
This page contains reference documentation for ChatAnthropic. See
the docs
for conceptual guides, tutorials, and examples on using ChatAnthropic.
langchain_anthropic.chat_models.ChatAnthropic
¶
Bases: BaseChatModel
Anthropic chat models.
See Anthropic's docs for a list of the latest models.
Setup
Install langchain-anthropic and set environment variable ANTHROPIC_API_KEY.
Key init args — completion params:
model:
Name of Anthropic model to use. e.g. 'claude-3-7-sonnet-20250219'.
temperature:
Sampling temperature. Ranges from 0.0 to 1.0.
max_tokens:
Max number of tokens to generate.
Key init args — client params:
timeout:
Timeout for requests.
anthropic_proxy:
Proxy to use for the Anthropic clients; it is used for every API call.
If not passed in, will be read from the env var ANTHROPIC_PROXY.
max_retries:
Max number of retries if a request fails.
api_key:
Anthropic API key. If not passed in, will be read from the env var
ANTHROPIC_API_KEY.
base_url:
Base URL for API requests. Only specify if using a proxy or service
emulator.
See full list of supported init args and their descriptions in the params section.
Instantiate
Note
Any param which is not explicitly supported will be passed directly to the
anthropic.Anthropic.messages.create(...) API every time the model is invoked.
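For example, a minimal instantiation (parameter values here are illustrative):

```python
from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(
    model="claude-3-7-sonnet-20250219",
    temperature=0,
    max_tokens=1024,
    timeout=None,
    max_retries=2,
    # api_key="...",   # read from env var ANTHROPIC_API_KEY if not passed
    # base_url="...",  # only needed for a proxy or service emulator
    # Any other keyword argument is forwarded to
    # anthropic.Anthropic.messages.create(...) on each invocation.
)
```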
Invoke
messages = [
(
"system",
"You are a helpful translator. Translate the user sentence to French.",
),
("human", "I love programming."),
]
model.invoke(messages)
AIMessage(
content="J'aime la programmation.",
response_metadata={
"id": "msg_01Trik66aiQ9Z1higrD5XFx3",
"model": "claude-3-7-sonnet-20250219",
"stop_reason": "end_turn",
"stop_sequence": None,
"usage": {"input_tokens": 25, "output_tokens": 11},
},
id="run-5886ac5f-3c2e-49f5-8a44-b1e92808c929-0",
usage_metadata={
"input_tokens": 25,
"output_tokens": 11,
"total_tokens": 36,
},
)
Stream
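For example, the chunks below are produced by iterating over the stream (a minimal sketch):

```python
for chunk in model.stream(messages):
    print(chunk)
```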
AIMessageChunk(content="J", id="run-272ff5f9-8485-402c-b90d-eac8babc5b25")
AIMessageChunk(content="'", id="run-272ff5f9-8485-402c-b90d-eac8babc5b25")
AIMessageChunk(content="a", id="run-272ff5f9-8485-402c-b90d-eac8babc5b25")
AIMessageChunk(content="ime", id="run-272ff5f9-8485-402c-b90d-eac8babc5b25")
AIMessageChunk(content=" la", id="run-272ff5f9-8485-402c-b90d-eac8babc5b25")
AIMessageChunk(content=" programm", id="run-272ff5f9-8485-402c-b90d-eac8babc5b25")
AIMessageChunk(content="ation", id="run-272ff5f9-8485-402c-b90d-eac8babc5b25")
AIMessageChunk(content=".", id="run-272ff5f9-8485-402c-b90d-eac8babc5b25")
Async
await model.ainvoke(messages)
# stream:
# async for chunk in model.astream(messages):
# batch:
# await model.abatch([messages])
AIMessage(
content="J'aime la programmation.",
response_metadata={
"id": "msg_01Trik66aiQ9Z1higrD5XFx3",
"model": "claude-3-7-sonnet-20250219",
"stop_reason": "end_turn",
"stop_sequence": None,
"usage": {"input_tokens": 25, "output_tokens": 11},
},
id="run-5886ac5f-3c2e-49f5-8a44-b1e92808c929-0",
usage_metadata={
"input_tokens": 25,
"output_tokens": 11,
"total_tokens": 36,
},
)
Tool calling
from pydantic import BaseModel, Field
class GetWeather(BaseModel):
'''Get the current weather in a given location'''
location: str = Field(..., description="The city and state, e.g. San Francisco, CA")
class GetPopulation(BaseModel):
'''Get the current population in a given location'''
location: str = Field(..., description="The city and state, e.g. San Francisco, CA")
model_with_tools = model.bind_tools([GetWeather, GetPopulation])
ai_msg = model_with_tools.invoke("Which city is hotter today and which is bigger: LA or NY?")
ai_msg.tool_calls
[
{
"name": "GetWeather",
"args": {"location": "Los Angeles, CA"},
"id": "toolu_01KzpPEAgzura7hpBqwHbWdo",
},
{
"name": "GetWeather",
"args": {"location": "New York, NY"},
"id": "toolu_01JtgbVGVJbiSwtZk3Uycezx",
},
{
"name": "GetPopulation",
"args": {"location": "Los Angeles, CA"},
"id": "toolu_01429aygngesudV9nTbCKGuw",
},
{
"name": "GetPopulation",
"args": {"location": "New York, NY"},
"id": "toolu_01JPktyd44tVMeBcPPnFSEJG",
},
]
See ChatAnthropic.bind_tools() method for more.
Structured output
from pydantic import BaseModel, Field
class Joke(BaseModel):
'''Joke to tell user.'''
setup: str = Field(description="The setup of the joke")
punchline: str = Field(description="The punchline to the joke")
rating: int | None = Field(default=None, description="How funny the joke is, from 1 to 10")
structured_model = model.with_structured_output(Joke)
structured_model.invoke("Tell me a joke about cats")
Joke(
setup="Why was the cat sitting on the computer?",
punchline="To keep an eye on the mouse!",
rating=None,
)
See ChatAnthropic.with_structured_output() for more.
Image input
See multimodal guides for more detail.
import base64
import httpx
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage
image_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
image_data = base64.b64encode(httpx.get(image_url).content).decode("utf-8")
model = ChatAnthropic(model="claude-3-5-sonnet-latest")
message = HumanMessage(
content=[
{
"type": "text",
"text": "Can you highlight the differences between these two images?",
},
{
"type": "image",
"base64": image_data,
"mime_type": "image/jpeg",
},
{
"type": "image",
"url": image_url,
},
],
)
ai_msg = model.invoke([message])
ai_msg.content
Files API
You can also pass in files that are managed through Anthropic's Files API:
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(
model="claude-sonnet-4-20250514",
betas=["files-api-2025-04-14"],
)
input_message = {
"role": "user",
"content": [
{
"type": "text",
"text": "Describe this document.",
},
{
"type": "image",
"id": "file_abc123...",
},
],
}
model.invoke([input_message])
PDF input
See multimodal guides for more detail.
from base64 import b64encode
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage
import requests
url = "https://www.w3.org/WAI/ER/tests/xhtml/testfiles/resources/pdf/dummy.pdf"
data = b64encode(requests.get(url).content).decode()
model = ChatAnthropic(model="claude-3-5-sonnet-latest")
ai_msg = model.invoke(
[
HumanMessage(
[
"Summarize this document.",
{
"type": "file",
"mime_type": "application/pdf",
"base64": data,
},
]
)
]
)
ai_msg.content
Files API
You can also pass in files that are managed through Anthropic's Files API:
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(
model="claude-sonnet-4-20250514",
betas=["files-api-2025-04-14"],
)
input_message = {
"role": "user",
"content": [
{
"type": "text",
"text": "Describe this document.",
},
{
"type": "file",
"id": "file_abc123...",
},
],
}
model.invoke([input_message])
Extended thinking
Certain Claude models support an extended thinking feature, which will output the step-by-step reasoning process that led to its final answer.
To use it, specify the thinking parameter when initializing ChatAnthropic.
It can also be passed in as a kwarg during invocation.
You will need to specify a token budget to use this feature. See usage example:
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(
model="claude-3-7-sonnet-latest",
max_tokens=5000,
thinking={"type": "enabled", "budget_tokens": 2000},
)
response = model.invoke("What is the cube root of 50.653?")
response.content
[
{
"signature": "...",
"thinking": "To find the cube root of 50.653...",
"type": "thinking",
},
{"text": "The cube root of 50.653 is ...", "type": "text"},
]
Differences in thinking across model versions
The Claude Messages API handles thinking differently across Claude Sonnet 3.7 and Claude 4 models. Refer to their docs for more info.
Citations
Anthropic supports a citations
feature that lets Claude attach context to its answers based on source
documents supplied by the user. When document content blocks
with "citations": {"enabled": True} are included in a query, Claude may
generate citations in its response.
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(model="claude-3-5-haiku-latest")
messages = [
{
"role": "user",
"content": [
{
"type": "document",
"source": {
"type": "text",
"media_type": "text/plain",
"data": "The grass is green. The sky is blue.",
},
"title": "My Document",
"context": "This is a trustworthy document.",
"citations": {"enabled": True},
},
{"type": "text", "text": "What color is the grass and sky?"},
],
}
]
response = model.invoke(messages)
response.content
[
{"text": "Based on the document, ", "type": "text"},
{
"text": "the grass is green",
"type": "text",
"citations": [
{
"type": "char_location",
"cited_text": "The grass is green. ",
"document_index": 0,
"document_title": "My Document",
"start_char_index": 0,
"end_char_index": 20,
}
],
},
{"text": ", and ", "type": "text"},
{
"text": "the sky is blue",
"type": "text",
"citations": [
{
"type": "char_location",
"cited_text": "The sky is blue.",
"document_index": 0,
"document_title": "My Document",
"start_char_index": 20,
"end_char_index": 36,
}
],
},
{"text": ".", "type": "text"},
]
Token usage
Message chunks containing token usage will be included during streaming by default:
stream = model.stream(messages)
full = next(stream)
for chunk in stream:
full += chunk
full.usage_metadata
These can be disabled by setting stream_usage=False in the stream method,
or by setting stream_usage=False when initializing ChatAnthropic.
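For example (a minimal sketch):

```python
from langchain_anthropic import ChatAnthropic

# Per call:
stream = model.stream(messages, stream_usage=False)

# Or for every call, at initialization:
model = ChatAnthropic(
    model="claude-3-7-sonnet-20250219",
    stream_usage=False,
)
```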
Prompt caching
Prompt caching reduces processing time and costs for repetitive tasks or prompts with consistent elements.
Note
Only certain models support prompt caching. See the Claude documentation for a full list.
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(model="claude-3-7-sonnet-20250219")
messages = [
{
"role": "system",
"content": [
{
"type": "text",
"text": "Below is some long context:",
},
{
"type": "text",
"text": f"{long_text}",
"cache_control": {"type": "ephemeral"},
},
],
},
{
"role": "user",
"content": "What's that about?",
},
]
response = model.invoke(messages)
response.usage_metadata["input_token_details"]
Alternatively, you can enable prompt caching at invocation time. This is useful when you want to cache conditionally based on runtime factors, such as the length of the context, or when the decision about what to cache is made at the application level.
Extended caching
The cache lifetime is 5 minutes by default. If this is too short, you can
apply one-hour caching by setting ttl to '1h'.
model = ChatAnthropic(
model="claude-3-7-sonnet-20250219",
)
messages = [
{
"role": "user",
"content": [
{
"type": "text",
"text": f"{long_text}",
"cache_control": {"type": "ephemeral", "ttl": "1h"},
},
],
}
]
response = model.invoke(messages)
Details of cached token counts will be included on the InputTokenDetails
of the response's usage_metadata:
{
"input_tokens": 1500,
"output_tokens": 200,
"total_tokens": 1700,
"input_token_details": {
"cache_read": 0,
"cache_creation": 1000,
"ephemeral_1h_input_tokens": 750,
"ephemeral_5m_input_tokens": 250,
},
}
See Claude documentation for detail.
!!! note title="Extended context windows (beta)"
Claude Sonnet 4 supports a 1-million token context window, available in beta for
organizations in usage tier 4 and organizations with custom rate limits.
```python
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(
model="claude-sonnet-4-20250514",
betas=["context-1m-2025-08-07"], # Enable 1M context beta
)
long_document = """
This is a very long document that would benefit from the extended 1M
context window...
[imagine this continues for hundreds of thousands of tokens]
"""
messages = [
HumanMessage(f"""
Please analyze this document and provide a summary:
{long_document}
What are the key themes and main conclusions?
""")
]
response = model.invoke(messages)
```
See [Claude documentation](https://docs.claude.com/en/docs/build-with-claude/context-windows#1m-token-context-window)
for detail.
!!! note title="Token-efficient tool use (beta)"
See LangChain [docs](https://python.langchain.com/docs/integrations/chat/anthropic/)
for more detail.
```python
from langchain_anthropic import ChatAnthropic
from langchain_core.tools import tool
model = ChatAnthropic(
model="claude-3-7-sonnet-20250219",
temperature=0,
model_kwargs={
"extra_headers": {
"anthropic-beta": "token-efficient-tools-2025-02-19"
}
}
)
@tool
def get_weather(location: str) -> str:
"""Get the weather at a location."""
return "It's sunny."
model_with_tools = model.bind_tools([get_weather])
response = model_with_tools.invoke(
"What's the weather in San Francisco?"
)
print(response.tool_calls)
print(f'Total tokens: {response.usage_metadata["total_tokens"]}')
```
```txt
[{'name': 'get_weather', 'args': {'location': 'San Francisco'}, 'id': 'toolu_01HLjQMSb1nWmgevQUtEyz17', 'type': 'tool_call'}]
Total tokens: 408
```
!!! note title="Context management"
Anthropic supports a context editing feature that will automatically manage the
model's context window (e.g., by clearing tool results).
See [Anthropic documentation](https://docs.claude.com/en/docs/build-with-claude/context-editing)
for details and configuration options.
```python
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(
model="claude-sonnet-4-5",
betas=["context-management-2025-06-27"],
context_management={"edits": [{"type": "clear_tool_uses_20250919"}]},
)
model_with_tools = model.bind_tools([{"type": "web_search_20250305", "name": "web_search"}])
response = model_with_tools.invoke("Search for recent developments in AI")
```
!!! note title="Built-in tools"
See LangChain [docs](https://python.langchain.com/docs/integrations/chat/anthropic/#built-in-tools)
for more detail.
??? note "Web search"
```python
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(model="claude-3-5-haiku-latest")
tool = {
"type": "web_search_20250305",
"name": "web_search",
"max_uses": 3,
}
model_with_tools = model.bind_tools([tool])
response = model_with_tools.invoke("How do I update a web app to TypeScript 5.5?")
```
??? note "Web fetch (beta)"
```python
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(
model="claude-3-5-haiku-latest",
betas=["web-fetch-2025-09-10"], # Enable web fetch beta
)
tool = {
"type": "web_fetch_20250910",
"name": "web_fetch",
"max_uses": 3,
}
model_with_tools = model.bind_tools([tool])
response = model_with_tools.invoke("Please analyze the content at https://example.com/article")
```
??? note "Code execution"
```python
model = ChatAnthropic(
model="claude-sonnet-4-20250514",
betas=["code-execution-2025-05-22"],
)
tool = {"type": "code_execution_20250522", "name": "code_execution"}
model_with_tools = model.bind_tools([tool])
response = model_with_tools.invoke(
"Calculate the mean and standard deviation of [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]"
)
```
??? note "Remote MCP"
```python
from langchain_anthropic import ChatAnthropic
mcp_servers = [
{
"type": "url",
"url": "https://mcp.deepwiki.com/mcp",
"name": "deepwiki",
"tool_configuration": { # optional configuration
"enabled": True,
"allowed_tools": ["ask_question"],
},
"authorization_token": "PLACEHOLDER", # optional authorization
}
]
model = ChatAnthropic(
model="claude-sonnet-4-20250514",
betas=["mcp-client-2025-04-04"],
mcp_servers=mcp_servers,
)
response = model.invoke(
"What transport protocols does the 2025-03-26 version of the MCP "
"spec (modelcontextprotocol/modelcontextprotocol) support?"
)
```
??? note "Text editor"
```python
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(model="claude-3-7-sonnet-20250219")
tool = {"type": "text_editor_20250124", "name": "str_replace_editor"}
model_with_tools = model.bind_tools([tool])
response = model_with_tools.invoke(
"There's a syntax error in my primes.py file. Can you help me fix it?"
)
print(response.text)
response.tool_calls
```
```txt
I'd be happy to help you fix the syntax error in your primes.py file. First, let's look at the current content of the file to identify the error.
[{'name': 'str_replace_editor',
'args': {'command': 'view', 'path': '/repo/primes.py'},
'id': 'toolu_01VdNgt1YV7kGfj9LFLm6HyQ',
'type': 'tool_call'}]
```
??? note "Memory tool"
```python
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(
model="claude-sonnet-4-5",
betas=["context-management-2025-06-27"],
)
model_with_tools = model.bind_tools([{"type": "memory_20250818", "name": "memory"}])
response = model_with_tools.invoke("What are my interests?")
```
!!! note title="Response metadata"
```python
ai_msg = model.invoke(messages)
ai_msg.response_metadata
```
```python
{
"id": "msg_013xU6FHEGEq76aP4RgFerVT",
"model": "claude-3-7-sonnet-20250219",
"stop_reason": "end_turn",
"stop_sequence": None,
"usage": {"input_tokens": 25, "output_tokens": 11},
}
```
| METHOD | DESCRIPTION |
|---|---|
| `is_lc_serializable` | Whether the class is serializable in langchain. |
| `get_lc_namespace` | Get the namespace of the LangChain object. |
| `set_default_max_tokens` | Set default max_tokens. |
| `build_extra` | Build model kwargs. |
| `bind_tools` | Bind tool-like objects to this chat model. |
| `with_structured_output` | Model wrapper that returns outputs formatted to match the given schema. |
| `get_num_tokens_from_messages` | Count tokens in a sequence of input messages. |
| `get_name` | Get the name of the Runnable. |
| `get_input_schema` | Get a Pydantic model that can be used to validate input to the Runnable. |
| `get_input_jsonschema` | Get a JSON schema that represents the input to the Runnable. |
| `get_output_schema` | Get a Pydantic model that can be used to validate output to the Runnable. |
| `get_output_jsonschema` | Get a JSON schema that represents the output of the Runnable. |
| `config_schema` | The type of config this Runnable accepts specified as a Pydantic model. |
| `get_config_jsonschema` | Get a JSON schema that represents the config of the Runnable. |
| `get_graph` | Return a graph representation of this Runnable. |
| `get_prompts` | Return a list of prompts used by this Runnable. |
| `__or__` | Runnable "or" operator. |
| `__ror__` | Runnable "reverse-or" operator. |
| `pipe` | Pipe Runnable objects. |
| `pick` | Pick keys from the output dict of this Runnable. |
| `assign` | Assigns new fields to the dict output of this Runnable. |
| `invoke` | Transform a single input into an output. |
| `ainvoke` | Transform a single input into an output. |
| `batch` | Default implementation runs invoke in parallel using a thread pool executor. |
| `batch_as_completed` | Run invoke in parallel on a list of inputs. |
| `abatch` | Default implementation runs ainvoke in parallel using asyncio.gather. |
| `abatch_as_completed` | Run ainvoke in parallel on a list of inputs. |
| `stream` | Default implementation of stream, which calls invoke. |
| `astream` | Default implementation of astream, which calls ainvoke. |
| `astream_log` | Stream all output from a Runnable, as reported to the callback system. |
| `astream_events` | Generate a stream of events. |
| `transform` | Transform inputs to outputs. |
| `atransform` | Transform inputs to outputs. |
| `bind` | Bind arguments to a Runnable, returning a new Runnable. |
| `with_config` | Bind config to a Runnable, returning a new Runnable. |
| `with_listeners` | Bind lifecycle listeners to a Runnable, returning a new Runnable. |
| `with_alisteners` | Bind async lifecycle listeners to a Runnable, returning a new Runnable. |
| `with_types` | Bind input and output types to a Runnable, returning a new Runnable. |
| `with_retry` | Create a new Runnable that retries the original Runnable on exceptions. |
| `map` | Return a new Runnable that maps a list of inputs to a list of outputs. |
| `with_fallbacks` | Add fallbacks to a Runnable, returning a new Runnable. |
| `as_tool` | Create a BaseTool from a Runnable. |
| `__init__` | |
| `lc_id` | Return a unique identifier for this class for serialization purposes. |
| `to_json` | Serialize the Runnable to JSON. |
| `to_json_not_implemented` | Serialize a "not implemented" object. |
| `configurable_fields` | Configure particular Runnable fields at runtime. |
| `configurable_alternatives` | Configure alternatives for Runnable objects that can be set at runtime. |
| `set_verbose` | If verbose is None, set it. |
| `generate_prompt` | Pass a sequence of prompts to the model and return model generations. |
| `agenerate_prompt` | Asynchronously pass a sequence of prompts and return model generations. |
| `get_token_ids` | Return the ordered ids of the tokens in a text. |
| `get_num_tokens` | Get the number of tokens present in the text. |
| `generate` | Pass a sequence of prompts to the model and return model generations. |
| `agenerate` | Asynchronously pass a sequence of prompts to a model and return generations. |
| `dict` | Return a dictionary of the LLM. |
model
class-attribute
instance-attribute
¶
Model name to use.
max_tokens
class-attribute
instance-attribute
¶
Denotes the number of tokens to predict per generation.
temperature
class-attribute
instance-attribute
¶
temperature: float | None = None
A non-negative float that tunes the degree of randomness in generation.
top_k
class-attribute
instance-attribute
¶
top_k: int | None = None
Number of most likely tokens to consider at each step.
top_p
class-attribute
instance-attribute
¶
top_p: float | None = None
Total probability mass of tokens to consider at each step.
default_request_timeout
class-attribute
instance-attribute
¶
Timeout for requests to Anthropic Completion API.
max_retries
class-attribute
instance-attribute
¶
max_retries: int = 2
Number of retries allowed for requests sent to the Anthropic Completion API.
stop_sequences
class-attribute
instance-attribute
¶
Default stop sequences.
anthropic_api_url
class-attribute
instance-attribute
¶
anthropic_api_url: str | None = Field(
alias="base_url",
default_factory=from_env(
["ANTHROPIC_API_URL", "ANTHROPIC_BASE_URL"], default="https://api.anthropic.com"
),
)
Base URL for API requests. Only specify if using a proxy or service emulator.
If a value isn't passed in, it will be read first from ANTHROPIC_API_URL
and, if that is not set, from ANTHROPIC_BASE_URL.
If neither is set, the default value of https://api.anthropic.com will
be used.
anthropic_api_key
class-attribute
instance-attribute
¶
anthropic_api_key: SecretStr = Field(
alias="api_key", default_factory=secret_from_env("ANTHROPIC_API_KEY", default="")
)
Automatically read from env var ANTHROPIC_API_KEY if not provided.
anthropic_proxy
class-attribute
instance-attribute
¶
Proxy to use for the Anthropic clients; it is used for every API call.
If not provided, will attempt to read from the ANTHROPIC_PROXY environment
variable.
default_headers
class-attribute
instance-attribute
¶
Headers to pass to the Anthropic clients; they are used for every API call.
betas
class-attribute
instance-attribute
¶
List of beta features to enable. If specified, invocations will be routed through client.beta.messages.create.
Example: betas=["mcp-client-2025-04-04"]
streaming
class-attribute
instance-attribute
¶
streaming: bool = False
Whether to use streaming or not.
stream_usage
class-attribute
instance-attribute
¶
stream_usage: bool = True
Whether to include usage metadata in streaming output. If True, additional
message chunks will be generated during the stream including usage metadata.
thinking
class-attribute
instance-attribute
¶
Parameters for Claude reasoning,
e.g., {"type": "enabled", "budget_tokens": 10_000}
mcp_servers
class-attribute
instance-attribute
¶
List of MCP servers to use for the request.
Example: mcp_servers=[{"type": "url", "url": "https://mcp.example.com/mcp",
"name": "example-mcp"}]
context_management
class-attribute
instance-attribute
¶
Configuration for context management.
lc_secrets
property
¶
Return a mapping of secret keys to environment variables.
name
class-attribute
instance-attribute
¶
name: str | None = None
The name of the Runnable. Used for debugging and tracing.
input_schema
property
¶
The type of input this Runnable accepts specified as a Pydantic model.
output_schema
property
¶
Output schema.
The type of output this Runnable produces specified as a Pydantic model.
config_specs
property
¶
config_specs: list[ConfigurableFieldSpec]
List configurable fields for this Runnable.
lc_attributes
property
¶
lc_attributes: dict
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
Default is an empty dictionary.
cache
class-attribute
instance-attribute
¶
Whether to cache the response.
- If `True`, will use the global cache.
- If `False`, will not use a cache.
- If `None`, will use the global cache if it's set, otherwise no cache.
- If instance of `BaseCache`, will use the provided cache.
Caching is not currently supported for streaming methods of models.
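For example, a minimal sketch using the in-memory cache from langchain_core (assumed available):

```python
from langchain_anthropic import ChatAnthropic
from langchain_core.caches import InMemoryCache

# Repeated identical (non-streaming) calls are served from the cache.
model = ChatAnthropic(
    model="claude-3-7-sonnet-20250219",
    cache=InMemoryCache(),
)
```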
verbose
class-attribute
instance-attribute
¶
Whether to print out response text.
callbacks
class-attribute
instance-attribute
¶
callbacks: Callbacks = Field(default=None, exclude=True)
Callbacks to add to the run trace.
tags
class-attribute
instance-attribute
¶
Tags to add to the run trace.
metadata
class-attribute
instance-attribute
¶
Metadata to add to the run trace.
custom_get_token_ids
class-attribute
instance-attribute
¶
Optional encoder to use for counting tokens.
rate_limiter
class-attribute
instance-attribute
¶
rate_limiter: BaseRateLimiter | None = Field(default=None, exclude=True)
An optional rate limiter to use for limiting the number of requests.
disable_streaming
class-attribute
instance-attribute
¶
Whether to disable streaming for this model.
If streaming is bypassed, then stream/astream/astream_events will
defer to invoke/ainvoke.
- If `True`, will always bypass streaming case.
- If `'tool_calling'`, will bypass streaming case only when the model is called with a `tools` keyword argument. In other words, LangChain will automatically switch to non-streaming behavior (`invoke`) only when the `tools` argument is provided. This offers the best of both worlds.
- If `False` (default), will always use streaming case if available.

The main reason for this flag is that code might be written using `stream` and
a user may want to swap out a given model for another model whose implementation
does not properly support streaming.
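For example (illustrative):

```python
from langchain_anthropic import ChatAnthropic

# Bypass streaming (fall back to invoke) only when tools are bound.
model = ChatAnthropic(
    model="claude-3-7-sonnet-20250219",
    disable_streaming="tool_calling",
)
```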
output_version
class-attribute
instance-attribute
¶
Version of AIMessage output format to store in message content.
AIMessage.content_blocks will lazily parse the contents of content into a
standard format. This flag can be used to additionally store the standard format
in message content, e.g., for serialization purposes.
Supported values:
- `'v0'`: provider-specific format in content (can lazily parse with `content_blocks`)
- `'v1'`: standardized format in content (consistent with `content_blocks`)
Partner packages (e.g.,
langchain-openai) can also use this
field to roll out new content formats in a backward-compatible way.
Added in version 1.0
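For example, a minimal sketch that opts into the standardized format:

```python
from langchain_anthropic import ChatAnthropic

# Store standardized content blocks directly in message content.
model = ChatAnthropic(
    model="claude-sonnet-4-20250514",
    output_version="v1",
)
```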
is_lc_serializable
classmethod
¶
is_lc_serializable() -> bool
Whether the class is serializable in langchain.
get_lc_namespace
classmethod
¶
Get the namespace of the LangChain object.
set_default_max_tokens
classmethod
¶
Set default max_tokens.
bind_tools
¶
bind_tools(
tools: Sequence[dict[str, Any] | type | Callable | BaseTool],
*,
tool_choice: dict[str, str] | str | None = None,
parallel_tool_calls: bool | None = None,
**kwargs: Any,
) -> Runnable[LanguageModelInput, AIMessage]
Bind tool-like objects to this chat model.
| PARAMETER | DESCRIPTION |
|---|---|
tools
|
A list of tool definitions to bind to this chat model.
Supports Anthropic format tool schemas and any tool definition handled
by |
tool_choice
|
Which tool to require the model to call. Options are:
|
parallel_tool_calls
|
Set to Added in version 0.3.2
TYPE:
|
kwargs
|
Any additional parameters are passed directly to
TYPE:
|
Example
from langchain_anthropic import ChatAnthropic
from pydantic import BaseModel, Field
class GetWeather(BaseModel):
'''Get the current weather in a given location'''
location: str = Field(..., description="The city and state, e.g. San Francisco, CA")
class GetPrice(BaseModel):
'''Get the price of a specific product.'''
product: str = Field(..., description="The product to look up.")
model = ChatAnthropic(model="claude-3-5-sonnet-latest", temperature=0)
model_with_tools = model.bind_tools([GetWeather, GetPrice])
model_with_tools.invoke(
"What is the weather like in San Francisco",
)
# -> AIMessage(
# content=[
# {'text': '<thinking>\nBased on the user\'s question, the relevant function to call is GetWeather, which requires the "location" parameter.\n\nThe user has directly specified the location as "San Francisco". Since San Francisco is a well known city, I can reasonably infer they mean San Francisco, CA without needing the state specified.\n\nAll the required parameters are provided, so I can proceed with the API call.\n</thinking>', 'type': 'text'},
# {'text': None, 'type': 'tool_use', 'id': 'toolu_01SCgExKzQ7eqSkMHfygvYuu', 'name': 'GetWeather', 'input': {'location': 'San Francisco, CA'}}
# ],
# response_metadata={'id': 'msg_01GM3zQtoFv8jGQMW7abLnhi', 'model': 'claude-3-5-sonnet-latest', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 487, 'output_tokens': 145}},
# id='run-87b1331e-9251-4a68-acef-f0a018b639cc-0'
# )
Example — force tool call with tool_choice 'any':
```python
from langchain_anthropic import ChatAnthropic
from pydantic import BaseModel, Field
class GetWeather(BaseModel):
'''Get the current weather in a given location'''
location: str = Field(..., description="The city and state, e.g. San Francisco, CA")
class GetPrice(BaseModel):
'''Get the price of a specific product.'''
product: str = Field(..., description="The product to look up.")
model = ChatAnthropic(model="claude-3-5-sonnet-latest", temperature=0)
model_with_tools = model.bind_tools([GetWeather, GetPrice], tool_choice="any")
model_with_tools.invoke(
"what is the weather like in San Francisco",
)
```
Example — force specific tool call with tool_choice '<name_of_tool>':
from langchain_anthropic import ChatAnthropic
from pydantic import BaseModel, Field
class GetWeather(BaseModel):
'''Get the current weather in a given location'''
location: str = Field(..., description="The city and state, e.g. San Francisco, CA")
class GetPrice(BaseModel):
'''Get the price of a specific product.'''
product: str = Field(..., description="The product to look up.")
model = ChatAnthropic(model="claude-3-5-sonnet-latest", temperature=0)
model_with_tools = model.bind_tools([GetWeather, GetPrice], tool_choice="GetWeather")
model_with_tools.invoke("What is the weather like in San Francisco")
Example — cache specific tools:
from langchain_anthropic import ChatAnthropic, convert_to_anthropic_tool
from pydantic import BaseModel, Field
class GetWeather(BaseModel):
'''Get the current weather in a given location'''
location: str = Field(..., description="The city and state, e.g. San Francisco, CA")
class GetPrice(BaseModel):
'''Get the price of a specific product.'''
product: str = Field(..., description="The product to look up.")
# We'll convert our pydantic class to the anthropic tool format
# before passing to bind_tools so that we can set the 'cache_control'
# field on our tool.
cached_price_tool = convert_to_anthropic_tool(GetPrice)
# Currently the only supported "cache_control" value is
# {"type": "ephemeral"}.
cached_price_tool["cache_control"] = {"type": "ephemeral"}
# Prompt caching does not require any extra beta headers.
model = ChatAnthropic(
model="claude-3-5-sonnet-latest",
temperature=0,
)
model_with_tools = model.bind_tools([GetWeather, cached_price_tool])
model_with_tools.invoke("What is the weather like in San Francisco")
This outputs:
AIMessage(
content=[
{
"text": "Certainly! I can help you find out the current weather in San Francisco. To get this information, I'll use the GetWeather function. Let me fetch that data for you right away.",
"type": "text",
},
{
"id": "toolu_01TS5h8LNo7p5imcG7yRiaUM",
"input": {"location": "San Francisco, CA"},
"name": "GetWeather",
"type": "tool_use",
},
],
response_metadata={
"id": "msg_01Xg7Wr5inFWgBxE5jH9rpRo",
"model": "claude-3-5-sonnet-latest",
"stop_reason": "tool_use",
"stop_sequence": None,
"usage": {
"input_tokens": 171,
"output_tokens": 96,
"cache_creation_input_tokens": 1470,
"cache_read_input_tokens": 0,
},
},
id="run-b36a5b54-5d69-470e-a1b0-b932d00b089e-0",
tool_calls=[
{
"name": "GetWeather",
"args": {"location": "San Francisco, CA"},
"id": "toolu_01TS5h8LNo7p5imcG7yRiaUM",
"type": "tool_call",
}
],
usage_metadata={
"input_tokens": 171,
"output_tokens": 96,
"total_tokens": 267,
},
)
If we invoke the tool again, we can see that the "usage" information in the AIMessage.response_metadata shows that we had a cache hit:
AIMessage(
content=[
{
"text": "To get the current weather in San Francisco, I can use the GetWeather function. Let me check that for you.",
"type": "text",
},
{
"id": "toolu_01HtVtY1qhMFdPprx42qU2eA",
"input": {"location": "San Francisco, CA"},
"name": "GetWeather",
"type": "tool_use",
},
],
response_metadata={
"id": "msg_016RfWHrRvW6DAGCdwB6Ac64",
"model": "claude-3-5-sonnet-latest",
"stop_reason": "tool_use",
"stop_sequence": None,
"usage": {
"input_tokens": 171,
"output_tokens": 82,
"cache_creation_input_tokens": 0,
"cache_read_input_tokens": 1470,
},
},
id="run-88b1f825-dcb7-4277-ac27-53df55d22001-0",
tool_calls=[
{
"name": "GetWeather",
"args": {"location": "San Francisco, CA"},
"id": "toolu_01HtVtY1qhMFdPprx42qU2eA",
"type": "tool_call",
}
],
usage_metadata={
"input_tokens": 171,
"output_tokens": 82,
"total_tokens": 253,
},
)
with_structured_output
¶
with_structured_output(
schema: dict | type, *, include_raw: bool = False, **kwargs: Any
) -> Runnable[LanguageModelInput, dict | BaseModel]
Model wrapper that returns outputs formatted to match the given schema.
| PARAMETER | DESCRIPTION |
|---|---|
schema
|
The output schema. Can be passed in as:
If See |
include_raw
|
If The final output is always a
TYPE:
|
kwargs
|
Additional keyword arguments are ignored.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
Runnable[LanguageModelInput, dict | BaseModel]
|
A If
|
Example: Pydantic schema (include_raw=False):
from langchain_anthropic import ChatAnthropic
from pydantic import BaseModel
class AnswerWithJustification(BaseModel):
'''An answer to the user question along with justification for the answer.'''
answer: str
justification: str
model = ChatAnthropic(model="claude-3-5-sonnet-latest", temperature=0)
structured_model = model.with_structured_output(AnswerWithJustification)
structured_model.invoke("What weighs more a pound of bricks or a pound of feathers")
# -> AnswerWithJustification(
# answer='They weigh the same',
# justification='Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ.'
# )
Example: Pydantic schema (include_raw=True):
from langchain_anthropic import ChatAnthropic
from pydantic import BaseModel
class AnswerWithJustification(BaseModel):
'''An answer to the user question along with justification for the answer.'''
answer: str
justification: str
model = ChatAnthropic(model="claude-3-5-sonnet-latest", temperature=0)
structured_model = model.with_structured_output(AnswerWithJustification, include_raw=True)
structured_model.invoke("What weighs more a pound of bricks or a pound of feathers")
# -> {
# 'raw': AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_Ao02pnFYXD6GN1yzc0uXPsvF', 'function': {'arguments': '{"answer":"They weigh the same.","justification":"Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ."}', 'name': 'AnswerWithJustification'}, 'type': 'function'}]}),
# 'parsed': AnswerWithJustification(answer='They weigh the same.', justification='Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ.'),
# 'parsing_error': None
# }
Example: dict schema (include_raw=False):
from langchain_anthropic import ChatAnthropic
schema = {
"name": "AnswerWithJustification",
"description": "An answer to the user question along with justification for the answer.",
"input_schema": {
"type": "object",
"properties": {
"answer": {"type": "string"},
"justification": {"type": "string"},
},
"required": ["answer", "justification"],
},
}
model = ChatAnthropic(model="claude-3-5-sonnet-latest", temperature=0)
structured_model = model.with_structured_output(schema)
structured_model.invoke("What weighs more a pound of bricks or a pound of feathers")
# -> {
# 'answer': 'They weigh the same',
# 'justification': 'Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume and density of the two substances differ.'
# }
get_num_tokens_from_messages
¶
get_num_tokens_from_messages(
messages: list[BaseMessage],
tools: Sequence[dict[str, Any] | type | Callable | BaseTool] | None = None,
**kwargs: Any,
) -> int
Count tokens in a sequence of input messages.
| PARAMETER | DESCRIPTION |
|---|---|
messages
|
The message inputs to tokenize.
TYPE:
|
tools
|
If provided, sequence of
TYPE:
|
kwargs
|
Additional keyword arguments are passed to the Anthropic
TYPE:
|
Basic usage:
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage, SystemMessage
model = ChatAnthropic(model="claude-3-5-sonnet-20241022")
messages = [
SystemMessage(content="You are a scientist"),
HumanMessage(content="Hello, Claude"),
]
model.get_num_tokens_from_messages(messages)
Pass tool schemas:
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool
model = ChatAnthropic(model="claude-3-5-sonnet-20241022")
@tool(parse_docstring=True)
def get_weather(location: str) -> str:
"""Get the current weather in a given location
Args:
location: The city and state, e.g. San Francisco, CA
"""
return "Sunny"
messages = [
HumanMessage(content="What's the weather like in San Francisco?"),
]
model.get_num_tokens_from_messages(messages, tools=[get_weather])
Behavior changed in 0.3.0
Uses Anthropic's token counting API to count tokens in messages.
get_name
¶
get_input_schema
¶
get_input_schema(config: RunnableConfig | None = None) -> type[BaseModel]
Get a Pydantic model that can be used to validate input to the Runnable.
Runnable objects that leverage the configurable_fields and
configurable_alternatives methods will have a dynamic input schema that
depends on which configuration the Runnable is invoked with.
This method allows you to get an input schema for a specific configuration.
| PARAMETER | DESCRIPTION |
|---|---|
config
|
A config to use when generating the schema.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
type[BaseModel]
|
A Pydantic model that can be used to validate input. |
get_input_jsonschema
¶
get_input_jsonschema(config: RunnableConfig | None = None) -> dict[str, Any]
Get a JSON schema that represents the input to the Runnable.
| PARAMETER | DESCRIPTION |
|---|---|
config
|
A config to use when generating the schema.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
dict[str, Any]
|
A JSON schema that represents the input to the |
Example
Added in version 0.3.0
get_output_schema
¶
get_output_schema(config: RunnableConfig | None = None) -> type[BaseModel]
Get a Pydantic model that can be used to validate output to the Runnable.
Runnable objects that leverage the configurable_fields and
configurable_alternatives methods will have a dynamic output schema that
depends on which configuration the Runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
| PARAMETER | DESCRIPTION |
|---|---|
config
|
A config to use when generating the schema.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
type[BaseModel]
|
A Pydantic model that can be used to validate output. |
get_output_jsonschema
¶
get_output_jsonschema(config: RunnableConfig | None = None) -> dict[str, Any]
Get a JSON schema that represents the output of the Runnable.
| PARAMETER | DESCRIPTION |
|---|---|
config
|
A config to use when generating the schema.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
dict[str, Any]
|
A JSON schema that represents the output of the |
Example
Added in version 0.3.0
config_schema
¶
The type of config this Runnable accepts specified as a Pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
| PARAMETER | DESCRIPTION |
|---|---|
include
|
A list of fields to include in the config schema. |
| RETURNS | DESCRIPTION |
|---|---|
type[BaseModel]
|
A Pydantic model that can be used to validate config. |
get_config_jsonschema
¶
get_graph
¶
get_graph(config: RunnableConfig | None = None) -> Graph
Return a graph representation of this Runnable.
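A minimal usage sketch (the ASCII rendering assumes the optional grandalf package is installed):

```python
model.get_graph().print_ascii()
```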
get_prompts
¶
get_prompts(config: RunnableConfig | None = None) -> list[BasePromptTemplate]
Return a list of prompts used by this Runnable.
__or__
¶
__or__(
other: Runnable[Any, Other]
| Callable[[Iterator[Any]], Iterator[Other]]
| Callable[[AsyncIterator[Any]], AsyncIterator[Other]]
| Callable[[Any], Other]
| Mapping[str, Runnable[Any, Other] | Callable[[Any], Other] | Any],
) -> RunnableSerializable[Input, Other]
Runnable "or" operator.
Compose this Runnable with another object to create a
RunnableSequence.
| PARAMETER | DESCRIPTION |
|---|---|
other
|
Another
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
RunnableSerializable[Input, Other]
|
A new |
__ror__
¶
__ror__(
other: Runnable[Other, Any]
| Callable[[Iterator[Other]], Iterator[Any]]
| Callable[[AsyncIterator[Other]], AsyncIterator[Any]]
| Callable[[Other], Any]
| Mapping[str, Runnable[Other, Any] | Callable[[Other], Any] | Any],
) -> RunnableSerializable[Other, Output]
Runnable "reverse-or" operator.
Compose this Runnable with another object to create a
RunnableSequence.
| PARAMETER | DESCRIPTION |
|---|---|
other
|
Another
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
RunnableSerializable[Other, Output]
|
A new |
pipe
¶
pipe(
*others: Runnable[Any, Other] | Callable[[Any], Other], name: str | None = None
) -> RunnableSerializable[Input, Other]
Pipe Runnable objects.
Compose this Runnable with Runnable-like objects to make a
RunnableSequence.
Equivalent to RunnableSequence(self, *others) or self | others[0] | ...
Example
from langchain_core.runnables import RunnableLambda
def add_one(x: int) -> int:
return x + 1
def mul_two(x: int) -> int:
return x * 2
runnable_1 = RunnableLambda(add_one)
runnable_2 = RunnableLambda(mul_two)
sequence = runnable_1.pipe(runnable_2)
# Or equivalently:
# sequence = runnable_1 | runnable_2
# sequence = RunnableSequence(first=runnable_1, last=runnable_2)
sequence.invoke(1)
await sequence.ainvoke(1)
# -> 4
sequence.batch([1, 2, 3])
await sequence.abatch([1, 2, 3])
# -> [4, 6, 8]
| PARAMETER | DESCRIPTION |
|---|---|
*others
|
Other
TYPE:
|
name
|
An optional name for the resulting
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
RunnableSerializable[Input, Other]
|
A new |
pick
¶
Pick keys from the output dict of this Runnable.
Pick a single key:
import json
from langchain_core.runnables import RunnableLambda, RunnableMap
as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)
chain = RunnableMap(str=as_str, json=as_json)
chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3]}
json_only_chain = chain.pick("json")
json_only_chain.invoke("[1, 2, 3]")
# -> [1, 2, 3]
Pick a list of keys:
from typing import Any
import json
from langchain_core.runnables import RunnableLambda, RunnableMap
as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)
def as_bytes(x: Any) -> bytes:
return bytes(x, "utf-8")
chain = RunnableMap(str=as_str, json=as_json, bytes=RunnableLambda(as_bytes))
chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3], "bytes": b"[1, 2, 3]"}
json_and_bytes_chain = chain.pick(["json", "bytes"])
json_and_bytes_chain.invoke("[1, 2, 3]")
# -> {"json": [1, 2, 3], "bytes": b"[1, 2, 3]"}
| PARAMETER | DESCRIPTION |
|---|---|
keys
|
A key or list of keys to pick from the output dict. |
| RETURNS | DESCRIPTION |
|---|---|
RunnableSerializable[Any, Any]
|
a new |
assign
¶
assign(
**kwargs: Runnable[dict[str, Any], Any]
| Callable[[dict[str, Any]], Any]
| Mapping[str, Runnable[dict[str, Any], Any] | Callable[[dict[str, Any]], Any]],
) -> RunnableSerializable[Any, Any]
Assigns new fields to the dict output of this Runnable.
from langchain_community.llms.fake import FakeStreamingListLLM
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import SystemMessagePromptTemplate
from langchain_core.runnables import Runnable
from operator import itemgetter
prompt = (
SystemMessagePromptTemplate.from_template("You are a nice assistant.")
+ "{question}"
)
model = FakeStreamingListLLM(responses=["foo-lish"])
chain: Runnable = prompt | model | {"str": StrOutputParser()}
chain_with_assign = chain.assign(hello=itemgetter("str") | model)
print(chain_with_assign.input_schema.model_json_schema())
# {'title': 'PromptInput', 'type': 'object', 'properties':
#  {'question': {'title': 'Question', 'type': 'string'}}}
print(chain_with_assign.output_schema.model_json_schema())
# {'title': 'RunnableSequenceOutput', 'type': 'object', 'properties':
#  {'str': {'title': 'Str', 'type': 'string'},
#   'hello': {'title': 'Hello', 'type': 'string'}}}
| PARAMETER | DESCRIPTION |
|---|---|
**kwargs
|
A mapping of keys to
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
RunnableSerializable[Any, Any]
|
A new |
invoke
¶
invoke(
input: LanguageModelInput,
config: RunnableConfig | None = None,
*,
stop: list[str] | None = None,
**kwargs: Any,
) -> AIMessage
Transform a single input into an output.
| PARAMETER | DESCRIPTION |
|---|---|
input
|
The input to the
TYPE:
|
config
|
A config to use when invoking the
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
Output
|
The output of the |
ainvoke
async
¶
ainvoke(
input: LanguageModelInput,
config: RunnableConfig | None = None,
*,
stop: list[str] | None = None,
**kwargs: Any,
) -> AIMessage
Transform a single input into an output.
| PARAMETER | DESCRIPTION |
|---|---|
input
|
The input to the
TYPE:
|
config
|
A config to use when invoking the
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
Output
|
The output of the |
batch
¶
batch(
inputs: list[Input],
config: RunnableConfig | list[RunnableConfig] | None = None,
*,
return_exceptions: bool = False,
**kwargs: Any | None,
) -> list[Output]
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses must override this method if they can batch more efficiently;
e.g., if the underlying Runnable uses an API which supports a batch mode.
| PARAMETER | DESCRIPTION |
|---|---|
inputs
|
A list of inputs to the
TYPE:
|
config
|
A config to use when invoking the
TYPE:
|
return_exceptions
|
Whether to return exceptions instead of raising them.
TYPE:
|
**kwargs
|
Additional keyword arguments to pass to the
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
list[Output]
|
A list of outputs from the |
batch_as_completed
¶
batch_as_completed(
inputs: Sequence[Input],
config: RunnableConfig | Sequence[RunnableConfig] | None = None,
*,
return_exceptions: bool = False,
**kwargs: Any | None,
) -> Iterator[tuple[int, Output | Exception]]
Run invoke in parallel on a list of inputs.
Yields results as they complete.
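A minimal usage sketch:

```python
# Results arrive as (index, output) pairs in completion order.
for idx, message in model.batch_as_completed(["Hello!", "Bonjour!"]):
    print(idx, message.content)
```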
| PARAMETER | DESCRIPTION |
|---|---|
inputs
|
A list of inputs to the
TYPE:
|
config
|
A config to use when invoking the
TYPE:
|
return_exceptions
|
Whether to return exceptions instead of raising them.
TYPE:
|
**kwargs
|
Additional keyword arguments to pass to the
TYPE:
|
| YIELDS | DESCRIPTION |
|---|---|
tuple[int, Output | Exception]
|
Tuples of the index of the input and the output from the |
abatch
async
¶
abatch(
inputs: list[Input],
config: RunnableConfig | list[RunnableConfig] | None = None,
*,
return_exceptions: bool = False,
**kwargs: Any | None,
) -> list[Output]
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses must override this method if they can batch more efficiently;
e.g., if the underlying Runnable uses an API which supports a batch mode.
| PARAMETER | DESCRIPTION |
|---|---|
inputs
|
A list of inputs to the
TYPE:
|
config
|
A config to use when invoking the
TYPE:
|
return_exceptions
|
Whether to return exceptions instead of raising them.
TYPE:
|
**kwargs
|
Additional keyword arguments to pass to the
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
list[Output]
|
A list of outputs from the |
abatch_as_completed
async
¶
abatch_as_completed(
inputs: Sequence[Input],
config: RunnableConfig | Sequence[RunnableConfig] | None = None,
*,
return_exceptions: bool = False,
**kwargs: Any | None,
) -> AsyncIterator[tuple[int, Output | Exception]]
Run ainvoke in parallel on a list of inputs.
Yields results as they complete.
| PARAMETER | DESCRIPTION |
|---|---|
inputs
|
A list of inputs to the
TYPE:
|
config
|
A config to use when invoking the
TYPE:
|
return_exceptions
|
Whether to return exceptions instead of raising them.
TYPE:
|
**kwargs
|
Additional keyword arguments to pass to the
TYPE:
|
| YIELDS | DESCRIPTION |
|---|---|
AsyncIterator[tuple[int, Output | Exception]]
|
A tuple of the index of the input and the output from the |
stream
¶
stream(
input: LanguageModelInput,
config: RunnableConfig | None = None,
*,
stop: list[str] | None = None,
**kwargs: Any,
) -> Iterator[AIMessageChunk]
Default implementation of stream, which calls invoke.
Subclasses must override this method if they support streaming output.
| PARAMETER | DESCRIPTION |
|---|---|
input
|
The input to the
TYPE:
|
config
|
The config to use for the
TYPE:
|
**kwargs
|
Additional keyword arguments to pass to the
TYPE:
|
| YIELDS | DESCRIPTION |
|---|---|
Output
|
The output of the |
astream
async
¶
astream(
input: LanguageModelInput,
config: RunnableConfig | None = None,
*,
stop: list[str] | None = None,
**kwargs: Any,
) -> AsyncIterator[AIMessageChunk]
Default implementation of astream, which calls ainvoke.
Subclasses must override this method if they support streaming output.
| PARAMETER | DESCRIPTION |
|---|---|
input
|
The input to the
TYPE:
|
config
|
The config to use for the
TYPE:
|
**kwargs
|
Additional keyword arguments to pass to the
TYPE:
|
| YIELDS | DESCRIPTION |
|---|---|
AsyncIterator[Output]
|
The output of the |
astream_log
async
¶
astream_log(
input: Any,
config: RunnableConfig | None = None,
*,
diff: bool = True,
with_streamed_output_list: bool = True,
include_names: Sequence[str] | None = None,
include_types: Sequence[str] | None = None,
include_tags: Sequence[str] | None = None,
exclude_names: Sequence[str] | None = None,
exclude_types: Sequence[str] | None = None,
exclude_tags: Sequence[str] | None = None,
**kwargs: Any,
) -> AsyncIterator[RunLogPatch] | AsyncIterator[RunLog]
Stream all output from a Runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of Jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run.
The Jsonpatch ops can be applied in order to construct state.
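A minimal usage sketch:

```python
async for log_patch in model.astream_log("Hello, Claude"):
    # Each RunLogPatch holds Jsonpatch ops describing how the run state changed.
    print(log_patch)
```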
| PARAMETER | DESCRIPTION |
|---|---|
input
|
The input to the
TYPE:
|
config
|
The config to use for the
TYPE:
|
diff
|
Whether to yield diffs between each step or the current state.
TYPE:
|
with_streamed_output_list
|
Whether to yield the
TYPE:
|
include_names
|
Only include logs with these names. |
include_types
|
Only include logs with these types. |
include_tags
|
Only include logs with these tags. |
exclude_names
|
Exclude logs with these names. |
exclude_types
|
Exclude logs with these types. |
exclude_tags
|
Exclude logs with these tags. |
**kwargs
|
Additional keyword arguments to pass to the
TYPE:
|
| YIELDS | DESCRIPTION |
|---|---|
AsyncIterator[RunLogPatch] | AsyncIterator[RunLog]
|
A |
astream_events
async
¶
astream_events(
input: Any,
config: RunnableConfig | None = None,
*,
version: Literal["v1", "v2"] = "v2",
include_names: Sequence[str] | None = None,
include_types: Sequence[str] | None = None,
include_tags: Sequence[str] | None = None,
exclude_names: Sequence[str] | None = None,
exclude_types: Sequence[str] | None = None,
exclude_tags: Sequence[str] | None = None,
**kwargs: Any,
) -> AsyncIterator[StreamEvent]
Generate a stream of events.
Use to create an iterator over StreamEvent that provide real-time information
about the progress of the Runnable, including StreamEvent from intermediate
results.
A StreamEvent is a dictionary with the following schema:
- `event`: Event names are of the format: `on_[runnable_type]_(start|stream|end)`.
- `name`: The name of the Runnable that generated the event.
- `run_id`: Randomly generated ID associated with the given execution of the Runnable that emitted the event. A child Runnable that gets invoked as part of the execution of a parent Runnable is assigned its own unique ID.
- `parent_ids`: The IDs of the parent runnables that generated the event. The root Runnable will have an empty list. The order of the parent IDs is from the root to the immediate parent. Only available for v2 version of the API. The v1 version of the API will return an empty list.
- `tags`: The tags of the Runnable that generated the event.
- `metadata`: The metadata of the Runnable that generated the event.
- `data`: The data associated with the event. The contents of this field depend on the type of event. See the table below for more details.
Below is a table that illustrates some events that might be emitted by various chains. Metadata fields have been omitted from the table for brevity. Chain definitions have been included after the table.
Note
This reference table is for the v2 version of the schema.
| event | name | chunk | input | output |
|---|---|---|---|---|
| on_chat_model_start | '[model name]' | | {"messages": [[SystemMessage, HumanMessage]]} | |
| on_chat_model_stream | '[model name]' | AIMessageChunk(content="hello") | | |
| on_chat_model_end | '[model name]' | | {"messages": [[SystemMessage, HumanMessage]]} | AIMessageChunk(content="hello world") |
| on_llm_start | '[model name]' | | {'input': 'hello'} | |
| on_llm_stream | '[model name]' | 'Hello' | | |
| on_llm_end | '[model name]' | | 'Hello human!' | |
| on_chain_start | 'format_docs' | | | |
| on_chain_stream | 'format_docs' | 'hello world!, goodbye world!' | | |
| on_chain_end | 'format_docs' | | [Document(...)] | 'hello world!, goodbye world!' |
| on_tool_start | 'some_tool' | | {"x": 1, "y": "2"} | |
| on_tool_end | 'some_tool' | | | {"x": 1, "y": "2"} |
| on_retriever_start | '[retriever name]' | | {"query": "hello"} | |
| on_retriever_end | '[retriever name]' | | {"query": "hello"} | [Document(...), ..] |
| on_prompt_start | '[template_name]' | | {"question": "hello"} | |
| on_prompt_end | '[template_name]' | | {"question": "hello"} | ChatPromptValue(messages: [SystemMessage, ...]) |
In addition to the standard events, users can also dispatch custom events (see example below).
Custom events will only be surfaced in the v2 version of the API!
A custom event has the following format:
| Attribute | Type | Description |
|---|---|---|
| name | str | A user defined name for the event. |
| data | Any | The data associated with the event. This can be anything, though we suggest making it JSON serializable. |
Here are declarations associated with the standard events shown above:
format_docs:
def format_docs(docs: list[Document]) -> str:
'''Format the docs.'''
return ", ".join([doc.page_content for doc in docs])
format_docs = RunnableLambda(format_docs)
some_tool:
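A minimal some_tool sketch consistent with the {"x": 1, "y": "2"} inputs shown in the table above (assumed for illustration):

```python
from langchain_core.tools import tool

@tool
def some_tool(x: int, y: str) -> dict:
    """Echo the tool arguments."""
    return {"x": x, "y": y}
```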
prompt:
template = ChatPromptTemplate.from_messages(
[
("system", "You are Cat Agent 007"),
("human", "{question}"),
]
).with_config({"run_name": "my_template", "tags": ["my_template"]})
For instance:
from langchain_core.runnables import RunnableLambda
async def reverse(s: str) -> str:
return s[::-1]
chain = RunnableLambda(func=reverse)
events = [event async for event in chain.astream_events("hello", version="v2")]
# Will produce the following events
# (run_id, and parent_ids has been omitted for brevity):
[
{
"data": {"input": "hello"},
"event": "on_chain_start",
"metadata": {},
"name": "reverse",
"tags": [],
},
{
"data": {"chunk": "olleh"},
"event": "on_chain_stream",
"metadata": {},
"name": "reverse",
"tags": [],
},
{
"data": {"output": "olleh"},
"event": "on_chain_end",
"metadata": {},
"name": "reverse",
"tags": [],
},
]
from langchain_core.callbacks.manager import (
adispatch_custom_event,
)
from langchain_core.runnables import RunnableLambda, RunnableConfig
import asyncio
async def slow_thing(some_input: str, config: RunnableConfig) -> str:
"""Do something that takes a long time."""
await asyncio.sleep(1) # Placeholder for some slow operation
await adispatch_custom_event(
"progress_event",
{"message": "Finished step 1 of 3"},
config=config # Must be included for python < 3.10
)
await asyncio.sleep(1) # Placeholder for some slow operation
await adispatch_custom_event(
"progress_event",
{"message": "Finished step 2 of 3"},
config=config # Must be included for python < 3.10
)
await asyncio.sleep(1) # Placeholder for some slow operation
return "Done"
slow_thing = RunnableLambda(slow_thing)
async for event in slow_thing.astream_events("some_input", version="v2"):
print(event)
| PARAMETER | DESCRIPTION |
|---|---|
| input | The input to the Runnable. |
| config | The config to use for the Runnable. |
| version | The version of the schema to use, either 'v2' or 'v1'. Users should use 'v2'; 'v1' is for backwards compatibility. Custom events are only surfaced in 'v2'. |
| include_names | Only include events from runnables with matching names. |
| include_types | Only include events from runnables with matching types. |
| include_tags | Only include events from runnables with matching tags. |
| exclude_names | Exclude events from runnables with matching names. |
| exclude_types | Exclude events from runnables with matching types. |
| exclude_tags | Exclude events from runnables with matching tags. |
| **kwargs | Additional keyword arguments to pass to the Runnable. These will be passed to astream, as the implementation of astream_events is built on top of astream. |
| YIELDS | DESCRIPTION |
|---|---|
| AsyncIterator[StreamEvent] | An async stream of StreamEvent objects. |
| RAISES | DESCRIPTION |
|---|---|
| NotImplementedError | If the version is not 'v1' or 'v2'. |
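A minimal sketch of the filtering parameters, reusing the reverse lambda from the example above (include_names keeps only events from runs whose name matches):
from langchain_core.runnables import RunnableLambda

async def reverse(s: str) -> str:
    return s[::-1]

chain = RunnableLambda(func=reverse)

# Keep only events emitted by runs named "reverse".
events = [
    event
    async for event in chain.astream_events(
        "hello", version="v2", include_names=["reverse"]
    )
]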
transform
¶
transform(
input: Iterator[Input], config: RunnableConfig | None = None, **kwargs: Any | None
) -> Iterator[Output]
Transform inputs to outputs.
Default implementation of transform, which buffers input and calls stream.
Subclasses must override this method if they can start producing output while input is still being generated.
| PARAMETER | DESCRIPTION |
|---|---|
| input | An iterator of inputs to the Runnable. |
| config | The config to use for the Runnable. |
| **kwargs | Additional keyword arguments to pass to the Runnable. |
| YIELDS | DESCRIPTION |
|---|---|
| Output | The output of the Runnable. |
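No example is shown for this method; as a hedged sketch, transform buffers the iterator into a single input and then streams the result (the upper-casing lambda is illustrative):
from langchain_core.runnables import RunnableLambda

runnable = RunnableLambda(lambda text: text.upper())

# The string chunks are buffered into "hello world" before the lambda runs.
for chunk in runnable.transform(iter(["hello ", "world"])):
    print(chunk)  # HELLO WORLD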
atransform
async
¶
atransform(
input: AsyncIterator[Input],
config: RunnableConfig | None = None,
**kwargs: Any | None,
) -> AsyncIterator[Output]
Transform inputs to outputs.
Default implementation of atransform, which buffers input and calls astream.
Subclasses must override this method if they can start producing output while input is still being generated.
| PARAMETER | DESCRIPTION |
|---|---|
| input | An async iterator of inputs to the Runnable. |
| config | The config to use for the Runnable. |
| **kwargs | Additional keyword arguments to pass to the Runnable. |
| YIELDS | DESCRIPTION |
|---|---|
| AsyncIterator[Output] | The output of the Runnable. |
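The async counterpart, again as an illustrative sketch with an assumed async generator as input:
import asyncio
from langchain_core.runnables import RunnableLambda

runnable = RunnableLambda(lambda text: text.upper())

async def chunks():
    yield "hello "
    yield "world"

async def main():
    async for chunk in runnable.atransform(chunks()):
        print(chunk)  # HELLO WORLD

asyncio.run(main())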
bind
¶
Bind arguments to a Runnable, returning a new Runnable.
Useful when a Runnable in a chain requires an argument that is not
in the output of the previous Runnable or included in the user input.
| PARAMETER | DESCRIPTION |
|---|---|
| **kwargs | The arguments to bind to the Runnable. |
| RETURNS | DESCRIPTION |
|---|---|
| Runnable[Input, Output] | A new Runnable with the arguments bound. |
Example
from langchain_ollama import ChatOllama
from langchain_core.output_parsers import StrOutputParser
model = ChatOllama(model="llama3.1")
# Without bind
chain = model | StrOutputParser()
chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two three four five.'
# With bind
chain = model.bind(stop=["three"]) | StrOutputParser()
chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two'
with_config
¶
with_config(
config: RunnableConfig | None = None, **kwargs: Any
) -> Runnable[Input, Output]
Bind config to a Runnable, returning a new Runnable.
| PARAMETER | DESCRIPTION |
|---|---|
| config | The config to bind to the Runnable. |
| **kwargs | Additional keyword arguments to pass to the Runnable. |
| RETURNS | DESCRIPTION |
|---|---|
| Runnable[Input, Output] | A new Runnable with the config bound. |
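No example is given for this method; a minimal sketch that binds a run name and tags to a lambda (the names are illustrative):
from langchain_core.runnables import RunnableLambda

runnable = RunnableLambda(lambda x: x + 1).with_config(
    {"run_name": "add_one", "tags": ["math"]}
)
print(runnable.invoke(1))  # 2, traced under the bound run name and tags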
with_listeners
¶
with_listeners(
*,
on_start: Callable[[Run], None]
| Callable[[Run, RunnableConfig], None]
| None = None,
on_end: Callable[[Run], None] | Callable[[Run, RunnableConfig], None] | None = None,
on_error: Callable[[Run], None]
| Callable[[Run, RunnableConfig], None]
| None = None,
) -> Runnable[Input, Output]
Bind lifecycle listeners to a Runnable, returning a new Runnable.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and
any tags or metadata added to the run.
| PARAMETER | DESCRIPTION |
|---|---|
| on_start | Called before the Runnable starts running, with the Run object. |
| on_end | Called after the Runnable finishes running, with the Run object. |
| on_error | Called if the Runnable throws an error, with the Run object. |
| RETURNS | DESCRIPTION |
|---|---|
| Runnable[Input, Output] | A new Runnable with the listeners bound. |
Example
from langchain_core.runnables import RunnableLambda
from langchain_core.tracers.schemas import Run
import time
def test_runnable(time_to_sleep: int):
    time.sleep(time_to_sleep)

def fn_start(run_obj: Run):
    print("start_time:", run_obj.start_time)

def fn_end(run_obj: Run):
    print("end_time:", run_obj.end_time)

chain = RunnableLambda(test_runnable).with_listeners(
    on_start=fn_start, on_end=fn_end
)
chain.invoke(2)
with_alisteners
¶
with_alisteners(
*,
on_start: AsyncListener | None = None,
on_end: AsyncListener | None = None,
on_error: AsyncListener | None = None,
) -> Runnable[Input, Output]
Bind async lifecycle listeners to a Runnable.
Returns a new Runnable.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and
any tags or metadata added to the run.
| PARAMETER | DESCRIPTION |
|---|---|
| on_start | Called asynchronously before the Runnable starts running, with the Run object. |
| on_end | Called asynchronously after the Runnable finishes running, with the Run object. |
| on_error | Called asynchronously if the Runnable throws an error, with the Run object. |
| RETURNS | DESCRIPTION |
|---|---|
| Runnable[Input, Output] | A new Runnable with the listeners bound. |
Example
from langchain_core.runnables import RunnableLambda, Runnable
from datetime import datetime, timezone
import time
import asyncio
def format_t(timestamp: float) -> str:
    return datetime.fromtimestamp(timestamp, tz=timezone.utc).isoformat()

async def test_runnable(time_to_sleep: int):
    print(f"Runnable[{time_to_sleep}s]: starts at {format_t(time.time())}")
    await asyncio.sleep(time_to_sleep)
    print(f"Runnable[{time_to_sleep}s]: ends at {format_t(time.time())}")

async def fn_start(run_obj: Runnable):
    print(f"on start callback starts at {format_t(time.time())}")
    await asyncio.sleep(3)
    print(f"on start callback ends at {format_t(time.time())}")

async def fn_end(run_obj: Runnable):
    print(f"on end callback starts at {format_t(time.time())}")
    await asyncio.sleep(2)
    print(f"on end callback ends at {format_t(time.time())}")

runnable = RunnableLambda(test_runnable).with_alisteners(
    on_start=fn_start,
    on_end=fn_end
)

async def concurrent_runs():
    await asyncio.gather(runnable.ainvoke(2), runnable.ainvoke(3))

asyncio.run(concurrent_runs())
Result:
on start callback starts at 2025-03-01T07:05:22.875378+00:00
on start callback starts at 2025-03-01T07:05:22.875495+00:00
on start callback ends at 2025-03-01T07:05:25.878862+00:00
on start callback ends at 2025-03-01T07:05:25.878947+00:00
Runnable[2s]: starts at 2025-03-01T07:05:25.879392+00:00
Runnable[3s]: starts at 2025-03-01T07:05:25.879804+00:00
Runnable[2s]: ends at 2025-03-01T07:05:27.881998+00:00
on end callback starts at 2025-03-01T07:05:27.882360+00:00
Runnable[3s]: ends at 2025-03-01T07:05:28.881737+00:00
on end callback starts at 2025-03-01T07:05:28.882428+00:00
on end callback ends at 2025-03-01T07:05:29.883893+00:00
on end callback ends at 2025-03-01T07:05:30.884831+00:00
with_types
¶
with_types(
*, input_type: type[Input] | None = None, output_type: type[Output] | None = None
) -> Runnable[Input, Output]
Bind input and output types to a Runnable, returning a new Runnable.
| PARAMETER | DESCRIPTION |
|---|---|
| input_type | The input type to bind to the Runnable. |
| output_type | The output type to bind to the Runnable. |
| RETURNS | DESCRIPTION |
|---|---|
| Runnable[Input, Output] | A new Runnable with the types bound. |
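No example is given for this method; a minimal sketch (assuming a pydantic-v2 langchain-core, where the generated schema classes expose model_json_schema()):
from langchain_core.runnables import RunnableLambda

runnable = RunnableLambda(lambda x: str(x)).with_types(input_type=int, output_type=str)

# The bound types are reflected in the runnable's input and output schemas.
print(runnable.input_schema.model_json_schema())
print(runnable.output_schema.model_json_schema())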
with_retry
¶
with_retry(
*,
retry_if_exception_type: tuple[type[BaseException], ...] = (Exception,),
wait_exponential_jitter: bool = True,
exponential_jitter_params: ExponentialJitterParams | None = None,
stop_after_attempt: int = 3,
) -> Runnable[Input, Output]
Create a new Runnable that retries the original Runnable on exceptions.
| PARAMETER | DESCRIPTION |
|---|---|
| retry_if_exception_type | A tuple of exception types to retry on. |
| wait_exponential_jitter | Whether to add jitter to the wait time between retries. |
| exponential_jitter_params | Parameters for tenacity.wait_exponential_jitter, namely initial, max, exp_base, and jitter (all float values). |
| stop_after_attempt | The maximum number of attempts to make before giving up. |
| RETURNS | DESCRIPTION |
|---|---|
| Runnable[Input, Output] | A new Runnable that retries the original Runnable on exceptions. |
Example
from langchain_core.runnables import RunnableLambda
count = 0

def _lambda(x: int) -> None:
    global count
    count = count + 1
    if x == 1:
        raise ValueError("x is 1")
    else:
        pass

runnable = RunnableLambda(_lambda)

try:
    runnable.with_retry(
        stop_after_attempt=2,
        retry_if_exception_type=(ValueError,),
    ).invoke(1)
except ValueError:
    pass

assert count == 2
map
¶
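On the base Runnable, map returns a new Runnable that calls invoke on each element of a list input; a minimal sketch:
from langchain_core.runnables import RunnableLambda

runnable = RunnableLambda(lambda x: x + 1)
print(runnable.map().invoke([1, 2, 3]))  # [2, 3, 4]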
with_fallbacks
¶
with_fallbacks(
fallbacks: Sequence[Runnable[Input, Output]],
*,
exceptions_to_handle: tuple[type[BaseException], ...] = (Exception,),
exception_key: str | None = None,
) -> RunnableWithFallbacks[Input, Output]
Add fallbacks to a Runnable, returning a new Runnable.
The new Runnable will try the original Runnable, and then each fallback
in order, upon failures.
| PARAMETER | DESCRIPTION |
|---|---|
| fallbacks | A sequence of runnables to try if the original Runnable fails. |
| exceptions_to_handle | A tuple of exception types to handle. |
| exception_key | If a string is specified, handled exceptions will be passed to the fallbacks as part of the input under the specified key. If None, exceptions will not be passed to the fallbacks. If used, the base Runnable and its fallbacks must accept a dictionary as input. |
| RETURNS | DESCRIPTION |
|---|---|
| RunnableWithFallbacks[Input, Output] | A new Runnable that will try the original Runnable, and then each fallback in order, upon failures. |
Example
from typing import Iterator
from langchain_core.runnables import RunnableGenerator
def _generate_immediate_error(input: Iterator) -> Iterator[str]:
    raise ValueError()
    yield ""

def _generate(input: Iterator) -> Iterator[str]:
    yield from "foo bar"

runnable = RunnableGenerator(_generate_immediate_error).with_fallbacks(
    [RunnableGenerator(_generate)]
)
print("".join(runnable.stream({})))  # foo bar
as_tool
¶
as_tool(
args_schema: type[BaseModel] | None = None,
*,
name: str | None = None,
description: str | None = None,
arg_types: dict[str, type] | None = None,
) -> BaseTool
Create a BaseTool from a Runnable.
as_tool will instantiate a BaseTool with a name, description, and
args_schema from a Runnable. Where possible, schemas are inferred
from runnable.get_input_schema. Alternatively (e.g., if the
Runnable takes a dict as input and the specific dict keys are not typed),
the schema can be specified directly with args_schema. You can also
pass arg_types to just specify the required arguments and their types.
| PARAMETER | DESCRIPTION |
|---|---|
| args_schema | The schema for the tool. |
| name | The name of the tool. |
| description | The description of the tool. |
| arg_types | A dictionary of argument names to types. |
| RETURNS | DESCRIPTION |
|---|---|
| BaseTool | A BaseTool instance. |
Typed dict input:
from typing_extensions import TypedDict
from langchain_core.runnables import RunnableLambda
class Args(TypedDict):
    a: int
    b: list[int]

def f(x: Args) -> str:
    return str(x["a"] * max(x["b"]))

runnable = RunnableLambda(f)
as_tool = runnable.as_tool()
as_tool.invoke({"a": 3, "b": [1, 2]})
dict input, specifying schema via args_schema:
from typing import Any
from pydantic import BaseModel, Field
from langchain_core.runnables import RunnableLambda
def f(x: dict[str, Any]) -> str:
    return str(x["a"] * max(x["b"]))

class FSchema(BaseModel):
    """Apply a function to an integer and list of integers."""

    a: int = Field(..., description="Integer")
    b: list[int] = Field(..., description="List of ints")

runnable = RunnableLambda(f)
as_tool = runnable.as_tool(FSchema)
as_tool.invoke({"a": 3, "b": [1, 2]})
dict input, specifying schema via arg_types:
from typing import Any
from langchain_core.runnables import RunnableLambda
def f(x: dict[str, Any]) -> str:
    return str(x["a"] * max(x["b"]))

runnable = RunnableLambda(f)
as_tool = runnable.as_tool(arg_types={"a": int, "b": list[int]})
as_tool.invoke({"a": 3, "b": [1, 2]})
String input:
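A minimal sketch for string input (the two-step chain below is illustrative):
from langchain_core.runnables import RunnableLambda

def f(x: str) -> str:
    return x + "a"

def g(x: str) -> str:
    return x + "z"

runnable = RunnableLambda(f) | g
as_tool = runnable.as_tool()
as_tool.invoke("b")  # 'baz'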
lc_id
classmethod
¶
Return a unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path to the object.
For example, for the class langchain.llms.openai.OpenAI, the id is
["langchain", "llms", "openai", "OpenAI"].
to_json
¶
Serialize the Runnable to JSON.
| RETURNS | DESCRIPTION |
|---|---|
| SerializedConstructor \| SerializedNotImplemented | A JSON-serializable representation of the Runnable. |
to_json_not_implemented
¶
Serialize a "not implemented" object.
| RETURNS | DESCRIPTION |
|---|---|
| SerializedNotImplemented | A SerializedNotImplemented instance. |
configurable_fields
¶
configurable_fields(
**kwargs: AnyConfigurableField,
) -> RunnableSerializable[Input, Output]
Configure particular Runnable fields at runtime.
| PARAMETER | DESCRIPTION |
|---|---|
| **kwargs | A dictionary of ConfigurableField instances to configure. |
| RAISES | DESCRIPTION |
|---|---|
| ValueError | If a configuration key is not found in the Runnable. |
| RETURNS | DESCRIPTION |
|---|---|
| RunnableSerializable[Input, Output] | A new Runnable with the fields configured. |
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI
model = ChatOpenAI(max_tokens=20).configurable_fields(
    max_tokens=ConfigurableField(
        id="output_token_number",
        name="Max tokens in the output",
        description="The maximum number of tokens in the output",
    )
)

# max_tokens = 20
print("max_tokens_20: ", model.invoke("tell me something about chess").content)

# max_tokens = 200
print(
    "max_tokens_200: ",
    model.with_config(configurable={"output_token_number": 200})
    .invoke("tell me something about chess")
    .content,
)
configurable_alternatives
¶
configurable_alternatives(
which: ConfigurableField,
*,
default_key: str = "default",
prefix_keys: bool = False,
**kwargs: Runnable[Input, Output] | Callable[[], Runnable[Input, Output]],
) -> RunnableSerializable[Input, Output]
Configure alternatives for Runnable objects that can be set at runtime.
| PARAMETER | DESCRIPTION |
|---|---|
| which | The ConfigurableField instance that will be used to select the alternative. |
| default_key | The default key to use if no alternative is selected. |
| prefix_keys | Whether to prefix the keys with the ConfigurableField id. |
| **kwargs | A dictionary of keys to Runnable instances or callables that return Runnable instances. |
| RETURNS | DESCRIPTION |
|---|---|
| RunnableSerializable[Input, Output] | A new Runnable with the alternatives configured. |
from langchain_anthropic import ChatAnthropic
from langchain_core.runnables.utils import ConfigurableField
from langchain_openai import ChatOpenAI
model = ChatAnthropic(
    model_name="claude-3-7-sonnet-20250219"
).configurable_alternatives(
    ConfigurableField(id="llm"),
    default_key="anthropic",
    openai=ChatOpenAI(),
)

# uses the default model ChatAnthropic
print(model.invoke("which organization created you?").content)

# uses ChatOpenAI
print(
    model.with_config(configurable={"llm": "openai"})
    .invoke("which organization created you?")
    .content
)
set_verbose
¶
generate_prompt
¶
generate_prompt(
prompts: list[PromptValue],
stop: list[str] | None = None,
callbacks: Callbacks = None,
**kwargs: Any,
) -> LLMResult
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you:
- want to take advantage of batched calls,
- need more output from the model than just the top generated value,
- are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
| PARAMETER | DESCRIPTION |
|---|---|
| prompts | List of PromptValue objects. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessage objects for chat models). |
| stop | Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. |
| callbacks | Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. |
| **kwargs | Arbitrary additional keyword arguments. These are usually passed to the model provider API call. |
| RETURNS | DESCRIPTION |
|---|---|
| LLMResult | An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. |
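A minimal usage sketch (the prompt text and model name are illustrative):
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate

model = ChatAnthropic(model="claude-3-7-sonnet-20250219")
prompt = ChatPromptTemplate.from_messages([("human", "Say {word} in French.")])

# Each PromptValue becomes one batched generation request.
result = model.generate_prompt(
    [prompt.format_prompt(word="hello"), prompt.format_prompt(word="goodbye")]
)
print(result.generations[0][0].text)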
agenerate_prompt
async
¶
agenerate_prompt(
prompts: list[PromptValue],
stop: list[str] | None = None,
callbacks: Callbacks = None,
**kwargs: Any,
) -> LLMResult
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you:
- want to take advantage of batched calls,
- need more output from the model than just the top generated value,
- are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
| PARAMETER | DESCRIPTION |
|---|---|
| prompts | List of PromptValue objects. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessage objects for chat models). |
| stop | Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. |
| callbacks | Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. |
| **kwargs | Arbitrary additional keyword arguments. These are usually passed to the model provider API call. |
| RETURNS | DESCRIPTION |
|---|---|
| LLMResult | An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. |
get_token_ids
¶
get_num_tokens
¶
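As a rough usage sketch (get_num_tokens takes a string and returns an integer count; the model name is illustrative):
from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(model="claude-3-7-sonnet-20250219")
print(model.get_num_tokens("Hello, world!"))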
generate
¶
generate(
messages: list[list[BaseMessage]],
stop: list[str] | None = None,
callbacks: Callbacks = None,
*,
tags: list[str] | None = None,
metadata: dict[str, Any] | None = None,
run_name: str | None = None,
run_id: UUID | None = None,
**kwargs: Any,
) -> LLMResult
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you:
- want to take advantage of batched calls,
- need more output from the model than just the top generated value,
- are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
| PARAMETER | DESCRIPTION |
|---|---|
| messages | List of list of messages. |
| stop | Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. |
| callbacks | Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. |
| tags | The tags to apply. |
| metadata | The metadata to apply. |
| run_name | The name of the run. |
| run_id | The ID of the run. |
| **kwargs | Arbitrary additional keyword arguments. These are usually passed to the model provider API call. |
| RETURNS | DESCRIPTION |
|---|---|
| LLMResult | An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. |
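A minimal usage sketch (message contents and tags are illustrative):
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage

model = ChatAnthropic(model="claude-3-7-sonnet-20250219")

# One inner list of messages per prompt; results come back in the same order.
result = model.generate(
    [
        [HumanMessage(content="Translate 'cat' to French.")],
        [HumanMessage(content="Translate 'dog' to French.")],
    ],
    tags=["translation"],
)
for generations in result.generations:
    print(generations[0].text)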
agenerate
async
¶
agenerate(
messages: list[list[BaseMessage]],
stop: list[str] | None = None,
callbacks: Callbacks = None,
*,
tags: list[str] | None = None,
metadata: dict[str, Any] | None = None,
run_name: str | None = None,
run_id: UUID | None = None,
**kwargs: Any,
) -> LLMResult
Asynchronously pass a sequence of prompts to a model and return generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you:
- want to take advantage of batched calls,
- need more output from the model than just the top generated value,
- are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
| PARAMETER | DESCRIPTION |
|---|---|
| messages | List of list of messages. |
| stop | Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. |
| callbacks | Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. |
| tags | The tags to apply. |
| metadata | The metadata to apply. |
| run_name | The name of the run. |
| run_id | The ID of the run. |
| **kwargs | Arbitrary additional keyword arguments. These are usually passed to the model provider API call. |
| RETURNS | DESCRIPTION |
|---|---|
| LLMResult | An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. |
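The async counterpart, as a brief sketch (message content and model name are illustrative):
import asyncio
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage

model = ChatAnthropic(model="claude-3-7-sonnet-20250219")

async def main():
    result = await model.agenerate([[HumanMessage(content="Translate 'bird' to French.")]])
    print(result.generations[0][0].text)

asyncio.run(main())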