langchain-nvidia-ai-endpoints¶
langchain_nvidia_ai_endpoints
¶
LangChain NVIDIA AI Foundation Model Playground Integration
Note
You can import langchain_nvidia instead.
This comprehensive module integrates NVIDIA's state-of-the-art AI Foundation Models, featuring advanced models for conversational AI and semantic embeddings, into the LangChain framework. It provides robust classes for seamless interaction with NVIDIA's AI models, particularly tailored for enriching conversational experiences and enhancing semantic understanding in various applications.
Features
- Chat Models (`ChatNVIDIA`): This class serves as the primary interface for interacting with NVIDIA's Foundation chat models. Users can effortlessly utilize NVIDIA's advanced models such as Mistral to engage in rich, context-aware conversations, applicable across diverse domains from customer support to interactive storytelling.
- Semantic Embeddings (`NVIDIAEmbeddings`): The module offers capabilities to generate sophisticated embeddings using NVIDIA's AI models. These embeddings are instrumental for tasks like semantic analysis, text similarity assessments, and contextual understanding, significantly enhancing the depth of NLP applications.
Installation
Install this module easily using pip:
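```
pip install langchain-nvidia-ai-endpoints
```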
Utilizing Chat Models
After setting up the environment, interact with NVIDIA AI Foundation models:

```python
from langchain_nvidia_ai_endpoints import ChatNVIDIA

ai_chat_model = ChatNVIDIA(model="meta/llama2-70b")
response = ai_chat_model.invoke("Tell me about the LangChain integration.")
```
Generating Semantic Embeddings
Use NVIDIA's models to create embeddings, useful in various NLP tasks:

```python
from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings

embed_model = NVIDIAEmbeddings(model="nvolveqa_40k")
embedding_output = embed_model.embed_query("Exploring AI capabilities.")
```
| FUNCTION | DESCRIPTION |
|---|---|
| `register_model` | Register a model as a known model. |
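For example, a sketch of registering a locally served model as a known model (the field values here are illustrative assumptions):

```python
from langchain_nvidia_ai_endpoints import ChatNVIDIA, Model, register_model

# Illustrative values: id, model_type, client, and endpoint are assumptions
register_model(
    Model(
        id="my-org/custom-chat-model",
        model_type="chat",
        client="ChatNVIDIA",
        endpoint="http://localhost:8000/v1/chat/completions",
    )
)

llm = ChatNVIDIA(model="my-org/custom-chat-model")
```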
Model
¶
Bases: BaseModel
Model information.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| `id` | Unique identifier for the model, passed as the `model` parameter for requests. |
| `model_type` | API type. |
| `client` | Client name. |
| `endpoint` | Custom endpoint for the model. |
| `aliases` | List of aliases for the model. |
| `supports_tools` | Whether the model supports tool calling. TYPE: `bool` |
| `supports_structured_output` | Whether the model supports structured output. TYPE: `bool` |
| `supports_thinking` | Whether the model supports thinking mode. TYPE: `bool` |
All aliases are deprecated and will trigger a warning when used.
ChatNVIDIA
¶
Bases: BaseChatModel
NVIDIA chat model.
Example
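A minimal sketch, assuming `NVIDIA_API_KEY` is set in the environment (the model name is illustrative):

```python
from langchain_nvidia_ai_endpoints import ChatNVIDIA

llm = ChatNVIDIA(model="meta/llama2-70b")  # illustrative model name
result = llm.invoke("Write a ballad about LangChain.")
print(result.content)
```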
| METHOD | DESCRIPTION |
|---|---|
| `get_name` | Get the name of the `Runnable`. |
| `get_input_schema` | Get a Pydantic model that can be used to validate input to the `Runnable`. |
| `get_input_jsonschema` | Get a JSON schema that represents the input to the `Runnable`. |
| `get_output_schema` | Get a Pydantic model that can be used to validate output to the `Runnable`. |
| `get_output_jsonschema` | Get a JSON schema that represents the output of the `Runnable`. |
| `config_schema` | The type of config this `Runnable` accepts, specified as a Pydantic model. |
| `get_config_jsonschema` | Get a JSON schema that represents the config of the `Runnable`. |
| `get_graph` | Return a graph representation of this `Runnable`. |
| `get_prompts` | Return a list of prompts used by this `Runnable`. |
| `__or__` | Runnable "or" operator. |
| `__ror__` | Runnable "reverse-or" operator. |
| `pipe` | Pipe `Runnable` objects. |
| `pick` | Pick keys from the output `dict` of this `Runnable`. |
| `assign` | Assigns new fields to the `dict` output of this `Runnable`. |
| `invoke` | Transform a single input into an output. |
| `ainvoke` | Transform a single input into an output. |
| `batch` | Default implementation runs `invoke` in parallel using a thread pool executor. |
| `batch_as_completed` | Run `invoke` in parallel on a list of inputs, yielding results as they complete. |
| `abatch` | Default implementation runs `ainvoke` in parallel using `asyncio.gather`. |
| `abatch_as_completed` | Run `ainvoke` in parallel on a list of inputs, yielding results as they complete. |
| `stream` | Default implementation of `stream`, which calls `invoke`. |
| `astream` | Default implementation of `astream`, which calls `ainvoke`. |
| `astream_log` | Stream all output from a `Runnable`, as reported to the callback system. |
| `astream_events` | Generate a stream of events. |
| `transform` | Transform inputs to outputs. |
| `atransform` | Transform inputs to outputs. |
| `bind` | Bind arguments to a `Runnable`, returning a new `Runnable`. |
| `with_config` | Bind config to a `Runnable`, returning a new `Runnable`. |
| `with_listeners` | Bind lifecycle listeners to a `Runnable`, returning a new `Runnable`. |
| `with_alisteners` | Bind async lifecycle listeners to a `Runnable`, returning a new `Runnable`. |
| `with_types` | Bind input and output types to a `Runnable`, returning a new `Runnable`. |
| `with_retry` | Create a new `Runnable` that retries the original `Runnable` on exceptions. |
| `map` | Return a new `Runnable` that maps a list of inputs to a list of outputs. |
| `with_fallbacks` | Add fallbacks to a `Runnable`, returning a new `Runnable`. |
| `as_tool` | Create a `BaseTool` from a `Runnable`. |
| `is_lc_serializable` | Is this class serializable? |
| `get_lc_namespace` | Get the namespace of the LangChain object. |
| `lc_id` | Return a unique identifier for this class for serialization purposes. |
| `to_json` | Serialize the `Runnable` to JSON. |
| `to_json_not_implemented` | Serialize a "not implemented" object. |
| `configurable_fields` | Configure particular `Runnable` fields at runtime. |
| `configurable_alternatives` | Configure alternatives for `Runnable` objects that can be set at runtime. |
| `set_verbose` | If verbose is `None`, set it. |
| `generate_prompt` | Pass a sequence of prompts to the model and return model generations. |
| `agenerate_prompt` | Asynchronously pass a sequence of prompts and return model generations. |
| `get_token_ids` | Return the ordered IDs of the tokens in a text. |
| `get_num_tokens` | Get the number of tokens present in the text. |
| `get_num_tokens_from_messages` | Get the number of tokens in the messages. |
| `generate` | Pass a sequence of prompts to the model and return model generations. |
| `agenerate` | Asynchronously pass a sequence of prompts to a model and return generations. |
| `dict` | Return a dictionary of the LLM. |
| `__init__` | Create a new `ChatNVIDIA` chat model. |
| `get_available_models` | Get a list of available models that work with `ChatNVIDIA`. |
| `bind_tools` | Bind tools to the model. |
| `with_structured_output` | Bind a structured output schema to the model. |
| `with_thinking_mode` | Configure the model to use thinking mode. |
name
class-attribute
instance-attribute
¶
name: str | None = None
The name of the Runnable. Used for debugging and tracing.
input_schema
property
¶
The type of input this Runnable accepts specified as a Pydantic model.
output_schema
property
¶
Output schema.
The type of output this Runnable produces specified as a Pydantic model.
config_specs
property
¶
config_specs: list[ConfigurableFieldSpec]
List configurable fields for this Runnable.
lc_secrets
property
¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
lc_attributes
property
¶
lc_attributes: dict
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
Default is an empty dictionary.
cache
class-attribute
instance-attribute
¶
Whether to cache the response.
- If `True`, will use the global cache.
- If `False`, will not use a cache.
- If `None`, will use the global cache if it's set, otherwise no cache.
- If an instance of `BaseCache`, will use the provided cache.
Caching is not currently supported for streaming methods of models.
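For example, a sketch of opting into the global cache (the model name is illustrative):

```python
from langchain_core.caches import InMemoryCache
from langchain_core.globals import set_llm_cache

from langchain_nvidia_ai_endpoints import ChatNVIDIA

set_llm_cache(InMemoryCache())  # set the global cache

llm = ChatNVIDIA(model="meta/llama2-70b", cache=True)
llm.invoke("hi")  # first call hits the API
llm.invoke("hi")  # identical call is served from the cache
```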
verbose
class-attribute
instance-attribute
¶
Whether to print out response text.
callbacks
class-attribute
instance-attribute
¶
callbacks: Callbacks = Field(default=None, exclude=True)
Callbacks to add to the run trace.
tags
class-attribute
instance-attribute
¶
Tags to add to the run trace.
metadata
class-attribute
instance-attribute
¶
Metadata to add to the run trace.
custom_get_token_ids
class-attribute
instance-attribute
¶
Optional encoder to use for counting tokens.
rate_limiter
class-attribute
instance-attribute
¶
rate_limiter: BaseRateLimiter | None = Field(default=None, exclude=True)
An optional rate limiter to use for limiting the number of requests.
disable_streaming
class-attribute
instance-attribute
¶
Whether to disable streaming for this model.
If streaming is bypassed, then stream/astream/astream_events will
defer to invoke/ainvoke.
- If `True`, will always bypass the streaming case.
- If `'tool_calling'`, will bypass the streaming case only when the model is called with a `tools` keyword argument. In other words, LangChain will automatically switch to non-streaming behavior (`invoke`) only when the `tools` argument is provided. This offers the best of both worlds. See the sketch below.
- If `False` (default), will always use the streaming case if available.

The main reason for this flag is that code might be written using `stream`, and a user may want to swap out a given model for another model whose implementation does not properly support streaming.
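A sketch of the `'tool_calling'` setting (the tool and model name are illustrative):

```python
from langchain_core.tools import tool

from langchain_nvidia_ai_endpoints import ChatNVIDIA

@tool
def get_weather(city: str) -> str:
    """Look up the weather for a city."""  # illustrative tool
    return f"Sunny in {city}"

llm = ChatNVIDIA(model="meta/llama2-70b", disable_streaming="tool_calling")

# Streams normally: no tools are involved here
for chunk in llm.stream("hello"):
    print(chunk.content, end="")

# Falls back to invoke() internally because tools are provided
llm.bind_tools([get_weather]).invoke("What's the weather in Paris?")
```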
output_version
class-attribute
instance-attribute
¶
Version of AIMessage output format to store in message content.
AIMessage.content_blocks will lazily parse the contents of content into a
standard format. This flag can be used to additionally store the standard format
in message content, e.g., for serialization purposes.
Supported values:
- `'v0'`: provider-specific format in content (can lazily parse with `content_blocks`)
- `'v1'`: standardized format in content (consistent with `content_blocks`)
Partner packages (e.g.,
langchain-openai) can also use this
field to roll out new content formats in a backward-compatible way.
Added in langchain-core 1.0
profile
class-attribute
instance-attribute
¶
profile: ModelProfile | None = Field(default=None, exclude=True)
Profile detailing model capabilities.
Beta feature
This is a beta feature. The format of model profiles is subject to change.
If not specified, automatically loaded from the provider package on initialization if data is available.
Example profile data includes context window sizes, supported modalities, or support for tool calling, structured output, and other features.
Added in langchain-core 1.1
available_models
property
¶
Get a list of available models that work with ChatNVIDIA.
get_name
¶
get_input_schema
¶
get_input_schema(config: RunnableConfig | None = None) -> type[BaseModel]
Get a Pydantic model that can be used to validate input to the Runnable.
Runnable objects that leverage the `configurable_fields` and `configurable_alternatives` methods will have a dynamic input schema that depends on which configuration the `Runnable` is invoked with.

This method allows getting an input schema for a specific configuration.
| PARAMETER | DESCRIPTION |
|---|---|
| `config` | A config to use when generating the schema. TYPE: `RunnableConfig` or `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `type[BaseModel]` | A Pydantic model that can be used to validate input. |
get_input_jsonschema
¶
get_input_jsonschema(config: RunnableConfig | None = None) -> dict[str, Any]
Get a JSON schema that represents the input to the Runnable.
| PARAMETER | DESCRIPTION |
|---|---|
| `config` | A config to use when generating the schema. TYPE: `RunnableConfig` or `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `dict[str, Any]` | A JSON schema that represents the input to the `Runnable`. |
Example
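A short sketch (the `add_one` function is illustrative):

```python
from langchain_core.runnables import RunnableLambda

def add_one(x: int) -> int:
    return x + 1

runnable = RunnableLambda(add_one)
print(runnable.get_input_jsonschema())  # JSON schema describing the int input
```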
Added in langchain-core 0.3.0
get_output_schema
¶
get_output_schema(config: RunnableConfig | None = None) -> type[BaseModel]
Get a Pydantic model that can be used to validate output to the Runnable.
Runnable objects that leverage the `configurable_fields` and `configurable_alternatives` methods will have a dynamic output schema that depends on which configuration the `Runnable` is invoked with.

This method allows getting an output schema for a specific configuration.
| PARAMETER | DESCRIPTION |
|---|---|
| `config` | A config to use when generating the schema. TYPE: `RunnableConfig` or `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `type[BaseModel]` | A Pydantic model that can be used to validate output. |
get_output_jsonschema
¶
get_output_jsonschema(config: RunnableConfig | None = None) -> dict[str, Any]
Get a JSON schema that represents the output of the Runnable.
| PARAMETER | DESCRIPTION |
|---|---|
| `config` | A config to use when generating the schema. TYPE: `RunnableConfig` or `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `dict[str, Any]` | A JSON schema that represents the output of the `Runnable`. |
Example
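A short sketch (the `add_one` function is illustrative):

```python
from langchain_core.runnables import RunnableLambda

def add_one(x: int) -> int:
    return x + 1

runnable = RunnableLambda(add_one)
print(runnable.get_output_jsonschema())  # JSON schema describing the int output
```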
Added in langchain-core 0.3.0
config_schema
¶
The type of config this Runnable accepts specified as a Pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
| PARAMETER | DESCRIPTION |
|---|---|
| `include` | A list of fields to include in the config schema. |

| RETURNS | DESCRIPTION |
|---|---|
| `type[BaseModel]` | A Pydantic model that can be used to validate config. |
get_config_jsonschema
¶
get_graph
¶
get_graph(config: RunnableConfig | None = None) -> Graph
Return a graph representation of this Runnable.
get_prompts
¶
get_prompts(config: RunnableConfig | None = None) -> list[BasePromptTemplate]
Return a list of prompts used by this Runnable.
__or__
¶
__or__(
other: Runnable[Any, Other]
| Callable[[Iterator[Any]], Iterator[Other]]
| Callable[[AsyncIterator[Any]], AsyncIterator[Other]]
| Callable[[Any], Other]
| Mapping[str, Runnable[Any, Other] | Callable[[Any], Other] | Any],
) -> RunnableSerializable[Input, Other]
Runnable "or" operator.
Compose this Runnable with another object to create a
RunnableSequence.
| PARAMETER | DESCRIPTION |
|---|---|
| `other` | Another `Runnable` or an object coercible to one (see the signature above). |

| RETURNS | DESCRIPTION |
|---|---|
| `RunnableSerializable[Input, Other]` | A new `Runnable`. |
__ror__
¶
__ror__(
other: Runnable[Other, Any]
| Callable[[Iterator[Other]], Iterator[Any]]
| Callable[[AsyncIterator[Other]], AsyncIterator[Any]]
| Callable[[Other], Any]
| Mapping[str, Runnable[Other, Any] | Callable[[Other], Any] | Any],
) -> RunnableSerializable[Other, Output]
Runnable "reverse-or" operator.
Compose this Runnable with another object to create a
RunnableSequence.
| PARAMETER | DESCRIPTION |
|---|---|
| `other` | Another `Runnable` or an object coercible to one (see the signature above). |

| RETURNS | DESCRIPTION |
|---|---|
| `RunnableSerializable[Other, Output]` | A new `Runnable`. |
pipe
¶
pipe(
*others: Runnable[Any, Other] | Callable[[Any], Other], name: str | None = None
) -> RunnableSerializable[Input, Other]
Pipe Runnable objects.
Compose this Runnable with Runnable-like objects to make a
RunnableSequence.
Equivalent to RunnableSequence(self, *others) or self | others[0] | ...
Example

```python
from langchain_core.runnables import RunnableLambda

def add_one(x: int) -> int:
    return x + 1

def mul_two(x: int) -> int:
    return x * 2

runnable_1 = RunnableLambda(add_one)
runnable_2 = RunnableLambda(mul_two)
sequence = runnable_1.pipe(runnable_2)
# Or equivalently:
# sequence = runnable_1 | runnable_2
# sequence = RunnableSequence(first=runnable_1, last=runnable_2)

sequence.invoke(1)
await sequence.ainvoke(1)
# -> 4

sequence.batch([1, 2, 3])
await sequence.abatch([1, 2, 3])
# -> [4, 6, 8]
```
| PARAMETER | DESCRIPTION |
|---|---|
| `*others` | Other `Runnable` or `Runnable`-like objects to compose with this one. |
| `name` | An optional name for the resulting `RunnableSequence`. TYPE: `str` or `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `RunnableSerializable[Input, Other]` | A new `Runnable` sequence. |
pick
¶
Pick keys from the output dict of this Runnable.
Pick a single key:

```python
import json

from langchain_core.runnables import RunnableLambda, RunnableMap

as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)
chain = RunnableMap(str=as_str, json=as_json)

chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3]}

json_only_chain = chain.pick("json")
json_only_chain.invoke("[1, 2, 3]")
# -> [1, 2, 3]
```

Pick a list of keys:

```python
import json
from typing import Any

from langchain_core.runnables import RunnableLambda, RunnableMap

as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)

def as_bytes(x: Any) -> bytes:
    return bytes(x, "utf-8")

chain = RunnableMap(str=as_str, json=as_json, bytes=RunnableLambda(as_bytes))

chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3], "bytes": b"[1, 2, 3]"}

json_and_bytes_chain = chain.pick(["json", "bytes"])
json_and_bytes_chain.invoke("[1, 2, 3]")
# -> {"json": [1, 2, 3], "bytes": b"[1, 2, 3]"}
```
| PARAMETER | DESCRIPTION |
|---|---|
| `keys` | A key or list of keys to pick from the output dict. |

| RETURNS | DESCRIPTION |
|---|---|
| `RunnableSerializable[Any, Any]` | A new `Runnable` that picks the requested keys. |
assign
¶
assign(
**kwargs: Runnable[dict[str, Any], Any]
| Callable[[dict[str, Any]], Any]
| Mapping[str, Runnable[dict[str, Any], Any] | Callable[[dict[str, Any]], Any]],
) -> RunnableSerializable[Any, Any]
Assigns new fields to the dict output of this Runnable.
```python
from operator import itemgetter

from langchain_core.language_models.fake import FakeStreamingListLLM
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import SystemMessagePromptTemplate
from langchain_core.runnables import Runnable

prompt = (
    SystemMessagePromptTemplate.from_template("You are a nice assistant.")
    + "{question}"
)
model = FakeStreamingListLLM(responses=["foo-lish"])
chain: Runnable = prompt | model | {"str": StrOutputParser()}
chain_with_assign = chain.assign(hello=itemgetter("str") | model)

print(chain_with_assign.input_schema.model_json_schema())
# {'title': 'PromptInput', 'type': 'object', 'properties':
#  {'question': {'title': 'Question', 'type': 'string'}}}
print(chain_with_assign.output_schema.model_json_schema())
# {'title': 'RunnableSequenceOutput', 'type': 'object', 'properties':
#  {'str': {'title': 'Str', 'type': 'string'},
#   'hello': {'title': 'Hello', 'type': 'string'}}}
```
| PARAMETER | DESCRIPTION |
|---|---|
| `**kwargs` | A mapping of keys to `Runnable` or `Runnable`-like objects that will be invoked with the entire output `dict` of this `Runnable`. |

| RETURNS | DESCRIPTION |
|---|---|
| `RunnableSerializable[Any, Any]` | A new `Runnable` with the fields assigned. |
invoke
¶
invoke(
input: LanguageModelInput,
config: RunnableConfig | None = None,
*,
stop: list[str] | None = None,
**kwargs: Any,
) -> AIMessage
Transform a single input into an output.
| PARAMETER | DESCRIPTION |
|---|---|
| `input` | The input to the `Runnable`. TYPE: `LanguageModelInput` |
| `config` | A config to use when invoking the `Runnable`. The config supports standard keys like `'tags'` and `'metadata'` for tracing purposes, `'max_concurrency'` for controlling how much work to do in parallel, and other keys. Please refer to `RunnableConfig` for more details. TYPE: `RunnableConfig` or `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `Output` | The output of the `Runnable`. |
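For instance, a sketch of a single call with a stop sequence (the model name is illustrative):

```python
from langchain_nvidia_ai_endpoints import ChatNVIDIA

llm = ChatNVIDIA(model="meta/llama2-70b")
message = llm.invoke("Count from 1 to 10.", stop=["5"])
print(message.content)  # generation is cut off at the first "5"
```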
ainvoke
async
¶
ainvoke(
input: LanguageModelInput,
config: RunnableConfig | None = None,
*,
stop: list[str] | None = None,
**kwargs: Any,
) -> AIMessage
Transform a single input into an output.
| PARAMETER | DESCRIPTION |
|---|---|
| `input` | The input to the `Runnable`. TYPE: `LanguageModelInput` |
| `config` | A config to use when invoking the `Runnable`. The config supports standard keys like `'tags'` and `'metadata'` for tracing purposes, `'max_concurrency'` for controlling how much work to do in parallel, and other keys. Please refer to `RunnableConfig` for more details. TYPE: `RunnableConfig` or `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `Output` | The output of the `Runnable`. |
batch
¶
batch(
inputs: list[Input],
config: RunnableConfig | list[RunnableConfig] | None = None,
*,
return_exceptions: bool = False,
**kwargs: Any | None,
) -> list[Output]
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses must override this method if they can batch more efficiently;
e.g., if the underlying Runnable uses an API which supports a batch mode.
| PARAMETER | DESCRIPTION |
|---|---|
| `inputs` | A list of inputs to the `Runnable`. TYPE: `list[Input]` |
| `config` | A config to use when invoking the `Runnable`. Please refer to `RunnableConfig` for more details. TYPE: `RunnableConfig`, `list[RunnableConfig]`, or `None` |
| `return_exceptions` | Whether to return exceptions instead of raising them. TYPE: `bool` DEFAULT: `False` |
| `**kwargs` | Additional keyword arguments to pass to the `Runnable`. TYPE: `Any` or `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `list[Output]` | A list of outputs from the `Runnable`. |
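A sketch of batching with a concurrency cap (`max_concurrency` is a standard `RunnableConfig` key; the model name is illustrative):

```python
from langchain_nvidia_ai_endpoints import ChatNVIDIA

llm = ChatNVIDIA(model="meta/llama2-70b")
answers = llm.batch(
    ["What is CUDA?", "What is a NIM?"],
    config={"max_concurrency": 2},  # run at most two requests in parallel
)
for answer in answers:
    print(answer.content)
```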
batch_as_completed
¶
batch_as_completed(
inputs: Sequence[Input],
config: RunnableConfig | Sequence[RunnableConfig] | None = None,
*,
return_exceptions: bool = False,
**kwargs: Any | None,
) -> Iterator[tuple[int, Output | Exception]]
Run invoke in parallel on a list of inputs.
Yields results as they complete.
| PARAMETER | DESCRIPTION |
|---|---|
| `inputs` | A list of inputs to the `Runnable`. TYPE: `Sequence[Input]` |
| `config` | A config to use when invoking the `Runnable`. The config supports standard keys like `'tags'` and `'metadata'` for tracing purposes, `'max_concurrency'` for controlling how much work to do in parallel, and other keys. Please refer to `RunnableConfig` for more details. TYPE: `RunnableConfig`, `Sequence[RunnableConfig]`, or `None` |
| `return_exceptions` | Whether to return exceptions instead of raising them. TYPE: `bool` DEFAULT: `False` |
| `**kwargs` | Additional keyword arguments to pass to the `Runnable`. TYPE: `Any` or `None` |

| YIELDS | DESCRIPTION |
|---|---|
| `tuple[int, Output \| Exception]` | Tuples of the index of the input and the output from the `Runnable`. |
abatch
async
¶
abatch(
inputs: list[Input],
config: RunnableConfig | list[RunnableConfig] | None = None,
*,
return_exceptions: bool = False,
**kwargs: Any | None,
) -> list[Output]
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses must override this method if they can batch more efficiently;
e.g., if the underlying Runnable uses an API which supports a batch mode.
| PARAMETER | DESCRIPTION |
|---|---|
| `inputs` | A list of inputs to the `Runnable`. TYPE: `list[Input]` |
| `config` | A config to use when invoking the `Runnable`. The config supports standard keys like `'tags'` and `'metadata'` for tracing purposes, `'max_concurrency'` for controlling how much work to do in parallel, and other keys. Please refer to `RunnableConfig` for more details. TYPE: `RunnableConfig`, `list[RunnableConfig]`, or `None` |
| `return_exceptions` | Whether to return exceptions instead of raising them. TYPE: `bool` DEFAULT: `False` |
| `**kwargs` | Additional keyword arguments to pass to the `Runnable`. TYPE: `Any` or `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `list[Output]` | A list of outputs from the `Runnable`. |
abatch_as_completed
async
¶
abatch_as_completed(
inputs: Sequence[Input],
config: RunnableConfig | Sequence[RunnableConfig] | None = None,
*,
return_exceptions: bool = False,
**kwargs: Any | None,
) -> AsyncIterator[tuple[int, Output | Exception]]
Run ainvoke in parallel on a list of inputs.
Yields results as they complete.
| PARAMETER | DESCRIPTION |
|---|---|
| `inputs` | A list of inputs to the `Runnable`. TYPE: `Sequence[Input]` |
| `config` | A config to use when invoking the `Runnable`. The config supports standard keys like `'tags'` and `'metadata'` for tracing purposes, `'max_concurrency'` for controlling how much work to do in parallel, and other keys. Please refer to `RunnableConfig` for more details. TYPE: `RunnableConfig`, `Sequence[RunnableConfig]`, or `None` |
| `return_exceptions` | Whether to return exceptions instead of raising them. TYPE: `bool` DEFAULT: `False` |
| `**kwargs` | Additional keyword arguments to pass to the `Runnable`. TYPE: `Any` or `None` |

| YIELDS | DESCRIPTION |
|---|---|
| `AsyncIterator[tuple[int, Output \| Exception]]` | A tuple of the index of the input and the output from the `Runnable`. |
stream
¶
stream(
input: LanguageModelInput,
config: RunnableConfig | None = None,
*,
stop: list[str] | None = None,
**kwargs: Any,
) -> Iterator[AIMessageChunk]
Default implementation of stream, which calls invoke.
Subclasses must override this method if they support streaming output.
| PARAMETER | DESCRIPTION |
|---|---|
| `input` | The input to the `Runnable`. TYPE: `LanguageModelInput` |
| `config` | The config to use for the `Runnable`. TYPE: `RunnableConfig` or `None` |
| `**kwargs` | Additional keyword arguments to pass to the `Runnable`. TYPE: `Any` |

| YIELDS | DESCRIPTION |
|---|---|
| `Output` | The output of the `Runnable`. |
astream
async
¶
astream(
input: LanguageModelInput,
config: RunnableConfig | None = None,
*,
stop: list[str] | None = None,
**kwargs: Any,
) -> AsyncIterator[AIMessageChunk]
Default implementation of astream, which calls ainvoke.
Subclasses must override this method if they support streaming output.
| PARAMETER | DESCRIPTION |
|---|---|
| `input` | The input to the `Runnable`. TYPE: `LanguageModelInput` |
| `config` | The config to use for the `Runnable`. TYPE: `RunnableConfig` or `None` |
| `**kwargs` | Additional keyword arguments to pass to the `Runnable`. TYPE: `Any` |

| YIELDS | DESCRIPTION |
|---|---|
| `AsyncIterator[Output]` | The output of the `Runnable`. |
astream_log
async
¶
astream_log(
input: Any,
config: RunnableConfig | None = None,
*,
diff: bool = True,
with_streamed_output_list: bool = True,
include_names: Sequence[str] | None = None,
include_types: Sequence[str] | None = None,
include_tags: Sequence[str] | None = None,
exclude_names: Sequence[str] | None = None,
exclude_types: Sequence[str] | None = None,
exclude_tags: Sequence[str] | None = None,
**kwargs: Any,
) -> AsyncIterator[RunLogPatch] | AsyncIterator[RunLog]
Stream all output from a Runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of Jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run.
The Jsonpatch ops can be applied in order to construct state.
| PARAMETER | DESCRIPTION |
|---|---|
| `input` | The input to the `Runnable`. TYPE: `Any` |
| `config` | The config to use for the `Runnable`. TYPE: `RunnableConfig` or `None` |
| `diff` | Whether to yield diffs between each step or the current state. TYPE: `bool` DEFAULT: `True` |
| `with_streamed_output_list` | Whether to yield the `streamed_output` list. TYPE: `bool` DEFAULT: `True` |
| `include_names` | Only include logs with these names. |
| `include_types` | Only include logs with these types. |
| `include_tags` | Only include logs with these tags. |
| `exclude_names` | Exclude logs with these names. |
| `exclude_types` | Exclude logs with these types. |
| `exclude_tags` | Exclude logs with these tags. |
| `**kwargs` | Additional keyword arguments to pass to the `Runnable`. TYPE: `Any` |

| YIELDS | DESCRIPTION |
|---|---|
| `AsyncIterator[RunLogPatch] \| AsyncIterator[RunLog]` | A `RunLogPatch` or `RunLog` object. |
astream_events
async
¶
astream_events(
input: Any,
config: RunnableConfig | None = None,
*,
version: Literal["v1", "v2"] = "v2",
include_names: Sequence[str] | None = None,
include_types: Sequence[str] | None = None,
include_tags: Sequence[str] | None = None,
exclude_names: Sequence[str] | None = None,
exclude_types: Sequence[str] | None = None,
exclude_tags: Sequence[str] | None = None,
**kwargs: Any,
) -> AsyncIterator[StreamEvent]
Generate a stream of events.
Use to create an iterator over StreamEvent that provide real-time information
about the progress of the Runnable, including StreamEvent from intermediate
results.
A StreamEvent is a dictionary with the following schema:
- `event`: Event names are of the format: `on_[runnable_type]_(start|stream|end)`.
- `name`: The name of the `Runnable` that generated the event.
- `run_id`: Randomly generated ID associated with the given execution of the `Runnable` that emitted the event. A child `Runnable` that gets invoked as part of the execution of a parent `Runnable` is assigned its own unique ID.
- `parent_ids`: The IDs of the parent runnables that generated the event. The root `Runnable` will have an empty list. The order of the parent IDs is from the root to the immediate parent. Only available for v2 version of the API. The v1 version of the API will return an empty list.
- `tags`: The tags of the `Runnable` that generated the event.
- `metadata`: The metadata of the `Runnable` that generated the event.
- `data`: The data associated with the event. The contents of this field depend on the type of event. See the table below for more details.
Below is a table that illustrates some events that might be emitted by various chains. Metadata fields have been omitted from the table for brevity. Chain definitions have been included after the table.
Note
This reference table is for the v2 version of the schema.
| event | name | chunk | input | output |
|---|---|---|---|---|
| `on_chat_model_start` | `'[model name]'` | | `{"messages": [[SystemMessage, HumanMessage]]}` | |
| `on_chat_model_stream` | `'[model name]'` | `AIMessageChunk(content="hello")` | | |
| `on_chat_model_end` | `'[model name]'` | | `{"messages": [[SystemMessage, HumanMessage]]}` | `AIMessageChunk(content="hello world")` |
| `on_llm_start` | `'[model name]'` | | `{'input': 'hello'}` | |
| `on_llm_stream` | `'[model name]'` | `'Hello'` | | |
| `on_llm_end` | `'[model name]'` | | | `'Hello human!'` |
| `on_chain_start` | `'format_docs'` | | | |
| `on_chain_stream` | `'format_docs'` | `'hello world!, goodbye world!'` | | |
| `on_chain_end` | `'format_docs'` | | `[Document(...)]` | `'hello world!, goodbye world!'` |
| `on_tool_start` | `'some_tool'` | | `{"x": 1, "y": "2"}` | |
| `on_tool_end` | `'some_tool'` | | | `{"x": 1, "y": "2"}` |
| `on_retriever_start` | `'[retriever name]'` | | `{"query": "hello"}` | |
| `on_retriever_end` | `'[retriever name]'` | | `{"query": "hello"}` | `[Document(...), ..]` |
| `on_prompt_start` | `'[template_name]'` | | `{"question": "hello"}` | |
| `on_prompt_end` | `'[template_name]'` | | `{"question": "hello"}` | `ChatPromptValue(messages: [SystemMessage, ...])` |
In addition to the standard events, users can also dispatch custom events (see example below).
Custom events will only be surfaced in the v2 version of the API!

A custom event has the following format:
| Attribute | Type | Description |
|---|---|---|
| `name` | `str` | A user defined name for the event. |
| `data` | `Any` | The data associated with the event. This can be anything, though we suggest making it JSON serializable. |
Here are declarations associated with the standard events shown above:
format_docs:
```python
from langchain_core.documents import Document
from langchain_core.runnables import RunnableLambda

def format_docs(docs: list[Document]) -> str:
    """Format the docs."""
    return ", ".join([doc.page_content for doc in docs])

format_docs = RunnableLambda(format_docs)
```
some_tool:
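A sketch of the tool declaration, assuming the `@tool` decorator from `langchain_core.tools`:

```python
from langchain_core.tools import tool

@tool
def some_tool(x: int, y: str) -> dict:
    """Some_tool."""
    return {"x": x, "y": y}
```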
prompt:
```python
from langchain_core.prompts import ChatPromptTemplate

template = ChatPromptTemplate.from_messages(
    [
        ("system", "You are Cat Agent 007"),
        ("human", "{question}"),
    ]
).with_config({"run_name": "my_template", "tags": ["my_template"]})
```
Example
```python
from langchain_core.runnables import RunnableLambda

async def reverse(s: str) -> str:
    return s[::-1]

chain = RunnableLambda(func=reverse)

events = [
    event async for event in chain.astream_events("hello", version="v2")
]

# Will produce the following events
# (run_id, and parent_ids has been omitted for brevity):
[
    {
        "data": {"input": "hello"},
        "event": "on_chain_start",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"chunk": "olleh"},
        "event": "on_chain_stream",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"output": "olleh"},
        "event": "on_chain_end",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
]
```

```python
import asyncio

from langchain_core.callbacks.manager import (
    adispatch_custom_event,
)
from langchain_core.runnables import RunnableLambda, RunnableConfig

async def slow_thing(some_input: str, config: RunnableConfig) -> str:
    """Do something that takes a long time."""
    await asyncio.sleep(1)  # Placeholder for some slow operation
    await adispatch_custom_event(
        "progress_event",
        {"message": "Finished step 1 of 3"},
        config=config,  # Must be included for python < 3.10
    )
    await asyncio.sleep(1)  # Placeholder for some slow operation
    await adispatch_custom_event(
        "progress_event",
        {"message": "Finished step 2 of 3"},
        config=config,  # Must be included for python < 3.10
    )
    await asyncio.sleep(1)  # Placeholder for some slow operation
    return "Done"

slow_thing = RunnableLambda(slow_thing)

async for event in slow_thing.astream_events("some_input", version="v2"):
    print(event)
```
| PARAMETER | DESCRIPTION |
|---|---|
| `input` | The input to the `Runnable`. TYPE: `Any` |
| `config` | The config to use for the `Runnable`. TYPE: `RunnableConfig` or `None` |
| `version` | The version of the schema to use, either `'v2'` or `'v1'`. Users should use `'v2'`; `'v1'` is for backwards compatibility and will be deprecated. No default will be assigned until the API is stabilized. Custom events will only be surfaced in `'v2'`. TYPE: `Literal['v1', 'v2']` |
| `include_names` | Only include events from `Runnable` objects with matching names. |
| `include_types` | Only include events from `Runnable` objects with matching types. |
| `include_tags` | Only include events from `Runnable` objects with matching tags. |
| `exclude_names` | Exclude events from `Runnable` objects with matching names. |
| `exclude_types` | Exclude events from `Runnable` objects with matching types. |
| `exclude_tags` | Exclude events from `Runnable` objects with matching tags. |
| `**kwargs` | Additional keyword arguments to pass to the `Runnable`. These will be passed to `astream_log`, as this implementation of `astream_events` is built on top of `astream_log`. TYPE: `Any` |

| YIELDS | DESCRIPTION |
|---|---|
| `AsyncIterator[StreamEvent]` | An async stream of `StreamEvent`. |

| RAISES | DESCRIPTION |
|---|---|
| `NotImplementedError` | If the version is not `'v1'` or `'v2'`. |
transform
¶
transform(
input: Iterator[Input], config: RunnableConfig | None = None, **kwargs: Any | None
) -> Iterator[Output]
Transform inputs to outputs.
Default implementation of transform, which buffers input and calls `stream`.

Subclasses must override this method if they can start producing output while input is still being generated.

| PARAMETER | DESCRIPTION |
|---|---|
| `input` | An iterator of inputs to the `Runnable`. TYPE: `Iterator[Input]` |
| `config` | The config to use for the `Runnable`. TYPE: `RunnableConfig` or `None` |
| `**kwargs` | Additional keyword arguments to pass to the `Runnable`. TYPE: `Any` or `None` |

| YIELDS | DESCRIPTION |
|---|---|
| `Output` | The output of the `Runnable`. |
atransform
async
¶
atransform(
input: AsyncIterator[Input],
config: RunnableConfig | None = None,
**kwargs: Any | None,
) -> AsyncIterator[Output]
Transform inputs to outputs.
Default implementation of atransform, which buffers input and calls astream.
Subclasses must override this method if they can start producing output while input is still being generated.
| PARAMETER | DESCRIPTION |
|---|---|
| `input` | An async iterator of inputs to the `Runnable`. TYPE: `AsyncIterator[Input]` |
| `config` | The config to use for the `Runnable`. TYPE: `RunnableConfig` or `None` |
| `**kwargs` | Additional keyword arguments to pass to the `Runnable`. TYPE: `Any` or `None` |

| YIELDS | DESCRIPTION |
|---|---|
| `AsyncIterator[Output]` | The output of the `Runnable`. |
bind
¶
Bind arguments to a Runnable, returning a new Runnable.
Useful when a Runnable in a chain requires an argument that is not
in the output of the previous Runnable or included in the user input.
| PARAMETER | DESCRIPTION |
|---|---|
| `**kwargs` | The arguments to bind to the `Runnable`. TYPE: `Any` |

| RETURNS | DESCRIPTION |
|---|---|
| `Runnable[Input, Output]` | A new `Runnable` with the arguments bound. |
Example

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_ollama import ChatOllama

model = ChatOllama(model="llama3.1")

# Without bind
chain = model | StrOutputParser()
chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two three four five.'

# With bind
chain = model.bind(stop=["three"]) | StrOutputParser()
chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two'
```
with_config
¶
with_config(
config: RunnableConfig | None = None, **kwargs: Any
) -> Runnable[Input, Output]
Bind config to a Runnable, returning a new Runnable.
| PARAMETER | DESCRIPTION |
|---|---|
| `config` | The config to bind to the `Runnable`. TYPE: `RunnableConfig` or `None` |
| `**kwargs` | Additional keyword arguments to pass to the `Runnable`. TYPE: `Any` |

| RETURNS | DESCRIPTION |
|---|---|
| `Runnable[Input, Output]` | A new `Runnable` with the config bound. |
with_listeners
¶
with_listeners(
*,
on_start: Callable[[Run], None]
| Callable[[Run, RunnableConfig], None]
| None = None,
on_end: Callable[[Run], None] | Callable[[Run, RunnableConfig], None] | None = None,
on_error: Callable[[Run], None]
| Callable[[Run, RunnableConfig], None]
| None = None,
) -> Runnable[Input, Output]
Bind lifecycle listeners to a Runnable, returning a new Runnable.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and
any tags or metadata added to the run.
| PARAMETER | DESCRIPTION |
|---|---|
| `on_start` | Called before the `Runnable` starts running, with the `Run` object. |
| `on_end` | Called after the `Runnable` finishes running, with the `Run` object. |
| `on_error` | Called if the `Runnable` throws an error, with the `Run` object. |

| RETURNS | DESCRIPTION |
|---|---|
| `Runnable[Input, Output]` | A new `Runnable` with the listeners bound. |
Example

```python
import time

from langchain_core.runnables import RunnableLambda
from langchain_core.tracers.schemas import Run

def test_runnable(time_to_sleep: int):
    time.sleep(time_to_sleep)

def fn_start(run_obj: Run):
    print("start_time:", run_obj.start_time)

def fn_end(run_obj: Run):
    print("end_time:", run_obj.end_time)

chain = RunnableLambda(test_runnable).with_listeners(
    on_start=fn_start, on_end=fn_end
)
chain.invoke(2)
```
with_alisteners
¶
with_alisteners(
*,
on_start: AsyncListener | None = None,
on_end: AsyncListener | None = None,
on_error: AsyncListener | None = None,
) -> Runnable[Input, Output]
Bind async lifecycle listeners to a Runnable.
Returns a new Runnable.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and
any tags or metadata added to the run.
| PARAMETER | DESCRIPTION |
|---|---|
| `on_start` | Called asynchronously before the `Runnable` starts running, with the `Run` object. TYPE: `AsyncListener` or `None` |
| `on_end` | Called asynchronously after the `Runnable` finishes running, with the `Run` object. TYPE: `AsyncListener` or `None` |
| `on_error` | Called asynchronously if the `Runnable` throws an error, with the `Run` object. TYPE: `AsyncListener` or `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `Runnable[Input, Output]` | A new `Runnable` with the listeners bound. |
Example

```python
import asyncio
import time
from datetime import datetime, timezone

from langchain_core.runnables import Runnable, RunnableLambda

def format_t(timestamp: float) -> str:
    return datetime.fromtimestamp(timestamp, tz=timezone.utc).isoformat()

async def test_runnable(time_to_sleep: int):
    print(f"Runnable[{time_to_sleep}s]: starts at {format_t(time.time())}")
    await asyncio.sleep(time_to_sleep)
    print(f"Runnable[{time_to_sleep}s]: ends at {format_t(time.time())}")

async def fn_start(run_obj: Runnable):
    print(f"on start callback starts at {format_t(time.time())}")
    await asyncio.sleep(3)
    print(f"on start callback ends at {format_t(time.time())}")

async def fn_end(run_obj: Runnable):
    print(f"on end callback starts at {format_t(time.time())}")
    await asyncio.sleep(2)
    print(f"on end callback ends at {format_t(time.time())}")

runnable = RunnableLambda(test_runnable).with_alisteners(
    on_start=fn_start, on_end=fn_end
)

async def concurrent_runs():
    await asyncio.gather(runnable.ainvoke(2), runnable.ainvoke(3))

asyncio.run(concurrent_runs())
# Result:
# on start callback starts at 2025-03-01T07:05:22.875378+00:00
# on start callback starts at 2025-03-01T07:05:22.875495+00:00
# on start callback ends at 2025-03-01T07:05:25.878862+00:00
# on start callback ends at 2025-03-01T07:05:25.878947+00:00
# Runnable[2s]: starts at 2025-03-01T07:05:25.879392+00:00
# Runnable[3s]: starts at 2025-03-01T07:05:25.879804+00:00
# Runnable[2s]: ends at 2025-03-01T07:05:27.881998+00:00
# on end callback starts at 2025-03-01T07:05:27.882360+00:00
# Runnable[3s]: ends at 2025-03-01T07:05:28.881737+00:00
# on end callback starts at 2025-03-01T07:05:28.882428+00:00
# on end callback ends at 2025-03-01T07:05:29.883893+00:00
# on end callback ends at 2025-03-01T07:05:30.884831+00:00
```
with_types
¶
with_types(
*, input_type: type[Input] | None = None, output_type: type[Output] | None = None
) -> Runnable[Input, Output]
Bind input and output types to a Runnable, returning a new Runnable.
| PARAMETER | DESCRIPTION |
|---|---|
| `input_type` | The input type to bind to the `Runnable`. TYPE: `type[Input]` or `None` |
| `output_type` | The output type to bind to the `Runnable`. TYPE: `type[Output]` or `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `Runnable[Input, Output]` | A new `Runnable` with the types bound. |
with_retry
¶
with_retry(
*,
retry_if_exception_type: tuple[type[BaseException], ...] = (Exception,),
wait_exponential_jitter: bool = True,
exponential_jitter_params: ExponentialJitterParams | None = None,
stop_after_attempt: int = 3,
) -> Runnable[Input, Output]
Create a new Runnable that retries the original Runnable on exceptions.
| PARAMETER | DESCRIPTION |
|---|---|
| `retry_if_exception_type` | A tuple of exception types to retry on. TYPE: `tuple[type[BaseException], ...]` DEFAULT: `(Exception,)` |
| `wait_exponential_jitter` | Whether to add jitter to the wait time between retries. TYPE: `bool` DEFAULT: `True` |
| `stop_after_attempt` | The maximum number of attempts to make before giving up. TYPE: `int` DEFAULT: `3` |
| `exponential_jitter_params` | Parameters for `tenacity.wait_exponential_jitter`. TYPE: `ExponentialJitterParams` or `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `Runnable[Input, Output]` | A new `Runnable` that retries the original `Runnable` on exceptions. |
Example

```python
from langchain_core.runnables import RunnableLambda

count = 0

def _lambda(x: int) -> None:
    global count
    count = count + 1
    if x == 1:
        raise ValueError("x is 1")
    else:
        pass

runnable = RunnableLambda(_lambda)
try:
    runnable.with_retry(
        stop_after_attempt=2,
        retry_if_exception_type=(ValueError,),
    ).invoke(1)
except ValueError:
    pass

assert count == 2
```
map
¶
with_fallbacks
¶
with_fallbacks(
fallbacks: Sequence[Runnable[Input, Output]],
*,
exceptions_to_handle: tuple[type[BaseException], ...] = (Exception,),
exception_key: str | None = None,
) -> RunnableWithFallbacks[Input, Output]
Add fallbacks to a Runnable, returning a new Runnable.
The new Runnable will try the original Runnable, and then each fallback
in order, upon failures.
| PARAMETER | DESCRIPTION |
|---|---|
| `fallbacks` | A sequence of runnables to try if the original `Runnable` fails. TYPE: `Sequence[Runnable[Input, Output]]` |
| `exceptions_to_handle` | A tuple of exception types to handle. TYPE: `tuple[type[BaseException], ...]` DEFAULT: `(Exception,)` |
| `exception_key` | If a string is specified, handled exceptions will be passed to fallbacks as part of the input under the specified key. If `None`, exceptions will be passed as the whole input. If used, the base `Runnable` and its fallbacks must accept a dictionary as input. TYPE: `str` or `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `RunnableWithFallbacks[Input, Output]` | A new `Runnable` that will try the original `Runnable`, and then each fallback in order, upon failures. |
Example

```python
from typing import Iterator

from langchain_core.runnables import RunnableGenerator

def _generate_immediate_error(input: Iterator) -> Iterator[str]:
    raise ValueError()
    yield ""

def _generate(input: Iterator) -> Iterator[str]:
    yield from "foo bar"

runnable = RunnableGenerator(_generate_immediate_error).with_fallbacks(
    [RunnableGenerator(_generate)]
)
print("".join(runnable.stream({})))  # foo bar
```
as_tool
¶
as_tool(
args_schema: type[BaseModel] | None = None,
*,
name: str | None = None,
description: str | None = None,
arg_types: dict[str, type] | None = None,
) -> BaseTool
Create a BaseTool from a Runnable.
as_tool will instantiate a BaseTool with a name, description, and
args_schema from a Runnable. Where possible, schemas are inferred
from runnable.get_input_schema.
Alternatively (e.g., if the Runnable takes a dict as input and the specific
dict keys are not typed), the schema can be specified directly with
args_schema.
You can also pass arg_types to just specify the required arguments and their
types.
| PARAMETER | DESCRIPTION |
|---|---|
| `args_schema` | The schema for the tool. TYPE: `type[BaseModel]` or `None` |
| `name` | The name of the tool. TYPE: `str` or `None` |
| `description` | The description of the tool. TYPE: `str` or `None` |
| `arg_types` | A dictionary of argument names to types. TYPE: `dict[str, type]` or `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `BaseTool` | A `BaseTool` instance. |
TypedDict input
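A sketch, assuming typed dict keys so the schema can be inferred from `get_input_schema`:

```python
from typing import TypedDict

from langchain_core.runnables import RunnableLambda

class Args(TypedDict):
    a: int
    b: list[int]

def f(x: Args) -> str:
    return str(x["a"] * max(x["b"]))

runnable = RunnableLambda(f)
as_tool = runnable.as_tool()
as_tool.invoke({"a": 3, "b": [1, 2]})
```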
dict input, specifying schema via args_schema
```python
from typing import Any

from pydantic import BaseModel, Field

from langchain_core.runnables import RunnableLambda

def f(x: dict[str, Any]) -> str:
    return str(x["a"] * max(x["b"]))

class FSchema(BaseModel):
    """Apply a function to an integer and list of integers."""

    a: int = Field(..., description="Integer")
    b: list[int] = Field(..., description="List of ints")

runnable = RunnableLambda(f)
as_tool = runnable.as_tool(FSchema)
as_tool.invoke({"a": 3, "b": [1, 2]})
```
dict input, specifying schema via arg_types
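A sketch where only argument names and types are supplied:

```python
from typing import Any

from langchain_core.runnables import RunnableLambda

def g(x: dict[str, Any]) -> str:
    return str(x["a"] * max(x["b"]))

runnable = RunnableLambda(g)
as_tool = runnable.as_tool(arg_types={"a": int, "b": list[int]})
as_tool.invoke({"a": 3, "b": [1, 2]})
```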
is_lc_serializable
classmethod
¶
is_lc_serializable() -> bool
Is this class serializable?
By design, even if a class inherits from Serializable, it is not serializable
by default. This is to prevent accidental serialization of objects that should
not be serialized.
| RETURNS | DESCRIPTION |
|---|---|
bool
|
Whether the class is serializable. Default is |
get_lc_namespace
classmethod
¶
lc_id
classmethod
¶
Return a unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path to the object.
For example, for the class langchain.llms.openai.OpenAI, the id is
["langchain", "llms", "openai", "OpenAI"].
to_json
¶
Serialize the Runnable to JSON.
| RETURNS | DESCRIPTION |
|---|---|
SerializedConstructor | SerializedNotImplemented
|
A JSON-serializable representation of the |
to_json_not_implemented
¶
Serialize a "not implemented" object.
| RETURNS | DESCRIPTION |
|---|---|
SerializedNotImplemented
|
|
configurable_fields
¶
configurable_fields(
**kwargs: AnyConfigurableField,
) -> RunnableSerializable[Input, Output]
Configure particular Runnable fields at runtime.
| PARAMETER | DESCRIPTION |
|---|---|
| `**kwargs` | A dictionary of `ConfigurableField` instances to configure. TYPE: `AnyConfigurableField` |

| RAISES | DESCRIPTION |
|---|---|
| `ValueError` | If a configuration key is not found in the `Runnable`. |

| RETURNS | DESCRIPTION |
|---|---|
| `RunnableSerializable[Input, Output]` | A new `Runnable` with the fields configured. |
Example

```python
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatOpenAI(max_tokens=20).configurable_fields(
    max_tokens=ConfigurableField(
        id="output_token_number",
        name="Max tokens in the output",
        description="The maximum number of tokens in the output",
    )
)

# max_tokens = 20
print("max_tokens_20: ", model.invoke("tell me something about chess").content)

# max_tokens = 200
print(
    "max_tokens_200: ",
    model.with_config(configurable={"output_token_number": 200})
    .invoke("tell me something about chess")
    .content,
)
```
configurable_alternatives
¶
configurable_alternatives(
which: ConfigurableField,
*,
default_key: str = "default",
prefix_keys: bool = False,
**kwargs: Runnable[Input, Output] | Callable[[], Runnable[Input, Output]],
) -> RunnableSerializable[Input, Output]
Configure alternatives for Runnable objects that can be set at runtime.
| PARAMETER | DESCRIPTION |
|---|---|
| `which` | The `ConfigurableField` instance that will be used to select the alternative. TYPE: `ConfigurableField` |
| `default_key` | The default key to use if no alternative is selected. TYPE: `str` DEFAULT: `'default'` |
| `prefix_keys` | Whether to prefix the keys with the `ConfigurableField` id. TYPE: `bool` DEFAULT: `False` |
| `**kwargs` | A dictionary of keys to `Runnable` instances or callables that return `Runnable` instances. |

| RETURNS | DESCRIPTION |
|---|---|
| `RunnableSerializable[Input, Output]` | A new `Runnable` with the alternatives configured. |
Example

```python
from langchain_anthropic import ChatAnthropic
from langchain_core.runnables.utils import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatAnthropic(
    model_name="claude-sonnet-4-5-20250929"
).configurable_alternatives(
    ConfigurableField(id="llm"),
    default_key="anthropic",
    openai=ChatOpenAI(),
)

# uses the default model ChatAnthropic
print(model.invoke("which organization created you?").content)

# uses ChatOpenAI
print(
    model.with_config(configurable={"llm": "openai"})
    .invoke("which organization created you?")
    .content
)
```
set_verbose
¶
generate_prompt
¶
generate_prompt(
prompts: list[PromptValue],
stop: list[str] | None = None,
callbacks: Callbacks = None,
**kwargs: Any,
) -> LLMResult
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
- Take advantage of batched calls,
- Need more output from the model than just the top generated value,
- Are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
| PARAMETER | DESCRIPTION |
|---|---|
| `prompts` | List of `PromptValue` objects. A `PromptValue` is an object that can be converted to match the format of any language model (string for pure text generation models and `BaseMessage` objects for chat models). TYPE: `list[PromptValue]` |
| `stop` | Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. |
| `callbacks` | Used for executing additional functionality, such as logging or streaming, throughout generation. TYPE: `Callbacks` |
| `**kwargs` | Arbitrary additional keyword arguments. These are usually passed to the model provider API call. TYPE: `Any` |

| RETURNS | DESCRIPTION |
|---|---|
| `LLMResult` | An `LLMResult`, which contains a list of candidate `Generation` objects for each input prompt and additional model provider-specific output. |
agenerate_prompt
async
¶
agenerate_prompt(
prompts: list[PromptValue],
stop: list[str] | None = None,
callbacks: Callbacks = None,
**kwargs: Any,
) -> LLMResult
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
- Take advantage of batched calls,
- Need more output from the model than just the top generated value,
- Are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
| PARAMETER | DESCRIPTION |
|---|---|
| `prompts` | List of `PromptValue` objects. A `PromptValue` is an object that can be converted to match the format of any language model (string for pure text generation models and `BaseMessage` objects for chat models). TYPE: `list[PromptValue]` |
| `stop` | Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. |
| `callbacks` | Used for executing additional functionality, such as logging or streaming, throughout generation. TYPE: `Callbacks` |
| `**kwargs` | Arbitrary additional keyword arguments. These are usually passed to the model provider API call. TYPE: `Any` |

| RETURNS | DESCRIPTION |
|---|---|
| `LLMResult` | An `LLMResult`, which contains a list of candidate `Generation` objects for each input prompt and additional model provider-specific output. |
get_token_ids
¶
get_num_tokens
¶
Get the number of tokens present in the text.
Useful for checking if an input fits in a model's context window.
This should be overridden by model-specific implementations to provide accurate token counts via model-specific tokenizers.
| PARAMETER | DESCRIPTION |
|---|---|
| `text` | The string input to tokenize. TYPE: `str` |

| RETURNS | DESCRIPTION |
|---|---|
| `int` | The integer number of tokens in the text. |
get_num_tokens_from_messages
¶
get_num_tokens_from_messages(
messages: list[BaseMessage], tools: Sequence | None = None
) -> int
Get the number of tokens in the messages.
Useful for checking if an input fits in a model's context window.
This should be overridden by model-specific implementations to provide accurate token counts via model-specific tokenizers.
Note

- The base implementation of `get_num_tokens_from_messages` ignores tool schemas.
- The base implementation of `get_num_tokens_from_messages` adds additional prefixes to messages to represent user roles, which will add to the overall token count. Model-specific implementations may choose to handle this differently.
| PARAMETER | DESCRIPTION |
|---|---|
| `messages` | The message inputs to tokenize. TYPE: `list[BaseMessage]` |
| `tools` | If provided, sequence of `dict`, `BaseModel`, function, or `BaseTool` objects to be converted to tool schemas. TYPE: `Sequence` or `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `int` | The sum of the number of tokens across the messages. |
generate
¶
generate(
messages: list[list[BaseMessage]],
stop: list[str] | None = None,
callbacks: Callbacks = None,
*,
tags: list[str] | None = None,
metadata: dict[str, Any] | None = None,
run_name: str | None = None,
run_id: UUID | None = None,
**kwargs: Any,
) -> LLMResult
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
- Take advantage of batched calls,
- Need more output from the model than just the top generated value,
- Are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
| PARAMETER | DESCRIPTION |
|---|---|
| `messages` | List of list of messages. TYPE: `list[list[BaseMessage]]` |
| `stop` | Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. |
| `callbacks` | Used for executing additional functionality, such as logging or streaming, throughout generation. TYPE: `Callbacks` |
| `tags` | The tags to apply. |
| `metadata` | The metadata to apply. |
| `run_name` | The name of the run. TYPE: `str` or `None` |
| `run_id` | The ID of the run. TYPE: `UUID` or `None` |
| `**kwargs` | Arbitrary additional keyword arguments. These are usually passed to the model provider API call. TYPE: `Any` |

| RETURNS | DESCRIPTION |
|---|---|
| `LLMResult` | An `LLMResult`, which contains a list of candidate `Generation` objects for each input prompt and additional model provider-specific output. |
agenerate
async
¶
agenerate(
messages: list[list[BaseMessage]],
stop: list[str] | None = None,
callbacks: Callbacks = None,
*,
tags: list[str] | None = None,
metadata: dict[str, Any] | None = None,
run_name: str | None = None,
run_id: UUID | None = None,
**kwargs: Any,
) -> LLMResult
Asynchronously pass a sequence of prompts to a model and return generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
- Take advantage of batched calls,
- Need more output from the model than just the top generated value,
- Are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
| PARAMETER | DESCRIPTION |
|---|---|
| `messages` | List of list of messages. TYPE: `list[list[BaseMessage]]` |
| `stop` | Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. |
| `callbacks` | Used for executing additional functionality, such as logging or streaming, throughout generation. TYPE: `Callbacks` |
| `tags` | The tags to apply. |
| `metadata` | The metadata to apply. |
| `run_name` | The name of the run. TYPE: `str` or `None` |
| `run_id` | The ID of the run. TYPE: `UUID` or `None` |
| `**kwargs` | Arbitrary additional keyword arguments. These are usually passed to the model provider API call. TYPE: `Any` |

| RETURNS | DESCRIPTION |
|---|---|
| `LLMResult` | An `LLMResult`, which contains a list of candidate `Generation` objects for each input prompt and additional model provider-specific output. |
__init__
¶
__init__(**kwargs: Any)
Create a new ChatNVIDIA chat model.
This class provides access to a NVIDIA NIM for chat. By default, it
connects to a hosted NIM, but can be configured to connect to a local NIM
using the base_url parameter.
An API key is required to connect to the hosted NIM.
| PARAMETER | DESCRIPTION |
|---|---|
| `model` | The model to use for chat. TYPE: `str` |
| `nvidia_api_key` | The API key to use for connecting to the hosted NIM. TYPE: `str` |
| `api_key` | Alternative to `nvidia_api_key`. TYPE: `str` |
| `base_url` | The base URL of the NIM to connect to. Format for base URL is `http://host:port`. TYPE: `str` |
| `temperature` | Sampling temperature in `[0, 1]`. TYPE: `float` |
| `max_tokens` | Maximum number of tokens to generate. Deprecated, use `max_completion_tokens` instead. If both `max_tokens` and `max_completion_tokens` are supplied, … TYPE: `int` |
| `max_completion_tokens` | Maximum number of tokens to generate. TYPE: `int` |
| `top_p` | Top-p for distribution sampling. TYPE: `float` |
| `seed` | A seed for deterministic results. TYPE: `int` |
| `stop` | A list of cased stop words. |
| `min_tokens` | Minimum number of tokens to generate. TYPE: `int` |
| `ignore_eos` | Whether to ignore end-of-sequence tokens. TYPE: `bool` |
| `default_headers` | Default headers merged into all requests. |
The recommended way to provide the API key is through the NVIDIA_API_KEY
environment variable.
Base URL:

- Connect to a self-hosted model with NVIDIA NIM using the `base_url` arg to link to the local host at `localhost:8000`:
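A sketch (the model name and port are illustrative):

```python
from langchain_nvidia_ai_endpoints import ChatNVIDIA

llm = ChatNVIDIA(
    base_url="http://localhost:8000/v1",  # local NIM endpoint
    model="meta/llama3-8b-instruct",      # illustrative model name
)
```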
get_available_models
classmethod
¶
Get a list of available models that work with ChatNVIDIA.
bind_tools
¶
bind_tools(
tools: Sequence[dict[str, Any] | Type | Callable | BaseTool],
*,
tool_choice: dict
| str
| Literal["auto", "none", "any", "required"]
| bool
| None = None,
**kwargs: Any,
) -> Runnable[LanguageModelInput, AIMessage]
Bind tools to the model.
Note

The `strict` mode is always in effect; if you need it disabled, please file an issue.
| PARAMETER | DESCRIPTION |
|---|---|
| `tools` | A list of tools to bind to the model. TYPE: `Sequence[dict[str, Any] \| Type \| Callable \| BaseTool]` |
| `tool_choice` | Control tool choice. Options include a dict, the name of a tool (`str`), one of `"auto"`, `"none"`, `"any"`, or `"required"`, a `bool`, or `None` (see the signature above). Defaults to passing no value. |
| `**kwargs` | Additional keyword arguments. TYPE: `Any` |
with_structured_output
¶
with_structured_output(
schema: dict | Type, *, include_raw: bool = False, **kwargs: Any
) -> Runnable[LanguageModelInput, dict | BaseModel]
Bind a structured output schema to the model.
| PARAMETER | DESCRIPTION |
|---|---|
| `schema` | The schema to bind to the model. TYPE: `dict` or `Type` |
| `include_raw` | Always `False`; `include_raw=True` is not supported (see the note below). TYPE: `bool` |
| `**kwargs` | Additional keyword arguments. TYPE: `Any` |
Note

- The `strict` mode is always in effect; if you need it disabled, please file an issue.
- If you need `include_raw=True`, consider using an unstructured model and output formatter, or file an issue.
The schema can be:
- A dictionary representing a JSON schema
- A Pydantic object
- An `Enum`
If a dictionary is provided, the model will return a dictionary.
Dictionary schema

```python
json_schema = {
    "title": "joke",
    "description": "Joke to tell user.",
    "type": "object",
    "properties": {
        "setup": {
            "type": "string",
            "description": "The setup of the joke",
        },
        "punchline": {
            "type": "string",
            "description": "The punchline to the joke",
        },
    },
    "required": ["setup", "punchline"],
}

structured_llm = llm.with_structured_output(json_schema)
structured_llm.invoke("Tell me a joke about NVIDIA")
# Output: {'setup': 'Why did NVIDIA go broke? The hardware ate all the software.',
#          'punchline': 'It took a big bite out of their main board.'}
```
If a Pydantic schema is provided, the model will return a Pydantic object.
Pydantic schema

```python
from pydantic import BaseModel, Field

class Joke(BaseModel):
    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline to the joke")

structured_llm = llm.with_structured_output(Joke)
structured_llm.invoke("Tell me a joke about NVIDIA")
# Output: Joke(setup='Why did NVIDIA go broke? The hardware ate all the software.',
#              punchline='It took a big bite out of their main board.')
```
If an Enum is provided, all values must be strings, and the model will return
an Enum object.
Enum schema
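A sketch, assuming an `Enum` with string values (the choices and prompt are illustrative):

```python
import enum

class Choices(enum.Enum):
    A = "A"
    B = "B"
    C = "C"

structured_llm = llm.with_structured_output(Choices)
structured_llm.invoke("What is the first letter in this list? [X, Y, Z, C]")
# Output: <Choices.C: 'C'>
```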
Streaming
Unlike other streaming responses, the streamed chunks will be increasingly complete. They will not be deltas. The last chunk will contain the complete response.
For instance with a dictionary schema, the chunks will be:
```python
structured_llm = llm.with_structured_output(json_schema)
for chunk in structured_llm.stream("Tell me a joke about NVIDIA"):
    print(chunk)

# Output:
# {}
# {'setup': ''}
# {'setup': 'Why'}
# {'setup': 'Why did'}
# {'setup': 'Why did N'}
# {'setup': 'Why did NVID'}
# ...
# {'setup': 'Why did NVIDIA go broke? The hardware ate all the software.', 'punchline': 'It took a big bite out of their main board'}
# {'setup': 'Why did NVIDIA go broke? The hardware ate all the software.', 'punchline': 'It took a big bite out of their main board.'}
```
For instance with a Pydantic schema, the chunks will be:
```python
structured_llm = llm.with_structured_output(Joke)
for chunk in structured_llm.stream("Tell me a joke about NVIDIA"):
    print(chunk)

# Output:
# setup='Why did NVIDIA go broke? The hardware ate all the software.' punchline=''
# setup='Why did NVIDIA go broke? The hardware ate all the software.' punchline='It'
# setup='Why did NVIDIA go broke? The hardware ate all the software.' punchline='It took'
# ...
# setup='Why did NVIDIA go broke? The hardware ate all the software.' punchline='It took a big bite out of their main board'
# setup='Why did NVIDIA go broke? The hardware ate all the software.' punchline='It took a big bite out of their main board.'
```
For Pydantic schema and Enum, the output will be None if the response is
insufficient to construct the object or otherwise invalid.
```python
llm = ChatNVIDIA(max_completion_tokens=1)
structured_llm = llm.with_structured_output(Joke)
print(structured_llm.invoke("Tell me a joke about NVIDIA"))
# Output: None
```
For more, see docs on structured output.
with_thinking_mode
¶
with_thinking_mode(
enabled: bool = True, **kwargs: Any
) -> Runnable[LanguageModelInput, BaseMessage]
Configure the model to use thinking mode.
| PARAMETER | DESCRIPTION |
|---|---|
| `enabled` | Whether to enable thinking mode. TYPE: `bool` DEFAULT: `True` |
| `**kwargs` | Additional keyword arguments. TYPE: `Any` |

| RETURNS | DESCRIPTION |
|---|---|
| `Runnable[LanguageModelInput, BaseMessage]` | A runnable that will use thinking mode when enabled. |
Example
```python
from langchain_nvidia_ai_endpoints import ChatNVIDIA

model = ChatNVIDIA(model="nvidia/llama-3.1-nemotron-nano-8b-v1")

# Enable thinking mode
thinking_model = model.with_thinking_mode(enabled=True)
response = thinking_model.invoke("Hello")

# Disable thinking mode
no_thinking_model = model.with_thinking_mode(enabled=False)
response = no_thinking_model.invoke("Hello")
```
NVIDIAEmbeddings
¶
Bases: BaseModel, Embeddings
Client to NVIDIA embeddings models.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| `model` | The name of the model to use. |
| `truncate` | Truncation strategy for inputs that exceed the model's maximum token length (`"NONE"`, `"START"`, or `"END"`). |
| `dimensions` | The number of dimensions for the embeddings. This parameter is not supported by all models. |
| METHOD | DESCRIPTION |
|---|---|
| `aembed_documents` | Asynchronous Embed search docs. |
| `aembed_query` | Asynchronous Embed query text. |
| `__init__` | Create a new `NVIDIAEmbeddings` embedder. |
| `get_available_models` | Get a list of available models that work with `NVIDIAEmbeddings`. |
| `embed_query` | Input pathway for query embeddings. |
| `embed_documents` | Input pathway for document embeddings. |
available_models
property
¶
Get a list of available models that work with NVIDIAEmbeddings.
aembed_documents
async
¶
aembed_query
async
¶
__init__
¶
__init__(**kwargs: Any)
Create a new NVIDIAEmbeddings embedder.
This class provides access to a NVIDIA NIM for embedding. By default, it
connects to a hosted NIM, but can be configured to connect to a local NIM
using the base_url parameter.
An API key is required to connect to the hosted NIM.
| PARAMETER | DESCRIPTION |
|---|---|
| `model` | The model to use for embedding. TYPE: `str` |
| `nvidia_api_key` | The API key to use for connecting to the hosted NIM. TYPE: `str` |
| `api_key` | Alternative to `nvidia_api_key`. TYPE: `str` |
| `base_url` | The base URL of the NIM to connect to. Format for base URL is `http://host:port`. TYPE: `str` |
| `truncate` | Truncate input text if it exceeds the model's maximum token length. |
| `dimensions` | The number of dimensions for the embeddings. This parameter is not supported by all models. TYPE: `int` |
| `default_headers` | Default headers merged into all requests. |
The recommended way to provide the API key is through the NVIDIA_API_KEY
environment variable.
Base URL:

- Connect to a self-hosted model with NVIDIA NIM using the `base_url` arg to link to the local host at `localhost:8000`:
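For example, a sketch assuming a NIM serving an embedding model locally (the exact path may vary by NIM version):

from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings

# Assumes a local NIM listening at localhost:8000 with an OpenAI-compatible /v1 path
embedder = NVIDIAEmbeddings(base_url="http://localhost:8000/v1")
print(embedder.embed_query("Hello, world!")[:4])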
get_available_models
classmethod
¶
Get a list of available models that work with NVIDIAEmbeddings.
NVIDIA
¶
Bases: LLM
LangChain LLM that uses the Completions API with NVIDIA NIMs.
| METHOD | DESCRIPTION |
|---|---|
| `get_name` | Get the name of the `Runnable`. |
| `get_input_schema` | Get a Pydantic model that can be used to validate input to the `Runnable`. |
| `get_input_jsonschema` | Get a JSON schema that represents the input to the `Runnable`. |
| `get_output_schema` | Get a Pydantic model that can be used to validate output to the `Runnable`. |
| `get_output_jsonschema` | Get a JSON schema that represents the output of the `Runnable`. |
| `config_schema` | The type of config this `Runnable` accepts specified as a Pydantic model. |
| `get_config_jsonschema` | Get a JSON schema that represents the config of the `Runnable`. |
| `get_graph` | Return a graph representation of this `Runnable`. |
| `get_prompts` | Return a list of prompts used by this `Runnable`. |
| `__or__` | Runnable "or" operator. |
| `__ror__` | Runnable "reverse-or" operator. |
| `pipe` | Pipe `Runnable` objects. |
| `pick` | Pick keys from the output `dict` of this `Runnable`. |
| `assign` | Assigns new fields to the `dict` output of this `Runnable`. |
| `invoke` | Transform a single input into an output. |
| `ainvoke` | Transform a single input into an output. |
| `batch` | Default implementation runs invoke in parallel using a thread pool executor. |
| `batch_as_completed` | Run `invoke` in parallel on a list of inputs, yielding results as they complete. |
| `abatch` | Default implementation runs `ainvoke` in parallel using `asyncio.gather`. |
| `abatch_as_completed` | Run `ainvoke` in parallel on a list of inputs, yielding results as they complete. |
| `stream` | Default implementation of `stream`, which calls `invoke`. |
| `astream` | Default implementation of `astream`, which calls `ainvoke`. |
| `astream_log` | Stream all output from a `Runnable`, as reported to the callback system. |
| `astream_events` | Generate a stream of events. |
| `transform` | Transform inputs to outputs. |
| `atransform` | Transform inputs to outputs. |
| `bind` | Bind arguments to a `Runnable`, returning a new `Runnable`. |
| `with_config` | Bind config to a `Runnable`, returning a new `Runnable`. |
| `with_listeners` | Bind lifecycle listeners to a `Runnable`, returning a new `Runnable`. |
| `with_alisteners` | Bind async lifecycle listeners to a `Runnable`, returning a new `Runnable`. |
| `with_types` | Bind input and output types to a `Runnable`, returning a new `Runnable`. |
| `with_retry` | Create a new `Runnable` that retries the original `Runnable` on exceptions. |
| `map` | Return a new `Runnable` that maps a list of inputs to a list of outputs. |
| `with_fallbacks` | Add fallbacks to a `Runnable`, returning a new `Runnable`. |
| `as_tool` | Create a `BaseTool` from a `Runnable`. |
| `is_lc_serializable` | Is this class serializable? |
| `get_lc_namespace` | Get the namespace of the LangChain object. |
| `lc_id` | Return a unique identifier for this class for serialization purposes. |
| `to_json` | Serialize the `Runnable` to JSON. |
| `to_json_not_implemented` | Serialize a "not implemented" object. |
| `configurable_fields` | Configure particular `Runnable` fields at runtime. |
| `configurable_alternatives` | Configure alternatives for `Runnable` objects that can be set at runtime. |
| `set_verbose` | If verbose is `None`, set it. |
| `generate_prompt` | Pass a sequence of prompts to the model and return model generations. |
| `agenerate_prompt` | Asynchronously pass a sequence of prompts and return model generations. |
| `with_structured_output` | Not implemented on this class. |
| `get_token_ids` | Return the ordered IDs of the tokens in a text. |
| `get_num_tokens` | Get the number of tokens present in the text. |
| `get_num_tokens_from_messages` | Get the number of tokens in the messages. |
| `generate` | Pass a sequence of prompts to a model and return generations. |
| `agenerate` | Asynchronously pass a sequence of prompts to a model and return generations. |
| `__str__` | Return a string representation of the object for printing. |
| `dict` | Return a dictionary of the LLM. |
| `save` | Save the LLM. |
| `__init__` | Create a new NVIDIA LLM for Completions APIs. |
| `get_available_models` | Get a list of available models that work with the Completions API. |
name
class-attribute
instance-attribute
¶
name: str | None = None
The name of the Runnable. Used for debugging and tracing.
input_schema
property
¶
The type of input this Runnable accepts specified as a Pydantic model.
output_schema
property
¶
Output schema.
The type of output this Runnable produces specified as a Pydantic model.
config_specs
property
¶
config_specs: list[ConfigurableFieldSpec]
List configurable fields for this Runnable.
lc_secrets
property
¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
lc_attributes
property
¶
lc_attributes: dict
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
Default is an empty dictionary.
cache
class-attribute
instance-attribute
¶
Whether to cache the response.

- If `True`, will use the global cache.
- If `False`, will not use a cache.
- If `None`, will use the global cache if it's set, otherwise no cache.
- If an instance of `BaseCache`, will use the provided cache.

Caching is not currently supported for streaming methods of models.
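A minimal sketch of enabling the global cache, so that the default cache=None resolves to it:

from langchain_core.caches import InMemoryCache
from langchain_core.globals import set_llm_cache

# Subsequent identical prompts are answered from the in-memory cache
set_llm_cache(InMemoryCache())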
verbose
class-attribute
instance-attribute
¶
Whether to print out response text.
callbacks
class-attribute
instance-attribute
¶
callbacks: Callbacks = Field(default=None, exclude=True)
Callbacks to add to the run trace.
tags
class-attribute
instance-attribute
¶
Tags to add to the run trace.
metadata
class-attribute
instance-attribute
¶
Metadata to add to the run trace.
custom_get_token_ids
class-attribute
instance-attribute
¶
Optional encoder to use for counting tokens.
available_models
property
¶
Get a list of available models that work with NVIDIA.
get_name
¶
get_input_schema
¶
get_input_schema(config: RunnableConfig | None = None) -> type[BaseModel]
Get a Pydantic model that can be used to validate input to the Runnable.
Runnable objects that leverage the configurable_fields and
configurable_alternatives methods will have a dynamic input schema that
depends on which configuration the Runnable is invoked with.
This method lets you get an input schema for a specific configuration.
| PARAMETER | DESCRIPTION |
|---|---|
config
|
A config to use when generating the schema.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
type[BaseModel]
|
A Pydantic model that can be used to validate input. |
get_input_jsonschema
¶
get_input_jsonschema(config: RunnableConfig | None = None) -> dict[str, Any]
Get a JSON schema that represents the input to the Runnable.
| PARAMETER | DESCRIPTION |
|---|---|
config
|
A config to use when generating the schema.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
dict[str, Any]
|
A JSON schema that represents the input to the |
Example
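For instance, a sketch using a simple RunnableLambda (the add_one helper is illustrative):

from langchain_core.runnables import RunnableLambda

def add_one(x: int) -> int:
    return x + 1

runnable = RunnableLambda(add_one)
# A JSON schema dict, e.g. {'title': 'add_one_input', 'type': 'integer'}
print(runnable.get_input_jsonschema())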
Added in langchain-core 0.3.0
get_output_schema
¶
get_output_schema(config: RunnableConfig | None = None) -> type[BaseModel]
Get a Pydantic model that can be used to validate output to the Runnable.
Runnable objects that leverage the configurable_fields and
configurable_alternatives methods will have a dynamic output schema that
depends on which configuration the Runnable is invoked with.
This method lets you get an output schema for a specific configuration.
| PARAMETER | DESCRIPTION |
|---|---|
config
|
A config to use when generating the schema.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
type[BaseModel]
|
A Pydantic model that can be used to validate output. |
get_output_jsonschema
¶
get_output_jsonschema(config: RunnableConfig | None = None) -> dict[str, Any]
Get a JSON schema that represents the output of the Runnable.
| PARAMETER | DESCRIPTION |
|---|---|
config
|
A config to use when generating the schema.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
dict[str, Any]
|
A JSON schema that represents the output of the |
Example
Added in langchain-core 0.3.0
config_schema
¶
The type of config this Runnable accepts specified as a Pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
| PARAMETER | DESCRIPTION |
|---|---|
include
|
A list of fields to include in the config schema. |
| RETURNS | DESCRIPTION |
|---|---|
type[BaseModel]
|
A Pydantic model that can be used to validate config. |
get_config_jsonschema
¶
get_graph
¶
get_graph(config: RunnableConfig | None = None) -> Graph
Return a graph representation of this Runnable.
get_prompts
¶
get_prompts(config: RunnableConfig | None = None) -> list[BasePromptTemplate]
Return a list of prompts used by this Runnable.
__or__
¶
__or__(
other: Runnable[Any, Other]
| Callable[[Iterator[Any]], Iterator[Other]]
| Callable[[AsyncIterator[Any]], AsyncIterator[Other]]
| Callable[[Any], Other]
| Mapping[str, Runnable[Any, Other] | Callable[[Any], Other] | Any],
) -> RunnableSerializable[Input, Other]
Runnable "or" operator.
Compose this Runnable with another object to create a
RunnableSequence.
| PARAMETER | DESCRIPTION |
|---|---|
other
|
Another
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
RunnableSerializable[Input, Other]
|
A new |
__ror__
¶
__ror__(
other: Runnable[Other, Any]
| Callable[[Iterator[Other]], Iterator[Any]]
| Callable[[AsyncIterator[Other]], AsyncIterator[Any]]
| Callable[[Other], Any]
| Mapping[str, Runnable[Other, Any] | Callable[[Other], Any] | Any],
) -> RunnableSerializable[Other, Output]
Runnable "reverse-or" operator.
Compose this Runnable with another object to create a
RunnableSequence.
| PARAMETER | DESCRIPTION |
|---|---|
other
|
Another
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
RunnableSerializable[Other, Output]
|
A new |
pipe
¶
pipe(
*others: Runnable[Any, Other] | Callable[[Any], Other], name: str | None = None
) -> RunnableSerializable[Input, Other]
Pipe Runnable objects.
Compose this Runnable with Runnable-like objects to make a
RunnableSequence.
Equivalent to RunnableSequence(self, *others) or self | others[0] | ...
Example
from langchain_core.runnables import RunnableLambda
def add_one(x: int) -> int:
return x + 1
def mul_two(x: int) -> int:
return x * 2
runnable_1 = RunnableLambda(add_one)
runnable_2 = RunnableLambda(mul_two)
sequence = runnable_1.pipe(runnable_2)
# Or equivalently:
# sequence = runnable_1 | runnable_2
# sequence = RunnableSequence(first=runnable_1, last=runnable_2)
sequence.invoke(1)
await sequence.ainvoke(1)
# -> 4
sequence.batch([1, 2, 3])
await sequence.abatch([1, 2, 3])
# -> [4, 6, 8]
| PARAMETER | DESCRIPTION |
|---|---|
*others
|
Other
TYPE:
|
name
|
An optional name for the resulting
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
RunnableSerializable[Input, Other]
|
A new |
pick
¶
Pick keys from the output dict of this Runnable.
Pick a single key
import json
from langchain_core.runnables import RunnableLambda, RunnableMap
as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)
chain = RunnableMap(str=as_str, json=as_json)
chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3]}
json_only_chain = chain.pick("json")
json_only_chain.invoke("[1, 2, 3]")
# -> [1, 2, 3]
Pick a list of keys
from typing import Any
import json
from langchain_core.runnables import RunnableLambda, RunnableMap
as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)
def as_bytes(x: Any) -> bytes:
return bytes(x, "utf-8")
chain = RunnableMap(
str=as_str, json=as_json, bytes=RunnableLambda(as_bytes)
)
chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3], "bytes": b"[1, 2, 3]"}
json_and_bytes_chain = chain.pick(["json", "bytes"])
json_and_bytes_chain.invoke("[1, 2, 3]")
# -> {"json": [1, 2, 3], "bytes": b"[1, 2, 3]"}
| PARAMETER | DESCRIPTION |
|---|---|
keys
|
A key or list of keys to pick from the output dict. |
| RETURNS | DESCRIPTION |
|---|---|
RunnableSerializable[Any, Any]
|
a new |
assign
¶
assign(
**kwargs: Runnable[dict[str, Any], Any]
| Callable[[dict[str, Any]], Any]
| Mapping[str, Runnable[dict[str, Any], Any] | Callable[[dict[str, Any]], Any]],
) -> RunnableSerializable[Any, Any]
Assigns new fields to the dict output of this Runnable.
from langchain_core.language_models.fake import FakeStreamingListLLM
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import SystemMessagePromptTemplate
from langchain_core.runnables import Runnable
from operator import itemgetter
prompt = (
SystemMessagePromptTemplate.from_template("You are a nice assistant.")
+ "{question}"
)
model = FakeStreamingListLLM(responses=["foo-lish"])
chain: Runnable = prompt | model | {"str": StrOutputParser()}
chain_with_assign = chain.assign(hello=itemgetter("str") | model)
print(chain_with_assign.input_schema.model_json_schema())
# {'title': 'PromptInput', 'type': 'object', 'properties':
#     {'question': {'title': 'Question', 'type': 'string'}}}
print(chain_with_assign.output_schema.model_json_schema())
# {'title': 'RunnableSequenceOutput', 'type': 'object', 'properties':
#     {'str': {'title': 'Str', 'type': 'string'},
#      'hello': {'title': 'Hello', 'type': 'string'}}}
| PARAMETER | DESCRIPTION |
|---|---|
**kwargs
|
A mapping of keys to
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
RunnableSerializable[Any, Any]
|
A new |
invoke
¶
invoke(
input: LanguageModelInput,
config: RunnableConfig | None = None,
*,
stop: list[str] | None = None,
**kwargs: Any,
) -> str
Transform a single input into an output.
| PARAMETER | DESCRIPTION |
|---|---|
| `input` | The input to the `Runnable`. TYPE: `LanguageModelInput` |
| `config` | A config to use when invoking the `Runnable`. The config supports standard keys like `'tags'` and `'metadata'` for tracing purposes, `'max_concurrency'` for controlling how much work to do in parallel, and other keys. Please refer to `RunnableConfig` for more details. TYPE: `RunnableConfig \| None` DEFAULT: `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `Output` | The output of the `Runnable`. |
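A sketch of a single completion call, assuming NVIDIA_API_KEY is set (the model name is illustrative; use one returned by get_available_models):

from langchain_nvidia_ai_endpoints import NVIDIA

llm = NVIDIA(model="bigcode/starcoder2-7b")  # illustrative model name
print(llm.invoke("def hello():", stop=["\n\n"]))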
ainvoke
async
¶
ainvoke(
input: LanguageModelInput,
config: RunnableConfig | None = None,
*,
stop: list[str] | None = None,
**kwargs: Any,
) -> str
Transform a single input into an output.
| PARAMETER | DESCRIPTION |
|---|---|
input
|
The input to the
TYPE:
|
config
|
A config to use when invoking the `Runnable`. The config supports standard keys like `'tags'` and `'metadata'` for tracing purposes, `'max_concurrency'` for controlling how much work to do in parallel, and other keys. Please refer to `RunnableConfig` for more details.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
Output
|
The output of the |
batch
¶
batch(
inputs: list[LanguageModelInput],
config: RunnableConfig | list[RunnableConfig] | None = None,
*,
return_exceptions: bool = False,
**kwargs: Any,
) -> list[str]
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses must override this method if they can batch more efficiently;
e.g., if the underlying Runnable uses an API which supports a batch mode.
| PARAMETER | DESCRIPTION |
|---|---|
inputs
|
A list of inputs to the
TYPE:
|
config
|
A config to use when invoking the `Runnable`. Please refer to `RunnableConfig` for more details.
TYPE:
|
return_exceptions
|
Whether to return exceptions instead of raising them.
TYPE:
|
**kwargs
|
Additional keyword arguments to pass to the
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
list[Output]
|
A list of outputs from the |
batch_as_completed
¶
batch_as_completed(
inputs: Sequence[Input],
config: RunnableConfig | Sequence[RunnableConfig] | None = None,
*,
return_exceptions: bool = False,
**kwargs: Any | None,
) -> Iterator[tuple[int, Output | Exception]]
Run invoke in parallel on a list of inputs.
Yields results as they complete.
| PARAMETER | DESCRIPTION |
|---|---|
inputs
|
A list of inputs to the
TYPE:
|
config
|
A config to use when invoking the `Runnable`. The config supports standard keys like `'tags'` and `'metadata'` for tracing purposes, `'max_concurrency'` for controlling how much work to do in parallel, and other keys. Please refer to `RunnableConfig` for more details.
TYPE:
|
return_exceptions
|
Whether to return exceptions instead of raising them.
TYPE:
|
**kwargs
|
Additional keyword arguments to pass to the
TYPE:
|
| YIELDS | DESCRIPTION |
|---|---|
tuple[int, Output | Exception]
|
Tuples of the index of the input and the output from the |
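A sketch of consuming results as they finish rather than in input order, under the same assumptions as above:

from langchain_nvidia_ai_endpoints import NVIDIA

llm = NVIDIA()  # assumes NVIDIA_API_KEY and the package's default model
prompts = ["Define latency.", "Define bandwidth.", "Define throughput."]
for index, output in llm.batch_as_completed(prompts):
    # Results arrive in completion order; `index` maps back to the prompt
    print(index, output[:40])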
abatch
async
¶
abatch(
inputs: list[LanguageModelInput],
config: RunnableConfig | list[RunnableConfig] | None = None,
*,
return_exceptions: bool = False,
**kwargs: Any,
) -> list[str]
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses must override this method if they can batch more efficiently;
e.g., if the underlying Runnable uses an API which supports a batch mode.
| PARAMETER | DESCRIPTION |
|---|---|
inputs
|
A list of inputs to the
TYPE:
|
config
|
A config to use when invoking the `Runnable`. The config supports standard keys like `'tags'` and `'metadata'` for tracing purposes, `'max_concurrency'` for controlling how much work to do in parallel, and other keys. Please refer to `RunnableConfig` for more details.
TYPE:
|
return_exceptions
|
Whether to return exceptions instead of raising them.
TYPE:
|
**kwargs
|
Additional keyword arguments to pass to the
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
list[Output]
|
A list of outputs from the |
abatch_as_completed
async
¶
abatch_as_completed(
inputs: Sequence[Input],
config: RunnableConfig | Sequence[RunnableConfig] | None = None,
*,
return_exceptions: bool = False,
**kwargs: Any | None,
) -> AsyncIterator[tuple[int, Output | Exception]]
Run ainvoke in parallel on a list of inputs.
Yields results as they complete.
| PARAMETER | DESCRIPTION |
|---|---|
inputs
|
A list of inputs to the
TYPE:
|
config
|
A config to use when invoking the `Runnable`. The config supports standard keys like `'tags'` and `'metadata'` for tracing purposes, `'max_concurrency'` for controlling how much work to do in parallel, and other keys. Please refer to `RunnableConfig` for more details.
TYPE:
|
return_exceptions
|
Whether to return exceptions instead of raising them.
TYPE:
|
**kwargs
|
Additional keyword arguments to pass to the
TYPE:
|
| YIELDS | DESCRIPTION |
|---|---|
AsyncIterator[tuple[int, Output | Exception]]
|
A tuple of the index of the input and the output from the |
stream
¶
stream(
input: LanguageModelInput,
config: RunnableConfig | None = None,
*,
stop: list[str] | None = None,
**kwargs: Any,
) -> Iterator[str]
Default implementation of stream, which calls invoke.
Subclasses must override this method if they support streaming output.
| PARAMETER | DESCRIPTION |
|---|---|
| `input` | The input to the `Runnable`. TYPE: `LanguageModelInput` |
| `config` | The config to use for the `Runnable`. TYPE: `RunnableConfig \| None` DEFAULT: `None` |
| `**kwargs` | Additional keyword arguments to pass to the `Runnable`. TYPE: `Any` |

| YIELDS | DESCRIPTION |
|---|---|
| `Output` | The output of the `Runnable`. |
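A sketch of incremental output, under the same assumptions as above:

from langchain_nvidia_ai_endpoints import NVIDIA

llm = NVIDIA()  # assumes NVIDIA_API_KEY and the package's default model
for chunk in llm.stream("Count to five:"):
    print(chunk, end="", flush=True)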
astream
async
¶
astream(
input: LanguageModelInput,
config: RunnableConfig | None = None,
*,
stop: list[str] | None = None,
**kwargs: Any,
) -> AsyncIterator[str]
Default implementation of astream, which calls ainvoke.
Subclasses must override this method if they support streaming output.
| PARAMETER | DESCRIPTION |
|---|---|
input
|
The input to the
TYPE:
|
config
|
The config to use for the
TYPE:
|
**kwargs
|
Additional keyword arguments to pass to the
TYPE:
|
| YIELDS | DESCRIPTION |
|---|---|
AsyncIterator[Output]
|
The output of the |
astream_log
async
¶
astream_log(
input: Any,
config: RunnableConfig | None = None,
*,
diff: bool = True,
with_streamed_output_list: bool = True,
include_names: Sequence[str] | None = None,
include_types: Sequence[str] | None = None,
include_tags: Sequence[str] | None = None,
exclude_names: Sequence[str] | None = None,
exclude_types: Sequence[str] | None = None,
exclude_tags: Sequence[str] | None = None,
**kwargs: Any,
) -> AsyncIterator[RunLogPatch] | AsyncIterator[RunLog]
Stream all output from a Runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of Jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run.
The Jsonpatch ops can be applied in order to construct state.
| PARAMETER | DESCRIPTION |
|---|---|
input
|
The input to the
TYPE:
|
config
|
The config to use for the
TYPE:
|
diff
|
Whether to yield diffs between each step or the current state.
TYPE:
|
with_streamed_output_list
|
Whether to yield the
TYPE:
|
include_names
|
Only include logs with these names. |
include_types
|
Only include logs with these types. |
include_tags
|
Only include logs with these tags. |
exclude_names
|
Exclude logs with these names. |
exclude_types
|
Exclude logs with these types. |
exclude_tags
|
Exclude logs with these tags. |
**kwargs
|
Additional keyword arguments to pass to the
TYPE:
|
| YIELDS | DESCRIPTION |
|---|---|
AsyncIterator[RunLogPatch] | AsyncIterator[RunLog]
|
A |
astream_events
async
¶
astream_events(
input: Any,
config: RunnableConfig | None = None,
*,
version: Literal["v1", "v2"] = "v2",
include_names: Sequence[str] | None = None,
include_types: Sequence[str] | None = None,
include_tags: Sequence[str] | None = None,
exclude_names: Sequence[str] | None = None,
exclude_types: Sequence[str] | None = None,
exclude_tags: Sequence[str] | None = None,
**kwargs: Any,
) -> AsyncIterator[StreamEvent]
Generate a stream of events.
Use to create an iterator over StreamEvents that provide real-time information about the progress of the Runnable, including StreamEvents from intermediate results.
A StreamEvent is a dictionary with the following schema:
- `event`: Event names are of the format: `on_[runnable_type]_(start|stream|end)`.
- `name`: The name of the `Runnable` that generated the event.
- `run_id`: Randomly generated ID associated with the given execution of the `Runnable` that emitted the event. A child `Runnable` that gets invoked as part of the execution of a parent `Runnable` is assigned its own unique ID.
- `parent_ids`: The IDs of the parent runnables that generated the event. The root `Runnable` will have an empty list. The order of the parent IDs is from the root to the immediate parent. Only available for the v2 version of the API. The v1 version of the API will return an empty list.
- `tags`: The tags of the `Runnable` that generated the event.
- `metadata`: The metadata of the `Runnable` that generated the event.
- `data`: The data associated with the event. The contents of this field depend on the type of event. See the table below for more details.
Below is a table that illustrates some events that might be emitted by various chains. Metadata fields have been omitted from the table for brevity. Chain definitions have been included after the table.
Note
This reference table is for the v2 version of the schema.
| event | name | chunk | input | output |
|---|---|---|---|---|
| `on_chat_model_start` | `'[model name]'` | | `{"messages": [[SystemMessage, HumanMessage]]}` | |
| `on_chat_model_stream` | `'[model name]'` | `AIMessageChunk(content="hello")` | | |
| `on_chat_model_end` | `'[model name]'` | | `{"messages": [[SystemMessage, HumanMessage]]}` | `AIMessageChunk(content="hello world")` |
| `on_llm_start` | `'[model name]'` | | `{'input': 'hello'}` | |
| `on_llm_stream` | `'[model name]'` | `'Hello'` | | |
| `on_llm_end` | `'[model name]'` | | `'Hello human!'` | |
| `on_chain_start` | `'format_docs'` | | | |
| `on_chain_stream` | `'format_docs'` | `'hello world!, goodbye world!'` | | |
| `on_chain_end` | `'format_docs'` | | `[Document(...)]` | `'hello world!, goodbye world!'` |
| `on_tool_start` | `'some_tool'` | | `{"x": 1, "y": "2"}` | |
| `on_tool_end` | `'some_tool'` | | | `{"x": 1, "y": "2"}` |
| `on_retriever_start` | `'[retriever name]'` | | `{"query": "hello"}` | |
| `on_retriever_end` | `'[retriever name]'` | | `{"query": "hello"}` | `[Document(...), ..]` |
| `on_prompt_start` | `'[template_name]'` | | `{"question": "hello"}` | |
| `on_prompt_end` | `'[template_name]'` | | `{"question": "hello"}` | `ChatPromptValue(messages: [SystemMessage, ...])` |
In addition to the standard events, users can also dispatch custom events (see example below).
Custom events will only be surfaced in the v2 version of the API!
A custom event has the following format:
| Attribute | Type | Description |
|---|---|---|
| `name` | `str` | A user defined name for the event. |
| `data` | `Any` | The data associated with the event. This can be anything, though we suggest making it JSON serializable. |
Here are declarations associated with the standard events shown above:
format_docs:
from langchain_core.documents import Document
from langchain_core.runnables import RunnableLambda

def format_docs(docs: list[Document]) -> str:
    '''Format the docs.'''
    return ", ".join([doc.page_content for doc in docs])

format_docs = RunnableLambda(format_docs)
some_tool:
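The some_tool declaration is not rendered here; a plausible reconstruction, consistent with the tool inputs and outputs shown in the table above:

from langchain_core.tools import tool

@tool
def some_tool(x: int, y: str) -> dict:
    '''Some_tool.'''
    return {"x": x, "y": y}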
prompt:
from langchain_core.prompts import ChatPromptTemplate

template = ChatPromptTemplate.from_messages(
    [
        ("system", "You are Cat Agent 007"),
        ("human", "{question}"),
    ]
).with_config({"run_name": "my_template", "tags": ["my_template"]})
Example
from langchain_core.runnables import RunnableLambda
async def reverse(s: str) -> str:
return s[::-1]
chain = RunnableLambda(func=reverse)
events = [
event async for event in chain.astream_events("hello", version="v2")
]
# Will produce the following events
# (run_id, and parent_ids has been omitted for brevity):
[
{
"data": {"input": "hello"},
"event": "on_chain_start",
"metadata": {},
"name": "reverse",
"tags": [],
},
{
"data": {"chunk": "olleh"},
"event": "on_chain_stream",
"metadata": {},
"name": "reverse",
"tags": [],
},
{
"data": {"output": "olleh"},
"event": "on_chain_end",
"metadata": {},
"name": "reverse",
"tags": [],
},
]
from langchain_core.callbacks.manager import (
adispatch_custom_event,
)
from langchain_core.runnables import RunnableLambda, RunnableConfig
import asyncio
async def slow_thing(some_input: str, config: RunnableConfig) -> str:
"""Do something that takes a long time."""
await asyncio.sleep(1) # Placeholder for some slow operation
await adispatch_custom_event(
"progress_event",
{"message": "Finished step 1 of 3"},
config=config # Must be included for python < 3.10
)
await asyncio.sleep(1) # Placeholder for some slow operation
await adispatch_custom_event(
"progress_event",
{"message": "Finished step 2 of 3"},
config=config # Must be included for python < 3.10
)
await asyncio.sleep(1) # Placeholder for some slow operation
return "Done"
slow_thing = RunnableLambda(slow_thing)
async for event in slow_thing.astream_events("some_input", version="v2"):
print(event)
| PARAMETER | DESCRIPTION |
|---|---|
input
|
The input to the
TYPE:
|
config
|
The config to use for the
TYPE:
|
version
|
The version of the schema to use, either 'v2' or 'v1'. Users should use 'v2'; 'v1' is for backwards compatibility. No default will be assigned until the API is stabilized. Custom events will only be surfaced in 'v2'.
TYPE: Literal['v1', 'v2'] DEFAULT: 'v2'
|
include_names
|
Only include events from |
include_types
|
Only include events from |
include_tags
|
Only include events from |
exclude_names
|
Exclude events from |
exclude_types
|
Exclude events from |
exclude_tags
|
Exclude events from |
**kwargs
|
Additional keyword arguments to pass to the These will be passed to
TYPE:
|
| YIELDS | DESCRIPTION |
|---|---|
AsyncIterator[StreamEvent]
|
An async stream of |
| RAISES | DESCRIPTION |
|---|---|
NotImplementedError
|
If the version is not |
transform
¶
transform(
input: Iterator[Input], config: RunnableConfig | None = None, **kwargs: Any | None
) -> Iterator[Output]
Transform inputs to outputs.
Default implementation of transform, which buffers input and calls stream.
Subclasses must override this method if they can start producing output while input is still being generated.
| PARAMETER | DESCRIPTION |
|---|---|
input
|
An iterator of inputs to the
TYPE:
|
config
|
The config to use for the
TYPE:
|
**kwargs
|
Additional keyword arguments to pass to the
TYPE:
|
| YIELDS | DESCRIPTION |
|---|---|
Output
|
The output of the |
atransform
async
¶
atransform(
input: AsyncIterator[Input],
config: RunnableConfig | None = None,
**kwargs: Any | None,
) -> AsyncIterator[Output]
Transform inputs to outputs.
Default implementation of atransform, which buffers input and calls astream.
Subclasses must override this method if they can start producing output while input is still being generated.
| PARAMETER | DESCRIPTION |
|---|---|
input
|
An async iterator of inputs to the
TYPE:
|
config
|
The config to use for the
TYPE:
|
**kwargs
|
Additional keyword arguments to pass to the
TYPE:
|
| YIELDS | DESCRIPTION |
|---|---|
AsyncIterator[Output]
|
The output of the |
bind
¶
Bind arguments to a Runnable, returning a new Runnable.
Useful when a Runnable in a chain requires an argument that is not
in the output of the previous Runnable or included in the user input.
| PARAMETER | DESCRIPTION |
|---|---|
**kwargs
|
The arguments to bind to the
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
Runnable[Input, Output]
|
A new |
Example
from langchain_ollama import ChatOllama
from langchain_core.output_parsers import StrOutputParser
model = ChatOllama(model="llama3.1")
# Without bind
chain = model | StrOutputParser()
chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two three four five.'
# With bind
chain = model.bind(stop=["three"]) | StrOutputParser()
chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two'
with_config
¶
with_config(
config: RunnableConfig | None = None, **kwargs: Any
) -> Runnable[Input, Output]
Bind config to a Runnable, returning a new Runnable.
| PARAMETER | DESCRIPTION |
|---|---|
config
|
The config to bind to the
TYPE:
|
**kwargs
|
Additional keyword arguments to pass to the
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
Runnable[Input, Output]
|
A new |
with_listeners
¶
with_listeners(
*,
on_start: Callable[[Run], None]
| Callable[[Run, RunnableConfig], None]
| None = None,
on_end: Callable[[Run], None] | Callable[[Run, RunnableConfig], None] | None = None,
on_error: Callable[[Run], None]
| Callable[[Run, RunnableConfig], None]
| None = None,
) -> Runnable[Input, Output]
Bind lifecycle listeners to a Runnable, returning a new Runnable.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and
any tags or metadata added to the run.
| PARAMETER | DESCRIPTION |
|---|---|
on_start
|
Called before the
TYPE:
|
on_end
|
Called after the
TYPE:
|
on_error
|
Called if the
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
Runnable[Input, Output]
|
A new |
Example
from langchain_core.runnables import RunnableLambda
from langchain_core.tracers.schemas import Run
import time
def test_runnable(time_to_sleep: int):
time.sleep(time_to_sleep)
def fn_start(run_obj: Run):
print("start_time:", run_obj.start_time)
def fn_end(run_obj: Run):
print("end_time:", run_obj.end_time)
chain = RunnableLambda(test_runnable).with_listeners(
on_start=fn_start, on_end=fn_end
)
chain.invoke(2)
with_alisteners
¶
with_alisteners(
*,
on_start: AsyncListener | None = None,
on_end: AsyncListener | None = None,
on_error: AsyncListener | None = None,
) -> Runnable[Input, Output]
Bind async lifecycle listeners to a Runnable.
Returns a new Runnable.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and
any tags or metadata added to the run.
| PARAMETER | DESCRIPTION |
|---|---|
on_start
|
Called asynchronously before the
TYPE:
|
on_end
|
Called asynchronously after the
TYPE:
|
on_error
|
Called asynchronously if the
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
Runnable[Input, Output]
|
A new |
Example
from langchain_core.runnables import RunnableLambda, Runnable
from datetime import datetime, timezone
import time
import asyncio
def format_t(timestamp: float) -> str:
return datetime.fromtimestamp(timestamp, tz=timezone.utc).isoformat()
async def test_runnable(time_to_sleep: int):
print(f"Runnable[{time_to_sleep}s]: starts at {format_t(time.time())}")
await asyncio.sleep(time_to_sleep)
print(f"Runnable[{time_to_sleep}s]: ends at {format_t(time.time())}")
async def fn_start(run_obj: Runnable):
print(f"on start callback starts at {format_t(time.time())}")
await asyncio.sleep(3)
print(f"on start callback ends at {format_t(time.time())}")
async def fn_end(run_obj: Runnable):
print(f"on end callback starts at {format_t(time.time())}")
await asyncio.sleep(2)
print(f"on end callback ends at {format_t(time.time())}")
runnable = RunnableLambda(test_runnable).with_alisteners(
on_start=fn_start, on_end=fn_end
)
async def concurrent_runs():
await asyncio.gather(runnable.ainvoke(2), runnable.ainvoke(3))
asyncio.run(concurrent_runs())
# Result:
# on start callback starts at 2025-03-01T07:05:22.875378+00:00
# on start callback starts at 2025-03-01T07:05:22.875495+00:00
# on start callback ends at 2025-03-01T07:05:25.878862+00:00
# on start callback ends at 2025-03-01T07:05:25.878947+00:00
# Runnable[2s]: starts at 2025-03-01T07:05:25.879392+00:00
# Runnable[3s]: starts at 2025-03-01T07:05:25.879804+00:00
# Runnable[2s]: ends at 2025-03-01T07:05:27.881998+00:00
# on end callback starts at 2025-03-01T07:05:27.882360+00:00
# Runnable[3s]: ends at 2025-03-01T07:05:28.881737+00:00
# on end callback starts at 2025-03-01T07:05:28.882428+00:00
# on end callback ends at 2025-03-01T07:05:29.883893+00:00
# on end callback ends at 2025-03-01T07:05:30.884831+00:00
with_types
¶
with_types(
*, input_type: type[Input] | None = None, output_type: type[Output] | None = None
) -> Runnable[Input, Output]
Bind input and output types to a Runnable, returning a new Runnable.
| PARAMETER | DESCRIPTION |
|---|---|
input_type
|
The input type to bind to the
TYPE:
|
output_type
|
The output type to bind to the
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
Runnable[Input, Output]
|
A new |
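A sketch of pinning types on an otherwise untyped lambda:

from langchain_core.runnables import RunnableLambda

echo = RunnableLambda(lambda x: x).with_types(input_type=str, output_type=str)
# The bound schema now declares a string input
print(echo.input_schema.model_json_schema())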
with_retry
¶
with_retry(
*,
retry_if_exception_type: tuple[type[BaseException], ...] = (Exception,),
wait_exponential_jitter: bool = True,
exponential_jitter_params: ExponentialJitterParams | None = None,
stop_after_attempt: int = 3,
) -> Runnable[Input, Output]
Create a new Runnable that retries the original Runnable on exceptions.
| PARAMETER | DESCRIPTION |
|---|---|
retry_if_exception_type
|
A tuple of exception types to retry on.
TYPE:
|
wait_exponential_jitter
|
Whether to add jitter to the wait time between retries.
TYPE:
|
stop_after_attempt
|
The maximum number of attempts to make before giving up.
TYPE:
|
exponential_jitter_params
|
Parameters for
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
Runnable[Input, Output]
|
A new |
Example
from langchain_core.runnables import RunnableLambda
count = 0
def _lambda(x: int) -> None:
global count
count = count + 1
if x == 1:
raise ValueError("x is 1")
else:
pass
runnable = RunnableLambda(_lambda)
try:
runnable.with_retry(
stop_after_attempt=2,
retry_if_exception_type=(ValueError,),
).invoke(1)
except ValueError:
pass
assert count == 2
map
¶
with_fallbacks
¶
with_fallbacks(
fallbacks: Sequence[Runnable[Input, Output]],
*,
exceptions_to_handle: tuple[type[BaseException], ...] = (Exception,),
exception_key: str | None = None,
) -> RunnableWithFallbacks[Input, Output]
Add fallbacks to a Runnable, returning a new Runnable.
The new Runnable will try the original Runnable, and then each fallback
in order, upon failures.
| PARAMETER | DESCRIPTION |
|---|---|
fallbacks
|
A sequence of runnables to try if the original |
exceptions_to_handle
|
A tuple of exception types to handle.
TYPE:
|
exception_key
|
If If If used, the base
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
RunnableWithFallbacks[Input, Output]
|
A new |
Example
from typing import Iterator
from langchain_core.runnables import RunnableGenerator
def _generate_immediate_error(input: Iterator) -> Iterator[str]:
raise ValueError()
yield ""
def _generate(input: Iterator) -> Iterator[str]:
yield from "foo bar"
runnable = RunnableGenerator(_generate_immediate_error).with_fallbacks(
[RunnableGenerator(_generate)]
)
print("".join(runnable.stream({}))) # foo bar
as_tool
¶
as_tool(
args_schema: type[BaseModel] | None = None,
*,
name: str | None = None,
description: str | None = None,
arg_types: dict[str, type] | None = None,
) -> BaseTool
Create a BaseTool from a Runnable.
as_tool will instantiate a BaseTool with a name, description, and
args_schema from a Runnable. Where possible, schemas are inferred
from runnable.get_input_schema.
Alternatively (e.g., if the Runnable takes a dict as input and the specific
dict keys are not typed), the schema can be specified directly with
args_schema.
You can also pass arg_types to just specify the required arguments and their
types.
| PARAMETER | DESCRIPTION |
|---|---|
args_schema
|
The schema for the tool. |
name
|
The name of the tool.
TYPE:
|
description
|
The description of the tool.
TYPE:
|
arg_types
|
A dictionary of argument names to types. |
| RETURNS | DESCRIPTION |
|---|---|
BaseTool
|
A |
TypedDict input
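The TypedDict example is not rendered here; a sketch of schema inference from a typed input (names are illustrative):

from typing_extensions import TypedDict
from langchain_core.runnables import RunnableLambda

class Args(TypedDict):
    a: int
    b: list[int]

def f(x: Args) -> str:
    return str(x["a"] * max(x["b"]))

runnable = RunnableLambda(f)
as_tool = runnable.as_tool()
print(as_tool.invoke({"a": 3, "b": [1, 2]}))  # '6'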
dict input, specifying schema via args_schema
from typing import Any
from pydantic import BaseModel, Field
from langchain_core.runnables import RunnableLambda
def f(x: dict[str, Any]) -> str:
return str(x["a"] * max(x["b"]))
class FSchema(BaseModel):
"""Apply a function to an integer and list of integers."""
a: int = Field(..., description="Integer")
b: list[int] = Field(..., description="List of ints")
runnable = RunnableLambda(f)
as_tool = runnable.as_tool(FSchema)
as_tool.invoke({"a": 3, "b": [1, 2]})
dict input, specifying schema via arg_types
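The arg_types variant is likewise elided in this rendering; a sketch under the same assumptions:

from typing import Any
from langchain_core.runnables import RunnableLambda

def g(x: dict[str, Any]) -> str:
    return str(x["a"] * max(x["b"]))

runnable = RunnableLambda(g)
as_tool = runnable.as_tool(arg_types={"a": int, "b": list[int]})
print(as_tool.invoke({"a": 3, "b": [1, 2]}))  # '6'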
is_lc_serializable
classmethod
¶
is_lc_serializable() -> bool
Is this class serializable?
By design, even if a class inherits from Serializable, it is not serializable
by default. This is to prevent accidental serialization of objects that should
not be serialized.
| RETURNS | DESCRIPTION |
|---|---|
bool
|
Whether the class is serializable. Default is |
get_lc_namespace
classmethod
¶
lc_id
classmethod
¶
Return a unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path to the object.
For example, for the class langchain.llms.openai.OpenAI, the id is
["langchain", "llms", "openai", "OpenAI"].
to_json
¶
Serialize the Runnable to JSON.
| RETURNS | DESCRIPTION |
|---|---|
SerializedConstructor | SerializedNotImplemented
|
A JSON-serializable representation of the |
to_json_not_implemented
¶
Serialize a "not implemented" object.
| RETURNS | DESCRIPTION |
|---|---|
SerializedNotImplemented
|
|
configurable_fields
¶
configurable_fields(
**kwargs: AnyConfigurableField,
) -> RunnableSerializable[Input, Output]
Configure particular Runnable fields at runtime.
| PARAMETER | DESCRIPTION |
|---|---|
**kwargs
|
A dictionary of
TYPE:
|
| RAISES | DESCRIPTION |
|---|---|
ValueError
|
If a configuration key is not found in the |
| RETURNS | DESCRIPTION |
|---|---|
RunnableSerializable[Input, Output]
|
A new |
Example
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI
model = ChatOpenAI(max_tokens=20).configurable_fields(
max_tokens=ConfigurableField(
id="output_token_number",
name="Max tokens in the output",
description="The maximum number of tokens in the output",
)
)
# max_tokens = 20
print(
"max_tokens_20: ", model.invoke("tell me something about chess").content
)
# max_tokens = 200
print(
"max_tokens_200: ",
model.with_config(configurable={"output_token_number": 200})
.invoke("tell me something about chess")
.content,
)
configurable_alternatives
¶
configurable_alternatives(
which: ConfigurableField,
*,
default_key: str = "default",
prefix_keys: bool = False,
**kwargs: Runnable[Input, Output] | Callable[[], Runnable[Input, Output]],
) -> RunnableSerializable[Input, Output]
Configure alternatives for Runnable objects that can be set at runtime.
| PARAMETER | DESCRIPTION |
|---|---|
which
|
The
TYPE:
|
default_key
|
The default key to use if no alternative is selected.
TYPE:
|
prefix_keys
|
Whether to prefix the keys with the
TYPE:
|
**kwargs
|
A dictionary of keys to
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
RunnableSerializable[Input, Output]
|
A new |
Example
from langchain_anthropic import ChatAnthropic
from langchain_core.runnables.utils import ConfigurableField
from langchain_openai import ChatOpenAI
model = ChatAnthropic(
model_name="claude-sonnet-4-5-20250929"
).configurable_alternatives(
ConfigurableField(id="llm"),
default_key="anthropic",
openai=ChatOpenAI(),
)
# uses the default model ChatAnthropic
print(model.invoke("which organization created you?").content)
# uses ChatOpenAI
print(
model.with_config(configurable={"llm": "openai"})
.invoke("which organization created you?")
.content
)
set_verbose
¶
generate_prompt
¶
generate_prompt(
prompts: list[PromptValue],
stop: list[str] | None = None,
callbacks: Callbacks | list[Callbacks] | None = None,
**kwargs: Any,
) -> LLMResult
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you:

- want to take advantage of batched calls,
- need more output from the model than just the top generated value,
- are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
| PARAMETER | DESCRIPTION |
|---|---|
prompts
|
List of A
TYPE:
|
stop
|
Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. |
callbacks
|
Used for executing additional functionality, such as logging or streaming, throughout generation.
TYPE:
|
**kwargs
|
Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
LLMResult
|
An |
agenerate_prompt
async
¶
agenerate_prompt(
prompts: list[PromptValue],
stop: list[str] | None = None,
callbacks: Callbacks | list[Callbacks] | None = None,
**kwargs: Any,
) -> LLMResult
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you:

- want to take advantage of batched calls,
- need more output from the model than just the top generated value,
- are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
| PARAMETER | DESCRIPTION |
|---|---|
prompts
|
List of A
TYPE:
|
stop
|
Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. |
callbacks
|
Used for executing additional functionality, such as logging or streaming, throughout generation.
TYPE:
|
**kwargs
|
Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
LLMResult
|
An |
with_structured_output
¶
with_structured_output(
schema: dict | type, **kwargs: Any
) -> Runnable[LanguageModelInput, dict | BaseModel]
Not implemented on this class.
get_token_ids
¶
get_num_tokens
¶
Get the number of tokens present in the text.
Useful for checking if an input fits in a model's context window.
This should be overridden by model-specific implementations to provide accurate token counts via model-specific tokenizers.
| PARAMETER | DESCRIPTION |
|---|---|
text
|
The string input to tokenize.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
int
|
The integer number of tokens in the text. |
get_num_tokens_from_messages
¶
get_num_tokens_from_messages(
messages: list[BaseMessage], tools: Sequence | None = None
) -> int
Get the number of tokens in the messages.
Useful for checking if an input fits in a model's context window.
This should be overridden by model-specific implementations to provide accurate token counts via model-specific tokenizers.
Note

- The base implementation of `get_num_tokens_from_messages` ignores tool schemas.
- The base implementation of `get_num_tokens_from_messages` adds additional prefixes to messages to represent user roles, which will add to the overall token count. Model-specific implementations may choose to handle this differently.
| PARAMETER | DESCRIPTION |
|---|---|
messages
|
The message inputs to tokenize.
TYPE:
|
tools
|
If provided, sequence of dict,
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
int
|
The sum of the number of tokens across the messages. |
generate
¶
generate(
prompts: list[str],
stop: list[str] | None = None,
callbacks: Callbacks | list[Callbacks] | None = None,
*,
tags: list[str] | list[list[str]] | None = None,
metadata: dict[str, Any] | list[dict[str, Any]] | None = None,
run_name: str | list[str] | None = None,
run_id: UUID | list[UUID | None] | None = None,
**kwargs: Any,
) -> LLMResult
Pass a sequence of prompts to a model and return generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you:

- want to take advantage of batched calls,
- need more output from the model than just the top generated value,
- are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
| PARAMETER | DESCRIPTION |
|---|---|
prompts
|
List of string prompts. |
stop
|
Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. |
callbacks
|
Used for executing additional functionality, such as logging or streaming, throughout generation.
TYPE:
|
tags
|
List of tags to associate with each prompt. If provided, the length of the list must match the length of the prompts list. |
metadata
|
List of metadata dictionaries to associate with each prompt. If provided, the length of the list must match the length of the prompts list.
TYPE:
|
run_name
|
List of run names to associate with each prompt. If provided, the length of the list must match the length of the prompts list. |
run_id
|
List of run IDs to associate with each prompt. If provided, the length of the list must match the length of the prompts list. |
**kwargs
|
Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
TYPE:
|
| RAISES | DESCRIPTION |
|---|---|
ValueError
|
If prompts is not a list. |
ValueError
|
If the length of |
| RETURNS | DESCRIPTION |
|---|---|
LLMResult
|
An |
agenerate
async
¶
agenerate(
prompts: list[str],
stop: list[str] | None = None,
callbacks: Callbacks | list[Callbacks] | None = None,
*,
tags: list[str] | list[list[str]] | None = None,
metadata: dict[str, Any] | list[dict[str, Any]] | None = None,
run_name: str | list[str] | None = None,
run_id: UUID | list[UUID | None] | None = None,
**kwargs: Any,
) -> LLMResult
Asynchronously pass a sequence of prompts to a model and return generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you:

- want to take advantage of batched calls,
- need more output from the model than just the top generated value,
- are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
| PARAMETER | DESCRIPTION |
|---|---|
prompts
|
List of string prompts. |
stop
|
Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. |
callbacks
|
Used for executing additional functionality, such as logging or streaming, throughout generation.
TYPE:
|
tags
|
List of tags to associate with each prompt. If provided, the length of the list must match the length of the prompts list. |
metadata
|
List of metadata dictionaries to associate with each prompt. If provided, the length of the list must match the length of the prompts list.
TYPE:
|
run_name
|
List of run names to associate with each prompt. If provided, the length of the list must match the length of the prompts list. |
run_id
|
List of run IDs to associate with each prompt. If provided, the length of the list must match the length of the prompts list. |
**kwargs
|
Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
TYPE:
|
| RAISES | DESCRIPTION |
|---|---|
ValueError
|
If the length of |
| RETURNS | DESCRIPTION |
|---|---|
LLMResult
|
An |
save
¶
Save the LLM.
| PARAMETER | DESCRIPTION |
|---|---|
file_path
|
Path to file to save the LLM to. |
| RAISES | DESCRIPTION |
|---|---|
ValueError
|
If the file path is not a string or Path object. |
__check_kwargs
¶
Check kwargs, warn for unknown keys, and return a copy containing only the recognized keys.
__init__
¶
__init__(**kwargs: Any)
Create a new NVIDIA LLM for Completions APIs.
This class provides access to a NVIDIA NIM for completions. By default, it
connects to a hosted NIM, but can be configured to connect to a local NIM
using the base_url parameter.
An API key is required to connect to the hosted NIM.
| PARAMETER | DESCRIPTION |
|---|---|
| `model` | The model to use for completions. TYPE: `str` |
| `nvidia_api_key` | The API key to use for connecting to the hosted NIM. TYPE: `str` |
| `api_key` | Alternative to `nvidia_api_key`. TYPE: `str` |
| `base_url` | The base URL of the NIM to connect to. TYPE: `str` |
The recommended way to provide the API key is through the NVIDIA_API_KEY
environment variable.
Additional arguments that can be passed to the Completions API:
- max_tokens (int): The maximum number of tokens to generate.
- stop (str or List[str]): The stop sequence to use for generating completions.
- temperature (float): The temperature to use for generating completions.
- top_p (float): The top-p value to use for generating completions.
- frequency_penalty (float): The frequency penalty to apply to the completion.
- presence_penalty (float): The presence penalty to apply to the completion.
- seed (int): The seed to use for generating completions.
These additional arguments can also be bound with bind(), e.g.
NVIDIA().bind(max_tokens=512), or passed directly to invoke() or stream(),
e.g. NVIDIA().invoke("prompt", max_tokens=512).
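For instance, a sketch of binding defaults once and reusing them (model name illustrative):

from langchain_nvidia_ai_endpoints import NVIDIA

llm = NVIDIA(model="bigcode/starcoder2-7b").bind(max_tokens=128, temperature=0.2)
print(llm.invoke("# A function that computes a factorial\ndef"))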
NVIDIARerank
¶
Bases: BaseDocumentCompressor
LangChain Document Compressor that uses the NVIDIA NeMo Retriever Reranking API.
| METHOD | DESCRIPTION |
|---|---|
| `acompress_documents` | Async compress retrieved documents given the query context. |
| `__init__` | Create a new `NVIDIARerank` document compressor. |
| `get_available_models` | Get a list of available models that work with `NVIDIARerank`. |
| `compress_documents` | Compress documents using the NVIDIA NeMo Retriever Reranking microservice API. |
available_models
property
¶
Get a list of available models that work with NVIDIARerank.
acompress_documents
async
¶
acompress_documents(
documents: Sequence[Document], query: str, callbacks: Callbacks | None = None
) -> Sequence[Document]
Async compress retrieved documents given the query context.
| PARAMETER | DESCRIPTION |
|---|---|
documents
|
The retrieved |
query
|
The query context.
TYPE:
|
callbacks
|
Optional
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
Sequence[Document]
|
The compressed documents. |
__init__
¶
__init__(**kwargs: Any)
Create a new NVIDIARerank document compressor.
This class provides access to a NVIDIA NIM for reranking. By default, it
connects to a hosted NIM, but can be configured to connect to a local NIM
using the base_url parameter.
An API key is required to connect to the hosted NIM.
| PARAMETER | DESCRIPTION |
|---|---|
| `model` | The model to use for reranking. TYPE: `str` |
| `nvidia_api_key` | The API key to use for connecting to the hosted NIM. TYPE: `str` |
| `api_key` | Alternative to `nvidia_api_key`. TYPE: `str` |
| `base_url` | The base URL of the NIM to connect to. TYPE: `str` |
| `truncate` | Truncate input text if it exceeds the model's maximum token length. |
| `default_headers` | Default headers merged into all requests. |
| `extra_headers` | Deprecated. Use `default_headers` instead. |
The recommended way to provide the API key is through the NVIDIA_API_KEY
environment variable.
Base URL:

- Connect to a self-hosted model with NVIDIA NIM using the `base_url` arg to link to the local host at `localhost:8000`:
Example
from langchain_nvidia_ai_endpoints import NVIDIARerank
from langchain_core.documents import Document

query = "What is the GPU memory bandwidth of H100 SXM?"
passages = [
    "The Hopper GPU is paired with the Grace CPU using NVIDIA's ultra-fast "
    "chip-to-chip interconnect, delivering 900GB/s of bandwidth, 7X faster "
    "than PCIe Gen5. This innovative design will deliver up to 30X higher "
    "aggregate system memory bandwidth to the GPU compared to today's fastest "
    "servers and up to 10X higher performance for applications running "
    "terabytes of data.",
    "A100 provides up to 20X higher performance over the prior generation "
    "and can be partitioned into seven GPU instances to dynamically adjust to "
    "shifting demands. The A100 80GB debuts the world's fastest memory "
    "bandwidth at over 2 terabytes per second (TB/s) to run the largest "
    "models and datasets.",
    "Accelerated servers with H100 deliver the compute power—along with 3 "
    "terabytes per second (TB/s) of memory bandwidth per GPU and scalability "
    "with NVLink and NVSwitch™.",
]

client = NVIDIARerank(
    model="nvidia/nv-rerankqa-mistral-4b-v3",
    api_key="$API_KEY_REQUIRED_IF_EXECUTING_OUTSIDE_NGC",
)

response = client.compress_documents(
    query=query,
    documents=[Document(page_content=passage) for passage in passages],
)

print(
    f"Most relevant: {response[0].page_content}\n"
    f"Least relevant: {response[-1].page_content}"
)
# Most relevant: Accelerated servers with H100 deliver the compute
# power—along with 3 terabytes per second (TB/s) of memory bandwidth per GPU
# and scalability with NVLink and NVSwitch™.
# Least relevant: A100 provides up to 20X higher performance over the prior
# generation and can be partitioned into seven GPU instances to dynamically
# adjust to shifting demands. The A100 80GB debuts the world's fastest
# memory bandwidth at over 2 terabytes per second (TB/s) to run the largest
# models and datasets.
get_available_models
classmethod
¶
Get a list of available models that work with NVIDIARerank.
compress_documents
¶
compress_documents(
documents: Sequence[Document], query: str, callbacks: Callbacks | None = None
) -> Sequence[Document]
Compress documents using the NVIDIA NeMo Retriever Reranking microservice API.
| PARAMETER | DESCRIPTION |
|---|---|
documents
|
A sequence of documents to compress. |
query
|
The query to use for compressing the documents.
TYPE:
|
callbacks
|
Callbacks to run during the compression process.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
Sequence[Document]
|
A sequence of compressed documents. |
register_model
¶
register_model(model: Model) -> None
Register a model as a known model.
Must be done at the beginning of a program, at least before the model is used or available models are listed.
For instance:
from langchain_nvidia_ai_endpoints import ChatNVIDIA, Model, register_model
register_model(
Model(
id="my-custom-model-name",
model_type="chat",
client="ChatNVIDIA",
endpoint="http://host:port/path-to-my-model"
)
)
llm = ChatNVIDIA(model="my-custom-model-name")
Be sure that the id matches the model parameter the endpoint expects.
Supported model types are chat models, which must accept and produce chat completion payloads.
Supported model clients are ChatNVIDIA, for chat models.
Endpoint is required.
Use this instead of passing base_url to a client constructor when the model's
endpoint supports inference and not /v1/models listing.
Use base_url when the model's endpoint supports /v1/models listing and inference
on a known path, e.g. /v1/chat/completions.