langchain-google-genai¶
Reference docs
This page contains reference documentation for Google GenAI. See the docs for conceptual guides, tutorials, and examples on using Google GenAI modules.
langchain_google_genai
¶
LangChain Google Generative AI Integration (GenAI).
This module integrates Google's Generative AI models, specifically the Gemini series, with the LangChain framework. It provides classes for interacting with chat models and generating embeddings, leveraging Google's advanced AI capabilities.
Chat Models
The ChatGoogleGenerativeAI class is the primary interface for interacting with
Google's Gemini chat models. It allows users to send and receive messages using a
specified Gemini model, suitable for various conversational AI applications.
LLMs
The GoogleGenerativeAI class is the primary interface for interacting with Google's
Gemini LLMs. It allows users to generate text using a specified Gemini model.
Embeddings
The GoogleGenerativeAIEmbeddings class provides functionalities to generate embeddings
using Google's models. These embeddings can be used for a range of NLP tasks, including
semantic analysis, similarity comparisons, and more.
Using Chat Models
After setting up your environment with the required API key, you can interact with the Google Gemini models.
from langchain_google_genai import ChatGoogleGenerativeAI
llm = ChatGoogleGenerativeAI(model="gemini-2.5-pro")
llm.invoke("Sing a ballad of LangChain.")
Using LLMs
The package also supports generating text with Google's models.
from langchain_google_genai import GoogleGenerativeAI
llm = GoogleGenerativeAI(model="gemini-2.5-pro")
llm.invoke("Once upon a time, a library called LangChain")
Embedding Generation
The package also supports creating embeddings with Google's models, useful for textual similarity and other NLP applications.
from langchain_google_genai import GoogleGenerativeAIEmbeddings
embeddings = GoogleGenerativeAIEmbeddings(model="models/gemini-embedding-001")
embeddings.embed_query("hello, world!")
ChatGoogleGenerativeAI
¶
Bases: _BaseGoogleGenerativeAI, BaseChatModel
Google GenAI chat model integration.
Instantiation
To use, you must have either:

- the `GOOGLE_API_KEY` environment variable set with your API key, or
- your API key passed via the `google_api_key` kwarg (alias `api_key`) to the `ChatGoogleGenerativeAI` constructor.
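A typical instantiation (parameter values shown are illustrative):

from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(
    model="gemini-2.5-flash",
    temperature=0.7,
    max_retries=2,
    # api_key="...",  # or pass the key explicitly instead of using env vars
)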
Invoke
messages = [
    ("system", "Translate the user sentence to French."),
    ("human", "I love programming."),
]
llm.invoke(messages)
AIMessage(
    content="J'adore programmer. \n",
    response_metadata={
        "prompt_feedback": {"block_reason": 0, "safety_ratings": []},
        "finish_reason": "STOP",
        "safety_ratings": [
            {
                "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
                "probability": "NEGLIGIBLE",
                "blocked": False,
            },
            {
                "category": "HARM_CATEGORY_HATE_SPEECH",
                "probability": "NEGLIGIBLE",
                "blocked": False,
            },
            {
                "category": "HARM_CATEGORY_HARASSMENT",
                "probability": "NEGLIGIBLE",
                "blocked": False,
            },
            {
                "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
                "probability": "NEGLIGIBLE",
                "blocked": False,
            },
        ],
    },
    id="run-56cecc34-2e54-4b52-a974-337e47008ad2-0",
    usage_metadata={
        "input_tokens": 18,
        "output_tokens": 5,
        "total_tokens": 23,
    },
)
Stream
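A minimal streaming loop consistent with the chunks shown below. Note that aggregating chunks with `+` merges their fields, which is why the final aggregated chunk shows concatenated metadata strings (a `finish_reason` of `'STOPSTOP'`) and summed token counts:

for chunk in llm.stream(messages):
    print(chunk)

# Chunks can be aggregated into a single message:
stream = llm.stream(messages)
full = next(stream)
for chunk in stream:
    full += chunk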
AIMessageChunk(
    content="J",
    response_metadata={"finish_reason": "STOP", "safety_ratings": []},
    id="run-e905f4f4-58cb-4a10-a960-448a2bb649e3",
    usage_metadata={
        "input_tokens": 18,
        "output_tokens": 1,
        "total_tokens": 19,
    },
)
AIMessageChunk(
    content="'adore programmer. \n",
    response_metadata={
        "finish_reason": "STOP",
        "safety_ratings": [
            {
                "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
                "probability": "NEGLIGIBLE",
                "blocked": False,
            },
            {
                "category": "HARM_CATEGORY_HATE_SPEECH",
                "probability": "NEGLIGIBLE",
                "blocked": False,
            },
            {
                "category": "HARM_CATEGORY_HARASSMENT",
                "probability": "NEGLIGIBLE",
                "blocked": False,
            },
            {
                "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
                "probability": "NEGLIGIBLE",
                "blocked": False,
            },
        ],
    },
    id="run-e905f4f4-58cb-4a10-a960-448a2bb649e3",
    usage_metadata={
        "input_tokens": 18,
        "output_tokens": 5,
        "total_tokens": 23,
    },
)
AIMessageChunk(
    content="J'adore programmer. \n",
    response_metadata={
        "finish_reason": "STOPSTOP",
        "safety_ratings": [
            {
                "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
                "probability": "NEGLIGIBLE",
                "blocked": False,
            },
            {
                "category": "HARM_CATEGORY_HATE_SPEECH",
                "probability": "NEGLIGIBLE",
                "blocked": False,
            },
            {
                "category": "HARM_CATEGORY_HARASSMENT",
                "probability": "NEGLIGIBLE",
                "blocked": False,
            },
            {
                "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
                "probability": "NEGLIGIBLE",
                "blocked": False,
            },
        ],
    },
    id="run-3ce13a42-cd30-4ad7-a684-f1f0b37cdeec",
    usage_metadata={
        "input_tokens": 36,
        "output_tokens": 6,
        "total_tokens": 42,
    },
)
Async
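The async API mirrors the sync one. A minimal sketch, to be run inside an async context:

await llm.ainvoke(messages)

async for chunk in llm.astream(messages):
    print(chunk)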
Context caching
Context caching allows you to store and reuse content (e.g., PDFs, images) for
faster processing. The cached_content parameter accepts a cache name created
via the Google Generative AI API.
Below are two examples: caching a single file directly and caching multiple
files using Part.
Single file example
This caches a single file and queries it.
import time

from google import genai
from google.genai import types
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.messages import HumanMessage

client = genai.Client()

# Upload file
file = client.files.upload(file="./example_file")
while file.state.name == "PROCESSING":
    time.sleep(2)
    file = client.files.get(name=file.name)

# Create cache
model = "models/gemini-2.5-flash"
cache = client.caches.create(
    model=model,
    config=types.CreateCachedContentConfig(
        display_name="Cached Content",
        system_instruction=(
            "You are an expert content analyzer, and your job is to answer "
            "the user's query based on the file you have access to."
        ),
        contents=[file],
        ttl="300s",
    ),
)

# Query with LangChain
llm = ChatGoogleGenerativeAI(
    model=model,
    cached_content=cache.name,
)
message = HumanMessage(content="Summarize the main points of the content.")
llm.invoke([message])
Multiple files example
This caches two files using Part and queries them together.
import time

from google import genai
from google.genai.types import CreateCachedContentConfig, Content, Part
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.messages import HumanMessage

client = genai.Client()

# Upload files
file_1 = client.files.upload(file="./file1")
while file_1.state.name == "PROCESSING":
    time.sleep(2)
    file_1 = client.files.get(name=file_1.name)

file_2 = client.files.upload(file="./file2")
while file_2.state.name == "PROCESSING":
    time.sleep(2)
    file_2 = client.files.get(name=file_2.name)

# Create cache with multiple files
contents = [
    Content(
        role="user",
        parts=[
            Part.from_uri(file_uri=file_1.uri, mime_type=file_1.mime_type),
            Part.from_uri(file_uri=file_2.uri, mime_type=file_2.mime_type),
        ],
    )
]
model = "gemini-2.5-flash"
cache = client.caches.create(
    model=model,
    config=CreateCachedContentConfig(
        display_name="Cached Contents",
        system_instruction=(
            "You are an expert content analyzer, and your job is to answer "
            "the user's query based on the files you have access to."
        ),
        contents=contents,
        ttl="300s",
    ),
)

# Query with LangChain
llm = ChatGoogleGenerativeAI(
    model=model,
    cached_content=cache.name,
)
message = HumanMessage(
    content="Provide a summary of the key information across both files."
)
llm.invoke([message])
Tool calling
from pydantic import BaseModel, Field

class GetWeather(BaseModel):
    '''Get the current weather in a given location'''

    location: str = Field(
        ..., description="The city and state, e.g. San Francisco, CA"
    )

class GetPopulation(BaseModel):
    '''Get the current population in a given location'''

    location: str = Field(
        ..., description="The city and state, e.g. San Francisco, CA"
    )

llm_with_tools = llm.bind_tools([GetWeather, GetPopulation])
ai_msg = llm_with_tools.invoke(
    "Which city is hotter today and which is bigger: LA or NY?"
)
ai_msg.tool_calls
[
    {
        "name": "GetWeather",
        "args": {"location": "Los Angeles, CA"},
        "id": "c186c99f-f137-4d52-947f-9e3deabba6f6",
    },
    {
        "name": "GetWeather",
        "args": {"location": "New York City, NY"},
        "id": "cebd4a5d-e800-4fa5-babd-4aa286af4f31",
    },
    {
        "name": "GetPopulation",
        "args": {"location": "Los Angeles, CA"},
        "id": "4f92d897-f5e4-4d34-a3bc-93062c92591e",
    },
    {
        "name": "GetPopulation",
        "args": {"location": "New York City, NY"},
        "id": "634582de-5186-4e4b-968b-f192f0a93678",
    },
]
Search
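Gemini models can ground responses in Google Search results. A sketch of one pattern that has appeared in LangChain's docs; treat the exact tool wrapper as an assumption for your installed versions:

from google.ai.generativelanguage_v1beta.types import Tool as GenAITool

resp = llm.invoke(
    "When is the next total solar eclipse in the US?",
    tools=[GenAITool(google_search={})],
)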
Structured output
from typing import Optional

from pydantic import BaseModel, Field

class Joke(BaseModel):
    '''Joke to tell user.'''

    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline to the joke")
    rating: Optional[int] = Field(
        description="How funny the joke is, from 1 to 10"
    )

# Default method uses function calling
structured_llm = llm.with_structured_output(Joke)

# For more reliable output, use json_schema with native responseSchema
structured_llm_json = llm.with_structured_output(Joke, method="json_schema")
structured_llm_json.invoke("Tell me a joke about cats")

Joke(
    setup="Why are cats so good at video games?",
    punchline="They have nine lives on the internet",
    rating=None,
)
Two methods are supported for structured output:

- `method='function_calling'` (default): Uses tool calling to extract structured data. Compatible with all models.
- `method='json_schema'`: Uses Gemini's native structured output. Supports unions (`anyOf`), recursive schemas (`$ref`), property ordering preservation, and streaming of partial JSON chunks. Uses Gemini's `response_json_schema` API param. Refer to the Gemini API docs for more details.

The `json_schema` method is recommended for better reliability, as it constrains the model's generation process directly rather than relying on post-processing tool calls.
Image input
import base64

import httpx
from langchain_core.messages import HumanMessage

image_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
image_data = base64.b64encode(httpx.get(image_url).content).decode("utf-8")

message = HumanMessage(
    content=[
        {"type": "text", "text": "describe the weather in this image"},
        {
            "type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{image_data}"},
        },
    ]
)
ai_msg = llm.invoke([message])
ai_msg.content
PDF input
import base64

from langchain_core.messages import HumanMessage

pdf_bytes = open("/path/to/your/test.pdf", "rb").read()
pdf_base64 = base64.b64encode(pdf_bytes).decode("utf-8")

message = HumanMessage(
    content=[
        {"type": "text", "text": "describe the document in a sentence"},
        {
            "type": "file",
            "source_type": "base64",
            "mime_type": "application/pdf",
            "data": pdf_base64,
        },
    ]
)
ai_msg = llm.invoke([message])
ai_msg.content
Video input
import base64

from langchain_core.messages import HumanMessage

video_bytes = open("/path/to/your/video.mp4", "rb").read()
video_base64 = base64.b64encode(video_bytes).decode("utf-8")

message = HumanMessage(
    content=[
        {
            "type": "text",
            "text": "describe what's in this video in a sentence",
        },
        {
            "type": "file",
            "source_type": "base64",
            "mime_type": "video/mp4",
            "data": video_base64,
        },
    ]
)
ai_msg = llm.invoke([message])
ai_msg.content
Tom and Jerry, along with a turkey, engage in a chaotic Thanksgiving-themed
adventure involving a corn-on-the-cob chase, maze antics, and a disastrous
attempt to prepare a turkey dinner.
You can also pass YouTube URLs directly:
from langchain_core.messages import HumanMessage

message = HumanMessage(
    content=[
        {"type": "text", "text": "summarize the video in 3 sentences."},
        {
            "type": "media",
            "file_uri": "https://www.youtube.com/watch?v=9hE5-98ZeCg",
            "mime_type": "video/mp4",
        },
    ]
)
ai_msg = llm.invoke([message])
ai_msg.content
Audio input
import base64

from langchain_core.messages import HumanMessage

audio_bytes = open("/path/to/your/audio.mp3", "rb").read()
audio_base64 = base64.b64encode(audio_bytes).decode("utf-8")

message = HumanMessage(
    content=[
        {"type": "text", "text": "summarize this audio in a sentence"},
        {
            "type": "file",
            "source_type": "base64",
            "mime_type": "audio/mp3",
            "data": audio_base64,
        },
    ]
)
ai_msg = llm.invoke([message])
ai_msg.content
File upload
You can also upload files to Google's servers and reference them by URI.
This works for PDFs, images, videos, and audio files.
import time

from google import genai
from langchain_core.messages import HumanMessage

client = genai.Client()

myfile = client.files.upload(file="/path/to/your/sample.pdf")
while myfile.state.name == "PROCESSING":
    time.sleep(2)
    myfile = client.files.get(name=myfile.name)

message = HumanMessage(
    content=[
        {"type": "text", "text": "What is in the document?"},
        {
            "type": "media",
            "file_uri": myfile.uri,
            "mime_type": "application/pdf",
        },
    ]
)
ai_msg = llm.invoke([message])
ai_msg.content
Thinking
For thinking models, you have the option to adjust the number of internal
thinking tokens used (thinking_budget) or to disable thinking altogether.
Note that not all models allow disabling thinking.
See the Gemini API docs for more details on thinking models.
To see a thinking model's thoughts, set include_thoughts=True to have the
model's reasoning summaries included in the response.
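A sketch combining both options (model name and budget value are illustrative):

llm = ChatGoogleGenerativeAI(
    model="gemini-2.5-flash",
    thinking_budget=1024,   # 0 disables thinking on supported models; -1 enables dynamic thinking
    include_thoughts=True,  # include reasoning summaries in the response
)
llm.invoke("How many O's are in Google?")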
Token usage
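Token counts are reported on each response via `usage_metadata`; for example (values illustrative):

ai_msg = llm.invoke(messages)
ai_msg.usage_metadata
# -> {'input_tokens': 18, 'output_tokens': 5, 'total_tokens': 23}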
| METHOD | DESCRIPTION |
|---|---|
| `get_name` | Get the name of the `Runnable`. |
| `get_input_schema` | Get a Pydantic model that can be used to validate input to the `Runnable`. |
| `get_input_jsonschema` | Get a JSON schema that represents the input to the `Runnable`. |
| `get_output_schema` | Get a Pydantic model that can be used to validate output to the `Runnable`. |
| `get_output_jsonschema` | Get a JSON schema that represents the output of the `Runnable`. |
| `config_schema` | The type of config this `Runnable` accepts specified as a Pydantic model. |
| `get_config_jsonschema` | Get a JSON schema that represents the config of the `Runnable`. |
| `get_graph` | Return a graph representation of this `Runnable`. |
| `get_prompts` | Return a list of prompts used by this `Runnable`. |
| `__or__` | Runnable "or" operator. |
| `__ror__` | Runnable "reverse-or" operator. |
| `pipe` | Pipe `Runnable` objects. |
| `pick` | Pick keys from the output dict of this `Runnable`. |
| `assign` | Assigns new fields to the dict output of this `Runnable`. |
| `ainvoke` | Transform a single input into an output. |
| `batch` | Default implementation runs `invoke` in parallel using a thread pool executor. |
| `batch_as_completed` | Run `invoke` in parallel on a list of inputs, yielding results as they complete. |
| `abatch` | Default implementation runs `ainvoke` in parallel using `asyncio.gather`. |
| `abatch_as_completed` | Run `ainvoke` in parallel on a list of inputs, yielding results as they complete. |
| `stream` | Default implementation of `stream`, which calls `invoke`. |
| `astream` | Default implementation of `astream`, which calls `ainvoke`. |
| `astream_log` | Stream all output from a `Runnable`, as reported to the callback system. |
| `astream_events` | Generate a stream of events. |
| `transform` | Transform inputs to outputs. |
| `atransform` | Transform inputs to outputs. |
| `bind` | Bind arguments to a `Runnable`, returning a new `Runnable`. |
| `with_config` | Bind config to a `Runnable`, returning a new `Runnable`. |
| `with_listeners` | Bind lifecycle listeners to a `Runnable`, returning a new `Runnable`. |
| `with_alisteners` | Bind async lifecycle listeners to a `Runnable`, returning a new `Runnable`. |
| `with_types` | Bind input and output types to a `Runnable`, returning a new `Runnable`. |
| `with_retry` | Create a new `Runnable` that retries the original `Runnable` on exceptions. |
| `map` | Return a new `Runnable` that maps a list of inputs to a list of outputs. |
| `with_fallbacks` | Add fallbacks to a `Runnable`, returning a new `Runnable`. |
| `as_tool` | Create a `BaseTool` from a `Runnable`. |
| `get_lc_namespace` | Get the namespace of the LangChain object. |
| `lc_id` | Return a unique identifier for this class for serialization purposes. |
| `to_json` | Serialize the `Runnable` to JSON. |
| `to_json_not_implemented` | Serialize a "not implemented" object. |
| `configurable_fields` | Configure particular `Runnable` fields at runtime. |
| `configurable_alternatives` | Configure alternatives for `Runnable` objects that can be set at runtime. |
| `set_verbose` | If verbose is `None`, set it. |
| `generate_prompt` | Pass a sequence of prompts to the model and return model generations. |
| `agenerate_prompt` | Asynchronously pass a sequence of prompts and return model generations. |
| `get_token_ids` | Return the ordered IDs of the tokens in a text. |
| `get_num_tokens_from_messages` | Get the number of tokens in the messages. |
| `generate` | Pass a sequence of prompts to the model and return model generations. |
| `agenerate` | Asynchronously pass a sequence of prompts to a model and return generations. |
| `dict` | Return a dictionary of the LLM. |
| `__init__` | Needed for arg validation. |
| `is_lc_serializable` | Is this class serializable? |
| `build_extra` | Build extra kwargs from additional params that were passed in. |
| `validate_environment` | Validates params and passes them to the underlying client. |
| `invoke` | Override `invoke`. |
| `get_num_tokens` | Get the number of tokens present in the text. Uses the model's tokenizer. |
| `with_structured_output` | Model wrapper that returns outputs formatted to match the given schema. |
| `bind_tools` | Bind tool-like objects to this chat model. |
name
class-attribute
instance-attribute
¶
name: str | None = None
The name of the Runnable. Used for debugging and tracing.
input_schema
property
¶
The type of input this Runnable accepts specified as a Pydantic model.
output_schema
property
¶
Output schema.
The type of output this Runnable produces specified as a Pydantic model.
config_specs
property
¶
config_specs: list[ConfigurableFieldSpec]
List configurable fields for this Runnable.
lc_attributes
property
¶
lc_attributes: dict
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
Default is an empty dictionary.
cache
class-attribute
instance-attribute
¶
Whether to cache the response.
- If `True`, will use the global cache.
- If `False`, will not use a cache.
- If `None`, will use the global cache if it's set, otherwise no cache.
- If instance of `BaseCache`, will use the provided cache.
Caching is not currently supported for streaming methods of models.
verbose
class-attribute
instance-attribute
¶
Whether to print out response text.
callbacks
class-attribute
instance-attribute
¶
callbacks: Callbacks = Field(default=None, exclude=True)
Callbacks to add to the run trace.
tags
class-attribute
instance-attribute
¶
Tags to add to the run trace.
metadata
class-attribute
instance-attribute
¶
Metadata to add to the run trace.
custom_get_token_ids
class-attribute
instance-attribute
¶
Optional encoder to use for counting tokens.
rate_limiter
class-attribute
instance-attribute
¶
rate_limiter: BaseRateLimiter | None = Field(default=None, exclude=True)
An optional rate limiter to use for limiting the number of requests.
disable_streaming
class-attribute
instance-attribute
¶
Whether to disable streaming for this model.
If streaming is bypassed, then stream/astream/astream_events will
defer to invoke/ainvoke.
- If `True`, will always bypass streaming case.
- If `'tool_calling'`, will bypass streaming case only when the model is called with a `tools` keyword argument. In other words, LangChain will automatically switch to non-streaming behavior (`invoke`) only when the tools argument is provided. This offers the best of both worlds.
- If `False` (default), will always use streaming case if available.

The main reason for this flag is that code might be written using `stream`, and a user may want to swap out a given model for another model whose implementation does not properly support streaming.
output_version
class-attribute
instance-attribute
¶
Version of AIMessage output format to store in message content.
AIMessage.content_blocks will lazily parse the contents of content into a
standard format. This flag can be used to additionally store the standard format
in message content, e.g., for serialization purposes.
Supported values:
- `'v0'`: provider-specific format in content (can lazily parse with `content_blocks`)
- `'v1'`: standardized format in content (consistent with `content_blocks`)
Partner packages (e.g.,
langchain-openai) can also use this
field to roll out new content formats in a backward-compatible way.
Added in langchain-core 1.0
profile
class-attribute
instance-attribute
¶
profile: ModelProfile | None = Field(default=None, exclude=True)
Profile detailing model capabilities.
Beta feature
This is a beta feature. The format of model profiles is subject to change.
If not specified, automatically loaded from the provider package on initialization if data is available.
Example profile data includes context window sizes, supported modalities, or support for tool calling, structured output, and other features.
Added in langchain-core 1.1
google_api_key
class-attribute
instance-attribute
¶
google_api_key: SecretStr | None = Field(
alias="api_key",
default_factory=secret_from_env(["GOOGLE_API_KEY", "GEMINI_API_KEY"], default=None),
)
Google AI API key.
If not specified, will check the env vars GOOGLE_API_KEY and GEMINI_API_KEY with
precedence given to GOOGLE_API_KEY.
credentials
class-attribute
instance-attribute
¶
credentials: Any = None
The default custom credentials to use when making API calls.
If not provided, credentials will be ascertained from the GOOGLE_API_KEY
or GEMINI_API_KEY env vars with precedence given to GOOGLE_API_KEY.
temperature
class-attribute
instance-attribute
¶
temperature: float = 0.7
Run inference with this temperature.
Must be within [0.0, 2.0].
Gemini 3.0+ models
Setting temperature < 1.0 for Gemini 3.0+ models can cause infinite loops,
degraded reasoning performance, and failure on complex tasks.
top_p
class-attribute
instance-attribute
¶
top_p: float | None = None
Decode using nucleus sampling.
Consider the smallest set of tokens whose probability sum is at least top_p.
Must be within [0.0, 1.0].
top_k
class-attribute
instance-attribute
¶
top_k: int | None = None
Decode using top-k sampling: consider the set of top_k most probable tokens.
Must be positive.
max_output_tokens
class-attribute
instance-attribute
¶
Maximum number of tokens to include in a candidate.
Must be greater than zero.
If unset, will use the model's default value, which varies by model.
See docs for model-specific limits.
To constrain the number of thinking tokens to use when generating a response, see
the thinking_budget parameter.
n
class-attribute
instance-attribute
¶
n: int = 1
Number of chat completions to generate for each prompt.
Note that the API may not return the full n completions if duplicates are
generated.
max_retries
class-attribute
instance-attribute
¶
The maximum number of retries to make when generating.
timeout
class-attribute
instance-attribute
¶
The maximum number of seconds to wait for a response.
client_options
class-attribute
instance-attribute
¶
A dictionary of client options to pass to the Google API client.
Example: api_endpoint
Warning
If both client_options['api_endpoint'] and base_url are specified,
the api_endpoint in client_options takes precedence.
base_url
class-attribute
instance-attribute
¶
Base URL to use for the API client.
This is a convenience alias for client_options['api_endpoint'].
- REST transport (`transport="rest"`): accepts full URLs with paths, e.g.
  - `https://api.example.com/v1/path`
  - `https://webhook.site/unique-path`
- gRPC transports (`transport="grpc"` or `transport="grpc_asyncio"`): only accept `hostname:port` format, e.g.
  - `api.example.com:443`
  - `custom.googleapis.com:443`
  - `https://api.example.com` (auto-formatted to `api.example.com:443`)
  - NOT `https://webhook.site/path` (paths are not supported in gRPC)
  - NOT `api.example.com/path` (paths are not supported in gRPC)
Warning
If client_options already contains an api_endpoint, this parameter will be
ignored in favor of the existing value.
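A sketch of pointing the client at a custom endpoint (the URL is a placeholder):

llm = ChatGoogleGenerativeAI(
    model="gemini-2.5-flash",
    base_url="https://api.example.com",  # placeholder endpoint
    transport="rest",  # full URLs with paths require the REST transport
)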
transport
class-attribute
instance-attribute
¶
A string, one of: ['rest', 'grpc', 'grpc_asyncio'].
The Google client library defaults to 'grpc' for sync clients.
For async clients, 'rest' is converted to 'grpc_asyncio' unless
a custom endpoint is specified.
additional_headers
class-attribute
instance-attribute
¶
Key-value dictionary representing additional headers for the model call.
response_modalities
class-attribute
instance-attribute
¶
A list of modalities of the response.
media_resolution
class-attribute
instance-attribute
¶
media_resolution: MediaResolution | None = Field(default=None)
Media resolution for the input media.
May be `'low'`, `'medium'`, or `'high'`.

Can be set either per-part or globally for all media inputs in the request. Setting it per-part allows mixed-resolution requests (e.g., images and videos of different resolutions in the same request). To set it globally, set it in the `generation_config`.

Model compatibility

Per-part media resolution is not supported on Gemini 2.5 models.
thinking_budget
class-attribute
instance-attribute
¶
Indicates the thinking budget in tokens.
Used to disable thinking for supported models (when set to 0) or to constrain
the number of tokens used for thinking.
Dynamic thinking (allowing the model to decide how many tokens to use) is
enabled when set to -1.
More information, including per-model limits, can be found in the Gemini API docs.
include_thoughts
class-attribute
instance-attribute
¶
Indicates whether to include thoughts in the response.
Note
This parameter is only applicable for models that support thinking.
This does not disable thinking; to disable thinking, set `thinking_budget` to `0` for supported models. See the `thinking_budget` parameter for more details.
safety_settings
class-attribute
instance-attribute
¶
safety_settings: dict[HarmCategory, HarmBlockThreshold] | None = None
Default safety settings to use for all generations.
Example
from google.generativeai.types.safety_types import HarmBlockThreshold, HarmCategory

safety_settings = {
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
}
thinking_level
class-attribute
instance-attribute
¶
Indicates the thinking level.
Supported values
- `'low'`: Minimizes latency and cost.
- `'high'`: Maximizes reasoning depth.
Replaces thinking_budget
thinking_budget is deprecated for Gemini 3+ models. If both parameters are
provided, thinking_level takes precedence.
If left unspecified, the model's default thinking level is used. For Gemini 3+,
this defaults to 'high'.
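For example (the model name is illustrative; assume any Gemini 3+ model):

llm = ChatGoogleGenerativeAI(
    model="gemini-3-pro-preview",  # assumption: a Gemini 3+ model name
    thinking_level="low",  # minimize latency and cost
)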
convert_system_message_to_human
class-attribute
instance-attribute
¶
convert_system_message_to_human: bool = False
Whether to merge any leading SystemMessage into the following HumanMessage.
Gemini does not support system messages; any unsupported messages will raise an error.
response_mime_type
class-attribute
instance-attribute
¶
response_mime_type: str | None = None
Output response MIME type of the generated candidate text.
Supported MIME types
- `'text/plain'` (default): Text output.
- `'application/json'`: JSON response in the candidates.
- `'text/x.enum'`: Enum in plain text (legacy; use JSON schema output instead).
Note
The model also needs to be prompted to output the appropriate response type; otherwise the behavior is undefined. Setting this param doesn't force the model to comply; it only tells the model what kind of output is expected, so you still need to prompt it accordingly.
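A sketch requesting JSON output (the model must still be prompted for JSON):

llm = ChatGoogleGenerativeAI(
    model="gemini-2.5-flash",
    response_mime_type="application/json",
)
llm.invoke("List three cat breeds as a JSON array.")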
response_schema
class-attribute
instance-attribute
¶
Enforce a schema on the output.
The format of the dictionary should follow Open API schema.
Has JSON Schema support, including:

- `anyOf` for unions
- `$ref` for recursive schemas
- Output property ordering
- Minimum/maximum constraints
- Streaming of partial JSON chunks
Refer to the Gemini API docs for more details.
cached_content
class-attribute
instance-attribute
¶
cached_content: str | None = None
The name of the cached content used as context to serve the prediction.
Note
Only used in explicit caching, where users can have control over caching (e.g.
what content to cache) and enjoy guaranteed cost savings. Format:
cachedContents/{cachedContent}.
stop
class-attribute
instance-attribute
¶
Stop sequences for the model.
streaming
class-attribute
instance-attribute
¶
streaming: bool | None = None
Whether to stream responses from the model.
model_kwargs
class-attribute
instance-attribute
¶
Holds any unexpected initialization parameters.
get_name
¶
get_input_schema
¶
get_input_schema(config: RunnableConfig | None = None) -> type[BaseModel]
Get a Pydantic model that can be used to validate input to the Runnable.
Runnable objects that leverage the configurable_fields and
configurable_alternatives methods will have a dynamic input schema that
depends on which configuration the Runnable is invoked with.
This method allows you to get an input schema for a specific configuration.
| PARAMETER | DESCRIPTION |
|---|---|
| `config` | A config to use when generating the schema. TYPE: `RunnableConfig \| None` |

| RETURNS | DESCRIPTION |
|---|---|
| `type[BaseModel]` | A Pydantic model that can be used to validate input. |
get_input_jsonschema
¶
get_input_jsonschema(config: RunnableConfig | None = None) -> dict[str, Any]
Get a JSON schema that represents the input to the Runnable.
| PARAMETER | DESCRIPTION |
|---|---|
| `config` | A config to use when generating the schema. TYPE: `RunnableConfig \| None` |

| RETURNS | DESCRIPTION |
|---|---|
| `dict[str, Any]` | A JSON schema that represents the input to the `Runnable`. |
Example
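A minimal sketch:

from langchain_core.runnables import RunnableLambda

runnable = RunnableLambda(lambda x: x + 1)
print(runnable.get_input_jsonschema())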
Added in langchain-core 0.3.0
get_output_schema
¶
get_output_schema(config: RunnableConfig | None = None) -> type[BaseModel]
Get a Pydantic model that can be used to validate output to the Runnable.
Runnable objects that leverage the configurable_fields and
configurable_alternatives methods will have a dynamic output schema that
depends on which configuration the Runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
| PARAMETER | DESCRIPTION |
|---|---|
| `config` | A config to use when generating the schema. TYPE: `RunnableConfig \| None` |

| RETURNS | DESCRIPTION |
|---|---|
| `type[BaseModel]` | A Pydantic model that can be used to validate output. |
get_output_jsonschema
¶
get_output_jsonschema(config: RunnableConfig | None = None) -> dict[str, Any]
Get a JSON schema that represents the output of the Runnable.
| PARAMETER | DESCRIPTION |
|---|---|
| `config` | A config to use when generating the schema. TYPE: `RunnableConfig \| None` |

| RETURNS | DESCRIPTION |
|---|---|
| `dict[str, Any]` | A JSON schema that represents the output of the `Runnable`. |
Example
Added in langchain-core 0.3.0
config_schema
¶
The type of config this Runnable accepts specified as a Pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
| PARAMETER | DESCRIPTION |
|---|---|
| `include` | A list of fields to include in the config schema. |

| RETURNS | DESCRIPTION |
|---|---|
| `type[BaseModel]` | A Pydantic model that can be used to validate config. |
get_config_jsonschema
¶
get_graph
¶
get_graph(config: RunnableConfig | None = None) -> Graph
Return a graph representation of this Runnable.
get_prompts
¶
get_prompts(config: RunnableConfig | None = None) -> list[BasePromptTemplate]
Return a list of prompts used by this Runnable.
__or__
¶
__or__(
other: Runnable[Any, Other]
| Callable[[Iterator[Any]], Iterator[Other]]
| Callable[[AsyncIterator[Any]], AsyncIterator[Other]]
| Callable[[Any], Other]
| Mapping[str, Runnable[Any, Other] | Callable[[Any], Other] | Any],
) -> RunnableSerializable[Input, Other]
Runnable "or" operator.
Compose this Runnable with another object to create a
RunnableSequence.
| PARAMETER | DESCRIPTION |
|---|---|
| `other` | Another `Runnable` or `Runnable`-like object to compose with this one. |

| RETURNS | DESCRIPTION |
|---|---|
| `RunnableSerializable[Input, Other]` | A new `Runnable`. |
__ror__
¶
__ror__(
other: Runnable[Other, Any]
| Callable[[Iterator[Other]], Iterator[Any]]
| Callable[[AsyncIterator[Other]], AsyncIterator[Any]]
| Callable[[Other], Any]
| Mapping[str, Runnable[Other, Any] | Callable[[Other], Any] | Any],
) -> RunnableSerializable[Other, Output]
Runnable "reverse-or" operator.
Compose this Runnable with another object to create a
RunnableSequence.
| PARAMETER | DESCRIPTION |
|---|---|
| `other` | Another `Runnable` or `Runnable`-like object to compose with this one. |

| RETURNS | DESCRIPTION |
|---|---|
| `RunnableSerializable[Other, Output]` | A new `Runnable`. |
pipe
¶
pipe(
*others: Runnable[Any, Other] | Callable[[Any], Other], name: str | None = None
) -> RunnableSerializable[Input, Other]
Pipe Runnable objects.
Compose this Runnable with Runnable-like objects to make a
RunnableSequence.
Equivalent to RunnableSequence(self, *others) or self | others[0] | ...
Example
from langchain_core.runnables import RunnableLambda

def add_one(x: int) -> int:
    return x + 1

def mul_two(x: int) -> int:
    return x * 2

runnable_1 = RunnableLambda(add_one)
runnable_2 = RunnableLambda(mul_two)
sequence = runnable_1.pipe(runnable_2)
# Or equivalently:
# sequence = runnable_1 | runnable_2
# sequence = RunnableSequence(first=runnable_1, last=runnable_2)

sequence.invoke(1)
await sequence.ainvoke(1)
# -> 4

sequence.batch([1, 2, 3])
await sequence.abatch([1, 2, 3])
# -> [4, 6, 8]
| PARAMETER | DESCRIPTION |
|---|---|
| `*others` | Other `Runnable` or `Runnable`-like objects to compose. TYPE: `Runnable[Any, Other] \| Callable[[Any], Other]` |
| `name` | An optional name for the resulting `RunnableSequence`. TYPE: `str \| None` |

| RETURNS | DESCRIPTION |
|---|---|
| `RunnableSerializable[Input, Other]` | A new `Runnable`. |
pick
¶
Pick keys from the output dict of this Runnable.
Pick a single key
import json

from langchain_core.runnables import RunnableLambda, RunnableMap

as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)
chain = RunnableMap(str=as_str, json=as_json)

chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3]}

json_only_chain = chain.pick("json")
json_only_chain.invoke("[1, 2, 3]")
# -> [1, 2, 3]
Pick a list of keys
from typing import Any
import json

from langchain_core.runnables import RunnableLambda, RunnableMap

as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)

def as_bytes(x: Any) -> bytes:
    return bytes(x, "utf-8")

chain = RunnableMap(
    str=as_str, json=as_json, bytes=RunnableLambda(as_bytes)
)

chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3], "bytes": b"[1, 2, 3]"}

json_and_bytes_chain = chain.pick(["json", "bytes"])
json_and_bytes_chain.invoke("[1, 2, 3]")
# -> {"json": [1, 2, 3], "bytes": b"[1, 2, 3]"}
| PARAMETER | DESCRIPTION |
|---|---|
| `keys` | A key or list of keys to pick from the output dict. |

| RETURNS | DESCRIPTION |
|---|---|
| `RunnableSerializable[Any, Any]` | A new `Runnable`. |
assign
¶
assign(
**kwargs: Runnable[dict[str, Any], Any]
| Callable[[dict[str, Any]], Any]
| Mapping[str, Runnable[dict[str, Any], Any] | Callable[[dict[str, Any]], Any]],
) -> RunnableSerializable[Any, Any]
Assigns new fields to the dict output of this Runnable.
from operator import itemgetter

from langchain_core.language_models.fake import FakeStreamingListLLM
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import SystemMessagePromptTemplate
from langchain_core.runnables import Runnable

prompt = (
    SystemMessagePromptTemplate.from_template("You are a nice assistant.")
    + "{question}"
)
model = FakeStreamingListLLM(responses=["foo-lish"])
chain: Runnable = prompt | model | {"str": StrOutputParser()}
chain_with_assign = chain.assign(hello=itemgetter("str") | model)

print(chain_with_assign.input_schema.model_json_schema())
# {'title': 'PromptInput', 'type': 'object', 'properties':
#  {'question': {'title': 'Question', 'type': 'string'}}}
print(chain_with_assign.output_schema.model_json_schema())
# {'title': 'RunnableSequenceOutput', 'type': 'object', 'properties':
#  {'str': {'title': 'Str', 'type': 'string'},
#   'hello': {'title': 'Hello', 'type': 'string'}}}
| PARAMETER | DESCRIPTION |
|---|---|
| `**kwargs` | A mapping of keys to `Runnable` or `Runnable`-like objects that will be invoked with the entire output dict of this `Runnable`. |

| RETURNS | DESCRIPTION |
|---|---|
| `RunnableSerializable[Any, Any]` | A new `Runnable`. |
ainvoke
async
¶
ainvoke(
input: LanguageModelInput,
config: RunnableConfig | None = None,
*,
stop: list[str] | None = None,
**kwargs: Any,
) -> AIMessage
Transform a single input into an output.
| PARAMETER | DESCRIPTION |
|---|---|
| `input` | The input to the `Runnable`. TYPE: `LanguageModelInput` |
| `config` | A config to use when invoking the `Runnable`. The config supports standard keys like `'tags'` and `'metadata'` for tracing purposes, `'max_concurrency'` for controlling how much work to do in parallel, and other keys. Please refer to `RunnableConfig` for more details. TYPE: `RunnableConfig \| None` |

| RETURNS | DESCRIPTION |
|---|---|
| `Output` | The output of the `Runnable`. |
batch
¶
batch(
inputs: list[Input],
config: RunnableConfig | list[RunnableConfig] | None = None,
*,
return_exceptions: bool = False,
**kwargs: Any | None,
) -> list[Output]
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses must override this method if they can batch more efficiently;
e.g., if the underlying Runnable uses an API which supports a batch mode.
| PARAMETER | DESCRIPTION |
|---|---|
| `inputs` | A list of inputs to the `Runnable`. TYPE: `list[Input]` |
| `config` | A config to use when invoking the `Runnable`. Please refer to `RunnableConfig` for more details. TYPE: `RunnableConfig \| list[RunnableConfig] \| None` |
| `return_exceptions` | Whether to return exceptions instead of raising them. TYPE: `bool` |
| `**kwargs` | Additional keyword arguments to pass to the `Runnable`. TYPE: `Any \| None` |

| RETURNS | DESCRIPTION |
|---|---|
| `list[Output]` | A list of outputs from the `Runnable`. |
batch_as_completed
¶
batch_as_completed(
inputs: Sequence[Input],
config: RunnableConfig | Sequence[RunnableConfig] | None = None,
*,
return_exceptions: bool = False,
**kwargs: Any | None,
) -> Iterator[tuple[int, Output | Exception]]
Run invoke in parallel on a list of inputs.
Yields results as they complete.
| PARAMETER | DESCRIPTION |
|---|---|
| `inputs` | A list of inputs to the `Runnable`. TYPE: `Sequence[Input]` |
| `config` | A config to use when invoking the `Runnable`. The config supports standard keys like `'tags'` and `'metadata'` for tracing purposes, `'max_concurrency'` for controlling how much work to do in parallel, and other keys. Please refer to `RunnableConfig` for more details. TYPE: `RunnableConfig \| Sequence[RunnableConfig] \| None` |
| `return_exceptions` | Whether to return exceptions instead of raising them. TYPE: `bool` |
| `**kwargs` | Additional keyword arguments to pass to the `Runnable`. TYPE: `Any \| None` |

| YIELDS | DESCRIPTION |
|---|---|
| `tuple[int, Output \| Exception]` | Tuples of the index of the input and the output from the `Runnable`. |
abatch
async
¶
abatch(
inputs: list[Input],
config: RunnableConfig | list[RunnableConfig] | None = None,
*,
return_exceptions: bool = False,
**kwargs: Any | None,
) -> list[Output]
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses must override this method if they can batch more efficiently;
e.g., if the underlying Runnable uses an API which supports a batch mode.
| PARAMETER | DESCRIPTION |
|---|---|
| `inputs` | A list of inputs to the `Runnable`. TYPE: `list[Input]` |
| `config` | A config to use when invoking the `Runnable`. The config supports standard keys like `'tags'` and `'metadata'` for tracing purposes, `'max_concurrency'` for controlling how much work to do in parallel, and other keys. Please refer to `RunnableConfig` for more details. TYPE: `RunnableConfig \| list[RunnableConfig] \| None` |
| `return_exceptions` | Whether to return exceptions instead of raising them. TYPE: `bool` |
| `**kwargs` | Additional keyword arguments to pass to the `Runnable`. TYPE: `Any \| None` |

| RETURNS | DESCRIPTION |
|---|---|
| `list[Output]` | A list of outputs from the `Runnable`. |
abatch_as_completed
async
¶
abatch_as_completed(
inputs: Sequence[Input],
config: RunnableConfig | Sequence[RunnableConfig] | None = None,
*,
return_exceptions: bool = False,
**kwargs: Any | None,
) -> AsyncIterator[tuple[int, Output | Exception]]
Run ainvoke in parallel on a list of inputs.
Yields results as they complete.
| PARAMETER | DESCRIPTION |
|---|---|
| `inputs` | A list of inputs to the `Runnable`. TYPE: `Sequence[Input]` |
| `config` | A config to use when invoking the `Runnable`. The config supports standard keys like `'tags'` and `'metadata'` for tracing purposes, `'max_concurrency'` for controlling how much work to do in parallel, and other keys. Please refer to `RunnableConfig` for more details. TYPE: `RunnableConfig \| Sequence[RunnableConfig] \| None` |
| `return_exceptions` | Whether to return exceptions instead of raising them. TYPE: `bool` |
| `**kwargs` | Additional keyword arguments to pass to the `Runnable`. TYPE: `Any \| None` |

| YIELDS | DESCRIPTION |
|---|---|
| `AsyncIterator[tuple[int, Output \| Exception]]` | A tuple of the index of the input and the output from the `Runnable`. |
stream
¶
stream(
input: LanguageModelInput,
config: RunnableConfig | None = None,
*,
stop: list[str] | None = None,
**kwargs: Any,
) -> Iterator[AIMessageChunk]
Default implementation of stream, which calls invoke.
Subclasses must override this method if they support streaming output.
| PARAMETER | DESCRIPTION |
|---|---|
| `input` | The input to the `Runnable`. TYPE: `LanguageModelInput` |
| `config` | The config to use for the `Runnable`. TYPE: `RunnableConfig \| None` |
| `**kwargs` | Additional keyword arguments to pass to the `Runnable`. TYPE: `Any` |

| YIELDS | DESCRIPTION |
|---|---|
| `Output` | The output of the `Runnable`. |
astream
async
¶
astream(
input: LanguageModelInput,
config: RunnableConfig | None = None,
*,
stop: list[str] | None = None,
**kwargs: Any,
) -> AsyncIterator[AIMessageChunk]
Default implementation of astream, which calls ainvoke.
Subclasses must override this method if they support streaming output.
| PARAMETER | DESCRIPTION |
|---|---|
| `input` | The input to the `Runnable`. TYPE: `LanguageModelInput` |
| `config` | The config to use for the `Runnable`. TYPE: `RunnableConfig \| None` |
| `**kwargs` | Additional keyword arguments to pass to the `Runnable`. TYPE: `Any` |

| YIELDS | DESCRIPTION |
|---|---|
| `AsyncIterator[Output]` | The output of the `Runnable`. |
astream_log
async
¶
astream_log(
input: Any,
config: RunnableConfig | None = None,
*,
diff: bool = True,
with_streamed_output_list: bool = True,
include_names: Sequence[str] | None = None,
include_types: Sequence[str] | None = None,
include_tags: Sequence[str] | None = None,
exclude_names: Sequence[str] | None = None,
exclude_types: Sequence[str] | None = None,
exclude_tags: Sequence[str] | None = None,
**kwargs: Any,
) -> AsyncIterator[RunLogPatch] | AsyncIterator[RunLog]
Stream all output from a Runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of Jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run.
The Jsonpatch ops can be applied in order to construct state.
| PARAMETER | DESCRIPTION |
|---|---|
| `input` | The input to the `Runnable`. TYPE: `Any` |
| `config` | The config to use for the `Runnable`. TYPE: `RunnableConfig \| None` |
| `diff` | Whether to yield diffs between each step or the current state. TYPE: `bool` |
| `with_streamed_output_list` | Whether to yield the `streamed_output` list. TYPE: `bool` |
| `include_names` | Only include logs with these names. |
| `include_types` | Only include logs with these types. |
| `include_tags` | Only include logs with these tags. |
| `exclude_names` | Exclude logs with these names. |
| `exclude_types` | Exclude logs with these types. |
| `exclude_tags` | Exclude logs with these tags. |
| `**kwargs` | Additional keyword arguments to pass to the `Runnable`. TYPE: `Any` |

| YIELDS | DESCRIPTION |
|---|---|
| `AsyncIterator[RunLogPatch] \| AsyncIterator[RunLog]` | A `RunLogPatch` or `RunLog` object, depending on the value of `diff`. |
astream_events
async
¶
astream_events(
input: Any,
config: RunnableConfig | None = None,
*,
version: Literal["v1", "v2"] = "v2",
include_names: Sequence[str] | None = None,
include_types: Sequence[str] | None = None,
include_tags: Sequence[str] | None = None,
exclude_names: Sequence[str] | None = None,
exclude_types: Sequence[str] | None = None,
exclude_tags: Sequence[str] | None = None,
**kwargs: Any,
) -> AsyncIterator[StreamEvent]
Generate a stream of events.
Use to create an iterator over StreamEvent objects that provide real-time information about the progress of the Runnable, including StreamEvent from intermediate results.
A StreamEvent is a dictionary with the following schema:
- `event`: Event names are of the format: `on_[runnable_type]_(start|stream|end)`.
- `name`: The name of the `Runnable` that generated the event.
- `run_id`: Randomly generated ID associated with the given execution of the `Runnable` that emitted the event. A child `Runnable` that gets invoked as part of the execution of a parent `Runnable` is assigned its own unique ID.
- `parent_ids`: The IDs of the parent runnables that generated the event. The root `Runnable` will have an empty list. The order of the parent IDs is from the root to the immediate parent. Only available for v2 version of the API. The v1 version of the API will return an empty list.
- `tags`: The tags of the `Runnable` that generated the event.
- `metadata`: The metadata of the `Runnable` that generated the event.
- `data`: The data associated with the event. The contents of this field depend on the type of event. See the table below for more details.
Below is a table that illustrates some events that might be emitted by various chains. Metadata fields have been omitted from the table for brevity. Chain definitions have been included after the table.
Note
This reference table is for the v2 version of the schema.
| event | name | chunk | input | output |
|---|---|---|---|---|
| `on_chat_model_start` | `'[model name]'` | | `{"messages": [[SystemMessage, HumanMessage]]}` | |
| `on_chat_model_stream` | `'[model name]'` | `AIMessageChunk(content="hello")` | | |
| `on_chat_model_end` | `'[model name]'` | | `{"messages": [[SystemMessage, HumanMessage]]}` | `AIMessageChunk(content="hello world")` |
| `on_llm_start` | `'[model name]'` | | `{'input': 'hello'}` | |
| `on_llm_stream` | `'[model name]'` | `'Hello'` | | |
| `on_llm_end` | `'[model name]'` | | `'Hello human!'` | |
| `on_chain_start` | `'format_docs'` | | | |
| `on_chain_stream` | `'format_docs'` | `'hello world!, goodbye world!'` | | |
| `on_chain_end` | `'format_docs'` | | `[Document(...)]` | `'hello world!, goodbye world!'` |
| `on_tool_start` | `'some_tool'` | | `{"x": 1, "y": "2"}` | |
| `on_tool_end` | `'some_tool'` | | | `{"x": 1, "y": "2"}` |
| `on_retriever_start` | `'[retriever name]'` | | `{"query": "hello"}` | |
| `on_retriever_end` | `'[retriever name]'` | | `{"query": "hello"}` | `[Document(...), ..]` |
| `on_prompt_start` | `'[template_name]'` | | `{"question": "hello"}` | |
| `on_prompt_end` | `'[template_name]'` | | `{"question": "hello"}` | `ChatPromptValue(messages: [SystemMessage, ...])` |
In addition to the standard events, users can also dispatch custom events (see example below).
Custom events will only be surfaced in the v2 version of the API!

A custom event has the following format:
| Attribute | Type | Description |
|---|---|---|
| `name` | `str` | A user defined name for the event. |
| `data` | `Any` | The data associated with the event. This can be anything, though we suggest making it JSON serializable. |
Here are declarations associated with the standard events shown above:
format_docs:
def format_docs(docs: list[Document]) -> str:
    '''Format the docs.'''
    return ", ".join([doc.page_content for doc in docs])

format_docs = RunnableLambda(format_docs)
some_tool:
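The original code block for this declaration is not shown here; a plausible reconstruction consistent with the tool events above:

from langchain_core.tools import tool

@tool
def some_tool(x: int, y: str) -> dict:
    '''Some_tool.'''
    return {"x": x, "y": y}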
prompt:
template = ChatPromptTemplate.from_messages(
    [
        ("system", "You are Cat Agent 007"),
        ("human", "{question}"),
    ]
).with_config({"run_name": "my_template", "tags": ["my_template"]})
Example
from langchain_core.runnables import RunnableLambda

async def reverse(s: str) -> str:
    return s[::-1]

chain = RunnableLambda(func=reverse)

events = [
    event async for event in chain.astream_events("hello", version="v2")
]

# Will produce the following events
# (run_id and parent_ids have been omitted for brevity):
[
    {
        "data": {"input": "hello"},
        "event": "on_chain_start",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"chunk": "olleh"},
        "event": "on_chain_stream",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"output": "olleh"},
        "event": "on_chain_end",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
]
from langchain_core.callbacks.manager import (
    adispatch_custom_event,
)
from langchain_core.runnables import RunnableLambda, RunnableConfig
import asyncio

async def slow_thing(some_input: str, config: RunnableConfig) -> str:
    """Do something that takes a long time."""
    await asyncio.sleep(1)  # Placeholder for some slow operation
    await adispatch_custom_event(
        "progress_event",
        {"message": "Finished step 1 of 3"},
        config=config,  # Must be included for python < 3.10
    )
    await asyncio.sleep(1)  # Placeholder for some slow operation
    await adispatch_custom_event(
        "progress_event",
        {"message": "Finished step 2 of 3"},
        config=config,  # Must be included for python < 3.10
    )
    await asyncio.sleep(1)  # Placeholder for some slow operation
    return "Done"

slow_thing = RunnableLambda(slow_thing)

async for event in slow_thing.astream_events("some_input", version="v2"):
    print(event)
| PARAMETER | DESCRIPTION |
|---|---|
| `input` | The input to the `Runnable`. TYPE: `Any` |
| `config` | The config to use for the `Runnable`. TYPE: `RunnableConfig \| None` |
| `version` | The version of the schema to use, either `'v2'` or `'v1'`. Users should use `'v2'`; `'v1'` is for backwards compatibility. No default will be assigned until the API is stabilized. Custom events will only be surfaced in `'v2'`. TYPE: `Literal['v1', 'v2']` |
| `include_names` | Only include events from `Runnable` objects with matching names. |
| `include_types` | Only include events from `Runnable` objects with matching types. |
| `include_tags` | Only include events from `Runnable` objects with matching tags. |
| `exclude_names` | Exclude events from `Runnable` objects with matching names. |
| `exclude_types` | Exclude events from `Runnable` objects with matching types. |
| `exclude_tags` | Exclude events from `Runnable` objects with matching tags. |
| `**kwargs` | Additional keyword arguments to pass to the `Runnable`. These will be passed to `astream_log`, as this implementation of `astream_events` is built on top of `astream_log`. TYPE: `Any` |

| YIELDS | DESCRIPTION |
|---|---|
| `AsyncIterator[StreamEvent]` | An async stream of `StreamEvent`. |

| RAISES | DESCRIPTION |
|---|---|
| `NotImplementedError` | If the version is not `'v1'` or `'v2'`. |
transform
¶
transform(
input: Iterator[Input], config: RunnableConfig | None = None, **kwargs: Any | None
) -> Iterator[Output]
Transform inputs to outputs.
Default implementation of transform, which buffers input and calls stream.
Subclasses must override this method if they can start producing output while input is still being generated.
| PARAMETER | DESCRIPTION |
|---|---|
| `input` | An iterator of inputs to the `Runnable`. TYPE: `Iterator[Input]` |
| `config` | The config to use for the `Runnable`. TYPE: `RunnableConfig \| None` |
| `**kwargs` | Additional keyword arguments to pass to the `Runnable`. TYPE: `Any \| None` |

| YIELDS | DESCRIPTION |
|---|---|
| `Output` | The output of the `Runnable`. |
atransform
async
¶
atransform(
input: AsyncIterator[Input],
config: RunnableConfig | None = None,
**kwargs: Any | None,
) -> AsyncIterator[Output]
Transform inputs to outputs.
Default implementation of atransform, which buffers input and calls astream.
Subclasses must override this method if they can start producing output while input is still being generated.
| PARAMETER | DESCRIPTION |
|---|---|
| `input` | An async iterator of inputs to the `Runnable`. TYPE: `AsyncIterator[Input]` |
| `config` | The config to use for the `Runnable`. TYPE: `RunnableConfig \| None` |
| `**kwargs` | Additional keyword arguments to pass to the `Runnable`. TYPE: `Any \| None` |

| YIELDS | DESCRIPTION |
|---|---|
| `AsyncIterator[Output]` | The output of the `Runnable`. |
bind
¶
Bind arguments to a Runnable, returning a new Runnable.
Useful when a Runnable in a chain requires an argument that is not
in the output of the previous Runnable or included in the user input.
| PARAMETER | DESCRIPTION |
|---|---|
| `**kwargs` | The arguments to bind to the `Runnable`. |

| RETURNS | DESCRIPTION |
|---|---|
| `Runnable[Input, Output]` | A new `Runnable` with the arguments bound. |
Example
from langchain_ollama import ChatOllama
from langchain_core.output_parsers import StrOutputParser
model = ChatOllama(model="llama3.1")
# Without bind
chain = model | StrOutputParser()
chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two three four five.'
# With bind
chain = model.bind(stop=["three"]) | StrOutputParser()
chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two'
with_config
¶
with_config(
config: RunnableConfig | None = None, **kwargs: Any
) -> Runnable[Input, Output]
Bind config to a Runnable, returning a new Runnable.
| PARAMETER | DESCRIPTION |
|---|---|
| `config` | The config to bind to the `Runnable`. TYPE: `RunnableConfig \| None` |
| `**kwargs` | Additional keyword arguments to pass to the `Runnable`. TYPE: `Any` |

| RETURNS | DESCRIPTION |
|---|---|
| `Runnable[Input, Output]` | A new `Runnable` with the config bound. |
with_listeners
¶
with_listeners(
*,
on_start: Callable[[Run], None]
| Callable[[Run, RunnableConfig], None]
| None = None,
on_end: Callable[[Run], None] | Callable[[Run, RunnableConfig], None] | None = None,
on_error: Callable[[Run], None]
| Callable[[Run, RunnableConfig], None]
| None = None,
) -> Runnable[Input, Output]
Bind lifecycle listeners to a Runnable, returning a new Runnable.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and
any tags or metadata added to the run.
| PARAMETER | DESCRIPTION |
|---|---|
| `on_start` | Called before the `Runnable` starts running, with the `Run` object. TYPE: `Callable[[Run], None] \| Callable[[Run, RunnableConfig], None] \| None` |
| `on_end` | Called after the `Runnable` finishes running, with the `Run` object. TYPE: `Callable[[Run], None] \| Callable[[Run, RunnableConfig], None] \| None` |
| `on_error` | Called if the `Runnable` throws an error, with the `Run` object. TYPE: `Callable[[Run], None] \| Callable[[Run, RunnableConfig], None] \| None` |

| RETURNS | DESCRIPTION |
|---|---|
| `Runnable[Input, Output]` | A new `Runnable` with the listeners bound. |
Example
import time

from langchain_core.runnables import RunnableLambda
from langchain_core.tracers.schemas import Run

def test_runnable(time_to_sleep: int):
    time.sleep(time_to_sleep)

def fn_start(run_obj: Run):
    print("start_time:", run_obj.start_time)

def fn_end(run_obj: Run):
    print("end_time:", run_obj.end_time)

chain = RunnableLambda(test_runnable).with_listeners(
    on_start=fn_start, on_end=fn_end
)
chain.invoke(2)
with_alisteners
¶
with_alisteners(
*,
on_start: AsyncListener | None = None,
on_end: AsyncListener | None = None,
on_error: AsyncListener | None = None,
) -> Runnable[Input, Output]
Bind async lifecycle listeners to a Runnable.
Returns a new Runnable.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and
any tags or metadata added to the run.
| PARAMETER | DESCRIPTION |
|---|---|
| `on_start` | Called asynchronously before the `Runnable` starts running, with the `Run` object. TYPE: `AsyncListener \| None` |
| `on_end` | Called asynchronously after the `Runnable` finishes running, with the `Run` object. TYPE: `AsyncListener \| None` |
| `on_error` | Called asynchronously if the `Runnable` throws an error, with the `Run` object. TYPE: `AsyncListener \| None` |

| RETURNS | DESCRIPTION |
|---|---|
| `Runnable[Input, Output]` | A new `Runnable` with the listeners bound. |
Example
from langchain_core.runnables import RunnableLambda, Runnable
from datetime import datetime, timezone
import time
import asyncio

def format_t(timestamp: float) -> str:
    return datetime.fromtimestamp(timestamp, tz=timezone.utc).isoformat()

async def test_runnable(time_to_sleep: int):
    print(f"Runnable[{time_to_sleep}s]: starts at {format_t(time.time())}")
    await asyncio.sleep(time_to_sleep)
    print(f"Runnable[{time_to_sleep}s]: ends at {format_t(time.time())}")

async def fn_start(run_obj: Runnable):
    print(f"on start callback starts at {format_t(time.time())}")
    await asyncio.sleep(3)
    print(f"on start callback ends at {format_t(time.time())}")

async def fn_end(run_obj: Runnable):
    print(f"on end callback starts at {format_t(time.time())}")
    await asyncio.sleep(2)
    print(f"on end callback ends at {format_t(time.time())}")

runnable = RunnableLambda(test_runnable).with_alisteners(
    on_start=fn_start, on_end=fn_end
)

async def concurrent_runs():
    await asyncio.gather(runnable.ainvoke(2), runnable.ainvoke(3))

asyncio.run(concurrent_runs())

# Result:
# on start callback starts at 2025-03-01T07:05:22.875378+00:00
# on start callback starts at 2025-03-01T07:05:22.875495+00:00
# on start callback ends at 2025-03-01T07:05:25.878862+00:00
# on start callback ends at 2025-03-01T07:05:25.878947+00:00
# Runnable[2s]: starts at 2025-03-01T07:05:25.879392+00:00
# Runnable[3s]: starts at 2025-03-01T07:05:25.879804+00:00
# Runnable[2s]: ends at 2025-03-01T07:05:27.881998+00:00
# on end callback starts at 2025-03-01T07:05:27.882360+00:00
# Runnable[3s]: ends at 2025-03-01T07:05:28.881737+00:00
# on end callback starts at 2025-03-01T07:05:28.882428+00:00
# on end callback ends at 2025-03-01T07:05:29.883893+00:00
# on end callback ends at 2025-03-01T07:05:30.884831+00:00
with_types
¶
with_types(
*, input_type: type[Input] | None = None, output_type: type[Output] | None = None
) -> Runnable[Input, Output]
Bind input and output types to a Runnable, returning a new Runnable.
| PARAMETER | DESCRIPTION |
|---|---|
| `input_type` | The input type to bind to the `Runnable`. TYPE: `type[Input] \| None` |
| `output_type` | The output type to bind to the `Runnable`. TYPE: `type[Output] \| None` |

| RETURNS | DESCRIPTION |
|---|---|
| `Runnable[Input, Output]` | A new `Runnable` with the types bound. |
with_retry
¶
with_retry(
*,
retry_if_exception_type: tuple[type[BaseException], ...] = (Exception,),
wait_exponential_jitter: bool = True,
exponential_jitter_params: ExponentialJitterParams | None = None,
stop_after_attempt: int = 3,
) -> Runnable[Input, Output]
Create a new Runnable that retries the original Runnable on exceptions.
| PARAMETER | DESCRIPTION |
|---|---|
| `retry_if_exception_type` | A tuple of exception types to retry on. TYPE: `tuple[type[BaseException], ...]` |
| `wait_exponential_jitter` | Whether to add jitter to the wait time between retries. TYPE: `bool` |
| `stop_after_attempt` | The maximum number of attempts to make before giving up. TYPE: `int` |
| `exponential_jitter_params` | Parameters for `tenacity.wait_exponential_jitter`. TYPE: `ExponentialJitterParams \| None` |

| RETURNS | DESCRIPTION |
|---|---|
| `Runnable[Input, Output]` | A new `Runnable` that retries the original `Runnable` on exceptions. |
Example
from langchain_core.runnables import RunnableLambda

count = 0

def _lambda(x: int) -> None:
    global count
    count = count + 1
    if x == 1:
        raise ValueError("x is 1")
    else:
        pass

runnable = RunnableLambda(_lambda)
try:
    runnable.with_retry(
        stop_after_attempt=2,
        retry_if_exception_type=(ValueError,),
    ).invoke(1)
except ValueError:
    pass

assert count == 2
map
¶
with_fallbacks
¶
with_fallbacks(
fallbacks: Sequence[Runnable[Input, Output]],
*,
exceptions_to_handle: tuple[type[BaseException], ...] = (Exception,),
exception_key: str | None = None,
) -> RunnableWithFallbacks[Input, Output]
Add fallbacks to a Runnable, returning a new Runnable.
The new Runnable will try the original Runnable, and then each fallback
in order, upon failures.
| PARAMETER | DESCRIPTION |
|---|---|
| `fallbacks` | A sequence of runnables to try if the original `Runnable` fails. TYPE: `Sequence[Runnable[Input, Output]]` |
| `exceptions_to_handle` | A tuple of exception types to handle. TYPE: `tuple[type[BaseException], ...]` |
| `exception_key` | If a string is specified, handled exceptions will be passed to fallbacks as part of the input under the specified key. If `None`, exceptions will not be passed to fallbacks. If used, the base `Runnable` and its fallbacks must accept a dictionary as input. TYPE: `str \| None` |

| RETURNS | DESCRIPTION |
|---|---|
| `RunnableWithFallbacks[Input, Output]` | A new `Runnable` which will try the original `Runnable`, and then each fallback in order, upon failures. |
Example
from typing import Iterator

from langchain_core.runnables import RunnableGenerator

def _generate_immediate_error(input: Iterator) -> Iterator[str]:
    raise ValueError()
    yield ""

def _generate(input: Iterator) -> Iterator[str]:
    yield from "foo bar"

runnable = RunnableGenerator(_generate_immediate_error).with_fallbacks(
    [RunnableGenerator(_generate)]
)
print("".join(runnable.stream({})))  # foo bar
as_tool
¶
as_tool(
args_schema: type[BaseModel] | None = None,
*,
name: str | None = None,
description: str | None = None,
arg_types: dict[str, type] | None = None,
) -> BaseTool
Create a BaseTool from a Runnable.
as_tool will instantiate a BaseTool with a name, description, and
args_schema from a Runnable. Where possible, schemas are inferred
from runnable.get_input_schema.
Alternatively (e.g., if the Runnable takes a dict as input and the specific
dict keys are not typed), the schema can be specified directly with
args_schema.
You can also pass arg_types to just specify the required arguments and their
types.
| PARAMETER | DESCRIPTION |
|---|---|
| `args_schema` | The schema for the tool. TYPE: `type[BaseModel] \| None` |
| `name` | The name of the tool. TYPE: `str \| None` |
| `description` | The description of the tool. TYPE: `str \| None` |
| `arg_types` | A dictionary of argument names to types. TYPE: `dict[str, type] \| None` |

| RETURNS | DESCRIPTION |
|---|---|
| `BaseTool` | A `BaseTool` instance. |
TypedDict input
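The code for this case is missing above, so here is a minimal sketch (the `Args` type and function `f` are illustrative, not part of the library): the tool's `args_schema` is inferred from the function's typed input.

from typing import TypedDict

from langchain_core.runnables import RunnableLambda


class Args(TypedDict):
    a: int
    b: list[int]


def f(x: Args) -> str:
    # Multiply `a` by the largest element of `b`.
    return str(x["a"] * max(x["b"]))


runnable = RunnableLambda(f)
as_tool = runnable.as_tool()
as_tool.invoke({"a": 3, "b": [1, 2]})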
dict input, specifying schema via args_schema
from typing import Any

from pydantic import BaseModel, Field
from langchain_core.runnables import RunnableLambda


def f(x: dict[str, Any]) -> str:
    return str(x["a"] * max(x["b"]))


class FSchema(BaseModel):
    """Apply a function to an integer and list of integers."""

    a: int = Field(..., description="Integer")
    b: list[int] = Field(..., description="List of ints")


runnable = RunnableLambda(f)
as_tool = runnable.as_tool(FSchema)
as_tool.invoke({"a": 3, "b": [1, 2]})
dict input, specifying schema via arg_types
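A minimal sketch of this variant (the toy function `g` is illustrative): instead of a full schema, only the argument names and their types are supplied, and a schema is inferred from them.

from typing import Any

from langchain_core.runnables import RunnableLambda


def g(x: dict[str, Any]) -> str:
    return str(x["a"] * max(x["b"]))


runnable = RunnableLambda(g)
as_tool = runnable.as_tool(arg_types={"a": int, "b": list[int]})
as_tool.invoke({"a": 3, "b": [1, 2]})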
get_lc_namespace
classmethod
¶
lc_id
classmethod
¶
Return a unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path to the object.
For example, for the class langchain.llms.openai.OpenAI, the id is
["langchain", "llms", "openai", "OpenAI"].
to_json
¶
Serialize the Runnable to JSON.
| RETURNS | DESCRIPTION |
|---|---|
| `SerializedConstructor \| SerializedNotImplemented` | A JSON-serializable representation of the `Runnable`. |
to_json_not_implemented
¶
Serialize a "not implemented" object.
| RETURNS | DESCRIPTION |
|---|---|
| `SerializedNotImplemented` | A `SerializedNotImplemented` instance. |
configurable_fields
¶
configurable_fields(
**kwargs: AnyConfigurableField,
) -> RunnableSerializable[Input, Output]
Configure particular Runnable fields at runtime.
| PARAMETER | DESCRIPTION |
|---|---|
| `**kwargs` | A dictionary of `ConfigurableField` instances to configure. TYPE: `AnyConfigurableField` |

| RAISES | DESCRIPTION |
|---|---|
| `ValueError` | If a configuration key is not found in the `Runnable`. |

| RETURNS | DESCRIPTION |
|---|---|
| `RunnableSerializable[Input, Output]` | A new `Runnable` with the fields configured. |
Example

from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatOpenAI(max_tokens=20).configurable_fields(
    max_tokens=ConfigurableField(
        id="output_token_number",
        name="Max tokens in the output",
        description="The maximum number of tokens in the output",
    )
)

# max_tokens = 20
print("max_tokens_20: ", model.invoke("tell me something about chess").content)

# max_tokens = 200
print(
    "max_tokens_200: ",
    model.with_config(configurable={"output_token_number": 200})
    .invoke("tell me something about chess")
    .content,
)
configurable_alternatives
¶
configurable_alternatives(
which: ConfigurableField,
*,
default_key: str = "default",
prefix_keys: bool = False,
**kwargs: Runnable[Input, Output] | Callable[[], Runnable[Input, Output]],
) -> RunnableSerializable[Input, Output]
Configure alternatives for Runnable objects that can be set at runtime.
| PARAMETER | DESCRIPTION |
|---|---|
| `which` | The `ConfigurableField` instance that will be used to select the alternative. TYPE: `ConfigurableField` |
| `default_key` | The default key to use if no alternative is selected. TYPE: `str` DEFAULT: `'default'` |
| `prefix_keys` | Whether to prefix the keys with the `ConfigurableField` id. TYPE: `bool` DEFAULT: `False` |
| `**kwargs` | A dictionary of keys to `Runnable` instances or callables that return `Runnable` instances. TYPE: `Runnable[Input, Output] \| Callable[[], Runnable[Input, Output]]` |

| RETURNS | DESCRIPTION |
|---|---|
| `RunnableSerializable[Input, Output]` | A new `Runnable` with the alternatives configured. |
Example

from langchain_anthropic import ChatAnthropic
from langchain_core.runnables.utils import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatAnthropic(
    model_name="claude-sonnet-4-5-20250929"
).configurable_alternatives(
    ConfigurableField(id="llm"),
    default_key="anthropic",
    openai=ChatOpenAI(),
)

# uses the default model ChatAnthropic
print(model.invoke("which organization created you?").content)

# uses ChatOpenAI
print(
    model.with_config(configurable={"llm": "openai"})
    .invoke("which organization created you?")
    .content
)
set_verbose
¶
generate_prompt
¶
generate_prompt(
prompts: list[PromptValue],
stop: list[str] | None = None,
callbacks: Callbacks = None,
**kwargs: Any,
) -> LLMResult
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you:

- Want to take advantage of batched calls,
- Need more output from the model than just the top generated value,
- Are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).

| PARAMETER | DESCRIPTION |
|---|---|
| `prompts` | List of `PromptValue` objects. A `PromptValue` is an object that can be converted to match the format of any language model (a string for pure text generation models and `BaseMessage` objects for chat models). TYPE: `list[PromptValue]` |
| `stop` | Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. TYPE: `list[str] \| None` DEFAULT: `None` |
| `callbacks` | Used for executing additional functionality, such as logging or streaming, throughout generation. TYPE: `Callbacks` DEFAULT: `None` |
| `**kwargs` | Arbitrary additional keyword arguments. These are usually passed to the model provider API call. TYPE: `Any` |

| RETURNS | DESCRIPTION |
|---|---|
| `LLMResult` | An `LLMResult`, which contains a list of candidate `Generation` objects for each input prompt and additional model provider-specific output. |
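As a brief sketch (the prompt text and the choice of template are illustrative), a `PromptValue` can be produced from a prompt template and passed in a list:

from langchain_core.prompts import ChatPromptTemplate
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-2.5-pro")
prompt = ChatPromptTemplate.from_messages([("human", "Tell me a joke about {topic}.")])
prompt_value = prompt.invoke({"topic": "bears"})

# One PromptValue per desired generation; the result is an LLMResult.
result = llm.generate_prompt([prompt_value])
print(result.generations[0][0].text)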
agenerate_prompt
async
¶
agenerate_prompt(
prompts: list[PromptValue],
stop: list[str] | None = None,
callbacks: Callbacks = None,
**kwargs: Any,
) -> LLMResult
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you:

- Want to take advantage of batched calls,
- Need more output from the model than just the top generated value,
- Are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).

| PARAMETER | DESCRIPTION |
|---|---|
| `prompts` | List of `PromptValue` objects. A `PromptValue` is an object that can be converted to match the format of any language model (a string for pure text generation models and `BaseMessage` objects for chat models). TYPE: `list[PromptValue]` |
| `stop` | Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. TYPE: `list[str] \| None` DEFAULT: `None` |
| `callbacks` | Used for executing additional functionality, such as logging or streaming, throughout generation. TYPE: `Callbacks` DEFAULT: `None` |
| `**kwargs` | Arbitrary additional keyword arguments. These are usually passed to the model provider API call. TYPE: `Any` |

| RETURNS | DESCRIPTION |
|---|---|
| `LLMResult` | An `LLMResult`, which contains a list of candidate `Generation` objects for each input prompt and additional model provider-specific output. |
get_token_ids
¶
get_num_tokens_from_messages
¶
get_num_tokens_from_messages(
messages: list[BaseMessage], tools: Sequence | None = None
) -> int
Get the number of tokens in the messages.
Useful for checking if an input fits in a model's context window.
This should be overridden by model-specific implementations to provide accurate token counts via model-specific tokenizers.
Note

- The base implementation of `get_num_tokens_from_messages` ignores tool schemas.
- The base implementation of `get_num_tokens_from_messages` adds additional prefixes to messages to represent user roles, which will add to the overall token count. Model-specific implementations may choose to handle this differently.
| PARAMETER | DESCRIPTION |
|---|---|
| `messages` | The message inputs to tokenize. TYPE: `list[BaseMessage]` |
| `tools` | If provided, a sequence of dicts, `BaseModel` classes, functions, or `BaseTool` objects to be converted to tool schemas. TYPE: `Sequence \| None` DEFAULT: `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `int` | The sum of the number of tokens across the messages. |
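For instance (a minimal sketch; the model name and messages are illustrative), you might check that a conversation fits in the context window before invoking:

from langchain_core.messages import HumanMessage, SystemMessage
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-2.5-pro")
messages = [
    SystemMessage(content="Translate the user sentence to French."),
    HumanMessage(content="I love programming."),
]

# Count tokens before sending the request.
n_tokens = llm.get_num_tokens_from_messages(messages)
print(n_tokens)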
generate
¶
generate(
messages: list[list[BaseMessage]],
stop: list[str] | None = None,
callbacks: Callbacks = None,
*,
tags: list[str] | None = None,
metadata: dict[str, Any] | None = None,
run_name: str | None = None,
run_id: UUID | None = None,
**kwargs: Any,
) -> LLMResult
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you:

- Want to take advantage of batched calls,
- Need more output from the model than just the top generated value,
- Are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).

| PARAMETER | DESCRIPTION |
|---|---|
| `messages` | List of lists of messages. TYPE: `list[list[BaseMessage]]` |
| `stop` | Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. TYPE: `list[str] \| None` DEFAULT: `None` |
| `callbacks` | Used for executing additional functionality, such as logging or streaming, throughout generation. TYPE: `Callbacks` DEFAULT: `None` |
| `tags` | The tags to apply. TYPE: `list[str] \| None` DEFAULT: `None` |
| `metadata` | The metadata to apply. TYPE: `dict[str, Any] \| None` DEFAULT: `None` |
| `run_name` | The name of the run. TYPE: `str \| None` DEFAULT: `None` |
| `run_id` | The ID of the run. TYPE: `UUID \| None` DEFAULT: `None` |
| `**kwargs` | Arbitrary additional keyword arguments. These are usually passed to the model provider API call. TYPE: `Any` |

| RETURNS | DESCRIPTION |
|---|---|
| `LLMResult` | An `LLMResult`, which contains a list of candidate `Generation` objects for each input message list and additional model provider-specific output. |
agenerate
async
¶
agenerate(
messages: list[list[BaseMessage]],
stop: list[str] | None = None,
callbacks: Callbacks = None,
*,
tags: list[str] | None = None,
metadata: dict[str, Any] | None = None,
run_name: str | None = None,
run_id: UUID | None = None,
**kwargs: Any,
) -> LLMResult
Asynchronously pass a sequence of prompts to a model and return generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you:

- Want to take advantage of batched calls,
- Need more output from the model than just the top generated value,
- Are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).

| PARAMETER | DESCRIPTION |
|---|---|
| `messages` | List of lists of messages. TYPE: `list[list[BaseMessage]]` |
| `stop` | Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. TYPE: `list[str] \| None` DEFAULT: `None` |
| `callbacks` | Used for executing additional functionality, such as logging or streaming, throughout generation. TYPE: `Callbacks` DEFAULT: `None` |
| `tags` | The tags to apply. TYPE: `list[str] \| None` DEFAULT: `None` |
| `metadata` | The metadata to apply. TYPE: `dict[str, Any] \| None` DEFAULT: `None` |
| `run_name` | The name of the run. TYPE: `str \| None` DEFAULT: `None` |
| `run_id` | The ID of the run. TYPE: `UUID \| None` DEFAULT: `None` |
| `**kwargs` | Arbitrary additional keyword arguments. These are usually passed to the model provider API call. TYPE: `Any` |

| RETURNS | DESCRIPTION |
|---|---|
| `LLMResult` | An `LLMResult`, which contains a list of candidate `Generation` objects for each input message list and additional model provider-specific output. |
is_lc_serializable
classmethod
¶
is_lc_serializable() -> bool
Is this class serializable?
By design, even if a class inherits from Serializable, it is not serializable
by default. This is to prevent accidental serialization of objects that should
not be serialized.
| RETURNS | DESCRIPTION |
|---|---|
| `bool` | Whether the class is serializable. Default is `False`. |
build_extra
classmethod
¶
Build extra kwargs from additional params that were passed in.
validate_environment
¶
validate_environment() -> Self
Validates params and passes them to google-generativeai package.
invoke
¶
invoke(
input: LanguageModelInput,
config: RunnableConfig | None = None,
*,
code_execution: bool | None = None,
stop: list[str] | None = None,
**kwargs: Any,
) -> AIMessage
Override invoke on ChatGoogleGenerativeAI to add code_execution.
See the models page to see if your chosen model supports code execution. When enabled, the model can execute code to solve problems.
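A minimal sketch of enabling code execution on a single call (the prompt is illustrative, and the chosen model is assumed to support code execution):

from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-2.5-pro")

# The model may write and run code to answer the question.
response = llm.invoke(
    "What is the sum of the first 50 prime numbers?",
    code_execution=True,
)
print(response.content)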
get_num_tokens
¶
with_structured_output
¶
with_structured_output(
schema: dict | type[BaseModel],
method: Literal["function_calling", "json_mode", "json_schema"]
| None = "function_calling",
*,
include_raw: bool = False,
**kwargs: Any,
) -> Runnable[LanguageModelInput, dict | BaseModel]
Model wrapper that returns outputs formatted to match the given schema.
| PARAMETER | DESCRIPTION |
|---|---|
| `schema` | The output schema. Can be passed in as an OpenAI function/tool schema, a JSON Schema, a TypedDict class, or a Pydantic class. If `schema` is a Pydantic class then the model output will be a Pydantic instance of that class, and the model-generated fields will be validated by the Pydantic class; otherwise the model output will be a dict. See `langchain_core.utils.function_calling.convert_to_openai_tool` for more on how to properly specify types and descriptions of schema fields. TYPE: `dict \| type[BaseModel]` |
| `include_raw` | If `False`, only the parsed structured output is returned; if an error occurs during model output parsing it will be raised. If `True`, both the raw model response (a `BaseMessage`) and the parsed model response are returned; if an error occurs during output parsing it will be caught and returned as well. The final output is always a dict with keys `'raw'`, `'parsed'`, and `'parsing_error'`. TYPE: `bool` DEFAULT: `False` |

| RAISES | DESCRIPTION |
|---|---|
| `ValueError` | If there are any unsupported `kwargs`. |
| `NotImplementedError` | If the model does not implement `with_structured_output()`. |

| RETURNS | DESCRIPTION |
|---|---|
| `Runnable[LanguageModelInput, dict \| BaseModel]` | A `Runnable` that takes the same inputs as the chat model. If `include_raw` is `False` and `schema` is a Pydantic class, the `Runnable` outputs an instance of `schema`; otherwise it outputs a dict. If `include_raw` is `True`, the `Runnable` outputs a dict with keys `'raw'`, `'parsed'`, and `'parsing_error'`. |
Example: Pydantic schema (include_raw=False):

from pydantic import BaseModel


class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''

    answer: str
    justification: str


model = ChatModel(model="model-name", temperature=0)
structured_model = model.with_structured_output(AnswerWithJustification)

structured_model.invoke(
    "What weighs more a pound of bricks or a pound of feathers"
)
# -> AnswerWithJustification(
#     answer='They weigh the same',
#     justification='Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ.'
# )
Example: Pydantic schema (include_raw=True):

from pydantic import BaseModel


class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''

    answer: str
    justification: str


model = ChatModel(model="model-name", temperature=0)
structured_model = model.with_structured_output(
    AnswerWithJustification, include_raw=True
)

structured_model.invoke(
    "What weighs more a pound of bricks or a pound of feathers"
)
# -> {
#     'raw': AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_Ao02pnFYXD6GN1yzc0uXPsvF', 'function': {'arguments': '{"answer":"They weigh the same.","justification":"Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ."}', 'name': 'AnswerWithJustification'}, 'type': 'function'}]}),
#     'parsed': AnswerWithJustification(answer='They weigh the same.', justification='Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ.'),
#     'parsing_error': None
# }
Example: dict schema (include_raw=False):

from pydantic import BaseModel
from langchain_core.utils.function_calling import convert_to_openai_tool


class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''

    answer: str
    justification: str


dict_schema = convert_to_openai_tool(AnswerWithJustification)
model = ChatModel(model="model-name", temperature=0)
structured_model = model.with_structured_output(dict_schema)

structured_model.invoke(
    "What weighs more a pound of bricks or a pound of feathers"
)
# -> {
#     'answer': 'They weigh the same',
#     'justification': 'Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume and density of the two substances differ.'
# }
Behavior changed in langchain-core 0.2.26
Added support for TypedDict class.
bind_tools
¶
bind_tools(
tools: Sequence[dict[str, Any] | type | Callable[..., Any] | BaseTool | Tool],
tool_config: dict | _ToolConfigDict | None = None,
*,
tool_choice: _ToolChoiceType | bool | None = None,
**kwargs: Any,
) -> Runnable[LanguageModelInput, AIMessage]
Bind tool-like objects to this chat model.
Assumes the model is compatible with the google-generativeai tool-calling API.

| PARAMETER | DESCRIPTION |
|---|---|
| `tools` | A list of tool definitions to bind to this chat model. Can be a Pydantic model, callable, or `BaseTool`; these are automatically converted to their schema dictionary representation. Tools with Union types in their arguments are supported and converted to `anyOf` schemas. TYPE: `Sequence[dict[str, Any] \| type \| Callable[..., Any] \| BaseTool \| Tool]` |
| `**kwargs` | Any additional parameters to pass to the `Runnable` constructor. TYPE: `Any` |
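A minimal sketch of binding a plain Python function as a tool (the function, its docstring, and the prompt are illustrative):

from langchain_google_genai import ChatGoogleGenerativeAI


def get_weather(location: str) -> str:
    """Get the current weather for a location."""
    return f"It is sunny in {location}."


llm = ChatGoogleGenerativeAI(model="gemini-2.5-pro")
llm_with_tools = llm.bind_tools([get_weather])

ai_msg = llm_with_tools.invoke("What's the weather in Paris?")
# Tool calls requested by the model, e.g.
# [{'name': 'get_weather', 'args': {'location': 'Paris'}, ...}]
print(ai_msg.tool_calls)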
GoogleGenerativeAIEmbeddings
¶
Bases: BaseModel, Embeddings
Google Generative AI Embeddings.
To use, you must have either:

- The `GOOGLE_API_KEY` environment variable set with your API key, or
- Pass your API key using the `google_api_key` kwarg to the `GoogleGenerativeAIEmbeddings` constructor.
Example
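A minimal sketch, assuming `GOOGLE_API_KEY` is set in the environment (the query text is arbitrary):

from langchain_google_genai import GoogleGenerativeAIEmbeddings

embedder = GoogleGenerativeAIEmbeddings(
    model="models/gemini-embedding-001",
    task_type="retrieval_query",  # optional; see task_type below
)
vector = embedder.embed_query("What is the capital of France?")
print(len(vector))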
| METHOD | DESCRIPTION |
|---|---|
| `validate_environment` | Validates params and passes them to the google-generativeai package. |
| `embed_documents` | Embed a list of strings using the batch endpoint. |
| `embed_query` | Embed a text, using the non-batch endpoint. |
| `aembed_documents` | Embed a list of strings using the batch endpoint. |
| `aembed_query` | Embed a text, using the non-batch endpoint. |
model
class-attribute
instance-attribute
¶
The name of the embedding model to use.
Example: 'models/gemini-embedding-001'
task_type
class-attribute
instance-attribute
¶
The task type.
Valid options include:

- `'task_type_unspecified'`
- `'retrieval_query'`
- `'retrieval_document'`
- `'semantic_similarity'`
- `'classification'`
- `'clustering'`
google_api_key
class-attribute
instance-attribute
¶
google_api_key: SecretStr | None = Field(
default_factory=secret_from_env("GOOGLE_API_KEY", default=None)
)
The Google API key to use.
If not provided, the GOOGLE_API_KEY environment variable will be used.
credentials
class-attribute
instance-attribute
¶
The default custom credentials to use when making API calls.
(google.auth.credentials.Credentials)
If not provided, credentials will be ascertained from the GOOGLE_API_KEY env var.
client_options
class-attribute
instance-attribute
¶
A dictionary of client options to pass to the Google API client.
Example: api_endpoint
base_url
class-attribute
instance-attribute
¶
The base URL to use for the API client.
Alias of client_options['api_endpoint'].
transport
class-attribute
instance-attribute
¶
A string, one of: ['rest', 'grpc', 'grpc_asyncio'].
request_options
class-attribute
instance-attribute
¶
A dictionary of request options to pass to the Google API client.
Example: {'timeout': 10}
validate_environment
¶
validate_environment() -> Self
Validates params and passes them to google-generativeai package.
embed_documents
¶
embed_documents(
texts: list[str],
*,
batch_size: int = _DEFAULT_BATCH_SIZE,
task_type: str | None = None,
titles: list[str] | None = None,
output_dimensionality: int | None = None,
) -> list[list[float]]
Embed a list of strings using the batch endpoint.

Google Generative AI currently sets a max batch size of 100 strings.

| PARAMETER | DESCRIPTION |
|---|---|
| `texts` | The list of strings to embed. TYPE: `list[str]` |
| `batch_size` | The batch size of embeddings to send to the model. TYPE: `int` DEFAULT: `_DEFAULT_BATCH_SIZE` |
| `task_type` | Optional task type for the embeddings. TYPE: `str \| None` DEFAULT: `None` |
| `titles` | Optional list of titles for the texts provided. Only applicable when `task_type` is `'retrieval_document'`. TYPE: `list[str] \| None` DEFAULT: `None` |
| `output_dimensionality` | Optional reduced dimension for the output embeddings. TYPE: `int \| None` DEFAULT: `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `list[list[float]]` | List of embeddings, one for each text. |
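For example (a minimal sketch; the texts and parameter values are illustrative), documents can be embedded for retrieval with an explicit task type and a reduced dimensionality:

from langchain_google_genai import GoogleGenerativeAIEmbeddings

embedder = GoogleGenerativeAIEmbeddings(model="models/gemini-embedding-001")

vectors = embedder.embed_documents(
    ["LangChain integrates many model providers.", "Gemini models support embeddings."],
    batch_size=50,  # must not exceed the API's limit of 100 strings per batch
    task_type="retrieval_document",
    output_dimensionality=768,
)
print(len(vectors), len(vectors[0]))  # 2 documents, 768 dimensions each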
embed_query
¶
embed_query(
text: str,
*,
task_type: str | None = None,
title: str | None = None,
output_dimensionality: int | None = None,
) -> list[float]
Embed a text, using the non-batch endpoint.

| PARAMETER | DESCRIPTION |
|---|---|
| `text` | The text to embed. TYPE: `str` |
| `task_type` | Optional task type for the embedding. TYPE: `str \| None` DEFAULT: `None` |
| `title` | Optional title for the text. Only applicable when `task_type` is `'retrieval_document'`. TYPE: `str \| None` DEFAULT: `None` |
| `output_dimensionality` | Optional reduced dimension for the output embedding. TYPE: `int \| None` DEFAULT: `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `list[float]` | Embedding for the text. |
aembed_documents
async
¶
aembed_documents(
texts: list[str],
*,
batch_size: int = _DEFAULT_BATCH_SIZE,
task_type: str | None = None,
titles: list[str] | None = None,
output_dimensionality: int | None = None,
) -> list[list[float]]
Embed a list of strings using the batch endpoint.

Google Generative AI currently sets a max batch size of 100 strings.

| PARAMETER | DESCRIPTION |
|---|---|
| `texts` | The list of strings to embed. TYPE: `list[str]` |
| `batch_size` | The batch size of embeddings to send to the model. TYPE: `int` DEFAULT: `_DEFAULT_BATCH_SIZE` |
| `task_type` | Optional task type for the embeddings. TYPE: `str \| None` DEFAULT: `None` |
| `titles` | Optional list of titles for the texts provided. Only applicable when `task_type` is `'retrieval_document'`. TYPE: `list[str] \| None` DEFAULT: `None` |
| `output_dimensionality` | Optional reduced dimension for the output embeddings. TYPE: `int \| None` DEFAULT: `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `list[list[float]]` | List of embeddings, one for each text. |
aembed_query
async
¶
aembed_query(
text: str,
*,
task_type: str | None = None,
title: str | None = None,
output_dimensionality: int | None = None,
) -> list[float]
Embed a text, using the non-batch endpoint.
| PARAMETER | DESCRIPTION |
|---|---|
| `text` | The text to embed. TYPE: `str` |
| `task_type` | Optional task type for the embedding. TYPE: `str \| None` DEFAULT: `None` |
| `title` | Optional title for the text. Only applicable when `task_type` is `'retrieval_document'`. TYPE: `str \| None` DEFAULT: `None` |
| `output_dimensionality` | Optional reduced dimension for the output embedding. TYPE: `int \| None` DEFAULT: `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `list[float]` | Embedding for the text. |
GoogleVectorStore
¶
Bases: VectorStore
Google GenerativeAI Vector Store.
Currently, it computes the embedding vectors on the server side.
Add texts to an existing corpus
Create a new corpus
Query the corpus for relevant passages
You can also operate at Google's Document level.
Add texts to an existing Google Vector Store Document
Create a new Google Vector Store Document
Query the Google document
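The code for these operations is missing above; the sketch below ties them together under stated assumptions: the display names are illustrative, the `corpus_id` attribute on the returned store is assumed, and the Google Semantic Retriever API must be enabled for your credentials.

from langchain_google_genai import GoogleVectorStore

# Create a new corpus, then a document inside it.
corpus_store = GoogleVectorStore.create_corpus(display_name="My Corpus")
document_store = GoogleVectorStore.create_document(
    corpus_id=corpus_store.corpus_id,  # assumed attribute, for illustration
    display_name="My Document",
)

# Add texts to the document; embedding vectors are computed server side.
document_store.add_texts(["Apples are red.", "Bananas are yellow."])

# Query the corpus (or the single document) for relevant passages.
print(corpus_store.similarity_search("What color are apples?", k=1))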
| METHOD | DESCRIPTION |
|---|---|
| `get_by_ids` | Get documents by their IDs. |
| `aget_by_ids` | Async get documents by their IDs. |
| `aadd_texts` | Async run more texts through the embeddings and add to the `VectorStore`. |
| `add_documents` | Add or update documents in the `VectorStore`. |
| `aadd_documents` | Async run more documents through the embeddings and add to the `VectorStore`. |
| `search` | Return docs most similar to query using a specified search type. |
| `asearch` | Async return docs most similar to query using a specified search type. |
| `asimilarity_search_with_score` | Async run similarity search with distance. |
| `similarity_search_with_relevance_scores` | Return docs and relevance scores in the range `[0, 1]`. |
| `asimilarity_search_with_relevance_scores` | Async return docs and relevance scores in the range `[0, 1]`. |
| `asimilarity_search` | Async return docs most similar to query. |
| `similarity_search_by_vector` | Return docs most similar to embedding vector. |
| `asimilarity_search_by_vector` | Async return docs most similar to embedding vector. |
| `max_marginal_relevance_search` | Return docs selected using the maximal marginal relevance. |
| `amax_marginal_relevance_search` | Async return docs selected using the maximal marginal relevance. |
| `max_marginal_relevance_search_by_vector` | Return docs selected using the maximal marginal relevance. |
| `amax_marginal_relevance_search_by_vector` | Async return docs selected using the maximal marginal relevance. |
| `from_documents` | Return `VectorStore` initialized from documents and embeddings. |
| `afrom_documents` | Async return `VectorStore` initialized from documents and embeddings. |
| `afrom_texts` | Async return `VectorStore` initialized from texts and embeddings. |
| `as_retriever` | Return `VectorStoreRetriever` initialized from this `VectorStore`. |
| `__init__` | Returns an existing Google Semantic Retriever corpus or document. |
| `create_corpus` | Create a Google Semantic Retriever corpus. |
| `create_document` | Create a Google Semantic Retriever document. |
| `from_texts` | Returns a vector store of an existing document with the specified text. |
| `add_texts` | Add texts to the vector store. |
| `similarity_search` | Search the vector store for relevant texts. |
| `similarity_search_with_score` | Run similarity search with distance. |
| `delete` | Delete chunks. |
| `adelete` | Delete chunks asynchronously. |
name
property
¶
name: str
Returns the name of the Google entity.
You shouldn't need to care about this unless you want to access your corpus or document via the Google Generative AI API.
document_id
property
¶
document_id: str | None
Returns the document ID managed by this vector store.
get_by_ids
¶
Get documents by their IDs.
The returned documents are expected to have the ID field set to the ID of the document in the vector store.
Fewer documents may be returned than requested if some IDs are not found or if there are duplicated IDs.
Users should not assume that the order of the returned documents matches the order of the input IDs. Instead, users should rely on the ID field of the returned documents.
This method should NOT raise exceptions if no documents are found for some IDs.
| PARAMETER | DESCRIPTION |
|---|---|
| `ids` | List of IDs to retrieve. TYPE: `Sequence[str]` |

| RETURNS | DESCRIPTION |
|---|---|
| `list[Document]` | List of `Document` objects. |
aget_by_ids
async
¶
Async get documents by their IDs.
The returned documents are expected to have the ID field set to the ID of the document in the vector store.
Fewer documents may be returned than requested if some IDs are not found or if there are duplicated IDs.
Users should not assume that the order of the returned documents matches the order of the input IDs. Instead, users should rely on the ID field of the returned documents.
This method should NOT raise exceptions if no documents are found for some IDs.
| PARAMETER | DESCRIPTION |
|---|---|
| `ids` | List of IDs to retrieve. TYPE: `Sequence[str]` |

| RETURNS | DESCRIPTION |
|---|---|
| `list[Document]` | List of `Document` objects. |
aadd_texts
async
¶
aadd_texts(
texts: Iterable[str],
metadatas: list[dict] | None = None,
*,
ids: list[str] | None = None,
**kwargs: Any,
) -> list[str]
Async run more texts through the embeddings and add to the VectorStore.
| PARAMETER | DESCRIPTION |
|---|---|
| `texts` | Iterable of strings to add to the `VectorStore`. TYPE: `Iterable[str]` |
| `metadatas` | Optional list of metadatas associated with the texts. TYPE: `list[dict] \| None` DEFAULT: `None` |
| `ids` | Optional list of IDs associated with the texts. TYPE: `list[str] \| None` DEFAULT: `None` |
| `**kwargs` | `VectorStore`-specific parameters. TYPE: `Any` |

| RETURNS | DESCRIPTION |
|---|---|
| `list[str]` | List of IDs from adding the texts into the `VectorStore`. |

| RAISES | DESCRIPTION |
|---|---|
| `ValueError` | If the number of metadatas does not match the number of texts. |
| `ValueError` | If the number of IDs does not match the number of texts. |
add_documents
¶
Add or update documents in the VectorStore.
| PARAMETER | DESCRIPTION |
|---|---|
| `documents` | Documents to add to the `VectorStore`. TYPE: `list[Document]` |
| `**kwargs` | Additional keyword arguments. If `kwargs` contains IDs and the documents also contain IDs, the IDs in `kwargs` take precedence. TYPE: `Any` |

| RETURNS | DESCRIPTION |
|---|---|
| `list[str]` | List of IDs of the added texts. |
aadd_documents
async
¶
search
¶
Return docs most similar to query using a specified search type.
| PARAMETER | DESCRIPTION |
|---|---|
| `query` | Input text. TYPE: `str` |
| `search_type` | Type of search to perform. Can be `'similarity'`, `'mmr'`, or `'similarity_score_threshold'`. TYPE: `str` |
| `**kwargs` | Arguments to pass to the search method. TYPE: `Any` |

| RETURNS | DESCRIPTION |
|---|---|
| `list[Document]` | List of `Document` objects most similar to the query. |

| RAISES | DESCRIPTION |
|---|---|
| `ValueError` | If `search_type` is not one of `'similarity'`, `'mmr'`, or `'similarity_score_threshold'`. |
asearch
async
¶
Async return docs most similar to query using a specified search type.
| PARAMETER | DESCRIPTION |
|---|---|
| `query` | Input text. TYPE: `str` |
| `search_type` | Type of search to perform. Can be `'similarity'`, `'mmr'`, or `'similarity_score_threshold'`. TYPE: `str` |
| `**kwargs` | Arguments to pass to the search method. TYPE: `Any` |

| RETURNS | DESCRIPTION |
|---|---|
| `list[Document]` | List of `Document` objects most similar to the query. |

| RAISES | DESCRIPTION |
|---|---|
| `ValueError` | If `search_type` is not one of `'similarity'`, `'mmr'`, or `'similarity_score_threshold'`. |
asimilarity_search_with_score
async
¶
similarity_search_with_relevance_scores
¶
similarity_search_with_relevance_scores(
query: str, k: int = 4, **kwargs: Any
) -> list[tuple[Document, float]]
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
| PARAMETER | DESCRIPTION |
|---|---|
| `query` | Input text. TYPE: `str` |
| `k` | Number of `Document` objects to return. TYPE: `int` DEFAULT: `4` |
| `**kwargs` | kwargs to be passed to similarity search. Should include `score_threshold`, an optional floating-point value between 0 and 1 used to filter the resulting set of retrieved docs. TYPE: `Any` |

| RETURNS | DESCRIPTION |
|---|---|
| `list[tuple[Document, float]]` | List of tuples of `(doc, similarity_score)`. |
asimilarity_search_with_relevance_scores
async
¶
asimilarity_search_with_relevance_scores(
query: str, k: int = 4, **kwargs: Any
) -> list[tuple[Document, float]]
Async return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
| PARAMETER | DESCRIPTION |
|---|---|
| `query` | Input text. TYPE: `str` |
| `k` | Number of `Document` objects to return. TYPE: `int` DEFAULT: `4` |
| `**kwargs` | kwargs to be passed to similarity search. Should include `score_threshold`, an optional floating-point value between 0 and 1 used to filter the resulting set of retrieved docs. TYPE: `Any` |

| RETURNS | DESCRIPTION |
|---|---|
| `list[tuple[Document, float]]` | List of tuples of `(doc, similarity_score)`. |
asimilarity_search
async
¶
Async return docs most similar to query.
| PARAMETER | DESCRIPTION |
|---|---|
| `query` | Input text. TYPE: `str` |
| `k` | Number of `Document` objects to return. TYPE: `int` DEFAULT: `4` |
| `**kwargs` | Arguments to pass to the search method. TYPE: `Any` |

| RETURNS | DESCRIPTION |
|---|---|
| `list[Document]` | List of `Document` objects most similar to the query. |
similarity_search_by_vector
¶
Return docs most similar to embedding vector.
| PARAMETER | DESCRIPTION |
|---|---|
| `embedding` | Embedding to look up documents similar to. TYPE: `list[float]` |
| `k` | Number of `Document` objects to return. TYPE: `int` DEFAULT: `4` |
| `**kwargs` | Arguments to pass to the search method. TYPE: `Any` |

| RETURNS | DESCRIPTION |
|---|---|
| `list[Document]` | List of `Document` objects most similar to the query vector. |
asimilarity_search_by_vector
async
¶
Async return docs most similar to embedding vector.
| PARAMETER | DESCRIPTION |
|---|---|
| `embedding` | Embedding to look up documents similar to. TYPE: `list[float]` |
| `k` | Number of `Document` objects to return. TYPE: `int` DEFAULT: `4` |
| `**kwargs` | Arguments to pass to the search method. TYPE: `Any` |

| RETURNS | DESCRIPTION |
|---|---|
| `list[Document]` | List of `Document` objects most similar to the query vector. |
max_marginal_relevance_search
¶
max_marginal_relevance_search(
query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any
) -> list[Document]
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.
| PARAMETER | DESCRIPTION |
|---|---|
| `query` | Text to look up documents similar to. TYPE: `str` |
| `k` | Number of `Document` objects to return. TYPE: `int` DEFAULT: `4` |
| `fetch_k` | Number of `Document` objects to fetch and pass to the MMR algorithm. TYPE: `int` DEFAULT: `20` |
| `lambda_mult` | Number between `0` and `1` that determines the degree of diversity among the results, with `0` corresponding to maximum diversity and `1` to minimum diversity. TYPE: `float` DEFAULT: `0.5` |
| `**kwargs` | Arguments to pass to the search method. TYPE: `Any` |

| RETURNS | DESCRIPTION |
|---|---|
| `list[Document]` | List of `Document` objects selected by maximal marginal relevance. |
amax_marginal_relevance_search
async
¶
amax_marginal_relevance_search(
query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any
) -> list[Document]
Async return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.
| PARAMETER | DESCRIPTION |
|---|---|
| `query` | Text to look up documents similar to. TYPE: `str` |
| `k` | Number of `Document` objects to return. TYPE: `int` DEFAULT: `4` |
| `fetch_k` | Number of `Document` objects to fetch and pass to the MMR algorithm. TYPE: `int` DEFAULT: `20` |
| `lambda_mult` | Number between `0` and `1` that determines the degree of diversity among the results, with `0` corresponding to maximum diversity and `1` to minimum diversity. TYPE: `float` DEFAULT: `0.5` |
| `**kwargs` | Arguments to pass to the search method. TYPE: `Any` |

| RETURNS | DESCRIPTION |
|---|---|
| `list[Document]` | List of `Document` objects selected by maximal marginal relevance. |
max_marginal_relevance_search_by_vector
¶
max_marginal_relevance_search_by_vector(
embedding: list[float],
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
**kwargs: Any,
) -> list[Document]
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.
| PARAMETER | DESCRIPTION |
|---|---|
| `embedding` | Embedding to look up documents similar to. TYPE: `list[float]` |
| `k` | Number of `Document` objects to return. TYPE: `int` DEFAULT: `4` |
| `fetch_k` | Number of `Document` objects to fetch and pass to the MMR algorithm. TYPE: `int` DEFAULT: `20` |
| `lambda_mult` | Number between `0` and `1` that determines the degree of diversity among the results, with `0` corresponding to maximum diversity and `1` to minimum diversity. TYPE: `float` DEFAULT: `0.5` |
| `**kwargs` | Arguments to pass to the search method. TYPE: `Any` |

| RETURNS | DESCRIPTION |
|---|---|
| `list[Document]` | List of `Document` objects selected by maximal marginal relevance. |
amax_marginal_relevance_search_by_vector
async
¶
amax_marginal_relevance_search_by_vector(
embedding: list[float],
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
**kwargs: Any,
) -> list[Document]
Async return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.
| PARAMETER | DESCRIPTION |
|---|---|
| `embedding` | Embedding to look up documents similar to. TYPE: `list[float]` |
| `k` | Number of `Document` objects to return. TYPE: `int` DEFAULT: `4` |
| `fetch_k` | Number of `Document` objects to fetch and pass to the MMR algorithm. TYPE: `int` DEFAULT: `20` |
| `lambda_mult` | Number between `0` and `1` that determines the degree of diversity among the results, with `0` corresponding to maximum diversity and `1` to minimum diversity. TYPE: `float` DEFAULT: `0.5` |
| `**kwargs` | Arguments to pass to the search method. TYPE: `Any` |

| RETURNS | DESCRIPTION |
|---|---|
| `list[Document]` | List of `Document` objects selected by maximal marginal relevance. |
from_documents
classmethod
¶
from_documents(documents: list[Document], embedding: Embeddings, **kwargs: Any) -> Self
Return VectorStore initialized from documents and embeddings.
| PARAMETER | DESCRIPTION |
|---|---|
| `documents` | List of `Document` objects to add to the `VectorStore`. TYPE: `list[Document]` |
| `embedding` | Embedding function to use. TYPE: `Embeddings` |
| `**kwargs` | Additional keyword arguments. TYPE: `Any` |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | `VectorStore` initialized from documents and embeddings. |
afrom_documents
async
classmethod
¶
afrom_documents(
documents: list[Document], embedding: Embeddings, **kwargs: Any
) -> Self
Async return VectorStore initialized from documents and embeddings.
| PARAMETER | DESCRIPTION |
|---|---|
| `documents` | List of `Document` objects to add to the `VectorStore`. TYPE: `list[Document]` |
| `embedding` | Embedding function to use. TYPE: `Embeddings` |
| `**kwargs` | Additional keyword arguments. TYPE: `Any` |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | `VectorStore` initialized from documents and embeddings. |
afrom_texts
async
classmethod
¶
afrom_texts(
texts: list[str],
embedding: Embeddings,
metadatas: list[dict] | None = None,
*,
ids: list[str] | None = None,
**kwargs: Any,
) -> Self
Async return VectorStore initialized from texts and embeddings.
| PARAMETER | DESCRIPTION |
|---|---|
| `texts` | Texts to add to the `VectorStore`. TYPE: `list[str]` |
| `embedding` | Embedding function to use. TYPE: `Embeddings` |
| `metadatas` | Optional list of metadatas associated with the texts. TYPE: `list[dict] \| None` DEFAULT: `None` |
| `ids` | Optional list of IDs associated with the texts. TYPE: `list[str] \| None` DEFAULT: `None` |
| `**kwargs` | Additional keyword arguments. TYPE: `Any` |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | `VectorStore` initialized from texts and embeddings. |
as_retriever
¶
as_retriever(**kwargs: Any) -> VectorStoreRetriever
Return VectorStoreRetriever initialized from this VectorStore.
| PARAMETER | DESCRIPTION |
|---|---|
| `**kwargs` | Keyword arguments to pass to the search function. Can include `search_type` (the type of search to perform: `'similarity'`, `'mmr'`, or `'similarity_score_threshold'`) and `search_kwargs` (keyword arguments to pass to the search function, e.g. `k`, `score_threshold`, `fetch_k`, `lambda_mult`, `filter`). TYPE: `Any` |

| RETURNS | DESCRIPTION |
|---|---|
| `VectorStoreRetriever` | Retriever class for `VectorStore`. |

Examples:

# Retrieve more documents with higher diversity
# Useful if your dataset has many similar documents
docsearch.as_retriever(
    search_type="mmr", search_kwargs={"k": 6, "lambda_mult": 0.25}
)

# Fetch more documents for the MMR algorithm to consider
# But only return the top 5
docsearch.as_retriever(search_type="mmr", search_kwargs={"k": 5, "fetch_k": 50})

# Only retrieve documents that have a relevance score
# Above a certain threshold
docsearch.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={"score_threshold": 0.8},
)

# Only get the single most similar document from the dataset
docsearch.as_retriever(search_kwargs={"k": 1})

# Use a filter to only retrieve documents from a specific paper
docsearch.as_retriever(
    search_kwargs={"filter": {"paper_title": "GPT-4 Technical Report"}}
)
__init__
¶
Returns an existing Google Semantic Retriever corpus or document.
If just the corpus ID is provided, the vector store operates over all documents within that corpus.
If the document ID is provided, the vector store operates over just that document.
| RAISES | DESCRIPTION |
|---|---|
| `DoesNotExistsException` | If the IDs do not match to anything on the Google server. In this case, consider using `create_corpus` or `create_document` to create one. |
create_corpus
classmethod
¶
create_corpus(
corpus_id: str | None = None, display_name: str | None = None
) -> GoogleVectorStore
Create a Google Semantic Retriever corpus.
| PARAMETER | DESCRIPTION |
|---|---|
| `corpus_id` | The ID to use to create the new corpus. If not provided, Google server will provide one. TYPE: `str \| None` DEFAULT: `None` |
| `display_name` | The title of the new corpus. If not provided, Google server will provide one. TYPE: `str \| None` DEFAULT: `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `GoogleVectorStore` | An instance of vector store that points to the newly created corpus. |
create_document
classmethod
¶
create_document(
corpus_id: str,
document_id: str | None = None,
display_name: str | None = None,
metadata: dict[str, Any] | None = None,
) -> GoogleVectorStore
Create a Google Semantic Retriever document.
| PARAMETER | DESCRIPTION |
|---|---|
| `corpus_id` | ID of an existing corpus. TYPE: `str` |
| `document_id` | The ID to use to create the new Google Semantic Retriever document. If not provided, Google server will provide one. TYPE: `str \| None` DEFAULT: `None` |
| `display_name` | The title of the new document. If not provided, Google server will provide one. TYPE: `str \| None` DEFAULT: `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `GoogleVectorStore` | An instance of vector store that points to the newly created document. |
from_texts
classmethod
¶
from_texts(
texts: list[str],
embedding: Embeddings | None = None,
metadatas: list[dict[str, Any]] | None = None,
*,
corpus_id: str | None = None,
document_id: str | None = None,
**kwargs: Any,
) -> GoogleVectorStore
Returns a vector store of an existing document with the specified text.
| PARAMETER | DESCRIPTION |
|---|---|
| `corpus_id` | REQUIRED. Must be an existing corpus. TYPE: `str \| None` |
| `document_id` | REQUIRED. Must be an existing document. TYPE: `str \| None` |
| `texts` | Texts to be loaded into the vector store. TYPE: `list[str]` |

| RETURNS | DESCRIPTION |
|---|---|
| `GoogleVectorStore` | A vector store pointing to the specified Google Semantic Retriever Document. |

| RAISES | DESCRIPTION |
|---|---|
| `DoesNotExistsException` | If the IDs do not match to anything at Google server. |
add_texts
¶
similarity_search
¶
similarity_search(
query: str, k: int = 4, filter: dict[str, Any] | None = None, **kwargs: Any
) -> list[Document]
Search the vector store for relevant texts.
similarity_search_with_score
¶
similarity_search_with_score(
query: str, k: int = 4, filter: dict[str, Any] | None = None, **kwargs: Any
) -> list[tuple[Document, float]]
Run similarity search with distance.
delete
¶
Delete chunks.
Note that the "ids" are not corpus ID or document ID. Rather, these
are the entity names returned by add_texts.
| RETURNS | DESCRIPTION |
|---|---|
| `bool \| None` | `True` if deletion is successful, `False` otherwise, `None` if not implemented. |
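A minimal sketch (assuming `store` is an existing `GoogleVectorStore` pointing at a document): `delete` expects the chunk entity names that `add_texts` returned.

# `store` is an existing GoogleVectorStore pointing at a document.
chunk_names = store.add_texts(["Temporary passage."])

# Delete those chunks by the entity names returned above.
store.delete(ids=chunk_names)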
GoogleGenerativeAI
¶
Bases: _BaseGoogleGenerativeAI, BaseLLM
Google GenerativeAI text completion large language models (legacy LLMs).
Basic Usage
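A minimal sketch, assuming `GOOGLE_API_KEY` is set in the environment (the prompts are arbitrary):

from langchain_google_genai import GoogleGenerativeAI

llm = GoogleGenerativeAI(model="gemini-2.5-pro", temperature=0.7)

# Single completion
print(llm.invoke("Explain the difference between a list and a tuple in Python."))

# Streamed completion
for chunk in llm.stream("Write a haiku about autumn."):
    print(chunk, end="")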
| METHOD | DESCRIPTION |
|---|---|
| `get_name` | Get the name of the `Runnable`. |
| `get_input_schema` | Get a Pydantic model that can be used to validate input to the `Runnable`. |
| `get_input_jsonschema` | Get a JSON schema that represents the input to the `Runnable`. |
| `get_output_schema` | Get a Pydantic model that can be used to validate output to the `Runnable`. |
| `get_output_jsonschema` | Get a JSON schema that represents the output of the `Runnable`. |
| `config_schema` | The type of config this `Runnable` accepts specified as a Pydantic model. |
| `get_config_jsonschema` | Get a JSON schema that represents the config of the `Runnable`. |
| `get_graph` | Return a graph representation of this `Runnable`. |
| `get_prompts` | Return a list of prompts used by this `Runnable`. |
| `__or__` | Runnable "or" operator. |
| `__ror__` | Runnable "reverse-or" operator. |
| `pipe` | Pipe `Runnable` objects. |
| `pick` | Pick keys from the output dict of this `Runnable`. |
| `assign` | Assigns new fields to the dict output of this `Runnable`. |
| `invoke` | Transform a single input into an output. |
| `ainvoke` | Transform a single input into an output. |
| `batch` | Default implementation runs `invoke` in parallel using a thread pool executor. |
| `batch_as_completed` | Run `invoke` in parallel on a list of inputs, yielding results as they complete. |
| `abatch` | Default implementation runs `ainvoke` in parallel using `asyncio.gather`. |
| `abatch_as_completed` | Run `ainvoke` in parallel on a list of inputs, yielding results as they complete. |
| `stream` | Default implementation of `stream`, which calls `invoke`. |
| `astream` | Default implementation of `astream`, which calls `ainvoke`. |
| `astream_log` | Stream all output from a `Runnable`, as reported to the callback system. |
| `astream_events` | Generate a stream of events. |
| `transform` | Transform inputs to outputs. |
| `atransform` | Transform inputs to outputs. |
| `bind` | Bind arguments to a `Runnable`, returning a new `Runnable`. |
| `with_config` | Bind config to a `Runnable`, returning a new `Runnable`. |
| `with_listeners` | Bind lifecycle listeners to a `Runnable`, returning a new `Runnable`. |
| `with_alisteners` | Bind async lifecycle listeners to a `Runnable`, returning a new `Runnable`. |
| `with_types` | Bind input and output types to a `Runnable`, returning a new `Runnable`. |
| `with_retry` | Create a new `Runnable` that retries the original `Runnable` on exceptions. |
| `map` | Return a new `Runnable` that maps a list of inputs to a list of outputs. |
| `with_fallbacks` | Add fallbacks to a `Runnable`, returning a new `Runnable`. |
| `as_tool` | Create a `BaseTool` from a `Runnable`. |
| `is_lc_serializable` | Is this class serializable? |
| `get_lc_namespace` | Get the namespace of the LangChain object. |
| `lc_id` | Return a unique identifier for this class for serialization purposes. |
| `to_json` | Serialize the `Runnable` to JSON. |
| `to_json_not_implemented` | Serialize a "not implemented" object. |
| `configurable_fields` | Configure particular `Runnable` fields at runtime. |
| `configurable_alternatives` | Configure alternatives for `Runnable` objects that can be set at runtime. |
| `set_verbose` | If verbose is `None`, set it. |
| `generate_prompt` | Pass a sequence of prompts to the model and return model generations. |
| `agenerate_prompt` | Asynchronously pass a sequence of prompts and return model generations. |
| `with_structured_output` | Not implemented on this class. |
| `get_token_ids` | Return the ordered IDs of the tokens in a text. |
| `get_num_tokens_from_messages` | Get the number of tokens in the messages. |
| `generate` | Pass a sequence of prompts to a model and return generations. |
| `agenerate` | Asynchronously pass a sequence of prompts to a model and return generations. |
| `__str__` | Return a string representation of the object for printing. |
| `dict` | Return a dictionary of the LLM. |
| `save` | Save the LLM. |
| `__init__` | Needed for arg validation. |
| `validate_environment` | Validates params and passes them to google-generativeai package. |
| `get_num_tokens` | Get the number of tokens present in the text. |
name
class-attribute
instance-attribute
¶
name: str | None = None
The name of the Runnable. Used for debugging and tracing.
input_schema
property
¶
The type of input this Runnable accepts specified as a Pydantic model.
output_schema
property
¶
Output schema.
The type of output this Runnable produces specified as a Pydantic model.
config_specs
property
¶
config_specs: list[ConfigurableFieldSpec]
List configurable fields for this Runnable.
lc_attributes
property
¶
lc_attributes: dict
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
Default is an empty dictionary.
cache
class-attribute
instance-attribute
¶
Whether to cache the response.

- If `True`, will use the global cache.
- If `False`, will not use a cache.
- If `None`, will use the global cache if it's set, otherwise no cache.
- If instance of `BaseCache`, will use the provided cache.

Caching is not currently supported for streaming methods of models.
verbose
class-attribute
instance-attribute
¶
Whether to print out response text.
callbacks
class-attribute
instance-attribute
¶
callbacks: Callbacks = Field(default=None, exclude=True)
Callbacks to add to the run trace.
tags
class-attribute
instance-attribute
¶
Tags to add to the run trace.
metadata
class-attribute
instance-attribute
¶
Metadata to add to the run trace.
custom_get_token_ids
class-attribute
instance-attribute
¶
Optional encoder to use for counting tokens.
google_api_key
class-attribute
instance-attribute
¶
google_api_key: SecretStr | None = Field(
alias="api_key",
default_factory=secret_from_env(["GOOGLE_API_KEY", "GEMINI_API_KEY"], default=None),
)
Google AI API key.
If not specified, will check the env vars GOOGLE_API_KEY and GEMINI_API_KEY with
precedence given to GOOGLE_API_KEY.
credentials
class-attribute
instance-attribute
¶
credentials: Any = None
The default custom credentials to use when making API calls.
If not provided, credentials will be ascertained from the GOOGLE_API_KEY
or GEMINI_API_KEY env vars with precedence given to GOOGLE_API_KEY.
temperature
class-attribute
instance-attribute
¶
temperature: float = 0.7
Run inference with this temperature.
Must be within [0.0, 2.0].
Gemini 3.0+ models
Setting temperature < 1.0 for Gemini 3.0+ models can cause infinite loops,
degraded reasoning performance, and failure on complex tasks.
top_p
class-attribute
instance-attribute
¶
top_p: float | None = None
Decode using nucleus sampling.
Consider the smallest set of tokens whose probability sum is at least top_p.
Must be within [0.0, 1.0].
top_k
class-attribute
instance-attribute
¶
top_k: int | None = None
Decode using top-k sampling: consider the set of top_k most probable tokens.
Must be positive.
max_output_tokens
class-attribute
instance-attribute
¶
Maximum number of tokens to include in a candidate.
Must be greater than zero.
If unset, will use the model's default value, which varies by model.
See docs for model-specific limits.
To constrain the number of thinking tokens to use when generating a response, see
the thinking_budget parameter.
n
class-attribute
instance-attribute
¶
n: int = 1
Number of chat completions to generate for each prompt.
Note that the API may not return the full n completions if duplicates are
generated.
max_retries
class-attribute
instance-attribute
¶
The maximum number of retries to make when generating.
timeout
class-attribute
instance-attribute
¶
The maximum number of seconds to wait for a response.
client_options
class-attribute
instance-attribute
¶
A dictionary of client options to pass to the Google API client.
Example: api_endpoint
Warning
If both client_options['api_endpoint'] and base_url are specified,
the api_endpoint in client_options takes precedence.
base_url
class-attribute
instance-attribute
¶
Base URL to use for the API client.
This is a convenience alias for client_options['api_endpoint'].
- REST transport (`transport="rest"`): Accepts full URLs with paths:
  - `https://api.example.com/v1/path`
  - `https://webhook.site/unique-path`
- gRPC transports (`transport="grpc"` or `transport="grpc_asyncio"`): Only accepts `hostname:port` format:
  - `api.example.com:443`
  - `custom.googleapis.com:443`
  - `https://api.example.com` (auto-formatted to `api.example.com:443`)
  - NOT `https://webhook.site/path` (paths are not supported in gRPC)
  - NOT `api.example.com/path` (paths are not supported in gRPC)

Warning

If `client_options` already contains an `api_endpoint`, this parameter will be ignored in favor of the existing value.
transport
class-attribute
instance-attribute
¶
A string, one of: ['rest', 'grpc', 'grpc_asyncio'].
The Google client library defaults to 'grpc' for sync clients.
For async clients, 'rest' is converted to 'grpc_asyncio' unless
a custom endpoint is specified.
additional_headers
class-attribute
instance-attribute
¶
Key-value dictionary representing additional headers for the model call.
response_modalities
class-attribute
instance-attribute
¶
A list of modalities of the response.
media_resolution
class-attribute
instance-attribute
¶
media_resolution: MediaResolution | None = Field(default=None)
Media resolution for the input media.
May be defined at the individual part level, allowing for mixed-resolution requests (e.g., images and videos of different resolutions in the same request).
May be 'low', 'medium', or 'high'.
Can be set either per-part or globally for all media inputs in the request. To set
globally, set in the generation_config.
Model compatibility
Setting per-part media resolution on requests to Gemini 2.5 models is not supported.
thinking_budget
class-attribute
instance-attribute
¶
Indicates the thinking budget in tokens.
Used to disable thinking for supported models (when set to 0) or to constrain
the number of tokens used for thinking.
Dynamic thinking (allowing the model to decide how many tokens to use) is
enabled when set to -1.
More information, including per-model limits, can be found in the Gemini API docs.
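For example (a minimal sketch; the model name and budget value are illustrative):

from langchain_google_genai import GoogleGenerativeAI

# Cap thinking at 1,024 tokens. Use 0 to disable thinking on supported
# models, or -1 to enable dynamic thinking (the model decides the budget).
llm = GoogleGenerativeAI(model="gemini-2.5-flash", thinking_budget=1024)
print(llm.invoke("How many prime numbers are there below 100?"))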
include_thoughts
class-attribute
instance-attribute
¶
Indicates whether to include thoughts in the response.
Note
This parameter is only applicable for models that support thinking.
This does not disable thinking; to disable thinking, set `thinking_budget` to `0` for supported models. See the `thinking_budget` parameter for more details.
safety_settings
class-attribute
instance-attribute
¶
safety_settings: dict[HarmCategory, HarmBlockThreshold] | None = None
Default safety settings to use for all generations.
Example
from google.generativeai.types.safety_types import HarmBlockThreshold, HarmCategory

safety_settings = {
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
}
get_name
¶
get_input_schema
¶
get_input_schema(config: RunnableConfig | None = None) -> type[BaseModel]
Get a Pydantic model that can be used to validate input to the Runnable.
Runnable objects that leverage the configurable_fields and
configurable_alternatives methods will have a dynamic input schema that
depends on which configuration the Runnable is invoked with.
This method allows you to get an input schema for a specific configuration.

| PARAMETER | DESCRIPTION |
|---|---|
| `config` | A config to use when generating the schema. TYPE: `RunnableConfig \| None` DEFAULT: `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `type[BaseModel]` | A Pydantic model that can be used to validate input. |
get_input_jsonschema
¶
get_input_jsonschema(config: RunnableConfig | None = None) -> dict[str, Any]
Get a JSON schema that represents the input to the Runnable.
| PARAMETER | DESCRIPTION |
|---|---|
| `config` | A config to use when generating the schema. TYPE: `RunnableConfig \| None` DEFAULT: `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `dict[str, Any]` | A JSON schema that represents the input to the `Runnable`. |
Example
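A minimal sketch (the `add_one` function is illustrative):

from langchain_core.runnables import RunnableLambda


def add_one(x: int) -> int:
    return x + 1


runnable = RunnableLambda(add_one)
print(runnable.get_input_jsonschema())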
Added in langchain-core 0.3.0
get_output_schema
¶
get_output_schema(config: RunnableConfig | None = None) -> type[BaseModel]
Get a Pydantic model that can be used to validate output to the Runnable.
Runnable objects that leverage the configurable_fields and
configurable_alternatives methods will have a dynamic output schema that
depends on which configuration the Runnable is invoked with.
This method allows you to get an output schema for a specific configuration.

| PARAMETER | DESCRIPTION |
|---|---|
| `config` | A config to use when generating the schema. TYPE: `RunnableConfig \| None` DEFAULT: `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `type[BaseModel]` | A Pydantic model that can be used to validate output. |
get_output_jsonschema
¶
get_output_jsonschema(config: RunnableConfig | None = None) -> dict[str, Any]
Get a JSON schema that represents the output of the Runnable.
| PARAMETER | DESCRIPTION |
|---|---|
| `config` | A config to use when generating the schema. TYPE: `RunnableConfig \| None` DEFAULT: `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `dict[str, Any]` | A JSON schema that represents the output of the `Runnable`. |
Example
Added in langchain-core 0.3.0
config_schema
¶
The type of config this Runnable accepts specified as a Pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
| PARAMETER | DESCRIPTION |
|---|---|
| `include` | A list of fields to include in the config schema. TYPE: `Sequence[str] \| None` DEFAULT: `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `type[BaseModel]` | A Pydantic model that can be used to validate config. |
get_config_jsonschema
¶
get_graph
¶
get_graph(config: RunnableConfig | None = None) -> Graph
Return a graph representation of this Runnable.
get_prompts
¶
get_prompts(config: RunnableConfig | None = None) -> list[BasePromptTemplate]
Return a list of prompts used by this Runnable.
__or__
¶
__or__(
other: Runnable[Any, Other]
| Callable[[Iterator[Any]], Iterator[Other]]
| Callable[[AsyncIterator[Any]], AsyncIterator[Other]]
| Callable[[Any], Other]
| Mapping[str, Runnable[Any, Other] | Callable[[Any], Other] | Any],
) -> RunnableSerializable[Input, Other]
Runnable "or" operator.
Compose this Runnable with another object to create a
RunnableSequence.
| PARAMETER | DESCRIPTION |
|---|---|
| `other` | Another `Runnable` or `Runnable`-like object (see the signature above for the full accepted union). |

| RETURNS | DESCRIPTION |
|---|---|
| `RunnableSerializable[Input, Other]` | A new `Runnable`. |
__ror__
¶
__ror__(
other: Runnable[Other, Any]
| Callable[[Iterator[Other]], Iterator[Any]]
| Callable[[AsyncIterator[Other]], AsyncIterator[Any]]
| Callable[[Other], Any]
| Mapping[str, Runnable[Other, Any] | Callable[[Other], Any] | Any],
) -> RunnableSerializable[Other, Output]
Runnable "reverse-or" operator.
Compose this Runnable with another object to create a
RunnableSequence.
| PARAMETER | DESCRIPTION |
|---|---|
| `other` | Another `Runnable` or `Runnable`-like object (see the signature above for the full accepted union). |

| RETURNS | DESCRIPTION |
|---|---|
| `RunnableSerializable[Other, Output]` | A new `Runnable`. |
pipe
¶
pipe(
*others: Runnable[Any, Other] | Callable[[Any], Other], name: str | None = None
) -> RunnableSerializable[Input, Other]
Pipe Runnable objects.
Compose this Runnable with Runnable-like objects to make a
RunnableSequence.
Equivalent to RunnableSequence(self, *others) or self | others[0] | ...
Example

from langchain_core.runnables import RunnableLambda


def add_one(x: int) -> int:
    return x + 1


def mul_two(x: int) -> int:
    return x * 2


runnable_1 = RunnableLambda(add_one)
runnable_2 = RunnableLambda(mul_two)
sequence = runnable_1.pipe(runnable_2)
# Or equivalently:
# sequence = runnable_1 | runnable_2
# sequence = RunnableSequence(first=runnable_1, last=runnable_2)

sequence.invoke(1)
await sequence.ainvoke(1)
# -> 4

sequence.batch([1, 2, 3])
await sequence.abatch([1, 2, 3])
# -> [4, 6, 8]
| PARAMETER | DESCRIPTION |
|---|---|
| `*others` | Other `Runnable` or `Runnable`-like objects to compose with this one. TYPE: `Runnable[Any, Other] \| Callable[[Any], Other]` |
| `name` | An optional name for the resulting `RunnableSequence`. TYPE: `str \| None` DEFAULT: `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `RunnableSerializable[Input, Other]` | A new `Runnable` sequence. |
pick
¶
Pick keys from the output dict of this Runnable.
Pick a single key

import json

from langchain_core.runnables import RunnableLambda, RunnableMap

as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)
chain = RunnableMap(str=as_str, json=as_json)

chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3]}

json_only_chain = chain.pick("json")
json_only_chain.invoke("[1, 2, 3]")
# -> [1, 2, 3]

Pick a list of keys

import json
from typing import Any

from langchain_core.runnables import RunnableLambda, RunnableMap

as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)


def as_bytes(x: Any) -> bytes:
    return bytes(x, "utf-8")


chain = RunnableMap(
    str=as_str, json=as_json, bytes=RunnableLambda(as_bytes)
)

chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3], "bytes": b"[1, 2, 3]"}

json_and_bytes_chain = chain.pick(["json", "bytes"])
json_and_bytes_chain.invoke("[1, 2, 3]")
# -> {"json": [1, 2, 3], "bytes": b"[1, 2, 3]"}
| PARAMETER | DESCRIPTION |
|---|---|
| `keys` | A key or list of keys to pick from the output dict. TYPE: `str \| list[str]` |

| RETURNS | DESCRIPTION |
|---|---|
| `RunnableSerializable[Any, Any]` | A new `Runnable` that picks the given keys. |
assign
¶
assign(
**kwargs: Runnable[dict[str, Any], Any]
| Callable[[dict[str, Any]], Any]
| Mapping[str, Runnable[dict[str, Any], Any] | Callable[[dict[str, Any]], Any]],
) -> RunnableSerializable[Any, Any]
Assigns new fields to the dict output of this Runnable.
from operator import itemgetter

from langchain_core.language_models.fake import FakeStreamingListLLM
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import SystemMessagePromptTemplate
from langchain_core.runnables import Runnable

prompt = (
    SystemMessagePromptTemplate.from_template("You are a nice assistant.")
    + "{question}"
)
model = FakeStreamingListLLM(responses=["foo-lish"])
chain: Runnable = prompt | model | {"str": StrOutputParser()}
chain_with_assign = chain.assign(hello=itemgetter("str") | model)

print(chain_with_assign.input_schema.model_json_schema())
# {'title': 'PromptInput', 'type': 'object', 'properties':
#  {'question': {'title': 'Question', 'type': 'string'}}}
print(chain_with_assign.output_schema.model_json_schema())
# {'title': 'RunnableSequenceOutput', 'type': 'object', 'properties':
#  {'str': {'title': 'Str', 'type': 'string'},
#   'hello': {'title': 'Hello', 'type': 'string'}}}
| PARAMETER | DESCRIPTION |
|---|---|
| `**kwargs` | A mapping of keys to `Runnable` or `Runnable`-like objects that will be invoked with the entire output dict of this `Runnable`. |

| RETURNS | DESCRIPTION |
|---|---|
| `RunnableSerializable[Any, Any]` | A new `Runnable`. |
invoke
¶
invoke(
input: LanguageModelInput,
config: RunnableConfig | None = None,
*,
stop: list[str] | None = None,
**kwargs: Any,
) -> str
Transform a single input into an output.
| PARAMETER | DESCRIPTION |
|---|---|
| `input` | The input to the `Runnable`. TYPE: `LanguageModelInput` |
| `config` | A config to use when invoking the `Runnable`. The config supports standard keys like `'tags'` and `'metadata'` for tracing purposes, `'max_concurrency'` for controlling how much work to do in parallel, and other keys. Please refer to `RunnableConfig` for more details. TYPE: `RunnableConfig \| None` DEFAULT: `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `Output` | The output of the `Runnable`. |
ainvoke
async
¶
ainvoke(
input: LanguageModelInput,
config: RunnableConfig | None = None,
*,
stop: list[str] | None = None,
**kwargs: Any,
) -> str
Transform a single input into an output.
| PARAMETER | DESCRIPTION |
|---|---|
| `input` | The input to the `Runnable`. TYPE: `LanguageModelInput` |
| `config` | A config to use when invoking the `Runnable`. The config supports standard keys like `'tags'` and `'metadata'` for tracing purposes, `'max_concurrency'` for controlling how much work to do in parallel, and other keys. Please refer to `RunnableConfig` for more details. TYPE: `RunnableConfig \| None` DEFAULT: `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `Output` | The output of the `Runnable`. |
batch
¶
batch(
inputs: list[LanguageModelInput],
config: RunnableConfig | list[RunnableConfig] | None = None,
*,
return_exceptions: bool = False,
**kwargs: Any,
) -> list[str]
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses must override this method if they can batch more efficiently;
e.g., if the underlying Runnable uses an API which supports a batch mode.
| PARAMETER | DESCRIPTION |
|---|---|
| `inputs` | A list of inputs to the `Runnable`. TYPE: `list[LanguageModelInput]` |
| `config` | A config to use when invoking the `Runnable`. The config supports standard keys like `'tags'` and `'metadata'` for tracing purposes, `'max_concurrency'` for controlling how much work to do in parallel, and other keys. Please refer to `RunnableConfig` for more details. TYPE: `RunnableConfig \| list[RunnableConfig] \| None` DEFAULT: `None` |
| `return_exceptions` | Whether to return exceptions instead of raising them. TYPE: `bool` DEFAULT: `False` |
| `**kwargs` | Additional keyword arguments to pass to the `Runnable`. TYPE: `Any` |

| RETURNS | DESCRIPTION |
|---|---|
| `list[Output]` | A list of outputs from the `Runnable`. |
batch_as_completed
¶
batch_as_completed(
inputs: Sequence[Input],
config: RunnableConfig | Sequence[RunnableConfig] | None = None,
*,
return_exceptions: bool = False,
**kwargs: Any | None,
) -> Iterator[tuple[int, Output | Exception]]
Run invoke in parallel on a list of inputs.
Yields results as they complete.
| PARAMETER | DESCRIPTION |
|---|---|
| `inputs` | A list of inputs to the `Runnable`. TYPE: `Sequence[Input]` |
| `config` | A config to use when invoking the `Runnable`. The config supports standard keys like `'tags'` and `'metadata'` for tracing purposes, `'max_concurrency'` for controlling how much work to do in parallel, and other keys. Please refer to `RunnableConfig` for more details. TYPE: `RunnableConfig \| Sequence[RunnableConfig] \| None` DEFAULT: `None` |
| `return_exceptions` | Whether to return exceptions instead of raising them. TYPE: `bool` DEFAULT: `False` |
| `**kwargs` | Additional keyword arguments to pass to the `Runnable`. TYPE: `Any \| None` |

| YIELDS | DESCRIPTION |
|---|---|
| `tuple[int, Output \| Exception]` | Tuples of the index of the input and the output from the `Runnable`. |
abatch
async
¶
abatch(
inputs: list[LanguageModelInput],
config: RunnableConfig | list[RunnableConfig] | None = None,
*,
return_exceptions: bool = False,
**kwargs: Any,
) -> list[str]
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses must override this method if they can batch more efficiently;
e.g., if the underlying Runnable uses an API which supports a batch mode.
| PARAMETER | DESCRIPTION |
|---|---|
| `inputs` | A list of inputs to the `Runnable`. TYPE: `list[LanguageModelInput]` |
| `config` | A config to use when invoking the `Runnable`. The config supports standard keys like `'tags'` and `'metadata'` for tracing purposes, `'max_concurrency'` for controlling how much work to do in parallel, and other keys. Please refer to `RunnableConfig` for more details. TYPE: `RunnableConfig \| list[RunnableConfig] \| None` DEFAULT: `None` |
| `return_exceptions` | Whether to return exceptions instead of raising them. TYPE: `bool` DEFAULT: `False` |
| `**kwargs` | Additional keyword arguments to pass to the `Runnable`. TYPE: `Any` |

| RETURNS | DESCRIPTION |
|---|---|
| `list[Output]` | A list of outputs from the `Runnable`. |
abatch_as_completed
async
¶
abatch_as_completed(
inputs: Sequence[Input],
config: RunnableConfig | Sequence[RunnableConfig] | None = None,
*,
return_exceptions: bool = False,
**kwargs: Any | None,
) -> AsyncIterator[tuple[int, Output | Exception]]
Run ainvoke in parallel on a list of inputs.
Yields results as they complete.
| PARAMETER | DESCRIPTION |
|---|---|
| `inputs` | A list of inputs to the `Runnable`. TYPE: `Sequence[Input]` |
| `config` | A config to use when invoking the `Runnable`. The config supports standard keys like `'tags'` and `'metadata'` for tracing purposes, `'max_concurrency'` for controlling how much work to do in parallel, and other keys. Please refer to `RunnableConfig` for more details. TYPE: `RunnableConfig \| Sequence[RunnableConfig] \| None` DEFAULT: `None` |
| `return_exceptions` | Whether to return exceptions instead of raising them. TYPE: `bool` DEFAULT: `False` |
| `**kwargs` | Additional keyword arguments to pass to the `Runnable`. TYPE: `Any \| None` |

| YIELDS | DESCRIPTION |
|---|---|
| `AsyncIterator[tuple[int, Output \| Exception]]` | A tuple of the index of the input and the output from the `Runnable`. |
stream
¶
stream(
input: LanguageModelInput,
config: RunnableConfig | None = None,
*,
stop: list[str] | None = None,
**kwargs: Any,
) -> Iterator[str]
Default implementation of stream, which calls invoke.
Subclasses must override this method if they support streaming output.
| PARAMETER | DESCRIPTION |
|---|---|
| `input` | The input to the `Runnable`. TYPE: `LanguageModelInput` |
| `config` | The config to use for the `Runnable`. TYPE: `RunnableConfig \| None` DEFAULT: `None` |
| `**kwargs` | Additional keyword arguments to pass to the `Runnable`. TYPE: `Any` |

| YIELDS | DESCRIPTION |
|---|---|
| `Output` | The output of the `Runnable`. |
astream
async
¶
astream(
input: LanguageModelInput,
config: RunnableConfig | None = None,
*,
stop: list[str] | None = None,
**kwargs: Any,
) -> AsyncIterator[str]
Default implementation of astream, which calls ainvoke.
Subclasses must override this method if they support streaming output.
| PARAMETER | DESCRIPTION |
|---|---|
| `input` | The input to the `Runnable`. TYPE: `LanguageModelInput` |
| `config` | The config to use for the `Runnable`. TYPE: `RunnableConfig \| None` DEFAULT: `None` |
| `**kwargs` | Additional keyword arguments to pass to the `Runnable`. TYPE: `Any` |

| YIELDS | DESCRIPTION |
|---|---|
| `AsyncIterator[Output]` | The output of the `Runnable`. |
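The async counterpart, a minimal sketch:

```python
import asyncio

from langchain_google_genai import GoogleGenerativeAI

llm = GoogleGenerativeAI(model="gemini-2.5-pro")

async def main() -> None:
    async for chunk in llm.astream("Name three constellations."):
        print(chunk, end="", flush=True)

asyncio.run(main())
```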
astream_log
async
¶
astream_log(
input: Any,
config: RunnableConfig | None = None,
*,
diff: bool = True,
with_streamed_output_list: bool = True,
include_names: Sequence[str] | None = None,
include_types: Sequence[str] | None = None,
include_tags: Sequence[str] | None = None,
exclude_names: Sequence[str] | None = None,
exclude_types: Sequence[str] | None = None,
exclude_tags: Sequence[str] | None = None,
**kwargs: Any,
) -> AsyncIterator[RunLogPatch] | AsyncIterator[RunLog]
Stream all output from a Runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of Jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run.
The Jsonpatch ops can be applied in order to construct state.
| PARAMETER | DESCRIPTION |
|---|---|
| `input` | The input to the `Runnable`. TYPE: `Any` |
| `config` | The config to use for the `Runnable`. TYPE: `RunnableConfig \| None` DEFAULT: `None` |
| `diff` | Whether to yield diffs between each step or the current state. TYPE: `bool` DEFAULT: `True` |
| `with_streamed_output_list` | Whether to yield the `streamed_output` list. TYPE: `bool` DEFAULT: `True` |
| `include_names` | Only include logs with these names. |
| `include_types` | Only include logs with these types. |
| `include_tags` | Only include logs with these tags. |
| `exclude_names` | Exclude logs with these names. |
| `exclude_types` | Exclude logs with these types. |
| `exclude_tags` | Exclude logs with these tags. |
| `**kwargs` | Additional keyword arguments to pass to the `Runnable`. TYPE: `Any` |

| YIELDS | DESCRIPTION |
|---|---|
| `AsyncIterator[RunLogPatch] \| AsyncIterator[RunLog]` | A `RunLogPatch` or `RunLog` object, depending on `diff`. |
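A minimal sketch of consuming the log stream; with `diff=True` (the default), each item is a `RunLogPatch` of JSONPatch operations:

```python
import asyncio

from langchain_google_genai import GoogleGenerativeAI

llm = GoogleGenerativeAI(model="gemini-2.5-pro")

async def main() -> None:
    async for patch in llm.astream_log("Tell me a fact about otters."):
        print(patch)

asyncio.run(main())
```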
astream_events
async
¶
astream_events(
input: Any,
config: RunnableConfig | None = None,
*,
version: Literal["v1", "v2"] = "v2",
include_names: Sequence[str] | None = None,
include_types: Sequence[str] | None = None,
include_tags: Sequence[str] | None = None,
exclude_names: Sequence[str] | None = None,
exclude_types: Sequence[str] | None = None,
exclude_tags: Sequence[str] | None = None,
**kwargs: Any,
) -> AsyncIterator[StreamEvent]
Generate a stream of events.
Use this to create an iterator over `StreamEvent` objects that provides real-time information about the progress of the `Runnable`, including `StreamEvent` objects from intermediate results.
A StreamEvent is a dictionary with the following schema:
- `event`: Event names are of the format: `on_[runnable_type]_(start|stream|end)`.
- `name`: The name of the `Runnable` that generated the event.
- `run_id`: Randomly generated ID associated with the given execution of the `Runnable` that emitted the event. A child `Runnable` that gets invoked as part of the execution of a parent `Runnable` is assigned its own unique ID.
- `parent_ids`: The IDs of the parent runnables that generated the event. The root `Runnable` will have an empty list. The order of the parent IDs is from the root to the immediate parent. Only available for v2 version of the API. The v1 version of the API will return an empty list.
- `tags`: The tags of the `Runnable` that generated the event.
- `metadata`: The metadata of the `Runnable` that generated the event.
- `data`: The data associated with the event. The contents of this field depend on the type of event. See the table below for more details.
Below is a table that illustrates some events that might be emitted by various chains. Metadata fields have been omitted from the table for brevity. Chain definitions have been included after the table.
Note
This reference table is for the v2 version of the schema.
| event | name | chunk | input | output |
|---|---|---|---|---|
| `on_chat_model_start` | `'[model name]'` | | `{"messages": [[SystemMessage, HumanMessage]]}` | |
| `on_chat_model_stream` | `'[model name]'` | `AIMessageChunk(content="hello")` | | |
| `on_chat_model_end` | `'[model name]'` | | `{"messages": [[SystemMessage, HumanMessage]]}` | `AIMessageChunk(content="hello world")` |
| `on_llm_start` | `'[model name]'` | | `{'input': 'hello'}` | |
| `on_llm_stream` | `'[model name]'` | `'Hello'` | | |
| `on_llm_end` | `'[model name]'` | | | `'Hello human!'` |
| `on_chain_start` | `'format_docs'` | | | |
| `on_chain_stream` | `'format_docs'` | `'hello world!, goodbye world!'` | | |
| `on_chain_end` | `'format_docs'` | | `[Document(...)]` | `'hello world!, goodbye world!'` |
| `on_tool_start` | `'some_tool'` | | `{"x": 1, "y": "2"}` | |
| `on_tool_end` | `'some_tool'` | | | `{"x": 1, "y": "2"}` |
| `on_retriever_start` | `'[retriever name]'` | | `{"query": "hello"}` | |
| `on_retriever_end` | `'[retriever name]'` | | `{"query": "hello"}` | `[Document(...), ...]` |
| `on_prompt_start` | `'[template_name]'` | | `{"question": "hello"}` | |
| `on_prompt_end` | `'[template_name]'` | | `{"question": "hello"}` | `ChatPromptValue(messages: [SystemMessage, ...])` |
In addition to the standard events, users can also dispatch custom events (see example below).
Custom events will only be surfaced in the v2 version of the API!
A custom event has the following format:
| Attribute | Type | Description |
|---|---|---|
| `name` | `str` | A user defined name for the event. |
| `data` | `Any` | The data associated with the event. This can be anything, though we suggest making it JSON serializable. |
Here are declarations associated with the standard events shown above:
format_docs:

def format_docs(docs: list[Document]) -> str:
    '''Format the docs.'''
    return ", ".join([doc.page_content for doc in docs])

format_docs = RunnableLambda(format_docs)
some_tool:
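A minimal sketch of a `some_tool` declaration consistent with the `on_tool_start` and `on_tool_end` rows in the table above (hypothetical; any tool taking `x` and `y` would do):

```python
from langchain_core.tools import tool

@tool
def some_tool(x: int, y: str) -> dict:
    """A tool that echoes its arguments."""
    return {"x": x, "y": y}
```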
prompt:

template = ChatPromptTemplate.from_messages(
    [
        ("system", "You are Cat Agent 007"),
        ("human", "{question}"),
    ]
).with_config({"run_name": "my_template", "tags": ["my_template"]})
Example
from langchain_core.runnables import RunnableLambda

async def reverse(s: str) -> str:
    return s[::-1]

chain = RunnableLambda(func=reverse)

events = [
    event async for event in chain.astream_events("hello", version="v2")
]

# Will produce the following events
# (run_id and parent_ids have been omitted for brevity):
[
    {
        "data": {"input": "hello"},
        "event": "on_chain_start",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"chunk": "olleh"},
        "event": "on_chain_stream",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"output": "olleh"},
        "event": "on_chain_end",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
]
from langchain_core.callbacks.manager import (
    adispatch_custom_event,
)
from langchain_core.runnables import RunnableLambda, RunnableConfig
import asyncio

async def slow_thing(some_input: str, config: RunnableConfig) -> str:
    """Do something that takes a long time."""
    await asyncio.sleep(1)  # Placeholder for some slow operation
    await adispatch_custom_event(
        "progress_event",
        {"message": "Finished step 1 of 3"},
        config=config,  # Must be included for python < 3.10
    )
    await asyncio.sleep(1)  # Placeholder for some slow operation
    await adispatch_custom_event(
        "progress_event",
        {"message": "Finished step 2 of 3"},
        config=config,  # Must be included for python < 3.10
    )
    await asyncio.sleep(1)  # Placeholder for some slow operation
    return "Done"

slow_thing = RunnableLambda(slow_thing)

async for event in slow_thing.astream_events("some_input", version="v2"):
    print(event)
| PARAMETER | DESCRIPTION |
|---|---|
| `input` | The input to the `Runnable`. TYPE: `Any` |
| `config` | The config to use for the `Runnable`. TYPE: `RunnableConfig \| None` DEFAULT: `None` |
| `version` | The version of the schema to use, either `'v2'` or `'v1'`. Users should use `'v2'`; `'v1'` is for backwards compatibility. No default will be assigned until the API is stabilized. Custom events will only be surfaced in `'v2'`. TYPE: `Literal['v1', 'v2']` DEFAULT: `'v2'` |
| `include_names` | Only include events from runnables with matching names. |
| `include_types` | Only include events from runnables with matching types. |
| `include_tags` | Only include events from runnables with matching tags. |
| `exclude_names` | Exclude events from runnables with matching names. |
| `exclude_types` | Exclude events from runnables with matching types. |
| `exclude_tags` | Exclude events from runnables with matching tags. |
| `**kwargs` | Additional keyword arguments to pass to the `Runnable`. These will be passed to `astream_log`, as this implementation of `astream_events` is built on top of `astream_log`. TYPE: `Any` |

| YIELDS | DESCRIPTION |
|---|---|
| `AsyncIterator[StreamEvent]` | An async stream of `StreamEvent` objects. |

| RAISES | DESCRIPTION |
|---|---|
| `NotImplementedError` | If the version is not `'v1'` or `'v2'`. |
transform
¶
transform(
input: Iterator[Input], config: RunnableConfig | None = None, **kwargs: Any | None
) -> Iterator[Output]
Transform inputs to outputs.
Default implementation of transform, which buffers input and calls stream.
Subclasses must override this method if they can start producing output while input is still being generated.
| PARAMETER | DESCRIPTION |
|---|---|
| `input` | An iterator of inputs to the `Runnable`. TYPE: `Iterator[Input]` |
| `config` | The config to use for the `Runnable`. TYPE: `RunnableConfig \| None` DEFAULT: `None` |
| `**kwargs` | Additional keyword arguments to pass to the `Runnable`. TYPE: `Any \| None` |

| YIELDS | DESCRIPTION |
|---|---|
| `Output` | The output of the `Runnable`. |
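A minimal sketch, assuming a valid `GOOGLE_API_KEY`; for this LLM class the input iterator is buffered into a single prompt before output is streamed:

```python
from langchain_google_genai import GoogleGenerativeAI

llm = GoogleGenerativeAI(model="gemini-2.5-pro")

def fragments():
    yield "Write one sentence "
    yield "about the moon."

for chunk in llm.transform(fragments()):
    print(chunk, end="", flush=True)
```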
atransform
async
¶
atransform(
input: AsyncIterator[Input],
config: RunnableConfig | None = None,
**kwargs: Any | None,
) -> AsyncIterator[Output]
Transform inputs to outputs.
Default implementation of atransform, which buffers input and calls astream.
Subclasses must override this method if they can start producing output while input is still being generated.
| PARAMETER | DESCRIPTION |
|---|---|
| `input` | An async iterator of inputs to the `Runnable`. TYPE: `AsyncIterator[Input]` |
| `config` | The config to use for the `Runnable`. TYPE: `RunnableConfig \| None` DEFAULT: `None` |
| `**kwargs` | Additional keyword arguments to pass to the `Runnable`. TYPE: `Any \| None` |

| YIELDS | DESCRIPTION |
|---|---|
| `AsyncIterator[Output]` | The output of the `Runnable`. |
bind
¶
Bind arguments to a Runnable, returning a new Runnable.
Useful when a Runnable in a chain requires an argument that is not
in the output of the previous Runnable or included in the user input.
| PARAMETER | DESCRIPTION |
|---|---|
| `**kwargs` | The arguments to bind to the `Runnable`. TYPE: `Any` |

| RETURNS | DESCRIPTION |
|---|---|
| `Runnable[Input, Output]` | A new `Runnable` with the arguments bound. |
Example
from langchain_ollama import ChatOllama
from langchain_core.output_parsers import StrOutputParser
model = ChatOllama(model="llama3.1")
# Without bind
chain = model | StrOutputParser()
chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two three four five.'
# With bind
chain = model.bind(stop=["three"]) | StrOutputParser()
chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two'
with_config
¶
with_config(
config: RunnableConfig | None = None, **kwargs: Any
) -> Runnable[Input, Output]
Bind config to a Runnable, returning a new Runnable.
| PARAMETER | DESCRIPTION |
|---|---|
| `config` | The config to bind to the `Runnable`. TYPE: `RunnableConfig \| None` DEFAULT: `None` |
| `**kwargs` | Additional keyword arguments to pass to the `Runnable`. TYPE: `Any` |

| RETURNS | DESCRIPTION |
|---|---|
| `Runnable[Input, Output]` | A new `Runnable` with the config bound. |
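For instance, a minimal sketch that pins tracing metadata onto every call, assuming a valid `GOOGLE_API_KEY`:

```python
from langchain_google_genai import GoogleGenerativeAI

llm = GoogleGenerativeAI(model="gemini-2.5-pro")

# Every invocation of tagged_llm now carries these tags and run name.
tagged_llm = llm.with_config({"tags": ["demo"], "run_name": "demo-run"})
print(tagged_llm.invoke("Say hi."))
```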
with_listeners
¶
with_listeners(
*,
on_start: Callable[[Run], None]
| Callable[[Run, RunnableConfig], None]
| None = None,
on_end: Callable[[Run], None] | Callable[[Run, RunnableConfig], None] | None = None,
on_error: Callable[[Run], None]
| Callable[[Run, RunnableConfig], None]
| None = None,
) -> Runnable[Input, Output]
Bind lifecycle listeners to a Runnable, returning a new Runnable.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and
any tags or metadata added to the run.
| PARAMETER | DESCRIPTION |
|---|---|
| `on_start` | Called before the `Runnable` starts running, with the `Run` object. TYPE: `Callable[[Run], None] \| Callable[[Run, RunnableConfig], None] \| None` DEFAULT: `None` |
| `on_end` | Called after the `Runnable` finishes running, with the `Run` object. TYPE: `Callable[[Run], None] \| Callable[[Run, RunnableConfig], None] \| None` DEFAULT: `None` |
| `on_error` | Called if the `Runnable` throws an error, with the `Run` object. TYPE: `Callable[[Run], None] \| Callable[[Run, RunnableConfig], None] \| None` DEFAULT: `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `Runnable[Input, Output]` | A new `Runnable` with the listeners bound. |
Example
from langchain_core.runnables import RunnableLambda
from langchain_core.tracers.schemas import Run

import time

def test_runnable(time_to_sleep: int):
    time.sleep(time_to_sleep)

def fn_start(run_obj: Run):
    print("start_time:", run_obj.start_time)

def fn_end(run_obj: Run):
    print("end_time:", run_obj.end_time)

chain = RunnableLambda(test_runnable).with_listeners(
    on_start=fn_start, on_end=fn_end
)
chain.invoke(2)
with_alisteners
¶
with_alisteners(
*,
on_start: AsyncListener | None = None,
on_end: AsyncListener | None = None,
on_error: AsyncListener | None = None,
) -> Runnable[Input, Output]
Bind async lifecycle listeners to a Runnable.
Returns a new Runnable.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and
any tags or metadata added to the run.
| PARAMETER | DESCRIPTION |
|---|---|
| `on_start` | Called asynchronously before the `Runnable` starts running, with the `Run` object. TYPE: `AsyncListener \| None` DEFAULT: `None` |
| `on_end` | Called asynchronously after the `Runnable` finishes running, with the `Run` object. TYPE: `AsyncListener \| None` DEFAULT: `None` |
| `on_error` | Called asynchronously if the `Runnable` throws an error, with the `Run` object. TYPE: `AsyncListener \| None` DEFAULT: `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `Runnable[Input, Output]` | A new `Runnable` with the listeners bound. |
Example
from langchain_core.runnables import RunnableLambda, Runnable
from datetime import datetime, timezone
import time
import asyncio

def format_t(timestamp: float) -> str:
    return datetime.fromtimestamp(timestamp, tz=timezone.utc).isoformat()

async def test_runnable(time_to_sleep: int):
    print(f"Runnable[{time_to_sleep}s]: starts at {format_t(time.time())}")
    await asyncio.sleep(time_to_sleep)
    print(f"Runnable[{time_to_sleep}s]: ends at {format_t(time.time())}")

async def fn_start(run_obj: Runnable):
    print(f"on start callback starts at {format_t(time.time())}")
    await asyncio.sleep(3)
    print(f"on start callback ends at {format_t(time.time())}")

async def fn_end(run_obj: Runnable):
    print(f"on end callback starts at {format_t(time.time())}")
    await asyncio.sleep(2)
    print(f"on end callback ends at {format_t(time.time())}")

runnable = RunnableLambda(test_runnable).with_alisteners(
    on_start=fn_start, on_end=fn_end
)

async def concurrent_runs():
    await asyncio.gather(runnable.ainvoke(2), runnable.ainvoke(3))

asyncio.run(concurrent_runs())

# Result:
# on start callback starts at 2025-03-01T07:05:22.875378+00:00
# on start callback starts at 2025-03-01T07:05:22.875495+00:00
# on start callback ends at 2025-03-01T07:05:25.878862+00:00
# on start callback ends at 2025-03-01T07:05:25.878947+00:00
# Runnable[2s]: starts at 2025-03-01T07:05:25.879392+00:00
# Runnable[3s]: starts at 2025-03-01T07:05:25.879804+00:00
# Runnable[2s]: ends at 2025-03-01T07:05:27.881998+00:00
# on end callback starts at 2025-03-01T07:05:27.882360+00:00
# Runnable[3s]: ends at 2025-03-01T07:05:28.881737+00:00
# on end callback starts at 2025-03-01T07:05:28.882428+00:00
# on end callback ends at 2025-03-01T07:05:29.883893+00:00
# on end callback ends at 2025-03-01T07:05:30.884831+00:00
with_types
¶
with_types(
*, input_type: type[Input] | None = None, output_type: type[Output] | None = None
) -> Runnable[Input, Output]
Bind input and output types to a Runnable, returning a new Runnable.
| PARAMETER | DESCRIPTION |
|---|---|
| `input_type` | The input type to bind to the `Runnable`. TYPE: `type[Input] \| None` DEFAULT: `None` |
| `output_type` | The output type to bind to the `Runnable`. TYPE: `type[Output] \| None` DEFAULT: `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `Runnable[Input, Output]` | A new `Runnable` with the types bound. |
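A minimal sketch; the bound types only affect schema inference for the `Runnable`, not its runtime behavior:

```python
from langchain_core.runnables import RunnableLambda

shout = RunnableLambda(lambda s: s.upper()).with_types(
    input_type=str, output_type=str
)
print(shout.invoke("hello"))  # HELLO
```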
with_retry
¶
with_retry(
*,
retry_if_exception_type: tuple[type[BaseException], ...] = (Exception,),
wait_exponential_jitter: bool = True,
exponential_jitter_params: ExponentialJitterParams | None = None,
stop_after_attempt: int = 3,
) -> Runnable[Input, Output]
Create a new Runnable that retries the original Runnable on exceptions.
| PARAMETER | DESCRIPTION |
|---|---|
| `retry_if_exception_type` | A tuple of exception types to retry on. TYPE: `tuple[type[BaseException], ...]` DEFAULT: `(Exception,)` |
| `wait_exponential_jitter` | Whether to add jitter to the wait time between retries. TYPE: `bool` DEFAULT: `True` |
| `stop_after_attempt` | The maximum number of attempts to make before giving up. TYPE: `int` DEFAULT: `3` |
| `exponential_jitter_params` | Parameters for `tenacity.wait_exponential_jitter`. Namely: `initial`, `max`, `exp_base`, and `jitter` (all float values). TYPE: `ExponentialJitterParams \| None` DEFAULT: `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `Runnable[Input, Output]` | A new `Runnable` that retries the original `Runnable` on exceptions. |
Example
from langchain_core.runnables import RunnableLambda

count = 0

def _lambda(x: int) -> None:
    global count
    count = count + 1
    if x == 1:
        raise ValueError("x is 1")
    else:
        pass

runnable = RunnableLambda(_lambda)
try:
    runnable.with_retry(
        stop_after_attempt=2,
        retry_if_exception_type=(ValueError,),
    ).invoke(1)
except ValueError:
    pass

assert count == 2
map
¶
with_fallbacks
¶
with_fallbacks(
fallbacks: Sequence[Runnable[Input, Output]],
*,
exceptions_to_handle: tuple[type[BaseException], ...] = (Exception,),
exception_key: str | None = None,
) -> RunnableWithFallbacks[Input, Output]
Add fallbacks to a Runnable, returning a new Runnable.
The new Runnable will try the original Runnable, and then each fallback
in order, upon failures.
| PARAMETER | DESCRIPTION |
|---|---|
| `fallbacks` | A sequence of runnables to try if the original `Runnable` fails. TYPE: `Sequence[Runnable[Input, Output]]` |
| `exceptions_to_handle` | A tuple of exception types to handle. TYPE: `tuple[type[BaseException], ...]` DEFAULT: `(Exception,)` |
| `exception_key` | If a string is specified, handled exceptions will be passed to fallbacks as part of the input under the specified key. If `None`, exceptions will not be passed to fallbacks. If used, the base `Runnable` and its fallbacks must accept a dictionary as input. TYPE: `str \| None` DEFAULT: `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `RunnableWithFallbacks[Input, Output]` | A new `Runnable` that will try the original `Runnable`, and then each fallback in order, upon failures. |
Example
from typing import Iterator

from langchain_core.runnables import RunnableGenerator

def _generate_immediate_error(input: Iterator) -> Iterator[str]:
    raise ValueError()
    yield ""

def _generate(input: Iterator) -> Iterator[str]:
    yield from "foo bar"

runnable = RunnableGenerator(_generate_immediate_error).with_fallbacks(
    [RunnableGenerator(_generate)]
)
print("".join(runnable.stream({})))  # foo bar
as_tool
¶
as_tool(
args_schema: type[BaseModel] | None = None,
*,
name: str | None = None,
description: str | None = None,
arg_types: dict[str, type] | None = None,
) -> BaseTool
Create a BaseTool from a Runnable.
as_tool will instantiate a BaseTool with a name, description, and
args_schema from a Runnable. Where possible, schemas are inferred
from runnable.get_input_schema.
Alternatively (e.g., if the Runnable takes a dict as input and the specific
dict keys are not typed), the schema can be specified directly with
args_schema.
You can also pass arg_types to just specify the required arguments and their
types.
| PARAMETER | DESCRIPTION |
|---|---|
| `args_schema` | The schema for the tool. TYPE: `type[BaseModel] \| None` DEFAULT: `None` |
| `name` | The name of the tool. TYPE: `str \| None` DEFAULT: `None` |
| `description` | The description of the tool. TYPE: `str \| None` DEFAULT: `None` |
| `arg_types` | A dictionary of argument names to types. TYPE: `dict[str, type] \| None` DEFAULT: `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `BaseTool` | A `BaseTool` instance. |
TypedDict input
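A minimal sketch of this variant, where the schema is inferred from the `TypedDict` annotation (the names here are illustrative):

```python
from typing_extensions import TypedDict

from langchain_core.runnables import RunnableLambda

class Args(TypedDict):
    a: int
    b: list[int]

def f(x: Args) -> str:
    return str(x["a"] * max(x["b"]))

# The args schema is inferred from the Args annotation on f.
as_tool = RunnableLambda(f).as_tool()
as_tool.invoke({"a": 3, "b": [1, 2]})
```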
dict input, specifying schema via args_schema
from typing import Any
from pydantic import BaseModel, Field
from langchain_core.runnables import RunnableLambda

def f(x: dict[str, Any]) -> str:
    return str(x["a"] * max(x["b"]))

class FSchema(BaseModel):
    """Apply a function to an integer and list of integers."""

    a: int = Field(..., description="Integer")
    b: list[int] = Field(..., description="List of ints")

runnable = RunnableLambda(f)
as_tool = runnable.as_tool(FSchema)
as_tool.invoke({"a": 3, "b": [1, 2]})
dict input, specifying schema via arg_types
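A minimal sketch of this variant, supplying only argument names and types (the names here are illustrative):

```python
from typing import Any

from langchain_core.runnables import RunnableLambda

def g(x: dict[str, Any]) -> str:
    return str(x["a"] * max(x["b"]))

# arg_types builds the args schema without a full pydantic model.
as_tool = RunnableLambda(g).as_tool(arg_types={"a": int, "b": list[int]})
as_tool.invoke({"a": 3, "b": [1, 2]})
```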
is_lc_serializable
classmethod
¶
is_lc_serializable() -> bool
Is this class serializable?
By design, even if a class inherits from Serializable, it is not serializable
by default. This is to prevent accidental serialization of objects that should
not be serialized.
| RETURNS | DESCRIPTION |
|---|---|
| `bool` | Whether the class is serializable. Default is `False`. |
get_lc_namespace
classmethod
¶
lc_id
classmethod
¶
Return a unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path to the object.
For example, for the class langchain.llms.openai.OpenAI, the id is
["langchain", "llms", "openai", "OpenAI"].
to_json
¶
Serialize the Runnable to JSON.
| RETURNS | DESCRIPTION |
|---|---|
| `SerializedConstructor \| SerializedNotImplemented` | A JSON-serializable representation of the `Runnable`. |
to_json_not_implemented
¶
Serialize a "not implemented" object.
| RETURNS | DESCRIPTION |
|---|---|
| `SerializedNotImplemented` | A `SerializedNotImplemented` instance. |
configurable_fields
¶
configurable_fields(
**kwargs: AnyConfigurableField,
) -> RunnableSerializable[Input, Output]
Configure particular Runnable fields at runtime.
| PARAMETER | DESCRIPTION |
|---|---|
| `**kwargs` | A dictionary of `ConfigurableField` instances to configure. TYPE: `AnyConfigurableField` |

| RAISES | DESCRIPTION |
|---|---|
| `ValueError` | If a configuration key is not found in the `Runnable`. |

| RETURNS | DESCRIPTION |
|---|---|
| `RunnableSerializable[Input, Output]` | A new `Runnable` with the fields configured. |
Example
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatOpenAI(max_tokens=20).configurable_fields(
    max_tokens=ConfigurableField(
        id="output_token_number",
        name="Max tokens in the output",
        description="The maximum number of tokens in the output",
    )
)

# max_tokens = 20
print(
    "max_tokens_20: ", model.invoke("tell me something about chess").content
)

# max_tokens = 200
print(
    "max_tokens_200: ",
    model.with_config(configurable={"output_token_number": 200})
    .invoke("tell me something about chess")
    .content,
)
configurable_alternatives
¶
configurable_alternatives(
which: ConfigurableField,
*,
default_key: str = "default",
prefix_keys: bool = False,
**kwargs: Runnable[Input, Output] | Callable[[], Runnable[Input, Output]],
) -> RunnableSerializable[Input, Output]
Configure alternatives for Runnable objects that can be set at runtime.
| PARAMETER | DESCRIPTION |
|---|---|
| `which` | The `ConfigurableField` instance that will be used to select the alternative. TYPE: `ConfigurableField` |
| `default_key` | The default key to use if no alternative is selected. TYPE: `str` DEFAULT: `'default'` |
| `prefix_keys` | Whether to prefix the keys with the `ConfigurableField` id. TYPE: `bool` DEFAULT: `False` |
| `**kwargs` | A dictionary of keys to `Runnable` instances or callables that return `Runnable` instances. TYPE: `Runnable[Input, Output] \| Callable[[], Runnable[Input, Output]]` |

| RETURNS | DESCRIPTION |
|---|---|
| `RunnableSerializable[Input, Output]` | A new `Runnable` with the alternatives configured. |
Example
from langchain_anthropic import ChatAnthropic
from langchain_core.runnables.utils import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatAnthropic(
    model_name="claude-sonnet-4-5-20250929"
).configurable_alternatives(
    ConfigurableField(id="llm"),
    default_key="anthropic",
    openai=ChatOpenAI(),
)

# uses the default model ChatAnthropic
print(model.invoke("which organization created you?").content)

# uses ChatOpenAI
print(
    model.with_config(configurable={"llm": "openai"})
    .invoke("which organization created you?")
    .content
)
set_verbose
¶
generate_prompt
¶
generate_prompt(
prompts: list[PromptValue],
stop: list[str] | None = None,
callbacks: Callbacks | list[Callbacks] | None = None,
**kwargs: Any,
) -> LLMResult
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
- take advantage of batched calls,
- get more output from the model than just the top generated value,
- build chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
| PARAMETER | DESCRIPTION |
|---|---|
| `prompts` | List of `PromptValue` objects. A `PromptValue` is an object that can be converted to match the format of any language model (string for pure text generation models and `BaseMessage` objects for chat models). TYPE: `list[PromptValue]` |
| `stop` | Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. TYPE: `list[str] \| None` DEFAULT: `None` |
| `callbacks` | Used for executing additional functionality, such as logging or streaming, throughout generation. TYPE: `Callbacks \| list[Callbacks] \| None` DEFAULT: `None` |
| `**kwargs` | Arbitrary additional keyword arguments. These are usually passed to the model provider API call. TYPE: `Any` |

| RETURNS | DESCRIPTION |
|---|---|
| `LLMResult` | An `LLMResult`, which contains a list of candidate `Generation` objects for each input prompt and additional model provider-specific output. |
agenerate_prompt
async
¶
agenerate_prompt(
prompts: list[PromptValue],
stop: list[str] | None = None,
callbacks: Callbacks | list[Callbacks] | None = None,
**kwargs: Any,
) -> LLMResult
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
- take advantage of batched calls,
- get more output from the model than just the top generated value,
- build chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
| PARAMETER | DESCRIPTION |
|---|---|
| `prompts` | List of `PromptValue` objects. A `PromptValue` is an object that can be converted to match the format of any language model (string for pure text generation models and `BaseMessage` objects for chat models). TYPE: `list[PromptValue]` |
| `stop` | Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. TYPE: `list[str] \| None` DEFAULT: `None` |
| `callbacks` | Used for executing additional functionality, such as logging or streaming, throughout generation. TYPE: `Callbacks \| list[Callbacks] \| None` DEFAULT: `None` |
| `**kwargs` | Arbitrary additional keyword arguments. These are usually passed to the model provider API call. TYPE: `Any` |

| RETURNS | DESCRIPTION |
|---|---|
| `LLMResult` | An `LLMResult`, which contains a list of candidate `Generation` objects for each input prompt and additional model provider-specific output. |
with_structured_output
¶
with_structured_output(
schema: dict | type, **kwargs: Any
) -> Runnable[LanguageModelInput, dict | BaseModel]
Not implemented on this class.
get_token_ids
¶
get_num_tokens_from_messages
¶
get_num_tokens_from_messages(
messages: list[BaseMessage], tools: Sequence | None = None
) -> int
Get the number of tokens in the messages.
Useful for checking if an input fits in a model's context window.
This should be overridden by model-specific implementations to provide accurate token counts via model-specific tokenizers.
Note
- The base implementation of `get_num_tokens_from_messages` ignores tool schemas.
- The base implementation of `get_num_tokens_from_messages` adds additional prefixes to messages to represent user roles, which will add to the overall token count. Model-specific implementations may choose to handle this differently.
| PARAMETER | DESCRIPTION |
|---|---|
| `messages` | The message inputs to tokenize. TYPE: `list[BaseMessage]` |
| `tools` | If provided, a sequence of dict, `BaseModel`, function, or `BaseTool` objects to be converted to tool schemas. TYPE: `Sequence \| None` DEFAULT: `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `int` | The sum of the number of tokens across the messages. |
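A minimal sketch of a context-window check, assuming a valid `GOOGLE_API_KEY`; the exact count depends on the model-specific tokenizer:

```python
from langchain_core.messages import HumanMessage

from langchain_google_genai import GoogleGenerativeAI

llm = GoogleGenerativeAI(model="gemini-2.5-pro")

n = llm.get_num_tokens_from_messages([HumanMessage(content="Hello, world!")])
print(f"{n} tokens")
```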
generate
¶
generate(
prompts: list[str],
stop: list[str] | None = None,
callbacks: Callbacks | list[Callbacks] | None = None,
*,
tags: list[str] | list[list[str]] | None = None,
metadata: dict[str, Any] | list[dict[str, Any]] | None = None,
run_name: str | list[str] | None = None,
run_id: UUID | list[UUID | None] | None = None,
**kwargs: Any,
) -> LLMResult
Pass a sequence of prompts to a model and return generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
- take advantage of batched calls,
- get more output from the model than just the top generated value,
- build chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
| PARAMETER | DESCRIPTION |
|---|---|
| `prompts` | List of string prompts. TYPE: `list[str]` |
| `stop` | Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. TYPE: `list[str] \| None` DEFAULT: `None` |
| `callbacks` | Used for executing additional functionality, such as logging or streaming, throughout generation. TYPE: `Callbacks \| list[Callbacks] \| None` DEFAULT: `None` |
| `tags` | List of tags to associate with each prompt. If provided, the length of the list must match the length of the prompts list. TYPE: `list[str] \| list[list[str]] \| None` DEFAULT: `None` |
| `metadata` | List of metadata dictionaries to associate with each prompt. If provided, the length of the list must match the length of the prompts list. TYPE: `dict[str, Any] \| list[dict[str, Any]] \| None` DEFAULT: `None` |
| `run_name` | List of run names to associate with each prompt. If provided, the length of the list must match the length of the prompts list. TYPE: `str \| list[str] \| None` DEFAULT: `None` |
| `run_id` | List of run IDs to associate with each prompt. If provided, the length of the list must match the length of the prompts list. TYPE: `UUID \| list[UUID \| None] \| None` DEFAULT: `None` |
| `**kwargs` | Arbitrary additional keyword arguments. These are usually passed to the model provider API call. TYPE: `Any` |

| RAISES | DESCRIPTION |
|---|---|
| `ValueError` | If prompts is not a list. |
| `ValueError` | If the length of `callbacks`, `tags`, `metadata`, or `run_name` (when provided as a list) does not match the length of prompts. |

| RETURNS | DESCRIPTION |
|---|---|
| `LLMResult` | An `LLMResult`, which contains a list of candidate `Generation` objects for each input prompt and additional model provider-specific output. |
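A minimal sketch, assuming a valid `GOOGLE_API_KEY`; the returned `LLMResult` holds one list of candidate `Generation` objects per input prompt:

```python
from langchain_google_genai import GoogleGenerativeAI

llm = GoogleGenerativeAI(model="gemini-2.5-pro")

result = llm.generate(["Tell me a joke.", "Tell me a fact."])
for generations in result.generations:
    # Each entry is the list of candidates for one prompt.
    print(generations[0].text)
```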
agenerate
async
¶
agenerate(
prompts: list[str],
stop: list[str] | None = None,
callbacks: Callbacks | list[Callbacks] | None = None,
*,
tags: list[str] | list[list[str]] | None = None,
metadata: dict[str, Any] | list[dict[str, Any]] | None = None,
run_name: str | list[str] | None = None,
run_id: UUID | list[UUID | None] | None = None,
**kwargs: Any,
) -> LLMResult
Asynchronously pass a sequence of prompts to a model and return generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
- take advantage of batched calls,
- get more output from the model than just the top generated value,
- build chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
| PARAMETER | DESCRIPTION |
|---|---|
| `prompts` | List of string prompts. TYPE: `list[str]` |
| `stop` | Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. TYPE: `list[str] \| None` DEFAULT: `None` |
| `callbacks` | Used for executing additional functionality, such as logging or streaming, throughout generation. TYPE: `Callbacks \| list[Callbacks] \| None` DEFAULT: `None` |
| `tags` | List of tags to associate with each prompt. If provided, the length of the list must match the length of the prompts list. TYPE: `list[str] \| list[list[str]] \| None` DEFAULT: `None` |
| `metadata` | List of metadata dictionaries to associate with each prompt. If provided, the length of the list must match the length of the prompts list. TYPE: `dict[str, Any] \| list[dict[str, Any]] \| None` DEFAULT: `None` |
| `run_name` | List of run names to associate with each prompt. If provided, the length of the list must match the length of the prompts list. TYPE: `str \| list[str] \| None` DEFAULT: `None` |
| `run_id` | List of run IDs to associate with each prompt. If provided, the length of the list must match the length of the prompts list. TYPE: `UUID \| list[UUID \| None] \| None` DEFAULT: `None` |
| `**kwargs` | Arbitrary additional keyword arguments. These are usually passed to the model provider API call. TYPE: `Any` |

| RAISES | DESCRIPTION |
|---|---|
| `ValueError` | If the length of `callbacks`, `tags`, `metadata`, or `run_name` (when provided as a list) does not match the length of prompts. |

| RETURNS | DESCRIPTION |
|---|---|
| `LLMResult` | An `LLMResult`, which contains a list of candidate `Generation` objects for each input prompt and additional model provider-specific output. |
save
¶
Save the LLM.
| PARAMETER | DESCRIPTION |
|---|---|
| `file_path` | Path to file to save the LLM to. |

| RAISES | DESCRIPTION |
|---|---|
| `ValueError` | If the file path is not a string or Path object. |
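A minimal sketch; the serialization format is inferred from the file extension (`.json`, or `.yaml`/`.yml`):

```python
from langchain_google_genai import GoogleGenerativeAI

llm = GoogleGenerativeAI(model="gemini-2.5-pro")

# Writes the model's configuration (not weights) to disk.
llm.save("gemini_llm.yaml")
```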