
langchain-openai

Module for OpenAI integrations.

Modules:

Name Description
chat_models

Module for OpenAI chat models.

custom_tool

Custom tool decorator for OpenAI custom tools.

embeddings

Module for OpenAI embeddings.

llms

Module for OpenAI large language models. Chat models are in chat_models/.

output_parsers

Output parsers for OpenAI tools.

tools

Tools package for OpenAI integrations.

Classes:

Name Description
AzureChatOpenAI

Azure OpenAI chat model integration.

ChatOpenAI

OpenAI chat model integration.

AzureOpenAIEmbeddings

AzureOpenAI embedding model integration.

OpenAIEmbeddings

OpenAI embedding model integration.

AzureOpenAI

Azure-specific OpenAI large language models.

OpenAI

OpenAI completion model integration.

AzureChatOpenAI

Bases: BaseChatOpenAI

Azure OpenAI chat model integration.

Setup

Head to the Azure OpenAI quickstart guide <https://learn.microsoft.com/en-us/azure/ai-foundry/openai/chatgpt-quickstart?tabs=keyless%2Ctypescript-keyless%2Cpython-new%2Ccommand-line&pivots=programming-language-python>__ to create your Azure OpenAI deployment.

Then install langchain-openai and set environment variables AZURE_OPENAI_API_KEY and AZURE_OPENAI_ENDPOINT:

.. code-block:: bash

pip install -U langchain-openai

export AZURE_OPENAI_API_KEY="your-api-key"
export AZURE_OPENAI_ENDPOINT="https://your-endpoint.openai.azure.com/"

Key init args — completion params:

azure_deployment: str
    Name of Azure OpenAI deployment to use.
temperature: float
    Sampling temperature.
max_tokens: Optional[int]
    Max number of tokens to generate.
logprobs: Optional[bool]
    Whether to return logprobs.

Key init args — client params:

api_version: str
    Azure OpenAI REST API version to use (distinct from the version of the underlying model). See more on the different versions <https://learn.microsoft.com/en-us/azure/ai-services/openai/reference#rest-api-versioning>__.
timeout: Union[float, Tuple[float, float], Any, None]
    Timeout for requests.
max_retries: Optional[int]
    Max number of retries.
organization: Optional[str]
    OpenAI organization ID. If not passed in, will be read from env var OPENAI_ORG_ID.
model: Optional[str]
    The name of the underlying OpenAI model. Used for tracing and token counting. Does not affect completion. E.g. 'gpt-4', 'gpt-35-turbo', etc.
model_version: Optional[str]
    The version of the underlying OpenAI model. Used for tracing and token counting. Does not affect completion. E.g. '0125', '0125-preview', etc.

See full list of supported init args and their descriptions in the params section.

Instantiate

.. code-block:: python

from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    azure_deployment="your-deployment",
    api_version="2024-05-01-preview",
    temperature=0,
    max_tokens=None,
    timeout=None,
    max_retries=2,
    # organization="...",
    # model="gpt-35-turbo",
    # model_version="0125",
    # other params...
)

Note

Any param which is not explicitly supported will be passed directly to the openai.AzureOpenAI.chat.completions.create(...) API every time the model is invoked.

For example:

.. code-block:: python

from langchain_openai import AzureChatOpenAI
import openai

AzureChatOpenAI(..., logprobs=True).invoke(...)

# results in underlying API call of:

openai.AzureOpenAI(...).chat.completions.create(..., logprobs=True)

# which is also equivalent to:

AzureChatOpenAI(...).invoke(..., logprobs=True)
Invoke

.. code-block:: python

messages = [
    (
        "system",
        "You are a helpful translator. Translate the user sentence to French.",
    ),
    ("human", "I love programming."),
]
llm.invoke(messages)

.. code-block:: python

AIMessage(
    content="J'adore programmer.",
    usage_metadata={
        "input_tokens": 28,
        "output_tokens": 6,
        "total_tokens": 34,
    },
    response_metadata={
        "token_usage": {
            "completion_tokens": 6,
            "prompt_tokens": 28,
            "total_tokens": 34,
        },
        "model_name": "gpt-4",
        "system_fingerprint": "fp_7ec89fabc6",
        "prompt_filter_results": [
            {
                "prompt_index": 0,
                "content_filter_results": {
                    "hate": {"filtered": False, "severity": "safe"},
                    "self_harm": {"filtered": False, "severity": "safe"},
                    "sexual": {"filtered": False, "severity": "safe"},
                    "violence": {"filtered": False, "severity": "safe"},
                },
            }
        ],
        "finish_reason": "stop",
        "logprobs": None,
        "content_filter_results": {
            "hate": {"filtered": False, "severity": "safe"},
            "self_harm": {"filtered": False, "severity": "safe"},
            "sexual": {"filtered": False, "severity": "safe"},
            "violence": {"filtered": False, "severity": "safe"},
        },
    },
    id="run-6d7a5282-0de0-4f27-9cc0-82a9db9a3ce9-0",
)
Stream

.. code-block:: python

for chunk in llm.stream(messages):
    print(chunk.text, end="")

.. code-block:: python

AIMessageChunk(content="", id="run-a6f294d3-0700-4f6a-abc2-c6ef1178c37f")
AIMessageChunk(content="J", id="run-a6f294d3-0700-4f6a-abc2-c6ef1178c37f")
AIMessageChunk(content="'", id="run-a6f294d3-0700-4f6a-abc2-c6ef1178c37f")
AIMessageChunk(content="ad", id="run-a6f294d3-0700-4f6a-abc2-c6ef1178c37f")
AIMessageChunk(content="ore", id="run-a6f294d3-0700-4f6a-abc2-c6ef1178c37f")
AIMessageChunk(content=" la", id="run-a6f294d3-0700-4f6a-abc2-c6ef1178c37f")
AIMessageChunk(
    content=" programm", id="run-a6f294d3-0700-4f6a-abc2-c6ef1178c37f"
)
AIMessageChunk(
    content="ation", id="run-a6f294d3-0700-4f6a-abc2-c6ef1178c37f"
)
AIMessageChunk(content=".", id="run-a6f294d3-0700-4f6a-abc2-c6ef1178c37f")
AIMessageChunk(
    content="",
    response_metadata={
        "finish_reason": "stop",
        "model_name": "gpt-4",
        "system_fingerprint": "fp_811936bd4f",
    },
    id="run-a6f294d3-0700-4f6a-abc2-c6ef1178c37f",
)

.. code-block:: python

stream = llm.stream(messages)
full = next(stream)
for chunk in stream:
    full += chunk
full

.. code-block:: python

AIMessageChunk(
    content="J'adore la programmation.",
    response_metadata={
        "finish_reason": "stop",
        "model_name": "gpt-4",
        "system_fingerprint": "fp_811936bd4f",
    },
    id="run-ba60e41c-9258-44b8-8f3a-2f10599643b3",
)
Async

.. code-block:: python

await llm.ainvoke(messages)

# stream:
# async for chunk in llm.astream(messages):
#     print(chunk.text, end="")

# batch:
# await llm.abatch([messages])
Tool calling

.. code-block:: python

from pydantic import BaseModel, Field


class GetWeather(BaseModel):
    '''Get the current weather in a given location'''

    location: str = Field(
        ..., description="The city and state, e.g. San Francisco, CA"
    )


class GetPopulation(BaseModel):
    '''Get the current population in a given location'''

    location: str = Field(
        ..., description="The city and state, e.g. San Francisco, CA"
    )


llm_with_tools = llm.bind_tools([GetWeather, GetPopulation])
ai_msg = llm_with_tools.invoke(
    "Which city is hotter today and which is bigger: LA or NY?"
)
ai_msg.tool_calls

.. code-block:: python

[
    {
        "name": "GetWeather",
        "args": {"location": "Los Angeles, CA"},
        "id": "call_6XswGD5Pqk8Tt5atYr7tfenU",
    },
    {
        "name": "GetWeather",
        "args": {"location": "New York, NY"},
        "id": "call_ZVL15vA8Y7kXqOy3dtmQgeCi",
    },
    {
        "name": "GetPopulation",
        "args": {"location": "Los Angeles, CA"},
        "id": "call_49CFW8zqC9W7mh7hbMLSIrXw",
    },
    {
        "name": "GetPopulation",
        "args": {"location": "New York, NY"},
        "id": "call_6ghfKxV264jEfe1mRIkS3PE7",
    },
]
Structured output

.. code-block:: python

from typing import Optional

from pydantic import BaseModel, Field


class Joke(BaseModel):
    '''Joke to tell user.'''

    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline to the joke")
    rating: Optional[int] = Field(
        description="How funny the joke is, from 1 to 10"
    )


structured_llm = llm.with_structured_output(Joke)
structured_llm.invoke("Tell me a joke about cats")

.. code-block:: python

Joke(
    setup="Why was the cat sitting on the computer?",
    punchline="To keep an eye on the mouse!",
    rating=None,
)

See AzureChatOpenAI.with_structured_output() for more.

JSON mode

.. code-block:: python

json_llm = llm.bind(response_format={"type": "json_object"})
ai_msg = json_llm.invoke(
    "Return a JSON object with key 'random_ints' and a value of 10 random ints in [0-99]"
)
ai_msg.content

.. code-block:: python

'\n{\n  "random_ints": [23, 87, 45, 12, 78, 34, 56, 90, 11, 67]\n}'
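
The content is a raw JSON string; it can be parsed with the standard library (a minimal sketch, assuming the model returned valid JSON):

.. code-block:: python

import json

parsed = json.loads(ai_msg.content)
parsed["random_ints"]
# -> [23, 87, 45, 12, 78, 34, 56, 90, 11, 67]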
Image input

.. code-block:: python

import base64
import httpx
from langchain_core.messages import HumanMessage

image_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
image_data = base64.b64encode(httpx.get(image_url).content).decode("utf-8")
message = HumanMessage(
    content=[
        {"type": "text", "text": "describe the weather in this image"},
        {
            "type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{image_data}"},
        },
    ]
)
ai_msg = llm.invoke([message])
ai_msg.content

.. code-block:: python

"The weather in the image appears to be quite pleasant. The sky is mostly clear"
Token usage

.. code-block:: python

ai_msg = llm.invoke(messages)
ai_msg.usage_metadata

.. code-block:: python

{"input_tokens": 28, "output_tokens": 5, "total_tokens": 33}
Logprobs

.. code-block:: python

logprobs_llm = llm.bind(logprobs=True)
ai_msg = logprobs_llm.invoke(messages)
ai_msg.response_metadata["logprobs"]

.. code-block:: python

{
    "content": [
        {
            "token": "J",
            "bytes": [74],
            "logprob": -4.9617593e-06,
            "top_logprobs": [],
        },
        {
            "token": "'adore",
            "bytes": [39, 97, 100, 111, 114, 101],
            "logprob": -0.25202933,
            "top_logprobs": [],
        },
        {
            "token": " la",
            "bytes": [32, 108, 97],
            "logprob": -0.20141791,
            "top_logprobs": [],
        },
        {
            "token": " programmation",
            "bytes": [
                32,
                112,
                114,
                111,
                103,
                114,
                97,
                109,
                109,
                97,
                116,
                105,
                111,
                110,
            ],
            "logprob": -1.9361265e-07,
            "top_logprobs": [],
        },
        {
            "token": ".",
            "bytes": [46],
            "logprob": -1.2233183e-05,
            "top_logprobs": [],
        },
    ]
}

Response metadata

.. code-block:: python

    ai_msg = llm.invoke(messages)
    ai_msg.response_metadata

.. code-block:: python

    {
        "token_usage": {
            "completion_tokens": 6,
            "prompt_tokens": 28,
            "total_tokens": 34,
        },
        "model_name": "gpt-35-turbo",
        "system_fingerprint": None,
        "prompt_filter_results": [
            {
                "prompt_index": 0,
                "content_filter_results": {
                    "hate": {"filtered": False, "severity": "safe"},
                    "self_harm": {"filtered": False, "severity": "safe"},
                    "sexual": {"filtered": False, "severity": "safe"},
                    "violence": {"filtered": False, "severity": "safe"},
                },
            }
        ],
        "finish_reason": "stop",
        "logprobs": None,
        "content_filter_results": {
            "hate": {"filtered": False, "severity": "safe"},
            "self_harm": {"filtered": False, "severity": "safe"},
            "sexual": {"filtered": False, "severity": "safe"},
            "violence": {"filtered": False, "severity": "safe"},
        },
    }

Methods:

Name Description
get_name

Get the name of the Runnable.

get_input_schema

Get a pydantic model that can be used to validate input to the Runnable.

get_input_jsonschema

Get a JSON schema that represents the input to the Runnable.

get_output_schema

Get a pydantic model that can be used to validate output to the Runnable.

get_output_jsonschema

Get a JSON schema that represents the output of the Runnable.

config_schema

The type of config this Runnable accepts specified as a pydantic model.

get_config_jsonschema

Get a JSON schema that represents the config of the Runnable.

get_graph

Return a graph representation of this Runnable.

get_prompts

Return a list of prompts used by this Runnable.

__or__

Runnable "or" operator.

__ror__

Runnable "reverse-or" operator.

pipe

Pipe runnables.

pick

Pick keys from the output dict of this Runnable.

assign

Assigns new fields to the dict output of this Runnable.

batch

Default implementation runs invoke in parallel using a thread pool executor.

batch_as_completed

Run invoke in parallel on a list of inputs.

abatch

Default implementation runs ainvoke in parallel using asyncio.gather.

abatch_as_completed

Run ainvoke in parallel on a list of inputs.

astream_log

Stream all output from a Runnable, as reported to the callback system.

astream_events

Generate a stream of events.

transform

Transform inputs to outputs.

atransform

Transform inputs to outputs.

bind

Bind arguments to a Runnable, returning a new Runnable.

with_config

Bind config to a Runnable, returning a new Runnable.

with_listeners

Bind lifecycle listeners to a Runnable, returning a new Runnable.

with_alisteners

Bind async lifecycle listeners to a Runnable.

with_types

Bind input and output types to a Runnable, returning a new Runnable.

with_retry

Create a new Runnable that retries the original Runnable on exceptions.

map

Return a new Runnable that maps a list of inputs to a list of outputs.

with_fallbacks

Add fallbacks to a Runnable, returning a new Runnable.

as_tool

Create a BaseTool from a Runnable.

__init__
lc_id

Return a unique identifier for this class for serialization purposes.

to_json

Serialize the Runnable to JSON.

to_json_not_implemented

Serialize a "not implemented" object.

configurable_fields

Configure particular Runnable fields at runtime.

configurable_alternatives

Configure alternatives for Runnables that can be set at runtime.

set_verbose

If verbose is None, set it.

get_token_ids

Get the tokens present in the text with tiktoken package.

get_num_tokens

Get the number of tokens present in the text.

get_num_tokens_from_messages

Calculate num tokens for gpt-3.5-turbo and gpt-4 with tiktoken package.

generate

Pass a sequence of prompts to the model and return model generations.

agenerate

Asynchronously pass a sequence of prompts to a model and return generations.

dict

Return a dictionary of the LLM.

bind_tools

Bind tool-like objects to this chat model.

build_extra

Build extra kwargs from additional params that were passed in.

validate_temperature

Validate temperature parameter for different models.

get_lc_namespace

Get the namespace of the langchain object.

is_lc_serializable

Check if the class is serializable in langchain.

validate_environment

Validate that api key and python package exists in environment.

with_structured_output

Model wrapper that returns outputs formatted to match the given schema.

Attributes:

Name Type Description
InputType TypeAlias

Get the input type for this runnable.

OutputType Any

Get the output type for this runnable.

input_schema type[BaseModel]

The type of input this Runnable accepts specified as a pydantic model.

output_schema type[BaseModel]

Output schema.

config_specs list[ConfigurableFieldSpec]

List configurable fields for this Runnable.

cache BaseCache | bool | None

Whether to cache the response.

verbose bool

Whether to print out response text.

callbacks Callbacks

Callbacks to add to the run trace.

tags list[str] | None

Tags to add to the run trace.

metadata dict[str, Any] | None

Metadata to add to the run trace.

custom_get_token_ids Callable[[str], list[int]] | None

Optional encoder to use for counting tokens.

rate_limiter BaseRateLimiter | None

An optional rate limiter to use for limiting the number of requests.

disable_streaming bool | Literal['tool_calling']

Whether to disable streaming for this model.

output_version Optional[str]

Version of AIMessage output format to use.

temperature Optional[float]

What sampling temperature to use.

model_kwargs dict[str, Any]

Holds any model parameters valid for create call not explicitly specified.

openai_api_base Optional[str]

Base URL path for API requests, leave blank if not using a proxy or service emulator.

openai_organization Optional[str]

Automatically inferred from env var OPENAI_ORG_ID if not provided.

request_timeout Union[float, tuple[float, float], Any, None]

Timeout for requests to OpenAI completion API. Can be float, httpx.Timeout or None.

stream_usage Optional[bool]

Whether to include usage metadata in streaming output. If enabled, an additional message chunk will be generated during the stream including usage metadata.

max_retries Optional[int]

Maximum number of retries to make when generating.

presence_penalty Optional[float]

Penalizes repeated tokens.

frequency_penalty Optional[float]

Penalizes repeated tokens according to frequency.

seed Optional[int]

Seed for generation

logprobs Optional[bool]

Whether to return logprobs.

top_logprobs Optional[int]

Number of most likely tokens to return at each token position, each with an associated log probability.

logit_bias Optional[dict[int, int]]

Modify the likelihood of specified tokens appearing in the completion.

streaming bool

Whether to stream the results or not.

n Optional[int]

Number of chat completions to generate for each prompt.

top_p Optional[float]

Total probability mass of tokens to consider at each step.

reasoning_effort Optional[str]

Constrains effort on reasoning for reasoning models. For use with the Chat Completions API.

reasoning Optional[dict[str, Any]]

Reasoning parameters for reasoning models, i.e., OpenAI o-series models (o1, o3, o4-mini, etc.). For use with the Responses API.

verbosity Optional[str]

Controls the verbosity level of responses for reasoning models. For use with the Responses API.

tiktoken_model_name Optional[str]

The model name to pass to tiktoken when using this class.

http_client Union[Any, None]

Optional httpx.Client. Only used for sync invocations. Must specify http_async_client as well if you'd like a custom client for async invocations.

http_async_client Union[Any, None]

Optional httpx.AsyncClient. Only used for async invocations. Must specify http_client as well if you'd like a custom client for sync invocations.

stop Optional[Union[list[str], str]]

Default stop sequences.

extra_body Optional[Mapping[str, Any]]

Optional additional JSON properties to include in the request parameters when making requests to OpenAI compatible APIs.

include_response_headers bool

Whether to include response headers in the output message response_metadata.

include Optional[list[str]]

Additional fields to include in generations from Responses API.

service_tier Optional[str]

Latency tier for request. Options are 'auto', 'default', or 'flex'.

store Optional[bool]

If True, OpenAI may store response data for future use. Defaults to True for the Responses API and False for the Chat Completions API.

truncation Optional[str]

Truncation strategy (Responses API). Can be 'auto' or 'disabled' (default).

use_previous_response_id bool

If True, always pass previous_response_id using the ID of the most recent response.

use_responses_api Optional[bool]

Whether to use the Responses API instead of the Chat API.

azure_endpoint Optional[str]

Your Azure endpoint, including the resource.

deployment_name Union[str, None]

A model deployment.

openai_api_version Optional[str]

Automatically inferred from env var OPENAI_API_VERSION if not provided.

openai_api_key Optional[SecretStr]

Automatically inferred from env var AZURE_OPENAI_API_KEY if not provided.

azure_ad_token Optional[SecretStr]

Your Azure Active Directory token.

azure_ad_token_provider Union[Callable[[], str], None]

A function that returns an Azure Active Directory token.

azure_ad_async_token_provider Union[Callable[[], Awaitable[str]], None]

A function that returns an Azure Active Directory token.

model_version str

The version of the model (e.g. '0125' for 'gpt-3.5-0125').

openai_api_type Optional[str]

Legacy, for openai<1.0.0 support.

validate_base_url bool

If legacy arg openai_api_base is passed in, try to infer if it is a base_url or azure_endpoint.

model_name Optional[str]

Name of the deployed OpenAI model, e.g. 'gpt-4o', 'gpt-35-turbo', etc.

disabled_params Optional[dict[str, Any]]

Parameters of the OpenAI client or chat.completions endpoint that should be disabled for the given model.

max_tokens Optional[int]

Maximum number of tokens to generate.

lc_secrets dict[str, str]

Get the mapping of secret environment variables.

lc_attributes dict[str, Any]

Get the attributes relevant to tracing.

InputType property

InputType: TypeAlias

Get the input type for this runnable.

OutputType property

OutputType: Any

Get the output type for this runnable.

input_schema property

input_schema: type[BaseModel]

The type of input this Runnable accepts specified as a pydantic model.

output_schema property

output_schema: type[BaseModel]

Output schema.

The type of output this Runnable produces specified as a pydantic model.

config_specs property

config_specs: list[ConfigurableFieldSpec]

List configurable fields for this Runnable.

cache class-attribute instance-attribute

cache: BaseCache | bool | None = Field(
    default=None, exclude=True
)

Whether to cache the response.

  • If true, will use the global cache.
  • If false, will not use a cache
  • If None, will use the global cache if it's set, otherwise no cache.
  • If instance of BaseCache, will use the provided cache.

Caching is not currently supported for streaming methods of models.
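
For example, responses can be cached in memory with the InMemoryCache that ships with langchain-core (a minimal sketch; the deployment name is hypothetical):

.. code-block:: python

from langchain_core.caches import InMemoryCache
from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    azure_deployment="your-deployment",  # hypothetical deployment name
    cache=InMemoryCache(),
)

llm.invoke("Tell me a joke")  # calls the API
llm.invoke("Tell me a joke")  # served from the cache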

verbose class-attribute instance-attribute

verbose: bool = Field(
    default_factory=_get_verbosity, exclude=True, repr=False
)

Whether to print out response text.

callbacks class-attribute instance-attribute

callbacks: Callbacks = Field(default=None, exclude=True)

Callbacks to add to the run trace.

tags class-attribute instance-attribute

tags: list[str] | None = Field(default=None, exclude=True)

Tags to add to the run trace.

metadata class-attribute instance-attribute

metadata: dict[str, Any] | None = Field(
    default=None, exclude=True
)

Metadata to add to the run trace.

custom_get_token_ids class-attribute instance-attribute

custom_get_token_ids: Callable[[str], list[int]] | None = (
    Field(default=None, exclude=True)
)

Optional encoder to use for counting tokens.

rate_limiter class-attribute instance-attribute

rate_limiter: BaseRateLimiter | None = Field(
    default=None, exclude=True
)

An optional rate limiter to use for limiting the number of requests.

disable_streaming class-attribute instance-attribute

disable_streaming: bool | Literal['tool_calling'] = False

Whether to disable streaming for this model.

If streaming is bypassed, then stream()/astream()/astream_events() will defer to invoke()/ainvoke().

  • If True, will always bypass streaming case.
  • If 'tool_calling', will bypass streaming case only when the model is called with a tools keyword argument. In other words, LangChain will automatically switch to non-streaming behavior (invoke()) only when the tools argument is provided. This offers the best of both worlds.
  • If False (default), will always use streaming case if available.

The main reason for this flag is that code might be written using stream() and a user may want to swap out a given model for another model whose implementation does not properly support streaming.
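
A minimal sketch (the deployment name is hypothetical):

.. code-block:: python

from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    azure_deployment="your-deployment",  # hypothetical
    # Fall back to invoke() only when tools are bound; plain calls still stream:
    disable_streaming="tool_calling",
)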

output_version class-attribute instance-attribute

output_version: Optional[str] = Field(
    default_factory=from_env(
        "LC_OUTPUT_VERSION", default=None
    )
)

Version of AIMessage output format to use.

This field is used to roll-out new output formats for chat model AIMessages in a backwards-compatible way.

Supported values:

  • 'v0': AIMessage format as of langchain-openai 0.3.x.
  • 'responses/v1': Formats Responses API output items into AIMessage content blocks (Responses API only)
  • "v1": v1 of LangChain cross-provider standard.

Behavior changed in 1.0.0

Default updated to "responses/v1".


temperature class-attribute instance-attribute

temperature: Optional[float] = None

What sampling temperature to use.

model_kwargs class-attribute instance-attribute

model_kwargs: dict[str, Any] = Field(default_factory=dict)

Holds any model parameters valid for create call not explicitly specified.

openai_api_base class-attribute instance-attribute

openai_api_base: Optional[str] = Field(
    default=None, alias="base_url"
)

Base URL path for API requests, leave blank if not using a proxy or service emulator.

openai_organization class-attribute instance-attribute

openai_organization: Optional[str] = Field(
    default=None, alias="organization"
)

Automatically inferred from env var OPENAI_ORG_ID if not provided.

request_timeout class-attribute instance-attribute

request_timeout: Union[
    float, tuple[float, float], Any, None
] = Field(default=None, alias="timeout")

Timeout for requests to OpenAI completion API. Can be float, httpx.Timeout or None.

stream_usage class-attribute instance-attribute

stream_usage: Optional[bool] = None

Whether to include usage metadata in streaming output. If enabled, an additional message chunk will be generated during the stream including usage metadata.

This parameter is enabled unless openai_api_base is set or the model is initialized with a custom client, as many chat completions APIs do not support streaming token usage.

Added in version 0.3.9

Behavior changed in 0.3.35

Enabled for default base URL and client.
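
stream_usage can also be passed per call (a sketch, assuming the messages defined in the examples above):

.. code-block:: python

stream = llm.stream(messages, stream_usage=True)
full = next(stream)
for chunk in stream:
    full += chunk
full.usage_metadata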

max_retries class-attribute instance-attribute

max_retries: Optional[int] = None

Maximum number of retries to make when generating.

presence_penalty class-attribute instance-attribute

presence_penalty: Optional[float] = None

Penalizes repeated tokens.

frequency_penalty class-attribute instance-attribute

frequency_penalty: Optional[float] = None

Penalizes repeated tokens according to frequency.

seed class-attribute instance-attribute

seed: Optional[int] = None

Seed for generation

logprobs class-attribute instance-attribute

logprobs: Optional[bool] = None

Whether to return logprobs.

top_logprobs class-attribute instance-attribute

top_logprobs: Optional[int] = None

Number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true if this parameter is used.

logit_bias class-attribute instance-attribute

logit_bias: Optional[dict[int, int]] = None

Modify the likelihood of specified tokens appearing in the completion.

streaming class-attribute instance-attribute

streaming: bool = False

Whether to stream the results or not.

n class-attribute instance-attribute

n: Optional[int] = None

Number of chat completions to generate for each prompt.

top_p class-attribute instance-attribute

top_p: Optional[float] = None

Total probability mass of tokens to consider at each step.

reasoning_effort class-attribute instance-attribute

reasoning_effort: Optional[str] = None

Constrains effort on reasoning for reasoning models. For use with the Chat Completions API.

Reasoning models only, like OpenAI o1, o3, and o4-mini.

Currently supported values are 'minimal', 'low', 'medium', and 'high'. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

Added in version 0.2.14
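
A minimal sketch (the deployment and API version are hypothetical; the deployment must point at a reasoning model):

.. code-block:: python

from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    azure_deployment="o4-mini",  # hypothetical reasoning-model deployment
    api_version="2024-12-01-preview",  # hypothetical
    reasoning_effort="low",
)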

reasoning class-attribute instance-attribute

reasoning: Optional[dict[str, Any]] = None

Reasoning parameters for reasoning models, i.e., OpenAI o-series models (o1, o3, o4-mini, etc.). For use with the Responses API.

Example:

.. code-block:: python

reasoning={
    "effort": "medium",  # can be "low", "medium", or "high"
    "summary": "auto",  # can be "auto", "concise", or "detailed"
}

Added in version 0.3.24

verbosity class-attribute instance-attribute

verbosity: Optional[str] = None

Controls the verbosity level of responses for reasoning models. For use with the Responses API.

Currently supported values are 'low', 'medium', and 'high'.

Controls how detailed the model's responses are.

Added in version 0.3.28

tiktoken_model_name class-attribute instance-attribute

tiktoken_model_name: Optional[str] = None

The model name to pass to tiktoken when using this class. Tiktoken is used to count the number of tokens in documents to constrain them to be under a certain limit. By default, when set to None, this will be the same as the model name. However, there are some cases where you may want to use this class with a model name not supported by tiktoken. This can include when using Azure deployments or when using one of the many model providers that expose an OpenAI-like API but with different models. In those cases, to avoid erroring when tiktoken is called, you can specify a model name to use here.
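
For example, token counting can be pinned to a known encoding when the deployed model name is not recognized by tiktoken (a sketch; the deployment name is hypothetical):

.. code-block:: python

llm = AzureChatOpenAI(
    azure_deployment="my-finetuned-deployment",  # hypothetical
    tiktoken_model_name="gpt-4o",  # count tokens using gpt-4o's encoding
)
llm.get_num_tokens("How many tokens is this?")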

http_client class-attribute instance-attribute

http_client: Union[Any, None] = Field(
    default=None, exclude=True
)

Optional httpx.Client. Only used for sync invocations. Must specify http_async_client as well if you'd like a custom client for async invocations.

http_async_client class-attribute instance-attribute

http_async_client: Union[Any, None] = Field(
    default=None, exclude=True
)

Optional httpx.AsyncClient. Only used for async invocations. Must specify http_client as well if you'd like a custom client for sync invocations.

stop class-attribute instance-attribute

stop: Optional[Union[list[str], str]] = Field(
    default=None, alias="stop_sequences"
)

Default stop sequences.

extra_body class-attribute instance-attribute

extra_body: Optional[Mapping[str, Any]] = None

Optional additional JSON properties to include in the request parameters when making requests to OpenAI compatible APIs, such as vLLM, LM Studio, or other providers.

This is the recommended way to pass custom parameters that are specific to your OpenAI-compatible API provider but not part of the standard OpenAI API.

Examples:

  • LM Studio TTL parameter: extra_body={"ttl": 300}
  • vLLM custom parameters: extra_body={"use_beam_search": True}
  • Any other provider-specific parameters

Note

Do NOT use model_kwargs for custom parameters that are not part of the standard OpenAI API, as this will cause errors when making API calls. Use extra_body instead.
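
A minimal sketch using ChatOpenAI against a local LM Studio server (the ttl field is an LM Studio-specific parameter and the endpoint is hypothetical, shown purely as an illustration):

.. code-block:: python

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="my-local-model",  # hypothetical model served locally
    base_url="http://localhost:1234/v1",  # hypothetical local endpoint
    api_key="lm-studio",  # hypothetical; local servers often accept any key
    extra_body={"ttl": 300},  # forwarded verbatim in the request body
)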

include_response_headers class-attribute instance-attribute

include_response_headers: bool = False

Whether to include response headers in the output message response_metadata.

include class-attribute instance-attribute

include: Optional[list[str]] = None

Additional fields to include in generations from Responses API.

Supported values:

  • 'file_search_call.results'
  • 'message.input_image.image_url'
  • 'computer_call_output.output.image_url'
  • 'reasoning.encrypted_content'
  • 'code_interpreter_call.outputs'

Added in version 0.3.24

service_tier class-attribute instance-attribute

service_tier: Optional[str] = None

Latency tier for request. Options are 'auto', 'default', or 'flex'. Relevant for users of OpenAI's scale tier service.

store class-attribute instance-attribute

store: Optional[bool] = None

If True, OpenAI may store response data for future use. Defaults to True for the Responses API and False for the Chat Completions API.

Added in version 0.3.24

truncation class-attribute instance-attribute

truncation: Optional[str] = None

Truncation strategy (Responses API). Can be 'auto' or 'disabled' (default). If 'auto', model may drop input items from the middle of the message sequence to fit the context window.

Added in version 0.3.24

use_previous_response_id class-attribute instance-attribute

use_previous_response_id: bool = False

If True, always pass previous_response_id using the ID of the most recent response. Responses API only.

Input messages up to the most recent response will be dropped from request payloads.

For example, the following two are equivalent:

.. code-block:: python

llm = ChatOpenAI(
    model="o4-mini",
    use_previous_response_id=True,
)
llm.invoke(
    [
        HumanMessage("Hello"),
        AIMessage("Hi there!", response_metadata={"id": "resp_123"}),
        HumanMessage("How are you?"),
    ]
)

.. code-block:: python

llm = ChatOpenAI(model="o4-mini", use_responses_api=True)
llm.invoke([HumanMessage("How are you?")], previous_response_id="resp_123")

Added in version 0.3.26

use_responses_api class-attribute instance-attribute

use_responses_api: Optional[bool] = None

Whether to use the Responses API instead of the Chat API.

If not specified then will be inferred based on invocation params.

Added in version 0.3.9

azure_endpoint class-attribute instance-attribute

azure_endpoint: Optional[str] = Field(
    default_factory=from_env(
        "AZURE_OPENAI_ENDPOINT", default=None
    )
)

Your Azure endpoint, including the resource.

Automatically inferred from env var AZURE_OPENAI_ENDPOINT if not provided.

Example: https://example-resource.azure.openai.com/

deployment_name class-attribute instance-attribute

deployment_name: Union[str, None] = Field(
    default=None, alias="azure_deployment"
)

A model deployment.

If given, sets the base client URL to include /deployments/{azure_deployment}.

Note

This means you won't be able to use non-deployment endpoints.

openai_api_version class-attribute instance-attribute

openai_api_version: Optional[str] = Field(
    alias="api_version",
    default_factory=from_env(
        "OPENAI_API_VERSION", default=None
    ),
)

Automatically inferred from env var OPENAI_API_VERSION if not provided.

openai_api_key class-attribute instance-attribute

openai_api_key: Optional[SecretStr] = Field(
    alias="api_key",
    default_factory=secret_from_env(
        ["AZURE_OPENAI_API_KEY", "OPENAI_API_KEY"],
        default=None,
    ),
)

Automatically inferred from env var AZURE_OPENAI_API_KEY if not provided.

azure_ad_token class-attribute instance-attribute

azure_ad_token: Optional[SecretStr] = Field(
    default_factory=secret_from_env(
        "AZURE_OPENAI_AD_TOKEN", default=None
    )
)

Your Azure Active Directory token.

Automatically inferred from env var AZURE_OPENAI_AD_TOKEN if not provided.

For more, see this page <https://www.microsoft.com/en-us/security/business/identity-access/microsoft-entra-id>__.

azure_ad_token_provider class-attribute instance-attribute

azure_ad_token_provider: Union[Callable[[], str], None] = (
    None
)

A function that returns an Azure Active Directory token.

Will be invoked on every sync request. For async requests, will be invoked if azure_ad_async_token_provider is not provided.
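
For example, Microsoft Entra ID authentication can be wired in with the azure-identity package (a sketch; assumes azure-identity is installed and the deployment name is hypothetical):

.. code-block:: python

from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from langchain_openai import AzureChatOpenAI

token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

llm = AzureChatOpenAI(
    azure_deployment="your-deployment",  # hypothetical
    api_version="2024-05-01-preview",
    azure_ad_token_provider=token_provider,
)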

azure_ad_async_token_provider class-attribute instance-attribute

azure_ad_async_token_provider: Union[
    Callable[[], Awaitable[str]], None
] = None

A function that returns an Azure Active Directory token.

Will be invoked on every async request.

model_version class-attribute instance-attribute

model_version: str = ''

The version of the model (e.g. '0125' for 'gpt-3.5-0125').

Azure OpenAI doesn't return the model version with the response by default, so it must be specified manually if you want to use this information downstream, e.g. when calculating costs.

When you specify the version, it is appended to the model name in the response, which helps you calculate costs properly. The model version is not validated, so make sure you set it correctly.

openai_api_type class-attribute instance-attribute

openai_api_type: Optional[str] = Field(
    default_factory=from_env(
        "OPENAI_API_TYPE", default="azure"
    )
)

Legacy, for openai<1.0.0 support.

validate_base_url class-attribute instance-attribute

validate_base_url: bool = True

If legacy arg openai_api_base is passed in, try to infer if it is a base_url or azure_endpoint and update client params accordingly.

model_name class-attribute instance-attribute

model_name: Optional[str] = Field(
    default=None, alias="model"
)

Name of the deployed OpenAI model, e.g. 'gpt-4o', 'gpt-35-turbo', etc.

Distinct from the Azure deployment name, which is set by the Azure user. Used for tracing and token counting.

Warning

Does NOT affect completion.

disabled_params class-attribute instance-attribute

disabled_params: Optional[dict[str, Any]] = Field(
    default=None
)

Parameters of the OpenAI client or chat.completions endpoint that should be disabled for the given model.

Should be specified as {"param": None | ['val1', 'val2']} where the key is the parameter and the value is either None, meaning that the parameter should never be used, or a list of disabled values for the parameter.

For example, older models may not support the 'parallel_tool_calls' parameter at all, in which case disabled_params={"parallel_tool_calls": None} can be passed in.

If a parameter is disabled then it will not be used by default in any methods, e.g. in langchain_openai.chat_models.azure.AzureChatOpenAI.with_structured_output. However, this does not prevent a user from directly passing in the parameter during invocation.

By default, unless model_name="gpt-4o" is specified, 'parallel_tool_calls' is disabled.
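
A minimal sketch (the deployment name is hypothetical):

.. code-block:: python

llm = AzureChatOpenAI(
    azure_deployment="your-deployment",  # hypothetical
    disabled_params={"parallel_tool_calls": None},  # never send this param
)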

max_tokens class-attribute instance-attribute

max_tokens: Optional[int] = Field(
    default=None, alias="max_completion_tokens"
)

Maximum number of tokens to generate.

lc_secrets property

lc_secrets: dict[str, str]

Get the mapping of secret environment variables.

lc_attributes property

lc_attributes: dict[str, Any]

Get the attributes relevant to tracing.

get_name

get_name(
    suffix: str | None = None, *, name: str | None = None
) -> str

Get the name of the Runnable.

Parameters:

Name Type Description Default
suffix str | None

An optional suffix to append to the name.

None
name str | None

An optional name to use instead of the Runnable's name.

None

Returns:

Type Description
str

The name of the Runnable.

get_input_schema

get_input_schema(
    config: RunnableConfig | None = None,
) -> type[BaseModel]

Get a pydantic model that can be used to validate input to the Runnable.

Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic input schema that depends on which configuration the Runnable is invoked with.

This method allows you to get an input schema for a specific configuration.

Parameters:

Name Type Description Default
config RunnableConfig | None

A config to use when generating the schema.

None

Returns:

Type Description
type[BaseModel]

A pydantic model that can be used to validate input.

get_input_jsonschema

get_input_jsonschema(
    config: RunnableConfig | None = None,
) -> dict[str, Any]

Get a JSON schema that represents the input to the Runnable.

Parameters:

Name Type Description Default
config RunnableConfig | None

A config to use when generating the schema.

None

Returns:

Type Description
dict[str, Any]

A JSON schema that represents the input to the Runnable.

Example
from langchain_core.runnables import RunnableLambda


def add_one(x: int) -> int:
    return x + 1


runnable = RunnableLambda(add_one)

print(runnable.get_input_jsonschema())

Added in version 0.3.0

get_output_schema

get_output_schema(
    config: RunnableConfig | None = None,
) -> type[BaseModel]

Get a pydantic model that can be used to validate output to the Runnable.

Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic output schema that depends on which configuration the Runnable is invoked with.

This method allows you to get an output schema for a specific configuration.

Parameters:

Name Type Description Default
config RunnableConfig | None

A config to use when generating the schema.

None

Returns:

Type Description
type[BaseModel]

A pydantic model that can be used to validate output.

get_output_jsonschema

get_output_jsonschema(
    config: RunnableConfig | None = None,
) -> dict[str, Any]

Get a JSON schema that represents the output of the Runnable.

Parameters:

Name Type Description Default
config RunnableConfig | None

A config to use when generating the schema.

None

Returns:

Type Description
dict[str, Any]

A JSON schema that represents the output of the Runnable.

Example
from langchain_core.runnables import RunnableLambda


def add_one(x: int) -> int:
    return x + 1


runnable = RunnableLambda(add_one)

print(runnable.get_output_jsonschema())

Added in version 0.3.0

config_schema

config_schema(
    *, include: Sequence[str] | None = None
) -> type[BaseModel]

The type of config this Runnable accepts specified as a pydantic model.

To mark a field as configurable, see the configurable_fields and configurable_alternatives methods.

Parameters:

Name Type Description Default
include Sequence[str] | None

A list of fields to include in the config schema.

None

Returns:

Type Description
type[BaseModel]

A pydantic model that can be used to validate config.

get_config_jsonschema

get_config_jsonschema(
    *, include: Sequence[str] | None = None
) -> dict[str, Any]

Get a JSON schema that represents the config of the Runnable.

Parameters:

Name Type Description Default
include Sequence[str] | None

A list of fields to include in the config schema.

None

Returns:

Type Description
dict[str, Any]

A JSON schema that represents the config of the Runnable.

Added in version 0.3.0

get_graph

get_graph(config: RunnableConfig | None = None) -> Graph

Return a graph representation of this Runnable.

get_prompts

get_prompts(
    config: RunnableConfig | None = None,
) -> list[BasePromptTemplate]

Return a list of prompts used by this Runnable.

__or__

__or__(
    other: (
        Runnable[Any, Other]
        | Callable[[Iterator[Any]], Iterator[Other]]
        | Callable[
            [AsyncIterator[Any]], AsyncIterator[Other]
        ]
        | Callable[[Any], Other]
        | Mapping[
            str,
            Runnable[Any, Other]
            | Callable[[Any], Other]
            | Any,
        ]
    ),
) -> RunnableSerializable[Input, Other]

Runnable "or" operator.

Compose this Runnable with another object to create a RunnableSequence.

Parameters:

Name Type Description Default
other Runnable[Any, Other] | Callable[[Iterator[Any]], Iterator[Other]] | Callable[[AsyncIterator[Any]], AsyncIterator[Other]] | Callable[[Any], Other] | Mapping[str, Runnable[Any, Other] | Callable[[Any], Other] | Any]

Another Runnable or a Runnable-like object.

required

Returns:

Type Description
RunnableSerializable[Input, Other]

A new Runnable.

__ror__

__ror__(
    other: (
        Runnable[Other, Any]
        | Callable[[Iterator[Other]], Iterator[Any]]
        | Callable[
            [AsyncIterator[Other]], AsyncIterator[Any]
        ]
        | Callable[[Other], Any]
        | Mapping[
            str,
            Runnable[Other, Any]
            | Callable[[Other], Any]
            | Any,
        ]
    ),
) -> RunnableSerializable[Other, Output]

Runnable "reverse-or" operator.

Compose this Runnable with another object to create a RunnableSequence.

Parameters:

Name Type Description Default
other Runnable[Other, Any] | Callable[[Iterator[Other]], Iterator[Any]] | Callable[[AsyncIterator[Other]], AsyncIterator[Any]] | Callable[[Other], Any] | Mapping[str, Runnable[Other, Any] | Callable[[Other], Any] | Any]

Another Runnable or a Runnable-like object.

required

Returns:

Type Description
RunnableSerializable[Other, Output]

A new Runnable.

pipe

pipe(
    *others: Runnable[Any, Other] | Callable[[Any], Other],
    name: str | None = None
) -> RunnableSerializable[Input, Other]

Pipe runnables.

Compose this Runnable with Runnable-like objects to make a RunnableSequence.

Equivalent to RunnableSequence(self, *others) or self | others[0] | ...

Example
from langchain_core.runnables import RunnableLambda


def add_one(x: int) -> int:
    return x + 1


def mul_two(x: int) -> int:
    return x * 2


runnable_1 = RunnableLambda(add_one)
runnable_2 = RunnableLambda(mul_two)
sequence = runnable_1.pipe(runnable_2)
# Or equivalently:
# sequence = runnable_1 | runnable_2
# sequence = RunnableSequence(first=runnable_1, last=runnable_2)
sequence.invoke(1)
await sequence.ainvoke(1)
# -> 4

sequence.batch([1, 2, 3])
await sequence.abatch([1, 2, 3])
# -> [4, 6, 8]

Parameters:

Name Type Description Default
*others Runnable[Any, Other] | Callable[[Any], Other]

Other Runnable or Runnable-like objects to compose

()
name str | None

An optional name for the resulting RunnableSequence.

None

Returns:

Type Description
RunnableSerializable[Input, Other]

A new Runnable.

pick

pick(
    keys: str | list[str],
) -> RunnableSerializable[Any, Any]

Pick keys from the output dict of this Runnable.

Pick single key:

```python
import json

from langchain_core.runnables import RunnableLambda, RunnableMap

as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)
chain = RunnableMap(str=as_str, json=as_json)

chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3]}

json_only_chain = chain.pick("json")
json_only_chain.invoke("[1, 2, 3]")
# -> [1, 2, 3]
```

Pick list of keys:

```python
from typing import Any

import json

from langchain_core.runnables import RunnableLambda, RunnableMap

as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)


def as_bytes(x: Any) -> bytes:
    return bytes(x, "utf-8")


chain = RunnableMap(
    str=as_str, json=as_json, bytes=RunnableLambda(as_bytes)
)

chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3], "bytes": b"[1, 2, 3]"}

json_and_bytes_chain = chain.pick(["json", "bytes"])
json_and_bytes_chain.invoke("[1, 2, 3]")
# -> {"json": [1, 2, 3], "bytes": b"[1, 2, 3]"}
```

Parameters:

Name Type Description Default
keys str | list[str]

A key or list of keys to pick from the output dict.

required

Returns:

Type Description
RunnableSerializable[Any, Any]

A new Runnable.

assign

assign(
    **kwargs: (
        Runnable[dict[str, Any], Any]
        | Callable[[dict[str, Any]], Any]
        | Mapping[
            str,
            Runnable[dict[str, Any], Any]
            | Callable[[dict[str, Any]], Any],
        ]
    ),
) -> RunnableSerializable[Any, Any]

Assigns new fields to the dict output of this Runnable.

from langchain_community.llms.fake import FakeStreamingListLLM
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import SystemMessagePromptTemplate
from langchain_core.runnables import Runnable
from operator import itemgetter

prompt = (
    SystemMessagePromptTemplate.from_template("You are a nice assistant.")
    + "{question}"
)
llm = FakeStreamingListLLM(responses=["foo-lish"])

chain: Runnable = prompt | llm | {"str": StrOutputParser()}

chain_with_assign = chain.assign(hello=itemgetter("str") | llm)

print(chain_with_assign.input_schema.model_json_schema())
# {'title': 'PromptInput', 'type': 'object', 'properties':
#  {'question': {'title': 'Question', 'type': 'string'}}}
print(chain_with_assign.output_schema.model_json_schema())
# {'title': 'RunnableSequenceOutput', 'type': 'object', 'properties':
#  {'str': {'title': 'Str', 'type': 'string'},
#   'hello': {'title': 'Hello', 'type': 'string'}}}

Parameters:

Name Type Description Default
**kwargs Runnable[dict[str, Any], Any] | Callable[[dict[str, Any]], Any] | Mapping[str, Runnable[dict[str, Any], Any] | Callable[[dict[str, Any]], Any]]

A mapping of keys to Runnable or Runnable-like objects that will be invoked with the entire output dict of this Runnable.

{}

Returns:

Type Description
RunnableSerializable[Any, Any]

A new Runnable.

batch

batch(
    inputs: list[Input],
    config: (
        RunnableConfig | list[RunnableConfig] | None
    ) = None,
    *,
    return_exceptions: bool = False,
    **kwargs: Any | None
) -> list[Output]

Default implementation runs invoke in parallel using a thread pool executor.

The default implementation of batch works well for IO bound runnables.

Subclasses should override this method if they can batch more efficiently; e.g., if the underlying Runnable uses an API which supports a batch mode.

Parameters:

Name Type Description Default
inputs list[Input]

A list of inputs to the Runnable.

required
config RunnableConfig | list[RunnableConfig] | None

A config to use when invoking the Runnable. The config supports standard keys like 'tags', 'metadata' for tracing purposes, 'max_concurrency' for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Defaults to None.

None
return_exceptions bool

Whether to return exceptions instead of raising them. Defaults to False.

False
**kwargs Any | None

Additional keyword arguments to pass to the Runnable.

{}

Returns:

Type Description
list[Output]

A list of outputs from the Runnable.

batch_as_completed

batch_as_completed(
    inputs: Sequence[Input],
    config: (
        RunnableConfig | Sequence[RunnableConfig] | None
    ) = None,
    *,
    return_exceptions: bool = False,
    **kwargs: Any | None
) -> Iterator[tuple[int, Output | Exception]]

Run invoke in parallel on a list of inputs.

Yields results as they complete.
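
For example (a sketch; completion order is not guaranteed to match input order, which is why each result carries its index):

.. code-block:: python

inputs = ["Say hello in French.", "Say hello in Spanish."]
for idx, output in llm.batch_as_completed(inputs):
    print(idx, output.content)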

Parameters:

Name Type Description Default
inputs Sequence[Input]

A list of inputs to the Runnable.

required
config RunnableConfig | Sequence[RunnableConfig] | None

A config to use when invoking the Runnable. The config supports standard keys like 'tags', 'metadata' for tracing purposes, 'max_concurrency' for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Defaults to None.

None
return_exceptions bool

Whether to return exceptions instead of raising them. Defaults to False.

False
**kwargs Any | None

Additional keyword arguments to pass to the Runnable.

{}

Yields:

Type Description
tuple[int, Output | Exception]

Tuples of the index of the input and the output from the Runnable.

abatch async

abatch(
    inputs: list[Input],
    config: (
        RunnableConfig | list[RunnableConfig] | None
    ) = None,
    *,
    return_exceptions: bool = False,
    **kwargs: Any | None
) -> list[Output]

Default implementation runs ainvoke in parallel using asyncio.gather.

The default implementation of batch works well for IO bound runnables.

Subclasses should override this method if they can batch more efficiently; e.g., if the underlying Runnable uses an API which supports a batch mode.

Parameters:

Name Type Description Default
inputs list[Input]

A list of inputs to the Runnable.

required
config RunnableConfig | list[RunnableConfig] | None

A config to use when invoking the Runnable. The config supports standard keys like 'tags', 'metadata' for tracing purposes, 'max_concurrency' for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Defaults to None.

None
return_exceptions bool

Whether to return exceptions instead of raising them. Defaults to False.

False
**kwargs Any | None

Additional keyword arguments to pass to the Runnable.

{}

Returns:

Type Description
list[Output]

A list of outputs from the Runnable.

abatch_as_completed async

abatch_as_completed(
    inputs: Sequence[Input],
    config: (
        RunnableConfig | Sequence[RunnableConfig] | None
    ) = None,
    *,
    return_exceptions: bool = False,
    **kwargs: Any | None
) -> AsyncIterator[tuple[int, Output | Exception]]

Run ainvoke in parallel on a list of inputs.

Yields results as they complete.

Parameters:

Name Type Description Default
inputs Sequence[Input]

A list of inputs to the Runnable.

required
config RunnableConfig | Sequence[RunnableConfig] | None

A config to use when invoking the Runnable. The config supports standard keys like 'tags', 'metadata' for tracing purposes, 'max_concurrency' for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Defaults to None.

None
return_exceptions bool

Whether to return exceptions instead of raising them. Defaults to False.

False
kwargs Any | None

Additional keyword arguments to pass to the Runnable.

{}

Yields:

Type Description
AsyncIterator[tuple[int, Output | Exception]]

A tuple of the index of the input and the output from the Runnable.

astream_log async

astream_log(
    input: Any,
    config: RunnableConfig | None = None,
    *,
    diff: bool = True,
    with_streamed_output_list: bool = True,
    include_names: Sequence[str] | None = None,
    include_types: Sequence[str] | None = None,
    include_tags: Sequence[str] | None = None,
    exclude_names: Sequence[str] | None = None,
    exclude_types: Sequence[str] | None = None,
    exclude_tags: Sequence[str] | None = None,
    **kwargs: Any
) -> AsyncIterator[RunLogPatch] | AsyncIterator[RunLog]

Stream all output from a Runnable, as reported to the callback system.

This includes all inner runs of LLMs, Retrievers, Tools, etc.

Output is streamed as Log objects, which include a list of Jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run.

The Jsonpatch ops can be applied in order to construct state.
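
For example (a sketch):

.. code-block:: python

async for patch in llm.astream_log("Tell me a joke"):
    print(patch)  # each RunLogPatch holds a list of Jsonpatch ops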

Parameters:

Name Type Description Default
input Any

The input to the Runnable.

required
config RunnableConfig | None

The config to use for the Runnable.

None
diff bool

Whether to yield diffs between each step or the current state.

True
with_streamed_output_list bool

Whether to yield the streamed_output list.

True
include_names Sequence[str] | None

Only include logs with these names.

None
include_types Sequence[str] | None

Only include logs with these types.

None
include_tags Sequence[str] | None

Only include logs with these tags.

None
exclude_names Sequence[str] | None

Exclude logs with these names.

None
exclude_types Sequence[str] | None

Exclude logs with these types.

None
exclude_tags Sequence[str] | None

Exclude logs with these tags.

None
kwargs Any

Additional keyword arguments to pass to the Runnable.

{}

Yields:

Type Description
AsyncIterator[RunLogPatch] | AsyncIterator[RunLog]

A RunLogPatch or RunLog object.

astream_events async

astream_events(
    input: Any,
    config: RunnableConfig | None = None,
    *,
    version: Literal["v1", "v2"] = "v2",
    include_names: Sequence[str] | None = None,
    include_types: Sequence[str] | None = None,
    include_tags: Sequence[str] | None = None,
    exclude_names: Sequence[str] | None = None,
    exclude_types: Sequence[str] | None = None,
    exclude_tags: Sequence[str] | None = None,
    **kwargs: Any
) -> AsyncIterator[StreamEvent]

Generate a stream of events.

Use to create an iterator over StreamEvents that provide real-time information about the progress of the Runnable, including StreamEvents from intermediate results.

A StreamEvent is a dictionary with the following schema:

  • event: str - Event names are of the format: on_[runnable_type]_(start|stream|end).
  • name: str - The name of the Runnable that generated the event.
  • run_id: str - randomly generated ID associated with the given execution of the Runnable that emitted the event. A child Runnable that gets invoked as part of the execution of a parent Runnable is assigned its own unique ID.
  • parent_ids: list[str] - The IDs of the parent runnables that generated the event. The root Runnable will have an empty list. The order of the parent IDs is from the root to the immediate parent. Only available for v2 version of the API. The v1 version of the API will return an empty list.
  • tags: Optional[list[str]] - The tags of the Runnable that generated the event.
  • metadata: Optional[dict[str, Any]] - The metadata of the Runnable that generated the event.
  • data: dict[str, Any]

Below is a table that illustrates some events that might be emitted by various chains. Metadata fields have been omitted from the table for brevity. Chain definitions have been included after the table.

Note

This reference table is for the v2 version of the schema.

| event | name | chunk | input | output |
|-------|------|-------|-------|--------|
| on_chat_model_start | [model name] | | {"messages": [[SystemMessage, HumanMessage]]} | |
| on_chat_model_stream | [model name] | AIMessageChunk(content="hello") | | |
| on_chat_model_end | [model name] | | {"messages": [[SystemMessage, HumanMessage]]} | AIMessageChunk(content="hello world") |
| on_llm_start | [model name] | | {'input': 'hello'} | |
| on_llm_stream | [model name] | 'Hello' | | |
| on_llm_end | [model name] | | 'Hello human!' | |
| on_chain_start | format_docs | | | |
| on_chain_stream | format_docs | 'hello world!, goodbye world!' | | |
| on_chain_end | format_docs | | [Document(...)] | 'hello world!, goodbye world!' |
| on_tool_start | some_tool | | {"x": 1, "y": "2"} | |
| on_tool_end | some_tool | | | {"x": 1, "y": "2"} |
| on_retriever_start | [retriever name] | | {"query": "hello"} | |
| on_retriever_end | [retriever name] | | {"query": "hello"} | [Document(...), ...] |
| on_prompt_start | [template_name] | | {"question": "hello"} | |
| on_prompt_end | [template_name] | | {"question": "hello"} | ChatPromptValue(messages: [SystemMessage, ...]) |

In addition to the standard events, users can also dispatch custom events (see example below).

Custom events will only be surfaced with the v2 version of the API!

A custom event has the following format:

+-----------+------+-----------------------------------------------------------------------------------------------------------+
| Attribute | Type | Description                                                                                               |
+===========+======+===========================================================================================================+
| name      | str  | A user defined name for the event.                                                                        |
+-----------+------+-----------------------------------------------------------------------------------------------------------+
| data      | Any  | The data associated with the event. This can be anything, though we suggest making it JSON serializable.  |
+-----------+------+-----------------------------------------------------------------------------------------------------------+

Here are declarations associated with the standard events shown above:

format_docs:

def format_docs(docs: list[Document]) -> str:
    '''Format the docs.'''
    return ", ".join([doc.page_content for doc in docs])


format_docs = RunnableLambda(format_docs)

some_tool:

@tool
def some_tool(x: int, y: str) -> dict:
    '''Some_tool.'''
    return {"x": x, "y": y}

prompt:

template = ChatPromptTemplate.from_messages(
    [
        ("system", "You are Cat Agent 007"),
        ("human", "{question}"),
    ]
).with_config({"run_name": "my_template", "tags": ["my_template"]})
Example:

from langchain_core.runnables import RunnableLambda


async def reverse(s: str) -> str:
    return s[::-1]


chain = RunnableLambda(func=reverse)

events = [event async for event in chain.astream_events("hello", version="v2")]

# will produce the following events (run_id and parent_ids
# have been omitted for brevity):
[
    {
        "data": {"input": "hello"},
        "event": "on_chain_start",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"chunk": "olleh"},
        "event": "on_chain_stream",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"output": "olleh"},
        "event": "on_chain_end",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
]

Example: Dispatch Custom Event

from langchain_core.callbacks.manager import (
    adispatch_custom_event,
)
from langchain_core.runnables import RunnableLambda, RunnableConfig
import asyncio


async def slow_thing(some_input: str, config: RunnableConfig) -> str:
    """Do something that takes a long time."""
    await asyncio.sleep(1) # Placeholder for some slow operation
    await adispatch_custom_event(
        "progress_event",
        {"message": "Finished step 1 of 3"},
        config=config # Must be included for python < 3.10
    )
    await asyncio.sleep(1) # Placeholder for some slow operation
    await adispatch_custom_event(
        "progress_event",
        {"message": "Finished step 2 of 3"},
        config=config # Must be included for python < 3.10
    )
    await asyncio.sleep(1) # Placeholder for some slow operation
    return "Done"

slow_thing = RunnableLambda(slow_thing)

async for event in slow_thing.astream_events("some_input", version="v2"):
    print(event)

Parameters:

Name Type Description Default
input Any

The input to the Runnable.

required
config RunnableConfig | None

The config to use for the Runnable.

None
version Literal['v1', 'v2']

The version of the schema to use, either 'v2' or 'v1'. Users should use 'v2'. 'v1' is for backwards compatibility and will be deprecated in 0.4.0. No default will be assigned until the API is stabilized. Custom events will only be surfaced in 'v2'.

'v2'
include_names Sequence[str] | None

Only include events from Runnables with matching names.

None
include_types Sequence[str] | None

Only include events from Runnables with matching types.

None
include_tags Sequence[str] | None

Only include events from Runnables with matching tags.

None
exclude_names Sequence[str] | None

Exclude events from Runnables with matching names.

None
exclude_types Sequence[str] | None

Exclude events from Runnables with matching types.

None
exclude_tags Sequence[str] | None

Exclude events from Runnables with matching tags.

None
kwargs Any

Additional keyword arguments to pass to the Runnable. These will be passed to astream_log as this implementation of astream_events is built on top of astream_log.

{}

Yields:

Type Description
AsyncIterator[StreamEvent]

An async stream of StreamEvents.

Raises:

Type Description
NotImplementedError

If the version is not 'v1' or 'v2'.
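Example: Filter Events

A minimal sketch, reusing the reverse chain from the example above, that narrows the stream with the include_names filter (the filter value is illustrative):

# Only surface events emitted by runnables named "reverse";
# all other events in the run are filtered out.
async for event in chain.astream_events(
    "hello", version="v2", include_names=["reverse"]
):
    print(event["event"], event["name"])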

transform

transform(
    input: Iterator[Input],
    config: RunnableConfig | None = None,
    **kwargs: Any | None
) -> Iterator[Output]

Transform inputs to outputs.

Default implementation of transform, which buffers input and calls stream.

Subclasses should override this method if they can start producing output while input is still being generated.

Parameters:

Name Type Description Default
input Iterator[Input]

An iterator of inputs to the Runnable.

required
config RunnableConfig | None

The config to use for the Runnable. Defaults to None.

None
kwargs Any | None

Additional keyword arguments to pass to the Runnable.

{}

Yields:

Type Description
Output

The output of the Runnable.
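Example

A minimal sketch of the default behavior: input chunks are buffered into a single input (concatenated here, since str supports +) before streaming begins.

from langchain_core.runnables import RunnableLambda

runnable = RunnableLambda(lambda x: x.upper())

# "hello " and "world" are buffered into "hello world",
# then stream runs on the combined input.
for chunk in runnable.transform(iter(["hello ", "world"])):
    print(chunk)  # HELLO WORLD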

atransform async

atransform(
    input: AsyncIterator[Input],
    config: RunnableConfig | None = None,
    **kwargs: Any | None
) -> AsyncIterator[Output]

Transform inputs to outputs.

Default implementation of atransform, which buffers input and calls astream.

Subclasses should override this method if they can start producing output while input is still being generated.

Parameters:

Name Type Description Default
input AsyncIterator[Input]

An async iterator of inputs to the Runnable.

required
config RunnableConfig | None

The config to use for the Runnable. Defaults to None.

None
kwargs Any | None

Additional keyword arguments to pass to the Runnable.

{}

Yields:

Type Description
AsyncIterator[Output]

The output of the Runnable.
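Example

The async counterpart, as a minimal sketch: chunks from an async iterator are buffered before astream runs.

import asyncio

from langchain_core.runnables import RunnableLambda

runnable = RunnableLambda(lambda x: x.upper())


async def chunks():
    yield "hello "
    yield "world"


async def main():
    # Chunks are buffered into "hello world", then astream runs.
    async for out in runnable.atransform(chunks()):
        print(out)  # HELLO WORLD


asyncio.run(main())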

bind

bind(**kwargs: Any) -> Runnable[Input, Output]

Bind arguments to a Runnable, returning a new Runnable.

Useful when a Runnable in a chain requires an argument that is not in the output of the previous Runnable or included in the user input.

Parameters:

Name Type Description Default
kwargs Any

The arguments to bind to the Runnable.

{}

Returns:

Type Description
Runnable[Input, Output]

A new Runnable with the arguments bound.

Example
from langchain_ollama import ChatOllama
from langchain_core.output_parsers import StrOutputParser

llm = ChatOllama(model="llama3.1")

# Without bind.
chain = llm | StrOutputParser()

chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two three four five.'

# With bind.
chain = llm.bind(stop=["three"]) | StrOutputParser()

chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two'

with_config

with_config(
    config: RunnableConfig | None = None, **kwargs: Any
) -> Runnable[Input, Output]

Bind config to a Runnable, returning a new Runnable.

Parameters:

Name Type Description Default
config RunnableConfig | None

The config to bind to the Runnable.

None
kwargs Any

Additional keyword arguments to pass to the Runnable.

{}

Returns:

Type Description
Runnable[Input, Output]

A new Runnable with the config bound.
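Example

A minimal sketch; the run name and tag are illustrative and are picked up by tracing and callbacks.

from langchain_core.runnables import RunnableLambda

runnable = RunnableLambda(lambda x: x + 1).with_config(
    {"run_name": "increment", "tags": ["math"]}
)

# The bound config is merged into every invocation, so this run
# is traced under the name "increment" with the tag "math".
runnable.invoke(1)  # 2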

with_listeners

with_listeners(
    *,
    on_start: (
        Callable[[Run], None]
        | Callable[[Run, RunnableConfig], None]
        | None
    ) = None,
    on_end: (
        Callable[[Run], None]
        | Callable[[Run, RunnableConfig], None]
        | None
    ) = None,
    on_error: (
        Callable[[Run], None]
        | Callable[[Run, RunnableConfig], None]
        | None
    ) = None
) -> Runnable[Input, Output]

Bind lifecycle listeners to a Runnable, returning a new Runnable.

The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run.

Parameters:

Name Type Description Default
on_start Callable[[Run], None] | Callable[[Run, RunnableConfig], None] | None

Called before the Runnable starts running, with the Run object. Defaults to None.

None
on_end Callable[[Run], None] | Callable[[Run, RunnableConfig], None] | None

Called after the Runnable finishes running, with the Run object. Defaults to None.

None
on_error Callable[[Run], None] | Callable[[Run, RunnableConfig], None] | None

Called if the Runnable throws an error, with the Run object. Defaults to None.

None

Returns:

Type Description
Runnable[Input, Output]

A new Runnable with the listeners bound.

Example
from langchain_core.runnables import RunnableLambda
from langchain_core.tracers.schemas import Run

import time


def test_runnable(time_to_sleep: int):
    time.sleep(time_to_sleep)


def fn_start(run_obj: Run):
    print("start_time:", run_obj.start_time)


def fn_end(run_obj: Run):
    print("end_time:", run_obj.end_time)


chain = RunnableLambda(test_runnable).with_listeners(
    on_start=fn_start, on_end=fn_end
)
chain.invoke(2)

with_alisteners

with_alisteners(
    *,
    on_start: AsyncListener | None = None,
    on_end: AsyncListener | None = None,
    on_error: AsyncListener | None = None
) -> Runnable[Input, Output]

Bind async lifecycle listeners to a Runnable.

Returns a new Runnable.

The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run.

Parameters:

Name Type Description Default
on_start AsyncListener | None

Called asynchronously before the Runnable starts running, with the Run object. Defaults to None.

None
on_end AsyncListener | None

Called asynchronously after the Runnable finishes running, with the Run object. Defaults to None.

None
on_error AsyncListener | None

Called asynchronously if the Runnable throws an error, with the Run object. Defaults to None.

None

Returns:

Type Description
Runnable[Input, Output]

A new Runnable with the listeners bound.

Example
from langchain_core.runnables import RunnableLambda, Runnable
from datetime import datetime, timezone
import time
import asyncio

def format_t(timestamp: float) -> str:
    return datetime.fromtimestamp(timestamp, tz=timezone.utc).isoformat()

async def test_runnable(time_to_sleep: int):
    print(f"Runnable[{time_to_sleep}s]: starts at {format_t(time.time())}")
    await asyncio.sleep(time_to_sleep)
    print(f"Runnable[{time_to_sleep}s]: ends at {format_t(time.time())}")

async def fn_start(run_obj: Runnable):
    print(f"on start callback starts at {format_t(time.time())}")
    await asyncio.sleep(3)
    print(f"on start callback ends at {format_t(time.time())}")

async def fn_end(run_obj: Runnable):
    print(f"on end callback starts at {format_t(time.time())}")
    await asyncio.sleep(2)
    print(f"on end callback ends at {format_t(time.time())}")

runnable = RunnableLambda(test_runnable).with_alisteners(
    on_start=fn_start,
    on_end=fn_end
)
async def concurrent_runs():
    await asyncio.gather(runnable.ainvoke(2), runnable.ainvoke(3))

asyncio.run(concurrent_runs())
Result:
on start callback starts at 2025-03-01T07:05:22.875378+00:00
on start callback starts at 2025-03-01T07:05:22.875495+00:00
on start callback ends at 2025-03-01T07:05:25.878862+00:00
on start callback ends at 2025-03-01T07:05:25.878947+00:00
Runnable[2s]: starts at 2025-03-01T07:05:25.879392+00:00
Runnable[3s]: starts at 2025-03-01T07:05:25.879804+00:00
Runnable[2s]: ends at 2025-03-01T07:05:27.881998+00:00
on end callback starts at 2025-03-01T07:05:27.882360+00:00
Runnable[3s]: ends at 2025-03-01T07:05:28.881737+00:00
on end callback starts at 2025-03-01T07:05:28.882428+00:00
on end callback ends at 2025-03-01T07:05:29.883893+00:00
on end callback ends at 2025-03-01T07:05:30.884831+00:00

with_types

with_types(
    *,
    input_type: type[Input] | None = None,
    output_type: type[Output] | None = None
) -> Runnable[Input, Output]

Bind input and output types to a Runnable, returning a new Runnable.

Parameters:

Name Type Description Default
input_type type[Input] | None

The input type to bind to the Runnable. Defaults to None.

None
output_type type[Output] | None

The output type to bind to the Runnable. Defaults to None.

None

Returns:

Type Description
Runnable[Input, Output]

A new Runnable with the types bound.
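Example

A minimal sketch: binding concrete types to a lambda whose input type could not otherwise be inferred.

from langchain_core.runnables import RunnableLambda

runnable = RunnableLambda(lambda x: str(x)).with_types(
    input_type=int, output_type=str
)

# The bound types are reflected in the runnable's schemas,
# e.g. {'title': 'RunnableLambdaInput', 'type': 'integer'}.
print(runnable.input_schema.model_json_schema())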

with_retry

with_retry(
    *,
    retry_if_exception_type: tuple[
        type[BaseException], ...
    ] = (Exception,),
    wait_exponential_jitter: bool = True,
    exponential_jitter_params: (
        ExponentialJitterParams | None
    ) = None,
    stop_after_attempt: int = 3
) -> Runnable[Input, Output]

Create a new Runnable that retries the original Runnable on exceptions.

Parameters:

Name Type Description Default
retry_if_exception_type tuple[type[BaseException], ...]

A tuple of exception types to retry on. Defaults to (Exception,).

(Exception,)
wait_exponential_jitter bool

Whether to add jitter to the wait time between retries. Defaults to True.

True
stop_after_attempt int

The maximum number of attempts to make before giving up. Defaults to 3.

3
exponential_jitter_params ExponentialJitterParams | None

Parameters for tenacity.wait_exponential_jitter. Namely: initial, max, exp_base, and jitter (all float values).

None

Returns:

Type Description
Runnable[Input, Output]

A new Runnable that retries the original Runnable on exceptions.

Example
from langchain_core.runnables import RunnableLambda

count = 0


def _lambda(x: int) -> None:
    global count
    count = count + 1
    if x == 1:
        raise ValueError("x is 1")
    else:
        pass


runnable = RunnableLambda(_lambda)
try:
    runnable.with_retry(
        stop_after_attempt=2,
        retry_if_exception_type=(ValueError,),
    ).invoke(1)
except ValueError:
    pass

assert count == 2
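The retry backoff can also be tuned. A minimal sketch, assuming the dict form of ExponentialJitterParams (the values shown are illustrative):

runnable.with_retry(
    stop_after_attempt=3,
    wait_exponential_jitter=True,
    # Start waiting ~0.5s between attempts, capped at 10s.
    exponential_jitter_params={"initial": 0.5, "max": 10.0},
).invoke(0)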

map

map() -> Runnable[list[Input], list[Output]]

Return a new Runnable that maps a list of inputs to a list of outputs.

Calls invoke with each input.

Returns:

Type Description
Runnable[list[Input], list[Output]]

A new Runnable that maps a list of inputs to a list of outputs.

Example
from langchain_core.runnables import RunnableLambda


def _lambda(x: int) -> int:
    return x + 1


runnable = RunnableLambda(_lambda)
print(runnable.map().invoke([1, 2, 3]))  # [2, 3, 4]

with_fallbacks

with_fallbacks(
    fallbacks: Sequence[Runnable[Input, Output]],
    *,
    exceptions_to_handle: tuple[
        type[BaseException], ...
    ] = (Exception,),
    exception_key: str | None = None
) -> RunnableWithFallbacks[Input, Output]

Add fallbacks to a Runnable, returning a new Runnable.

The new Runnable will try the original Runnable, and then each fallback in order, upon failures.

Parameters:

Name Type Description Default
fallbacks Sequence[Runnable[Input, Output]]

A sequence of runnables to try if the original Runnable fails.

required
exceptions_to_handle tuple[type[BaseException], ...]

A tuple of exception types to handle. Defaults to (Exception,).

(Exception,)
exception_key str | None

If string is specified then handled exceptions will be passed to fallbacks as part of the input under the specified key. If None, exceptions will not be passed to fallbacks. If used, the base Runnable and its fallbacks must accept a dictionary as input. Defaults to None.

None

Returns:

Type Description
RunnableWithFallbacks[Input, Output]

A new Runnable that will try the original Runnable, and then each fallback in order, upon failures.

Example
from typing import Iterator

from langchain_core.runnables import RunnableGenerator


def _generate_immediate_error(input: Iterator) -> Iterator[str]:
    raise ValueError()
    yield ""


def _generate(input: Iterator) -> Iterator[str]:
    yield from "foo bar"


runnable = RunnableGenerator(_generate_immediate_error).with_fallbacks(
    [RunnableGenerator(_generate)]
)
print("".join(runnable.stream({})))  # foo bar


as_tool

as_tool(
    args_schema: type[BaseModel] | None = None,
    *,
    name: str | None = None,
    description: str | None = None,
    arg_types: dict[str, type] | None = None
) -> BaseTool

Create a BaseTool from a Runnable.

as_tool will instantiate a BaseTool with a name, description, and args_schema from a Runnable. Where possible, schemas are inferred from runnable.get_input_schema. Alternatively (e.g., if the Runnable takes a dict as input and the specific dict keys are not typed), the schema can be specified directly with args_schema. You can also pass arg_types to just specify the required arguments and their types.

Parameters:

Name Type Description Default
args_schema type[BaseModel] | None

The schema for the tool. Defaults to None.

None
name str | None

The name of the tool. Defaults to None.

None
description str | None

The description of the tool. Defaults to None.

None
arg_types dict[str, type] | None

A dictionary of argument names to types. Defaults to None.

None

Returns:

Type Description
BaseTool

A BaseTool instance.

Typed dict input:

from typing_extensions import TypedDict
from langchain_core.runnables import RunnableLambda


class Args(TypedDict):
    a: int
    b: list[int]


def f(x: Args) -> str:
    return str(x["a"] * max(x["b"]))


runnable = RunnableLambda(f)
as_tool = runnable.as_tool()
as_tool.invoke({"a": 3, "b": [1, 2]})

dict input, specifying schema via args_schema:

from typing import Any
from pydantic import BaseModel, Field
from langchain_core.runnables import RunnableLambda

def f(x: dict[str, Any]) -> str:
    return str(x["a"] * max(x["b"]))

class FSchema(BaseModel):
    """Apply a function to an integer and list of integers."""

    a: int = Field(..., description="Integer")
    b: list[int] = Field(..., description="List of ints")

runnable = RunnableLambda(f)
as_tool = runnable.as_tool(FSchema)
as_tool.invoke({"a": 3, "b": [1, 2]})

dict input, specifying schema via arg_types:

from typing import Any
from langchain_core.runnables import RunnableLambda


def f(x: dict[str, Any]) -> str:
    return str(x["a"] * max(x["b"]))


runnable = RunnableLambda(f)
as_tool = runnable.as_tool(arg_types={"a": int, "b": list[int]})
as_tool.invoke({"a": 3, "b": [1, 2]})

String input:

from langchain_core.runnables import RunnableLambda


def f(x: str) -> str:
    return x + "a"


def g(x: str) -> str:
    return x + "z"


runnable = RunnableLambda(f) | g
as_tool = runnable.as_tool()
as_tool.invoke("b")

Added in version 0.2.14

__init__

__init__(*args: Any, **kwargs: Any) -> None

lc_id classmethod

lc_id() -> list[str]

Return a unique identifier for this class for serialization purposes.

The unique identifier is a list of strings that describes the path to the object. For example, for the class langchain.llms.openai.OpenAI, the id is ["langchain", "llms", "openai", "OpenAI"].

to_json

to_json() -> (
    SerializedConstructor | SerializedNotImplemented
)

Serialize the Runnable to JSON.

Returns:

Type Description
SerializedConstructor | SerializedNotImplemented

A JSON-serializable representation of the Runnable.
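Example

A minimal sketch of serializing a model (assumes OPENAI_API_KEY is set so the constructor validates):

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")
serialized = llm.to_json()

# For serializable classes, "id" holds the lc_id() path, e.g.
# ["langchain", "chat_models", "openai", "ChatOpenAI"].
print(serialized["id"])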

to_json_not_implemented

to_json_not_implemented() -> SerializedNotImplemented

Serialize a "not implemented" object.

Returns:

Type Description
SerializedNotImplemented

SerializedNotImplemented.

configurable_fields

configurable_fields(
    **kwargs: AnyConfigurableField,
) -> RunnableSerializable[Input, Output]

Configure particular Runnable fields at runtime.

Parameters:

Name Type Description Default
**kwargs AnyConfigurableField

A dictionary of ConfigurableField instances to configure.

{}

Raises:

Type Description
ValueError

If a configuration key is not found in the Runnable.

Returns:

Type Description
RunnableSerializable[Input, Output]

A new Runnable with the fields configured.

from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatOpenAI(max_tokens=20).configurable_fields(
    max_tokens=ConfigurableField(
        id="output_token_number",
        name="Max tokens in the output",
        description="The maximum number of tokens in the output",
    )
)

# max_tokens = 20
print("max_tokens_20: ", model.invoke("tell me something about chess").content)

# max_tokens = 200
print(
    "max_tokens_200: ",
    model.with_config(configurable={"output_token_number": 200})
    .invoke("tell me something about chess")
    .content,
)

configurable_alternatives

configurable_alternatives(
    which: ConfigurableField,
    *,
    default_key: str = "default",
    prefix_keys: bool = False,
    **kwargs: (
        Runnable[Input, Output]
        | Callable[[], Runnable[Input, Output]]
    )
) -> RunnableSerializable[Input, Output]

Configure alternatives for Runnables that can be set at runtime.

Parameters:

Name Type Description Default
which ConfigurableField

The ConfigurableField instance that will be used to select the alternative.

required
default_key str

The default key to use if no alternative is selected. Defaults to 'default'.

'default'
prefix_keys bool

Whether to prefix the keys with the ConfigurableField id. Defaults to False.

False
**kwargs Runnable[Input, Output] | Callable[[], Runnable[Input, Output]]

A dictionary of keys to Runnable instances or callables that return Runnable instances.

{}

Returns:

Type Description
RunnableSerializable[Input, Output]

A new Runnable with the alternatives configured.

from langchain_anthropic import ChatAnthropic
from langchain_core.runnables.utils import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatAnthropic(
    model_name="claude-3-7-sonnet-20250219"
).configurable_alternatives(
    ConfigurableField(id="llm"),
    default_key="anthropic",
    openai=ChatOpenAI(),
)

# uses the default model ChatAnthropic
print(model.invoke("which organization created you?").content)

# uses ChatOpenAI
print(
    model.with_config(configurable={"llm": "openai"})
    .invoke("which organization created you?")
    .content
)

set_verbose

set_verbose(verbose: bool | None) -> bool

If verbose is None, set it.

This allows users to pass in None as verbose to access the global setting.

Parameters:

Name Type Description Default
verbose bool | None

The verbosity setting to use.

required

Returns:

Type Description
bool

The verbosity setting to use.

get_token_ids

get_token_ids(text: str) -> list[int]

Get the tokens present in the text with the tiktoken package.

get_num_tokens

get_num_tokens(text: str) -> int

Get the number of tokens present in the text.

Useful for checking if an input fits in a model's context window.

Parameters:

Name Type Description Default
text str

The string input to tokenize.

required

Returns:

Type Description
int

The integer number of tokens in the text.
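Example

A minimal sketch relating get_token_ids and get_num_tokens (assumes OPENAI_API_KEY is set):

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

text = "Hello, world!"
ids = llm.get_token_ids(text)

# get_num_tokens is the length of the tiktoken encoding.
assert llm.get_num_tokens(text) == len(ids)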

get_num_tokens_from_messages

get_num_tokens_from_messages(
    messages: Sequence[BaseMessage],
    tools: Optional[
        Sequence[
            Union[dict[str, Any], type, Callable, BaseTool]
        ]
    ] = None,
) -> int

Calculate the number of tokens for gpt-3.5-turbo and gpt-4 with the tiktoken package.

Requirements: You must have the pillow package installed to count image tokens when specifying images as base64 strings, and both pillow and httpx installed when specifying images as URLs. If these aren't installed, image inputs will be ignored in token counting.

OpenAI reference <https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb>__

Parameters:

Name Type Description Default
messages Sequence[BaseMessage]

The message inputs to tokenize.

required
tools Optional[Sequence[Union[dict[str, Any], type, Callable, BaseTool]]]

If provided, sequence of dict, BaseModel, function, or BaseTools to be converted to tool schemas.

None
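Example

A minimal sketch of counting tokens for a short conversation (assumes OPENAI_API_KEY is set):

from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

messages = [
    SystemMessage("You are a helpful assistant."),
    HumanMessage("What is the capital of France?"),
]

# Counts tokens with tiktoken, including per-message overhead.
print(llm.get_num_tokens_from_messages(messages))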

generate

generate(
    messages: list[list[BaseMessage]],
    stop: list[str] | None = None,
    callbacks: Callbacks = None,
    *,
    tags: list[str] | None = None,
    metadata: dict[str, Any] | None = None,
    run_name: str | None = None,
    run_id: UUID | None = None,
    **kwargs: Any
) -> LLMResult

Pass a sequence of prompts to the model and return model generations.

This method should make use of batched calls for models that expose a batched API.

Use this method when you:

  1. Want to take advantage of batched calls,
  2. Need more output from the model than just the top generated value,
  3. Are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).

Parameters:

Name Type Description Default
messages list[list[BaseMessage]]

List of list of messages.

required
stop list[str] | None

Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

None
callbacks Callbacks

Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.

None
tags list[str] | None

The tags to apply.

None
metadata dict[str, Any] | None

The metadata to apply.

None
run_name str | None

The name of the run.

None
run_id UUID | None

The ID of the run.

None
**kwargs Any

Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

{}

Returns:

Type Description
LLMResult

An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
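Example

A minimal sketch of batching two prompts through generate (assumes OPENAI_API_KEY is set; calls incur usage):

from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

result = llm.generate(
    [[HumanMessage("Say hello.")], [HumanMessage("Say goodbye.")]]
)

# One list of candidate generations per input prompt.
assert len(result.generations) == 2
print(result.generations[0][0].text)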

agenerate async

agenerate(
    messages: list[list[BaseMessage]],
    stop: list[str] | None = None,
    callbacks: Callbacks = None,
    *,
    tags: list[str] | None = None,
    metadata: dict[str, Any] | None = None,
    run_name: str | None = None,
    run_id: UUID | None = None,
    **kwargs: Any
) -> LLMResult

Asynchronously pass a sequence of prompts to a model and return generations.

This method should make use of batched calls for models that expose a batched API.

Use this method when you:

  1. Want to take advantage of batched calls,
  2. Need more output from the model than just the top generated value,
  3. Are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).

Parameters:

Name Type Description Default
messages list[list[BaseMessage]]

List of list of messages.

required
stop list[str] | None

Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

None
callbacks Callbacks

Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.

None
tags list[str] | None

The tags to apply.

None
metadata dict[str, Any] | None

The metadata to apply.

None
run_name str | None

The name of the run.

None
run_id UUID | None

The ID of the run.

None
**kwargs Any

Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

{}

Returns:

Type Description
LLMResult

An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.

dict

dict(**kwargs: Any) -> dict

Return a dictionary of the LLM.

bind_tools

bind_tools(
    tools: Sequence[
        Union[dict[str, Any], type, Callable, BaseTool]
    ],
    *,
    tool_choice: Optional[
        Union[
            dict,
            str,
            Literal["auto", "none", "required", "any"],
            bool,
        ]
    ] = None,
    strict: Optional[bool] = None,
    parallel_tool_calls: Optional[bool] = None,
    **kwargs: Any
) -> Runnable[LanguageModelInput, AIMessage]

Bind tool-like objects to this chat model.

Assumes model is compatible with OpenAI tool-calling API.

Parameters:

Name Type Description Default
tools Sequence[Union[dict[str, Any], type, Callable, BaseTool]]

A list of tool definitions to bind to this chat model. Supports any tool definition handled by langchain_core.utils.function_calling.convert_to_openai_tool.

required
tool_choice Optional[Union[dict, str, Literal['auto', 'none', 'required', 'any'], bool]]

Which tool to require the model to call. Options are:

  • str of the form '<<tool_name>>': calls the <<tool_name>> tool.
  • 'auto': automatically selects a tool (including no tool).
  • 'none': does not call a tool.
  • 'any' or 'required' or True: force at least one tool to be called.
  • dict of the form {"type": "function", "function": {"name": <<tool_name>>}}: calls the <<tool_name>> tool.
  • False or None: no effect, default OpenAI behavior.
None
strict Optional[bool]

If True, model output is guaranteed to exactly match the JSON Schema provided in the tool definition. The input schema will also be validated according to the supported schemas <https://platform.openai.com/docs/guides/structured-outputs/supported-schemas?api-mode=responses#supported-schemas>__. If False, input schema will not be validated and model output will not be validated. If None, strict argument will not be passed to the model.

None
parallel_tool_calls Optional[bool]

Set to False to disable parallel tool use. Defaults to None (no specification, which allows parallel tool use).

None
kwargs Any

Any additional parameters are passed directly to langchain_openai.chat_models.base.ChatOpenAI.bind.

{}

Behavior changed in 0.1.21

Support for strict argument added.
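Example

A minimal sketch of forcing a specific tool via tool_choice (the deployment name and schema are illustrative; assumes Azure credentials are set):

.. code-block:: python

from pydantic import BaseModel, Field

from langchain_openai import AzureChatOpenAI

class GetWeather(BaseModel):
    '''Get the current weather in a given location.'''

    location: str = Field(..., description="City and state")

llm = AzureChatOpenAI(azure_deployment="...", model="gpt-4o")

# tool_choice="GetWeather" requires the model to call that tool.
llm_forced = llm.bind_tools([GetWeather], tool_choice="GetWeather")
ai_msg = llm_forced.invoke("What's the weather in Paris?")
print(ai_msg.tool_calls)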

build_extra classmethod

build_extra(values: dict[str, Any]) -> Any

Build extra kwargs from additional params that were passed in.

validate_temperature classmethod

validate_temperature(values: dict[str, Any]) -> Any

Validate temperature parameter for different models.

  • o1 models only allow temperature=1
  • gpt-5 models (excluding gpt-5-chat) only allow temperature=1 or unset (defaults to 1)

get_lc_namespace classmethod

get_lc_namespace() -> list[str]

Get the namespace of the langchain object.

is_lc_serializable classmethod

is_lc_serializable() -> bool

Check if the class is serializable in langchain.

validate_environment

validate_environment() -> Self

Validate that the API key and Python package exist in the environment.

with_structured_output

with_structured_output(
    schema: Optional[_DictOrPydanticClass] = None,
    *,
    method: Literal[
        "function_calling", "json_mode", "json_schema"
    ] = "json_schema",
    include_raw: bool = False,
    strict: Optional[bool] = None,
    **kwargs: Any
) -> Runnable[LanguageModelInput, _DictOrPydantic]

Model wrapper that returns outputs formatted to match the given schema.

Parameters:

Name Type Description Default
schema Optional[_DictOrPydanticClass]

The output schema. Can be passed in as:

  • a JSON Schema,
  • a TypedDict class,
  • a Pydantic class,
  • or an OpenAI function/tool schema.

If schema is a Pydantic class then the model output will be a Pydantic instance of that class, and the model-generated fields will be validated by the Pydantic class. Otherwise the model output will be a dict and will not be validated. See langchain_core.utils.function_calling.convert_to_openai_tool for more on how to properly specify types and descriptions of schema fields when specifying a Pydantic or TypedDict class.

None
method Literal['function_calling', 'json_mode', 'json_schema']

The method for steering model generation, one of:

  • 'json_schema': Uses OpenAI's Structured Output API <https://platform.openai.com/docs/guides/structured-outputs>__. Supported for 'gpt-4o-mini', 'gpt-4o-2024-08-06', 'o1', and later models.
  • 'function_calling': Uses OpenAI's tool-calling (formerly called function calling) API <https://platform.openai.com/docs/guides/function-calling>__
  • 'json_mode': Uses OpenAI's JSON mode <https://platform.openai.com/docs/guides/structured-outputs/json-mode>__. Note that if using JSON mode then you must include instructions for formatting the output into the desired schema in the model call

Learn more about the differences between the methods and which models support which methods here <https://platform.openai.com/docs/guides/structured-outputs/function-calling-vs-response-format>__.

'json_schema'
include_raw bool

If False then only the parsed structured output is returned. If an error occurs during model output parsing it will be raised. If True then both the raw model response (a BaseMessage) and the parsed model response will be returned. If an error occurs during output parsing it will be caught and returned as well. The final output is always a dict with keys 'raw', 'parsed', and 'parsing_error'.

False
strict Optional[bool]
  • True: Model output is guaranteed to exactly match the schema. The input schema will also be validated according to the supported schemas <https://platform.openai.com/docs/guides/structured-outputs/supported-schemas?api-mode=responses#supported-schemas>__.
  • False: Input schema will not be validated and model output will not be validated.
  • None: strict argument will not be passed to the model.

If schema is specified via TypedDict or JSON schema, strict is not enabled by default. Pass strict=True to enable it.

Note

strict can only be non-null if method is 'json_schema' or 'function_calling'.

None
tools

A list of tool-like objects to bind to the chat model. Requires that:

  • method is 'json_schema' (default).
  • strict=True
  • include_raw=True

If a model elects to call a tool, the resulting AIMessage in 'raw' will include tool calls.

Example

.. code-block:: python

from langchain.chat_models import init_chat_model
from pydantic import BaseModel

class ResponseSchema(BaseModel):
    response: str

def get_weather(location: str) -> str:
    \"\"\"Get weather at a location.\"\"\"
    pass

llm = init_chat_model("openai:gpt-4o-mini")

structured_llm = llm.with_structured_output(
    ResponseSchema,
    tools=[get_weather],
    strict=True,
    include_raw=True,
)

structured_llm.invoke("What's the weather in Boston?")

.. code-block:: python

{
    "raw": AIMessage(content="", tool_calls=[...], ...),
    "parsing_error": None,
    "parsed": None,
}
required
kwargs Any

Additional keyword args are passed through to the model.

{}

Returns:

Type Description
Runnable[LanguageModelInput, _DictOrPydantic]

A Runnable that takes the same inputs as a langchain_core.language_models.chat.BaseChatModel. If include_raw is False and schema is a Pydantic class, the Runnable outputs an instance of schema (i.e., a Pydantic object); otherwise, if include_raw is False, the Runnable outputs a dict. If include_raw is True, the Runnable outputs a dict with keys:

  • 'raw': BaseMessage
  • 'parsed': None if there was a parsing error, otherwise the type depends on the schema as described above.
  • 'parsing_error': Optional[BaseException]

Behavior changed in 0.1.20

Added support for TypedDict class schema.

Behavior changed in 0.1.21

Support for strict argument added. Support for method="json_schema" added.

Behavior changed in 0.3.0

method default changed from "function_calling" to "json_schema".

Behavior changed in 0.3.12

Support for tools added.

Behavior changed in 0.3.21

Pass kwargs through to the model.

Example: schema=Pydantic class, method='json_schema', include_raw=False, strict=True

Note, OpenAI has a number of restrictions on what types of schemas can be provided if strict=True. When using Pydantic, the model cannot specify any Field metadata (like min/max constraints) and fields cannot have default values.

See all constraints here <https://platform.openai.com/docs/guides/structured-outputs/supported-schemas>__.

.. code-block:: python

from typing import Optional

from langchain_openai import AzureChatOpenAI
from pydantic import BaseModel, Field

class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''

    answer: str
    justification: Optional[str] = Field(
        default=..., description="A justification for the answer."
    )

llm = AzureChatOpenAI(
    azure_deployment="...", model="gpt-4o", temperature=0
)
structured_llm = llm.with_structured_output(AnswerWithJustification)

structured_llm.invoke(
    "What weighs more a pound of bricks or a pound of feathers"
)

# -> AnswerWithJustification(
#     answer='They weigh the same',
#     justification='Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ.'
# )
Example: schema=Pydantic class, method='function_calling', include_raw=False, strict=False

.. code-block:: python

from typing import Optional

from langchain_openai import AzureChatOpenAI
from pydantic import BaseModel, Field

class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''

    answer: str
    justification: Optional[str] = Field(
        default=..., description="A justification for the answer."
    )

llm = AzureChatOpenAI(
    azure_deployment="...", model="gpt-4o", temperature=0
)
structured_llm = llm.with_structured_output(
    AnswerWithJustification, method="function_calling"
)

structured_llm.invoke(
    "What weighs more a pound of bricks or a pound of feathers"
)

# -> AnswerWithJustification(
#     answer='They weigh the same',
#     justification='Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ.'
# )
Example: schema=Pydantic class, method='json_schema', include_raw=True

.. code-block:: python

from langchain_openai import AzureChatOpenAI
from pydantic import BaseModel

class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''

    answer: str
    justification: str

llm = AzureChatOpenAI(
    azure_deployment="...", model="gpt-4o", temperature=0
)
structured_llm = llm.with_structured_output(
    AnswerWithJustification, include_raw=True
)

structured_llm.invoke(
    "What weighs more a pound of bricks or a pound of feathers"
)
# -> {
#     'raw': AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_Ao02pnFYXD6GN1yzc0uXPsvF', 'function': {'arguments': '{"answer":"They weigh the same.","justification":"Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ."}', 'name': 'AnswerWithJustification'}, 'type': 'function'}]}),
#     'parsed': AnswerWithJustification(answer='They weigh the same.', justification='Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ.'),
#     'parsing_error': None
# }
Example: schema=TypedDict class, method='json_schema', include_raw=False, strict=False

.. code-block:: python

from typing import Optional

from typing_extensions import Annotated, TypedDict

from langchain_openai import AzureChatOpenAI

class AnswerWithJustification(TypedDict):
    '''An answer to the user question along with justification for the answer.'''

    answer: str
    justification: Annotated[
        Optional[str], None, "A justification for the answer."
    ]

llm = AzureChatOpenAI(
    azure_deployment="...", model="gpt-4o", temperature=0
)
structured_llm = llm.with_structured_output(AnswerWithJustification)

structured_llm.invoke(
    "What weighs more a pound of bricks or a pound of feathers"
)
# -> {
#     'answer': 'They weigh the same',
#     'justification': 'Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume and density of the two substances differ.'
# }
Example: schema=OpenAI function schema, method='json_schema', include_raw=False

.. code-block:: python

from langchain_openai import AzureChatOpenAI

oai_schema = {
    'name': 'AnswerWithJustification',
    'description': 'An answer to the user question along with justification for the answer.',
    'parameters': {
        'type': 'object',
        'properties': {
            'answer': {'type': 'string'},
            'justification': {'description': 'A justification for the answer.', 'type': 'string'}
        },
        'required': ['answer']
    }
}

llm = AzureChatOpenAI(
    azure_deployment="...",
    model="gpt-4o",
    temperature=0,
)
structured_llm = llm.with_structured_output(oai_schema)

structured_llm.invoke(
    "What weighs more a pound of bricks or a pound of feathers"
)
# -> {
#     'answer': 'They weigh the same',
#     'justification': 'Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume and density of the two substances differ.'
# }
Example: schema=Pydantic class, method='json_mode', include_raw=True

.. code-block::

from langchain_openai import AzureChatOpenAI
from pydantic import BaseModel

class AnswerWithJustification(BaseModel):
    answer: str
    justification: str

llm = AzureChatOpenAI(
    azure_deployment="...",
    model="gpt-4o",
    temperature=0,
)
structured_llm = llm.with_structured_output(
    AnswerWithJustification,
    method="json_mode",
    include_raw=True
)

structured_llm.invoke(
    "Answer the following question. "
    "Make sure to return a JSON blob with keys 'answer' and 'justification'.\\n\\n"
    "What's heavier a pound of bricks or a pound of feathers?"
)
# -> {
#     'raw': AIMessage(content='{\n    "answer": "They are both the same weight.",\n    "justification": "Both a pound of bricks and a pound of feathers weigh one pound. The difference lies in the volume and density of the materials, not the weight." \n}'),
#     'parsed': AnswerWithJustification(answer='They are both the same weight.', justification='Both a pound of bricks and a pound of feathers weigh one pound. The difference lies in the volume and density of the materials, not the weight.'),
#     'parsing_error': None
# }
Example: schema=None, method='json_mode', include_raw=True

.. code-block::

structured_llm = llm.with_structured_output(method="json_mode", include_raw=True)

structured_llm.invoke(
    "Answer the following question. "
    "Make sure to return a JSON blob with keys 'answer' and 'justification'.\\n\\n"
    "What's heavier a pound of bricks or a pound of feathers?"
)
# -> {
#     'raw': AIMessage(content='{\n    "answer": "They are both the same weight.",\n    "justification": "Both a pound of bricks and a pound of feathers weigh one pound. The difference lies in the volume and density of the materials, not the weight." \n}'),
#     'parsed': {
#         'answer': 'They are both the same weight.',
#         'justification': 'Both a pound of bricks and a pound of feathers weigh one pound. The difference lies in the volume and density of the materials, not the weight.'
#     },
#     'parsing_error': None
# }

ChatOpenAI

Bases: BaseChatOpenAI

OpenAI chat model integration.

Setup


Install langchain-openai and set environment variable OPENAI_API_KEY.

.. code-block:: bash

pip install -U langchain-openai
export OPENAI_API_KEY="your-api-key"
Key init args — completion params

model: str
    Name of OpenAI model to use.
temperature: float
    Sampling temperature.
max_tokens: Optional[int]
    Max number of tokens to generate.
logprobs: Optional[bool]
    Whether to return logprobs.
stream_options: Dict
    Configure streaming outputs, like whether to return token usage when streaming ({"include_usage": True}).
use_responses_api: Optional[bool]
    Whether to use the responses API.

See full list of supported init args and their descriptions in the params section.

Key init args — client params

timeout: Union[float, Tuple[float, float], Any, None]
    Timeout for requests.
max_retries: Optional[int]
    Max number of retries.
api_key: Optional[str]
    OpenAI API key. If not passed in, will be read from env var OPENAI_API_KEY.
base_url: Optional[str]
    Base URL for API requests. Only specify if using a proxy or service emulator.
organization: Optional[str]
    OpenAI organization ID. If not passed in, will be read from env var OPENAI_ORG_ID.

See full list of supported init args and their descriptions in the params section.

Instantiate

.. code-block:: python

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o",
    temperature=0,
    max_tokens=None,
    timeout=None,
    max_retries=2,
    # api_key="...",
    # base_url="...",
    # organization="...",
    # other params...
)

Note

Any param which is not explicitly supported will be passed directly to the openai.OpenAI.chat.completions.create(...) API every time the model is invoked. For example:

.. code-block:: python

from langchain_openai import ChatOpenAI
import openai

ChatOpenAI(..., frequency_penalty=0.2).invoke(...)

# results in underlying API call of:

openai.OpenAI(..).chat.completions.create(..., frequency_penalty=0.2)

# which is also equivalent to:

ChatOpenAI(...).invoke(..., frequency_penalty=0.2)
Invoke

.. code-block:: python

messages = [
    (
        "system",
        "You are a helpful translator. Translate the user sentence to French.",
    ),
    ("human", "I love programming."),
]
llm.invoke(messages)

.. code-block:: pycon

AIMessage(
    content="J'adore la programmation.",
    response_metadata={
        "token_usage": {
            "completion_tokens": 5,
            "prompt_tokens": 31,
            "total_tokens": 36,
        },
        "model_name": "gpt-4o",
        "system_fingerprint": "fp_43dfabdef1",
        "finish_reason": "stop",
        "logprobs": None,
    },
    id="run-012cffe2-5d3d-424d-83b5-51c6d4a593d1-0",
    usage_metadata={"input_tokens": 31, "output_tokens": 5, "total_tokens": 36},
)
Stream

.. code-block:: python

for chunk in llm.stream(messages):
    print(chunk.text, end="")

.. code-block:: python

AIMessageChunk(content="", id="run-9e1517e3-12bf-48f2-bb1b-2e824f7cd7b0")
AIMessageChunk(content="J", id="run-9e1517e3-12bf-48f2-bb1b-2e824f7cd7b0")
AIMessageChunk(
    content="'adore", id="run-9e1517e3-12bf-48f2-bb1b-2e824f7cd7b0"
)
AIMessageChunk(content=" la", id="run-9e1517e3-12bf-48f2-bb1b-2e824f7cd7b0")
AIMessageChunk(
    content=" programmation", id="run-9e1517e3-12bf-48f2-bb1b-2e824f7cd7b0"
)
AIMessageChunk(content=".", id="run-9e1517e3-12bf-48f2-bb1b-2e824f7cd7b0")
AIMessageChunk(
    content="",
    response_metadata={"finish_reason": "stop"},
    id="run-9e1517e3-12bf-48f2-bb1b-2e824f7cd7b0",
)

.. code-block:: python

stream = llm.stream(messages)
full = next(stream)
for chunk in stream:
    full += chunk
full

.. code-block:: python

AIMessageChunk(
    content="J'adore la programmation.",
    response_metadata={"finish_reason": "stop"},
    id="run-bf917526-7f58-4683-84f7-36a6b671d140",
)
Async

.. code-block:: python

await llm.ainvoke(messages)

# stream:
# async for chunk in llm.astream(messages):

# batch:
# await llm.abatch([messages])

.. code-block:: python

AIMessage(
    content="J'adore la programmation.",
    response_metadata={
        "token_usage": {
            "completion_tokens": 5,
            "prompt_tokens": 31,
            "total_tokens": 36,
        },
        "model_name": "gpt-4o",
        "system_fingerprint": "fp_43dfabdef1",
        "finish_reason": "stop",
        "logprobs": None,
    },
    id="run-012cffe2-5d3d-424d-83b5-51c6d4a593d1-0",
    usage_metadata={
        "input_tokens": 31,
        "output_tokens": 5,
        "total_tokens": 36,
    },
)
Tool calling

.. code-block:: python

from pydantic import BaseModel, Field

class GetWeather(BaseModel):
    '''Get the current weather in a given location'''

    location: str = Field(
        ..., description="The city and state, e.g. San Francisco, CA"
    )

class GetPopulation(BaseModel):
    '''Get the current population in a given location'''

    location: str = Field(
        ..., description="The city and state, e.g. San Francisco, CA"
    )

llm_with_tools = llm.bind_tools(
    [GetWeather, GetPopulation]
    # strict = True  # enforce tool args schema is respected
)
ai_msg = llm_with_tools.invoke(
    "Which city is hotter today and which is bigger: LA or NY?"
)
ai_msg.tool_calls

.. code-block:: python

[
    {
        "name": "GetWeather",
        "args": {"location": "Los Angeles, CA"},
        "id": "call_6XswGD5Pqk8Tt5atYr7tfenU",
    },
    {
        "name": "GetWeather",
        "args": {"location": "New York, NY"},
        "id": "call_ZVL15vA8Y7kXqOy3dtmQgeCi",
    },
    {
        "name": "GetPopulation",
        "args": {"location": "Los Angeles, CA"},
        "id": "call_49CFW8zqC9W7mh7hbMLSIrXw",
    },
    {
        "name": "GetPopulation",
        "args": {"location": "New York, NY"},
        "id": "call_6ghfKxV264jEfe1mRIkS3PE7",
    },
]

Note

openai >= 1.32 supports a parallel_tool_calls parameter that defaults to True. This parameter can be set to False to disable parallel tool calls:

.. code-block:: python

ai_msg = llm_with_tools.invoke(
    "What is the weather in LA and NY?", parallel_tool_calls=False
)
ai_msg.tool_calls

.. code-block:: python

[
    {
        "name": "GetWeather",
        "args": {"location": "Los Angeles, CA"},
        "id": "call_4OoY0ZR99iEvC7fevsH8Uhtz",
    }
]

Like other runtime parameters, parallel_tool_calls can be bound to a model using llm.bind(parallel_tool_calls=False) or during instantiation by setting model_kwargs.

See ChatOpenAI.bind_tools() method for more.

Built-in tools

Added in version 0.3.9

You can access built-in tools <https://platform.openai.com/docs/guides/tools?api-mode=responses>_ supported by the OpenAI Responses API. See LangChain docs <https://python.langchain.com/docs/integrations/chat/openai/>__ for more detail.

Note

langchain-openai >= 0.3.26 allows users to opt in to an updated AIMessage format when using the Responses API. Setting

.. code-block:: python

llm = ChatOpenAI(model="...", output_version="responses/v1")

will format output from reasoning summaries, built-in tool invocations, and other response items into the message's content field, rather than additional_kwargs. We recommend this format for new applications.

.. code-block:: python

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4.1-mini", output_version="responses/v1")

tool = {"type": "web_search"}
llm_with_tools = llm.bind_tools([tool])

response = llm_with_tools.invoke(
    "What was a positive news story from today?"
)
response.content

.. code-block:: python

[
    {
        "type": "text",
        "text": "Today, a heartwarming story emerged from ...",
        "annotations": [
            {
                "end_index": 778,
                "start_index": 682,
                "title": "Title of story",
                "type": "url_citation",
                "url": "<url of story>",
            }
        ],
    }
]
Managing conversation state

Added in version 0.3.9

OpenAI's Responses API supports management of conversation state <https://platform.openai.com/docs/guides/conversation-state?api-mode=responses>. Passing in response IDs from previous messages will continue a conversational thread. See LangChain conversation docs <https://python.langchain.com/docs/integrations/chat/openai/>_ for more detail.

.. code-block:: python

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4.1-mini",
    use_responses_api=True,
    output_version="responses/v1",
)
response = llm.invoke("Hi, I'm Bob.")
response.text

.. code-block:: python

"Hi Bob! How can I assist you today?"

.. code-block:: python

second_response = llm.invoke(
    "What is my name?",
    previous_response_id=response.response_metadata["id"],
)
second_response.text

.. code-block:: python

"Your name is Bob. How can I help you today, Bob?"

Added in version 0.3.26

You can also initialize ChatOpenAI with use_previous_response_id. Input messages up to the most recent response will then be dropped from request payloads, and previous_response_id will be set using the ID of the most recent response.

.. code-block:: python

llm = ChatOpenAI(model="gpt-4.1-mini", use_previous_response_id=True)
Reasoning output

OpenAI's Responses API supports reasoning models <https://platform.openai.com/docs/guides/reasoning?api-mode=responses>_ that expose a summary of internal reasoning processes.

Note

langchain-openai >= 0.3.26 allows users to opt in to an updated AIMessage format when using the Responses API. Setting

.. code-block:: python

llm = ChatOpenAI(model="...", output_version="responses/v1")

will format output from reasoning summaries, built-in tool invocations, and other response items into the message's content field, rather than additional_kwargs. We recommend this format for new applications.

.. code-block:: python

from langchain_openai import ChatOpenAI

reasoning = {
    "effort": "medium",  # 'low', 'medium', or 'high'
    "summary": "auto",  # 'detailed', 'auto', or None
}

llm = ChatOpenAI(
    model="o4-mini", reasoning=reasoning, output_version="responses/v1"
)
response = llm.invoke("What is 3^3?")

# Response text
print(f"Output: {response.text}")

# Reasoning summaries
for block in response.content:
    if block["type"] == "reasoning":
        for summary in block["summary"]:
            print(summary["text"])

.. code-block::

Output: 3³ = 27
Reasoning: The user wants to know...
Structured output

.. code-block:: python

from typing import Optional

from pydantic import BaseModel, Field

class Joke(BaseModel):
    '''Joke to tell user.'''

    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline to the joke")
    rating: Optional[int] = Field(
        description="How funny the joke is, from 1 to 10"
    )

structured_llm = llm.with_structured_output(Joke)
structured_llm.invoke("Tell me a joke about cats")

.. code-block:: python

Joke(
    setup="Why was the cat sitting on the computer?",
    punchline="To keep an eye on the mouse!",
    rating=None,
)

See ChatOpenAI.with_structured_output() for more.

JSON mode

.. code-block:: python

json_llm = llm.bind(response_format={"type": "json_object"})
ai_msg = json_llm.invoke(
    "Return a JSON object with key 'random_ints' and a value of 10 random ints in [0-99]"
)
ai_msg.content

.. code-block:: python

'\\n{\\n  "random_ints": [23, 87, 45, 12, 78, 34, 56, 90, 11, 67]\\n}'
Image input

.. code-block:: python

import base64
import httpx
from langchain_core.messages import HumanMessage

image_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
image_data = base64.b64encode(httpx.get(image_url).content).decode("utf-8")
message = HumanMessage(
    content=[
        {"type": "text", "text": "describe the weather in this image"},
        {
            "type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{image_data}"},
        },
    ]
)
ai_msg = llm.invoke([message])
ai_msg.content

.. code-block:: python

"The weather in the image appears to be clear and pleasant. The sky is mostly blue with scattered, light clouds, suggesting a sunny day with minimal cloud cover. There is no indication of rain or strong winds, and the overall scene looks bright and calm. The lush green grass and clear visibility further indicate good weather conditions."
Token usage

.. code-block:: python

ai_msg = llm.invoke(messages)
ai_msg.usage_metadata

.. code-block:: python

{"input_tokens": 28, "output_tokens": 5, "total_tokens": 33}

When streaming, set the stream_usage kwarg:

.. code-block:: python

stream = llm.stream(messages, stream_usage=True)
full = next(stream)
for chunk in stream:
    full += chunk
full.usage_metadata

.. code-block:: python

{"input_tokens": 28, "output_tokens": 5, "total_tokens": 33}

Alternatively, setting stream_usage when instantiating the model can be useful when incorporating ChatOpenAI into LCEL chains, or when using methods like .with_structured_output, which generate chains under the hood.

.. code-block:: python

llm = ChatOpenAI(model="gpt-4o", stream_usage=True)
structured_llm = llm.with_structured_output(...)
Logprobs

.. code-block:: python

logprobs_llm = llm.bind(logprobs=True)
ai_msg = logprobs_llm.invoke(messages)
ai_msg.response_metadata["logprobs"]

.. code-block:: python

{
    "content": [
        {
            "token": "J",
            "bytes": [74],
            "logprob": -4.9617593e-06,
            "top_logprobs": [],
        },
        {
            "token": "'adore",
            "bytes": [39, 97, 100, 111, 114, 101],
            "logprob": -0.25202933,
            "top_logprobs": [],
        },
        {
            "token": " la",
            "bytes": [32, 108, 97],
            "logprob": -0.20141791,
            "top_logprobs": [],
        },
        {
            "token": " programmation",
            "bytes": [
                32,
                112,
                114,
                111,
                103,
                114,
                97,
                109,
                109,
                97,
                116,
                105,
                111,
                110,
            ],
            "logprob": -1.9361265e-07,
            "top_logprobs": [],
        },
        {
            "token": ".",
            "bytes": [46],
            "logprob": -1.2233183e-05,
            "top_logprobs": [],
        },
    ]
}
Response metadata

.. code-block:: python

ai_msg = llm.invoke(messages)
ai_msg.response_metadata

.. code-block:: python

{
    "token_usage": {
        "completion_tokens": 5,
        "prompt_tokens": 28,
        "total_tokens": 33,
    },
    "model_name": "gpt-4o",
    "system_fingerprint": "fp_319be4768e",
    "finish_reason": "stop",
    "logprobs": None,
}
Flex processing

OpenAI offers a variety of service tiers <https://platform.openai.com/docs/guides/flex-processing>_. The "flex" tier offers cheaper pricing for requests, with the trade-off that responses may take longer and resources might not always be available. This approach is best suited for non-critical tasks, including model testing, data enhancement, or jobs that can be run asynchronously.

To use it, initialize the model with service_tier="flex":

.. code-block:: python

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="o4-mini", service_tier="flex")

Note that this is a beta feature that is only available for a subset of models. See OpenAI flex processing docs <https://platform.openai.com/docs/guides/flex-processing>__ for more detail.

OpenAI-compatible APIs

ChatOpenAI can be used with OpenAI-compatible APIs like LM Studio <https://lmstudio.ai/>_, vLLM <https://github.com/vllm-project/vllm>_, Ollama <https://ollama.com/>_, and others. To use custom parameters specific to these providers, use the extra_body parameter.

LM Studio example with TTL (auto-eviction):

.. code-block:: python

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="http://localhost:1234/v1",
    api_key="lm-studio",  # Can be any string
    model="mlx-community/QwQ-32B-4bit",
    temperature=0,
    extra_body={
        "ttl": 300
    },  # Auto-evict model after 5 minutes of inactivity
)

vLLM example with custom parameters:

.. code-block:: python

llm = ChatOpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",
    model="meta-llama/Llama-2-7b-chat-hf",
    extra_body={"use_beam_search": True, "best_of": 4},
)
model_kwargs vs extra_body

Use the correct parameter for different types of API arguments:

Use model_kwargs for:

  • Standard OpenAI API parameters not explicitly defined as class parameters
  • Parameters that should be flattened into the top-level request payload
  • Examples: max_completion_tokens, stream_options, modalities, audio

.. code-block:: python

# Standard OpenAI parameters
llm = ChatOpenAI(
    model="gpt-4o",
    model_kwargs={
        "stream_options": {"include_usage": True},
        "max_completion_tokens": 300,
        "modalities": ["text", "audio"],
        "audio": {"voice": "alloy", "format": "wav"},
    },
)

Use extra_body for:

  • Custom parameters specific to OpenAI-compatible providers (vLLM, LM Studio, etc.)
  • Parameters that need to be nested under extra_body in the request
  • Any non-standard OpenAI API parameters

.. code-block:: python

# Custom provider parameters
llm = ChatOpenAI(
    base_url="http://localhost:8000/v1",
    model="custom-model",
    extra_body={
        "use_beam_search": True,  # vLLM parameter
        "best_of": 4,  # vLLM parameter
        "ttl": 300,  # LM Studio parameter
    },
)

Key Differences:

  • model_kwargs: Parameters are merged into top-level request payload
  • extra_body: Parameters are nested under extra_body key in request

Important

Always use extra_body for custom parameters, not model_kwargs. Using model_kwargs for non-OpenAI parameters will cause API errors.

Prompt caching optimization

For high-volume applications with repetitive prompts, use prompt_cache_key per invocation to improve cache hit rates and reduce costs:

.. code-block:: python

llm = ChatOpenAI(model="gpt-4o-mini")

response = llm.invoke(
    messages,
    prompt_cache_key="example-key-a",  # Routes to same machine for cache hits
)

customer_response = llm.invoke(messages, prompt_cache_key="example-key-b")
support_response = llm.invoke(messages, prompt_cache_key="example-key-c")

# Dynamic cache keys based on context
cache_key = f"example-key-{dynamic_suffix}"
response = llm.invoke(messages, prompt_cache_key=cache_key)

Cache keys help ensure requests with the same prompt prefix are routed to machines with existing cache, providing cost reduction and latency improvement on cached tokens.

Methods:

Name Description
get_name

Get the name of the Runnable.

get_input_schema

Get a pydantic model that can be used to validate input to the Runnable.

get_input_jsonschema

Get a JSON schema that represents the input to the Runnable.

get_output_schema

Get a pydantic model that can be used to validate output of the Runnable.

get_output_jsonschema

Get a JSON schema that represents the output of the Runnable.

config_schema

The type of config this Runnable accepts specified as a pydantic model.

get_config_jsonschema

Get a JSON schema that represents the config of the Runnable.

get_graph

Return a graph representation of this Runnable.

get_prompts

Return a list of prompts used by this Runnable.

__or__

Runnable "or" operator.

__ror__

Runnable "reverse-or" operator.

pipe

Pipe runnables.

pick

Pick keys from the output dict of this Runnable.

assign

Assigns new fields to the dict output of this Runnable.

batch

Default implementation runs invoke in parallel using a thread pool executor.

batch_as_completed

Run invoke in parallel on a list of inputs.

abatch

Default implementation runs ainvoke in parallel using asyncio.gather.

abatch_as_completed

Run ainvoke in parallel on a list of inputs.

astream_log

Stream all output from a Runnable, as reported to the callback system.

astream_events

Generate a stream of events.

transform

Transform inputs to outputs.

atransform

Transform inputs to outputs.

bind

Bind arguments to a Runnable, returning a new Runnable.

with_config

Bind config to a Runnable, returning a new Runnable.

with_listeners

Bind lifecycle listeners to a Runnable, returning a new Runnable.

with_alisteners

Bind async lifecycle listeners to a Runnable.

with_types

Bind input and output types to a Runnable, returning a new Runnable.

with_retry

Create a new Runnable that retries the original Runnable on exceptions.

map

Return a new Runnable that maps a list of inputs to a list of outputs.

with_fallbacks

Add fallbacks to a Runnable, returning a new Runnable.

as_tool

Create a BaseTool from a Runnable.

__init__
lc_id

Return a unique identifier for this class for serialization purposes.

to_json

Serialize the Runnable to JSON.

to_json_not_implemented

Serialize a "not implemented" object.

configurable_fields

Configure particular Runnable fields at runtime.

configurable_alternatives

Configure alternatives for Runnables that can be set at runtime.

set_verbose

If verbose is None, set it.

get_token_ids

Get the tokens present in the text with the tiktoken package.

get_num_tokens

Get the number of tokens present in the text.

get_num_tokens_from_messages

Calculate num tokens for gpt-3.5-turbo and gpt-4 with the tiktoken package.

generate

Pass a sequence of prompts to the model and return model generations.

agenerate

Asynchronously pass a sequence of prompts to a model and return generations.

dict

Return a dictionary of the LLM.

bind_tools

Bind tool-like objects to this chat model.

build_extra

Build extra kwargs from additional params that were passed in.

validate_temperature

Validate temperature parameter for different models.

validate_environment

Validate that the API key and python package exist in the environment.

get_lc_namespace

Get the namespace of the langchain object.

is_lc_serializable

Return whether this model can be serialized by LangChain.

with_structured_output

Model wrapper that returns outputs formatted to match the given schema.

Attributes:

Name Type Description
InputType TypeAlias

Get the input type for this runnable.

OutputType Any

Get the output type for this runnable.

input_schema type[BaseModel]

The type of input this Runnable accepts specified as a pydantic model.

output_schema type[BaseModel]

Output schema.

config_specs list[ConfigurableFieldSpec]

List configurable fields for this Runnable.

cache BaseCache | bool | None

Whether to cache the response.

verbose bool

Whether to print out response text.

callbacks Callbacks

Callbacks to add to the run trace.

tags list[str] | None

Tags to add to the run trace.

metadata dict[str, Any] | None

Metadata to add to the run trace.

custom_get_token_ids Callable[[str], list[int]] | None

Optional encoder to use for counting tokens.

rate_limiter BaseRateLimiter | None

An optional rate limiter to use for limiting the number of requests.

disable_streaming bool | Literal['tool_calling']

Whether to disable streaming for this model.

output_version Optional[str]

Version of AIMessage output format to use.

model_name str

Model name to use.

temperature Optional[float]

What sampling temperature to use.

model_kwargs dict[str, Any]

Holds any model parameters valid for create call not explicitly specified.

openai_api_base Optional[str]

Base URL path for API requests; leave blank if not using a proxy or service emulator.

openai_organization Optional[str]

Automatically inferred from env var OPENAI_ORG_ID if not provided.

request_timeout Union[float, tuple[float, float], Any, None]

Timeout for requests to OpenAI completion API. Can be float, httpx.Timeout or None.

stream_usage Optional[bool]

Whether to include usage metadata in streaming output. If enabled, an additional message chunk will be generated during the stream including usage metadata.

max_retries Optional[int]

Maximum number of retries to make when generating.

presence_penalty Optional[float]

Penalizes repeated tokens.

frequency_penalty Optional[float]

Penalizes repeated tokens according to frequency.

seed Optional[int]

Seed for generation.

logprobs Optional[bool]

Whether to return logprobs.

top_logprobs Optional[int]

Number of most likely tokens to return at each token position, each with an associated log probability.

logit_bias Optional[dict[int, int]]

Modify the likelihood of specified tokens appearing in the completion.

streaming bool

Whether to stream the results or not.

n Optional[int]

Number of chat completions to generate for each prompt.

top_p Optional[float]

Total probability mass of tokens to consider at each step.

reasoning_effort Optional[str]

Constrains effort on reasoning for reasoning models. For use with the Chat Completions API.

reasoning Optional[dict[str, Any]]

Reasoning parameters for reasoning models, i.e., OpenAI o-series models (o1, o3, o4-mini, etc.). For use with the Responses API.

verbosity Optional[str]

Controls the verbosity level of responses for reasoning models. For use with the Responses API.

tiktoken_model_name Optional[str]

The model name to pass to tiktoken when using this class.

http_client Union[Any, None]

Optional httpx.Client. Only used for sync invocations. Must specify http_async_client as well if you'd like a custom client for async invocations.

http_async_client Union[Any, None]

Optional httpx.AsyncClient. Only used for async invocations. Must specify http_client as well if you'd like a custom client for sync invocations.

stop Optional[Union[list[str], str]]

Default stop sequences.

extra_body Optional[Mapping[str, Any]]

Optional additional JSON properties to include in the request parameters when making requests to OpenAI-compatible APIs.

include_response_headers bool

Whether to include response headers in the output message response_metadata.

disabled_params Optional[dict[str, Any]]

Parameters of the OpenAI client or chat.completions endpoint that should be disabled for the given model.

include Optional[list[str]]

Additional fields to include in generations from Responses API.

service_tier Optional[str]

Latency tier for request. Options are 'auto', 'default', or 'flex'.

store Optional[bool]

If True, OpenAI may store response data for future use. Defaults to True for the Responses API and False for the Chat Completions API.

truncation Optional[str]

Truncation strategy (Responses API). Can be 'auto' or 'disabled' (default).

use_previous_response_id bool

If True, always pass previous_response_id using the ID of the most recent response.

use_responses_api Optional[bool]

Whether to use the Responses API instead of the Chat API.

max_tokens Optional[int]

Maximum number of tokens to generate.

lc_secrets dict[str, str]

Mapping of secret environment variables.

lc_attributes dict[str, Any]

Get the attributes of the langchain object.

InputType property

InputType: TypeAlias

Get the input type for this runnable.

OutputType property

OutputType: Any

Get the output type for this runnable.

input_schema property

input_schema: type[BaseModel]

The type of input this Runnable accepts specified as a pydantic model.

output_schema property

output_schema: type[BaseModel]

Output schema.

The type of output this Runnable produces specified as a pydantic model.

config_specs property

config_specs: list[ConfigurableFieldSpec]

List configurable fields for this Runnable.

cache class-attribute instance-attribute

cache: BaseCache | bool | None = Field(
    default=None, exclude=True
)

Whether to cache the response.

  • If True, will use the global cache.
  • If False, will not use a cache.
  • If None, will use the global cache if it's set, otherwise no cache.
  • If an instance of BaseCache, will use the provided cache.

Caching is not currently supported for streaming methods of models.
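For example, a minimal sketch using the in-memory cache from langchain_core (the model name is illustrative):

.. code-block:: python

from langchain_core.caches import InMemoryCache
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", cache=InMemoryCache())

llm.invoke("Tell me a joke")  # First call hits the API and caches the result
llm.invoke("Tell me a joke")  # Identical prompt is served from the cache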

verbose class-attribute instance-attribute

verbose: bool = Field(
    default_factory=_get_verbosity, exclude=True, repr=False
)

Whether to print out response text.

callbacks class-attribute instance-attribute

callbacks: Callbacks = Field(default=None, exclude=True)

Callbacks to add to the run trace.

tags class-attribute instance-attribute

tags: list[str] | None = Field(default=None, exclude=True)

Tags to add to the run trace.

metadata class-attribute instance-attribute

metadata: dict[str, Any] | None = Field(
    default=None, exclude=True
)

Metadata to add to the run trace.

custom_get_token_ids class-attribute instance-attribute

custom_get_token_ids: Callable[[str], list[int]] | None = (
    Field(default=None, exclude=True)
)

Optional encoder to use for counting tokens.
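A minimal sketch using a tiktoken encoding as the counter (the encoding name is an assumption; pick one that matches your model):

.. code-block:: python

import tiktoken

from langchain_openai import ChatOpenAI

enc = tiktoken.get_encoding("o200k_base")  # assumed encoding for illustration

llm = ChatOpenAI(model="gpt-4o-mini", custom_get_token_ids=enc.encode)
llm.get_num_tokens("hello world")  # counted with the custom encoder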

rate_limiter class-attribute instance-attribute

rate_limiter: BaseRateLimiter | None = Field(
    default=None, exclude=True
)

An optional rate limiter to use for limiting the number of requests.

disable_streaming class-attribute instance-attribute

disable_streaming: bool | Literal['tool_calling'] = False

Whether to disable streaming for this model.

If streaming is bypassed, then stream()/astream()/astream_events() will defer to invoke()/ainvoke().

  • If True, will always bypass streaming case.
  • If 'tool_calling', will bypass streaming case only when the model is called with a tools keyword argument. In other words, LangChain will automatically switch to non-streaming behavior (invoke()) only when the tools argument is provided. This offers the best of both worlds.
  • If False (default), will always use streaming case if available.

The main reason for this flag is that code might be written using stream() and a user may want to swap out a given model for another whose implementation does not properly support streaming.
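A minimal sketch of the 'tool_calling' mode (model name is illustrative):

.. code-block:: python

llm = ChatOpenAI(model="gpt-4o", disable_streaming="tool_calling")

# Plain requests still stream chunk-by-chunk:
for chunk in llm.stream("Hello"):
    print(chunk.content, end="")

# When tools are bound and passed, stream() defers to invoke() under the
# hood and yields the full response as a single chunk.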

output_version class-attribute instance-attribute

output_version: Optional[str] = Field(
    default_factory=from_env(
        "LC_OUTPUT_VERSION", default=None
    )
)

Version of AIMessage output format to use.

This field is used to roll out new output formats for chat model AIMessages in a backwards-compatible way.

Supported values:

  • 'v0': AIMessage format as of langchain-openai 0.3.x.
  • 'responses/v1': Formats Responses API output items into AIMessage content blocks (Responses API only).
  • 'v1': v1 of the LangChain cross-provider standard.

Behavior changed in 1.0.0

Default updated to "responses/v1".


model_name class-attribute instance-attribute

model_name: str = Field(
    default="gpt-3.5-turbo", alias="model"
)

Model name to use.

temperature class-attribute instance-attribute

temperature: Optional[float] = None

What sampling temperature to use.

model_kwargs class-attribute instance-attribute

model_kwargs: dict[str, Any] = Field(default_factory=dict)

Holds any model parameters valid for create call not explicitly specified.

openai_api_base class-attribute instance-attribute

openai_api_base: Optional[str] = Field(
    default=None, alias="base_url"
)

Base URL path for API requests; leave blank if not using a proxy or service emulator.

openai_organization class-attribute instance-attribute

openai_organization: Optional[str] = Field(
    default=None, alias="organization"
)

Automatically inferred from env var OPENAI_ORG_ID if not provided.

request_timeout class-attribute instance-attribute

request_timeout: Union[
    float, tuple[float, float], Any, None
] = Field(default=None, alias="timeout")

Timeout for requests to OpenAI completion API. Can be float, httpx.Timeout or None.
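For example, a minimal sketch with per-phase timeouts via httpx (values are illustrative):

.. code-block:: python

import httpx

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o-mini",
    timeout=httpx.Timeout(30.0, connect=5.0),  # 30s overall, 5s to connect
)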

stream_usage class-attribute instance-attribute

stream_usage: Optional[bool] = None

Whether to include usage metadata in streaming output. If enabled, an additional message chunk will be generated during the stream including usage metadata.

This parameter is enabled unless openai_api_base is set or the model is initialized with a custom client, as many chat completions APIs do not support streaming token usage.

Added in version 0.3.9

Behavior changed in 0.3.35

Enabled for default base URL and client.

max_retries class-attribute instance-attribute

max_retries: Optional[int] = None

Maximum number of retries to make when generating.

presence_penalty class-attribute instance-attribute

presence_penalty: Optional[float] = None

Penalizes repeated tokens.

frequency_penalty class-attribute instance-attribute

frequency_penalty: Optional[float] = None

Penalizes repeated tokens according to frequency.

seed class-attribute instance-attribute

seed: Optional[int] = None

Seed for generation.

logprobs class-attribute instance-attribute

logprobs: Optional[bool] = None

Whether to return logprobs.

top_logprobs class-attribute instance-attribute

top_logprobs: Optional[int] = None

Number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true if this parameter is used.
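A minimal sketch (model name illustrative); note that logprobs must also be enabled:

.. code-block:: python

llm = ChatOpenAI(model="gpt-4o-mini", logprobs=True, top_logprobs=3)
ai_msg = llm.invoke("Say hello")
# Each content entry now carries three candidate tokens:
ai_msg.response_metadata["logprobs"]["content"][0]["top_logprobs"]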

logit_bias class-attribute instance-attribute

logit_bias: Optional[dict[int, int]] = None

Modify the likelihood of specified tokens appearing in the completion.
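A minimal sketch; the token ID is hypothetical (look up real IDs with tiktoken), and bias values range from -100 (effectively ban) to 100 (effectively force):

.. code-block:: python

llm = ChatOpenAI(
    model="gpt-4o-mini",
    logit_bias={1734: -100},  # hypothetical token ID, strongly suppressed
)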

streaming class-attribute instance-attribute

streaming: bool = False

Whether to stream the results or not.

n class-attribute instance-attribute

n: Optional[int] = None

Number of chat completions to generate for each prompt.

top_p class-attribute instance-attribute

top_p: Optional[float] = None

Total probability mass of tokens to consider at each step.

reasoning_effort class-attribute instance-attribute

reasoning_effort: Optional[str] = None

Constrains effort on reasoning for reasoning models. For use with the Chat Completions API.

Reasoning models only, like OpenAI o1, o3, and o4-mini.

Currently supported values are 'minimal', 'low', 'medium', and 'high'. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

Added in version 0.2.14

reasoning class-attribute instance-attribute

reasoning: Optional[dict[str, Any]] = None

Reasoning parameters for reasoning models, i.e., OpenAI o-series models (o1, o3, o4-mini, etc.). For use with the Responses API.

Example:

.. code-block:: python

reasoning={
    "effort": "medium",  # can be "low", "medium", or "high"
    "summary": "auto",  # can be "auto", "concise", or "detailed"
}

Added in version 0.3.24

verbosity class-attribute instance-attribute

verbosity: Optional[str] = None

Controls the verbosity level of responses for reasoning models. For use with the Responses API.

Currently supported values are 'low', 'medium', and 'high'.

Controls how detailed the model's responses are.

Added in version 0.3.28

tiktoken_model_name class-attribute instance-attribute

tiktoken_model_name: Optional[str] = None

The model name to pass to tiktoken when using this class. Tiktoken is used to count the number of tokens in documents to constrain them to be under a certain limit. By default, when set to None, this will be the same as the model name. However, there are some cases where you may want to use this class with a model name not supported by tiktoken. This can include when using Azure or when using one of the many model providers that expose an OpenAI-like API but with different models. In those cases, to avoid erroring when tiktoken is called, you can specify a model name to use here.

http_client class-attribute instance-attribute

http_client: Union[Any, None] = Field(
    default=None, exclude=True
)

Optional httpx.Client. Only used for sync invocations. Must specify http_async_client as well if you'd like a custom client for async invocations.

http_async_client class-attribute instance-attribute

http_async_client: Union[Any, None] = Field(
    default=None, exclude=True
)

Optional httpx.AsyncClient. Only used for async invocations. Must specify http_client as well if you'd like a custom client for sync invocations.

stop class-attribute instance-attribute

stop: Optional[Union[list[str], str]] = Field(
    default=None, alias="stop_sequences"
)

Default stop sequences.
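A minimal sketch (the sequence is illustrative); generation halts before emitting it:

.. code-block:: python

llm = ChatOpenAI(model="gpt-4o-mini", stop_sequences=["\nObservation:"])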

extra_body class-attribute instance-attribute

extra_body: Optional[Mapping[str, Any]] = None

Optional additional JSON properties to include in the request parameters when making requests to OpenAI-compatible APIs, such as vLLM, LM Studio, or other providers.

This is the recommended way to pass custom parameters that are specific to your OpenAI-compatible API provider but not part of the standard OpenAI API.

Examples:

  • LM Studio TTL parameter: extra_body={"ttl": 300}
  • vLLM custom parameters: extra_body={"use_beam_search": True}
  • Any other provider-specific parameters

Note

Do NOT use model_kwargs for custom parameters that are not part of the standard OpenAI API, as this will cause errors when making API calls. Use extra_body instead.

include_response_headers class-attribute instance-attribute

include_response_headers: bool = False

Whether to include response headers in the output message response_metadata.

disabled_params class-attribute instance-attribute

disabled_params: Optional[dict[str, Any]] = Field(
    default=None
)

Parameters of the OpenAI client or chat.completions endpoint that should be disabled for the given model.

Should be specified as {"param": None | ['val1', 'val2']} where the key is the parameter and the value is either None, meaning the parameter should never be used, or a list of disabled values for the parameter.

For example, older models may not support the 'parallel_tool_calls' parameter at all, in which case disabled_params={"parallel_tool_calls": None} can be passed in.

If a parameter is disabled then it will not be used by default in any methods, e.g. in langchain_openai.chat_models.base.ChatOpenAI.with_structured_output. However, this does not prevent a user from directly passing in the parameter during invocation.
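A minimal sketch of the example above:

.. code-block:: python

llm = ChatOpenAI(
    model="gpt-4o",
    disabled_params={"parallel_tool_calls": None},  # never send this param
)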

include class-attribute instance-attribute

include: Optional[list[str]] = None

Additional fields to include in generations from Responses API.

Supported values:

  • 'file_search_call.results'
  • 'message.input_image.image_url'
  • 'computer_call_output.output.image_url'
  • 'reasoning.encrypted_content'
  • 'code_interpreter_call.outputs'

Added in version 0.3.24
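A minimal sketch (Responses API; model name illustrative):

.. code-block:: python

llm = ChatOpenAI(
    model="o4-mini",
    use_responses_api=True,
    include=["reasoning.encrypted_content"],
)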

service_tier class-attribute instance-attribute

service_tier: Optional[str] = None

Latency tier for request. Options are 'auto', 'default', or 'flex'. Relevant for users of OpenAI's scale tier service.

store class-attribute instance-attribute

store: Optional[bool] = None

If True, OpenAI may store response data for future use. Defaults to True for the Responses API and False for the Chat Completions API.

Added in version 0.3.24
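A minimal sketch opting out of storage on the Responses API (model name illustrative):

.. code-block:: python

llm = ChatOpenAI(model="gpt-4o-mini", use_responses_api=True, store=False)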

truncation class-attribute instance-attribute

truncation: Optional[str] = None

Truncation strategy (Responses API). Can be 'auto' or 'disabled' (default). If 'auto', the model may drop input items from the middle of the message sequence to fit the context window.

Added in version 0.3.24
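A minimal sketch (model name illustrative):

.. code-block:: python

llm = ChatOpenAI(model="gpt-4o-mini", use_responses_api=True, truncation="auto")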

use_previous_response_id class-attribute instance-attribute

use_previous_response_id: bool = False

If True, always pass previous_response_id using the ID of the most recent response. Responses API only.

Input messages up to the most recent response will be dropped from request payloads.

For example, the following two are equivalent:

.. code-block:: python

llm = ChatOpenAI(
    model="o4-mini",
    use_previous_response_id=True,
)
llm.invoke(
    [
        HumanMessage("Hello"),
        AIMessage("Hi there!", response_metadata={"id": "resp_123"}),
        HumanMessage("How are you?"),
    ]
)

.. code-block:: python

llm = ChatOpenAI(model="o4-mini", use_responses_api=True)
llm.invoke([HumanMessage("How are you?")], previous_response_id="resp_123")

Added in version 0.3.26

use_responses_api class-attribute instance-attribute

use_responses_api: Optional[bool] = None

Whether to use the Responses API instead of the Chat API.

If not specified, it will be inferred based on invocation params.

Added in version 0.3.9

max_tokens class-attribute instance-attribute

max_tokens: Optional[int] = Field(
    default=None, alias="max_completion_tokens"
)

Maximum number of tokens to generate.

lc_secrets property

lc_secrets: dict[str, str]

Mapping of secret environment variables.

lc_attributes property

lc_attributes: dict[str, Any]

Get the attributes of the langchain object.

get_name

get_name(
    suffix: str | None = None, *, name: str | None = None
) -> str

Get the name of the Runnable.

Parameters:

Name Type Description Default
suffix str | None

An optional suffix to append to the name.

None
name str | None

An optional name to use instead of the Runnable's name.

None

Returns:

Type Description
str

The name of the Runnable.

get_input_schema

get_input_schema(
    config: RunnableConfig | None = None,
) -> type[BaseModel]

Get a pydantic model that can be used to validate input to the Runnable.

Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic input schema that depends on which configuration the Runnable is invoked with.

This method allows you to get an input schema for a specific configuration.

Parameters:

Name Type Description Default
config RunnableConfig | None

A config to use when generating the schema.

None

Returns:

Type Description
type[BaseModel]

A pydantic model that can be used to validate input.

get_input_jsonschema

get_input_jsonschema(
    config: RunnableConfig | None = None,
) -> dict[str, Any]

Get a JSON schema that represents the input to the Runnable.

Parameters:

Name Type Description Default
config RunnableConfig | None

A config to use when generating the schema.

None

Returns:

Type Description
dict[str, Any]

A JSON schema that represents the input to the Runnable.

Example
from langchain_core.runnables import RunnableLambda


def add_one(x: int) -> int:
    return x + 1


runnable = RunnableLambda(add_one)

print(runnable.get_input_jsonschema())

Added in version 0.3.0

get_output_schema

get_output_schema(
    config: RunnableConfig | None = None,
) -> type[BaseModel]

Get a pydantic model that can be used to validate output of the Runnable.

Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic output schema that depends on which configuration the Runnable is invoked with.

This method allows you to get an output schema for a specific configuration.

Parameters:

Name Type Description Default
config RunnableConfig | None

A config to use when generating the schema.

None

Returns:

Type Description
type[BaseModel]

A pydantic model that can be used to validate output.

get_output_jsonschema

get_output_jsonschema(
    config: RunnableConfig | None = None,
) -> dict[str, Any]

Get a JSON schema that represents the output of the Runnable.

Parameters:

Name Type Description Default
config RunnableConfig | None

A config to use when generating the schema.

None

Returns:

Type Description
dict[str, Any]

A JSON schema that represents the output of the Runnable.

Example
from langchain_core.runnables import RunnableLambda


def add_one(x: int) -> int:
    return x + 1


runnable = RunnableLambda(add_one)

print(runnable.get_output_jsonschema())

Added in version 0.3.0

config_schema

config_schema(
    *, include: Sequence[str] | None = None
) -> type[BaseModel]

The type of config this Runnable accepts specified as a pydantic model.

To mark a field as configurable, see the configurable_fields and configurable_alternatives methods.

Parameters:

Name Type Description Default
include Sequence[str] | None

A list of fields to include in the config schema.

None

Returns:

Type Description
type[BaseModel]

A pydantic model that can be used to validate config.

get_config_jsonschema

get_config_jsonschema(
    *, include: Sequence[str] | None = None
) -> dict[str, Any]

Get a JSON schema that represents the config of the Runnable.

Parameters:

Name Type Description Default
include Sequence[str] | None

A list of fields to include in the config schema.

None

Returns:

Type Description
dict[str, Any]

A JSON schema that represents the config of the Runnable.

Added in version 0.3.0

get_graph

get_graph(config: RunnableConfig | None = None) -> Graph

Return a graph representation of this Runnable.

get_prompts

get_prompts(
    config: RunnableConfig | None = None,
) -> list[BasePromptTemplate]

Return a list of prompts used by this Runnable.

__or__

__or__(
    other: (
        Runnable[Any, Other]
        | Callable[[Iterator[Any]], Iterator[Other]]
        | Callable[
            [AsyncIterator[Any]], AsyncIterator[Other]
        ]
        | Callable[[Any], Other]
        | Mapping[
            str,
            Runnable[Any, Other]
            | Callable[[Any], Other]
            | Any,
        ]
    ),
) -> RunnableSerializable[Input, Other]

Runnable "or" operator.

Compose this Runnable with another object to create a RunnableSequence.

Parameters:

Name Type Description Default
other Runnable[Any, Other] | Callable[[Iterator[Any]], Iterator[Other]] | Callable[[AsyncIterator[Any]], AsyncIterator[Other]] | Callable[[Any], Other] | Mapping[str, Runnable[Any, Other] | Callable[[Any], Other] | Any]

Another Runnable or a Runnable-like object.

required

Returns:

Type Description
RunnableSerializable[Input, Other]

A new Runnable.

__ror__

__ror__(
    other: (
        Runnable[Other, Any]
        | Callable[[Iterator[Other]], Iterator[Any]]
        | Callable[
            [AsyncIterator[Other]], AsyncIterator[Any]
        ]
        | Callable[[Other], Any]
        | Mapping[
            str,
            Runnable[Other, Any]
            | Callable[[Other], Any]
            | Any,
        ]
    ),
) -> RunnableSerializable[Other, Output]

Runnable "reverse-or" operator.

Compose this Runnable with another object to create a RunnableSequence.

Parameters:

Name Type Description Default
other Runnable[Other, Any] | Callable[[Iterator[Other]], Iterator[Any]] | Callable[[AsyncIterator[Other]], AsyncIterator[Any]] | Callable[[Other], Any] | Mapping[str, Runnable[Other, Any] | Callable[[Other], Any] | Any]

Another Runnable or a Runnable-like object.

required

Returns:

Type Description
RunnableSerializable[Other, Output]

A new Runnable.

pipe

pipe(
    *others: Runnable[Any, Other] | Callable[[Any], Other],
    name: str | None = None
) -> RunnableSerializable[Input, Other]

Pipe runnables.

Compose this Runnable with Runnable-like objects to make a RunnableSequence.

Equivalent to RunnableSequence(self, *others) or self | others[0] | ...

Example
from langchain_core.runnables import RunnableLambda


def add_one(x: int) -> int:
    return x + 1


def mul_two(x: int) -> int:
    return x * 2


runnable_1 = RunnableLambda(add_one)
runnable_2 = RunnableLambda(mul_two)
sequence = runnable_1.pipe(runnable_2)
# Or equivalently:
# sequence = runnable_1 | runnable_2
# sequence = RunnableSequence(first=runnable_1, last=runnable_2)
sequence.invoke(1)
await sequence.ainvoke(1)
# -> 4

sequence.batch([1, 2, 3])
await sequence.abatch([1, 2, 3])
# -> [4, 6, 8]

Parameters:

Name Type Description Default
*others Runnable[Any, Other] | Callable[[Any], Other]

Other Runnable or Runnable-like objects to compose

()
name str | None

An optional name for the resulting RunnableSequence.

None

Returns:

Type Description
RunnableSerializable[Input, Other]

A new Runnable.

pick

pick(
    keys: str | list[str],
) -> RunnableSerializable[Any, Any]

Pick keys from the output dict of this Runnable.

Pick single key:

```python
import json

from langchain_core.runnables import RunnableLambda, RunnableMap

as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)
chain = RunnableMap(str=as_str, json=as_json)

chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3]}

json_only_chain = chain.pick("json")
json_only_chain.invoke("[1, 2, 3]")
# -> [1, 2, 3]
```

Pick list of keys:

```python
from typing import Any

import json

from langchain_core.runnables import RunnableLambda, RunnableMap

as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)


def as_bytes(x: Any) -> bytes:
    return bytes(x, "utf-8")


chain = RunnableMap(
    str=as_str, json=as_json, bytes=RunnableLambda(as_bytes)
)

chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3], "bytes": b"[1, 2, 3]"}

json_and_bytes_chain = chain.pick(["json", "bytes"])
json_and_bytes_chain.invoke("[1, 2, 3]")
# -> {"json": [1, 2, 3], "bytes": b"[1, 2, 3]"}
```

Parameters:

Name Type Description Default
keys str | list[str]

A key or list of keys to pick from the output dict.

required

Returns:

Type Description
RunnableSerializable[Any, Any]

A new Runnable.

assign

assign(
    **kwargs: (
        Runnable[dict[str, Any], Any]
        | Callable[[dict[str, Any]], Any]
        | Mapping[
            str,
            Runnable[dict[str, Any], Any]
            | Callable[[dict[str, Any]], Any],
        ]
    ),
) -> RunnableSerializable[Any, Any]

Assigns new fields to the dict output of this Runnable.

from langchain_community.llms.fake import FakeStreamingListLLM
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import SystemMessagePromptTemplate
from langchain_core.runnables import Runnable
from operator import itemgetter

prompt = (
    SystemMessagePromptTemplate.from_template("You are a nice assistant.")
    + "{question}"
)
llm = FakeStreamingListLLM(responses=["foo-lish"])

chain: Runnable = prompt | llm | {"str": StrOutputParser()}

chain_with_assign = chain.assign(hello=itemgetter("str") | llm)

print(chain_with_assign.input_schema.model_json_schema())
# {'title': 'PromptInput', 'type': 'object', 'properties':
#  {'question': {'title': 'Question', 'type': 'string'}}}
print(chain_with_assign.output_schema.model_json_schema())
# {'title': 'RunnableSequenceOutput', 'type': 'object', 'properties':
#  {'str': {'title': 'Str', 'type': 'string'},
#   'hello': {'title': 'Hello', 'type': 'string'}}}

Parameters:

Name Type Description Default
**kwargs Runnable[dict[str, Any], Any] | Callable[[dict[str, Any]], Any] | Mapping[str, Runnable[dict[str, Any], Any] | Callable[[dict[str, Any]], Any]]

A mapping of keys to Runnable or Runnable-like objects that will be invoked with the entire output dict of this Runnable.

{}

Returns:

Type Description
RunnableSerializable[Any, Any]

A new Runnable.

batch

batch(
    inputs: list[Input],
    config: (
        RunnableConfig | list[RunnableConfig] | None
    ) = None,
    *,
    return_exceptions: bool = False,
    **kwargs: Any | None
) -> list[Output]

Default implementation runs invoke in parallel using a thread pool executor.

The default implementation of batch works well for IO-bound runnables.

Subclasses should override this method if they can batch more efficiently; e.g., if the underlying Runnable uses an API which supports a batch mode.

Parameters:

Name Type Description Default
inputs list[Input]

A list of inputs to the Runnable.

required
config RunnableConfig | list[RunnableConfig] | None

A config to use when invoking the Runnable. The config supports standard keys like 'tags', 'metadata' for tracing purposes, 'max_concurrency' for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Defaults to None.

None
return_exceptions bool

Whether to return exceptions instead of raising them. Defaults to False.

False
**kwargs Any | None

Additional keyword arguments to pass to the Runnable.

{}

Returns:

Type Description
list[Output]

A list of outputs from the Runnable.
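For example, a minimal sketch limiting parallelism (prompts are illustrative; llm is a ChatOpenAI instance as above):

.. code-block:: python

results = llm.batch(
    ["Translate 'cat' to French", "Translate 'dog' to French"],
    config={"max_concurrency": 2},  # at most two requests in flight
)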

batch_as_completed

batch_as_completed(
    inputs: Sequence[Input],
    config: (
        RunnableConfig | Sequence[RunnableConfig] | None
    ) = None,
    *,
    return_exceptions: bool = False,
    **kwargs: Any | None
) -> Iterator[tuple[int, Output | Exception]]

Run invoke in parallel on a list of inputs.

Yields results as they complete.

Parameters:

Name Type Description Default
inputs Sequence[Input]

A list of inputs to the Runnable.

required
config RunnableConfig | Sequence[RunnableConfig] | None

A config to use when invoking the Runnable. The config supports standard keys like 'tags', 'metadata' for tracing purposes, 'max_concurrency' for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Defaults to None.

None
return_exceptions bool

Whether to return exceptions instead of raising them. Defaults to False.

False
**kwargs Any | None

Additional keyword arguments to pass to the Runnable.

{}

Yields:

Type Description
tuple[int, Output | Exception]

Tuples of the index of the input and the output from the Runnable.
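A minimal sketch; results arrive in completion order, tagged with their input index (prompts are illustrative):

.. code-block:: python

for idx, output in llm.batch_as_completed(["Hi", "Hello", "Hey"]):
    print(idx, output.content)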

abatch async

abatch(
    inputs: list[Input],
    config: (
        RunnableConfig | list[RunnableConfig] | None
    ) = None,
    *,
    return_exceptions: bool = False,
    **kwargs: Any | None
) -> list[Output]

Default implementation runs ainvoke in parallel using asyncio.gather.

The default implementation of abatch works well for IO-bound runnables.

Subclasses should override this method if they can batch more efficiently; e.g., if the underlying Runnable uses an API which supports a batch mode.

Parameters:

Name Type Description Default
inputs list[Input]

A list of inputs to the Runnable.

required
config RunnableConfig | list[RunnableConfig] | None

A config to use when invoking the Runnable. The config supports standard keys like 'tags', 'metadata' for tracing purposes, 'max_concurrency' for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Defaults to None.

None
return_exceptions bool

Whether to return exceptions instead of raising them. Defaults to False.

False
**kwargs Any | None

Additional keyword arguments to pass to the Runnable.

{}

Returns:

Type Description
list[Output]

A list of outputs from the Runnable.

abatch_as_completed async

abatch_as_completed(
    inputs: Sequence[Input],
    config: (
        RunnableConfig | Sequence[RunnableConfig] | None
    ) = None,
    *,
    return_exceptions: bool = False,
    **kwargs: Any | None
) -> AsyncIterator[tuple[int, Output | Exception]]

Run ainvoke in parallel on a list of inputs.

Yields results as they complete.

Parameters:

Name Type Description Default
inputs Sequence[Input]

A list of inputs to the Runnable.

required
config RunnableConfig | Sequence[RunnableConfig] | None

A config to use when invoking the Runnable. The config supports standard keys like 'tags', 'metadata' for tracing purposes, 'max_concurrency' for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Defaults to None.

None
return_exceptions bool

Whether to return exceptions instead of raising them. Defaults to False.

False
kwargs Any | None

Additional keyword arguments to pass to the Runnable.

{}

Yields:

Type Description
AsyncIterator[tuple[int, Output | Exception]]

A tuple of the index of the input and the output from the Runnable.

astream_log async

astream_log(
    input: Any,
    config: RunnableConfig | None = None,
    *,
    diff: bool = True,
    with_streamed_output_list: bool = True,
    include_names: Sequence[str] | None = None,
    include_types: Sequence[str] | None = None,
    include_tags: Sequence[str] | None = None,
    exclude_names: Sequence[str] | None = None,
    exclude_types: Sequence[str] | None = None,
    exclude_tags: Sequence[str] | None = None,
    **kwargs: Any
) -> AsyncIterator[RunLogPatch] | AsyncIterator[RunLog]

Stream all output from a Runnable, as reported to the callback system.

This includes all inner runs of LLMs, Retrievers, Tools, etc.

Output is streamed as Log objects, which include a list of Jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run.

The Jsonpatch ops can be applied in order to construct state.

Parameters:

Name Type Description Default
input Any

The input to the Runnable.

required
config RunnableConfig | None

The config to use for the Runnable.

None
diff bool

Whether to yield diffs between each step or the current state.

True
with_streamed_output_list bool

Whether to yield the streamed_output list.

True
include_names Sequence[str] | None

Only include logs with these names.

None
include_types Sequence[str] | None

Only include logs with these types.

None
include_tags Sequence[str] | None

Only include logs with these tags.

None
exclude_names Sequence[str] | None

Exclude logs with these names.

None
exclude_types Sequence[str] | None

Exclude logs with these types.

None
exclude_tags Sequence[str] | None

Exclude logs with these tags.

None
kwargs Any

Additional keyword arguments to pass to the Runnable.

{}

Yields:

Type Description
AsyncIterator[RunLogPatch] | AsyncIterator[RunLog]

A RunLogPatch or RunLog object.

astream_events async

astream_events(
    input: Any,
    config: RunnableConfig | None = None,
    *,
    version: Literal["v1", "v2"] = "v2",
    include_names: Sequence[str] | None = None,
    include_types: Sequence[str] | None = None,
    include_tags: Sequence[str] | None = None,
    exclude_names: Sequence[str] | None = None,
    exclude_types: Sequence[str] | None = None,
    exclude_tags: Sequence[str] | None = None,
    **kwargs: Any
) -> AsyncIterator[StreamEvent]

Generate a stream of events.

Use to create an iterator over StreamEvents that provide real-time information about the progress of the Runnable, including StreamEvents from intermediate results.

A StreamEvent is a dictionary with the following schema:

  • event: str - Event names are of the format: on_[runnable_type]_(start|stream|end).
  • name: str - The name of the Runnable that generated the event.
  • run_id: str - randomly generated ID associated with the given execution of the Runnable that emitted the event. A child Runnable that gets invoked as part of the execution of a parent Runnable is assigned its own unique ID.
  • parent_ids: list[str] - The IDs of the parent runnables that generated the event. The root Runnable will have an empty list. The order of the parent IDs is from the root to the immediate parent. Only available for v2 version of the API. The v1 version of the API will return an empty list.
  • tags: Optional[list[str]] - The tags of the Runnable that generated the event.
  • metadata: Optional[dict[str, Any]] - The metadata of the Runnable that generated the event.
  • data: dict[str, Any]

Below is a table that illustrates some events that might be emitted by various chains. Metadata fields have been omitted from the table for brevity. Chain definitions have been included after the table.

Note

This reference table is for the v2 version of the schema.

event                | name             | chunk                           | input                                         | output
---------------------|------------------|---------------------------------|-----------------------------------------------|------------------------------------------------
on_chat_model_start  | [model name]     |                                 | {"messages": [[SystemMessage, HumanMessage]]} |
on_chat_model_stream | [model name]     | AIMessageChunk(content="hello") |                                               |
on_chat_model_end    | [model name]     |                                 | {"messages": [[SystemMessage, HumanMessage]]} | AIMessageChunk(content="hello world")
on_llm_start         | [model name]     |                                 | {'input': 'hello'}                            |
on_llm_stream        | [model name]     | 'Hello'                         |                                               |
on_llm_end           | [model name]     |                                 | 'Hello human!'                                |
on_chain_start       | format_docs      |                                 |                                               |
on_chain_stream      | format_docs      | 'hello world!, goodbye world!'  |                                               |
on_chain_end         | format_docs      |                                 | [Document(...)]                               | 'hello world!, goodbye world!'
on_tool_start        | some_tool        |                                 | {"x": 1, "y": "2"}                            |
on_tool_end          | some_tool        |                                 |                                               | {"x": 1, "y": "2"}
on_retriever_start   | [retriever name] |                                 | {"query": "hello"}                            |
on_retriever_end     | [retriever name] |                                 | {"query": "hello"}                            | [Document(...), ..]
on_prompt_start      | [template_name]  |                                 | {"question": "hello"}                         |
on_prompt_end        | [template_name]  |                                 | {"question": "hello"}                         | ChatPromptValue(messages: [SystemMessage, ...])

In addition to the standard events, users can also dispatch custom events (see example below).

Custom events will only be surfaced in the v2 version of the API!

A custom event has the following format:

Attribute | Type | Description
----------|------|----------------------------------------------------------------------------------------------------------
name      | str  | A user defined name for the event.
data      | Any  | The data associated with the event. This can be anything, though we suggest making it JSON serializable.

Here are declarations associated with the standard events shown above:

format_docs:

def format_docs(docs: list[Document]) -> str:
    '''Format the docs.'''
    return ", ".join([doc.page_content for doc in docs])


format_docs = RunnableLambda(format_docs)

some_tool:

@tool
def some_tool(x: int, y: str) -> dict:
    '''Some_tool.'''
    return {"x": x, "y": y}

prompt:

template = ChatPromptTemplate.from_messages(
    [
        ("system", "You are Cat Agent 007"),
        ("human", "{question}"),
    ]
).with_config({"run_name": "my_template", "tags": ["my_template"]})
Example:

from langchain_core.runnables import RunnableLambda


async def reverse(s: str) -> str:
    return s[::-1]


chain = RunnableLambda(func=reverse)

events = [event async for event in chain.astream_events("hello", version="v2")]

# will produce the following events (run_id, and parent_ids
# has been omitted for brevity):
[
    {
        "data": {"input": "hello"},
        "event": "on_chain_start",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"chunk": "olleh"},
        "event": "on_chain_stream",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"output": "olleh"},
        "event": "on_chain_end",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
]

Example: Dispatch Custom Event

from langchain_core.callbacks.manager import (
    adispatch_custom_event,
)
from langchain_core.runnables import RunnableLambda, RunnableConfig
import asyncio


async def slow_thing(some_input: str, config: RunnableConfig) -> str:
    """Do something that takes a long time."""
    await asyncio.sleep(1) # Placeholder for some slow operation
    await adispatch_custom_event(
        "progress_event",
        {"message": "Finished step 1 of 3"},
        config=config # Must be included for python < 3.10
    )
    await asyncio.sleep(1) # Placeholder for some slow operation
    await adispatch_custom_event(
        "progress_event",
        {"message": "Finished step 2 of 3"},
        config=config # Must be included for python < 3.10
    )
    await asyncio.sleep(1) # Placeholder for some slow operation
    return "Done"

slow_thing = RunnableLambda(slow_thing)

async for event in slow_thing.astream_events("some_input", version="v2"):
    print(event)

Parameters:

Name Type Description Default
input Any

The input to the Runnable.

required
config RunnableConfig | None

The config to use for the Runnable.

None
version Literal['v1', 'v2']

The version of the schema to use, either 'v2' or 'v1'. Users should use 'v2'. 'v1' is for backwards compatibility and will be deprecated in 0.4.0. No default will be assigned until the API is stabilized. Custom events will only be surfaced in 'v2'.

'v2'
include_names Sequence[str] | None

Only include events from Runnables with matching names.

None
include_types Sequence[str] | None

Only include events from Runnables with matching types.

None
include_tags Sequence[str] | None

Only include events from Runnables with matching tags.

None
exclude_names Sequence[str] | None

Exclude events from Runnables with matching names.

None
exclude_types Sequence[str] | None

Exclude events from Runnables with matching types.

None
exclude_tags Sequence[str] | None

Exclude events from Runnables with matching tags.

None
kwargs Any

Additional keyword arguments to pass to the Runnable. These will be passed to astream_log as this implementation of astream_events is built on top of astream_log.

{}

Yields:

Type Description
AsyncIterator[StreamEvent]

An async stream of StreamEvents.

Raises:

Type Description
NotImplementedError

If the version is not 'v1' or 'v2'.

transform

transform(
    input: Iterator[Input],
    config: RunnableConfig | None = None,
    **kwargs: Any | None
) -> Iterator[Output]

Transform inputs to outputs.

Default implementation of transform, which buffers input and calls astream.

Subclasses should override this method if they can start producing output while input is still being generated.

Parameters:

Name Type Description Default
input Iterator[Input]

An iterator of inputs to the Runnable.

required
config RunnableConfig | None

The config to use for the Runnable. Defaults to None.

None
kwargs Any | None

Additional keyword arguments to pass to the Runnable.

{}

Yields:

Type Description
Output

The output of the Runnable.
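A minimal sketch of the default buffering behavior: since a RunnableLambda cannot act on partial input, transform concatenates the chunks before invoking the function.

from langchain_core.runnables import RunnableLambda

chain = RunnableLambda(lambda s: s[::-1])

# The two chunks are buffered into "hello" before the lambda runs.
for chunk in chain.transform(iter(["he", "llo"])):
    print(chunk)  # olleh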

atransform async

atransform(
    input: AsyncIterator[Input],
    config: RunnableConfig | None = None,
    **kwargs: Any | None
) -> AsyncIterator[Output]

Transform inputs to outputs.

Default implementation of atransform, which buffers input and calls astream.

Subclasses should override this method if they can start producing output while input is still being generated.

Parameters:

Name Type Description Default
input AsyncIterator[Input]

An async iterator of inputs to the Runnable.

required
config RunnableConfig | None

The config to use for the Runnable. Defaults to None.

None
kwargs Any | None

Additional keyword arguments to pass to the Runnable.

{}

Yields:

Type Description
AsyncIterator[Output]

The output of the Runnable.
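The async counterpart behaves the same way; a minimal sketch with an async generator as input:

import asyncio

from langchain_core.runnables import RunnableLambda


async def input_chunks():
    yield "he"
    yield "llo"


chain = RunnableLambda(lambda s: s[::-1])


async def main():
    # The async chunks are buffered into "hello" before the lambda runs.
    async for chunk in chain.atransform(input_chunks()):
        print(chunk)  # olleh


asyncio.run(main())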

bind

bind(**kwargs: Any) -> Runnable[Input, Output]

Bind arguments to a Runnable, returning a new Runnable.

Useful when a Runnable in a chain requires an argument that is not in the output of the previous Runnable or included in the user input.

Parameters:

Name Type Description Default
kwargs Any

The arguments to bind to the Runnable.

{}

Returns:

Type Description
Runnable[Input, Output]

A new Runnable with the arguments bound.

Example
from langchain_ollama import ChatOllama
from langchain_core.output_parsers import StrOutputParser

llm = ChatOllama(model="llama3.1")

# Without bind.
chain = llm | StrOutputParser()

chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two three four five.'

# With bind.
chain = llm.bind(stop=["three"]) | StrOutputParser()

chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two'

with_config

with_config(
    config: RunnableConfig | None = None, **kwargs: Any
) -> Runnable[Input, Output]

Bind config to a Runnable, returning a new Runnable.

Parameters:

Name Type Description Default
config RunnableConfig | None

The config to bind to the Runnable.

None
kwargs Any

Additional keyword arguments to pass to the Runnable.

{}

Returns:

Type Description
Runnable[Input, Output]

A new Runnable with the config bound.
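A minimal sketch: config values bound here (a run name and tags) are merged into every subsequent invocation.

from langchain_core.runnables import RunnableLambda

chain = RunnableLambda(lambda x: x + 1).with_config(
    run_name="increment", tags=["math"]
)

print(chain.invoke(1))  # 2, traced under run_name "increment" with tag "math"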

with_listeners

with_listeners(
    *,
    on_start: (
        Callable[[Run], None]
        | Callable[[Run, RunnableConfig], None]
        | None
    ) = None,
    on_end: (
        Callable[[Run], None]
        | Callable[[Run, RunnableConfig], None]
        | None
    ) = None,
    on_error: (
        Callable[[Run], None]
        | Callable[[Run, RunnableConfig], None]
        | None
    ) = None
) -> Runnable[Input, Output]

Bind lifecycle listeners to a Runnable, returning a new Runnable.

The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run.

Parameters:

Name Type Description Default
on_start Callable[[Run], None] | Callable[[Run, RunnableConfig], None] | None

Called before the Runnable starts running, with the Run object. Defaults to None.

None
on_end Callable[[Run], None] | Callable[[Run, RunnableConfig], None] | None

Called after the Runnable finishes running, with the Run object. Defaults to None.

None
on_error Callable[[Run], None] | Callable[[Run, RunnableConfig], None] | None

Called if the Runnable throws an error, with the Run object. Defaults to None.

None

Returns:

Type Description
Runnable[Input, Output]

A new Runnable with the listeners bound.

Example
from langchain_core.runnables import RunnableLambda
from langchain_core.tracers.schemas import Run

import time


def test_runnable(time_to_sleep: int):
    time.sleep(time_to_sleep)


def fn_start(run_obj: Run):
    print("start_time:", run_obj.start_time)


def fn_end(run_obj: Run):
    print("end_time:", run_obj.end_time)


chain = RunnableLambda(test_runnable).with_listeners(
    on_start=fn_start, on_end=fn_end
)
chain.invoke(2)

with_alisteners

with_alisteners(
    *,
    on_start: AsyncListener | None = None,
    on_end: AsyncListener | None = None,
    on_error: AsyncListener | None = None
) -> Runnable[Input, Output]

Bind async lifecycle listeners to a Runnable.

Returns a new Runnable.

The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run.

Parameters:

Name Type Description Default
on_start AsyncListener | None

Called asynchronously before the Runnable starts running, with the Run object. Defaults to None.

None
on_end AsyncListener | None

Called asynchronously after the Runnable finishes running, with the Run object. Defaults to None.

None
on_error AsyncListener | None

Called asynchronously if the Runnable throws an error, with the Run object. Defaults to None.

None

Returns:

Type Description
Runnable[Input, Output]

A new Runnable with the listeners bound.

Example
from langchain_core.runnables import RunnableLambda
from langchain_core.tracers.schemas import Run
from datetime import datetime, timezone
import time
import asyncio

def format_t(timestamp: float) -> str:
    return datetime.fromtimestamp(timestamp, tz=timezone.utc).isoformat()

async def test_runnable(time_to_sleep: int):
    print(f"Runnable[{time_to_sleep}s]: starts at {format_t(time.time())}")
    await asyncio.sleep(time_to_sleep)
    print(f"Runnable[{time_to_sleep}s]: ends at {format_t(time.time())}")

async def fn_start(run_obj: Run):
    print(f"on start callback starts at {format_t(time.time())}")
    await asyncio.sleep(3)
    print(f"on start callback ends at {format_t(time.time())}")

async def fn_end(run_obj: Run):
    print(f"on end callback starts at {format_t(time.time())}")
    await asyncio.sleep(2)
    print(f"on end callback ends at {format_t(time.time())}")

runnable = RunnableLambda(test_runnable).with_alisteners(
    on_start=fn_start,
    on_end=fn_end
)
async def concurrent_runs():
    await asyncio.gather(runnable.ainvoke(2), runnable.ainvoke(3))

asyncio.run(concurrent_runs())
Result:
on start callback starts at 2025-03-01T07:05:22.875378+00:00
on start callback starts at 2025-03-01T07:05:22.875495+00:00
on start callback ends at 2025-03-01T07:05:25.878862+00:00
on start callback ends at 2025-03-01T07:05:25.878947+00:00
Runnable[2s]: starts at 2025-03-01T07:05:25.879392+00:00
Runnable[3s]: starts at 2025-03-01T07:05:25.879804+00:00
Runnable[2s]: ends at 2025-03-01T07:05:27.881998+00:00
on end callback starts at 2025-03-01T07:05:27.882360+00:00
Runnable[3s]: ends at 2025-03-01T07:05:28.881737+00:00
on end callback starts at 2025-03-01T07:05:28.882428+00:00
on end callback ends at 2025-03-01T07:05:29.883893+00:00
on end callback ends at 2025-03-01T07:05:30.884831+00:00

with_types

with_types(
    *,
    input_type: type[Input] | None = None,
    output_type: type[Output] | None = None
) -> Runnable[Input, Output]

Bind input and output types to a Runnable, returning a new Runnable.

Parameters:

Name Type Description Default
input_type type[Input] | None

The input type to bind to the Runnable. Defaults to None.

None
output_type type[Output] | None

The output type to bind to the Runnable. Defaults to None.

None

Returns:

Type Description
Runnable[Input, Output]

A new Runnable with the types bound.
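A minimal sketch, assuming a Pydantic v2 based langchain-core: the lambda's types cannot be inferred, so they are declared explicitly and surface in the generated schemas.

from langchain_core.runnables import RunnableLambda

chain = RunnableLambda(lambda x: str(x)).with_types(
    input_type=int, output_type=str
)

print(chain.input_schema.model_json_schema())   # e.g. {..., 'type': 'integer'}
print(chain.output_schema.model_json_schema())  # e.g. {..., 'type': 'string'}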

with_retry

with_retry(
    *,
    retry_if_exception_type: tuple[
        type[BaseException], ...
    ] = (Exception,),
    wait_exponential_jitter: bool = True,
    exponential_jitter_params: (
        ExponentialJitterParams | None
    ) = None,
    stop_after_attempt: int = 3
) -> Runnable[Input, Output]

Create a new Runnable that retries the original Runnable on exceptions.

Parameters:

Name Type Description Default
retry_if_exception_type tuple[type[BaseException], ...]

A tuple of exception types to retry on. Defaults to (Exception,).

(Exception,)
wait_exponential_jitter bool

Whether to add jitter to the wait time between retries. Defaults to True.

True
stop_after_attempt int

The maximum number of attempts to make before giving up. Defaults to 3.

3
exponential_jitter_params ExponentialJitterParams | None

Parameters for tenacity.wait_exponential_jitter. Namely: initial, max, exp_base, and jitter (all float values).

None

Returns:

Type Description
Runnable[Input, Output]

A new Runnable that retries the original Runnable on exceptions.

Example
from langchain_core.runnables import RunnableLambda

count = 0


def _lambda(x: int) -> None:
    global count
    count = count + 1
    if x == 1:
        raise ValueError("x is 1")
    else:
        pass


runnable = RunnableLambda(_lambda)
try:
    runnable.with_retry(
        stop_after_attempt=2,
        retry_if_exception_type=(ValueError,),
    ).invoke(1)
except ValueError:
    pass

assert count == 2

map

map() -> Runnable[list[Input], list[Output]]

Return a new Runnable that maps a list of inputs to a list of outputs.

Calls invoke with each input.

Returns:

Type Description
Runnable[list[Input], list[Output]]

A new Runnable that maps a list of inputs to a list of outputs.

Example
from langchain_core.runnables import RunnableLambda


def _lambda(x: int) -> int:
    return x + 1


runnable = RunnableLambda(_lambda)
print(runnable.map().invoke([1, 2, 3]))  # [2, 3, 4]

with_fallbacks

with_fallbacks(
    fallbacks: Sequence[Runnable[Input, Output]],
    *,
    exceptions_to_handle: tuple[
        type[BaseException], ...
    ] = (Exception,),
    exception_key: str | None = None
) -> RunnableWithFallbacks[Input, Output]

Add fallbacks to a Runnable, returning a new Runnable.

The new Runnable will try the original Runnable, and then each fallback in order, upon failures.

Parameters:

Name Type Description Default
fallbacks Sequence[Runnable[Input, Output]]

A sequence of runnables to try if the original Runnable fails.

required
exceptions_to_handle tuple[type[BaseException], ...]

A tuple of exception types to handle. Defaults to (Exception,).

(Exception,)
exception_key str | None

If string is specified then handled exceptions will be passed to fallbacks as part of the input under the specified key. If None, exceptions will not be passed to fallbacks. If used, the base Runnable and its fallbacks must accept a dictionary as input. Defaults to None.

None

Returns:

Type Description
RunnableWithFallbacks[Input, Output]

A new Runnable that will try the original Runnable, and then each fallback in order, upon failures.

Example
from typing import Iterator

from langchain_core.runnables import RunnableGenerator


def _generate_immediate_error(input: Iterator) -> Iterator[str]:
    raise ValueError()
    yield ""


def _generate(input: Iterator) -> Iterator[str]:
    yield from "foo bar"


runnable = RunnableGenerator(_generate_immediate_error).with_fallbacks(
    [RunnableGenerator(_generate)]
)
print("".join(runnable.stream({})))  # foo bar


as_tool

as_tool(
    args_schema: type[BaseModel] | None = None,
    *,
    name: str | None = None,
    description: str | None = None,
    arg_types: dict[str, type] | None = None
) -> BaseTool

Create a BaseTool from a Runnable.

as_tool will instantiate a BaseTool with a name, description, and args_schema from a Runnable. Where possible, schemas are inferred from runnable.get_input_schema. Alternatively (e.g., if the Runnable takes a dict as input and the specific dict keys are not typed), the schema can be specified directly with args_schema. You can also pass arg_types to just specify the required arguments and their types.

Parameters:

Name Type Description Default
args_schema type[BaseModel] | None

The schema for the tool. Defaults to None.

None
name str | None

The name of the tool. Defaults to None.

None
description str | None

The description of the tool. Defaults to None.

None
arg_types dict[str, type] | None

A dictionary of argument names to types. Defaults to None.

None

Returns:

Type Description
BaseTool

A BaseTool instance.

TypedDict input:

from typing_extensions import TypedDict
from langchain_core.runnables import RunnableLambda


class Args(TypedDict):
    a: int
    b: list[int]


def f(x: Args) -> str:
    return str(x["a"] * max(x["b"]))


runnable = RunnableLambda(f)
as_tool = runnable.as_tool()
as_tool.invoke({"a": 3, "b": [1, 2]})

dict input, specifying schema via args_schema:

from typing import Any
from pydantic import BaseModel, Field
from langchain_core.runnables import RunnableLambda

def f(x: dict[str, Any]) -> str:
    return str(x["a"] * max(x["b"]))

class FSchema(BaseModel):
    """Apply a function to an integer and list of integers."""

    a: int = Field(..., description="Integer")
    b: list[int] = Field(..., description="List of ints")

runnable = RunnableLambda(f)
as_tool = runnable.as_tool(FSchema)
as_tool.invoke({"a": 3, "b": [1, 2]})

dict input, specifying schema via arg_types:

from typing import Any
from langchain_core.runnables import RunnableLambda


def f(x: dict[str, Any]) -> str:
    return str(x["a"] * max(x["b"]))


runnable = RunnableLambda(f)
as_tool = runnable.as_tool(arg_types={"a": int, "b": list[int]})
as_tool.invoke({"a": 3, "b": [1, 2]})

String input:

from langchain_core.runnables import RunnableLambda


def f(x: str) -> str:
    return x + "a"


def g(x: str) -> str:
    return x + "z"


runnable = RunnableLambda(f) | g
as_tool = runnable.as_tool()
as_tool.invoke("b")

Added in version 0.2.14

__init__

__init__(*args: Any, **kwargs: Any) -> None

lc_id classmethod

lc_id() -> list[str]

Return a unique identifier for this class for serialization purposes.

The unique identifier is a list of strings that describes the path to the object. For example, for the class langchain.llms.openai.OpenAI, the id is ["langchain", "llms", "openai", "OpenAI"].

to_json

to_json() -> (
    SerializedConstructor | SerializedNotImplemented
)

Serialize the Runnable to JSON.

Returns:

Type Description
SerializedConstructor | SerializedNotImplemented

A JSON-serializable representation of the Runnable.
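A minimal sketch, assuming llm is the AzureChatOpenAI instance from the Instantiate section; the serialized "id" field is the lc_id path described above:

serialized = llm.to_json()
print(serialized["id"])
# e.g. ['langchain', 'chat_models', 'azure_openai', 'AzureChatOpenAI']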

to_json_not_implemented

to_json_not_implemented() -> SerializedNotImplemented

Serialize a "not implemented" object.

Returns:

Type Description
SerializedNotImplemented

SerializedNotImplemented.

configurable_fields

configurable_fields(
    **kwargs: AnyConfigurableField,
) -> RunnableSerializable[Input, Output]

Configure particular Runnable fields at runtime.

Parameters:

Name Type Description Default
**kwargs AnyConfigurableField

A dictionary of ConfigurableField instances to configure.

{}

Raises:

Type Description
ValueError

If a configuration key is not found in the Runnable.

Returns:

Type Description
RunnableSerializable[Input, Output]

A new Runnable with the fields configured.

from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatOpenAI(max_tokens=20).configurable_fields(
    max_tokens=ConfigurableField(
        id="output_token_number",
        name="Max tokens in the output",
        description="The maximum number of tokens in the output",
    )
)

# max_tokens = 20
print("max_tokens_20: ", model.invoke("tell me something about chess").content)

# max_tokens = 200
print(
    "max_tokens_200: ",
    model.with_config(configurable={"output_token_number": 200})
    .invoke("tell me something about chess")
    .content,
)

configurable_alternatives

configurable_alternatives(
    which: ConfigurableField,
    *,
    default_key: str = "default",
    prefix_keys: bool = False,
    **kwargs: (
        Runnable[Input, Output]
        | Callable[[], Runnable[Input, Output]]
    )
) -> RunnableSerializable[Input, Output]

Configure alternatives for Runnables that can be set at runtime.

Parameters:

Name Type Description Default
which ConfigurableField

The ConfigurableField instance that will be used to select the alternative.

required
default_key str

The default key to use if no alternative is selected. Defaults to 'default'.

'default'
prefix_keys bool

Whether to prefix the keys with the ConfigurableField id. Defaults to False.

False
**kwargs Runnable[Input, Output] | Callable[[], Runnable[Input, Output]]

A dictionary of keys to Runnable instances or callables that return Runnable instances.

{}

Returns:

Type Description
RunnableSerializable[Input, Output]

A new Runnable with the alternatives configured.

from langchain_anthropic import ChatAnthropic
from langchain_core.runnables.utils import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatAnthropic(
    model_name="claude-3-7-sonnet-20250219"
).configurable_alternatives(
    ConfigurableField(id="llm"),
    default_key="anthropic",
    openai=ChatOpenAI(),
)

# uses the default model ChatAnthropic
print(model.invoke("which organization created you?").content)

# uses ChatOpenAI
print(
    model.with_config(configurable={"llm": "openai"})
    .invoke("which organization created you?")
    .content
)

set_verbose

set_verbose(verbose: bool | None) -> bool

If verbose is None, set it.

This allows users to pass in None as verbose to access the global setting.

Parameters:

Name Type Description Default
verbose bool | None

The verbosity setting to use.

required

Returns:

Type Description
bool

The verbosity setting to use.

get_token_ids

get_token_ids(text: str) -> list[int]

Get the tokens present in the text with the tiktoken package.

get_num_tokens

get_num_tokens(text: str) -> int

Get the number of tokens present in the text.

Useful for checking if an input fits in a model's context window.

Parameters:

Name Type Description Default
text str

The string input to tokenize.

required

Returns:

Type Description
int

The integer number of tokens in the text.
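A minimal sketch of both token helpers, assuming llm is a configured AzureChatOpenAI with model set so tiktoken can pick the right encoding:

token_ids = llm.get_token_ids("The meaning of life is 42")
print(token_ids[:5])  # the first few tiktoken ids
print(llm.get_num_tokens("The meaning of life is 42"))  # equals len(token_ids)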

get_num_tokens_from_messages

get_num_tokens_from_messages(
    messages: Sequence[BaseMessage],
    tools: Optional[
        Sequence[
            Union[dict[str, Any], type, Callable, BaseTool]
        ]
    ] = None,
) -> int

Calculate num tokens for gpt-3.5-turbo and gpt-4 with the tiktoken package.

Requirements: You must have pillow installed to count image tokens when specifying the image as a base64 string, and both pillow and httpx installed when specifying the image as a URL. If these aren't installed, image inputs will be ignored in token counting.

OpenAI reference <https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb>__

Parameters:

Name Type Description Default
messages Sequence[BaseMessage]

The message inputs to tokenize.

required
tools Optional[Sequence[Union[dict[str, Any], type, Callable, BaseTool]]]

If provided, sequence of dict, BaseModel, function, or BaseTools to be converted to tool schemas.

None
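A minimal sketch, assuming llm is a configured AzureChatOpenAI instance:

from langchain_core.messages import HumanMessage, SystemMessage

n_tokens = llm.get_num_tokens_from_messages(
    [
        SystemMessage(content="You are a helpful assistant."),
        HumanMessage(content="Hello!"),
    ]
)
print(n_tokens)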

generate

generate(
    messages: list[list[BaseMessage]],
    stop: list[str] | None = None,
    callbacks: Callbacks = None,
    *,
    tags: list[str] | None = None,
    metadata: dict[str, Any] | None = None,
    run_name: str | None = None,
    run_id: UUID | None = None,
    **kwargs: Any
) -> LLMResult

Pass a sequence of prompts to the model and return model generations.

This method should make use of batched calls for models that expose a batched API.

Use this method when you:

  1. Want to take advantage of batched calls,
  2. Need more output from the model than just the top generated value,
  3. Are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).

Parameters:

Name Type Description Default
messages list[list[BaseMessage]]

List of list of messages.

required
stop list[str] | None

Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

None
callbacks Callbacks

Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.

None
tags list[str] | None

The tags to apply.

None
metadata dict[str, Any] | None

The metadata to apply.

None
run_name str | None

The name of the run.

None
run_id UUID | None

The ID of the run.

None
**kwargs Any

Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

{}

Returns:

Type Description
LLMResult

An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
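A minimal sketch of batched generation, assuming llm is a configured AzureChatOpenAI instance; each inner list is one conversation:

from langchain_core.messages import HumanMessage

result = llm.generate(
    [[HumanMessage(content="Say hi")], [HumanMessage(content="Say bye")]]
)
print(result.generations[0][0].text)  # top generation for the first prompt
print(result.llm_output)  # provider-specific output, e.g. token usage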

agenerate async

agenerate(
    messages: list[list[BaseMessage]],
    stop: list[str] | None = None,
    callbacks: Callbacks = None,
    *,
    tags: list[str] | None = None,
    metadata: dict[str, Any] | None = None,
    run_name: str | None = None,
    run_id: UUID | None = None,
    **kwargs: Any
) -> LLMResult

Asynchronously pass a sequence of prompts to a model and return generations.

This method should make use of batched calls for models that expose a batched API.

Use this method when you:

  1. Want to take advantage of batched calls,
  2. Need more output from the model than just the top generated value,
  3. Are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).

Parameters:

Name Type Description Default
messages list[list[BaseMessage]]

List of list of messages.

required
stop list[str] | None

Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

None
callbacks Callbacks

Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.

None
tags list[str] | None

The tags to apply.

None
metadata dict[str, Any] | None

The metadata to apply.

None
run_name str | None

The name of the run.

None
run_id UUID | None

The ID of the run.

None
**kwargs Any

Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

{}

Returns:

Type Description
LLMResult

An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.

dict

dict(**kwargs: Any) -> dict

Return a dictionary of the LLM.

bind_tools

bind_tools(
    tools: Sequence[
        Union[dict[str, Any], type, Callable, BaseTool]
    ],
    *,
    tool_choice: Optional[
        Union[
            dict,
            str,
            Literal["auto", "none", "required", "any"],
            bool,
        ]
    ] = None,
    strict: Optional[bool] = None,
    parallel_tool_calls: Optional[bool] = None,
    **kwargs: Any
) -> Runnable[LanguageModelInput, AIMessage]

Bind tool-like objects to this chat model.

Assumes the model is compatible with the OpenAI tool-calling API.

Parameters:

Name Type Description Default
tools Sequence[Union[dict[str, Any], type, Callable, BaseTool]]

A list of tool definitions to bind to this chat model. Supports any tool definition handled by langchain_core.utils.function_calling.convert_to_openai_tool.

required
tool_choice Optional[Union[dict, str, Literal['auto', 'none', 'required', 'any'], bool]]

Which tool to require the model to call. Options are:

  • str of the form '<<tool_name>>': calls the <<tool_name>> tool.
  • 'auto': automatically selects a tool (including no tool).
  • 'none': does not call a tool.
  • 'any' or 'required' or True: force at least one tool to be called.
  • dict of the form {"type": "function", "function": {"name": <<tool_name>>}}: calls the <<tool_name>> tool.
  • False or None: no effect, default OpenAI behavior.
None
strict Optional[bool]

If True, model output is guaranteed to exactly match the JSON Schema provided in the tool definition. The input schema will also be validated according to the supported schemas <https://platform.openai.com/docs/guides/structured-outputs/supported-schemas?api-mode=responses#supported-schemas>__. If False, input schema will not be validated and model output will not be validated. If None, strict argument will not be passed to the model.

None
parallel_tool_calls Optional[bool]

Set to False to disable parallel tool use. Defaults to None (no specification, which allows parallel tool use).

None
kwargs Any

Any additional parameters are passed directly to langchain_openai.chat_models.base.ChatOpenAI.bind.

{}

Behavior changed in 0.1.21

Support for strict argument added.
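A minimal sketch, assuming llm is a configured AzureChatOpenAI instance; GetWeather is a hypothetical tool schema:

.. code-block:: python

from pydantic import BaseModel, Field

class GetWeather(BaseModel):
    """Get the current weather in a given location."""

    location: str = Field(..., description="City and country, e.g. Paris, France")

llm_with_tools = llm.bind_tools([GetWeather], tool_choice="auto")
msg = llm_with_tools.invoke("What's the weather in Paris?")
print(msg.tool_calls)  # e.g. [{'name': 'GetWeather', 'args': {'location': ...}, ...}]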

build_extra classmethod

build_extra(values: dict[str, Any]) -> Any

Build extra kwargs from additional params that were passed in.

validate_temperature classmethod

validate_temperature(values: dict[str, Any]) -> Any

Validate temperature parameter for different models.

  • o1 models only allow temperature=1
  • gpt-5 models (excluding gpt-5-chat) only allow temperature=1 or unset (defaults to 1)

validate_environment

validate_environment() -> Self

Validate that the API key and Python package exist in the environment.

get_lc_namespace classmethod

get_lc_namespace() -> list[str]

Get the namespace of the langchain object.

is_lc_serializable classmethod

is_lc_serializable() -> bool

Return whether this model can be serialized by LangChain.

with_structured_output

with_structured_output(
    schema: Optional[_DictOrPydanticClass] = None,
    *,
    method: Literal[
        "function_calling", "json_mode", "json_schema"
    ] = "json_schema",
    include_raw: bool = False,
    strict: Optional[bool] = None,
    **kwargs: Any
) -> Runnable[LanguageModelInput, _DictOrPydantic]

Model wrapper that returns outputs formatted to match the given schema.

Parameters:

Name Type Description Default
schema Optional[_DictOrPydanticClass]

The output schema. Can be passed in as:

  • a JSON Schema,
  • a TypedDict class,
  • a Pydantic class, or
  • an OpenAI function/tool schema.

If schema is a Pydantic class then the model output will be a Pydantic instance of that class, and the model-generated fields will be validated by the Pydantic class. Otherwise the model output will be a dict and will not be validated. See langchain_core.utils.function_calling.convert_to_openai_tool for more on how to properly specify types and descriptions of schema fields when specifying a Pydantic or TypedDict class.

None
method Literal['function_calling', 'json_mode', 'json_schema']

The method for steering model generation, one of:

  • 'json_schema': Uses OpenAI's Structured Output API <https://platform.openai.com/docs/guides/structured-outputs>__. Supported for 'gpt-4o-mini', 'gpt-4o-2024-08-06', 'o1', and later models.
  • 'function_calling': Uses OpenAI's tool-calling (formerly called function calling) API <https://platform.openai.com/docs/guides/function-calling>__
  • 'json_mode': Uses OpenAI's JSON mode <https://platform.openai.com/docs/guides/structured-outputs/json-mode>__. Note that if using JSON mode you must include instructions for formatting the output into the desired schema in the model call

Learn more about the differences between the methods and which models support which methods here <https://platform.openai.com/docs/guides/structured-outputs/function-calling-vs-response-format>__.

'json_schema'
include_raw bool

If False then only the parsed structured output is returned. If an error occurs during model output parsing it will be raised. If True then both the raw model response (a BaseMessage) and the parsed model response will be returned. If an error occurs during output parsing it will be caught and returned as well. The final output is always a dict with keys 'raw', 'parsed', and 'parsing_error'.

False
strict Optional[bool]
  • True: Model output is guaranteed to exactly match the schema. The input schema will also be validated according to the supported schemas <https://platform.openai.com/docs/guides/structured-outputs/supported-schemas?api-mode=responses#supported-schemas>__.
  • False: Input schema will not be validated and model output will not be validated.
  • None: strict argument will not be passed to the model.

If schema is specified via TypedDict or JSON schema, strict is not enabled by default. Pass strict=True to enable it.

Note

strict can only be non-null if method is 'json_schema' or 'function_calling'.

None
tools

A list of tool-like objects to bind to the chat model. Requires that:

  • method is 'json_schema' (default).
  • strict=True
  • include_raw=True

If a model elects to call a tool, the resulting AIMessage in 'raw' will include tool calls.

Example

.. code-block:: python

from langchain.chat_models import init_chat_model
from pydantic import BaseModel

class ResponseSchema(BaseModel):
    response: str

def get_weather(location: str) -> str:
    \"\"\"Get weather at a location.\"\"\"
    pass

llm = init_chat_model("openai:gpt-4o-mini")

structured_llm = llm.with_structured_output(
    ResponseSchema,
    tools=[get_weather],
    strict=True,
    include_raw=True,
)

structured_llm.invoke("What's the weather in Boston?")

.. code-block:: python

{
    "raw": AIMessage(content="", tool_calls=[...], ...),
    "parsing_error": None,
    "parsed": None,
}
required
kwargs Any

Additional keyword args are passed through to the model.

{}

Returns:

Type Description
Runnable[LanguageModelInput, _DictOrPydantic]

A Runnable that takes the same inputs as a langchain_core.language_models.chat.BaseChatModel. If include_raw is False and schema is a Pydantic class, the Runnable outputs an instance of schema (i.e., a Pydantic object). Otherwise, if include_raw is False, the Runnable outputs a dict.

If include_raw is True, the Runnable outputs a dict with keys:

  • 'raw': BaseMessage
  • 'parsed': None if there was a parsing error, otherwise the type depends on the schema as described above.
  • 'parsing_error': Optional[BaseException]

Behavior changed in 0.1.20

Added support for TypedDict class schema.

Behavior changed in 0.1.21

Support for strict argument added. Support for method="json_schema" added.

Behavior changed in 0.3.0

method default changed from "function_calling" to "json_schema".

Behavior changed in 0.3.12

Support for tools added.

Behavior changed in 0.3.21

Pass kwargs through to the model.

Example: schema=Pydantic class, method='json_schema', include_raw=False, strict=True

Note, OpenAI has a number of restrictions on what types of schemas can be provided if strict=True. When using Pydantic, your model cannot specify any Field metadata (like min/max constraints) and fields cannot have default values.

See all constraints here <https://platform.openai.com/docs/guides/structured-outputs/supported-schemas>__.

.. code-block:: python

from typing import Optional

from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field

class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''

    answer: str
    justification: Optional[str] = Field(
        default=..., description="A justification for the answer."
    )

llm = ChatOpenAI(model="gpt-4o", temperature=0)
structured_llm = llm.with_structured_output(AnswerWithJustification)

structured_llm.invoke(
    "What weighs more a pound of bricks or a pound of feathers"
)

# -> AnswerWithJustification(
#     answer='They weigh the same',
#     justification='Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ.'
# )
Example: schema=Pydantic class, method='function_calling', include_raw=False, strict=False

.. code-block:: python

from typing import Optional

from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field

class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''

    answer: str
    justification: Optional[str] = Field(
        default=..., description="A justification for the answer."
    )

llm = ChatOpenAI(model="gpt-4o", temperature=0)
structured_llm = llm.with_structured_output(
    AnswerWithJustification, method="function_calling"
)

structured_llm.invoke(
    "What weighs more a pound of bricks or a pound of feathers"
)

# -> AnswerWithJustification(
#     answer='They weigh the same',
#     justification='Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ.'
# )
Example: schema=Pydantic class, method='json_schema', include_raw=True

.. code-block:: python

from langchain_openai import ChatOpenAI
from pydantic import BaseModel

class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''

    answer: str
    justification: str

llm = ChatOpenAI(model="gpt-4o", temperature=0)
structured_llm = llm.with_structured_output(
    AnswerWithJustification, include_raw=True
)

structured_llm.invoke(
    "What weighs more a pound of bricks or a pound of feathers"
)
# -> {
#     'raw': AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_Ao02pnFYXD6GN1yzc0uXPsvF', 'function': {'arguments': '{"answer":"They weigh the same.","justification":"Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ."}', 'name': 'AnswerWithJustification'}, 'type': 'function'}]}),
#     'parsed': AnswerWithJustification(answer='They weigh the same.', justification='Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ.'),
#     'parsing_error': None
# }
Example: schema=TypedDict class, method='json_schema', include_raw=False, strict=False

.. code-block:: python

# IMPORTANT: If you are using Python <=3.8, you need to import Annotated
# from typing_extensions, not from typing.
from typing_extensions import Annotated, TypedDict

from langchain_openai import ChatOpenAI

class AnswerWithJustification(TypedDict):
    '''An answer to the user question along with justification for the answer.'''

    answer: str
    justification: Annotated[
        Optional[str], None, "A justification for the answer."
    ]

llm = ChatOpenAI(model="gpt-4o", temperature=0)
structured_llm = llm.with_structured_output(AnswerWithJustification)

structured_llm.invoke(
    "What weighs more a pound of bricks or a pound of feathers"
)
# -> {
#     'answer': 'They weigh the same',
#     'justification': 'Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume and density of the two substances differ.'
# }
Example: schema=OpenAI function schema, method='json_schema', include_raw=False

.. code-block:: python

from langchain_openai import ChatOpenAI

oai_schema = {
    'name': 'AnswerWithJustification',
    'description': 'An answer to the user question along with justification for the answer.',
    'parameters': {
        'type': 'object',
        'properties': {
            'answer': {'type': 'string'},
            'justification': {'description': 'A justification for the answer.', 'type': 'string'}
        },
        'required': ['answer']
    }
}

llm = ChatOpenAI(model="gpt-4o", temperature=0)
structured_llm = llm.with_structured_output(oai_schema)

structured_llm.invoke(
    "What weighs more a pound of bricks or a pound of feathers"
)
# -> {
#     'answer': 'They weigh the same',
#     'justification': 'Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume and density of the two substances differ.'
# }
Example: schema=Pydantic class, method='json_mode', include_raw=True

.. code-block::

from langchain_openai import ChatOpenAI
from pydantic import BaseModel

class AnswerWithJustification(BaseModel):
    answer: str
    justification: str

llm = ChatOpenAI(model="gpt-4o", temperature=0)
structured_llm = llm.with_structured_output(
    AnswerWithJustification,
    method="json_mode",
    include_raw=True
)

structured_llm.invoke(
    "Answer the following question. "
    "Make sure to return a JSON blob with keys 'answer' and 'justification'.\\n\\n"
    "What's heavier a pound of bricks or a pound of feathers?"
)
# -> {
#     'raw': AIMessage(content='{\n    "answer": "They are both the same weight.",\n    "justification": "Both a pound of bricks and a pound of feathers weigh one pound. The difference lies in the volume and density of the materials, not the weight." \n}'),
#     'parsed': AnswerWithJustification(answer='They are both the same weight.', justification='Both a pound of bricks and a pound of feathers weigh one pound. The difference lies in the volume and density of the materials, not the weight.'),
#     'parsing_error': None
# }
Example: schema=None, method='json_mode', include_raw=True

.. code-block::

structured_llm = llm.with_structured_output(method="json_mode", include_raw=True)

structured_llm.invoke(
    "Answer the following question. "
    "Make sure to return a JSON blob with keys 'answer' and 'justification'.\\n\\n"
    "What's heavier a pound of bricks or a pound of feathers?"
)
# -> {
#     'raw': AIMessage(content='{\n    "answer": "They are both the same weight.",\n    "justification": "Both a pound of bricks and a pound of feathers weigh one pound. The difference lies in the volume and density of the materials, not the weight." \n}'),
#     'parsed': {
#         'answer': 'They are both the same weight.',
#         'justification': 'Both a pound of bricks and a pound of feathers weigh one pound. The difference lies in the volume and density of the materials, not the weight.'
#     },
#     'parsing_error': None
# }

AzureOpenAIEmbeddings

Bases: OpenAIEmbeddings

AzureOpenAI embedding model integration.

Setup

To access AzureOpenAI embedding models you'll need to create an Azure account, get an API key, and install the langchain-openai integration package.

You'll need to have an Azure OpenAI instance deployed. You can deploy a version on Azure Portal following this guide.

Once you have your instance running, make sure you have the name of your instance and key. You can find the key in the Azure Portal, under the “Keys and Endpoint” section of your instance.

.. code-block:: bash

pip install -U langchain_openai

# Set up your environment variables (or pass them directly to the model)
export AZURE_OPENAI_API_KEY="your-api-key"
export AZURE_OPENAI_ENDPOINT="https://<your-endpoint>.openai.azure.com/"
export AZURE_OPENAI_API_VERSION="2024-02-01"

Key init args — embedding params: model: str Name of AzureOpenAI model to use. dimensions: Optional[int] Number of dimensions for the embeddings. Can be specified only if the underlying model supports it.

Key init args — client params: api_key: Optional[SecretStr] AzureOpenAI API key. Automatically inferred from env var AZURE_OPENAI_API_KEY if not provided.

See full list of supported init args and their descriptions in the params section.

Instantiate

.. code-block:: python

from langchain_openai import AzureOpenAIEmbeddings

embed = AzureOpenAIEmbeddings(
    model="text-embedding-3-large",
    # dimensions=1024,  # Can specify dimensions with new text-embedding-3 models
    # azure_endpoint="https://<your-endpoint>.openai.azure.com/",  # If not provided, will read env variable AZURE_OPENAI_ENDPOINT
    # api_key=...,  # Can provide an API key directly. If missing, will read env variable AZURE_OPENAI_API_KEY
    # openai_api_version=...,  # If not provided, will read env variable AZURE_OPENAI_API_VERSION
)
Embed single text

.. code-block:: python

input_text = "The meaning of life is 42"
vector = embed.embed_query(input_text)
print(vector[:3])

.. code-block:: python

[-0.024603435769677162, -0.007543657906353474, 0.0039630369283258915]
Embed multiple texts

.. code-block:: python

 input_texts = ["Document 1...", "Document 2..."]
vectors = embed.embed_documents(input_texts)
print(len(vectors))
# The first 3 coordinates for the first vector
print(vectors[0][:3])

.. code-block:: python

2
[-0.024603435769677162, -0.007543657906353474, 0.0039630369283258915]
Async

.. code-block:: python

vector = await embed.aembed_query(input_text)

print(vector[:3])

# multiple:
# await embed.aembed_documents(input_texts)

.. code-block:: python

[-0.009100092574954033, 0.005071679595857859, -0.0029193938244134188]

Methods:

Name Description
embed_documents

Call out to OpenAI's embedding endpoint for embedding search docs.

embed_query

Call out to OpenAI's embedding endpoint for embedding query text.

aembed_documents

Call out to OpenAI's embedding endpoint async for embedding search docs.

aembed_query

Call out to OpenAI's embedding endpoint async for embedding query text.

build_extra

Build extra kwargs from additional params that were passed in.

validate_environment

Validate that the API key and Python package exist in the environment.

Attributes:

Name Type Description
dimensions Optional[int]

The number of dimensions the resulting output embeddings should have.

openai_api_base Optional[str]

Base URL path for API requests, leave blank if not using a proxy or service

embedding_ctx_length int

The maximum number of tokens to embed at once.

openai_organization Optional[str]

Automatically inferred from env var OPENAI_ORG_ID if not provided.

max_retries int

Maximum number of retries to make when generating.

request_timeout Optional[Union[float, tuple[float, float], Any]]

Timeout for requests to OpenAI completion API. Can be float, httpx.Timeout or

tiktoken_enabled bool

Set this to False for non-OpenAI implementations of the embeddings API, e.g.

tiktoken_model_name Optional[str]

The model name to pass to tiktoken when using this class.

show_progress_bar bool

Whether to show a progress bar when embedding.

model_kwargs dict[str, Any]

Holds any model parameters valid for create call not explicitly specified.

skip_empty bool

Whether to skip empty strings when embedding or raise an error.

retry_min_seconds int

Min number of seconds to wait between retries

retry_max_seconds int

Max number of seconds to wait between retries

http_client Union[Any, None]

Optional httpx.Client. Only used for sync invocations. Must specify

http_async_client Union[Any, None]

Optional httpx.AsyncClient. Only used for async invocations. Must specify

check_embedding_ctx_length bool

Whether to check the token length of inputs and automatically split inputs

azure_endpoint Optional[str]

Your Azure endpoint, including the resource.

deployment Optional[str]

A model deployment.

openai_api_key Optional[SecretStr]

Automatically inferred from env var AZURE_OPENAI_API_KEY if not provided.

openai_api_version Optional[str]

Automatically inferred from env var OPENAI_API_VERSION if not provided.

azure_ad_token Optional[SecretStr]

Your Azure Active Directory token.

azure_ad_token_provider Union[Callable[[], str], None]

A function that returns an Azure Active Directory token.

azure_ad_async_token_provider Union[Callable[[], Awaitable[str]], None]

A function that returns an Azure Active Directory token.

chunk_size int

Maximum number of texts to embed in each batch

dimensions class-attribute instance-attribute

dimensions: Optional[int] = None

The number of dimensions the resulting output embeddings should have.

Only supported in text-embedding-3 and later models.

openai_api_base class-attribute instance-attribute

openai_api_base: Optional[str] = Field(
    alias="base_url",
    default_factory=from_env(
        "OPENAI_API_BASE", default=None
    ),
)

Base URL path for API requests, leave blank if not using a proxy or service emulator.

embedding_ctx_length class-attribute instance-attribute

embedding_ctx_length: int = 8191

The maximum number of tokens to embed at once.

openai_organization class-attribute instance-attribute

openai_organization: Optional[str] = Field(
    alias="organization",
    default_factory=from_env(
        ["OPENAI_ORG_ID", "OPENAI_ORGANIZATION"],
        default=None,
    ),
)

Automatically inferred from env var OPENAI_ORG_ID if not provided.

max_retries class-attribute instance-attribute

max_retries: int = 2

Maximum number of retries to make when generating.

request_timeout class-attribute instance-attribute

request_timeout: Optional[
    Union[float, tuple[float, float], Any]
] = Field(default=None, alias="timeout")

Timeout for requests to OpenAI completion API. Can be float, httpx.Timeout or None.

tiktoken_enabled class-attribute instance-attribute

tiktoken_enabled: bool = True

Set this to False for non-OpenAI implementations of the embeddings API, e.g. the --extensions openai extension for text-generation-webui

tiktoken_model_name class-attribute instance-attribute

tiktoken_model_name: Optional[str] = None

The model name to pass to tiktoken when using this class. Tiktoken is used to count the number of tokens in documents to constrain them to be under a certain limit. By default, when set to None, this will be the same as the embedding model name. However, there are some cases where you may want to use this Embedding class with a model name not supported by tiktoken. This can include when using Azure embeddings or when using one of the many model providers that expose an OpenAI-like API but with different models. In those cases, in order to avoid erroring when tiktoken is called, you can specify a model name to use here.

show_progress_bar class-attribute instance-attribute

show_progress_bar: bool = False

Whether to show a progress bar when embedding.

model_kwargs class-attribute instance-attribute

model_kwargs: dict[str, Any] = Field(default_factory=dict)

Holds any model parameters valid for create call not explicitly specified.

skip_empty class-attribute instance-attribute

skip_empty: bool = False

Whether to skip empty strings when embedding or raise an error. Defaults to not skipping.

retry_min_seconds class-attribute instance-attribute

retry_min_seconds: int = 4

Min number of seconds to wait between retries

retry_max_seconds class-attribute instance-attribute

retry_max_seconds: int = 20

Max number of seconds to wait between retries

http_client class-attribute instance-attribute

http_client: Union[Any, None] = None

Optional httpx.Client. Only used for sync invocations. Must specify http_async_client as well if you'd like a custom client for async invocations.

http_async_client class-attribute instance-attribute

http_async_client: Union[Any, None] = None

Optional httpx.AsyncClient. Only used for async invocations. Must specify http_client as well if you'd like a custom client for sync invocations.

check_embedding_ctx_length class-attribute instance-attribute

check_embedding_ctx_length: bool = True

Whether to check the token length of inputs and automatically split inputs longer than embedding_ctx_length.

azure_endpoint class-attribute instance-attribute

azure_endpoint: Optional[str] = Field(
    default_factory=from_env(
        "AZURE_OPENAI_ENDPOINT", default=None
    )
)

Your Azure endpoint, including the resource.

Automatically inferred from env var AZURE_OPENAI_ENDPOINT if not provided.

Example: https://example-resource.azure.openai.com/

deployment class-attribute instance-attribute

deployment: Optional[str] = Field(
    default=None, alias="azure_deployment"
)

A model deployment.

If given sets the base client URL to include /deployments/{azure_deployment}.

Note

This means you won't be able to use non-deployment endpoints.

openai_api_key class-attribute instance-attribute

openai_api_key: Optional[SecretStr] = Field(
    alias="api_key",
    default_factory=secret_from_env(
        ["AZURE_OPENAI_API_KEY", "OPENAI_API_KEY"],
        default=None,
    ),
)

Automatically inferred from env var AZURE_OPENAI_API_KEY if not provided.

openai_api_version class-attribute instance-attribute

openai_api_version: Optional[str] = Field(
    default_factory=from_env(
        "OPENAI_API_VERSION", default="2023-05-15"
    ),
    alias="api_version",
)

Automatically inferred from env var OPENAI_API_VERSION if not provided.

Set to '2023-05-15' by default if env variable OPENAI_API_VERSION is not set.

azure_ad_token class-attribute instance-attribute

azure_ad_token: Optional[SecretStr] = Field(
    default_factory=secret_from_env(
        "AZURE_OPENAI_AD_TOKEN", default=None
    )
)

Your Azure Active Directory token.

Automatically inferred from env var AZURE_OPENAI_AD_TOKEN if not provided.

For more, see this page. <https://www.microsoft.com/en-us/security/business/identity-access/microsoft-entra-id>__

azure_ad_token_provider class-attribute instance-attribute

azure_ad_token_provider: Union[Callable[[], str], None] = (
    None
)

A function that returns an Azure Active Directory token.

Will be invoked on every sync request. For async requests, will be invoked if azure_ad_async_token_provider is not provided.

azure_ad_async_token_provider class-attribute instance-attribute

azure_ad_async_token_provider: Union[
    Callable[[], Awaitable[str]], None
] = None

A function that returns an Azure Active Directory token.

Will be invoked on every async request.

chunk_size class-attribute instance-attribute

chunk_size: int = 2048

Maximum number of texts to embed in each batch

embed_documents

embed_documents(
    texts: list[str],
    chunk_size: Optional[int] = None,
    **kwargs: Any
) -> list[list[float]]

Call out to OpenAI's embedding endpoint for embedding search docs.

Parameters:

Name Type Description Default
texts list[str]

The list of texts to embed.

required
chunk_size Optional[int]

The chunk size of embeddings. If None, will use the chunk size specified by the class.

None
kwargs Any

Additional keyword arguments to pass to the embedding API.

{}

Returns:

Type Description
list[list[float]]

List of embeddings, one for each text.
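For instance, to cap the batch size for a single call (a sketch assuming the embed instance from the Instantiate section):

.. code-block:: python

vectors = embed.embed_documents(
    ["Document 1...", "Document 2..."],
    chunk_size=16,  # override the class-level batch size for this call
)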

embed_query

embed_query(text: str, **kwargs: Any) -> list[float]

Call out to OpenAI's embedding endpoint for embedding query text.

Parameters:

Name Type Description Default
text str

The text to embed.

required
kwargs Any

Additional keyword arguments to pass to the embedding API.

{}

Returns:

Type Description
list[float]

Embedding for the text.

aembed_documents async

aembed_documents(
    texts: list[str],
    chunk_size: Optional[int] = None,
    **kwargs: Any
) -> list[list[float]]

Call out to OpenAI's embedding endpoint async for embedding search docs.

Parameters:

Name Type Description Default
texts list[str]

The list of texts to embed.

required
chunk_size Optional[int]

The chunk size of embeddings. If None, will use the chunk size specified by the class.

None
kwargs Any

Additional keyword arguments to pass to the embedding API.

{}

Returns:

Type Description
list[list[float]]

List of embeddings, one for each text.

aembed_query async

aembed_query(text: str, **kwargs: Any) -> list[float]

Call out to OpenAI's embedding endpoint async for embedding query text.

Parameters:

Name Type Description Default
text str

The text to embed.

required
kwargs Any

Additional keyword arguments to pass to the embedding API.

{}

Returns:

Type Description
list[float]

Embedding for the text.

build_extra classmethod

build_extra(values: dict[str, Any]) -> Any

Build extra kwargs from additional params that were passed in.

validate_environment

validate_environment() -> Self

Validate that the API key and Python package exist in the environment.

OpenAIEmbeddings

Bases: BaseModel, Embeddings

OpenAI embedding model integration.

Setup

Install langchain_openai and set environment variable OPENAI_API_KEY.

.. code-block:: bash

pip install -U langchain_openai
export OPENAI_API_KEY="your-api-key"

Key init args — embedding params: model: str Name of OpenAI model to use. dimensions: Optional[int] = None The number of dimensions the resulting output embeddings should have. Only supported in 'text-embedding-3' and later models.

Key init args — client params: api_key: Optional[SecretStr] = None OpenAI API key. organization: Optional[str] = None OpenAI organization ID. If not passed in will be read from env var OPENAI_ORG_ID. max_retries: int = 2 Maximum number of retries to make when generating. request_timeout: Optional[Union[float, Tuple[float, float], Any]] = None Timeout for requests to OpenAI completion API

See full list of supported init args and their descriptions in the params section.

Instantiate

.. code-block:: python

from langchain_openai import OpenAIEmbeddings

embed = OpenAIEmbeddings(
    model="text-embedding-3-large"
    # With the `text-embedding-3` class
    # of models, you can specify the size
    # of the embeddings you want returned.
    # dimensions=1024
)
Embed single text

.. code-block:: python

input_text = "The meaning of life is 42"
vector = embeddings.embed_query("hello")
print(vector[:3])

.. code-block:: python

[-0.024603435769677162, -0.007543657906353474, 0.0039630369283258915]
Embed multiple texts

.. code-block:: python

vectors = embeddings.embed_documents(["hello", "goodbye"])
# Showing only the first 3 coordinates
print(len(vectors))
print(vectors[0][:3])

.. code-block:: python

2
[-0.024603435769677162, -0.007543657906353474, 0.0039630369283258915]
Async

.. code-block:: python

vector = await embed.aembed_query(input_text)
print(vector[:3])

# multiple:
# await embed.aembed_documents(input_texts)

.. code-block:: python

[-0.009100092574954033, 0.005071679595857859, -0.0029193938244134188]

Methods:

Name Description
build_extra

Build extra kwargs from additional params that were passed in.

validate_environment

Validate that the API key and Python package exist in the environment.

embed_documents

Call out to OpenAI's embedding endpoint for embedding search docs.

aembed_documents

Call out to OpenAI's embedding endpoint async for embedding search docs.

embed_query

Call out to OpenAI's embedding endpoint for embedding query text.

aembed_query

Call out to OpenAI's embedding endpoint async for embedding query text.

Attributes:

Name Type Description
dimensions Optional[int]

The number of dimensions the resulting output embeddings should have.

openai_api_version Optional[str]

Automatically inferred from env var OPENAI_API_VERSION if not provided.

openai_api_base Optional[str]

Base URL path for API requests, leave blank if not using a proxy or service

embedding_ctx_length int

The maximum number of tokens to embed at once.

openai_api_key Optional[SecretStr]

Automatically inferred from env var OPENAI_API_KEY if not provided.

openai_organization Optional[str]

Automatically inferred from env var OPENAI_ORG_ID if not provided.

chunk_size int

Maximum number of texts to embed in each batch

max_retries int

Maximum number of retries to make when generating.

request_timeout Optional[Union[float, tuple[float, float], Any]]

Timeout for requests to OpenAI completion API. Can be float, httpx.Timeout or

tiktoken_enabled bool

Set this to False for non-OpenAI implementations of the embeddings API, e.g.

tiktoken_model_name Optional[str]

The model name to pass to tiktoken when using this class.

show_progress_bar bool

Whether to show a progress bar when embedding.

model_kwargs dict[str, Any]

Holds any model parameters valid for create call not explicitly specified.

skip_empty bool

Whether to skip empty strings when embedding or raise an error.

retry_min_seconds int

Min number of seconds to wait between retries

retry_max_seconds int

Max number of seconds to wait between retries

http_client Union[Any, None]

Optional httpx.Client. Only used for sync invocations. Must specify

http_async_client Union[Any, None]

Optional httpx.AsyncClient. Only used for async invocations. Must specify

check_embedding_ctx_length bool

Whether to check the token length of inputs and automatically split inputs

dimensions class-attribute instance-attribute

dimensions: Optional[int] = None

The number of dimensions the resulting output embeddings should have.

Only supported in text-embedding-3 and later models.

openai_api_version class-attribute instance-attribute

openai_api_version: Optional[str] = Field(
    default_factory=from_env(
        "OPENAI_API_VERSION", default=None
    ),
    alias="api_version",
)

Automatically inferred from env var OPENAI_API_VERSION if not provided.

openai_api_base class-attribute instance-attribute

openai_api_base: Optional[str] = Field(
    alias="base_url",
    default_factory=from_env(
        "OPENAI_API_BASE", default=None
    ),
)

Base URL path for API requests, leave blank if not using a proxy or service emulator.

embedding_ctx_length class-attribute instance-attribute

embedding_ctx_length: int = 8191

The maximum number of tokens to embed at once.

openai_api_key class-attribute instance-attribute

openai_api_key: Optional[SecretStr] = Field(
    alias="api_key",
    default_factory=secret_from_env(
        "OPENAI_API_KEY", default=None
    ),
)

Automatically inferred from env var OPENAI_API_KEY if not provided.

openai_organization class-attribute instance-attribute

openai_organization: Optional[str] = Field(
    alias="organization",
    default_factory=from_env(
        ["OPENAI_ORG_ID", "OPENAI_ORGANIZATION"],
        default=None,
    ),
)

Automatically inferred from env var OPENAI_ORG_ID if not provided.

chunk_size class-attribute instance-attribute

chunk_size: int = 1000

Maximum number of texts to embed in each batch

max_retries class-attribute instance-attribute

max_retries: int = 2

Maximum number of retries to make when generating.

request_timeout class-attribute instance-attribute

request_timeout: Optional[
    Union[float, tuple[float, float], Any]
] = Field(default=None, alias="timeout")

Timeout for requests to OpenAI completion API. Can be float, httpx.Timeout or None.

tiktoken_enabled class-attribute instance-attribute

tiktoken_enabled: bool = True

Set this to False for non-OpenAI implementations of the embeddings API, e.g. the --extensions openai extension for text-generation-webui

tiktoken_model_name class-attribute instance-attribute

tiktoken_model_name: Optional[str] = None

The model name to pass to tiktoken when using this class. Tiktoken is used to count the number of tokens in documents to constrain them to be under a certain limit. By default, when set to None, this will be the same as the embedding model name. However, there are some cases where you may want to use this Embedding class with a model name not supported by tiktoken. This can include when using Azure embeddings or when using one of the many model providers that expose an OpenAI-like API but with different models. In those cases, in order to avoid erroring when tiktoken is called, you can specify a model name to use here.

show_progress_bar class-attribute instance-attribute

show_progress_bar: bool = False

Whether to show a progress bar when embedding.

model_kwargs class-attribute instance-attribute

model_kwargs: dict[str, Any] = Field(default_factory=dict)

Holds any model parameters valid for create call not explicitly specified.

skip_empty class-attribute instance-attribute

skip_empty: bool = False

Whether to skip empty strings when embedding or raise an error. Defaults to not skipping.

retry_min_seconds class-attribute instance-attribute

retry_min_seconds: int = 4

Min number of seconds to wait between retries

retry_max_seconds class-attribute instance-attribute

retry_max_seconds: int = 20

Max number of seconds to wait between retries

http_client class-attribute instance-attribute

http_client: Union[Any, None] = None

Optional httpx.Client. Only used for sync invocations. Must specify http_async_client as well if you'd like a custom client for async invocations.

http_async_client class-attribute instance-attribute

http_async_client: Union[Any, None] = None

Optional httpx.AsyncClient. Only used for async invocations. Must specify http_client as well if you'd like a custom client for sync invocations.
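
A minimal sketch of wiring both custom clients (the proxy URL is a placeholder; on older httpx versions the keyword is proxies rather than proxy):

.. code-block:: python

    import httpx
    from langchain_openai import OpenAIEmbeddings

    embeddings = OpenAIEmbeddings(
        model="text-embedding-3-large",
        # Used for embed_query / embed_documents
        http_client=httpx.Client(proxy="http://localhost:8899"),
        # Used for aembed_query / aembed_documents
        http_async_client=httpx.AsyncClient(proxy="http://localhost:8899"),
    )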

check_embedding_ctx_length class-attribute instance-attribute

check_embedding_ctx_length: bool = True

Whether to check the token length of inputs and automatically split inputs longer than embedding_ctx_length.

build_extra classmethod

build_extra(values: dict[str, Any]) -> Any

Build extra kwargs from additional params that were passed in.

validate_environment

validate_environment() -> Self

Validate that api key and python package exist in environment.

embed_documents

embed_documents(
    texts: list[str],
    chunk_size: Optional[int] = None,
    **kwargs: Any
) -> list[list[float]]

Call out to OpenAI's embedding endpoint for embedding search docs.

Parameters:

Name Type Description Default
texts list[str]

The list of texts to embed.

required
chunk_size Optional[int]

The chunk size of embeddings. If None, will use the chunk size specified by the class.

None
kwargs Any

Additional keyword arguments to pass to the embedding API.

{}

Returns:

Type Description
list[list[float]]

List of embeddings, one for each text.
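
For example, a small usage sketch overriding the batch size for a single call (reuses the embeddings instance from above):

.. code-block:: python

    docs = ["doc one", "doc two", "doc three"]
    # Send at most 2 texts per API request for this call only;
    # otherwise the class-level chunk_size (default 1000) is used.
    vectors = embeddings.embed_documents(docs, chunk_size=2)
    print(len(vectors), len(vectors[0]))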

aembed_documents async

aembed_documents(
    texts: list[str],
    chunk_size: Optional[int] = None,
    **kwargs: Any
) -> list[list[float]]

Call out to OpenAI's embedding endpoint async for embedding search docs.

Parameters:

Name Type Description Default
texts list[str]

The list of texts to embed.

required
chunk_size Optional[int]

The chunk size of embeddings. If None, will use the chunk size specified by the class.

None
kwargs Any

Additional keyword arguments to pass to the embedding API.

{}

Returns:

Type Description
list[list[float]]

List of embeddings, one for each text.

embed_query

embed_query(text: str, **kwargs: Any) -> list[float]

Call out to OpenAI's embedding endpoint for embedding query text.

Parameters:

Name Type Description Default
text str

The text to embed.

required
kwargs Any

Additional keyword arguments to pass to the embedding API.

{}

Returns:

Type Description
list[float]

Embedding for the text.

aembed_query async

aembed_query(text: str, **kwargs: Any) -> list[float]

Call out to OpenAI's embedding endpoint async for embedding query text.

Parameters:

Name Type Description Default
text str

The text to embed.

required
kwargs Any

Additional keyword arguments to pass to the embedding API.

{}

Returns:

Type Description
list[float]

Embedding for the text.

AzureOpenAI

Bases: BaseOpenAI

Azure-specific OpenAI large language models.

To use, you should have the openai python package installed, and the environment variable OPENAI_API_KEY set with your API key.

Any parameters that are valid to be passed to the openai.create call can be passed in, even if not explicitly saved on this class.

Example

.. code-block:: python

from langchain_openai import AzureOpenAI

openai = AzureOpenAI(model_name="gpt-3.5-turbo-instruct")
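
A slightly fuller instantiation sketch; the deployment name and API version are placeholders, not defaults, and standard Azure environment variables are assumed to be set:

.. code-block:: python

    from langchain_openai import AzureOpenAI

    llm = AzureOpenAI(
        azure_deployment="your-deployment",  # placeholder
        api_version="2024-05-01-preview",  # placeholder
        temperature=0,
        max_retries=2,
    )
    print(llm.invoke("Say hello in French:"))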

Methods:

Name Description
get_name

Get the name of the Runnable.

get_input_schema

Get a pydantic model that can be used to validate input to the Runnable.

get_input_jsonschema

Get a JSON schema that represents the input to the Runnable.

get_output_schema

Get a pydantic model that can be used to validate output of the Runnable.

get_output_jsonschema

Get a JSON schema that represents the output of the Runnable.

config_schema

The type of config this Runnable accepts specified as a pydantic model.

get_config_jsonschema

Get a JSON schema that represents the config of the Runnable.

get_graph

Return a graph representation of this Runnable.

get_prompts

Return a list of prompts used by this Runnable.

__or__

Runnable "or" operator.

__ror__

Runnable "reverse-or" operator.

pipe

Pipe runnables.

pick

Pick keys from the output dict of this Runnable.

assign

Assigns new fields to the dict output of this Runnable.

batch_as_completed

Run invoke in parallel on a list of inputs.

abatch_as_completed

Run ainvoke in parallel on a list of inputs.

astream_log

Stream all output from a Runnable, as reported to the callback system.

astream_events

Generate a stream of events.

transform

Transform inputs to outputs.

atransform

Transform inputs to outputs.

bind

Bind arguments to a Runnable, returning a new Runnable.

with_config

Bind config to a Runnable, returning a new Runnable.

with_listeners

Bind lifecycle listeners to a Runnable, returning a new Runnable.

with_alisteners

Bind async lifecycle listeners to a Runnable.

with_types

Bind input and output types to a Runnable, returning a new Runnable.

with_retry

Create a new Runnable that retries the original Runnable on exceptions.

map

Return a new Runnable that maps a list of inputs to a list of outputs.

with_fallbacks

Add fallbacks to a Runnable, returning a new Runnable.

as_tool

Create a BaseTool from a Runnable.

__init__
lc_id

Return a unique identifier for this class for serialization purposes.

to_json

Serialize the Runnable to JSON.

to_json_not_implemented

Serialize a "not implemented" object.

configurable_fields

Configure particular Runnable fields at runtime.

configurable_alternatives

Configure alternatives for Runnables that can be set at runtime.

set_verbose

If verbose is None, set it.

with_structured_output

Not implemented on this class.

get_token_ids

Get the token IDs using the tiktoken package.

get_num_tokens

Get the number of tokens present in the text.

get_num_tokens_from_messages

Get the number of tokens in the messages.

generate

Pass a sequence of prompts to a model and return generations.

agenerate

Asynchronously pass a sequence of prompts to a model and return generations.

__str__

Return a string representation of the object for printing.

dict

Return a dictionary of the LLM.

save

Save the LLM.

build_extra

Build extra kwargs from additional params that were passed in.

get_sub_prompts

Get the sub prompts for llm call.

create_llm_result

Create the LLMResult from the choices and prompts.

modelname_to_contextsize

Calculate the maximum number of tokens possible to generate for a model.

max_tokens_for_prompt

Calculate the maximum number of tokens possible to generate for a prompt.

get_lc_namespace

Get the namespace of the langchain object.

is_lc_serializable

Return whether this model can be serialized by LangChain.

validate_environment

Validate that api key and python package exist in environment.

Attributes:

Name Type Description
InputType TypeAlias

Get the input type for this runnable.

OutputType type[str]

Get the output type for this runnable.

input_schema type[BaseModel]

The type of input this Runnable accepts specified as a pydantic model.

output_schema type[BaseModel]

Output schema.

config_specs list[ConfigurableFieldSpec]

List configurable fields for this Runnable.

cache BaseCache | bool | None

Whether to cache the response.

verbose bool

Whether to print out response text.

callbacks Callbacks

Callbacks to add to the run trace.

tags list[str] | None

Tags to add to the run trace.

metadata dict[str, Any] | None

Metadata to add to the run trace.

custom_get_token_ids Callable[[str], list[int]] | None

Optional encoder to use for counting tokens.

model_name str

Model name to use.

temperature float

What sampling temperature to use.

max_tokens int

The maximum number of tokens to generate in the completion.

top_p float

Total probability mass of tokens to consider at each step.

frequency_penalty float

Penalizes repeated tokens according to frequency.

presence_penalty float

Penalizes repeated tokens.

n int

How many completions to generate for each prompt.

best_of int

Generates best_of completions server-side and returns the "best".

model_kwargs dict[str, Any]

Holds any model parameters valid for create call not explicitly specified.

openai_api_base Optional[str]

Base URL path for API requests, leave blank if not using a proxy or service emulator.

openai_organization Optional[str]

Automatically inferred from env var OPENAI_ORG_ID if not provided.

batch_size int

Batch size to use when passing multiple documents to generate.

request_timeout Union[float, tuple[float, float], Any, None]

Timeout for requests to OpenAI completion API. Can be float, httpx.Timeout or None.

logit_bias Optional[dict[str, float]]

Adjust the probability of specific tokens being generated.

max_retries int

Maximum number of retries to make when generating.

seed Optional[int]

Seed for generation

logprobs Optional[int]

Include the log probabilities on the logprobs most likely output tokens, as well as the chosen tokens.

streaming bool

Whether to stream the results or not.

allowed_special Union[Literal['all'], set[str]]

Set of special tokens that are allowed.

disallowed_special Union[Literal['all'], Collection[str]]

Set of special tokens that are not allowed.

tiktoken_model_name Optional[str]

The model name to pass to tiktoken when using this class.

http_client Union[Any, None]

Optional httpx.Client. Only used for sync invocations. Must specify http_async_client as well if you'd like a custom client for async invocations.

http_async_client Union[Any, None]

Optional httpx.AsyncClient. Only used for async invocations. Must specify http_client as well if you'd like a custom client for sync invocations.

extra_body Optional[Mapping[str, Any]]

Optional additional JSON properties to include in the request parameters when making requests to OpenAI compatible APIs, such as vLLM.

max_context_size int

Get max context size for this model.

azure_endpoint Optional[str]

Your Azure endpoint, including the resource.

deployment_name Union[str, None]

A model deployment.

openai_api_version Optional[str]

Automatically inferred from env var OPENAI_API_VERSION if not provided.

azure_ad_token Optional[SecretStr]

Your Azure Active Directory token.

azure_ad_token_provider Union[Callable[[], str], None]

A function that returns an Azure Active Directory token.

azure_ad_async_token_provider Union[Callable[[], Awaitable[str]], None]

A function that returns an Azure Active Directory token.

openai_api_type Optional[str]

Legacy, for openai<1.0.0 support.

validate_base_url bool

For backwards compatibility. If legacy value openai_api_base is passed in, try to infer if it is a base_url or azure_endpoint and update accordingly.

lc_secrets dict[str, str]

Mapping of secret keys to environment variables.

lc_attributes dict[str, Any]

Attributes relevant to tracing.

InputType property

InputType: TypeAlias

Get the input type for this runnable.

OutputType property

OutputType: type[str]

Get the output type for this runnable.

input_schema property

input_schema: type[BaseModel]

The type of input this Runnable accepts specified as a pydantic model.

output_schema property

output_schema: type[BaseModel]

Output schema.

The type of output this Runnable produces specified as a pydantic model.

config_specs property

config_specs: list[ConfigurableFieldSpec]

List configurable fields for this Runnable.

cache class-attribute instance-attribute

cache: BaseCache | bool | None = Field(
    default=None, exclude=True
)

Whether to cache the response.

  • If true, will use the global cache.
  • If false, will not use a cache
  • If None, will use the global cache if it's set, otherwise no cache.
  • If instance of BaseCache, will use the provided cache.

Caching is not currently supported for streaming methods of models.
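
A minimal sketch of enabling the global cache, assuming the in-memory cache from langchain_core and Azure environment variables already set:

```python
from langchain_core.caches import InMemoryCache
from langchain_core.globals import set_llm_cache
from langchain_openai import AzureOpenAI

set_llm_cache(InMemoryCache())

llm = AzureOpenAI(azure_deployment="your-deployment")  # placeholder deployment
llm.invoke("Tell me a joke")  # first call hits the API
llm.invoke("Tell me a joke")  # repeated call can be served from the cache
```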

verbose class-attribute instance-attribute

verbose: bool = Field(
    default_factory=_get_verbosity, exclude=True, repr=False
)

Whether to print out response text.

callbacks class-attribute instance-attribute

callbacks: Callbacks = Field(default=None, exclude=True)

Callbacks to add to the run trace.

tags class-attribute instance-attribute

tags: list[str] | None = Field(default=None, exclude=True)

Tags to add to the run trace.

metadata class-attribute instance-attribute

metadata: dict[str, Any] | None = Field(
    default=None, exclude=True
)

Metadata to add to the run trace.

custom_get_token_ids class-attribute instance-attribute

custom_get_token_ids: Callable[[str], list[int]] | None = (
    Field(default=None, exclude=True)
)

Optional encoder to use for counting tokens.

model_name class-attribute instance-attribute

model_name: str = Field(
    default="gpt-3.5-turbo-instruct", alias="model"
)

Model name to use.

temperature class-attribute instance-attribute

temperature: float = 0.7

What sampling temperature to use.

max_tokens class-attribute instance-attribute

max_tokens: int = 256

The maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the model's maximum context size.

top_p class-attribute instance-attribute

top_p: float = 1

Total probability mass of tokens to consider at each step.

frequency_penalty class-attribute instance-attribute

frequency_penalty: float = 0

Penalizes repeated tokens according to frequency.

presence_penalty class-attribute instance-attribute

presence_penalty: float = 0

Penalizes repeated tokens.

n class-attribute instance-attribute

n: int = 1

How many completions to generate for each prompt.

best_of class-attribute instance-attribute

best_of: int = 1

Generates best_of completions server-side and returns the "best".

model_kwargs class-attribute instance-attribute

model_kwargs: dict[str, Any] = Field(default_factory=dict)

Holds any model parameters valid for create call not explicitly specified.

openai_api_base class-attribute instance-attribute

openai_api_base: Optional[str] = Field(
    alias="base_url",
    default_factory=from_env(
        "OPENAI_API_BASE", default=None
    ),
)

Base URL path for API requests, leave blank if not using a proxy or service emulator.

openai_organization class-attribute instance-attribute

openai_organization: Optional[str] = Field(
    alias="organization",
    default_factory=from_env(
        ["OPENAI_ORG_ID", "OPENAI_ORGANIZATION"],
        default=None,
    ),
)

Automatically inferred from env var OPENAI_ORG_ID if not provided.

batch_size class-attribute instance-attribute

batch_size: int = 20

Batch size to use when passing multiple documents to generate.

request_timeout class-attribute instance-attribute

request_timeout: Union[
    float, tuple[float, float], Any, None
] = Field(default=None, alias="timeout")

Timeout for requests to OpenAI completion API. Can be float, httpx.Timeout or None.

logit_bias class-attribute instance-attribute

logit_bias: Optional[dict[str, float]] = None

Adjust the probability of specific tokens being generated.

max_retries class-attribute instance-attribute

max_retries: int = 2

Maximum number of retries to make when generating.

seed class-attribute instance-attribute

seed: Optional[int] = None

Seed for generation

logprobs class-attribute instance-attribute

logprobs: Optional[int] = None

Include the log probabilities on the logprobs most likely output tokens, as well as the chosen tokens.

streaming class-attribute instance-attribute

streaming: bool = False

Whether to stream the results or not.

allowed_special class-attribute instance-attribute

allowed_special: Union[Literal['all'], set[str]] = set()

Set of special tokens that are allowed.

disallowed_special class-attribute instance-attribute

disallowed_special: Union[
    Literal["all"], Collection[str]
] = "all"

Set of special tokens that are not allowed.

tiktoken_model_name class-attribute instance-attribute

tiktoken_model_name: Optional[str] = None

The model name to pass to tiktoken when using this class. Tiktoken is used to count the number of tokens in documents to constrain them to be under a certain limit. By default, when set to None, this will be the same as the model name. However, there are some cases where you may want to use this class with a model name not supported by tiktoken. This can include when using Azure or when using one of the many model providers that expose an OpenAI-like API but with different models. In those cases, in order to avoid erroring when tiktoken is called, you can specify a model name to use here.

http_client class-attribute instance-attribute

http_client: Union[Any, None] = None

Optional httpx.Client. Only used for sync invocations. Must specify http_async_client as well if you'd like a custom client for async invocations.

http_async_client class-attribute instance-attribute

http_async_client: Union[Any, None] = None

Optional httpx.AsyncClient. Only used for async invocations. Must specify http_client as well if you'd like a custom client for sync invocations.

extra_body class-attribute instance-attribute

extra_body: Optional[Mapping[str, Any]] = None

Optional additional JSON properties to include in the request parameters when making requests to OpenAI compatible APIs, such as vLLM.

max_context_size property

max_context_size: int

Get max context size for this model.
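
A small sketch of how the sizing helpers relate; actual values depend on the configured model name, and the deployment is a placeholder:

```python
from langchain_openai import AzureOpenAI

llm = AzureOpenAI(azure_deployment="your-deployment")  # placeholder

prompt = "Write a haiku about autumn."
# Total context window for the configured model name.
print(llm.max_context_size)
# Context window minus the tokens consumed by the prompt.
print(llm.max_tokens_for_prompt(prompt))
```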

azure_endpoint class-attribute instance-attribute

azure_endpoint: Optional[str] = Field(
    default_factory=from_env(
        "AZURE_OPENAI_ENDPOINT", default=None
    )
)

Your Azure endpoint, including the resource.

Automatically inferred from env var AZURE_OPENAI_ENDPOINT if not provided.

Example: 'https://example-resource.azure.openai.com/'

deployment_name class-attribute instance-attribute

deployment_name: Union[str, None] = Field(
    default=None, alias="azure_deployment"
)

A model deployment.

If given, sets the base client URL to include /deployments/{azure_deployment}.

Note

This means you won't be able to use non-deployment endpoints.

openai_api_version class-attribute instance-attribute

openai_api_version: Optional[str] = Field(
    alias="api_version",
    default_factory=from_env(
        "OPENAI_API_VERSION", default=None
    ),
)

Automatically inferred from env var OPENAI_API_VERSION if not provided.

azure_ad_token class-attribute instance-attribute

azure_ad_token: Optional[SecretStr] = Field(
    default_factory=secret_from_env(
        "AZURE_OPENAI_AD_TOKEN", default=None
    )
)

Your Azure Active Directory token.

Automatically inferred from env var AZURE_OPENAI_AD_TOKEN if not provided.

For more, see this page <https://www.microsoft.com/en-us/security/business/identity-access/microsoft-entra-id>__.

azure_ad_token_provider class-attribute instance-attribute

azure_ad_token_provider: Union[Callable[[], str], None] = (
    None
)

A function that returns an Azure Active Directory token.

Will be invoked on every sync request. For async requests, will be invoked if azure_ad_async_token_provider is not provided.
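
A minimal sketch using the azure-identity package (a separate install) to supply tokens; the deployment name is a placeholder:

```python
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from langchain_openai import AzureOpenAI

# The scope below is the standard Azure Cognitive Services scope.
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(),
    "https://cognitiveservices.azure.com/.default",
)

llm = AzureOpenAI(
    azure_deployment="your-deployment",  # placeholder
    azure_ad_token_provider=token_provider,
)
```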

azure_ad_async_token_provider class-attribute instance-attribute

azure_ad_async_token_provider: Union[
    Callable[[], Awaitable[str]], None
] = None

A function that returns an Azure Active Directory token.

Will be invoked on every async request.

openai_api_type class-attribute instance-attribute

openai_api_type: Optional[str] = Field(
    default_factory=from_env(
        "OPENAI_API_TYPE", default="azure"
    )
)

Legacy, for openai<1.0.0 support.

validate_base_url class-attribute instance-attribute

validate_base_url: bool = True

For backwards compatibility. If legacy value openai_api_base is passed in, try to infer if it is a base_url or azure_endpoint and update accordingly.

lc_secrets property

lc_secrets: dict[str, str]

Mapping of secret keys to environment variables.

lc_attributes property

lc_attributes: dict[str, Any]

Attributes relevant to tracing.

get_name

get_name(
    suffix: str | None = None, *, name: str | None = None
) -> str

Get the name of the Runnable.

Parameters:

Name Type Description Default
suffix str | None

An optional suffix to append to the name.

None
name str | None

An optional name to use instead of the Runnable's name.

None

Returns:

Type Description
str

The name of the Runnable.

get_input_schema

get_input_schema(
    config: RunnableConfig | None = None,
) -> type[BaseModel]

Get a pydantic model that can be used to validate input to the Runnable.

Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic input schema that depends on which configuration the Runnable is invoked with.

This method allows you to get an input schema for a specific configuration.

Parameters:

Name Type Description Default
config RunnableConfig | None

A config to use when generating the schema.

None

Returns:

Type Description
type[BaseModel]

A pydantic model that can be used to validate input.

get_input_jsonschema

get_input_jsonschema(
    config: RunnableConfig | None = None,
) -> dict[str, Any]

Get a JSON schema that represents the input to the Runnable.

Parameters:

Name Type Description Default
config RunnableConfig | None

A config to use when generating the schema.

None

Returns:

Type Description
dict[str, Any]

A JSON schema that represents the input to the Runnable.

Example
from langchain_core.runnables import RunnableLambda


def add_one(x: int) -> int:
    return x + 1


runnable = RunnableLambda(add_one)

print(runnable.get_input_jsonschema())

Added in version 0.3.0

get_output_schema

get_output_schema(
    config: RunnableConfig | None = None,
) -> type[BaseModel]

Get a pydantic model that can be used to validate output of the Runnable.

Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic output schema that depends on which configuration the Runnable is invoked with.

This method allows you to get an output schema for a specific configuration.

Parameters:

Name Type Description Default
config RunnableConfig | None

A config to use when generating the schema.

None

Returns:

Type Description
type[BaseModel]

A pydantic model that can be used to validate output.

get_output_jsonschema

get_output_jsonschema(
    config: RunnableConfig | None = None,
) -> dict[str, Any]

Get a JSON schema that represents the output of the Runnable.

Parameters:

Name Type Description Default
config RunnableConfig | None

A config to use when generating the schema.

None

Returns:

Type Description
dict[str, Any]

A JSON schema that represents the output of the Runnable.

Example
from langchain_core.runnables import RunnableLambda


def add_one(x: int) -> int:
    return x + 1


runnable = RunnableLambda(add_one)

print(runnable.get_output_jsonschema())

Added in version 0.3.0

config_schema

config_schema(
    *, include: Sequence[str] | None = None
) -> type[BaseModel]

The type of config this Runnable accepts specified as a pydantic model.

To mark a field as configurable, see the configurable_fields and configurable_alternatives methods.

Parameters:

Name Type Description Default
include Sequence[str] | None

A list of fields to include in the config schema.

None

Returns:

Type Description
type[BaseModel]

A pydantic model that can be used to validate config.

get_config_jsonschema

get_config_jsonschema(
    *, include: Sequence[str] | None = None
) -> dict[str, Any]

Get a JSON schema that represents the config of the Runnable.

Parameters:

Name Type Description Default
include Sequence[str] | None

A list of fields to include in the config schema.

None

Returns:

Type Description
dict[str, Any]

A JSON schema that represents the config of the Runnable.

Added in version 0.3.0

get_graph

get_graph(config: RunnableConfig | None = None) -> Graph

Return a graph representation of this Runnable.

get_prompts

get_prompts(
    config: RunnableConfig | None = None,
) -> list[BasePromptTemplate]

Return a list of prompts used by this Runnable.

__or__

__or__(
    other: (
        Runnable[Any, Other]
        | Callable[[Iterator[Any]], Iterator[Other]]
        | Callable[
            [AsyncIterator[Any]], AsyncIterator[Other]
        ]
        | Callable[[Any], Other]
        | Mapping[
            str,
            Runnable[Any, Other]
            | Callable[[Any], Other]
            | Any,
        ]
    ),
) -> RunnableSerializable[Input, Other]

Runnable "or" operator.

Compose this Runnable with another object to create a RunnableSequence.

Parameters:

Name Type Description Default
other Runnable[Any, Other] | Callable[[Iterator[Any]], Iterator[Other]] | Callable[[AsyncIterator[Any]], AsyncIterator[Other]] | Callable[[Any], Other] | Mapping[str, Runnable[Any, Other] | Callable[[Any], Other] | Any]

Another Runnable or a Runnable-like object.

required

Returns:

Type Description
RunnableSerializable[Input, Other]

A new Runnable.

__ror__

__ror__(
    other: (
        Runnable[Other, Any]
        | Callable[[Iterator[Other]], Iterator[Any]]
        | Callable[
            [AsyncIterator[Other]], AsyncIterator[Any]
        ]
        | Callable[[Other], Any]
        | Mapping[
            str,
            Runnable[Other, Any]
            | Callable[[Other], Any]
            | Any,
        ]
    ),
) -> RunnableSerializable[Other, Output]

Runnable "reverse-or" operator.

Compose this Runnable with another object to create a RunnableSequence.

Parameters:

Name Type Description Default
other Runnable[Other, Any] | Callable[[Iterator[Other]], Iterator[Any]] | Callable[[AsyncIterator[Other]], AsyncIterator[Any]] | Callable[[Other], Any] | Mapping[str, Runnable[Other, Any] | Callable[[Other], Any] | Any]

Another Runnable or a Runnable-like object.

required

Returns:

Type Description
RunnableSerializable[Other, Output]

A new Runnable.

pipe

pipe(
    *others: Runnable[Any, Other] | Callable[[Any], Other],
    name: str | None = None
) -> RunnableSerializable[Input, Other]

Pipe runnables.

Compose this Runnable with Runnable-like objects to make a RunnableSequence.

Equivalent to RunnableSequence(self, *others) or self | others[0] | ...

Example
from langchain_core.runnables import RunnableLambda


def add_one(x: int) -> int:
    return x + 1


def mul_two(x: int) -> int:
    return x * 2


runnable_1 = RunnableLambda(add_one)
runnable_2 = RunnableLambda(mul_two)
sequence = runnable_1.pipe(runnable_2)
# Or equivalently:
# sequence = runnable_1 | runnable_2
# sequence = RunnableSequence(first=runnable_1, last=runnable_2)
sequence.invoke(1)
await sequence.ainvoke(1)
# -> 4

sequence.batch([1, 2, 3])
await sequence.abatch([1, 2, 3])
# -> [4, 6, 8]

Parameters:

Name Type Description Default
*others Runnable[Any, Other] | Callable[[Any], Other]

Other Runnable or Runnable-like objects to compose

()
name str | None

An optional name for the resulting RunnableSequence.

None

Returns:

Type Description
RunnableSerializable[Input, Other]

A new Runnable.

pick

pick(
    keys: str | list[str],
) -> RunnableSerializable[Any, Any]

Pick keys from the output dict of this Runnable.

Pick single key:

```python
import json

from langchain_core.runnables import RunnableLambda, RunnableMap

as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)
chain = RunnableMap(str=as_str, json=as_json)

chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3]}

json_only_chain = chain.pick("json")
json_only_chain.invoke("[1, 2, 3]")
# -> [1, 2, 3]
```

Pick list of keys:

```python
from typing import Any

import json

from langchain_core.runnables import RunnableLambda, RunnableMap

as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)


def as_bytes(x: Any) -> bytes:
    return bytes(x, "utf-8")


chain = RunnableMap(
    str=as_str, json=as_json, bytes=RunnableLambda(as_bytes)
)

chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3], "bytes": b"[1, 2, 3]"}

json_and_bytes_chain = chain.pick(["json", "bytes"])
json_and_bytes_chain.invoke("[1, 2, 3]")
# -> {"json": [1, 2, 3], "bytes": b"[1, 2, 3]"}
```

Parameters:

Name Type Description Default
keys str | list[str]

A key or list of keys to pick from the output dict.

required

Returns:

Type Description
RunnableSerializable[Any, Any]

A new Runnable.

assign

assign(
    **kwargs: (
        Runnable[dict[str, Any], Any]
        | Callable[[dict[str, Any]], Any]
        | Mapping[
            str,
            Runnable[dict[str, Any], Any]
            | Callable[[dict[str, Any]], Any],
        ]
    ),
) -> RunnableSerializable[Any, Any]

Assigns new fields to the dict output of this Runnable.

from langchain_community.llms.fake import FakeStreamingListLLM
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import SystemMessagePromptTemplate
from langchain_core.runnables import Runnable
from operator import itemgetter

prompt = (
    SystemMessagePromptTemplate.from_template("You are a nice assistant.")
    + "{question}"
)
llm = FakeStreamingListLLM(responses=["foo-lish"])

chain: Runnable = prompt | llm | {"str": StrOutputParser()}

chain_with_assign = chain.assign(hello=itemgetter("str") | llm)

print(chain_with_assign.input_schema.model_json_schema())
# {'title': 'PromptInput', 'type': 'object', 'properties':
#  {'question': {'title': 'Question', 'type': 'string'}}}
print(chain_with_assign.output_schema.model_json_schema())
# {'title': 'RunnableSequenceOutput', 'type': 'object', 'properties':
#  {'str': {'title': 'Str', 'type': 'string'},
#   'hello': {'title': 'Hello', 'type': 'string'}}}

Parameters:

Name Type Description Default
**kwargs Runnable[dict[str, Any], Any] | Callable[[dict[str, Any]], Any] | Mapping[str, Runnable[dict[str, Any], Any] | Callable[[dict[str, Any]], Any]]

A mapping of keys to Runnable or Runnable-like objects that will be invoked with the entire output dict of this Runnable.

{}

Returns:

Type Description
RunnableSerializable[Any, Any]

A new Runnable.

batch_as_completed

batch_as_completed(
    inputs: Sequence[Input],
    config: (
        RunnableConfig | Sequence[RunnableConfig] | None
    ) = None,
    *,
    return_exceptions: bool = False,
    **kwargs: Any | None
) -> Iterator[tuple[int, Output | Exception]]

Run invoke in parallel on a list of inputs.

Yields results as they complete.

Parameters:

Name Type Description Default
inputs Sequence[Input]

A list of inputs to the Runnable.

required
config RunnableConfig | Sequence[RunnableConfig] | None

A config to use when invoking the Runnable. The config supports standard keys like 'tags', 'metadata' for tracing purposes, 'max_concurrency' for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Defaults to None.

None
return_exceptions bool

Whether to return exceptions instead of raising them. Defaults to False.

False
**kwargs Any | None

Additional keyword arguments to pass to the Runnable.

{}

Yields:

Type Description
tuple[int, Output | Exception]

Tuples of the index of the input and the output from the Runnable.
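
A small usage sketch; completion order is not guaranteed, which is why each output is yielded together with its input index:

```python
from langchain_core.runnables import RunnableLambda

runnable = RunnableLambda(lambda x: x * 2)

for idx, output in runnable.batch_as_completed([1, 2, 3]):
    print(idx, output)  # e.g. (0, 2), (2, 6), (1, 4) in completion order
```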

abatch_as_completed async

abatch_as_completed(
    inputs: Sequence[Input],
    config: (
        RunnableConfig | Sequence[RunnableConfig] | None
    ) = None,
    *,
    return_exceptions: bool = False,
    **kwargs: Any | None
) -> AsyncIterator[tuple[int, Output | Exception]]

Run ainvoke in parallel on a list of inputs.

Yields results as they complete.

Parameters:

Name Type Description Default
inputs Sequence[Input]

A list of inputs to the Runnable.

required
config RunnableConfig | Sequence[RunnableConfig] | None

A config to use when invoking the Runnable. The config supports standard keys like 'tags', 'metadata' for tracing purposes, 'max_concurrency' for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Defaults to None.

None
return_exceptions bool

Whether to return exceptions instead of raising them. Defaults to False.

False
kwargs Any | None

Additional keyword arguments to pass to the Runnable.

{}

Yields:

Type Description
AsyncIterator[tuple[int, Output | Exception]]

A tuple of the index of the input and the output from the Runnable.

astream_log async

astream_log(
    input: Any,
    config: RunnableConfig | None = None,
    *,
    diff: bool = True,
    with_streamed_output_list: bool = True,
    include_names: Sequence[str] | None = None,
    include_types: Sequence[str] | None = None,
    include_tags: Sequence[str] | None = None,
    exclude_names: Sequence[str] | None = None,
    exclude_types: Sequence[str] | None = None,
    exclude_tags: Sequence[str] | None = None,
    **kwargs: Any
) -> AsyncIterator[RunLogPatch] | AsyncIterator[RunLog]

Stream all output from a Runnable, as reported to the callback system.

This includes all inner runs of LLMs, Retrievers, Tools, etc.

Output is streamed as Log objects, which include a list of Jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run.

The Jsonpatch ops can be applied in order to construct state.

Parameters:

Name Type Description Default
input Any

The input to the Runnable.

required
config RunnableConfig | None

The config to use for the Runnable.

None
diff bool

Whether to yield diffs between each step or the current state.

True
with_streamed_output_list bool

Whether to yield the streamed_output list.

True
include_names Sequence[str] | None

Only include logs with these names.

None
include_types Sequence[str] | None

Only include logs with these types.

None
include_tags Sequence[str] | None

Only include logs with these tags.

None
exclude_names Sequence[str] | None

Exclude logs with these names.

None
exclude_types Sequence[str] | None

Exclude logs with these types.

None
exclude_tags Sequence[str] | None

Exclude logs with these tags.

None
kwargs Any

Additional keyword arguments to pass to the Runnable.

{}

Yields:

Type Description
AsyncIterator[RunLogPatch] | AsyncIterator[RunLog]

A RunLogPatch or RunLog object.

astream_events async

astream_events(
    input: Any,
    config: RunnableConfig | None = None,
    *,
    version: Literal["v1", "v2"] = "v2",
    include_names: Sequence[str] | None = None,
    include_types: Sequence[str] | None = None,
    include_tags: Sequence[str] | None = None,
    exclude_names: Sequence[str] | None = None,
    exclude_types: Sequence[str] | None = None,
    exclude_tags: Sequence[str] | None = None,
    **kwargs: Any
) -> AsyncIterator[StreamEvent]

Generate a stream of events.

Use to create an iterator over StreamEvents that provide real-time information about the progress of the Runnable, including StreamEvents from intermediate results.

A StreamEvent is a dictionary with the following schema:

  • event: str - Event names are of the format: on_[runnable_type]_(start|stream|end).
  • name: str - The name of the Runnable that generated the event.
  • run_id: str - randomly generated ID associated with the given execution of the Runnable that emitted the event. A child Runnable that gets invoked as part of the execution of a parent Runnable is assigned its own unique ID.
  • parent_ids: list[str] - The IDs of the parent runnables that generated the event. The root Runnable will have an empty list. The order of the parent IDs is from the root to the immediate parent. Only available for v2 version of the API. The v1 version of the API will return an empty list.
  • tags: Optional[list[str]] - The tags of the Runnable that generated the event.
  • metadata: Optional[dict[str, Any]] - The metadata of the Runnable that generated the event.
  • data: dict[str, Any]

Below is a table that illustrates some events that might be emitted by various chains. Metadata fields have been omitted from the table for brevity. Chain definitions have been included after the table.

Note

This reference table is for the v2 version of the schema.

| event                | name             | chunk                           | input                                         | output                                          |
|----------------------|------------------|---------------------------------|-----------------------------------------------|-------------------------------------------------|
| on_chat_model_start  | [model name]     |                                 | {"messages": [[SystemMessage, HumanMessage]]} |                                                 |
| on_chat_model_stream | [model name]     | AIMessageChunk(content="hello") |                                               |                                                 |
| on_chat_model_end    | [model name]     |                                 | {"messages": [[SystemMessage, HumanMessage]]} | AIMessageChunk(content="hello world")           |
| on_llm_start         | [model name]     |                                 | {'input': 'hello'}                            |                                                 |
| on_llm_stream        | [model name]     | 'Hello'                         |                                               |                                                 |
| on_llm_end           | [model name]     |                                 | 'Hello human!'                                |                                                 |
| on_chain_start       | format_docs      |                                 |                                               |                                                 |
| on_chain_stream      | format_docs      | 'hello world!, goodbye world!'  |                                               |                                                 |
| on_chain_end         | format_docs      |                                 | [Document(...)]                               | 'hello world!, goodbye world!'                  |
| on_tool_start        | some_tool        |                                 | {"x": 1, "y": "2"}                            |                                                 |
| on_tool_end          | some_tool        |                                 |                                               | {"x": 1, "y": "2"}                              |
| on_retriever_start   | [retriever name] |                                 | {"query": "hello"}                            |                                                 |
| on_retriever_end     | [retriever name] |                                 | {"query": "hello"}                            | [Document(...), ..]                             |
| on_prompt_start      | [template_name]  |                                 | {"question": "hello"}                         |                                                 |
| on_prompt_end        | [template_name]  |                                 | {"question": "hello"}                         | ChatPromptValue(messages: [SystemMessage, ...]) |

In addition to the standard events, users can also dispatch custom events (see example below).

Custom events will only be surfaced in the v2 version of the API!

A custom event has the following format:

| Attribute | Type | Description                                                                                              |
|-----------|------|----------------------------------------------------------------------------------------------------------|
| name      | str  | A user defined name for the event.                                                                       |
| data      | Any  | The data associated with the event. This can be anything, though we suggest making it JSON serializable. |

Here are declarations associated with the standard events shown above:

format_docs:

def format_docs(docs: list[Document]) -> str:
    '''Format the docs.'''
    return ", ".join([doc.page_content for doc in docs])


format_docs = RunnableLambda(format_docs)

some_tool:

@tool
def some_tool(x: int, y: str) -> dict:
    '''Some_tool.'''
    return {"x": x, "y": y}

prompt:

template = ChatPromptTemplate.from_messages(
    [
        ("system", "You are Cat Agent 007"),
        ("human", "{question}"),
    ]
).with_config({"run_name": "my_template", "tags": ["my_template"]})
Example:

from langchain_core.runnables import RunnableLambda


async def reverse(s: str) -> str:
    return s[::-1]


chain = RunnableLambda(func=reverse)

events = [event async for event in chain.astream_events("hello", version="v2")]

# will produce the following events (run_id, and parent_ids
# has been omitted for brevity):
[
    {
        "data": {"input": "hello"},
        "event": "on_chain_start",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"chunk": "olleh"},
        "event": "on_chain_stream",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"output": "olleh"},
        "event": "on_chain_end",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
]

Example: Dispatch Custom Event

from langchain_core.callbacks.manager import (
    adispatch_custom_event,
)
from langchain_core.runnables import RunnableLambda, RunnableConfig
import asyncio


async def slow_thing(some_input: str, config: RunnableConfig) -> str:
    """Do something that takes a long time."""
    await asyncio.sleep(1) # Placeholder for some slow operation
    await adispatch_custom_event(
        "progress_event",
        {"message": "Finished step 1 of 3"},
        config=config # Must be included for python < 3.10
    )
    await asyncio.sleep(1) # Placeholder for some slow operation
    await adispatch_custom_event(
        "progress_event",
        {"message": "Finished step 2 of 3"},
        config=config # Must be included for python < 3.10
    )
    await asyncio.sleep(1) # Placeholder for some slow operation
    return "Done"

slow_thing = RunnableLambda(slow_thing)

async for event in slow_thing.astream_events("some_input", version="v2"):
    print(event)

Parameters:

Name Type Description Default
input Any

The input to the Runnable.

required
config RunnableConfig | None

The config to use for the Runnable.

None
version Literal['v1', 'v2']

The version of the schema to use, either 'v2' or 'v1'. Users should use 'v2'. 'v1' is for backwards compatibility and will be deprecated in 0.4.0. No default will be assigned until the API is stabilized. Custom events will only be surfaced in 'v2'.

'v2'
include_names Sequence[str] | None

Only include events from Runnables with matching names.

None
include_types Sequence[str] | None

Only include events from Runnables with matching types.

None
include_tags Sequence[str] | None

Only include events from Runnables with matching tags.

None
exclude_names Sequence[str] | None

Exclude events from Runnables with matching names.

None
exclude_types Sequence[str] | None

Exclude events from Runnables with matching types.

None
exclude_tags Sequence[str] | None

Exclude events from Runnables with matching tags.

None
kwargs Any

Additional keyword arguments to pass to the Runnable. These will be passed to astream_log as this implementation of astream_events is built on top of astream_log.

{}

Yields:

Type Description
AsyncIterator[StreamEvent]

An async stream of StreamEvents.

Raises:

Type Description
NotImplementedError

If the version is not 'v1' or 'v2'.

transform

transform(
    input: Iterator[Input],
    config: RunnableConfig | None = None,
    **kwargs: Any | None
) -> Iterator[Output]

Transform inputs to outputs.

Default implementation of transform, which buffers input and calls astream.

Subclasses should override this method if they can start producing output while input is still being generated.

Parameters:

Name Type Description Default
input Iterator[Input]

An iterator of inputs to the Runnable.

required
config RunnableConfig | None

The config to use for the Runnable. Defaults to None.

None
kwargs Any | None

Additional keyword arguments to pass to the Runnable.

{}

Yields:

Type Description
Output

The output of the Runnable.
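
A minimal sketch of the default buffering behavior: string chunks from the input iterator are concatenated before the Runnable is invoked:

```python
from langchain_core.runnables import RunnableLambda

upper = RunnableLambda(lambda s: s.upper())

# The two chunks are buffered into "hello" first, then transformed.
for chunk in upper.transform(iter(["hel", "lo"])):
    print(chunk)  # HELLO
```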

atransform async

atransform(
    input: AsyncIterator[Input],
    config: RunnableConfig | None = None,
    **kwargs: Any | None
) -> AsyncIterator[Output]

Transform inputs to outputs.

Default implementation of atransform, which buffers input and calls astream.

Subclasses should override this method if they can start producing output while input is still being generated.

Parameters:

Name Type Description Default
input AsyncIterator[Input]

An async iterator of inputs to the Runnable.

required
config RunnableConfig | None

The config to use for the Runnable. Defaults to None.

None
kwargs Any | None

Additional keyword arguments to pass to the Runnable.

{}

Yields:

Type Description
AsyncIterator[Output]

The output of the Runnable.

bind

bind(**kwargs: Any) -> Runnable[Input, Output]

Bind arguments to a Runnable, returning a new Runnable.

Useful when a Runnable in a chain requires an argument that is not in the output of the previous Runnable or included in the user input.

Parameters:

Name Type Description Default
kwargs Any

The arguments to bind to the Runnable.

{}

Returns:

Type Description
Runnable[Input, Output]

A new Runnable with the arguments bound.

Example
from langchain_ollama import ChatOllama
from langchain_core.output_parsers import StrOutputParser

llm = ChatOllama(model="llama3.1")

# Without bind.
chain = llm | StrOutputParser()

chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two three four five.'

# With bind.
chain = llm.bind(stop=["three"]) | StrOutputParser()

chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two'

with_config

with_config(
    config: RunnableConfig | None = None, **kwargs: Any
) -> Runnable[Input, Output]

Bind config to a Runnable, returning a new Runnable.

Parameters:

Name Type Description Default
config RunnableConfig | None

The config to bind to the Runnable.

None
kwargs Any

Additional keyword arguments to pass to the Runnable.

{}

Returns:

Type Description
Runnable[Input, Output]

A new Runnable with the config bound.
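
A short sketch binding tracing metadata ahead of time (the key names follow RunnableConfig):

```python
from langchain_core.runnables import RunnableLambda

runnable = RunnableLambda(lambda x: x + 1)

configured = runnable.with_config({"run_name": "add_one", "tags": ["math"]})
configured.invoke(1)  # runs with the bound run_name and tags
```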

with_listeners

with_listeners(
    *,
    on_start: (
        Callable[[Run], None]
        | Callable[[Run, RunnableConfig], None]
        | None
    ) = None,
    on_end: (
        Callable[[Run], None]
        | Callable[[Run, RunnableConfig], None]
        | None
    ) = None,
    on_error: (
        Callable[[Run], None]
        | Callable[[Run, RunnableConfig], None]
        | None
    ) = None
) -> Runnable[Input, Output]

Bind lifecycle listeners to a Runnable, returning a new Runnable.

The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run.

Parameters:

Name Type Description Default
on_start Callable[[Run], None] | Callable[[Run, RunnableConfig], None] | None

Called before the Runnable starts running, with the Run object. Defaults to None.

None
on_end Callable[[Run], None] | Callable[[Run, RunnableConfig], None] | None

Called after the Runnable finishes running, with the Run object. Defaults to None.

None
on_error Callable[[Run], None] | Callable[[Run, RunnableConfig], None] | None

Called if the Runnable throws an error, with the Run object. Defaults to None.

None

Returns:

Type Description
Runnable[Input, Output]

A new Runnable with the listeners bound.

Example
from langchain_core.runnables import RunnableLambda
from langchain_core.tracers.schemas import Run

import time


def test_runnable(time_to_sleep: int):
    time.sleep(time_to_sleep)


def fn_start(run_obj: Run):
    print("start_time:", run_obj.start_time)


def fn_end(run_obj: Run):
    print("end_time:", run_obj.end_time)


chain = RunnableLambda(test_runnable).with_listeners(
    on_start=fn_start, on_end=fn_end
)
chain.invoke(2)

with_alisteners

with_alisteners(
    *,
    on_start: AsyncListener | None = None,
    on_end: AsyncListener | None = None,
    on_error: AsyncListener | None = None
) -> Runnable[Input, Output]

Bind async lifecycle listeners to a Runnable.

Returns a new Runnable.

The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run.

Parameters:

Name Type Description Default
on_start AsyncListener | None

Called asynchronously before the Runnable starts running, with the Run object. Defaults to None.

None
on_end AsyncListener | None

Called asynchronously after the Runnable finishes running, with the Run object. Defaults to None.

None
on_error AsyncListener | None

Called asynchronously if the Runnable throws an error, with the Run object. Defaults to None.

None

Returns:

Type Description
Runnable[Input, Output]

A new Runnable with the listeners bound.

Example
from langchain_core.runnables import RunnableLambda, Runnable
from datetime import datetime, timezone
import time
import asyncio

def format_t(timestamp: float) -> str:
    return datetime.fromtimestamp(timestamp, tz=timezone.utc).isoformat()

async def test_runnable(time_to_sleep: int):
    print(f"Runnable[{time_to_sleep}s]: starts at {format_t(time.time())}")
    await asyncio.sleep(time_to_sleep)
    print(f"Runnable[{time_to_sleep}s]: ends at {format_t(time.time())}")

async def fn_start(run_obj: Runnable):
    print(f"on start callback starts at {format_t(time.time())}")
    await asyncio.sleep(3)
    print(f"on start callback ends at {format_t(time.time())}")

async def fn_end(run_obj: Runnable):
    print(f"on end callback starts at {format_t(time.time())}")
    await asyncio.sleep(2)
    print(f"on end callback ends at {format_t(time.time())}")

runnable = RunnableLambda(test_runnable).with_alisteners(
    on_start=fn_start,
    on_end=fn_end
)
async def concurrent_runs():
    await asyncio.gather(runnable.ainvoke(2), runnable.ainvoke(3))

asyncio.run(concurrent_runs())
Result:
on start callback starts at 2025-03-01T07:05:22.875378+00:00
on start callback starts at 2025-03-01T07:05:22.875495+00:00
on start callback ends at 2025-03-01T07:05:25.878862+00:00
on start callback ends at 2025-03-01T07:05:25.878947+00:00
Runnable[2s]: starts at 2025-03-01T07:05:25.879392+00:00
Runnable[3s]: starts at 2025-03-01T07:05:25.879804+00:00
Runnable[2s]: ends at 2025-03-01T07:05:27.881998+00:00
on end callback starts at 2025-03-01T07:05:27.882360+00:00
Runnable[3s]: ends at 2025-03-01T07:05:28.881737+00:00
on end callback starts at 2025-03-01T07:05:28.882428+00:00
on end callback ends at 2025-03-01T07:05:29.883893+00:00
on end callback ends at 2025-03-01T07:05:30.884831+00:00

with_types

with_types(
    *,
    input_type: type[Input] | None = None,
    output_type: type[Output] | None = None
) -> Runnable[Input, Output]

Bind input and output types to a Runnable, returning a new Runnable.

Parameters:

Name Type Description Default
input_type type[Input] | None

The input type to bind to the Runnable. Defaults to None.

None
output_type type[Output] | None

The output type to bind to the Runnable. Defaults to None.

None

Returns:

Type Description
Runnable[Input, Output]

A new Runnable with the types bound.
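
A minimal sketch; useful when the wrapped callable's annotations are too loose for schema generation:

```python
from langchain_core.runnables import RunnableLambda

runnable = RunnableLambda(lambda x: x + 1).with_types(
    input_type=int, output_type=int
)
runnable.invoke(1)  # 2; schemas now advertise int for input and output
```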

with_retry

with_retry(
    *,
    retry_if_exception_type: tuple[
        type[BaseException], ...
    ] = (Exception,),
    wait_exponential_jitter: bool = True,
    exponential_jitter_params: (
        ExponentialJitterParams | None
    ) = None,
    stop_after_attempt: int = 3
) -> Runnable[Input, Output]

Create a new Runnable that retries the original Runnable on exceptions.

Parameters:

Name Type Description Default
retry_if_exception_type tuple[type[BaseException], ...]

A tuple of exception types to retry on. Defaults to (Exception,).

(Exception,)
wait_exponential_jitter bool

Whether to add jitter to the wait time between retries. Defaults to True.

True
stop_after_attempt int

The maximum number of attempts to make before giving up. Defaults to 3.

3
exponential_jitter_params ExponentialJitterParams | None

Parameters for tenacity.wait_exponential_jitter. Namely: initial, max, exp_base, and jitter (all float values).

None

Returns:

Type Description
Runnable[Input, Output]

A new Runnable that retries the original Runnable on exceptions.

Example
from langchain_core.runnables import RunnableLambda

count = 0


def _lambda(x: int) -> None:
    global count
    count = count + 1
    if x == 1:
        raise ValueError("x is 1")


runnable = RunnableLambda(_lambda)
try:
    runnable.with_retry(
        stop_after_attempt=2,
        retry_if_exception_type=(ValueError,),
    ).invoke(1)
except ValueError:
    pass

assert count == 2

map

map() -> Runnable[list[Input], list[Output]]

Return a new Runnable that maps a list of inputs to a list of outputs.

Calls invoke with each input.

Returns:

Type Description
Runnable[list[Input], list[Output]]

A new Runnable that maps a list of inputs to a list of outputs.

Example
from langchain_core.runnables import RunnableLambda


def _lambda(x: int) -> int:
    return x + 1


runnable = RunnableLambda(_lambda)
print(runnable.map().invoke([1, 2, 3]))  # [2, 3, 4]

with_fallbacks

with_fallbacks(
    fallbacks: Sequence[Runnable[Input, Output]],
    *,
    exceptions_to_handle: tuple[
        type[BaseException], ...
    ] = (Exception,),
    exception_key: str | None = None
) -> RunnableWithFallbacks[Input, Output]

Add fallbacks to a Runnable, returning a new Runnable.

The new Runnable will try the original Runnable, and then each fallback in order, upon failures.

Parameters:

Name Type Description Default
fallbacks Sequence[Runnable[Input, Output]]

A sequence of runnables to try if the original Runnable fails.

required
exceptions_to_handle tuple[type[BaseException], ...]

A tuple of exception types to handle. Defaults to (Exception,).

(Exception,)
exception_key str | None

If string is specified then handled exceptions will be passed to fallbacks as part of the input under the specified key. If None, exceptions will not be passed to fallbacks. If used, the base Runnable and its fallbacks must accept a dictionary as input. Defaults to None.

None

Returns:

Type Description
RunnableWithFallbacks[Input, Output]

A new Runnable that will try the original Runnable, and then each fallback in order, upon failures.

Example
from typing import Iterator

from langchain_core.runnables import RunnableGenerator


def _generate_immediate_error(input: Iterator) -> Iterator[str]:
    raise ValueError()
    yield ""


def _generate(input: Iterator) -> Iterator[str]:
    yield from "foo bar"


runnable = RunnableGenerator(_generate_immediate_error).with_fallbacks(
    [RunnableGenerator(_generate)]
)
print("".join(runnable.stream({})))  # foo bar


as_tool

as_tool(
    args_schema: type[BaseModel] | None = None,
    *,
    name: str | None = None,
    description: str | None = None,
    arg_types: dict[str, type] | None = None
) -> BaseTool

Create a BaseTool from a Runnable.

as_tool will instantiate a BaseTool with a name, description, and args_schema from a Runnable. Where possible, schemas are inferred from runnable.get_input_schema. Alternatively (e.g., if the Runnable takes a dict as input and the specific dict keys are not typed), the schema can be specified directly with args_schema. You can also pass arg_types to just specify the required arguments and their types.

Parameters:

Name Type Description Default
args_schema type[BaseModel] | None

The schema for the tool. Defaults to None.

None
name str | None

The name of the tool. Defaults to None.

None
description str | None

The description of the tool. Defaults to None.

None
arg_types dict[str, type] | None

A dictionary of argument names to types. Defaults to None.

None

Returns:

Type Description
BaseTool

A BaseTool instance.

Typed dict input:

from typing_extensions import TypedDict
from langchain_core.runnables import RunnableLambda


class Args(TypedDict):
    a: int
    b: list[int]


def f(x: Args) -> str:
    return str(x["a"] * max(x["b"]))


runnable = RunnableLambda(f)
as_tool = runnable.as_tool()
as_tool.invoke({"a": 3, "b": [1, 2]})

dict input, specifying schema via args_schema:

from typing import Any
from pydantic import BaseModel, Field
from langchain_core.runnables import RunnableLambda

def f(x: dict[str, Any]) -> str:
    return str(x["a"] * max(x["b"]))

class FSchema(BaseModel):
    """Apply a function to an integer and list of integers."""

    a: int = Field(..., description="Integer")
    b: list[int] = Field(..., description="List of ints")

runnable = RunnableLambda(f)
as_tool = runnable.as_tool(FSchema)
as_tool.invoke({"a": 3, "b": [1, 2]})

dict input, specifying schema via arg_types:

from typing import Any
from langchain_core.runnables import RunnableLambda


def f(x: dict[str, Any]) -> str:
    return str(x["a"] * max(x["b"]))


runnable = RunnableLambda(f)
as_tool = runnable.as_tool(arg_types={"a": int, "b": list[int]})
as_tool.invoke({"a": 3, "b": [1, 2]})

String input:

from langchain_core.runnables import RunnableLambda


def f(x: str) -> str:
    return x + "a"


def g(x: str) -> str:
    return x + "z"


runnable = RunnableLambda(f) | g
as_tool = runnable.as_tool()
as_tool.invoke("b")

Added in version 0.2.14

__init__

__init__(*args: Any, **kwargs: Any) -> None

lc_id classmethod

lc_id() -> list[str]

Return a unique identifier for this class for serialization purposes.

The unique identifier is a list of strings that describes the path to the object. For example, for the class langchain.llms.openai.OpenAI, the id is ["langchain", "llms", "openai", "OpenAI"].
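
As a quick illustration (the exact namespace is an implementation detail of this package):

from langchain_openai import OpenAI

print(OpenAI.lc_id())  # e.g. ['langchain', 'llms', 'openai', 'OpenAI']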

to_json

to_json() -> (
    SerializedConstructor | SerializedNotImplemented
)

Serialize the Runnable to JSON.

Returns:

Type Description
SerializedConstructor | SerializedNotImplemented

A JSON-serializable representation of the Runnable.

to_json_not_implemented

to_json_not_implemented() -> SerializedNotImplemented

Serialize a "not implemented" object.

Returns:

Type Description
SerializedNotImplemented

SerializedNotImplemented.

configurable_fields

configurable_fields(
    **kwargs: AnyConfigurableField,
) -> RunnableSerializable[Input, Output]

Configure particular Runnable fields at runtime.

Parameters:

Name Type Description Default
**kwargs AnyConfigurableField

A dictionary of ConfigurableField instances to configure.

{}

Raises:

Type Description
ValueError

If a configuration key is not found in the Runnable.

Returns:

Type Description
RunnableSerializable[Input, Output]

A new Runnable with the fields configured.

from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatOpenAI(max_tokens=20).configurable_fields(
    max_tokens=ConfigurableField(
        id="output_token_number",
        name="Max tokens in the output",
        description="The maximum number of tokens in the output",
    )
)

# max_tokens = 20
print("max_tokens_20: ", model.invoke("tell me something about chess").content)

# max_tokens = 200
print(
    "max_tokens_200: ",
    model.with_config(configurable={"output_token_number": 200})
    .invoke("tell me something about chess")
    .content,
)

configurable_alternatives

configurable_alternatives(
    which: ConfigurableField,
    *,
    default_key: str = "default",
    prefix_keys: bool = False,
    **kwargs: (
        Runnable[Input, Output]
        | Callable[[], Runnable[Input, Output]]
    )
) -> RunnableSerializable[Input, Output]

Configure alternatives for Runnables that can be set at runtime.

Parameters:

Name Type Description Default
which ConfigurableField

The ConfigurableField instance that will be used to select the alternative.

required
default_key str

The default key to use if no alternative is selected. Defaults to 'default'.

'default'
prefix_keys bool

Whether to prefix the keys with the ConfigurableField id. Defaults to False.

False
**kwargs Runnable[Input, Output] | Callable[[], Runnable[Input, Output]]

A dictionary of keys to Runnable instances or callables that return Runnable instances.

{}

Returns:

Type Description
RunnableSerializable[Input, Output]

A new Runnable with the alternatives configured.

from langchain_anthropic import ChatAnthropic
from langchain_core.runnables.utils import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatAnthropic(
    model_name="claude-3-7-sonnet-20250219"
).configurable_alternatives(
    ConfigurableField(id="llm"),
    default_key="anthropic",
    openai=ChatOpenAI(),
)

# uses the default model ChatAnthropic
print(model.invoke("which organization created you?").content)

# uses ChatOpenAI
print(
    model.with_config(configurable={"llm": "openai"})
    .invoke("which organization created you?")
    .content
)

set_verbose

set_verbose(verbose: bool | None) -> bool

If verbose is None, use the global verbosity setting.

This allows users to pass in None as verbose to access the global setting.

Parameters:

Name Type Description Default
verbose bool | None

The verbosity setting to use.

required

Returns:

Type Description
bool

The verbosity setting to use.

with_structured_output

with_structured_output(
    schema: dict | type, **kwargs: Any
) -> Runnable[LanguageModelInput, dict | BaseModel]

Not implemented on this class.

get_token_ids

get_token_ids(text: str) -> list[int]

Get the token IDs using the tiktoken package.

get_num_tokens

get_num_tokens(text: str) -> int

Get the number of tokens present in the text.

Useful for checking if an input fits in a model's context window.

Parameters:

Name Type Description Default
text str

The string input to tokenize.

required

Returns:

Type Description
int

The integer number of tokens in the text.
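
A minimal sketch tying the two tokenizer helpers together (assumes tiktoken is installed and OPENAI_API_KEY is set):

from langchain_openai import OpenAI

llm = OpenAI(model="gpt-3.5-turbo-instruct")
token_ids = llm.get_token_ids("Hello, world!")
assert llm.get_num_tokens("Hello, world!") == len(token_ids)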

get_num_tokens_from_messages

get_num_tokens_from_messages(
    messages: list[BaseMessage],
    tools: Sequence | None = None,
) -> int

Get the number of tokens in the messages.

Useful for checking if an input fits in a model's context window.

Note

The base implementation of get_num_tokens_from_messages ignores tool schemas.

Parameters:

Name Type Description Default
messages list[BaseMessage]

The message inputs to tokenize.

required
tools Sequence | None

If provided, sequence of dict, BaseModel, function, or BaseTools to be converted to tool schemas.

None

Returns:

Type Description
int

The sum of the number of tokens across the messages.
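
A minimal sketch (message contents are illustrative; assumes OPENAI_API_KEY is set):

from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import OpenAI

llm = OpenAI(model="gpt-3.5-turbo-instruct")
print(
    llm.get_num_tokens_from_messages(
        [
            SystemMessage(content="You are a helpful assistant."),
            HumanMessage(content="Hi!"),
        ]
    )
)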

generate

generate(
    prompts: list[str],
    stop: list[str] | None = None,
    callbacks: Callbacks | list[Callbacks] | None = None,
    *,
    tags: list[str] | list[list[str]] | None = None,
    metadata: (
        dict[str, Any] | list[dict[str, Any]] | None
    ) = None,
    run_name: str | list[str] | None = None,
    run_id: UUID | list[UUID | None] | None = None,
    **kwargs: Any
) -> LLMResult

Pass a sequence of prompts to a model and return generations.

This method should make use of batched calls for models that expose a batched API.

Use this method when you:

  1. Want to take advantage of batched calls,
  2. Need more output from the model than just the top generated value,
  3. Are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).

Parameters:

Name Type Description Default
prompts list[str]

List of string prompts.

required
stop list[str] | None

Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

None
callbacks Callbacks | list[Callbacks] | None

Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.

None
tags list[str] | list[list[str]] | None

List of tags to associate with each prompt. If provided, the length of the list must match the length of the prompts list.

None
metadata dict[str, Any] | list[dict[str, Any]] | None

List of metadata dictionaries to associate with each prompt. If provided, the length of the list must match the length of the prompts list.

None
run_name str | list[str] | None

List of run names to associate with each prompt. If provided, the length of the list must match the length of the prompts list.

None
run_id UUID | list[UUID | None] | None

List of run IDs to associate with each prompt. If provided, the length of the list must match the length of the prompts list.

None
**kwargs Any

Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

{}

Raises:

Type Description
ValueError

If prompts is not a list.

ValueError

If the length of callbacks, tags, metadata, or run_name (if provided) does not match the length of prompts.

Returns:

Type Description
LLMResult

An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
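
A minimal sketch (prompts are illustrative; assumes OPENAI_API_KEY is set):

from langchain_openai import OpenAI

llm = OpenAI(model="gpt-3.5-turbo-instruct", n=2)
result = llm.generate(["Tell me a joke.", "Tell me a fact."])

# One list of candidate Generations per input prompt.
for generations in result.generations:
    for generation in generations:
        print(generation.text)

print(result.llm_output)  # provider-specific output, e.g. token usage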

agenerate async

agenerate(
    prompts: list[str],
    stop: list[str] | None = None,
    callbacks: Callbacks | list[Callbacks] | None = None,
    *,
    tags: list[str] | list[list[str]] | None = None,
    metadata: (
        dict[str, Any] | list[dict[str, Any]] | None
    ) = None,
    run_name: str | list[str] | None = None,
    run_id: UUID | list[UUID | None] | None = None,
    **kwargs: Any
) -> LLMResult

Asynchronously pass a sequence of prompts to a model and return generations.

This method should make use of batched calls for models that expose a batched API.

Use this method when you:

  1. Want to take advantage of batched calls,
  2. Need more output from the model than just the top generated value,
  3. Are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).

Parameters:

Name Type Description Default
prompts list[str]

List of string prompts.

required
stop list[str] | None

Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

None
callbacks Callbacks | list[Callbacks] | None

Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.

None
tags list[str] | list[list[str]] | None

List of tags to associate with each prompt. If provided, the length of the list must match the length of the prompts list.

None
metadata dict[str, Any] | list[dict[str, Any]] | None

List of metadata dictionaries to associate with each prompt. If provided, the length of the list must match the length of the prompts list.

None
run_name str | list[str] | None

List of run names to associate with each prompt. If provided, the length of the list must match the length of the prompts list.

None
run_id UUID | list[UUID | None] | None

List of run IDs to associate with each prompt. If provided, the length of the list must match the length of the prompts list.

None
**kwargs Any

Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

{}

Raises:

Type Description
ValueError

If the length of callbacks, tags, metadata, or run_name (if provided) does not match the length of prompts.

Returns:

Type Description
LLMResult

An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
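
A minimal async sketch (assumes OPENAI_API_KEY is set):

import asyncio

from langchain_openai import OpenAI


async def main() -> None:
    llm = OpenAI(model="gpt-3.5-turbo-instruct")
    result = await llm.agenerate(["Tell me a joke.", "Tell me a fact."])
    print(len(result.generations))  # 2: one list of Generations per prompt


asyncio.run(main())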

__str__

__str__() -> str

Return a string representation of the object for printing.

dict

dict(**kwargs: Any) -> dict

Return a dictionary of the LLM.

save

save(file_path: Path | str) -> None

Save the LLM.

Parameters:

Name Type Description Default
file_path Path | str

Path to file to save the LLM to.

required

Raises:

Type Description
ValueError

If the file path is not a string or Path object.

Example:

.. code-block:: python

    llm.save(file_path="path/llm.yaml")

build_extra classmethod

build_extra(values: dict[str, Any]) -> Any

Build extra kwargs from additional params that were passed in.

get_sub_prompts

get_sub_prompts(
    params: dict[str, Any],
    prompts: list[str],
    stop: Optional[list[str]] = None,
) -> list[list[str]]

Get the sub-prompts for the LLM call.

create_llm_result

create_llm_result(
    choices: Any,
    prompts: list[str],
    params: dict[str, Any],
    token_usage: dict[str, int],
    *,
    system_fingerprint: Optional[str] = None
) -> LLMResult

Create the LLMResult from the choices and prompts.

modelname_to_contextsize staticmethod

modelname_to_contextsize(modelname: str) -> int

Calculate the maximum number of tokens possible to generate for a model.

Parameters:

Name Type Description Default
modelname str

The model name we want to know the context size for.

required

Returns:

Type Description
int

The maximum context size.

Example

.. code-block:: python

max_tokens = openai.modelname_to_contextsize("gpt-3.5-turbo-instruct")

max_tokens_for_prompt

max_tokens_for_prompt(prompt: str) -> int

Calculate the maximum number of tokens possible to generate for a prompt.

Parameters:

Name Type Description Default
prompt str

The prompt to pass into the model.

required

Returns:

Type Description
int

The maximum number of tokens to generate for a prompt.

Example

.. code-block:: python

max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")

get_lc_namespace classmethod

get_lc_namespace() -> list[str]

Get the namespace of the langchain object.

is_lc_serializable classmethod

is_lc_serializable() -> bool

Return whether this model can be serialized by LangChain.

validate_environment

validate_environment() -> Self

Validate that the API key and Python package exist in the environment.

OpenAI

Bases: BaseOpenAI

OpenAI completion model integration.

Setup

Install langchain-openai and set environment variable OPENAI_API_KEY.

.. code-block:: bash

pip install -U langchain-openai
export OPENAI_API_KEY="your-api-key"

Key init args — completion params: model: str Name of OpenAI model to use. temperature: float Sampling temperature. max_tokens: Optional[int] Max number of tokens to generate. logprobs: Optional[bool] Whether to return logprobs. stream_options: Dict Configure streaming outputs, like whether to return token usage when streaming ({"include_usage": True}).

Key init args — client params: timeout: Union[float, Tuple[float, float], Any, None] Timeout for requests. max_retries: int Max number of retries. api_key: Optional[str] OpenAI API key. If not passed in will be read from env var OPENAI_API_KEY. base_url: Optional[str] Base URL for API requests. Only specify if using a proxy or service emulator. organization: Optional[str] OpenAI organization ID. If not passed in will be read from env var OPENAI_ORG_ID.

See full list of supported init args and their descriptions in the params section.

Instantiate

.. code-block:: python

from langchain_openai import OpenAI

llm = OpenAI(
    model="gpt-3.5-turbo-instruct",
    temperature=0,
    max_retries=2,
    # api_key="...",
    # base_url="...",
    # organization="...",
    # other params...
)
Invoke

.. code-block:: python

input_text = "The meaning of life is "
llm.invoke(input_text)

.. code-block::

"a philosophical question that has been debated by thinkers and scholars for centuries."
Stream

.. code-block:: python

for chunk in llm.stream(input_text):
    print(chunk, end="|")

.. code-block::

a| philosophical| question| that| has| been| debated| by| thinkers| and| scholars| for| centuries|.

.. code-block:: python

"".join(llm.stream(input_text))

.. code-block::

"a philosophical question that has been debated by thinkers and scholars for centuries."
Async

.. code-block:: python

await llm.ainvoke(input_text)

# stream:
# async for chunk in llm.astream(input_text):
#    print(chunk)

# batch:
# await llm.abatch([input_text])

.. code-block::

"a philosophical question that has been debated by thinkers and scholars for centuries."

Methods:

Name Description
get_name

Get the name of the Runnable.

get_input_schema

Get a pydantic model that can be used to validate input to the Runnable.

get_input_jsonschema

Get a JSON schema that represents the input to the Runnable.

get_output_schema

Get a pydantic model that can be used to validate output of the Runnable.

get_output_jsonschema

Get a JSON schema that represents the output of the Runnable.

config_schema

The type of config this Runnable accepts specified as a pydantic model.

get_config_jsonschema

Get a JSON schema that represents the config of the Runnable.

get_graph

Return a graph representation of this Runnable.

get_prompts

Return a list of prompts used by this Runnable.

__or__

Runnable "or" operator.

__ror__

Runnable "reverse-or" operator.

pipe

Pipe runnables.

pick

Pick keys from the output dict of this Runnable.

assign

Assigns new fields to the dict output of this Runnable.

batch_as_completed

Run invoke in parallel on a list of inputs.

abatch_as_completed

Run ainvoke in parallel on a list of inputs.

astream_log

Stream all output from a Runnable, as reported to the callback system.

astream_events

Generate a stream of events.

transform

Transform inputs to outputs.

atransform

Transform inputs to outputs.

bind

Bind arguments to a Runnable, returning a new Runnable.

with_config

Bind config to a Runnable, returning a new Runnable.

with_listeners

Bind lifecycle listeners to a Runnable, returning a new Runnable.

with_alisteners

Bind async lifecycle listeners to a Runnable.

with_types

Bind input and output types to a Runnable, returning a new Runnable.

with_retry

Create a new Runnable that retries the original Runnable on exceptions.

map

Return a new Runnable that maps a list of inputs to a list of outputs.

with_fallbacks

Add fallbacks to a Runnable, returning a new Runnable.

as_tool

Create a BaseTool from a Runnable.

__init__
lc_id

Return a unique identifier for this class for serialization purposes.

to_json

Serialize the Runnable to JSON.

to_json_not_implemented

Serialize a "not implemented" object.

configurable_fields

Configure particular Runnable fields at runtime.

configurable_alternatives

Configure alternatives for Runnables that can be set at runtime.

set_verbose

If verbose is None, use the global setting.

with_structured_output

Not implemented on this class.

get_token_ids

Get the token IDs using the tiktoken package.

get_num_tokens

Get the number of tokens present in the text.

get_num_tokens_from_messages

Get the number of tokens in the messages.

generate

Pass a sequence of prompts to a model and return generations.

agenerate

Asynchronously pass a sequence of prompts to a model and return generations.

__str__

Return a string representation of the object for printing.

dict

Return a dictionary of the LLM.

save

Save the LLM.

build_extra

Build extra kwargs from additional params that were passed in.

validate_environment

Validate that the API key and Python package exist in the environment.

get_sub_prompts

Get the sub-prompts for the LLM call.

create_llm_result

Create the LLMResult from the choices and prompts.

modelname_to_contextsize

Calculate the maximum number of tokens possible to generate for a model.

max_tokens_for_prompt

Calculate the maximum number of tokens possible to generate for a prompt.

get_lc_namespace

Get the namespace of the langchain object.

is_lc_serializable

Return whether this model can be serialized by LangChain.

Attributes:

Name Type Description
InputType TypeAlias

Get the input type for this runnable.

OutputType type[str]

Get the output type for this runnable.

input_schema type[BaseModel]

The type of input this Runnable accepts specified as a pydantic model.

output_schema type[BaseModel]

Output schema.

config_specs list[ConfigurableFieldSpec]

List configurable fields for this Runnable.

cache BaseCache | bool | None

Whether to cache the response.

verbose bool

Whether to print out response text.

callbacks Callbacks

Callbacks to add to the run trace.

tags list[str] | None

Tags to add to the run trace.

metadata dict[str, Any] | None

Metadata to add to the run trace.

custom_get_token_ids Callable[[str], list[int]] | None

Optional encoder to use for counting tokens.

model_name str

Model name to use.

temperature float

What sampling temperature to use.

max_tokens int

The maximum number of tokens to generate in the completion.

top_p float

Total probability mass of tokens to consider at each step.

frequency_penalty float

Penalizes repeated tokens according to frequency.

presence_penalty float

Penalizes repeated tokens.

n int

How many completions to generate for each prompt.

best_of int

Generates best_of completions server-side and returns the "best".

model_kwargs dict[str, Any]

Holds any model parameters valid for create call not explicitly specified.

openai_api_key Optional[SecretStr]

Automatically inferred from env var OPENAI_API_KEY if not provided.

openai_api_base Optional[str]

Base URL path for API requests, leave blank if not using a proxy or service emulator.

openai_organization Optional[str]

Automatically inferred from env var OPENAI_ORG_ID if not provided.

batch_size int

Batch size to use when passing multiple documents to generate.

request_timeout Union[float, tuple[float, float], Any, None]

Timeout for requests to OpenAI completion API. Can be float, httpx.Timeout or None.

logit_bias Optional[dict[str, float]]

Adjust the probability of specific tokens being generated.

max_retries int

Maximum number of retries to make when generating.

seed Optional[int]

Seed for generation.

logprobs Optional[int]

Include the log probabilities on the logprobs most likely output tokens, as well as the chosen tokens.

streaming bool

Whether to stream the results or not.

allowed_special Union[Literal['all'], set[str]]

Set of special tokens that are allowed.

disallowed_special Union[Literal['all'], Collection[str]]

Set of special tokens that are not allowed.

tiktoken_model_name Optional[str]

The model name to pass to tiktoken when using this class.

http_client Union[Any, None]

Optional httpx.Client. Only used for sync invocations. Must specify http_async_client as well if you'd like a custom client for async invocations.

http_async_client Union[Any, None]

Optional httpx.AsyncClient. Only used for async invocations. Must specify http_client as well if you'd like a custom client for sync invocations.

extra_body Optional[Mapping[str, Any]]

Optional additional JSON properties to include in the request parameters when making requests to OpenAI compatible APIs, such as vLLM.

max_context_size int

Get max context size for this model.

lc_secrets dict[str, str]

Mapping of secret keys to environment variables.

lc_attributes dict[str, Any]

LangChain attributes for this class.

InputType property

InputType: TypeAlias

Get the input type for this runnable.

OutputType property

OutputType: type[str]

Get the output type for this runnable.

input_schema property

input_schema: type[BaseModel]

The type of input this Runnable accepts specified as a pydantic model.

output_schema property

output_schema: type[BaseModel]

Output schema.

The type of output this Runnable produces specified as a pydantic model.

config_specs property

config_specs: list[ConfigurableFieldSpec]

List configurable fields for this Runnable.

cache class-attribute instance-attribute

cache: BaseCache | bool | None = Field(
    default=None, exclude=True
)

Whether to cache the response.

  • If true, will use the global cache.
  • If false, will not use a cache.
  • If None, will use the global cache if it's set, otherwise no cache.
  • If instance of BaseCache, will use the provided cache.

Caching is not currently supported for streaming methods of models.

verbose class-attribute instance-attribute

verbose: bool = Field(
    default_factory=_get_verbosity, exclude=True, repr=False
)

Whether to print out response text.

callbacks class-attribute instance-attribute

callbacks: Callbacks = Field(default=None, exclude=True)

Callbacks to add to the run trace.

tags class-attribute instance-attribute

tags: list[str] | None = Field(default=None, exclude=True)

Tags to add to the run trace.

metadata class-attribute instance-attribute

metadata: dict[str, Any] | None = Field(
    default=None, exclude=True
)

Metadata to add to the run trace.

custom_get_token_ids class-attribute instance-attribute

custom_get_token_ids: Callable[[str], list[int]] | None = (
    Field(default=None, exclude=True)
)

Optional encoder to use for counting tokens.

model_name class-attribute instance-attribute

model_name: str = Field(
    default="gpt-3.5-turbo-instruct", alias="model"
)

Model name to use.

temperature class-attribute instance-attribute

temperature: float = 0.7

What sampling temperature to use.

max_tokens class-attribute instance-attribute

max_tokens: int = 256

The maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the model's maximal context size.

top_p class-attribute instance-attribute

top_p: float = 1

Total probability mass of tokens to consider at each step.

frequency_penalty class-attribute instance-attribute

frequency_penalty: float = 0

Penalizes repeated tokens according to frequency.

presence_penalty class-attribute instance-attribute

presence_penalty: float = 0

Penalizes repeated tokens.

n class-attribute instance-attribute

n: int = 1

How many completions to generate for each prompt.

best_of class-attribute instance-attribute

best_of: int = 1

Generates best_of completions server-side and returns the "best".

model_kwargs class-attribute instance-attribute

model_kwargs: dict[str, Any] = Field(default_factory=dict)

Holds any model parameters valid for create call not explicitly specified.

openai_api_key class-attribute instance-attribute

openai_api_key: Optional[SecretStr] = Field(
    alias="api_key",
    default_factory=secret_from_env(
        "OPENAI_API_KEY", default=None
    ),
)

Automatically inferred from env var OPENAI_API_KEY if not provided.

openai_api_base class-attribute instance-attribute

openai_api_base: Optional[str] = Field(
    alias="base_url",
    default_factory=from_env(
        "OPENAI_API_BASE", default=None
    ),
)

Base URL path for API requests, leave blank if not using a proxy or service emulator.

openai_organization class-attribute instance-attribute

openai_organization: Optional[str] = Field(
    alias="organization",
    default_factory=from_env(
        ["OPENAI_ORG_ID", "OPENAI_ORGANIZATION"],
        default=None,
    ),
)

Automatically inferred from env var OPENAI_ORG_ID if not provided.

batch_size class-attribute instance-attribute

batch_size: int = 20

Batch size to use when passing multiple documents to generate.

request_timeout class-attribute instance-attribute

request_timeout: Union[
    float, tuple[float, float], Any, None
] = Field(default=None, alias="timeout")

Timeout for requests to OpenAI completion API. Can be float, httpx.Timeout or None.

logit_bias class-attribute instance-attribute

logit_bias: Optional[dict[str, float]] = None

Adjust the probability of specific tokens being generated.

max_retries class-attribute instance-attribute

max_retries: int = 2

Maximum number of retries to make when generating.

seed class-attribute instance-attribute

seed: Optional[int] = None

Seed for generation.

logprobs class-attribute instance-attribute

logprobs: Optional[int] = None

Include the log probabilities on the logprobs most likely output tokens, as well as the chosen tokens.

streaming class-attribute instance-attribute

streaming: bool = False

Whether to stream the results or not.

allowed_special class-attribute instance-attribute

allowed_special: Union[Literal['all'], set[str]] = set()

Set of special tokens that are allowed.

disallowed_special class-attribute instance-attribute

disallowed_special: Union[
    Literal["all"], Collection[str]
] = "all"

Set of special tokens that are not allowed.

tiktoken_model_name class-attribute instance-attribute

tiktoken_model_name: Optional[str] = None

The model name to pass to tiktoken when using this class. Tiktoken is used to count the number of tokens in documents to constrain them to be under a certain limit. By default, when set to None, this will be the same as the model name. However, there are some cases where you may want to use this class with a model name not supported by tiktoken. This can include when using Azure, or when using one of the many model providers that expose an OpenAI-like API but with different models. In those cases, in order to avoid erroring when tiktoken is called, you can specify a model name to use here.

http_client class-attribute instance-attribute

http_client: Union[Any, None] = None

Optional httpx.Client. Only used for sync invocations. Must specify http_async_client as well if you'd like a custom client for async invocations.

http_async_client class-attribute instance-attribute

http_async_client: Union[Any, None] = None

Optional httpx.AsyncClient. Only used for async invocations. Must specify http_client as well if you'd like a custom client for sync invocations.

extra_body class-attribute instance-attribute

extra_body: Optional[Mapping[str, Any]] = None

Optional additional JSON properties to include in the request parameters when making requests to OpenAI compatible APIs, such as vLLM.

max_context_size property

max_context_size: int

Get max context size for this model.

lc_secrets property

lc_secrets: dict[str, str]

Mapping of secret keys to environment variables.

lc_attributes property

lc_attributes: dict[str, Any]

LangChain attributes for this class.

get_name

get_name(
    suffix: str | None = None, *, name: str | None = None
) -> str

Get the name of the Runnable.

Parameters:

Name Type Description Default
suffix str | None

An optional suffix to append to the name.

None
name str | None

An optional name to use instead of the Runnable's name.

None

Returns:

Type Description
str

The name of the Runnable.
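
For instance (outputs are illustrative; assumes OPENAI_API_KEY is set):

from langchain_openai import OpenAI

llm = OpenAI()
print(llm.get_name())                   # 'OpenAI'
print(llm.get_name(suffix="Map"))       # e.g. 'OpenAIMap'
print(llm.get_name(name="Completion"))  # 'Completion'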

get_input_schema

get_input_schema(
    config: RunnableConfig | None = None,
) -> type[BaseModel]

Get a pydantic model that can be used to validate input to the Runnable.

Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic input schema that depends on which configuration the Runnable is invoked with.

This method allows you to get an input schema for a specific configuration.

Parameters:

Name Type Description Default
config RunnableConfig | None

A config to use when generating the schema.

None

Returns:

Type Description
type[BaseModel]

A pydantic model that can be used to validate input.

get_input_jsonschema

get_input_jsonschema(
    config: RunnableConfig | None = None,
) -> dict[str, Any]

Get a JSON schema that represents the input to the Runnable.

Parameters:

Name Type Description Default
config RunnableConfig | None

A config to use when generating the schema.

None

Returns:

Type Description
dict[str, Any]

A JSON schema that represents the input to the Runnable.

Example
from langchain_core.runnables import RunnableLambda


def add_one(x: int) -> int:
    return x + 1


runnable = RunnableLambda(add_one)

print(runnable.get_input_jsonschema())

Added in version 0.3.0

get_output_schema

get_output_schema(
    config: RunnableConfig | None = None,
) -> type[BaseModel]

Get a pydantic model that can be used to validate output of the Runnable.

Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic output schema that depends on which configuration the Runnable is invoked with.

This method allows you to get an output schema for a specific configuration.

Parameters:

Name Type Description Default
config RunnableConfig | None

A config to use when generating the schema.

None

Returns:

Type Description
type[BaseModel]

A pydantic model that can be used to validate output.

get_output_jsonschema

get_output_jsonschema(
    config: RunnableConfig | None = None,
) -> dict[str, Any]

Get a JSON schema that represents the output of the Runnable.

Parameters:

Name Type Description Default
config RunnableConfig | None

A config to use when generating the schema.

None

Returns:

Type Description
dict[str, Any]

A JSON schema that represents the output of the Runnable.

Example
from langchain_core.runnables import RunnableLambda


def add_one(x: int) -> int:
    return x + 1


runnable = RunnableLambda(add_one)

print(runnable.get_output_jsonschema())

Added in version 0.3.0

config_schema

config_schema(
    *, include: Sequence[str] | None = None
) -> type[BaseModel]

The type of config this Runnable accepts specified as a pydantic model.

To mark a field as configurable, see the configurable_fields and configurable_alternatives methods.

Parameters:

Name Type Description Default
include Sequence[str] | None

A list of fields to include in the config schema.

None

Returns:

Type Description
type[BaseModel]

A pydantic model that can be used to validate config.
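
A minimal sketch (the field id "llm_temperature" is arbitrary; assumes OPENAI_API_KEY is set):

from langchain_core.runnables import ConfigurableField
from langchain_openai import OpenAI

model = OpenAI().configurable_fields(
    temperature=ConfigurableField(id="llm_temperature")
)

# The 'configurable' section of the config schema now exposes llm_temperature.
print(model.config_schema(include=["configurable"]).model_json_schema())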

get_config_jsonschema

get_config_jsonschema(
    *, include: Sequence[str] | None = None
) -> dict[str, Any]

Get a JSON schema that represents the config of the Runnable.

Parameters:

Name Type Description Default
include Sequence[str] | None

A list of fields to include in the config schema.

None

Returns:

Type Description
dict[str, Any]

A JSON schema that represents the config of the Runnable.

Added in version 0.3.0

get_graph

get_graph(config: RunnableConfig | None = None) -> Graph

Return a graph representation of this Runnable.

get_prompts

get_prompts(
    config: RunnableConfig | None = None,
) -> list[BasePromptTemplate]

Return a list of prompts used by this Runnable.
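
A minimal sketch (print_ascii requires the optional grandalf package; assumes OPENAI_API_KEY is set):

from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI

chain = PromptTemplate.from_template("Tell me about {topic}") | OpenAI()

print(chain.get_prompts())       # [PromptTemplate(input_variables=['topic'], ...)]
chain.get_graph().print_ascii()  # ASCII rendering of prompt -> llm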

__or__

__or__(
    other: (
        Runnable[Any, Other]
        | Callable[[Iterator[Any]], Iterator[Other]]
        | Callable[
            [AsyncIterator[Any]], AsyncIterator[Other]
        ]
        | Callable[[Any], Other]
        | Mapping[
            str,
            Runnable[Any, Other]
            | Callable[[Any], Other]
            | Any,
        ]
    ),
) -> RunnableSerializable[Input, Other]

Runnable "or" operator.

Compose this Runnable with another object to create a RunnableSequence.

Parameters:

Name Type Description Default
other Runnable[Any, Other] | Callable[[Iterator[Any]], Iterator[Other]] | Callable[[AsyncIterator[Any]], AsyncIterator[Other]] | Callable[[Any], Other] | Mapping[str, Runnable[Any, Other] | Callable[[Any], Other] | Any]

Another Runnable or a Runnable-like object.

required

Returns:

Type Description
RunnableSerializable[Input, Other]

A new Runnable.

__ror__

__ror__(
    other: (
        Runnable[Other, Any]
        | Callable[[Iterator[Other]], Iterator[Any]]
        | Callable[
            [AsyncIterator[Other]], AsyncIterator[Any]
        ]
        | Callable[[Other], Any]
        | Mapping[
            str,
            Runnable[Other, Any]
            | Callable[[Other], Any]
            | Any,
        ]
    ),
) -> RunnableSerializable[Other, Output]

Runnable "reverse-or" operator.

Compose this Runnable with another object to create a RunnableSequence.

Parameters:

Name Type Description Default
other Runnable[Other, Any] | Callable[[Iterator[Other]], Iterator[Any]] | Callable[[AsyncIterator[Other]], AsyncIterator[Any]] | Callable[[Other], Any] | Mapping[str, Runnable[Other, Any] | Callable[[Other], Any] | Any]

Another Runnable or a Runnable-like object.

required

Returns:

Type Description
RunnableSerializable[Other, Output]

A new Runnable.

pipe

pipe(
    *others: Runnable[Any, Other] | Callable[[Any], Other],
    name: str | None = None
) -> RunnableSerializable[Input, Other]

Pipe runnables.

Compose this Runnable with Runnable-like objects to make a RunnableSequence.

Equivalent to RunnableSequence(self, *others) or self | others[0] | ...

Example
from langchain_core.runnables import RunnableLambda


def add_one(x: int) -> int:
    return x + 1


def mul_two(x: int) -> int:
    return x * 2


runnable_1 = RunnableLambda(add_one)
runnable_2 = RunnableLambda(mul_two)
sequence = runnable_1.pipe(runnable_2)
# Or equivalently:
# sequence = runnable_1 | runnable_2
# sequence = RunnableSequence(first=runnable_1, last=runnable_2)
sequence.invoke(1)
await sequence.ainvoke(1)
# -> 4

sequence.batch([1, 2, 3])
await sequence.abatch([1, 2, 3])
# -> [4, 6, 8]

Parameters:

Name Type Description Default
*others Runnable[Any, Other] | Callable[[Any], Other]

Other Runnable or Runnable-like objects to compose

()
name str | None

An optional name for the resulting RunnableSequence.

None

Returns:

Type Description
RunnableSerializable[Input, Other]

A new Runnable.

pick

pick(
    keys: str | list[str],
) -> RunnableSerializable[Any, Any]

Pick keys from the output dict of this Runnable.

Pick single key:

```python
import json

from langchain_core.runnables import RunnableLambda, RunnableMap

as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)
chain = RunnableMap(str=as_str, json=as_json)

chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3]}

json_only_chain = chain.pick("json")
json_only_chain.invoke("[1, 2, 3]")
# -> [1, 2, 3]
```

Pick list of keys:

```python
from typing import Any

import json

from langchain_core.runnables import RunnableLambda, RunnableMap

as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)


def as_bytes(x: Any) -> bytes:
    return bytes(x, "utf-8")


chain = RunnableMap(
    str=as_str, json=as_json, bytes=RunnableLambda(as_bytes)
)

chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3], "bytes": b"[1, 2, 3]"}

json_and_bytes_chain = chain.pick(["json", "bytes"])
json_and_bytes_chain.invoke("[1, 2, 3]")
# -> {"json": [1, 2, 3], "bytes": b"[1, 2, 3]"}
```

Parameters:

Name Type Description Default
keys str | list[str]

A key or list of keys to pick from the output dict.

required

Returns:

Type Description
RunnableSerializable[Any, Any]

A new Runnable.

assign

assign(
    **kwargs: (
        Runnable[dict[str, Any], Any]
        | Callable[[dict[str, Any]], Any]
        | Mapping[
            str,
            Runnable[dict[str, Any], Any]
            | Callable[[dict[str, Any]], Any],
        ]
    ),
) -> RunnableSerializable[Any, Any]

Assigns new fields to the dict output of this Runnable.

from langchain_community.llms.fake import FakeStreamingListLLM
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import SystemMessagePromptTemplate
from langchain_core.runnables import Runnable
from operator import itemgetter

prompt = (
    SystemMessagePromptTemplate.from_template("You are a nice assistant.")
    + "{question}"
)
llm = FakeStreamingListLLM(responses=["foo-lish"])

chain: Runnable = prompt | llm | {"str": StrOutputParser()}

chain_with_assign = chain.assign(hello=itemgetter("str") | llm)

print(chain_with_assign.input_schema.model_json_schema())
# {'title': 'PromptInput', 'type': 'object', 'properties':
#  {'question': {'title': 'Question', 'type': 'string'}}}
print(chain_with_assign.output_schema.model_json_schema())
# {'title': 'RunnableSequenceOutput', 'type': 'object', 'properties':
#  {'str': {'title': 'Str', 'type': 'string'},
#   'hello': {'title': 'Hello', 'type': 'string'}}}

Parameters:

Name Type Description Default
**kwargs Runnable[dict[str, Any], Any] | Callable[[dict[str, Any]], Any] | Mapping[str, Runnable[dict[str, Any], Any] | Callable[[dict[str, Any]], Any]]

A mapping of keys to Runnable or Runnable-like objects that will be invoked with the entire output dict of this Runnable.

{}

Returns:

Type Description
RunnableSerializable[Any, Any]

A new Runnable.

batch_as_completed

batch_as_completed(
    inputs: Sequence[Input],
    config: (
        RunnableConfig | Sequence[RunnableConfig] | None
    ) = None,
    *,
    return_exceptions: bool = False,
    **kwargs: Any | None
) -> Iterator[tuple[int, Output | Exception]]

Run invoke in parallel on a list of inputs.

Yields results as they complete.

Parameters:

Name Type Description Default
inputs Sequence[Input]

A list of inputs to the Runnable.

required
config RunnableConfig | Sequence[RunnableConfig] | None

A config to use when invoking the Runnable. The config supports standard keys like 'tags', 'metadata' for tracing purposes, 'max_concurrency' for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Defaults to None.

None
return_exceptions bool

Whether to return exceptions instead of raising them. Defaults to False.

False
**kwargs Any | None

Additional keyword arguments to pass to the Runnable.

{}

Yields:

Type Description
tuple[int, Output | Exception]

Tuples of the index of the input and the output from the Runnable.
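
A minimal sketch (sleep times are illustrative):

import time

from langchain_core.runnables import RunnableLambda


def slow_identity(seconds: int) -> int:
    time.sleep(seconds)
    return seconds


runnable = RunnableLambda(slow_identity)

# Results are yielded as (index, output) pairs in completion order,
# so the 1-second input can come back before the 3-second one.
for index, output in runnable.batch_as_completed(
    [3, 1, 2], config={"max_concurrency": 3}
):
    print(index, output)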

abatch_as_completed async

abatch_as_completed(
    inputs: Sequence[Input],
    config: (
        RunnableConfig | Sequence[RunnableConfig] | None
    ) = None,
    *,
    return_exceptions: bool = False,
    **kwargs: Any | None
) -> AsyncIterator[tuple[int, Output | Exception]]

Run ainvoke in parallel on a list of inputs.

Yields results as they complete.

Parameters:

Name Type Description Default
inputs Sequence[Input]

A list of inputs to the Runnable.

required
config RunnableConfig | Sequence[RunnableConfig] | None

A config to use when invoking the Runnable. The config supports standard keys like 'tags', 'metadata' for tracing purposes, 'max_concurrency' for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Defaults to None.

None
return_exceptions bool

Whether to return exceptions instead of raising them. Defaults to False.

False
kwargs Any | None

Additional keyword arguments to pass to the Runnable.

{}

Yields:

Type Description
AsyncIterator[tuple[int, Output | Exception]]

A tuple of the index of the input and the output from the Runnable.

astream_log async

astream_log(
    input: Any,
    config: RunnableConfig | None = None,
    *,
    diff: bool = True,
    with_streamed_output_list: bool = True,
    include_names: Sequence[str] | None = None,
    include_types: Sequence[str] | None = None,
    include_tags: Sequence[str] | None = None,
    exclude_names: Sequence[str] | None = None,
    exclude_types: Sequence[str] | None = None,
    exclude_tags: Sequence[str] | None = None,
    **kwargs: Any
) -> AsyncIterator[RunLogPatch] | AsyncIterator[RunLog]

Stream all output from a Runnable, as reported to the callback system.

This includes all inner runs of LLMs, Retrievers, Tools, etc.

Output is streamed as Log objects, which include a list of Jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run.

The Jsonpatch ops can be applied in order to construct state.

Parameters:

Name Type Description Default
input Any

The input to the Runnable.

required
config RunnableConfig | None

The config to use for the Runnable.

None
diff bool

Whether to yield diffs between each step or the current state.

True
with_streamed_output_list bool

Whether to yield the streamed_output list.

True
include_names Sequence[str] | None

Only include logs with these names.

None
include_types Sequence[str] | None

Only include logs with these types.

None
include_tags Sequence[str] | None

Only include logs with these tags.

None
exclude_names Sequence[str] | None

Exclude logs with these names.

None
exclude_types Sequence[str] | None

Exclude logs with these types.

None
exclude_tags Sequence[str] | None

Exclude logs with these tags.

None
kwargs Any

Additional keyword arguments to pass to the Runnable.

{}

Yields:

Type Description
AsyncIterator[RunLogPatch] | AsyncIterator[RunLog]

A RunLogPatch or RunLog object.
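
A minimal sketch (the patches printed depend on the run):

import asyncio

from langchain_core.runnables import RunnableLambda


async def main() -> None:
    runnable = RunnableLambda(lambda x: x + 1)
    # With diff=True (the default), RunLogPatch objects are yielded;
    # applying the patches in order reconstructs the final run state.
    async for patch in runnable.astream_log(1):
        print(patch)


asyncio.run(main())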

astream_events async

astream_events(
    input: Any,
    config: RunnableConfig | None = None,
    *,
    version: Literal["v1", "v2"] = "v2",
    include_names: Sequence[str] | None = None,
    include_types: Sequence[str] | None = None,
    include_tags: Sequence[str] | None = None,
    exclude_names: Sequence[str] | None = None,
    exclude_types: Sequence[str] | None = None,
    exclude_tags: Sequence[str] | None = None,
    **kwargs: Any
) -> AsyncIterator[StreamEvent]

Generate a stream of events.

Use to create an iterator over StreamEvents that provide real-time information about the progress of the Runnable, including StreamEvents from intermediate results.

A StreamEvent is a dictionary with the following schema:

  • event: str - Event names are of the format: on_[runnable_type]_(start|stream|end).
  • name: str - The name of the Runnable that generated the event.
  • run_id: str - randomly generated ID associated with the given execution of the Runnable that emitted the event. A child Runnable that gets invoked as part of the execution of a parent Runnable is assigned its own unique ID.
  • parent_ids: list[str] - The IDs of the parent runnables that generated the event. The root Runnable will have an empty list. The order of the parent IDs is from the root to the immediate parent. Only available for v2 version of the API. The v1 version of the API will return an empty list.
  • tags: Optional[list[str]] - The tags of the Runnable that generated the event.
  • metadata: Optional[dict[str, Any]] - The metadata of the Runnable that generated the event.
  • data: dict[str, Any]

Below is a table that illustrates some events that might be emitted by various chains. Metadata fields have been omitted from the table for brevity. Chain definitions have been included after the table.

Note

This reference table is for the v2 version of the schema.

| event | name | chunk | input | output |
|---|---|---|---|---|
| on_chat_model_start | [model name] | | {"messages": [[SystemMessage, HumanMessage]]} | |
| on_chat_model_stream | [model name] | AIMessageChunk(content="hello") | | |
| on_chat_model_end | [model name] | | {"messages": [[SystemMessage, HumanMessage]]} | AIMessageChunk(content="hello world") |
| on_llm_start | [model name] | | {'input': 'hello'} | |
| on_llm_stream | [model name] | 'Hello' | | |
| on_llm_end | [model name] | | 'Hello human!' | |
| on_chain_start | format_docs | | | |
| on_chain_stream | format_docs | 'hello world!, goodbye world!' | | |
| on_chain_end | format_docs | | [Document(...)] | 'hello world!, goodbye world!' |
| on_tool_start | some_tool | | {"x": 1, "y": "2"} | |
| on_tool_end | some_tool | | | {"x": 1, "y": "2"} |
| on_retriever_start | [retriever name] | | {"query": "hello"} | |
| on_retriever_end | [retriever name] | | {"query": "hello"} | [Document(...), ..] |
| on_prompt_start | [template_name] | | {"question": "hello"} | |
| on_prompt_end | [template_name] | | {"question": "hello"} | ChatPromptValue(messages: [SystemMessage, ...]) |

In addition to the standard events, users can also dispatch custom events (see example below).

Custom events will only be surfaced in the v2 version of the API!

A custom event has the following format:

| Attribute | Type | Description |
|---|---|---|
| name | str | A user defined name for the event. |
| data | Any | The data associated with the event. This can be anything, though we suggest making it JSON serializable. |

Here are declarations associated with the standard events shown above:

format_docs:

def format_docs(docs: list[Document]) -> str:
    '''Format the docs.'''
    return ", ".join([doc.page_content for doc in docs])


format_docs = RunnableLambda(format_docs)

some_tool:

@tool
def some_tool(x: int, y: str) -> dict:
    '''Some_tool.'''
    return {"x": x, "y": y}

prompt:

template = ChatPromptTemplate.from_messages(
    [
        ("system", "You are Cat Agent 007"),
        ("human", "{question}"),
    ]
).with_config({"run_name": "my_template", "tags": ["my_template"]})
Example:

from langchain_core.runnables import RunnableLambda


async def reverse(s: str) -> str:
    return s[::-1]


chain = RunnableLambda(func=reverse)

events = [event async for event in chain.astream_events("hello", version="v2")]

# will produce the following events (run_id and parent_ids
# have been omitted for brevity):
[
    {
        "data": {"input": "hello"},
        "event": "on_chain_start",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"chunk": "olleh"},
        "event": "on_chain_stream",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"output": "olleh"},
        "event": "on_chain_end",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
]

Example: Dispatch Custom Event

from langchain_core.callbacks.manager import (
    adispatch_custom_event,
)
from langchain_core.runnables import RunnableLambda, RunnableConfig
import asyncio


async def slow_thing(some_input: str, config: RunnableConfig) -> str:
    """Do something that takes a long time."""
    await asyncio.sleep(1) # Placeholder for some slow operation
    await adispatch_custom_event(
        "progress_event",
        {"message": "Finished step 1 of 3"},
config=config,  # Must be included for Python < 3.10
    )
    await asyncio.sleep(1) # Placeholder for some slow operation
    await adispatch_custom_event(
        "progress_event",
        {"message": "Finished step 2 of 3"},
config=config,  # Must be included for Python < 3.10
    )
    await asyncio.sleep(1) # Placeholder for some slow operation
    return "Done"

slow_thing = RunnableLambda(slow_thing)

async for event in slow_thing.astream_events("some_input", version="v2"):
    print(event)

Parameters:

Name Type Description Default
input Any

The input to the Runnable.

required
config RunnableConfig | None

The config to use for the Runnable.

None
version Literal['v1', 'v2']

The version of the schema to use, either 'v2' or 'v1'. Users should use 'v2'. 'v1' is for backwards compatibility and will be deprecated in 0.4.0. No default will be assigned until the API is stabilized. Custom events will only be surfaced in 'v2'.

'v2'
include_names Sequence[str] | None

Only include events from Runnables with matching names.

None
include_types Sequence[str] | None

Only include events from Runnables with matching types.

None
include_tags Sequence[str] | None

Only include events from Runnables with matching tags.

None
exclude_names Sequence[str] | None

Exclude events from Runnables with matching names.

None
exclude_types Sequence[str] | None

Exclude events from Runnables with matching types.

None
exclude_tags Sequence[str] | None

Exclude events from Runnables with matching tags.

None
kwargs Any

Additional keyword arguments to pass to the Runnable. These will be passed to astream_log as this implementation of astream_events is built on top of astream_log.

{}

Yields:

Type Description
AsyncIterator[StreamEvent]

An async stream of StreamEvents.

Raises:

Type Description
NotImplementedError

If the version is not 'v1' or 'v2'.

transform

transform(
    input: Iterator[Input],
    config: RunnableConfig | None = None,
    **kwargs: Any | None
) -> Iterator[Output]

Transform inputs to outputs.

Default implementation of transform, which buffers input and calls stream.

Subclasses should override this method if they can start producing output while input is still being generated.

Parameters:

Name Type Description Default
input Iterator[Input]

An iterator of inputs to the Runnable.

required
config RunnableConfig | None

The config to use for the Runnable. Defaults to None.

None
kwargs Any | None

Additional keyword arguments to pass to the Runnable.

{}

Yields:

Type Description
Output

The output of the Runnable.
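
Example

A minimal sketch (the generator function and chunk values are illustrative, not from the source docs). RunnableGenerator is one built-in that overrides transform to emit output as each input chunk arrives, instead of buffering the whole input first:

from typing import Iterator

from langchain_core.runnables import RunnableGenerator


def _upper(chunks: Iterator[str]) -> Iterator[str]:
    # Yield each transformed chunk as soon as it arrives, without buffering.
    for chunk in chunks:
        yield chunk.upper()


runnable = RunnableGenerator(_upper)
print(list(runnable.transform(iter(["he", "llo"]))))  # ['HE', 'LLO']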

atransform async

atransform(
    input: AsyncIterator[Input],
    config: RunnableConfig | None = None,
    **kwargs: Any | None
) -> AsyncIterator[Output]

Transform inputs to outputs.

Default implementation of atransform, which buffers input and calls astream.

Subclasses should override this method if they can start producing output while input is still being generated.

Parameters:

Name Type Description Default
input AsyncIterator[Input]

An async iterator of inputs to the Runnable.

required
config RunnableConfig | None

The config to use for the Runnable. Defaults to None.

None
kwargs Any | None

Additional keyword arguments to pass to the Runnable.

{}

Yields:

Type Description
AsyncIterator[Output]

The output of the Runnable.
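
Example

An async counterpart sketch (function and values are illustrative). RunnableGenerator also accepts an async generator function, so atransform starts yielding before the input stream is exhausted:

import asyncio
from typing import AsyncIterator

from langchain_core.runnables import RunnableGenerator


async def _aupper(chunks: AsyncIterator[str]) -> AsyncIterator[str]:
    # Transform each chunk as it arrives from the async input stream.
    async for chunk in chunks:
        yield chunk.upper()


runnable = RunnableGenerator(_aupper)


async def main() -> None:
    async def source() -> AsyncIterator[str]:
        for chunk in ("he", "llo"):
            yield chunk

    async for out in runnable.atransform(source()):
        print(out)  # HE, then LLO


asyncio.run(main())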

bind

bind(**kwargs: Any) -> Runnable[Input, Output]

Bind arguments to a Runnable, returning a new Runnable.

Useful when a Runnable in a chain requires an argument that is not in the output of the previous Runnable or included in the user input.

Parameters:

Name Type Description Default
kwargs Any

The arguments to bind to the Runnable.

{}

Returns:

Type Description
Runnable[Input, Output]

A new Runnable with the arguments bound.

Example
from langchain_ollama import ChatOllama
from langchain_core.output_parsers import StrOutputParser

llm = ChatOllama(model="llama3.1")

# Without bind.
chain = llm | StrOutputParser()

chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two three four five.'

# With bind.
chain = llm.bind(stop=["three"]) | StrOutputParser()

chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two'

with_config

with_config(
    config: RunnableConfig | None = None, **kwargs: Any
) -> Runnable[Input, Output]

Bind config to a Runnable, returning a new Runnable.

Parameters:

Name Type Description Default
config RunnableConfig | None

The config to bind to the Runnable.

None
kwargs Any

Additional keyword arguments to pass to the Runnable.

{}

Returns:

Type Description
Runnable[Input, Output]

A new Runnable with the config bound.
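
Example

A minimal sketch (the lambda, run name, and tag are illustrative) of binding tracing metadata once so every subsequent invocation carries it:

from langchain_core.runnables import RunnableLambda

runnable = RunnableLambda(lambda x: x + 1).with_config(
    {"run_name": "add_one", "tags": ["math"]}
)
runnable.invoke(1)  # traced with run_name 'add_one' and tag 'math'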

with_listeners

with_listeners(
    *,
    on_start: (
        Callable[[Run], None]
        | Callable[[Run, RunnableConfig], None]
        | None
    ) = None,
    on_end: (
        Callable[[Run], None]
        | Callable[[Run, RunnableConfig], None]
        | None
    ) = None,
    on_error: (
        Callable[[Run], None]
        | Callable[[Run, RunnableConfig], None]
        | None
    ) = None
) -> Runnable[Input, Output]

Bind lifecycle listeners to a Runnable, returning a new Runnable.

The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run.

Parameters:

Name Type Description Default
on_start Callable[[Run], None] | Callable[[Run, RunnableConfig], None] | None

Called before the Runnable starts running, with the Run object. Defaults to None.

None
on_end Callable[[Run], None] | Callable[[Run, RunnableConfig], None] | None

Called after the Runnable finishes running, with the Run object. Defaults to None.

None
on_error Callable[[Run], None] | Callable[[Run, RunnableConfig], None] | None

Called if the Runnable throws an error, with the Run object. Defaults to None.

None

Returns:

Type Description
Runnable[Input, Output]

A new Runnable with the listeners bound.

Example
from langchain_core.runnables import RunnableLambda
from langchain_core.tracers.schemas import Run

import time


def test_runnable(time_to_sleep: int):
    time.sleep(time_to_sleep)


def fn_start(run_obj: Run):
    print("start_time:", run_obj.start_time)


def fn_end(run_obj: Run):
    print("end_time:", run_obj.end_time)


chain = RunnableLambda(test_runnable).with_listeners(
    on_start=fn_start, on_end=fn_end
)
chain.invoke(2)

with_alisteners

with_alisteners(
    *,
    on_start: AsyncListener | None = None,
    on_end: AsyncListener | None = None,
    on_error: AsyncListener | None = None
) -> Runnable[Input, Output]

Bind async lifecycle listeners to a Runnable.

Returns a new Runnable.

The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run.

Parameters:

Name Type Description Default
on_start AsyncListener | None

Called asynchronously before the Runnable starts running, with the Run object. Defaults to None.

None
on_end AsyncListener | None

Called asynchronously after the Runnable finishes running, with the Run object. Defaults to None.

None
on_error AsyncListener | None

Called asynchronously if the Runnable throws an error, with the Run object. Defaults to None.

None

Returns:

Type Description
Runnable[Input, Output]

A new Runnable with the listeners bound.

Example
from langchain_core.runnables import RunnableLambda
from langchain_core.tracers.schemas import Run
from datetime import datetime, timezone
import time
import asyncio

def format_t(timestamp: float) -> str:
    return datetime.fromtimestamp(timestamp, tz=timezone.utc).isoformat()

async def test_runnable(time_to_sleep: int):
    print(f"Runnable[{time_to_sleep}s]: starts at {format_t(time.time())}")
    await asyncio.sleep(time_to_sleep)
    print(f"Runnable[{time_to_sleep}s]: ends at {format_t(time.time())}")

async def fn_start(run_obj: Run):
    print(f"on start callback starts at {format_t(time.time())}")
    await asyncio.sleep(3)
    print(f"on start callback ends at {format_t(time.time())}")

async def fn_end(run_obj: Run):
    print(f"on end callback starts at {format_t(time.time())}")
    await asyncio.sleep(2)
    print(f"on end callback ends at {format_t(time.time())}")

runnable = RunnableLambda(test_runnable).with_alisteners(
    on_start=fn_start,
    on_end=fn_end
)
async def concurrent_runs():
    await asyncio.gather(runnable.ainvoke(2), runnable.ainvoke(3))

asyncio.run(concurrent_runs())
Result:
on start callback starts at 2025-03-01T07:05:22.875378+00:00
on start callback starts at 2025-03-01T07:05:22.875495+00:00
on start callback ends at 2025-03-01T07:05:25.878862+00:00
on start callback ends at 2025-03-01T07:05:25.878947+00:00
Runnable[2s]: starts at 2025-03-01T07:05:25.879392+00:00
Runnable[3s]: starts at 2025-03-01T07:05:25.879804+00:00
Runnable[2s]: ends at 2025-03-01T07:05:27.881998+00:00
on end callback starts at 2025-03-01T07:05:27.882360+00:00
Runnable[3s]: ends at 2025-03-01T07:05:28.881737+00:00
on end callback starts at 2025-03-01T07:05:28.882428+00:00
on end callback ends at 2025-03-01T07:05:29.883893+00:00
on end callback ends at 2025-03-01T07:05:30.884831+00:00

with_types

with_types(
    *,
    input_type: type[Input] | None = None,
    output_type: type[Output] | None = None
) -> Runnable[Input, Output]

Bind input and output types to a Runnable, returning a new Runnable.

Parameters:

Name Type Description Default
input_type type[Input] | None

The input type to bind to the Runnable. Defaults to None.

None
output_type type[Output] | None

The output type to bind to the Runnable. Defaults to None.

None

Returns:

Type Description
Runnable[Input, Output]

A new Runnable with the types bound.
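
Example

A minimal sketch (the lambda is illustrative). This is mainly useful when the inferred schemas of a RunnableLambda are too loose and you want the input and output types pinned down:

from langchain_core.runnables import RunnableLambda

runnable = RunnableLambda(lambda x: str(x)).with_types(
    input_type=int, output_type=str
)
print(runnable.input_schema.model_json_schema())  # now declares an integer input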

with_retry

with_retry(
    *,
    retry_if_exception_type: tuple[
        type[BaseException], ...
    ] = (Exception,),
    wait_exponential_jitter: bool = True,
    exponential_jitter_params: (
        ExponentialJitterParams | None
    ) = None,
    stop_after_attempt: int = 3
) -> Runnable[Input, Output]

Create a new Runnable that retries the original Runnable on exceptions.

Parameters:

Name Type Description Default
retry_if_exception_type tuple[type[BaseException], ...]

A tuple of exception types to retry on. Defaults to (Exception,).

(Exception,)
wait_exponential_jitter bool

Whether to add jitter to the wait time between retries. Defaults to True.

True
stop_after_attempt int

The maximum number of attempts to make before giving up. Defaults to 3.

3
exponential_jitter_params ExponentialJitterParams | None

Parameters for tenacity.wait_exponential_jitter. Namely: initial, max, exp_base, and jitter (all float values).

None

Returns:

Type Description
Runnable[Input, Output]

A new Runnable that retries the original Runnable on exceptions.

Example
from langchain_core.runnables import RunnableLambda

count = 0


def _lambda(x: int) -> None:
    global count
    count = count + 1
    if x == 1:
        raise ValueError("x is 1")


runnable = RunnableLambda(_lambda)
try:
    runnable.with_retry(
        stop_after_attempt=2,
        retry_if_exception_type=(ValueError,),
    ).invoke(1)
except ValueError:
    pass

assert count == 2

map

map() -> Runnable[list[Input], list[Output]]

Return a new Runnable that maps a list of inputs to a list of outputs.

Calls invoke with each input.

Returns:

Type Description
Runnable[list[Input], list[Output]]

A new Runnable that maps a list of inputs to a list of outputs.

Example
from langchain_core.runnables import RunnableLambda


def _lambda(x: int) -> int:
    return x + 1


runnable = RunnableLambda(_lambda)
print(runnable.map().invoke([1, 2, 3]))  # [2, 3, 4]

with_fallbacks

with_fallbacks(
    fallbacks: Sequence[Runnable[Input, Output]],
    *,
    exceptions_to_handle: tuple[
        type[BaseException], ...
    ] = (Exception,),
    exception_key: str | None = None
) -> RunnableWithFallbacks[Input, Output]

Add fallbacks to a Runnable, returning a new Runnable.

The new Runnable will try the original Runnable, and then each fallback in order, upon failures.

Parameters:

Name Type Description Default
fallbacks Sequence[Runnable[Input, Output]]

A sequence of runnables to try if the original Runnable fails.

required
exceptions_to_handle tuple[type[BaseException], ...]

A tuple of exception types to handle. Defaults to (Exception,).

(Exception,)
exception_key str | None

If string is specified then handled exceptions will be passed to fallbacks as part of the input under the specified key. If None, exceptions will not be passed to fallbacks. If used, the base Runnable and its fallbacks must accept a dictionary as input. Defaults to None.

None

Returns:

Type Description
RunnableWithFallbacks[Input, Output]

A new Runnable that will try the original Runnable, and then each fallback in order, upon failures.

Example
from typing import Iterator

from langchain_core.runnables import RunnableGenerator


def _generate_immediate_error(input: Iterator) -> Iterator[str]:
    raise ValueError()
    yield ""


def _generate(input: Iterator) -> Iterator[str]:
    yield from "foo bar"


runnable = RunnableGenerator(_generate_immediate_error).with_fallbacks(
    [RunnableGenerator(_generate)]
)
print("".join(runnable.stream({})))  # foo bar


as_tool

as_tool(
    args_schema: type[BaseModel] | None = None,
    *,
    name: str | None = None,
    description: str | None = None,
    arg_types: dict[str, type] | None = None
) -> BaseTool

Create a BaseTool from a Runnable.

as_tool will instantiate a BaseTool with a name, description, and args_schema from a Runnable. Where possible, schemas are inferred from runnable.get_input_schema. Alternatively (e.g., if the Runnable takes a dict as input and the specific dict keys are not typed), the schema can be specified directly with args_schema. You can also pass arg_types to specify only the required arguments and their types.

Parameters:

Name Type Description Default
args_schema type[BaseModel] | None

The schema for the tool. Defaults to None.

None
name str | None

The name of the tool. Defaults to None.

None
description str | None

The description of the tool. Defaults to None.

None
arg_types dict[str, type] | None

A dictionary of argument names to types. Defaults to None.

None

Returns:

Type Description
BaseTool

A BaseTool instance.

TypedDict input:

from typing_extensions import TypedDict
from langchain_core.runnables import RunnableLambda


class Args(TypedDict):
    a: int
    b: list[int]


def f(x: Args) -> str:
    return str(x["a"] * max(x["b"]))


runnable = RunnableLambda(f)
as_tool = runnable.as_tool()
as_tool.invoke({"a": 3, "b": [1, 2]})

dict input, specifying schema via args_schema:

from typing import Any
from pydantic import BaseModel, Field
from langchain_core.runnables import RunnableLambda

def f(x: dict[str, Any]) -> str:
    return str(x["a"] * max(x["b"]))

class FSchema(BaseModel):
    """Apply a function to an integer and list of integers."""

    a: int = Field(..., description="Integer")
    b: list[int] = Field(..., description="List of ints")

runnable = RunnableLambda(f)
as_tool = runnable.as_tool(FSchema)
as_tool.invoke({"a": 3, "b": [1, 2]})

dict input, specifying schema via arg_types:

from typing import Any
from langchain_core.runnables import RunnableLambda


def f(x: dict[str, Any]) -> str:
    return str(x["a"] * max(x["b"]))


runnable = RunnableLambda(f)
as_tool = runnable.as_tool(arg_types={"a": int, "b": list[int]})
as_tool.invoke({"a": 3, "b": [1, 2]})

String input:

from langchain_core.runnables import RunnableLambda


def f(x: str) -> str:
    return x + "a"


def g(x: str) -> str:
    return x + "z"


runnable = RunnableLambda(f) | g
as_tool = runnable.as_tool()
as_tool.invoke("b")

Added in version 0.2.14

__init__

__init__(*args: Any, **kwargs: Any) -> None

lc_id classmethod

lc_id() -> list[str]

Return a unique identifier for this class for serialization purposes.

The unique identifier is a list of strings that describes the path to the object. For example, for the class langchain.llms.openai.OpenAI, the id is ["langchain", "llms", "openai", "OpenAI"].
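
A short sketch of what this returns (the exact value is indicative, not verified against a specific release):

from langchain_openai import OpenAI

print(OpenAI.lc_id())  # e.g. ['langchain', 'llms', 'openai', 'OpenAI']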

to_json

to_json() -> (
    SerializedConstructor | SerializedNotImplemented
)

Serialize the Runnable to JSON.

Returns:

Type Description
SerializedConstructor | SerializedNotImplemented

A JSON-serializable representation of the Runnable.
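
Example

A minimal sketch (model name illustrative; assumes OPENAI_API_KEY is set). Serializable objects produce a 'constructor' payload:

from langchain_openai import OpenAI

llm = OpenAI(model="gpt-3.5-turbo-instruct")
serialized = llm.to_json()
print(serialized["type"])  # 'constructor' for serializable Runnables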

to_json_not_implemented

to_json_not_implemented() -> SerializedNotImplemented

Serialize a "not implemented" object.

Returns:

Type Description
SerializedNotImplemented

A SerializedNotImplemented instance.

configurable_fields

configurable_fields(
    **kwargs: AnyConfigurableField,
) -> RunnableSerializable[Input, Output]

Configure particular Runnable fields at runtime.

Parameters:

Name Type Description Default
**kwargs AnyConfigurableField

A dictionary of ConfigurableField instances to configure.

{}

Raises:

Type Description
ValueError

If a configuration key is not found in the Runnable.

Returns:

Type Description
RunnableSerializable[Input, Output]

A new Runnable with the fields configured.

Example
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatOpenAI(max_tokens=20).configurable_fields(
    max_tokens=ConfigurableField(
        id="output_token_number",
        name="Max tokens in the output",
        description="The maximum number of tokens in the output",
    )
)

# max_tokens = 20
print("max_tokens_20: ", model.invoke("tell me something about chess").content)

# max_tokens = 200
print(
    "max_tokens_200: ",
    model.with_config(configurable={"output_token_number": 200})
    .invoke("tell me something about chess")
    .content,
)

configurable_alternatives

configurable_alternatives(
    which: ConfigurableField,
    *,
    default_key: str = "default",
    prefix_keys: bool = False,
    **kwargs: (
        Runnable[Input, Output]
        | Callable[[], Runnable[Input, Output]]
    )
) -> RunnableSerializable[Input, Output]

Configure alternatives for Runnables that can be set at runtime.

Parameters:

Name Type Description Default
which ConfigurableField

The ConfigurableField instance that will be used to select the alternative.

required
default_key str

The default key to use if no alternative is selected. Defaults to 'default'.

'default'
prefix_keys bool

Whether to prefix the keys with the ConfigurableField id. Defaults to False.

False
**kwargs Runnable[Input, Output] | Callable[[], Runnable[Input, Output]]

A dictionary of keys to Runnable instances or callables that return Runnable instances.

{}

Returns:

Type Description
RunnableSerializable[Input, Output]

A new Runnable with the alternatives configured.

Example
from langchain_anthropic import ChatAnthropic
from langchain_core.runnables.utils import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatAnthropic(
    model_name="claude-3-7-sonnet-20250219"
).configurable_alternatives(
    ConfigurableField(id="llm"),
    default_key="anthropic",
    openai=ChatOpenAI(),
)

# uses the default model ChatAnthropic
print(model.invoke("which organization created you?").content)

# uses ChatOpenAI
print(
    model.with_config(configurable={"llm": "openai"})
    .invoke("which organization created you?")
    .content
)

set_verbose

set_verbose(verbose: bool | None) -> bool

If verbose is None, set it from the global setting.

This allows users to pass in None as verbose to access the global verbosity setting.

Parameters:

Name Type Description Default
verbose bool | None

The verbosity setting to use.

required

Returns:

Type Description
bool

The verbosity setting to use.

with_structured_output

with_structured_output(
    schema: dict | type, **kwargs: Any
) -> Runnable[LanguageModelInput, dict | BaseModel]

Not implemented on this class.

get_token_ids

get_token_ids(text: str) -> list[int]

Get the token IDs using the tiktoken package.

get_num_tokens

get_num_tokens(text: str) -> int

Get the number of tokens present in the text.

Useful for checking if an input fits in a model's context window.

Parameters:

Name Type Description Default
text str

The string input to tokenize.

required

Returns:

Type Description
int

The integer number of tokens in the text.
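
Example

A minimal sketch (model name illustrative; assumes OPENAI_API_KEY is set) showing get_token_ids and get_num_tokens together; the token count is the length of the tiktoken ID list:

from langchain_openai import OpenAI

llm = OpenAI(model="gpt-3.5-turbo-instruct")
ids = llm.get_token_ids("Hello, world!")  # tiktoken token IDs
assert llm.get_num_tokens("Hello, world!") == len(ids)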

get_num_tokens_from_messages

get_num_tokens_from_messages(
    messages: list[BaseMessage],
    tools: Sequence | None = None,
) -> int

Get the number of tokens in the messages.

Useful for checking if an input fits in a model's context window.

Note

The base implementation of get_num_tokens_from_messages ignores tool schemas.

Parameters:

Name Type Description Default
messages list[BaseMessage]

The message inputs to tokenize.

required
tools Sequence | None

If provided, sequence of dict, BaseModel, function, or BaseTools to be converted to tool schemas.

None

Returns:

Type Description
int

The sum of the number of tokens across the messages.
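
Example

A minimal sketch (message contents and model name are illustrative; assumes OPENAI_API_KEY is set):

from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import OpenAI

llm = OpenAI(model="gpt-3.5-turbo-instruct")
messages = [
    SystemMessage("You are a helpful assistant."),
    HumanMessage("Hi!"),
]
n = llm.get_num_tokens_from_messages(messages)  # tool schemas ignored here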

generate

generate(
    prompts: list[str],
    stop: list[str] | None = None,
    callbacks: Callbacks | list[Callbacks] | None = None,
    *,
    tags: list[str] | list[list[str]] | None = None,
    metadata: (
        dict[str, Any] | list[dict[str, Any]] | None
    ) = None,
    run_name: str | list[str] | None = None,
    run_id: UUID | list[UUID | None] | None = None,
    **kwargs: Any
) -> LLMResult

Pass a sequence of prompts to a model and return generations.

This method should make use of batched calls for models that expose a batched API.

Use this method when you:

  1. Want to take advantage of batched calls,
  2. Need more output from the model than just the top generated value,
  3. Are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).

Parameters:

Name Type Description Default
prompts list[str]

List of string prompts.

required
stop list[str] | None

Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

None
callbacks Callbacks | list[Callbacks] | None

Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.

None
tags list[str] | list[list[str]] | None

List of tags to associate with each prompt. If provided, the length of the list must match the length of the prompts list.

None
metadata dict[str, Any] | list[dict[str, Any]] | None

List of metadata dictionaries to associate with each prompt. If provided, the length of the list must match the length of the prompts list.

None
run_name str | list[str] | None

List of run names to associate with each prompt. If provided, the length of the list must match the length of the prompts list.

None
run_id UUID | list[UUID | None] | None

List of run IDs to associate with each prompt. If provided, the length of the list must match the length of the prompts list.

None
**kwargs Any

Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

{}

Raises:

Type Description
ValueError

If prompts is not a list.

ValueError

If the length of callbacks, tags, metadata, or run_name (if provided) does not match the length of prompts.

Returns:

Type Description
LLMResult

An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
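
Example

A minimal sketch (prompts and model name are illustrative; assumes OPENAI_API_KEY is set) of batching two prompts in one call and reading the per-prompt generations:

from langchain_openai import OpenAI

llm = OpenAI(model="gpt-3.5-turbo-instruct")
result = llm.generate(["Tell me a joke.", "Tell me a riddle."])
for generations in result.generations:
    # One list of candidate Generations per input prompt.
    print(generations[0].text)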

agenerate async

agenerate(
    prompts: list[str],
    stop: list[str] | None = None,
    callbacks: Callbacks | list[Callbacks] | None = None,
    *,
    tags: list[str] | list[list[str]] | None = None,
    metadata: (
        dict[str, Any] | list[dict[str, Any]] | None
    ) = None,
    run_name: str | list[str] | None = None,
    run_id: UUID | list[UUID | None] | None = None,
    **kwargs: Any
) -> LLMResult

Asynchronously pass a sequence of prompts to a model and return generations.

This method should make use of batched calls for models that expose a batched API.

Use this method when you:

  1. Want to take advantage of batched calls,
  2. Need more output from the model than just the top generated value,
  3. Are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).

Parameters:

Name Type Description Default
prompts list[str]

List of string prompts.

required
stop list[str] | None

Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

None
callbacks Callbacks | list[Callbacks] | None

Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.

None
tags list[str] | list[list[str]] | None

List of tags to associate with each prompt. If provided, the length of the list must match the length of the prompts list.

None
metadata dict[str, Any] | list[dict[str, Any]] | None

List of metadata dictionaries to associate with each prompt. If provided, the length of the list must match the length of the prompts list.

None
run_name str | list[str] | None

List of run names to associate with each prompt. If provided, the length of the list must match the length of the prompts list.

None
run_id UUID | list[UUID | None] | None

List of run IDs to associate with each prompt. If provided, the length of the list must match the length of the prompts list.

None
**kwargs Any

Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

{}

Raises:

Type Description
ValueError

If the length of callbacks, tags, metadata, or run_name (if provided) does not match the length of prompts.

Returns:

Type Description
LLMResult

An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
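
Example

An async counterpart sketch (same illustrative prompts as above; assumes OPENAI_API_KEY is set):

import asyncio

from langchain_openai import OpenAI

llm = OpenAI(model="gpt-3.5-turbo-instruct")


async def main() -> None:
    result = await llm.agenerate(["Tell me a joke.", "Tell me a riddle."])
    print(result.generations[0][0].text)


asyncio.run(main())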

__str__

__str__() -> str

Return a string representation of the object for printing.

dict

dict(**kwargs: Any) -> dict

Return a dictionary representation of the LLM.

save

save(file_path: Path | str) -> None

Save the LLM.

Parameters:

Name Type Description Default
file_path Path | str

Path to file to save the LLM to.

required

Raises:

Type Description
ValueError

If the file path is not a string or Path object.

Example:

.. code-block:: python

    llm.save(file_path="path/llm.yaml")

build_extra classmethod

build_extra(values: dict[str, Any]) -> Any

Build extra kwargs from additional params that were passed in.

validate_environment

validate_environment() -> Self

Validate that the API key and Python package exist in the environment.

get_sub_prompts

get_sub_prompts(
    params: dict[str, Any],
    prompts: list[str],
    stop: Optional[list[str]] = None,
) -> list[list[str]]

Get the sub-prompts for an LLM call.

create_llm_result

create_llm_result(
    choices: Any,
    prompts: list[str],
    params: dict[str, Any],
    token_usage: dict[str, int],
    *,
    system_fingerprint: Optional[str] = None
) -> LLMResult

Create the LLMResult from the choices and prompts.

modelname_to_contextsize staticmethod

modelname_to_contextsize(modelname: str) -> int

Calculate the maximum number of tokens possible to generate for a model.

Parameters:

Name Type Description Default
modelname str

The model name we want to know the context size for.

required

Returns:

Type Description
int

The maximum context size.

Example

.. code-block:: python

    max_tokens = OpenAI.modelname_to_contextsize("gpt-3.5-turbo-instruct")

max_tokens_for_prompt

max_tokens_for_prompt(prompt: str) -> int

Calculate the maximum number of tokens possible to generate for a prompt.

Parameters:

Name Type Description Default
prompt str

The prompt to pass into the model.

required

Returns:

Type Description
int

The maximum number of tokens to generate for a prompt.

Example

.. code-block:: python

    max_tokens = llm.max_tokens_for_prompt("Tell me a joke.")

get_lc_namespace classmethod

get_lc_namespace() -> list[str]

Get the namespace of the langchain object.

is_lc_serializable classmethod

is_lc_serializable() -> bool

Return whether this model can be serialized by LangChain.