Python · langchain-core · runnables · fallbacks · RunnableWithFallbacks
Class ● Since v0.1

    RunnableWithFallbacks

Runnable that can fall back to other Runnable objects if it fails.

    External APIs (e.g., APIs for a language model) may at times experience degraded performance or even downtime.

In these cases, it can be useful to have a fallback Runnable that is used in place of the original Runnable (e.g., falling back to another LLM provider).

    Fallbacks can be defined at the level of a single Runnable, or at the level of a chain of Runnables. Fallbacks are tried in order until one succeeds or all fail.

    While you can instantiate a RunnableWithFallbacks directly, it is usually more convenient to use the with_fallbacks method on a Runnable.

    RunnableWithFallbacks(
        self,
        *args: Any = (),
        **kwargs: Any = {},
    )

    Bases

    RunnableSerializable[Input, Output]

    Example:

from langchain_anthropic import ChatAnthropic
from langchain_openai import ChatOpenAI

model = ChatAnthropic(model="claude-3-haiku-20240307").with_fallbacks(
    [ChatOpenAI(model="gpt-3.5-turbo-0125")]
)
# Will usually use ChatAnthropic, but fall back to ChatOpenAI
# if ChatAnthropic fails.
model.invoke("hello")
    
    # And you can also use fallbacks at the level of a chain.
    # Here if both LLM providers fail, we'll fallback to a good hardcoded
    # response.
    
    from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser
    from langchain_core.runnables import RunnableLambda
    
    def when_all_is_lost(inputs):
        return (
            "Looks like our LLM providers are down. "
            "Here's a nice 🦜️ emoji for you instead."
        )
    
    chain_with_fallback = (
        PromptTemplate.from_template("Tell me a joke about {topic}")
        | model
        | StrOutputParser()
    ).with_fallbacks([RunnableLambda(when_all_is_lost)])

    Attributes

    attribute
    runnable: Runnable[Input, Output]

    The Runnable to run first.

    attribute
    fallbacks: Sequence[Runnable[Input, Output]]

    A sequence of fallbacks to try.

    attribute
    exceptions_to_handle: tuple[type[BaseException], ...]

    The exceptions on which fallbacks should be tried.

    Any exception that is not a subclass of these exceptions will be raised immediately.

    attribute
    exception_key: str | None

If a string is specified, handled exceptions will be passed to fallbacks as part of the input under the specified key.

    If None, exceptions will not be passed to fallbacks.

    If used, the base Runnable and its fallbacks must accept a dictionary as input.

    attribute
    model_config
    attribute
    InputType: type[Input]
    attribute
    OutputType: type[Output]
    attribute
    config_specs: list[ConfigurableFieldSpec]
    attribute
    runnables: Iterator[Runnable[Input, Output]]

    Iterator over the Runnable and its fallbacks.

    Methods

    method
    get_input_schema
    method
    get_output_schema
    method
    is_lc_serializable

    Return True as this class is serializable.

    method
    get_lc_namespace

    Get the namespace of the LangChain object.

    method
    invoke
    method
    ainvoke
    method
    batch
    method
    abatch
    method
    stream
    method
    astream

Inherited from RunnableSerializable

Attributes

attribute
name: str

The name of the function.

Methods

method
to_json

Convert the graph to a JSON-serializable format.

method
configurable_fields
method
configurable_alternatives

Configure alternatives for Runnable objects that can be set at runtime.

Inherited from Serializable

Attributes

attribute
lc_secrets: dict[str, str]

A map of constructor argument names to secret ids.

attribute
lc_attributes: dict

List of attribute names that should be included in the serialized kwargs.

Methods

method
lc_id

Return a unique identifier for this class for serialization purposes.

method
to_json

Convert the graph to a JSON-serializable format.

method
to_json_not_implemented

Serialize a "not implemented" object.

Inherited from Runnable

Attributes

attribute
name: str

The name of the function.

attribute
input_schema: type[BaseModel]

The type of input this Runnable accepts specified as a Pydantic model.

attribute
output_schema: type[BaseModel]

Output schema.

Methods

method
get_name
method
get_input_jsonschema

Get a JSON schema that represents the input to the Runnable.

method
get_output_jsonschema

Get a JSON schema that represents the output of the Runnable.

method
config_schema

The type of config this Runnable accepts specified as a Pydantic model.

method
get_config_jsonschema

Get a JSON schema that represents the config of the Runnable.

method
get_graph
method
get_prompts

Return a list of prompts used by this Runnable.

method
pipe

Pipe Runnable objects.

method
pick

Pick keys from the output dict of this Runnable.

method
assign

Merge the Dict input with the output produced by the mapping argument.

method
batch_as_completed

Run invoke in parallel on a list of inputs.

method
abatch_as_completed

Run ainvoke in parallel on a list of inputs.

method
astream_log

Stream all output from a Runnable, as reported to the callback system.

method
astream_events

Generate a stream of events.

method
transform
method
atransform
method
bind

Bind arguments to a Runnable, returning a new Runnable.

method
with_config
method
with_listeners

Bind lifecycle listeners to a Runnable, returning a new Runnable.

method
with_alisteners

Bind async lifecycle listeners to a Runnable.

method
with_types

Bind input and output types to a Runnable, returning a new Runnable.

method
with_retry

Create a new Runnable that retries the original Runnable on exceptions.

method
map

Map a function to multiple iterables.

method
with_fallbacks

Add fallbacks to a Runnable, returning a new Runnable.

method
as_tool

Create a BaseTool from a Runnable.
