LLMRouterChain

Class · Since v1.0 · Deprecated

Module: langchain_classic.chains.router.llm_router

LLMRouterChain()

Bases: RouterChain


A router chain that uses an LLM chain to perform routing.

This class is deprecated. See below for a replacement, which offers several benefits, including streaming and batch support.

Below is an example implementation:

    from operator import itemgetter
    from typing import Literal
    from typing_extensions import TypedDict
    
    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_core.runnables import RunnableLambda
    from langchain_openai import ChatOpenAI
    
    model = ChatOpenAI(model="gpt-4o-mini")
    
    prompt_1 = ChatPromptTemplate.from_messages(
        [
            ("system", "You are an expert on animals."),
            ("human", "{query}"),
        ]
    )
    prompt_2 = ChatPromptTemplate.from_messages(
        [
            ("system", "You are an expert on vegetables."),
            ("human", "{query}"),
        ]
    )
    
    chain_1 = prompt_1 | model | StrOutputParser()
    chain_2 = prompt_2 | model | StrOutputParser()
    
    route_system = "Route the user's query to either the animal "
    "or vegetable expert."
    route_prompt = ChatPromptTemplate.from_messages(
        [
            ("system", route_system),
            ("human", "{query}"),
        ]
    )
    
    class RouteQuery(TypedDict):
        """Route query to destination."""
        destination: Literal["animal", "vegetable"]
    
    route_chain = (
        route_prompt
        | model.with_structured_output(RouteQuery)
        | itemgetter("destination")
    )
    
    chain = {
        "destination": route_chain,  # "animal" or "vegetable"
        "query": lambda x: x["query"],  # pass through input query
    } | RunnableLambda(
        # if animal, chain_1. otherwise, chain_2.
        lambda x: chain_1 if x["destination"] == "animal" else chain_2,
    )
    
    chain.invoke({"query": "what color are carrots"})
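
Because the replacement chain is an ordinary Runnable, the streaming and batch support mentioned above come for free. A minimal sketch, reusing the chain built in the example:

    # Stream the routed answer as it is generated. Chunks are plain strings,
    # since both destination chains end in StrOutputParser.
    for chunk in chain.stream({"query": "what color are carrots"}):
        print(chunk, end="", flush=True)

    # Answer several queries in a single call.
    answers = chain.batch(
        [
            {"query": "what color are carrots"},
            {"query": "what do rabbits eat"},
        ]
    )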
    
Attributes

llm_chain: LLMChain
    LLM chain used to perform routing.

input_keys: list[str]
    Will be whatever keys the LLM chain prompt expects.

Methods

from_llm
    Convenience constructor.

Inherited from RouterChain

Attributes

output_keys: list[str]
    The keys to use for the output.

Methods

route
    Route inputs to a destination chain.

aroute
    Asynchronously route inputs to a destination chain.

Inherited from Chain

Attributes

memory: BaseMemory | None
    Optional memory object.

callbacks: Callbacks
verbose: bool
tags: list[str] | None
metadata: dict[str, Any] | None

callback_manager: BaseCallbackManager | None
    [DEPRECATED] Use callbacks instead.

model_config

output_keys: list[str]
    The keys to use for the output.

Methods

get_input_schema, get_output_schema, invoke, ainvoke

raise_callback_manager_deprecation
    Raise a deprecation warning if callback_manager is used.

set_verbose
    Set the chain verbosity.

acall
    Asynchronously execute the chain.

prep_outputs
    Validate and prepare chain outputs, and save info about this run to memory.

aprep_outputs
    Asynchronously validate and prepare chain outputs, and save info about this run to memory.

prep_inputs
    Prepare chain inputs, including adding inputs from memory.

aprep_inputs
    Asynchronously prepare chain inputs, including adding inputs from memory.

run
    Convenience method for executing the chain.

arun
    Asynchronous convenience method for executing the chain.

dict
    Return a dictionary representation of the chain.

save
    Save the chain.

apply
    Utilize the LLM generate method for speed gains.

Inherited from RunnableSerializable (langchain_core)

Attributes

name, model_config

Methods

to_json, configurable_fields, configurable_alternatives

Inherited from Serializable (langchain_core)

Attributes

lc_secrets, lc_attributes, model_config

Methods

is_lc_serializable, get_lc_namespace, lc_id, to_json, to_json_not_implemented

Inherited from Runnable (langchain_core)

Attributes

name, InputType, OutputType, input_schema, output_schema, config_specs

Methods

get_name, get_input_schema, get_input_jsonschema, get_output_schema, get_output_jsonschema, config_schema, get_config_jsonschema, get_graph, get_prompts, pipe, pick, assign, invoke, ainvoke, batch, batch_as_completed, abatch, abatch_as_completed, stream, astream, astream_log, astream_events, transform, atransform, bind, with_config, with_listeners, with_alisteners, with_types, with_retry, map, with_fallbacks, as_tool
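
Legacy usage

For code that has not yet migrated, below is a minimal sketch of constructing this class through from_llm. It is a sketch under stated assumptions, not canonical usage: the import paths assume the langchain-classic package layout, MULTI_PROMPT_ROUTER_TEMPLATE and RouterOutputParser are the classic router helpers shipped alongside this class, and the destination names are illustrative.

    # Assumes the langchain-classic package layout (langchain_classic.*).
    from langchain_classic.chains.router.llm_router import (
        LLMRouterChain,
        RouterOutputParser,
    )
    from langchain_classic.chains.router.multi_prompt_prompt import (
        MULTI_PROMPT_ROUTER_TEMPLATE,
    )
    from langchain_core.prompts import PromptTemplate
    from langchain_openai import ChatOpenAI

    # One "name: description" line per destination, as the classic template expects.
    destinations = (
        "animal: good for questions about animals\n"
        "vegetable: good for questions about vegetables"
    )

    router_prompt = PromptTemplate(
        template=MULTI_PROMPT_ROUTER_TEMPLATE.format(destinations=destinations),
        input_variables=["input"],
        output_parser=RouterOutputParser(),
    )

    router_chain = LLMRouterChain.from_llm(
        ChatOpenAI(model="gpt-4o-mini"), router_prompt
    )

    # route() returns a Route named tuple of (destination, next_inputs).
    route = router_chain.route({"input": "what color are carrots"})
    print(route.destination, route.next_inputs)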