LangChain Reference
    langchain_classic.chains.llm_math.base.LLMMathChain

    Class · Since v1.0 · Deprecated

    LLMMathChain

    LLMMathChain()

    Bases: Chain


    Inherited from Chain

    Attributes

    - memory: BaseMemory | None
      Optional memory object.
    - callbacks: Callbacks
    - verbose: bool
    - tags: list[str] | None
    - metadata: dict[str, Any] | None
    - callback_manager: BaseCallbackManager | None
      [DEPRECATED] Use callbacks instead.

    Methods

    - get_input_schema
    - get_output_schema
    - invoke
    - ainvoke
    - raise_callback_manager_deprecation
      Raise a deprecation warning if callback_manager is used.
    - set_verbose
      Set the chain verbosity.
    - acall
      Asynchronously execute the chain.
    - prep_outputs
      Validate and prepare chain outputs, and save info about this run to memory.
    - aprep_outputs
      Async version: validate and prepare chain outputs, and save info about this run to memory.
    - prep_inputs
      Prepare chain inputs, including adding inputs from memory.
    - aprep_inputs
      Async version: prepare chain inputs, including adding inputs from memory.
    - run
      Convenience method for executing the chain.
    - arun
      Async convenience method for executing the chain.
    - dict
      Return a dictionary representation of the chain.
    - save
      Save the chain.
    - apply
      Utilize the LLM generate method for speed gains.
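The prep_inputs / prep_outputs pair above implements a memory round-trip: memory variables are merged into the inputs before the chain runs, and the run's inputs and outputs are saved back to memory afterwards. A minimal plain-Python sketch of that pattern, assuming a hypothetical `SimpleMemory` class (not the real `BaseMemory` API):

```python
class SimpleMemory:
    """Hypothetical stand-in for a chain memory object."""

    def __init__(self):
        self.history = []

    def load_memory_variables(self):
        # Variables merged into the user's inputs before the run.
        return {"history": list(self.history)}

    def save_context(self, inputs, outputs):
        # Persist this run so future runs can see it.
        self.history.append((inputs["question"], outputs["answer"]))


def prep_inputs(user_inputs, memory):
    # Merge memory variables into the inputs dict; user-provided
    # keys take precedence over memory keys.
    merged = dict(memory.load_memory_variables())
    merged.update(user_inputs)
    return merged


def prep_outputs(inputs, outputs, memory):
    # Validate expected keys, then save the run to memory.
    assert "answer" in outputs, "chain must produce its output_key"
    memory.save_context(inputs, outputs)
    return outputs


memory = SimpleMemory()
inputs = prep_inputs({"question": "What is 2 + 2?"}, memory)
outputs = prep_outputs(inputs, {"answer": "4"}, memory)
print(memory.history)  # [('What is 2 + 2?', '4')]
```

A second call to `prep_inputs` would now see the saved turn under the `history` key, which is how chains with memory carry chat history between runs.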

    Inherited from RunnableSerializable (langchain_core)

    Attributes

    - name

    Methods

    - to_json
    - configurable_fields
    - configurable_alternatives

    Inherited from Serializable (langchain_core)

    Attributes

    - lc_secrets
    - lc_attributes

    Methods

    - is_lc_serializable
    - get_lc_namespace
    - lc_id
    - to_json
    - to_json_not_implemented

    Inherited from Runnable (langchain_core)

    Attributes

    - name
    - InputType
    - OutputType
    - input_schema
    - output_schema
    - config_specs

    Methods

    - get_name
    - get_input_schema
    - get_input_jsonschema
    - get_output_schema
    - get_output_jsonschema
    - config_schema
    - get_config_jsonschema
    - get_graph
    - get_prompts
    - pipe
    - pick
    - assign
    - invoke
    - ainvoke
    - batch
    - batch_as_completed
    - abatch
    - abatch_as_completed
    - stream
    - astream
    - astream_log
    - astream_events
    - transform
    - atransform
    - bind
    - with_config
    - with_listeners
    - with_alisteners
    - with_types
    - with_retry
    - map
    - with_fallbacks
    - as_tool
    Attributes

    - llm_chain: LLMChain
    - llm: BaseLanguageModel | None
      [Deprecated] LLM wrapper to use.
    - prompt: BasePromptTemplate
      [Deprecated] Prompt to use to translate the question to Python if necessary.
    - input_key: str
    - output_key: str
    - model_config
    - input_keys: list[str]
      Expect input key.
    - output_keys: list[str]
      Expect output key.

    Methods

    - from_llm
      Create an LLMMathChain from a language model.

    Chain that interprets a prompt and executes Python code to do math.

    Note

    This class is deprecated. See below for a replacement implementation using LangGraph. The benefits of this implementation are:

    • Uses LLM tool-calling features;
    • Supports both token-by-token and step-by-step streaming;
    • Supports checkpointing and memory of chat history;
    • Is easier to modify or extend (e.g., with additional tools, structured responses, etc.).

    Install LangGraph with:

    pip install -U langgraph
    import math
    from typing import Annotated, Sequence
    
    from langchain_core.messages import BaseMessage
    from langchain_core.runnables import RunnableConfig
    from langchain_core.tools import tool
    from langchain_openai import ChatOpenAI
    from langgraph.graph import END, StateGraph
    from langgraph.graph.message import add_messages
    from langgraph.prebuilt.tool_node import ToolNode
    import numexpr
    from typing_extensions import TypedDict
    
    @tool
    def calculator(expression: str) -> str:
        """Calculate expression using Python's numexpr library.

        Expression should be a single line mathematical expression
        that solves the problem.
        """
        local_dict = {"pi": math.pi, "e": math.e}
        return str(
            numexpr.evaluate(
                expression.strip(),
                global_dict={},  # restrict access to globals
                local_dict=local_dict,  # add common mathematical functions
            )
        )

    model = ChatOpenAI(model="gpt-4o-mini", temperature=0)
    tools = [calculator]
    model_with_tools = model.bind_tools(tools, tool_choice="any")
    
    class ChainState(TypedDict):
        """LangGraph state."""
    
        messages: Annotated[Sequence[BaseMessage], add_messages]
    
    async def acall_chain(state: ChainState, config: RunnableConfig):
        response = await model_with_tools.ainvoke(state["messages"], config)
        return {"messages": [response]}
    
    async def acall_model(state: ChainState, config: RunnableConfig):
        response = await model.ainvoke(state["messages"], config)
        return {"messages": [response]}
    
    graph_builder = StateGraph(ChainState)
    graph_builder.add_node("call_tool", acall_chain)
    graph_builder.add_node("execute_tool", ToolNode(tools))
    graph_builder.add_node("call_model", acall_model)
    graph_builder.set_entry_point("call_tool")
    graph_builder.add_edge("call_tool", "execute_tool")
    graph_builder.add_edge("execute_tool", "call_model")
    graph_builder.add_edge("call_model", END)
    chain = graph_builder.compile()
    
    example_query = "What is 551368 divided by 82"
    
    events = chain.astream(
        {"messages": [("user", example_query)]},
        stream_mode="values",
    )
    async for event in events:
        event["messages"][-1].pretty_print()
    ================================ Human Message =================================
    
    What is 551368 divided by 82
    ================================== Ai Message ==================================
    Tool Calls:
    calculator (call_MEiGXuJjJ7wGU4aOT86QuGJS)
    Call ID: call_MEiGXuJjJ7wGU4aOT86QuGJS
    Args:
        expression: 551368 / 82
    ================================= Tool Message =================================
    Name: calculator
    
    6724.0
    ================================== Ai Message ==================================
    
    551368 divided by 82 equals 6724.
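The graph above is a linear three-step pipeline: force a tool call, execute it, then ask the model to phrase the result. The control flow can be sketched in plain Python with stub model functions (everything below is hypothetical; no LangGraph, OpenAI, or numexpr involved, and `eval` with stripped builtins stands in for the numexpr tool):

```python
import math


def calculator(expression: str) -> str:
    # Restricted eval standing in for the numexpr tool above (demo only).
    namespace = {"__builtins__": {}, "pi": math.pi, "e": math.e}
    return str(eval(expression.strip(), namespace))


def call_tool_model(messages):
    # Stub for model_with_tools: a real model chooses the tool and its
    # arguments via tool calling; here the expression is canned.
    return {"role": "ai",
            "tool_call": {"name": "calculator",
                          "args": {"expression": "551368 / 82"}}}


def execute_tool(messages):
    # Mirrors ToolNode: run the tool named in the last AI message.
    call = messages[-1]["tool_call"]
    return {"role": "tool", "name": call["name"],
            "content": calculator(call["args"]["expression"])}


def call_model(messages):
    # Stub for the final model call that phrases the tool result.
    return {"role": "ai", "content": f"The answer is {messages[-1]['content']}."}


# Linear pipeline mirroring the graph edges:
# call_tool -> execute_tool -> call_model -> END
messages = [{"role": "user", "content": "What is 551368 divided by 82"}]
for node in (call_tool_model, execute_tool, call_model):
    messages.append(node(messages))

print(messages[-1]["content"])  # The answer is 6724.0.
```

Each node receives the accumulated message list and appends one message, which is exactly the role the `add_messages` reducer plays in the `ChainState` above.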

    Example:

    from langchain_classic.chains import LLMMathChain
    from langchain_openai import OpenAI
    
    llm_math = LLMMathChain.from_llm(OpenAI())
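The pattern the chain implements is: prompt the model to translate the question into a single arithmetic expression, evaluate that expression in a restricted namespace, and return the result under output_key. A minimal sketch of that loop with a canned fake LLM (the function and variable names here are illustrative, not the real internals, which use a prompt template and numexpr):

```python
import math


def tiny_math_chain(llm, question, input_key="question", output_key="answer"):
    # 1. Ask the "LLM" to translate the question into one expression.
    expression = llm(f"Translate to a math expression: {question}")
    # 2. Evaluate it with builtins stripped, exposing only math constants.
    namespace = {"__builtins__": {}, "pi": math.pi, "e": math.e}
    value = eval(expression, namespace)  # demo only; the real chain uses numexpr
    # 3. Return a dict keyed like the chain's input_keys / output_keys.
    return {input_key: question, output_key: str(value)}


# Canned LLM that always answers with the same expression.
fake_llm = lambda prompt: "551368 / 82"

print(tiny_math_chain(fake_llm, "What is 551368 divided by 82?"))
# {'question': 'What is 551368 divided by 82?', 'answer': '6724.0'}
```

The dict-in/dict-out shape is why `input_keys` and `output_keys` above each "expect" a single key: the chain consumes one question string and produces one answer string.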